rs_replace_table: Replace Redshift table


Description

Upload a table to S3 and then load it into Redshift, replacing the contents of the target table. The table on Redshift must have the same structure and column ordering as the data frame for this to work correctly.

Usage

rs_replace_table(data, dbcon, tableName, split_files,
  bucket = Sys.getenv("AWS_BUCKET_NAME"),
  region = Sys.getenv("AWS_DEFAULT_REGION"),
  access_key = Sys.getenv("AWS_ACCESS_KEY_ID"),
  secret_key = Sys.getenv("AWS_SECRET_ACCESS_KEY"))

Arguments

data

a data frame

dbcon

an RPostgres connection to the Redshift server

tableName

the name of the table to replace

split_files

optional parameter specifying the number of files to split the data into. If not specified, the number of slices in the Redshift cluster is used to determine an optimal value.
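Redshift loads fastest when the number of staged files is a multiple of the cluster's slice count, which is what the default behavior aims for. As a sketch of how that count could be chosen by hand, the `stv_slices` system table can be queried through the same connection; this assumes `con` and `a` are the connection and data frame from the Examples section below:

```r
library(DBI)

# Count the slices in the cluster; one file per slice is a common
# starting point for split_files. Assumes `con` is an open RPostgres
# connection to Redshift with permission to read system tables.
n_slices <- dbGetQuery(con, "SELECT count(*) AS n FROM stv_slices")$n

rs_replace_table(data = a, dbcon = con, tableName = "testTable",
                 split_files = n_slices)
```

This is only illustrative; leaving `split_files` unset lets the package pick a value itself.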

bucket

the name of the temporary S3 bucket used to stage the data. Defaults to the AWS_BUCKET_NAME environment variable if not specified.

region

the region of the bucket. Defaults to the AWS_DEFAULT_REGION environment variable if not specified.

access_key

the access key with permissions for the bucket. Defaults to the AWS_ACCESS_KEY_ID environment variable if not specified.

secret_key

the secret key with permissions for the bucket. Defaults to the AWS_SECRET_ACCESS_KEY environment variable if not specified.
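When the credentials are not already configured outside R, they can be set from within the session before calling the function. A minimal sketch with placeholder values (substitute your own bucket, region, and keys):

```r
# Placeholder values only; replace with real credentials, or rely on
# an environment configured outside R (e.g. a shell profile or an
# instance role).
Sys.setenv(
  AWS_BUCKET_NAME       = "my-bucket",
  AWS_DEFAULT_REGION    = "us-east-1",
  AWS_ACCESS_KEY_ID     = "my-access-key",
  AWS_SECRET_ACCESS_KEY = "my-secret-key"
)
```

With these set, the `bucket`, `region`, `access_key`, and `secret_key` arguments can all be omitted.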

Examples

library(DBI)

a <- data.frame(a = seq(1, 10000), b = seq(10000, 1))

## Not run: 
con <- dbConnect(RPostgres::Postgres(), dbname = "dbname",
                 host = "my-redshift-url.amazon.com", port = "5439",
                 user = "myuser", password = "mypassword",
                 sslmode = "require")

rs_replace_table(data = a, dbcon = con, tableName = "testTable",
                 bucket = "my-bucket", split_files = 4)

## End(Not run)

RDAdams/RedShifteR documentation built on May 8, 2019, 5:50 a.m.