copy_S3_redshift: Copy data stored in S3 .csv files into redshift table

Description Usage Arguments Value

Description

Using the supplied JDBC connection, Redshift table name, and S3 bucket path, issue a COPY to load the S3 .csv files into the Redshift table.

Usage

copy_S3_redshift(env, connection, table_name, bucket_path,
  credentials = Sys.getenv(paste0(env, "_REDSHIFT_IAM_ROLE")),
  role = Sys.getenv(paste0(env, "_REDSHIFT_S3_ROLE")), delimiter = ",",
  region = Sys.getenv(paste0(env, "_REDSHIFT_REGION")),
  ignore_header = "1", dateformat = "auto", null = "NA",
  file_format = "csv")
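The defaults show how the env prefix is combined with fixed suffixes to look up environment-specific AWS settings. A minimal sketch (the variable names follow the defaults above; "PROD" is an assumed example prefix):

```r
# With env = "PROD", the default arguments resolve to these lookups:
Sys.getenv("PROD_REDSHIFT_IAM_ROLE")  # credentials
Sys.getenv("PROD_REDSHIFT_S3_ROLE")   # role
Sys.getenv("PROD_REDSHIFT_REGION")    # region
```

These variables are expected to be set in .Rprofile (or otherwise exported) before the function is called.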

Arguments

env

environment prefix (e.g. "PROD" or "DEV") combined with fixed suffixes in Sys.getenv() calls to select environment-specific AWS settings

connection

RJDBC connection to the Redshift cluster

table_name

name of the Redshift table to copy into

bucket_path

path to the S3 bucket that contains the .csv files to copy

credentials

Redshift IAM role with the required access, defined in .Rprofile

role

S3 role with the required access, defined in .Rprofile

delimiter

character used to separate columns (default ",")

region

AWS region of the Redshift cluster, defined in .Rprofile

ignore_header

number of header rows to skip when loading (default "1", i.e. skip the header row)

dateformat

format for date fields in the Redshift table (default "auto")

null

string in the source files to load as NULL in Redshift (default "NA")

file_format

format of the source files to copy (default "csv")

Value

returns TRUE if the copy completes successfully
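A hedged usage sketch, relying only on the defaults in the signature above; the connection setup, environment variable values, bucket path, and table name are assumed examples and require live Redshift and AWS credentials to run:

```r
library(RJDBC)

# Assumes PROD_REDSHIFT_IAM_ROLE, PROD_REDSHIFT_S3_ROLE, and
# PROD_REDSHIFT_REGION are set in .Rprofile, and that the Redshift
# JDBC driver jar is available locally (paths here are illustrative).
driver <- JDBC("com.amazon.redshift.jdbc.Driver",
               "RedshiftJDBC42.jar")
con <- dbConnect(driver,
                 "jdbc:redshift://example-cluster:5439/dev",
                 user = "example_user", password = "example_pass")

# Copy the .csv files under the bucket path into an existing table.
ok <- copy_S3_redshift(env = "PROD",
                       connection = con,
                       table_name = "trades",
                       bucket_path = "s3://example-bucket/trades/")
stopifnot(isTRUE(ok))

dbDisconnect(con)
```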


themechanicalbear/tastytrade documentation built on June 28, 2019, 10:16 p.m.