AthenaWriteTables    R Documentation
Convenience functions for reading/writing DBMS tables
Usage

## S4 method for signature 'AthenaConnection,character,data.frame'
dbWriteTable(conn, name, value, overwrite = FALSE, append = FALSE,
  row.names = NA, field.types = NULL, partition = NULL,
  s3.location = NULL, file.type = c("tsv", "csv", "parquet", "json"),
  compress = FALSE, max.batch = Inf, ...)

## S4 method for signature 'AthenaConnection,Id,data.frame'
dbWriteTable(conn, name, value, overwrite = FALSE, append = FALSE,
  row.names = NA, field.types = NULL, partition = NULL,
  s3.location = NULL, file.type = c("tsv", "csv", "parquet", "json"),
  compress = FALSE, max.batch = Inf, ...)

## S4 method for signature 'AthenaConnection,SQL,data.frame'
dbWriteTable(conn, name, value, overwrite = FALSE, append = FALSE,
  row.names = NA, field.types = NULL, partition = NULL,
  s3.location = NULL, file.type = c("tsv", "csv", "parquet", "json"),
  compress = FALSE, max.batch = Inf, ...)
Arguments

conn
    An AthenaConnection object, as returned by dbConnect().

name
    A character string specifying a table name. Names will be
    automatically quoted, so you can use any sequence of characters,
    not just any valid bare table name.

value
    A data.frame to write to the database.

overwrite
    Allows overwriting the destination table. Cannot be TRUE if
    append is also TRUE.

append
    Allows appending to the destination table. Cannot be TRUE if
    overwrite is also TRUE.

row.names
    Either TRUE, FALSE, NA or a string. If TRUE, always translate row
    names to a column called "row_names". If FALSE, never translate
    row names. If NA, translate row names only if they're present as
    an attribute. A string is equivalent to TRUE, but allows you to
    choose the name of the column. For backward compatibility, NULL
    is equivalent to FALSE.

field.types
    Additional field types used to override derived types.

partition
    Partition Athena table (needs to be a named list or vector), for
    example: c("year" = "2020").

s3.location
    s3 bucket to store the Athena table; must be set as an s3 uri,
    for example ("s3://mybucket/data/"). By default, s3.location is
    set to the s3 staging directory from the AthenaConnection object.

file.type
    What file type to store the data.frame on s3. RAthena currently
    supports ["tsv", "csv", "parquet", "json"]. The default delimited
    file type is "tsv"; in previous versions of RAthena, "csv" was
    the default.

compress
    Logical; whether to compress the file before it is uploaded to
    AWS S3.

max.batch
    Split the data frame by a maximum number of rows, i.e. 100,000,
    so that multiple files can be uploaded into AWS S3. By default,
    when compression is set to TRUE, the data frame is split into
    batches before upload.

...
    Other arguments used by individual methods.
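As an illustration of the arguments above, the sketch below combines field.types, partition, and compress in a single call. The bucket name, the "year" partition key, and the type override are hypothetical, and an AWS account is required, so the block is not run:

```r
## Not run:
library(DBI)

con <- dbConnect(RAthena::athena())

# Override the derived type of one column, partition by a
# hypothetical "year" key, and upload as a compressed tsv
# (the bucket name is illustrative)
dbWriteTable(con, "mtcars", mtcars,
             field.types = c(mpg = "DOUBLE"),
             partition   = c("year" = "2020"),
             s3.location = "s3://mybucket/data/",
             file.type   = "tsv",
             compress    = TRUE)

dbDisconnect(con)
## End(Not run)
```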
Value

dbWriteTable() returns TRUE, invisibly. If the table exists, and both
append and overwrite arguments are unset, or append = TRUE and the
data frame with the new data has different column names, an error is
raised; the remote table remains unchanged.
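To make the return-value behaviour concrete, a minimal sketch (requires an AWS account; the table name is illustrative, so the block is not run):

```r
## Not run:
library(DBI)
con <- dbConnect(RAthena::athena())

dbWriteTable(con, "iris", iris)   # returns TRUE, invisibly

# The table now exists: a second call with neither overwrite nor
# append set raises an error and leaves the remote table unchanged
dbWriteTable(con, "iris", iris)

# Replacing or extending the table must be requested explicitly
dbWriteTable(con, "iris", iris, overwrite = TRUE)
dbWriteTable(con, "iris", iris, append = TRUE)

dbDisconnect(con)
## End(Not run)
```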
See Also

DBI::dbWriteTable()
Examples

## Not run: 
# Note:
# - Requires an AWS Account to run the below example.
# - Different connection methods can be used, please see
#   `RAthena::dbConnect` documentation

library(DBI)

# Demo connection to Athena using profile name
con <- dbConnect(RAthena::athena())

# List existing tables in Athena
dbListTables(con)

# Write data.frame to Athena table
dbWriteTable(con, "mtcars", mtcars,
             partition = c("TIMESTAMP" = format(Sys.Date(), "%Y%m%d")),
             s3.location = "s3://mybucket/data/")

# Read entire table from Athena
dbReadTable(con, "mtcars")

# List all tables in Athena after uploading new table to Athena
dbListTables(con)

# Checking if uploaded table exists in Athena
dbExistsTable(con, "mtcars")

# Using default s3.location
dbWriteTable(con, "iris", iris)

# Read entire table from Athena
dbReadTable(con, "iris")

# List all tables in Athena after uploading new table to Athena
dbListTables(con)

# Checking if uploaded table exists in Athena
dbExistsTable(con, "iris")

# Disconnect from Athena
dbDisconnect(con)

## End(Not run)