dbWriteTable,MoJAthenaConnection,character,data.frame-method
See noctua::dbWriteTable(). Note that you must have write permission to the S3 directory where the data is stored. In general you will not have this permission for the directory automatically generated by connect_athena(), so you must specify an S3 directory where you do have write permission. You can do this either as an argument to connect_athena() (which will affect all your Athena transactions) or specifically in the dbWriteTable() call, using the s3.location argument.

This function calls noctua::dbWriteTable() after replacing any references to __temp__ in the statement with your temporary database in Athena. Your temporary database will be created if you do not already have one.
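For illustration, a minimal sketch of what the substitution amounts to (lookup_df, the bucket and the temporary database name in the comments are placeholders; the real database name is generated for you by connect_athena()):

con <- connect_athena()
dbWriteTable(con, "__temp__.lookup", lookup_df,
             s3.location = "s3://a-bucket-you-can-write-to/lookup/")
# "__temp__.lookup" is rewritten to something like "my_temp_database.lookup"
# before being passed to noctua::dbWriteTable(), and the files backing the
# table are written under the s3.location you supplied.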
## S4 method for signature 'MoJAthenaConnection,character,data.frame'
dbWriteTable(
  conn,
  name,
  value,
  overwrite = FALSE,
  append = FALSE,
  row.names = NA,
  field.types = NULL,
  partition = NULL,
  s3.location = NULL,
  file.type = c("tsv", "csv", "parquet", "json"),
  compress = FALSE,
  max.batch = Inf,
  ...
)
conn: A DBIConnection object, as returned by connect_athena().
name: A character string specifying a table name. Names will be automatically quoted, so you can use any sequence of characters, not just a valid bare table name.
value: A data.frame to write to the database.
overwrite: Allows overwriting the destination table. Cannot be TRUE if append is also TRUE.
append: Allows appending to the destination table. Cannot be TRUE if overwrite is also TRUE.
row.names: Either TRUE, FALSE, NA or a string. If TRUE, always translate row names to a column called "row_names". If FALSE, never translate row names. If NA, translate row names only if they are a character vector. A string is equivalent to TRUE, but allows you to override the default column name. For backward compatibility, NULL is equivalent to FALSE.
field.types: Additional field types used to override derived types.
partition: Partition the Athena table; needs to be a named list or vector, for example c(year = "2023"). See the sketch after these arguments.
s3.location: s3 bucket to store the Athena table in; must be set as an s3 URI, for example "s3://mybucket/data/". By default, s3.location is set to the s3 staging directory of the connection (the staging_dir passed to connect_athena()).
file.type: What file type to store the data.frame as on s3; noctua currently supports "tsv", "csv", "parquet" and "json". The default delimited file type is "tsv"; in previous versions of noctua the default was "csv".
compress: TRUE or FALSE; whether to compress the file written to s3 (gzip for delimited file types, snappy for parquet).
max.batch: Split the data.frame by a maximum number of rows (for example 100,000) so that multiple files can be uploaded into AWS S3. By default, when compression is set to TRUE and file.type is "csv" or "tsv", the data.frame is split into multiple batches; this helps AWS Athena performance with gzip-compressed files.
...: Other arguments used by individual methods.
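As a hedged illustration of the partition, file.type and compress arguments above (the table, data frame and staging bucket are placeholders):

con <- connect_athena(staging_dir = "s3://bucket_you_have_write_permission/dir")
# Write a partitioned parquet table; the partition value becomes part of the
# S3 prefix under the staging directory, and the parquet output is compressed
# because compress = TRUE.
dbWriteTable(
  con,
  "__temp__.sales_by_year",
  sales_df,
  partition = c(year = "2023"),
  file.type = "parquet",
  compress = TRUE
)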
# Either specify the location to dbWriteTable itself
con <- connect_athena()
dbWriteTable(con, "__temp__.table_name", dataframe, s3.location = "s3://bucket_you_have_write_permission/dir")
# Or to the connection object
con <- connect_athena(staging_dir = "s3://bucket_you_have_write_permission/dir")
dbWriteTable(con, "__temp__.table_name", dataframe)