spark_write_td: Write a Spark DataFrame to Treasure Data


View source: R/spark_td.R

Description

Write a Spark DataFrame to Treasure Data

Usage

spark_write_td(x, name, mode = NULL, options = list(),
  partition_by = NULL, ...)

Arguments

x

A Spark DataFrame or dplyr operation

name

The name of the table to write to.

mode

A character element. Specifies the behavior when the data or table already exists. Supported values include: 'error', 'append', 'overwrite' and 'ignore'. Note that 'overwrite' also replaces the column structure of the existing table. (A usage sketch follows the argument list.)

options

A list of strings with additional options.

partition_by

A character vector. Partitions the output by the given columns on the file system.

...

Optional arguments; currently unused.
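
For instance, appending new rows to an existing table while partitioning the output can be expressed with the signature above. A minimal sketch; the Spark DataFrame spark_events, the table mydb.events, and the column event_date are hypothetical:

## Not run: 
# Hypothetical: append the Spark DataFrame spark_events to mydb.events,
# partitioning the output by the event_date column
spark_write_td(
  spark_events,
  name = "mydb.events",
  mode = "append",
  partition_by = "event_date"
)

## End(Not run)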

See Also

Other Spark serialization routines: spark_execute_td_presto, spark_read_td_presto, spark_read_td_query, spark_read_td

Examples

## Not run: 
config <- spark_config()

# Supply the Treasure Data API key and the serializer settings used by
# the td-spark driver
config$spark.td.apikey <- Sys.getenv("TD_API_KEY")
config$spark.serializer <- "org.apache.spark.serializer.KryoSerializer"
config$spark.sql.execution.arrow.enabled <- "true"

sc <- spark_connect(master = "local", config = config)

# Copy the local mtcars data frame into Spark, then write it to the
# mydb.mtcars table, replacing the table if it already exists
spark_mtcars <- dplyr::copy_to(sc, mtcars, "spark_mtcars", overwrite = TRUE)

spark_write_td(
  spark_mtcars,
  name = "mydb.mtcars",
  mode = "overwrite"
)

## End(Not run)
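
To verify the write, the table can be read back with spark_read_td from the See Also list. A minimal sketch, assuming spark_read_td follows the usual sparklyr reader convention of taking the connection, a local name, and the source table:

## Not run: 
# Assumed signature: spark_read_td(sc, name, source); mydb.mtcars is the
# table written above
mtcars_td <- spark_read_td(sc, "mtcars_td", "mydb.mtcars")
dplyr::count(mtcars_td)

## End(Not run)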
