spark_write_jdbc: Write to a JDBC table


View source: R/read-write.R

Description

Write to a JDBC table

Usage

spark_write_jdbc(
  .data,
  url,
  table = NULL,
  mode = "error",
  partition_by = NULL,
  driver = NULL,
  ...
)

Arguments

.data

a spark_tbl

url

string, the JDBC URL

table

string, the table name

mode

string, either "error" (default), "overwrite", or "append".

partition_by

string, the column name(s) to partition by

driver

string, the driver class to use, e.g. "org.postgresql.Driver"

...

additional connection options, such as user and password

Details

Connection properties can be set through additional named arguments passed via ..., given as arbitrary string key/value pairs. Normally at least a "user" and "password" property should be included. "batchsize" can be used to control the number of rows per insert. "isolationLevel" can be one of "NONE", "READ_COMMITTED", "READ_UNCOMMITTED", "REPEATABLE_READ", or "SERIALIZABLE", corresponding to the standard transaction isolation levels defined by JDBC's Connection object; the default is "READ_UNCOMMITTED".
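
As a sketch of how these pass-through options look in practice (the URL, table name, credentials, and df_tbl below are placeholders, not values from this package):

## Not run: 
df_tbl %>%
  spark_write_jdbc(url = "jdbc:postgresql://localhost/mydb",
                   table = "my_table",
                   mode = "append",
                   driver = "org.postgresql.Driver",
                   user = "me", password = "secret",
                   batchsize = "5000",                  # rows per insert batch
                   isolationLevel = "READ_COMMITTED")   # transaction isolation

## End(Not run)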

Examples

## Not run: 
spark_session_reset(sparkPackages = c("org.postgresql:postgresql:42.2.12"))

iris_tbl <- spark_tbl(iris)

iris_tbl %>%
  spark_write_jdbc(url = "jdbc:postgresql://localhost/tidyspark_test",
                   table = "iris_test",
                   partition_by = "Species",
                   mode = "overwrite",
                   user = "tidyspark_tester", password = "test4life",
                   driver = "org.postgresql.Driver")

## End(Not run)
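
To verify the write, the table can be read back; the sketch below assumes the package exposes a companion spark_read_jdbc() reader with matching arguments, which is an assumption, not confirmed API. Check the installed version before relying on it.

## Not run: 
# Assumed reader; name and arguments are unverified assumptions.
iris_back <- spark_read_jdbc(url = "jdbc:postgresql://localhost/tidyspark_test",
                             table = "iris_test",
                             user = "tidyspark_tester", password = "test4life",
                             driver = "org.postgresql.Driver")

## End(Not run)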
