View source: R/sdf_interface.R
sdf_copy_to: R Documentation
Copy an object into Spark, and return an R object wrapping the copied object (typically, a Spark DataFrame).
sdf_copy_to(sc, x, name, memory, repartition, overwrite, struct_columns, ...)
sdf_import(x, sc, name, memory, repartition, overwrite, struct_columns, ...)
sc |
The associated Spark connection. |
x |
An R object from which a Spark DataFrame can be generated. |
name |
The name to assign to the copied table in Spark. |
memory |
Boolean; should the table be cached into memory? |
repartition |
The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning. |
overwrite |
Boolean; overwrite a pre-existing table with the same name, if one exists? |
struct_columns |
(only supported with Spark 2.4.0 or higher) A list of columns from the source data frame that should be converted to Spark SQL StructType columns. The source columns can contain either JSON strings or nested lists. All rows within each source column must share an identical schema; otherwise the conversion result will contain unexpected null or missing values, because Spark does not currently support schema discovery on individual rows within a struct column. |
... |
Optional arguments, passed to implementing methods. |
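As a sketch of how these arguments combine (assuming a live Spark connection sc; the table name and partition count are illustrative):

```r
## Not run:
# Copy mtcars into Spark as "mtcars_tbl", spread across 4 partitions,
# cache it in memory, and replace any existing table of that name.
mtcars_tbl <- sdf_copy_to(
  sc, mtcars,
  name        = "mtcars_tbl",
  memory      = TRUE,
  repartition = 4,
  overwrite   = TRUE
)
## End(Not run)
```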
sdf_copy_to is an S3 generic that, by default, dispatches to sdf_import. Package authors who would like to implement sdf_copy_to for a custom object type can accomplish this by implementing the associated method on sdf_import.
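A minimal sketch of that dispatch pattern (the class name my_tbl and its $data slot are hypothetical, used only to illustrate where the conversion step would go):

```r
## Not run:
# Hypothetical: make sdf_copy_to() work for objects of class "my_tbl"
# by providing an sdf_import() method that converts the object to a
# plain data frame and then delegates back to sdf_import().
sdf_import.my_tbl <- function(x, sc, name, memory, repartition,
                              overwrite, struct_columns, ...) {
  df <- as.data.frame(x$data)  # hypothetical slot holding tabular data
  sdf_import(df, sc, name, memory, repartition, overwrite,
             struct_columns, ...)
}
## End(Not run)
```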
Other Spark data frames: sdf_distinct(), sdf_random_split(), sdf_register(), sdf_sample(), sdf_sort(), sdf_weighted_sample()
## Not run:
sc <- spark_connect(master = "spark://HOST:PORT")
sdf_copy_to(sc, iris)
## End(Not run)
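A sketch of struct_columns with JSON-string source columns (requires Spark 2.4.0 or higher; the column names and data are illustrative, and every row of the source column carries the same schema as the description requires):

```r
## Not run:
# Each row of `address` holds a JSON string with an identical schema,
# so it can be imported as a Spark SQL StructType column.
people <- data.frame(
  name    = c("Ana", "Bo"),
  address = c('{"city":"Lima","zip":"15001"}',
              '{"city":"Oslo","zip":"0150"}'),
  stringsAsFactors = FALSE
)
people_sdf <- sdf_copy_to(
  sc, people,
  name           = "people",
  struct_columns = "address"
)
## End(Not run)
```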