write.df: Save the contents of a SparkDataFrame to a data source.

Description

The data source is specified by the source argument and a set of options (...). If source is not specified, the default data source configured by spark.sql.sources.default will be used.
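
For instance, a minimal sketch assuming a running SparkR session and that spark.sql.sources.default has its usual value of "parquet" (the paths are illustrative):

sparkR.session()
df <- createDataFrame(mtcars)

# No source given: falls back to spark.sql.sources.default (typically "parquet").
write.df(df, path = "out/mtcars")

# Source given explicitly; source-specific options are passed through "...".
write.df(df, path = "out/mtcars_csv", source = "csv", header = "true")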

Usage

write.df(df, path = NULL, ...)

saveDF(df, path, source = NULL, mode = "error", ...)

## S4 method for signature 'SparkDataFrame'
write.df(
  df,
  path = NULL,
  source = NULL,
  mode = "error",
  partitionBy = NULL,
  ...
)

## S4 method for signature 'SparkDataFrame,character'
saveDF(df, path, source = NULL, mode = "error", ...)

Arguments

df

a SparkDataFrame.

path

the path where the contents of the SparkDataFrame are saved.

...

additional argument(s) passed to the method.

source

the name of an external data source, such as 'parquet', 'json', or 'csv'.

mode

one of 'append', 'overwrite', 'error', 'errorifexists', or 'ignore'; the save mode ('error' by default).

partitionBy

a name or a list of names of columns to partition the output by on the file system. If specified, the output is laid out on the file system similar to Hive's partitioning scheme.
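
As a sketch of the resulting layout, assuming a SparkDataFrame df with a column named "year" and an illustrative output path:

# Partitioning by "year" yields Hive-style subdirectories, for example:
#   out/events/year=2020/part-...
#   out/events/year=2021/part-...
write.df(df, path = "out/events", source = "parquet", partitionBy = "year")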

Details

Additionally, mode is used to specify the behavior of the save operation when data already exists in the data source. There are four modes:

'append': contents of this SparkDataFrame are expected to be appended to existing data.

'overwrite': existing data is expected to be overwritten by the contents of this SparkDataFrame.

'error' or 'errorifexists': an exception is expected to be thrown.

'ignore': the save operation is expected to not save the contents of the SparkDataFrame and to not change the existing data.
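
A minimal sketch of the mode behavior, assuming a running SparkR session and a writable local path (the path and data are illustrative):

sparkR.session()
df <- createDataFrame(faithful)

# First write succeeds and creates "out/faithful".
write.df(df, path = "out/faithful", source = "parquet", mode = "error")

# "ignore" leaves the existing data untouched; no error is raised.
write.df(df, path = "out/faithful", source = "parquet", mode = "ignore")

# "overwrite" replaces the existing contents with the new data.
write.df(df, path = "out/faithful", source = "parquet", mode = "overwrite")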

Note

write.df since 1.4.0

saveDF since 1.4.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Examples

## Not run: 
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
write.df(df, "myfile", "parquet", "overwrite", partitionBy = c("col1", "col2"))
parquetPath2 <- "path/to/file.parquet"
saveDF(df, parquetPath2, "parquet", mode = "append", mergeSchema = TRUE)

## End(Not run)
