write.orc: Save the contents of a SparkDataFrame as an ORC file, preserving the schema


Description

Save the contents of a SparkDataFrame as an ORC file, preserving the schema. Files written out with this method can be read back in as a SparkDataFrame using read.orc().

Usage

write.orc(x, path, ...)

## S4 method for signature 'SparkDataFrame,character'
write.orc(x, path, mode = "error", ...)

Arguments

x

A SparkDataFrame

path

The directory where the file is saved

...

additional argument(s) passed to the method.

mode

the save mode: one of 'append', 'overwrite', 'error', 'errorifexists', or 'ignore' ('error' by default)
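For instance, to overwrite any existing output rather than raising an error, the mode argument can be passed explicitly; a minimal sketch (the output path is illustrative):

## Not run: 
# Overwrite the target directory if it already exists
write.orc(df, "/tmp/orc-output/", mode = "overwrite")

## End(Not run)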

Note

write.orc since 2.0.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.parquet(), write.stream(), write.text()

Examples

## Not run: 
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
write.orc(df, "/tmp/sparkr-tmp1/")

## End(Not run)
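
The file written above can be loaded back into a SparkDataFrame with read.orc(), as noted in the Description; a minimal sketch reusing the same output path:

## Not run: 
orcDf <- read.orc("/tmp/sparkr-tmp1/")
head(orcDf)

## End(Not run)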
