spark_write_parquet: Write a 'spark_tbl' to Parquet format


View source: R/read-write.R

Description

Write a spark_tbl to a parquet file.

Usage

spark_write_parquet(.data, path, mode = "error", partition_by = NULL, ...)

Arguments

.data

a spark_tbl

path

string, the path where the file is to be saved.

mode

string, the save mode: "error" (default, fails if the path already exists), "overwrite", "append", or "ignore"

partition_by

string or character vector, column name(s) to partition by on disk

...

any other named options. See details below.

Details

For Parquet, compression can be set using .... compression (default is the value specified in spark.sql.parquet.compression.codec): compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, uncompressed, snappy, gzip, lzo, brotli, lz4, and zstd). More information can be found here: https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/DataFrameWriter.html#parquet-java.lang.String-
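A minimal sketch of a typical call, assuming a tidyspark session created with spark_session() and a spark_tbl built from a local data frame; the output path is illustrative:

    library(tidyspark)
    spark_session()

    iris_tbl <- spark_tbl(iris)

    # write to Parquet, partitioned on disk by Species,
    # passing a compression codec through ...
    iris_tbl %>%
      spark_write_parquet(
        path = "/tmp/iris_parquet",
        mode = "overwrite",
        partition_by = "Species",
        compression = "snappy"
      )

With partition_by set, Spark writes one subdirectory per distinct value (e.g. Species=setosa/) rather than a single file, which speeds up later reads that filter on that column.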


danzafar/tidyspark documentation built on Sept. 30, 2020, 12:19 p.m.