stream_write_csv
Write files to the stream
Usage:

stream_write_csv(
  x,
  path,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoints", random_string("")),
  header = TRUE,
  delimiter = ",",
  quote = "\"",
  escape = "\\",
  charset = "UTF-8",
  null_value = NULL,
  options = list(),
  partition_by = NULL,
  ...
)
stream_write_text(
  x,
  path,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)

stream_write_json(
  x,
  path,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)

stream_write_parquet(
  x,
  path,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)

stream_write_orc(
  x,
  path,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)

stream_write_kafka(
  x,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path("checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)

stream_write_console(
  x,
  mode = c("append", "complete", "update"),
  options = list(),
  trigger = stream_trigger_interval(),
  partition_by = NULL,
  ...
)

stream_write_delta(
  x,
  path,
  mode = c("append", "complete", "update"),
  checkpoint = file.path("checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)
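All of these sinks follow the same pattern: pipe a streaming source into a stream_write_*() call, keep the returned handle, and stop the query with stream_stop(). As a quick sketch, the console sink is handy for inspecting a query while developing; the input folder below is a placeholder, not part of this documentation:

## Not run:

library(sparklyr)

sc <- spark_connect(master = "local")

# Print each micro-batch to the console while developing a query;
# "file:///tmp/csv-in" is a placeholder for an existing input folder
stream <- stream_read_csv(sc, "file:///tmp/csv-in") %>%
  stream_write_console(mode = "append")

stream_stop(stream)

## End(Not run)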
Arguments:

x: A Spark DataFrame or dplyr operation.

path: The path to the file. Needs to be accessible from the cluster.
  Supports the "hdfs://", "s3a://" and "file://" protocols.

mode: Specifies how data is written to a streaming sink. Valid values
  are "append", "complete" or "update".

trigger: The trigger for the stream query, defaults to micro-batches
  running every 5 seconds. See stream_trigger_interval() and
  stream_trigger_continuous().

checkpoint: The location where the system will write all the
  checkpoint information to guarantee end-to-end fault-tolerance.

header: Should the first row of data be used as a header? Defaults to
  TRUE.

delimiter: The character used to delimit each column, defaults to ",".

quote: The character used as a quote. Defaults to '"'.

escape: The character used to escape other characters, defaults to "\".

charset: The character set, defaults to "UTF-8".

null_value: The character to use for default values, defaults to NULL.

options: A list of strings with additional options.

partition_by: Partitions the output by the given list of columns.

...: Optional arguments; currently unused.
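To illustrate how trigger, checkpoint and partition_by fit together, here is a sketch of a Parquet sink that batches once per second and partitions its output files; the folder paths, the one-second interval, and the Species column are illustrative choices, not requirements:

## Not run:

library(sparklyr)

sc <- spark_connect(master = "local")

# Hypothetical folders; the source folder must already contain CSV data
stream <- stream_read_csv(sc, "file:///tmp/csv-in") %>%
  stream_write_parquet(
    path = "file:///tmp/parquet-out",
    trigger = stream_trigger_interval(interval = 1000),  # micro-batch every second
    checkpoint = "file:///tmp/checkpoints/parquet-out",  # fault-tolerance state
    partition_by = "Species"                             # one subfolder per value
  )

stream_stop(stream)

## End(Not run)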
See Also:

Other Spark stream serialization: stream_write_memory(),
stream_write_table()
Examples:

## Not run:

library(sparklyr)

sc <- spark_connect(master = "local")

# Create a local folder with a CSV file to act as the streaming source
dir.create("csv-in")
write.csv(iris, "csv-in/data.csv", row.names = FALSE)

# Read the folder as a stream and write it back out to a CSV sink
csv_path <- file.path("file://", getwd(), "csv-in")
stream <- stream_read_csv(sc, csv_path) %>% stream_write_csv("csv-out")

stream_stop(stream)

## End(Not run)
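stream_write_kafka() takes no path; the destination is configured entirely through options, and Spark's Kafka sink expects its payload in a string column named value. A minimal sketch, assuming a local broker and the iris source folder from the example above; the broker address and topic name are placeholders, and the Kafka connector package must be available on the cluster:

## Not run:

library(sparklyr)

# "kafka" asks sparklyr to pull in the Spark Kafka connector package
sc <- spark_connect(master = "local", packages = "kafka")

write_options <- list(
  kafka.bootstrap.servers = "localhost:9092",  # placeholder broker
  topic = "iris-out"                           # placeholder topic
)

# The Kafka sink reads its payload from a string column named "value"
stream <- stream_read_csv(sc, file.path("file://", getwd(), "csv-in")) %>%
  dplyr::transmute(value = as.character(Species)) %>%
  stream_write_kafka(options = write_options)

stream_stop(stream)

## End(Not run)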