open_dataset                package:arrow                R Documentation

Open a multi-file dataset
Description:

Arrow Datasets allow you to query against data that has been split across
multiple files. This sharding of data may indicate partitioning, which can
accelerate queries that only touch some partitions (files). Call
open_dataset() to point to a directory of data files and return a Dataset,
then use dplyr methods to query it.
Usage:

open_dataset(
  sources,
  schema = NULL,
  partitioning = hive_partition(),
  hive_style = NA,
  unify_schemas = NULL,
  format = c("parquet", "arrow", "ipc", "feather", "csv", "tsv", "text", "json"),
  factory_options = list(),
  ...
)
Arguments:

sources: One of:
  * a string path or URI to a directory containing data files
  * a FileSystem that references a directory containing data files
    (such as what is returned by s3_bucket())
  * a string path or URI to a single file
  * a character vector of paths or URIs to individual data files
  * a list of Dataset objects as created by this function
  * a list of DatasetFactory objects as created by dataset_factory()
  When sources is a vector of file URIs, they must all use the same
  protocol and point to files located in the same file system and having
  the same format.

schema: Schema for the Dataset. If NULL (the default), the schema will be
  inferred from the data sources.

partitioning: When sources is a directory path/URI, one of:
  * a Schema, in which case the file paths relative to sources will be
    parsed, and path segments will be matched with the schema fields
  * a character vector that defines the field names corresponding to
    those path segments (that is, you're providing the names that would
    correspond to a Schema but the types will be autodetected)
  * a Partitioning or PartitioningFactory, such as returned by
    hive_partition()
  * NULL for no partitioning
  The default is to autodetect Hive-style partitions unless
  hive_style = FALSE. See the "Partitioning" section for details. When
  sources is not a directory path/URI, partitioning is ignored.

hive_style: Logical: should partitioning be interpreted as Hive-style?
  Default is NA, which means to inspect the file paths for Hive-style
  partitioning and behave accordingly.

unify_schemas: logical: should all data fragments (files, Datasets) be
  scanned in order to create a unified schema from them? If FALSE, only
  the first fragment will be inspected for its schema. Use this fast path
  when you know and trust that all fragments have an identical schema.
  The default is FALSE when creating a dataset from a directory path/URI
  or vector of file paths/URIs (because there may be many files and
  scanning may be slow) but TRUE when sources is a list of Datasets
  (because there should be few Datasets in the list and their Schemas are
  already in memory).

format: A FileFormat object, or a string identifier of the format of the
  files in sources. This argument is ignored when sources is a list of
  Dataset objects. Currently supported values:
  * "parquet"
  * "ipc"/"arrow"/"feather", all aliases for each other; for Feather,
    note that only version 2 files are supported
  * "csv"/"text", aliases for the same thing (because comma is the
    default delimiter for text files)
  * "tsv", equivalent to passing format = "text", delimiter = "\t"
  * "json"
  Default is "parquet", unless a delimiter is also specified, in which
  case it is assumed to be "text".

factory_options: list of optional FileSystemFactoryOptions:
  * partition_base_dir: string path segment prefix to ignore when
    discovering partition information with DirectoryPartitioning
    (meaningless for HivePartitioning)
  * exclude_invalid_files: logical: should files that are not valid data
    files be excluded? Default is FALSE because checking all files up
    front incurs I/O and thus will be slower, especially on remote file
    systems. If FALSE and there are invalid files, there will be an error
    at scan time.
  * selector_ignore_prefixes: character vector of file prefixes to ignore
    when discovering files in a directory. If invalid files can be
    excluded by a common filename prefix this way, you can avoid the I/O
    cost of exclude_invalid_files.

...: additional arguments passed to dataset_factory() when sources is a
  directory path/URI or vector of file paths/URIs; otherwise ignored.
  These may include format to indicate the file format, or other
  format-specific options (see read_csv_arrow(), read_parquet(), and
  read_feather() for how to specify these).
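For illustration, a short sketch of format-specific options passed through
... and of factory_options (the directory paths are hypothetical
placeholders):

# Delimited text: reader options such as delimiter are passed through ...
open_dataset("/path/to/text_dir", format = "text", delimiter = ";")

# Exclude files that are not valid data files during discovery
# (slower up front, but avoids errors at scan time)
open_dataset("/path/to/data_dir",
  factory_options = list(exclude_invalid_files = TRUE)
)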
Value:

A Dataset R6 object. Use dplyr methods on it to query the data, or call
$NewScan() to construct a query directly.
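A minimal sketch of that workflow (ds_path is a placeholder path to an
existing dataset directory, assumed here to have mtcars-like columns):

library(arrow)
library(dplyr)

ds <- open_dataset(ds_path)  # returns a Dataset; no data is read yet
ds %>%
  filter(cyl == 8) %>%       # builds a lazy query against the Dataset
  select(mpg, cyl) %>%
  collect()                  # evaluates the query and returns a tibble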
Partitioning:

Data is often split into multiple files and nested in subdirectories based on the value of one or more columns in the data. It may be a column that is commonly referenced in queries, or it may be time-based, for example. Data that is divided this way is "partitioned," and the values for those partitioning columns are encoded into the file path segments. These path segments are effectively virtual columns in the dataset, and because their values are known prior to reading the files themselves, we can greatly speed up filtered queries by skipping some files entirely.
Arrow supports reading partition information from file paths in two forms:

* "Hive-style", deriving from the Apache Hive project and common to some
  database systems. Partitions are encoded as "key=value" in path
  segments, such as "year=2019/month=1/file.parquet". While they may be
  awkward as file names, they have the advantage of being self-describing.
* "Directory" partitioning, which is Hive without the key names, like
  "2019/01/file.parquet". In order to use these, we need to know at least
  what names to give the virtual columns that come from the path segments.
The default behavior in open_dataset() is to inspect the file paths
contained in the provided directory, and if they look like Hive-style,
parse them as Hive. If your dataset has Hive-style partitioning in the
file paths, you do not need to provide anything in the partitioning
argument to open_dataset() to use them. If you do provide a character
vector of partition column names, they will be ignored if they match what
is detected, and if they don't match, you'll get an error. (If you want to
rename partition columns, do that using select() or rename() after opening
the dataset.) If you provide a Schema and the names match what is
detected, it will use the types defined by the Schema. In the example file
path above, you could provide a Schema to specify that "month" should be
int8() instead of the int32() it will be parsed as by default.
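Continuing the "year=2019/month=1" example above, a short sketch
(hive_path is a placeholder for the dataset's directory):

# Use int8() for "month" instead of the autodetected int32()
open_dataset(hive_path, partitioning = schema(year = int32(), month = int8()))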
If your file paths do not appear to be Hive-style, or if you pass
hive_style = FALSE, the partitioning argument will be used to create
Directory partitioning. A character vector of names is required to create
partitions; you may instead provide a Schema to map those names to desired
column types, as described above. If neither is provided, no partitioning
information will be taken from the file paths.
Examples:

# Set up directory for examples
tf <- tempfile()
dir.create(tf)
on.exit(unlink(tf))
write_dataset(mtcars, tf, partitioning = "cyl")
# You can specify a directory containing the files for your dataset and
# open_dataset will scan all files in your directory.
open_dataset(tf)
# You can also supply a vector of paths
open_dataset(c(file.path(tf, "cyl=4/part-0.parquet"), file.path(tf, "cyl=8/part-0.parquet")))
## You must specify the file format if using a format other than parquet.
tf2 <- tempfile()
dir.create(tf2)
on.exit(unlink(tf2))
write_dataset(mtcars, tf2, format = "ipc")
# This line will result in errors when you try to work with the data
## Not run:
open_dataset(tf2)
## End(Not run)
# This line will work
open_dataset(tf2, format = "ipc")
## You can specify file partitioning to include it as a field in your dataset
# Create a temporary directory and write example dataset
tf3 <- tempfile()
dir.create(tf3)
on.exit(unlink(tf3))
write_dataset(airquality, tf3, partitioning = c("Month", "Day"), hive_style = FALSE)
# View files - you can see the partitioning means that files have been written
# to folders based on Month/Day values
tf3_files <- list.files(tf3, recursive = TRUE)
# With no partitioning specified, dataset contains all files but doesn't include
# directory names as field names
open_dataset(tf3)
# Now that partitioning has been specified, your dataset contains columns for Month and Day
open_dataset(tf3, partitioning = c("Month", "Day"))
# If you want to specify the data types for your fields, you can pass in a Schema
open_dataset(tf3, partitioning = schema(Month = int8(), Day = int8()))
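# Because partition values come from the file paths, filters on partition
# columns can skip non-matching files entirely. A small sketch continuing
# the example above:
library(dplyr)
open_dataset(tf3, partitioning = c("Month", "Day")) %>%
  filter(Month == 5) %>% # only files under the Month "5" directories are read
  collect()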