nanoparquet is a reader and writer for a common subset of Parquet files. A few Parquet logical types, notably INTERVAL and UNKNOWN, are not supported.
Install the R package from CRAN:
install.packages("nanoparquet")
Call read_parquet() to read a Parquet file:
df <- nanoparquet::read_parquet("example.parquet")
To see the columns of a Parquet file and how their types are mapped to
R types by read_parquet(), call read_parquet_schema() first:
nanoparquet::read_parquet_schema("example.parquet")
Folders of similarly structured Parquet files (e.g. as produced by Spark) can be read like this:
df <- data.table::rbindlist(lapply(
  Sys.glob("some-folder/part-*.parquet"),
  nanoparquet::read_parquet
))
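If you would rather not depend on data.table, base R can do the same; a minimal sketch, assuming all part files share the same schema:
# list the part files and read each one with nanoparquet
files <- Sys.glob("some-folder/part-*.parquet")
# bind the per-file data frames into one (base R alternative to rbindlist)
df <- do.call(rbind, lapply(files, nanoparquet::read_parquet))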
Call write_parquet() to write a data frame to a Parquet file:
nanoparquet::write_parquet(mtcars, "mtcars.parquet")
To see how the columns of the data frame will be mapped to Parquet types
by write_parquet(), call infer_parquet_schema() first:
nanoparquet::infer_parquet_schema(mtcars)
Call read_parquet_info(), read_parquet_schema(), or
read_parquet_metadata() to see various kinds of metadata from a Parquet
file:
read_parquet_info() shows a basic summary of the file.
read_parquet_schema() shows all columns, including non-leaf columns, and how they are mapped to R types by read_parquet().
read_parquet_metadata() shows the most complete metadata information: file meta data, the schema, and the row groups and column chunks of the file.
nanoparquet::read_parquet_info("mtcars.parquet")
nanoparquet::read_parquet_schema("mtcars.parquet")
nanoparquet::read_parquet_metadata("mtcars.parquet")
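read_parquet_metadata() returns its result as a named list; a minimal sketch of inspecting it, where the component names below (schema, row_groups) are assumptions to check against names() on your own file:
md <- nanoparquet::read_parquet_metadata("mtcars.parquet")
names(md)        # list the available components
md$schema        # assumed component: the full Parquet schema
md$row_groups    # assumed component: one entry per row group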
If you find a file that should be supported but isn't, please open an issue here with a link to the file.
See also ?parquet_options() for further details.
nanoparquet.class: extra class to add to data frames returned by read_parquet(). If it is not defined, the default is "tbl", which changes how the data frame is printed if the pillar package is loaded.
nanoparquet.compression_level: see ?parquet_options() for the defaults and the possible values for each compression method. Inf selects maximum compression for each method.
nanoparquet.num_rows_per_row_group: the number of rows to put into a row group by write_parquet(), if row groups are not specified explicitly. It should be an integer scalar. Defaults to 10 million.
nanoparquet.use_arrow_metadata: unless this is set to FALSE, read_parquet() will make use of Arrow metadata in the Parquet file. Currently this is used to detect factor columns.
nanoparquet.write_arrow_metadata: unless this is set to FALSE, write_parquet() will add Arrow metadata to the Parquet file. This helps preserve column classes, e.g. factors will be read back as factors, both by nanoparquet and Arrow.
nanoparquet.write_data_page_version: the data page version to write by default. Possible values are 1 and 2. Default is 1.
nanoparquet.write_minmax_values: whether to write minimum and maximum values per row group, for data types that support this in write_parquet().
License: MIT