big.read.table: Read in chunks from a large file with row/column filtering to obtain a reasonable-sized data.frame

View source: R/big.read.table.R

big.read.table    R Documentation

Read in chunks from a large file with row/column filtering to obtain a reasonable-sized data.frame.

Description

Read in chunks from a large file with row/column filtering to obtain a reasonable-sized data.frame.

Usage

big.read.table(
  file,
  nrows = 1e+05,
  sep = ",",
  header = TRUE,
  row.names = NULL,
  cols = NULL,
  rowfilter = NULL,
  as.is = TRUE,
  estimate = FALSE
)

Arguments

file

the name of the file to read

nrows

the chunk size, that is, the number of rows read per chunk; consider reducing this if the file has many columns

sep

the field separator; a comma by default, since a CSV file is expected

header

logical; whether the first line of the file contains column names. TRUE by default

row.names

NULL by default; the use of row names is discouraged

cols

an optional specification of columns to keep, by name or by number (negative indexing is supported); see the second example in the Examples section

rowfilter

a function that takes a chunk (a data frame) and returns a smaller data frame with fewer rows; this row filtering is applied separately from the column filtering. A minimal sketch of the underlying chunked pattern appears after this argument list.

as.is

logical; whether character data should be left as-is rather than converted to factors. TRUE by default

estimate

logical; if TRUE, a preliminary estimate of the work to be done is made first, giving the user a chance to bail out if it looks like a bad idea
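
The underlying approach is chunked reading: repeatedly read nrows lines at a time from an open connection, filter each chunk, and accumulate the survivors. The sketch below illustrates that pattern in base R; the function name big_read_sketch and its body are illustrative assumptions, not the package's actual implementation (see R/big.read.table.R for that).

# Illustrative sketch of chunked reading only -- not the package's code.
big_read_sketch <- function(file, nrows = 1e5, sep = ",", rowfilter = NULL) {
  con <- file(file, open = "r")
  on.exit(close(con))
  # Read the header line once; scan() strips any quoting around names.
  col.names <- scan(con, what = character(), sep = sep,
                    nlines = 1, quiet = TRUE)
  pieces <- list()
  repeat {
    # read.table() errors once the connection is exhausted.
    chunk <- tryCatch(
      read.table(con, sep = sep, header = FALSE, nrows = nrows,
                 col.names = col.names, as.is = TRUE),
      error = function(e) NULL)
    if (is.null(chunk)) break
    if (!is.null(rowfilter)) chunk <- rowfilter(chunk)  # shrink each chunk
    pieces[[length(pieces) + 1]] <- chunk
  }
  do.call(rbind, pieces)  # combine the filtered chunks
}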

Note

This is very much 'in development' and could be buggy. It is included here because I used it as an example in one of my courses and then needed to update the package to keep CRAN happy. So here it is. Buyer beware. - Jay

Examples

data(CO2)
write.csv(CO2, "CO2.csv", row.names=FALSE)  # write a small CSV to read back
x <- big.read.table("CO2.csv", nrows=10)    # read it in chunks of 10 rows
unlink("CO2.csv")                           # remove the temporary file
head(x)
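
A further example, a sketch based on the argument descriptions above, showing column selection with cols and row filtering with rowfilter:

data(CO2)
write.csv(CO2, "CO2.csv", row.names=FALSE)
# keep only three columns, selected by name
y <- big.read.table("CO2.csv", nrows=10, cols=c("Plant", "conc", "uptake"))
# drop rows chunk by chunk with a filtering function
z <- big.read.table("CO2.csv", nrows=10,
                    rowfilter=function(df) df[df$uptake > 30, ])
unlink("CO2.csv")
head(y)
head(z)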
