read_big_data: Read a really big dataframe


View source: R/condoR.R

Description

Read a really big dataframe

Usage

read_big_data(csvfile, key_colname = "Bird", value_colname = "Found",
  chunksize = 100, delim = " ")
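A minimal illustrative call; the file name and chunk size here are hypothetical, and the column names shown are simply the defaults from the signature above:

df <- read_big_data("counts.csv.gz",
                    key_colname = "Bird",
                    value_colname = "Found",
                    chunksize = 1000,
                    delim = " ")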

Arguments

csvfile

The path to the .csv file. Compressed files also work, so something like file.csv.gz is fine.

key_colname

The name for the new "key" column that will be generated; it refers to the non-capitalized columns in the csv.

value_colname

The name for the column holding the non-zero values. These are mostly '1', but occasionally 'X'; deciding how to handle those entries is left to the user (see the sketch after this argument list).

chunksize

The number of rows to read at a time. The default is quite small; larger values read faster, but a value that is too large can exhaust your machine's memory.

delim

The field delimiter. The default is " " for space-delimited files; use "\t" for tab-delimited files and "," for comma-delimited files.
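Because the value column can mix "1" with the occasional "X", it may come back as character. One possible way to recode it, assuming df is the data frame returned by read_big_data and that "X" should be treated as missing:

# Non-numeric entries such as "X" become NA; "1" becomes 1L
df$Found <- suppressWarnings(as.integer(df$Found))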


CreRecombinase/condoR documentation built on May 6, 2019, 12:52 p.m.