View source: R/micro_read_chunked.R
read_ipums_micro_chunked | R Documentation
Read a microdata dataset downloaded from the IPUMS extract system in chunks.
Use these functions to read a file that is too large to store in memory all at once. The file is processed in chunks of a given size, with a provided callback function applied to each chunk.
Two files are required to load IPUMS microdata extracts:
A DDI codebook file (.xml) used to parse the extract's data file
A data file (either .dat.gz or .csv.gz)
See Downloading IPUMS files below for more information about downloading these files.
read_ipums_micro_chunked() and read_ipums_micro_list_chunked() differ in their handling of extracts that contain multiple record types. See Data structures below.
Note that Stata, SAS, and SPSS file formats are not supported by ipumsr readers. Convert your extract to fixed-width or CSV format, or see haven for help loading these files.
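If your extract was downloaded in one of those formats, a minimal sketch of loading it with haven might look like the following (the file names are hypothetical; haven's read_dta(), read_sav(), and read_sas() are its standard readers for Stata, SPSS, and SAS files):

```r
library(haven)

# Hypothetical file names; substitute the path to your own extract
dta <- read_dta("cps_extract.dta")   # Stata
sav <- read_sav("cps_extract.sav")   # SPSS
sas <- read_sas("cps_extract.sas7bdat")  # SAS
```

Note that haven reads the full file into memory, so it does not provide the chunked processing that these functions offer.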
read_ipums_micro_chunked(
ddi,
callback,
chunk_size = 10000,
vars = NULL,
data_file = NULL,
verbose = TRUE,
var_attrs = c("val_labels", "var_label", "var_desc"),
lower_vars = FALSE
)
read_ipums_micro_list_chunked(
ddi,
callback,
chunk_size = 10000,
vars = NULL,
data_file = NULL,
verbose = TRUE,
var_attrs = c("val_labels", "var_label", "var_desc"),
lower_vars = FALSE
)
ddi: Either a path to a DDI .xml file downloaded from IPUMS, or an ipums_ddi object parsed by read_ipums_ddi(). See Downloading IPUMS files below.

callback: An ipums_callback object, or a function that will be converted to an IpumsSideEffectCallback object.
chunk_size: Integer number of observations to read per chunk. Higher values use more RAM, but typically result in faster processing. Defaults to 10,000.
vars: Names of variables to include in the output. Accepts a vector of names or a tidyselect selection. If NULL, all variables are included. For hierarchical data, the RECTYPE variable is always included, even if it is not specified.
data_file: Path to the data (.gz) file associated with the provided ddi file. By default, looks for the data file in the same directory as the DDI file. If the data file has been moved, specify its location here.
verbose: Logical indicating whether to display IPUMS conditions and progress information.
var_attrs: Variable attributes from the DDI to add to the columns of the output data. Defaults to all available attributes. See set_ipums_var_attributes() for more details.
lower_vars: If reading a DDI from a file, a logical indicating whether to convert variable names to lowercase. Defaults to FALSE. This argument is ignored if the ddi argument is an ipums_ddi object; use lower_vars() directly on the object instead. Note that if reading in chunks from a .csv or .csv.gz file, the callback function will be called before variable names are converted to lowercase, and should therefore reference uppercase variable names.
The return value depends on the provided callback object. See ipums_callback.
Files from IPUMS projects that contain data for multiple types of records (e.g. household records and person records) may be either rectangular or hierarchical.
Rectangular data are transformed such that each row of data represents only one type of record. For instance, each row will represent a person record, and all household-level information for that person will be included in the same row.
Hierarchical data have records of different types interspersed in a single file. For instance, a household record will be included in its own row followed by the person records associated with that household.
Hierarchical data can be read in two different formats:
read_ipums_micro_chunked() reads each chunk of data into a tibble where each row represents a single record, regardless of record type. Variables that do not apply to a particular record type will be filled with NA in rows of that record type. For instance, a person-specific variable will be missing in all rows associated with household records. The provided callback function should therefore operate on a tibble object.
read_ipums_micro_list_chunked() reads each chunk of data into a list of tibble objects, where each list element contains only one record type. Each list element is named with its corresponding record type. The provided callback function should therefore operate on a list object. In this case, the chunk size refers to the total number of rows across record types, rather than the number of rows in each record type.
You must download both the DDI codebook and the data file from the IPUMS extract system to load the data into R. read_ipums_micro_*() functions assume that the data file and codebook share a common base file name and are present in the same directory. If this is not the case, provide a separate path to the data file with the data_file argument.
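For instance, when the codebook and data file are stored in different directories, the data file path can be passed explicitly. The paths and the identity callback below are illustrative placeholders:

```r
library(ipumsr)

# Hypothetical paths: codebook and data file stored in separate directories
dat <- read_ipums_micro_chunked(
  ddi = "codebooks/cps_00001.xml",
  callback = IpumsDataFrameCallback$new(function(x, pos) x),
  data_file = "data/cps_00001.dat.gz",
  chunk_size = 10000
)
```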
If using the IPUMS extract interface:
Download the data file by clicking Download .dat under Download Data.
Download the DDI codebook by right clicking on the DDI link in the Codebook column of the extract interface and selecting Save as... (on Safari, you may have to select Download Linked File as...). Be sure that the codebook is downloaded in .xml format.
If using the IPUMS API:
For supported collections, use download_extract() to download a completed extract via the IPUMS API. This automatically downloads both the DDI codebook and the data file from the extract and returns the path to the codebook file.
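As a sketch, the end-to-end API workflow might look like the following. The sample and variable names are examples only, and the code assumes an IPUMS API key has already been registered with set_ipums_api_key():

```r
library(ipumsr)

# Illustrative extract definition; sample/variable names are placeholders
extract <- define_extract_micro(
  collection = "cps",
  description = "Example chunked-read extract",
  samples = "cps2022_03s",
  variables = c("STATEFIP", "INCTOT")
)

submitted <- submit_extract(extract)
ready <- wait_for_extract(submitted)

# download_extract() returns the path to the DDI codebook file
ddi_path <- download_extract(ready)

dat <- read_ipums_micro_chunked(
  ddi_path,
  callback = IpumsDataFrameCallback$new(function(x, pos) x),
  chunk_size = 10000
)
```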
read_ipums_micro_yield() for more flexible handling of large IPUMS microdata files.

read_ipums_micro() to read data from an IPUMS microdata extract.

read_ipums_ddi() to read metadata associated with an IPUMS microdata extract.

read_ipums_sf() to read spatial data from an IPUMS extract.

ipums_list_files() to list files in an IPUMS extract.
suppressMessages(library(dplyr))
# Example codebook file
cps_rect_ddi_file <- ipums_example("cps_00157.xml")
# Function to extract Minnesota cases from CPS example
# (This can also be accomplished by including case selections
# in an extract definition)
#
# Function must take `x` and `pos` to refer to data and row position,
# respectively.
filter_mn <- function(x, pos) {
x[x$STATEFIP == 27, ]
}
# Initialize callback
filter_mn_callback <- IpumsDataFrameCallback$new(filter_mn)
# Process data in chunks, filtering to MN cases in each chunk
read_ipums_micro_chunked(
cps_rect_ddi_file,
callback = filter_mn_callback,
chunk_size = 1000,
verbose = FALSE
)
# Tabulate INCTOT average by state without storing full dataset in memory
read_ipums_micro_chunked(
cps_rect_ddi_file,
callback = IpumsDataFrameCallback$new(
function(x, pos) {
x %>%
mutate(
INCTOT = lbl_na_if(
INCTOT,
~ grepl("Missing|N.I.U.", .lbl)
)
) %>%
filter(!is.na(INCTOT)) %>%
group_by(STATEFIP = as_factor(STATEFIP)) %>%
summarize(INCTOT_SUM = sum(INCTOT), n = n(), .groups = "drop")
}
),
chunk_size = 1000,
verbose = FALSE
) %>%
group_by(STATEFIP) %>%
summarize(avg_inc = sum(INCTOT_SUM) / sum(n))
# `x` will be a list when using `read_ipums_micro_list_chunked()`
read_ipums_micro_list_chunked(
ipums_example("cps_00159.xml"),
callback = IpumsSideEffectCallback$new(function(x, pos) {
print(
paste0(
nrow(x$PERSON), " persons and ",
nrow(x$HOUSEHOLD), " households in this chunk."
)
)
}),
chunk_size = 1000,
verbose = FALSE
)
# Using the biglm package, you can even run a regression without storing
# the full dataset in memory
if (requireNamespace("biglm")) {
lm_results <- read_ipums_micro_chunked(
ipums_example("cps_00160.xml"),
IpumsBiglmCallback$new(
INCTOT ~ AGE + HEALTH, # Model formula
function(x, pos) {
x %>%
mutate(
INCTOT = lbl_na_if(
INCTOT,
~ grepl("Missing|N.I.U.", .lbl)
),
HEALTH = as_factor(HEALTH)
)
}
),
chunk_size = 1000,
verbose = FALSE
)
summary(lm_results)
}