knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/dropin-",
  out.width = "100%"
)

Sys.setenv(DUCKPLYR_META_ENABLE = FALSE)

set.seed(20230702)

clean_output <- function(x, options) {
  # Normalize memory addresses, DuckDB relation names, and row-number sort
  # keys so that the rendered output is reproducible across runs
  x <- gsub("0x[0-9a-f]+", "0xdeadbeef", x)
  x <- gsub("dataframe_[0-9]*_[0-9]*", "      dataframe_42_42      ", x)
  x <- gsub("[0-9]*\\.___row_number ASC", "42.___row_number ASC", x)

  # Also write the part after the front-matter delimiter to index.md,
  # with box-drawing dashes replaced by plain dashes
  index <- x
  index <- gsub("─", "-", index)
  index <- strsplit(paste(index, collapse = "\n"), "\n---\n")[[1]][[2]]
  writeLines(index, "index.md")

  # Turn `vignette("...")` references into links to the published articles,
  # then strip ANSI styling
  # FIXME: Change to the main site after release
  x <- gsub('(`vignette[(]"([^"]+)"[)]`)', "[\\1](https://duckplyr.tidyverse.org/articles/\\2.html)", x)
  x <- fansi::strip_sgr(x)
  x
}

options(
  cli.num_colors = 256,
  cli.width = 71,
  width = 71,
  pillar.bold = TRUE,
  pillar.max_title_chars = 5,
  pillar.min_title_chars = 5,
  pillar.max_footer_lines = 12,
  conflicts.policy = list(warn = FALSE)
)

local({
  # Install clean_output as the document hook; the previously registered
  # hook is looked up but not chained
  hook_document <- knitr::knit_hooks$get("document")
  knitr::knit_hooks$set(document = clean_output)
})

duckplyr


A drop-in replacement for dplyr, powered by DuckDB for speed.

dplyr is the grammar of data manipulation in the tidyverse. The duckplyr package will run all of your existing dplyr code with identical results, using DuckDB where possible to compute the results faster. In addition, you can analyze larger-than-memory datasets straight from files on your disk or from the web.

If you are new to dplyr, the best place to start is the data transformation chapter in R for Data Science.

Installation

Install duckplyr from CRAN with:

install.packages("duckplyr")

You can also install the development version of duckplyr from R-universe:

install.packages("duckplyr", repos = c("https://tidyverse.r-universe.dev", "https://cloud.r-project.org"))

Or from GitHub with:

# install.packages("pak")
pak::pak("tidyverse/duckplyr")

Drop-in replacement for dplyr

Calling library(duckplyr) overwrites dplyr methods, enabling duckplyr for the entire session.

library(conflicted)
library(duckplyr)
# Done after library(duckplyr) so that the original startup output is preserved
pkgload::load_all()
Sys.setenv(DUCKPLYR_FALLBACK_COLLECT = 0)
conflict_prefer("filter", "dplyr", quiet = TRUE)

The following code aggregates the in-flight delay by year and month for the first half of the year. We use a variant of the nycflights13::flights dataset where the timezone has been set to UTC to work around a current limitation of duckplyr; see vignette("limits").

flights_df()

out <-
  flights_df() |>
  filter(!is.na(arr_delay), !is.na(dep_delay)) |>
  mutate(inflight_delay = arr_delay - dep_delay) |>
  summarize(
    .by = c(year, month),
    mean_inflight_delay = mean(inflight_delay),
    median_inflight_delay = median(inflight_delay),
  ) |>
  filter(month <= 6)

The result is a plain tibble:

class(out)

Nothing has been computed yet. Querying the number of rows, or a column, starts the computation:

out$month
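Asking for the number of rows works the same way:

nrow(out)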

Note that, unlike with dplyr, the results are not ordered; see ?config for details. However, once materialized, the results are stable:

out

If a computation is not supported by DuckDB, duckplyr will automatically fall back to dplyr.

flights_df() |>
  summarize(
    .by = origin,
    dest = paste(sort(unique(dest)), collapse = " ")
  )

Restart R, or call duckplyr::methods_restore() to revert to the default dplyr implementation.

duckplyr::methods_restore()
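If you prefer not to change the whole session, duckplyr can also be used for individual objects; a minimal sketch, assuming the as_duckdb_tibble() constructor shipped in the same duckplyr release as read_parquet_duckdb():

# Only pipelines that start from this object are computed by DuckDB
flights_duckdb <- as_duckdb_tibble(flights_df())

flights_duckdb |>
  filter(!is.na(arr_delay)) |>
  count(origin)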

Analyzing larger-than-memory data

An extended variant of the nycflights13::flights dataset is also available for download as Parquet files.

year <- 2022:2024
base_url <- "https://blobs.duckdb.org/flight-data-partitioned/"
files <- paste0("Year=", year, "/data_0.parquet")
urls <- paste0(base_url, files)
tibble(urls)

Using the httpfs DuckDB extension, we can query these files directly from R, without even downloading them first.

db_exec("INSTALL httpfs")
db_exec("LOAD httpfs")

flights <- read_parquet_duckdb(urls)

As with local data frames, queries on the remote data are executed lazily. Unlike with local data frames, however, automatic materialization is disallowed by default if the result is too large, to protect memory: results are only materialized when explicitly requested, for instance with a collect() call.

nrow(flights)
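To materialize a result explicitly, pipe it into collect(); a minimal sketch using the standard dplyr verbs count() and collect():

# collect() turns the lazy result into an ordinary tibble held in memory
flights |>
  count(Year) |>
  collect()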

For printing, only the first few rows of the result are fetched.

flights
flights |>
  count(Year)

Complex queries can be executed on the remote data. Note how only the relevant columns are fetched and the 2024 data isn't even touched, as it's not needed for the result.

out <-
  flights |>
  mutate(InFlightDelay = ArrDelay - DepDelay) |>
  summarize(
    .by = c(Year, Month),
    MeanInFlightDelay = mean(InFlightDelay, na.rm = TRUE),
    MedianInFlightDelay = median(InFlightDelay, na.rm = TRUE),
  ) |>
  filter(Year < 2024)

out |>
  explain()

out |>
  print() |>
  system.time()

Over 10 million rows analyzed in about 10 seconds over the internet: not bad. Of course, working with Parquet, CSV, or JSON files downloaded locally is possible as well.
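The same reader works on local files; a minimal sketch, assuming a previously downloaded copy at "flights.parquet" (a hypothetical path):

# Query a local Parquet file instead of the remote URLs
local_flights <- read_parquet_duckdb("flights.parquet")

local_flights |>
  count(Year)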

For full compatibility with dplyr, na.rm = FALSE is the default in the aggregation functions:

flights |>
  summarize(mean(ArrDelay - DepDelay))
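With na.rm = TRUE, the missing delays are skipped, as in the summaries above; a minimal sketch:

flights |>
  summarize(mean(ArrDelay - DepDelay, na.rm = TRUE))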

Further reading

Getting help

If you encounter a clear bug, please file an issue with a minimal reproducible example on GitHub. For questions and other discussion, please use forum.posit.co.

Code of conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.


