run_benchmark: Run a Benchmark across a range of parameters

View source: R/run.R

run_benchmark    R Documentation

Run a Benchmark across a range of parameters

Description

Run a Benchmark across a range of parameters

Usage

run_benchmark(
  bm,
  ...,
  params = get_default_parameters(bm, ...),
  n_iter = 1,
  dry_run = FALSE,
  profiling = FALSE,
  read_only = FALSE,
  run_id = NULL,
  run_name = NULL,
  run_reason = NULL
)

Arguments

bm

Benchmark() object

...

Optional benchmark parameters to run across. Each named argument is a vector of values; the combinations to run are formed by expanding them.

params

data.frame of parameter combinations to run, one row per combination. By default, this is constructed by expanding the ... arguments, crossed with the declared parameter options in bm$setup and filtered by any restrictions defined in bm$valid_params().

n_iter

Integer number of iterations with which to replicate each benchmark. If n_iter is also supplied in params, the value in params takes precedence.

dry_run

Logical: if TRUE, return the R source code that would be run in a subprocess instead of running it. Default is FALSE, meaning the benchmarks will actually be run.

profiling

Logical: collect profiling information? If TRUE, the result data will contain a prof_file field, which you can read in with profvis::profvis(prof_input = file). Default is FALSE.

read_only

Logical: if TRUE, only attempt to read existing benchmark result files; benchmarks whose results cannot be found will not be run.

run_id

Unique ID for the run

run_name

Name for the run. If not specified, will use {run_reason}: {commit hash}

run_reason

Low-cardinality reason for the run, e.g. "commit" or "test"

Value

A BenchmarkResults object whose results attribute is a list of length nrow(params), each element of which is a BenchmarkResult object. For a simpler, tabular view of results, call as.data.frame() on it.
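
Examples

The following sketch shows a typical call. The benchmark object bm and the format parameter are illustrative placeholders, not part of arrowbench's documented API; the dry_run, profiling, and as.data.frame() usage follows the descriptions above.

## Not run: 
library(arrowbench)

## Run a Benchmark() object `bm` across the grid formed by the
## `...` arguments, with two iterations per combination
res <- run_benchmark(
  bm,
  format = c("parquet", "feather"),  # hypothetical parameter
  n_iter = 2,
  profiling = TRUE,
  run_reason = "test"
)

## Preview the R source code that would run in a subprocess,
## without running anything
run_benchmark(bm, dry_run = TRUE)

## Flatten the BenchmarkResults object to one row per result
df <- as.data.frame(res)

## Read profiling data back in via the prof_file field noted above
## profvis::profvis(prof_input = df$prof_file[[1]])

## End(Not run)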


ursa-labs/arrowbench documentation built on July 8, 2023, 11:36 a.m.