run_benchmark    R Documentation
Run a Benchmark across a range of parameters
Usage:

run_benchmark(
  bm,
  ...,
  params = get_default_parameters(bm, ...),
  n_iter = 1,
  dry_run = FALSE,
  profiling = FALSE,
  read_only = FALSE,
  run_id = NULL,
  run_name = NULL,
  run_reason = NULL
)
Arguments:

bm: A Benchmark object to run.

...: Optional benchmark parameters to run across.

params: Parameter combinations to run; one benchmark is run per row.
    Defaults to get_default_parameters(bm, ...).

n_iter: Integer number of iterations to replicate each benchmark.
    Default is 1.

dry_run: Logical: just return the R source code that would be run in
    a subprocess? Default is FALSE, meaning the benchmarks are run.

profiling: Logical: collect profiling information? Default is FALSE.

read_only: Logical: if TRUE, only attempt to read existing benchmark
    result files; any benchmark whose results cannot be found is not
    run. Default is FALSE.

run_id: Unique ID for the run. Default is NULL.

run_name: Name for the run. If not specified, a default name is used.

run_reason: Low-cardinality reason for the run, e.g. "commit" or "test".
Value:

A BenchmarkResults object whose results attribute is a list of length
nrow(params), each element of which is a BenchmarkResult object. For a
simpler view of the results, call as.data.frame() on it.
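Examples:

A minimal usage sketch. It assumes the package providing run_benchmark()
is installed and that `my_benchmark` is a Benchmark object it defines;
both names are placeholders, not part of this documentation.

```r
# `my_benchmark` is a hypothetical Benchmark object; substitute one
# defined by the package you are using.

# Preview the R source that would be run in a subprocess,
# without executing anything.
src <- run_benchmark(my_benchmark, dry_run = TRUE)

# Run each parameter combination three times, tagging the run.
results <- run_benchmark(
  my_benchmark,
  n_iter = 3,
  run_reason = "test"
)

# `results` is a BenchmarkResults object; flatten it to a
# data frame for a simpler view.
df <- as.data.frame(results)
```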