Evaluates the provided expressions in a loop and reports the mean evaluation time. This is inferior to microbenchmark and other benchmarking tools in many ways, except that it has no dependencies or suggested packages, which helps keep package build and test times down. Used in vignettes.
Usage:

    bench_mark(..., times = 1000L, deparse.width = 40)
Arguments:

    ...            expressions to benchmark; captured unevaluated
    times          how many times to loop; defaults to 1000L
    deparse.width  how many characters of each deparsed expression to
                   use for labels
Details:

gc() is run before each expression is evaluated. Expressions are evaluated in the order provided. The overhead of the loop itself is estimated by separately timing a reference loop, and subtracted from the reported timings.
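The gc-then-loop scheme described above can be sketched in plain R. `simple_bench` and everything inside it are illustrative assumptions, not the package's actual implementation:

    # Illustrative sketch only.  Captures one expression unevaluated,
    # runs gc() to reduce garbage-collection noise, times a loop that
    # evaluates the expression `times` times, and subtracts the time of
    # an empty reference loop as an overhead estimate.
    simple_bench <- function(expr, times = 1000L) {
      expr <- substitute(expr)
      env <- parent.frame()
      gc()
      t_expr <- system.time(
        for (i in seq_len(times)) eval(expr, env)
      )[["elapsed"]]
      t_overhead <- system.time(
        for (i in seq_len(times)) NULL
      )[["elapsed"]]
      (t_expr - t_overhead) / times   # mean seconds per evaluation
    }

    simple_bench(runif(1000), times = 100L)

Subtracting the empty-loop time matters most for fast expressions, where the `for`/`eval` machinery can be a large fraction of each iteration.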
Unfortunately, because this computes the average of all iterations, it is very susceptible to outliers in small sample runs, particularly with fast-running code. For that reason the default number of iterations is one thousand.
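To see why the mean is so outlier-sensitive, compare it with the median on a small hypothetical timing vector containing one slow iteration (the numbers below are made up, e.g. a single GC pause among otherwise fast runs):

    # 99 fast iterations plus one slow outlier, in seconds
    timings <- c(rep(1e-6, 99), 5e-3)

    mean(timings)    # pulled far upward by the single outlier
    median(timings)  # essentially unaffected

With many iterations the outlier's weight in the mean shrinks, which is the rationale for the large default `times`.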
Value:

NULL, invisibly. Timings are reported as a side effect via screen output.
Examples:

    bench_mark(runif(1000), Sys.sleep(0.001), times = 10)