knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "tools/README-"
)
set.seed(0)

comparer


The goal of comparer is to make it easy to compare the results of different code chunks that are trying to do the same thing. The R package microbenchmark is great for comparing the speed of code, but it provides no way to compare the output of the code chunks to see which is more accurate.

Installation

You can install comparer from GitHub with:

# install.packages("devtools")
# devtools::install_github("CollinErickson/comparer")

mbc

One of the two main functions of this package is mbc, for "model benchmark compare." It is designed to be similar to the package microbenchmark, allowing for fast comparisons, except that it includes the output/accuracy of the evaluated code instead of just the timing.

Suppose you want to see how the mean and median of a sample of 100 randomly generated data points from an exponential distribution compare. Then, as demonstrated below, you can use the function mbc with the functions mean and median, and with input=rexp(100). The value of input is stored as x, so mean(x) finds the mean of that data. mbc outputs the run times of each expression, and then the results from the five trials, where five is the default setting for times. The run times aren't useful here because they are all fast; for more precise timing (<0.01 seconds), you should use microbenchmark. The trials all have the same output since there is no randomness: the same data is used for each trial. The "Output summary" shows that the mean is near 1, while the median is near 0.6.

## basic example code
library(comparer)
mbc(mean(x), median(x), input=rexp(100))

To have the data regenerated for each trial, use the inputi argument to set a variable that the expressions use. The arguments mean(x) and median(x) are captured as expressions, and rexp(100) is stored as x by default. You can see that the values are now different for each trial.

## Regenerate the data each time
mbc(mean(x), median(x), inputi=rexp(100))
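
As noted above, the times argument controls the number of trials and defaults to five. The call below is a sketch of the same comparison run with 30 trials, to get a better picture of the variability.

## Increase the number of trials with the times argument
mbc(mean(x), median(x), inputi=rexp(100), times=30)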

The variable name, or multiple variables, can be set in inputi by using braces {}. In the example below, values are set for a and b, which can then be used by the expressions being compared.

mbc(mean(a+b), mean(a-b), inputi={a=rexp(100);b=runif(100)})

ffexp

The other main function of the package is ffexp, an abbreviation for full-factorial experiment. It will run a function using all possible combinations of input parameters given. It is useful for running experiments that take a long time to complete.

The first arguments given to ffexp$new should give the possible values for each input parameter. In the example below, a can be 1, 2, or 3, and b can be "a", "b", or "c". Then eval_func should be a function that can operate on these parameters. For example, using eval_func = paste will paste together the value of a with the value of b.

f1 <- ffexp$new(
  a=1:3,
  b=c("a","b","c"),
  eval_func=paste
)

After creating the ffexp object, we can call f1$run_all to run eval_func on every combination of a and b.

f1$run_all()

Now to see the results in a clean format, look at f1$outcleandf.

f1$outcleandf
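
eval_func does not have to be a built-in function like paste. The sketch below is illustrative: the parameter names n and rate and the function body are made up for this example, and it assumes each parameter is passed to eval_func as a named argument, so arguments of your own function should match the parameter names.

## Sketch: a custom eval_func run on every (n, rate) combination
f2 <- ffexp$new(
  n = c(10, 100, 1000),
  rate = c(1, 2),
  eval_func = function(n, rate) {mean(rexp(n, rate=rate))}
)
f2$run_all()
f2$outcleandf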

hype: Hyperparameter Optimization

hype uses Bayesian optimization to find the best parameters/inputs for a function that is slow to evaluate. (If the function can be evaluated quickly, then you can use standard optimization methods.) A common use case is hyperparameter tuning: when fitting a model that has multiple hyperparameters, you want to find the best values for those hyperparameters, but you can only evaluate a small number of settings since each one is slow.
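
The sketch below is only a hypothetical illustration of the use case, not a demonstration of hype's interface: the loess_error function is invented here as a stand-in for an expensive model fit with one tuning parameter (the loess span). A function like this, where each call is slow, is the kind of thing you would hand to hype so it can search for the best input with only a few evaluations.

## Hypothetical stand-in for an expensive evaluation function
d <- data.frame(x = runif(200))
d$y <- sin(2*pi*d$x) + rnorm(200, sd=0.2)
loess_error <- function(span) {
  # Fit a loess smoother with the given span and return its mean squared residual
  fit <- loess(y ~ x, data=d, span=span)
  mean(residuals(fit)^2)
}
loess_error(0.3)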


