benchmark: Benchmarking Different Accelerating Methods for Various Fixed-Point Iteration Problems

View source: R/benchmark.R

benchmark {AccelBenchmark}    R Documentation

Benchmarking Different Accelerating Methods for Various Fixed-Point Iteration Problems

Description

A benchmarking function used to test the performance of different accelerating methods on various fixed-point iteration problems.

Usage

benchmark(
  task = "poissmix",
  algorithm = c("raw"),
  ntimes = 100,
  control = list(tol = 1e-07, convtype = "parameter", maxiter = 2000),
  control.spec = list(),
  verbose = TRUE,
  sample_every_time = TRUE,
  seed = NULL,
  store_tasks = FALSE,
  ...
)

Arguments

task

A list, a function, or a character string specifying the problem to be benchmarked; see Details below.

algorithm

A character vector of algorithms to run in the benchmark: "raw" for the original algorithm in the task, "squarem" for SQUAREM, "daarem" for DAAREM, "qn" for quasi-Newton, "pem" for parabolic EM and "nes" for restarted Nesterov.

ntimes

An integer giving the number of repetitions of the task in the benchmark.

control

A list of control parameters shared by all the algorithms tried.

control.spec

A named list passing control parameters to specific algorithms. For example, list(squarem = listA, qn = listB) passes the parameters in listA to SQUAREM and those in listB to quasi-Newton; see the sketch after this argument list.

verbose

A logical value indicating whether to print progress information while the function is running.

sample_every_time

A logical value indicating whether a new task list should be created in every repetition. Only applicable when task is a function.

seed

An integer used as the random seed.

store_tasks

A logical value indicating whether to store all the task lists in the result.

...

Other arguments required by task if it is a function.
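
A minimal sketch of how control and control.spec interact, assuming the interface described above. The maxiter override inside listA is a hypothetical algorithm-specific setting used for illustration, not a documented SQUAREM option.

shared <- list(tol = 1e-07, convtype = "parameter", maxiter = 2000)
listA  <- list(maxiter = 5000)  # hypothetical override for SQUAREM only

benchmark(
  task = "poissmix",
  algorithm = c("raw", "squarem"),
  ntimes = 10,
  control = shared,                      # shared by every algorithm
  control.spec = list(squarem = listA)   # applied to SQUAREM only
)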

Details

The task argument above indicates the problem to be benchmarked. In the simplest case it is a list containing initfn, a parameter-initialization function; fixptfn, the fixed-point updating function; objfn, the objective function; and any other arguments required by fixptfn and objfn. Alternatively, task can be a function that takes parameters and returns such a list. Several character names are also registered for specific tasks: "poissmix" for the Poisson mixture; "mvt_mmf" for the multivariate t-distribution; "lasso" for LASSO logistic regression; "bvs" for variational Bayes variable selection; "tsne" for t-SNE on the COIL-20 data set; and "sinkhorn" for the Sinkhorn iteration in matrix balancing.
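
As a concrete illustration of the list form described above, the sketch below builds a toy task for the contraction x -> cos(x). The exact signatures of initfn, fixptfn and objfn are assumptions based on this description; see the registered tasks in R/benchmark.R for the authoritative interface.

mytask <- list(
  initfn  = function() runif(1),              # random starting value in (0, 1)
  fixptfn = function(par) cos(par),           # fixed-point map x -> cos(x)
  objfn   = function(par) (par - cos(par))^2  # squared residual as objective
)
benchmark(task = mytask, algorithm = c("raw", "squarem"), ntimes = 10)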

Value

A list of results of class "benchmark", with the following components.

result_table

A data frame containing summary information for each repetition.

all_results

A list combining the result lists from all methods over all repetitions.

all_tasks

A list containing the task list used in every repetition. If store_tasks is FALSE, this will be empty.

task

A string for task name.

Examples

## Not run: 
set.seed(54321)
benchmark("poissmix", c("raw", "squarem", "daarem", "pem", "qn", "nes"),  ntimes=100)

## End(Not run)
