
# benchmarkVis - Benchmark Visualizations in R


benchmarkVis is an R package for visualizing benchmark results in different ways. It works with standard CSV, JSON and RDS files and can also be combined with several R benchmark packages like `microbenchmark`, `rbenchmark` or `mlr` through integrated wrappers. Thanks to the universal input table structure, it is also possible to integrate results from `batchtools` or frameworks outside the R language, such as Python's scikit-learn.

## Getting Started

### Description

Benchmarking is a good way to compare the performance of different algorithms, and the same evaluation procedures are often reused. But even though different benchmarks usually contain similar information, their structure can differ significantly. This increases the effort needed to visualize and analyze them: you have to repeat the same preparation steps over and over again with only minor changes. This is where the benchmarkVis package comes into play. It converts various formats into a single default data table which can then be visualized in multiple ways.

### Compatible data table

| problem | problem.parameter | algorithm | algorithm.parameter | replication | replication.parameter | measure.* | list.* |
|---|---|---|---|---|---|---|---|
| character | list | character | list | character | list | numeric | numeric vector |
| mandatory | optional | mandatory | optional | optional | optional | optional | optional |

As you can see, each column has a fixed name and data type. Some of the columns are optional while others are mandatory. The table can contain any number of measure and list columns. It is important that at least one column of type measure or list is present and that these column names start with `"measure."` / `"list."`.
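
As an illustration, here is a minimal sketch of a compatible table built by hand. The problems, algorithms and measure values are invented, and the use of `data.table` is an assumption; any table with these column names and types should follow the same pattern:

```r
library(data.table)

# Hand-built example table (invented values): the mandatory columns
# "problem" and "algorithm" plus one measure column "measure.mmce"
my.table = data.table(
  problem = c("iris", "iris"),
  algorithm = c("rpart", "randomForest"),
  measure.mmce = c(0.08, 0.05)
)
```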

#### Table components

To get the components of your input data table you can use the following methods:

```r
getMeasures(data.table)
getLists(data.table)
getMainColumns(data.table)
getParameterColumns(data.table)
getParameters(data.table, parameter.column)
```

The main columns always consist of `problem` and `algorithm` and can additionally contain `replication`.
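
Applied to the hand-built table from above, the getters could be used like this (a sketch, assuming they return the names of the matching columns):

```r
# Assuming these helpers return the matching column names
getMeasures(my.table)     # should include "measure.mmce"
getMainColumns(my.table)  # should include "problem" and "algorithm"
```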

#### Algorithm tuning

One special case occurs if you tune your algorithm by changing its parameters over multiple iterations. In this case you need to add the numeric field `iteration` to the `algorithm.parameter` list of the corresponding algorithm. It is important that no iteration value occurs more than once for the same combination of problem, algorithm and replication.
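
A sketch of what such tuning rows might look like, reusing the hypothetical `data.table` construction from above (parameter names and values are invented):

```r
# Two tuning iterations of the same algorithm on the same problem,
# distinguished by the numeric "iteration" field in algorithm.parameter
tuning.table = data.table(
  problem = c("iris", "iris"),
  algorithm = c("rpart", "rpart"),
  algorithm.parameter = list(
    list(iteration = 1, maxdepth = 2),
    list(iteration = 2, maxdepth = 5)
  ),
  measure.mmce = c(0.10, 0.06)
)
```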

To see all tuning combinations in your data table, just execute:

```r
getTunings(data.table)
```

## Quick Start

In this example we use one of the provided wrappers (in this case the wrapper for `microbenchmark`) to create the input data, then draw a bar plot and a list line chart.

Create the input data:

```r
library(benchmarkVis)
library(microbenchmark)

# Benchmark two equivalent ways of computing square roots
x = runif(100)
benchmark = microbenchmark(sqrt(x), x ^ 0.5)

# Convert the microbenchmark result into the benchmarkVis table structure
table = useMicrobenchmarkWrapper(benchmark)
```

See a list of all visualizations usable with the input data:

```r
getValidPlots(table)
```

Create plots:

```r
# Bar plot over the "measure.mean" column
createBarPlot(table, "measure.mean")
# Line chart over the "list.values" column
createListLinePlot(table, "list.values", "mean", TRUE)
```

## Shiny Application

The package functionality can also be accessed through a Shiny app, which you can start with:

```r
runShinyApp()
```

## Next steps

For more complex examples take a look at the Example Use Cases.

If you want to use your own data, you can import CSV, JSON and RDS files:

CSV:

```r
table = csvImport("PATH.TO.CSV.FILE")
```

JSON:

```r
table = jsonImport("PATH.TO.JSON.FILE")
```

RDS:

```r
table = rdsImport("PATH.TO.RDS.FILE")
```
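
For instance, a hypothetical round trip could save the table from the Quick Start example as an RDS file and read it back in (the file name is invented):

```r
# Save the Quick Start table and re-import it (invented file name)
saveRDS(table, "my_benchmark.rds")
table = rdsImport("my_benchmark.rds")
```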

Check out our tutorial in the Wiki for detailed information.


