RebeccaGroh/seqbtests: Sequential Bayesian Comparison of ML Algorithm Performances

This package implements Bayesian tests for comparing the performance of machine learning algorithms. Performance values for the algorithms under comparison can be generated within the package by a user-supplied function and fed directly into the tests. For this purpose, sequential Bayesian tests are provided, in which the number of performance values does not have to be fixed in advance. Instead, after each replication the test checks whether one algorithm can be declared better with 95% probability. Once this threshold is reached, the generation of further replications stops early, which can yield substantial time savings. The tests work on either a single data set or multiple data sets. In addition, some standard hypothesis tests are provided.
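To illustrate the idea behind the sequential procedure, here is a minimal R sketch of such a stopping rule. The function name and arguments are purely illustrative and are not the package's actual API; the posterior probability is computed under a simple normal model with vague priors, which is only one of several Bayesian models such tests can use.

```r
# Illustrative sketch only -- not the seqbtests API. After each new
# replication, compute the posterior probability that algorithm A
# outperforms B and stop once it exceeds 0.95 in either direction.
sequential_compare <- function(get_score_a, get_score_b,
                               prob_threshold = 0.95, max_reps = 100) {
  diffs <- numeric(0)
  p_a_better <- NA
  for (r in seq_len(max_reps)) {
    # One new replication: evaluate both algorithms and store the difference.
    diffs <- c(diffs, get_score_a() - get_score_b())
    if (r >= 2) {
      # Under a normal model with vague priors, P(mean difference > 0)
      # reduces to a t-distribution tail probability.
      t_stat <- mean(diffs) / (sd(diffs) / sqrt(r))
      p_a_better <- pt(t_stat, df = r - 1)
      if (p_a_better > prob_threshold || p_a_better < 1 - prob_threshold) {
        return(list(reps = r, prob_a_better = p_a_better))
      }
    }
  }
  list(reps = max_reps, prob_a_better = p_a_better)
}
```

The early stop is what saves time: if one algorithm is clearly better, the loop terminates after a few replications instead of always running the full budget.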

Getting started

Package details

Maintainer:
License: GPL-3
Version: 0.1.0
Package repository: View on GitHub
Installation

Install the latest version of this package by entering the following in R:
install.packages("remotes")
remotes::install_github("RebeccaGroh/seqbtests")
RebeccaGroh/seqbtests documentation built on Nov. 17, 2021, 8:50 a.m.