This benchmark suite runs repeated K-fold cross-validation and compares
out-of-sample performance between bartMachine and randomForest across
datasets drawn from standard benchmark libraries (e.g., datasets, MASS,
mlbench, ISLR).
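The comparison the suite performs can be illustrated with a minimal sketch: K-fold cross-validation on one dataset (Boston from MASS), scoring bartMachine and randomForest by out-of-sample RMSE. This is an illustrative stand-in, not the suite's actual code, and it assumes both packages are installed.

```r
# Minimal sketch of one benchmark iteration (not the suite's internals):
# 5-fold CV comparing bartMachine and randomForest RMSE on Boston.
library(MASS)           # provides the Boston housing data
library(randomForest)
library(bartMachine)

set.seed(1)
data(Boston)
X <- Boston[, setdiff(names(Boston), "medv")]
y <- Boston$medv

k <- 5
folds <- sample(rep(seq_len(k), length.out = nrow(X)))

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

res <- t(sapply(seq_len(k), function(i) {
  test <- folds == i
  bm <- bartMachine(X[!test, ], y[!test], verbose = FALSE)
  rf <- randomForest(X[!test, ], y[!test])
  c(bart = rmse(y[test], predict(bm, X[test, ])),
    rf   = rmse(y[test], predict(rf, X[test, ])))
}))
colMeans(res)  # mean out-of-sample RMSE per model
```

The full suite repeats this procedure (`--repeats`) with fresh fold assignments and aggregates the per-fold metrics across datasets.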
install.packages(c("bartMachine", "randomForest", "mlbench", "ISLR", "pROC"))
Rscript inst/benchmarks/run_benchmark_suite.R --folds=5 --repeats=3
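The runner script presumably reads `--folds` and `--repeats` from the command line. A hypothetical sketch of such flag parsing with `commandArgs()` (the helper `parse_flag` is illustrative, not the suite's API):

```r
# Hypothetical flag parsing for the runner script; the suite's real
# argument handling may differ.
parse_flag <- function(args, name, default) {
  # Find "--<name>=<value>" and return <value> as an integer,
  # falling back to the default when the flag is absent.
  hit <- grep(paste0("^--", name, "="), args, value = TRUE)
  if (length(hit) == 0) return(default)
  as.integer(sub(".*=", "", hit[1]))
}

args    <- commandArgs(trailingOnly = TRUE)
folds   <- parse_flag(args, "folds", 10L)
repeats <- parse_flag(args, "repeats", 1L)
```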
Results are written to inst/benchmarks/results:

- benchmark_folds.csv: per-fold metrics
- benchmark_summary.csv: mean and standard deviation by dataset/model
- benchmark_skipped.csv: datasets that were skipped and why
- benchmark_results.rds: all results + config
- sessionInfo.txt: session metadata

JVM options must be set before loading bartMachine. You can do this via:

options(java.parameters = c("-Xmx20g", "--add-modules=jdk.incubator.vector", "-XX:+UseZGC"))

Use --list to view all dataset definitions and --dry-run to list the filtered selection without running. Large datasets can be excluded with --skip-tags=large. pROC is optional; if installed, AUC is computed for classification tasks.

Rscript inst/benchmarks/run_benchmark_suite.R --packages=datasets,MASS --folds=3
Rscript inst/benchmarks/run_benchmark_suite.R --datasets=Boston,BostonHousing
Rscript inst/benchmarks/run_benchmark_suite.R --skip-tags=large --repeats=2