
# Benchmark Suite: bartMachine vs. randomForest

This benchmark suite runs repeated K-fold cross-validation and compares out-of-sample performance between `bartMachine` and `randomForest` across datasets drawn from standard R dataset packages (e.g., `datasets`, `MASS`, `mlbench`, `ISLR`).
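
For orientation, here is a minimal sketch of one repeat of the K-fold loop on a single regression dataset (`Boston` from `MASS`), comparing out-of-fold RMSE. This is illustrative only; the suite's actual loop lives in `run_benchmark_suite.R` and differs in details.

```r
## Minimal sketch: one repeat of 5-fold CV comparing bartMachine and
## randomForest on Boston (regression). Illustrative only; the real
## suite handles many datasets, classification metrics, and repeats.
library(MASS)          # Boston housing data
library(bartMachine)
library(randomForest)

set.seed(1)
X <- Boston[, setdiff(names(Boston), "medv")]
y <- Boston$medv

k     <- 5
folds <- sample(rep(seq_len(k), length.out = nrow(Boston)))
rmse  <- function(truth, pred) sqrt(mean((truth - pred)^2))

per_fold <- t(sapply(seq_len(k), function(fold) {
  test <- folds == fold
  bart <- bartMachine(X[!test, ], y[!test], verbose = FALSE)
  rf   <- randomForest(X[!test, ], y[!test])
  c(bartMachine  = rmse(y[test], predict(bart, X[test, ])),
    randomForest = rmse(y[test], predict(rf, X[test, ])))
}))
colMeans(per_fold)  # mean out-of-sample RMSE per model
```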

## Quick start

1. Install the package and the optional dataset packages:

   ```r
   install.packages(c("bartMachine", "randomForest", "mlbench", "ISLR", "pROC"))
   ```

2. Run the suite from the package root:

   ```sh
   Rscript inst/benchmarks/run_benchmark_suite.R --folds=5 --repeats=3
   ```

3. Outputs are written to `inst/benchmarks/results/` (a loading sketch follows this list):
   - `benchmark_folds.csv`: per-fold metrics
   - `benchmark_summary.csv`: mean and standard deviation by dataset/model
   - `benchmark_skipped.csv`: datasets that were skipped, and why
   - `benchmark_results.rds`: all results plus the run configuration
   - `sessionInfo.txt`: session metadata
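
Once a run finishes, the per-fold file can be post-processed in R. A sketch, assuming `benchmark_folds.csv` has `dataset`, `model`, and `rmse` columns (the exact layout is defined by the suite, so check the file header first):

```r
## Hypothetical post-processing of the per-fold metrics; the column
## names below (dataset, model, rmse) are assumptions about the CSV.
per_fold <- read.csv("inst/benchmarks/results/benchmark_folds.csv")
aggregate(rmse ~ dataset + model, data = per_fold,
          FUN = function(v) c(mean = mean(v), sd = sd(v)))
```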

## Notes

### Example filters

```sh
Rscript inst/benchmarks/run_benchmark_suite.R --packages=datasets,MASS --folds=3
Rscript inst/benchmarks/run_benchmark_suite.R --datasets=Boston,BostonHousing
Rscript inst/benchmarks/run_benchmark_suite.R --skip-tags=large --repeats=2
```
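
For reference, `--key=value` flags like these can be parsed with base R alone. A sketch, assuming this flag style; the actual parsing in `run_benchmark_suite.R` may differ:

```r
## Hypothetical flag parsing with base R; run_benchmark_suite.R's real
## parser may differ in flag names and defaults.
args <- commandArgs(trailingOnly = TRUE)
parse_flag <- function(args, key, default = NULL) {
  hit <- grep(paste0("^--", key, "="), args, value = TRUE)
  if (length(hit) == 0) return(default)
  sub(paste0("^--", key, "="), "", hit[1])
}
folds    <- as.integer(parse_flag(args, "folds", "5"))
packages <- strsplit(parse_flag(args, "packages", "datasets"), ",")[[1]]
```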

