The current implementation of the bot samples from the following learners:
From the old bot:
- xgboost
- svm
- kernel knn
- random forest
- rpart
- glmnet
New learners:
- Multinomial Logit (from mxnet?)
- Cubist
- fully connected neural networks (mxnet?) up to depth 3 or 4
Worthy candidates (from Kaggle etc.):
- ExtraTrees (we can enable this in ranger)
- LightGBM / CatBoost (probably too similar to xgboost)
- [LibFM (Factorization Machines)](https://github.com/dselivanov/rsparse)
- [LiquidSVM](https://cran.r-project.org/web/packages/liquidSVM/index.html)
- AdaBoost / [fastAdaboost](https://cran.r-project.org/web/packages/fastAdaboost/fastAdaboost.pdf)
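As a minimal sketch of the ExtraTrees point above: ranger exposes an `extratrees` split rule, so no separate package would be needed (the formula, data set, and hyperparameter values here are illustrative, not the bot's actual configuration):

```r
library(ranger)

# Random-forest learner with ExtraTrees-style fully random split points.
mod = ranger(Species ~ ., data = iris,
             splitrule = "extratrees",  # enables ExtraTrees splitting
             num.random.splits = 1,     # classic ExtraTrees draws 1 random split
             num.trees = 100)
```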
Should gblinear be sampled with equal probability?
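A minimal sketch of what equal-probability learner sampling could look like; the learner IDs below are illustrative mlr names, not the bot's actual list:

```r
# Candidate learners; sample() without a prob argument draws each
# with equal probability.
learners = c("classif.xgboost", "classif.svm", "classif.ranger",
             "classif.rpart", "classif.glmnet")

draw_learner = function() sample(learners, 1)

set.seed(1)
chosen = draw_learner()
```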
We currently require an OML task.id for the bot to run:
```r
bot = OMLRandomBot$new(11)
bot$run()
```
```r
# Benchmark
library(mlr)
library(batchtools)
library(R6)
library(callr)
library(data.table)
library(ParamHelpers)

# Learners
library(rpart)
library(glmnet)
library(e1071)
library(ranger)
library(xgboost)
```