Description
This function attempts to replicate the Cascade Forest using xgboost. It trains Complete-Random Tree Forests arranged in a directed acyclic graph, like Neural Networks, but only for simple graphs (i.e. the output data of the previous layer is used to train the next layer each time). You can specify your learning objective using objective and the metric to check for using eval_metric. You can plug custom objectives instead of the objectives provided by xgboost. As with any uncalibrated machine learning method, this method suffers from uncalibrated outputs; therefore, the usage of scale-dependent metrics is discouraged (please use scale-invariant metrics, such as Accuracy, AUC, R-squared, or Spearman correlation).
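Since objective also accepts a function, a custom objective can follow xgboost's usual gradient/hessian convention. A minimal sketch follows (the logistic objective below is the standard xgboost custom-objective pattern; that CascadeForest accepts it is assumed from the Description, not verified):

library(xgboost)

# Minimal custom objective sketch following xgboost's convention:
# return the first- and second-order gradients of the loss.
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))  # sigmoid of the raw margins
  grad <- preds - labels          # first-order gradient
  hess <- preds * (1 - preds)     # second-order gradient
  list(grad = grad, hess = hess)
}

# Hypothetical usage: CascadeForest(..., objective = logregobj, ...)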
Usage

CascadeForest(training_data, validation_data, training_labels,
  validation_labels, folds, boosting = FALSE, nthread = 1, cascade_lr = 1,
  training_start = NULL, validation_start = NULL, cascade_forests = rep(4, 5),
  cascade_trees = 500, cascade_rf = 2,
  cascade_seeds = 1:length(cascade_forests), objective = "reg:linear",
  eval_metric = Laurae::df_rmse, multi_class = FALSE, early_stopping = 2,
  maximize = FALSE, verbose = TRUE, low_memory = FALSE,
  essentials = FALSE, garbage = FALSE, work_dir = NULL,
  fail_safe = 65536)
Arguments

training_data
Type: data.table. The training data. Columns are added to it during training.

validation_data
Type: data.table. The validation data used to check metric performance.

training_labels
Type: numeric vector. The training labels.

validation_labels
Type: numeric vector. The validation labels.

folds
Type: list. The folds, as a list, used for cross-validation.

boosting
Type: logical. Whether to perform boosting during training. Boosting may converge faster, but may also overfit faster, and therefore needs to be controlled via cascade_lr. Defaults to FALSE.

nthread
Type: numeric. The number of threads used for multithreading; 1 means single-threaded (only one core is used). More threads may mean faster training if the memory overhead is not too large. Defaults to 1.

cascade_lr
Type: numeric vector or numeric. The shrinkage applied to each tree, per layer, to avoid overfitting. You may specify a vector to use a different learning rate per layer (for instance, c(1, 0.5, 0.1)). Defaults to 1.

training_start
Type: numeric vector. The initial training prediction labels. Set to NULL when unused. Defaults to NULL.

validation_start
Type: numeric vector. The initial validation prediction labels. Set to NULL when unused. Defaults to NULL.

cascade_forests
Type: numeric vector (mandatory). The number of forest models per layer of the Cascade Forest architecture. Inputting 0 means the latest number is reused forever, until training stops after convergence via early_stopping. Defaults to rep(4, 5).

cascade_trees
Type: numeric vector or numeric. The number of trees per forest model, per layer. You may specify a vector to use a different number of trees per layer. Defaults to 500.

cascade_rf
Type: numeric vector or numeric. The number of Random Forest models per layer (the remaining forests per layer are Complete-Random Tree Forests). You may specify a vector to use a different number of Random Forests per layer. Defaults to 2.

cascade_seeds
Type: numeric vector or numeric. Random seed(s) for reproducibility, per layer. Do not set it to a value which is identical throughout the architecture, otherwise you will train on the same features over and over! When a single value is given as the seed, it is incremented by 1 each time the architecture advances by one layer. Defaults to 1:length(cascade_forests).

objective
Type: character or function. The learning objective to optimize: either an xgboost objective name, or a custom objective function. Defaults to "reg:linear".

eval_metric
Type: function. The function which evaluates the predictions during training (see the sketch after this argument list). Defaults to Laurae::df_rmse.

multi_class
Type: numeric. The number of classes, used internally to enable multiclass-specific routines when a multiclass objective (such as "multi:softprob") is used. Defaults to FALSE.

early_stopping
Type: numeric. Defines how many architecture layers without improvement are required before stopping early (note the off-by-one: a value of 2 means training stops after 3 failures to improve). 0 means stopping instantly at the first failure to improve; -1 means no early stopping. Defaults to 2.

maximize
Type: logical. Whether to maximize (TRUE) or minimize (FALSE) the evaluation metric. Defaults to FALSE.

verbose
Type: logical. Whether to print the training evaluation. Defaults to TRUE.

low_memory
Type: logical. Whether to perform the data.table transformations in place to lower memory usage. Defaults to FALSE.

essentials
Type: logical. Whether to store intermediary predictions. Defaults to FALSE.

garbage
Type: logical. Whether to perform garbage collection regularly. Defaults to FALSE.

work_dir
Type: character, allowing concatenation with another character string (e.g. "dev/tools/save_in_this_folder/" = add a slash, or "dev/tools/save_here/prefix_" = don't add a slash). The working directory in which to store models. If you provide a working directory, models are saved inside that directory (and any other models under the same names get wiped). This severely lowers memory usage, as models are no longer kept in memory. Defaults to NULL.

fail_safe
Type: numeric. The maximum number of layers to train, as a safeguard against infinite training (see cascade_forests). Defaults to 65536.
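Regarding eval_metric and the scale-invariance advice from the Description: assuming the metric function receives the predictions and the true labels and returns a single numeric value (the presumed shape of Laurae::df_rmse, not verified here), a hypothetical scale-invariant metric could be sketched as:

# Hypothetical custom metric; assumes eval_metric is called as
# metric(predictions, labels) and must return one numeric value.
df_spearman <- function(preds, labels) {
  cor(preds, labels, method = "spearman")  # scale-invariant by construction
}

# Hypothetical usage: eval_metric = df_spearman, with maximize = TRUE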
Details

For implementation details of Cascade Forest / Complete-Random Tree Forest / Multi-Grained Scanning / Deep Forest, check this write-up by Laurae: https://github.com/Microsoft/LightGBM/issues/331#issuecomment-283942390

Cascade Forests aim at what Neural Networks do: structuring the model in multiple layers. Cascade Forests exploit the stacking ensemble method to train these layers. Using the randomness of Random Forests and Complete-Random Tree Forests, a Cascade Forest aims to outperform simple Convolutional Neural Networks (CNNs). The computational cost, however, is massive and should be taken into account before training a large (and potentially unlimited) Cascade Forest. Due to their nature and to stacking ensemble properties, Cascade Forests have a hard time overfitting.
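As a rough illustration of the layering idea (a sketch using xgboost's Random Forest mode, not the actual internals of CascadeForest):

library(xgboost)

# One cascade layer: train several randomized forests, then pass the
# original features plus each forest's predictions to the next layer.
train_layer <- function(features, labels, n_forests = 4, n_trees = 10) {
  dtrain <- xgb.DMatrix(as.matrix(features), label = labels)
  preds <- lapply(seq_len(n_forests), function(i) {
    set.seed(i)  # one seed per forest, echoing cascade_seeds
    model <- xgb.train(params = list(objective = "binary:logistic",
                                     num_parallel_tree = n_trees,
                                     subsample = 0.632,
                                     colsample_bylevel = 0.7,
                                     eta = 1),
                       data = dtrain, nrounds = 1)
    predict(model, dtrain)
  })
  pred_mat <- do.call(cbind, preds)
  colnames(pred_mat) <- paste0("layer_pred_", seq_len(n_forests))
  cbind(as.data.frame(features), pred_mat)  # input for the next layer
}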
Putting a Cascade Forest on top of a Multi-Grained Scanning model results in a gcForest model.
Laurae recommends using xgboost or LightGBM on top of gcForest or Cascade Forest. See the rationale here: https://github.com/Microsoft/LightGBM/issues/331#issuecomment-284689795.
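A sketch of that recommendation, under the assumption (hypothetical, reusing the variable names from the Examples below) that the Cascade Forest's out-of-fold predictions can be extracted into a cascade_preds table:

library(xgboost)

# cascade_preds is hypothetical: the Cascade Forest's out-of-fold
# predictions, used as extra features for a final boosted model.
final_train <- cbind(agaricus_data_train, cascade_preds)
dtrain <- xgb.DMatrix(as.matrix(final_train), label = agaricus_label_train)
booster <- xgb.train(params = list(objective = "binary:logistic", eta = 0.1),
                     data = dtrain, nrounds = 100)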
Value

A data.table based on target.
Examples

## Not run:
# Load libraries
library(data.table)
library(Matrix)
library(xgboost)
# Create data
data(agaricus.train, package = "lightgbm")
data(agaricus.test, package = "lightgbm")
agaricus_data_train <- data.table(as.matrix(agaricus.train$data))
agaricus_data_test <- data.table(as.matrix(agaricus.test$data))
agaricus_label_train <- agaricus.train$label
agaricus_label_test <- agaricus.test$label
folds <- Laurae::kfold(agaricus_label_train, 5)
# Train a model (binary classification)
model <- CascadeForest(training_data = agaricus_data_train, # Training data
validation_data = agaricus_data_test, # Validation data
training_labels = agaricus_label_train, # Training labels
validation_labels = agaricus_label_test, # Validation labels
folds = folds, # Folds for cross-validation
boosting = FALSE, # Do not touch this unless you are expert
nthread = 1, # Change this to use more threads
cascade_lr = 1, # Do not touch this unless you are expert
training_start = NULL, # Do not touch this unless you are expert
validation_start = NULL, # Do not touch this unless you are expert
cascade_forests = rep(4, 5), # Number of forest models
cascade_trees = 10, # Number of trees per forest
cascade_rf = 2, # Number of Random Forest in models
cascade_seeds = 1:5, # Seed per layer
objective = "binary:logistic",
eval_metric = Laurae::df_logloss,
multi_class = 2, # Modify this for multiclass problems
early_stopping = 2, # stop after 2 bad combos of forests
maximize = FALSE, # not a maximization task
verbose = TRUE, # print information during training
low_memory = FALSE)
# Attempt to perform fake multiclass problem
agaricus_label_train[1:100] <- 2
# Train a model (multiclass classification)
model <- CascadeForest(training_data = agaricus_data_train, # Training data
validation_data = agaricus_data_test, # Validation data
training_labels = agaricus_label_train, # Training labels
validation_labels = agaricus_label_test, # Validation labels
folds = folds, # Folds for cross-validation
boosting = FALSE, # Do not touch this unless you are expert
nthread = 1, # Change this to use more threads
cascade_lr = 1, # Do not touch this unless you are expert
training_start = NULL, # Do not touch this unless you are expert
validation_start = NULL, # Do not touch this unless you are expert
cascade_forests = rep(4, 5), # Number of forest models
cascade_trees = 10, # Number of trees per forest
cascade_rf = 2, # Number of Random Forest in models
cascade_seeds = 1:5, # Seed per layer
objective = "multi:softprob",
eval_metric = Laurae::df_logloss,
multi_class = 3, # Modify this for multiclass problems
early_stopping = 2, # stop after 2 bad combos of forests
maximize = FALSE, # not a maximization task
verbose = TRUE, # print information during training
low_memory = FALSE)
## End(Not run)