# R/AutoCatBoostRegression.R

# AutoQuant is a package for quickly creating high-quality visualizations under a common and easy API.
# Copyright (C) <year>  <name of author>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

#' @title AutoCatBoostRegression
#'
#' @description AutoCatBoostRegression is an automated modeling function that runs a variety of steps. First, it runs a random grid tune over N models and identifies the best one (a default model is always included in that set). Once the winning model is identified and built, several other outputs are generated: validation data with predictions, evaluation plot, evaluation boxplot, evaluation metrics, variable importance, partial dependence calibration plots, partial dependence calibration box plots, and the column names used in model fitting. You can install the catboost package via devtools: devtools::install_github('catboost/catboost', subdir = 'catboost/R-package')
#'
#' @author Adrian Antico
#' @family Automated Supervised Learning - Regression
#'
#' @param OutputSelection You can select what type of output you want returned. Choose from c('Importances', 'EvalPlots', 'EvalMetrics', 'Score_TrainData')
#' @param ReturnShap Defaults to TRUE. Set to FALSE to skip generating Shapley values.
#' @param data This is your data set for training and testing your model
#' @param TrainOnFull Set to TRUE to train on full data and skip over evaluation steps
#' @param ValidationData This is your holdout data set used during modeling to refine your hyperparameters. CatBoost uses both the training and validation data in the training process, so out-of-sample performance should be evaluated with TestData.
#' @param TestData This is your final holdout data set. Since CatBoost uses both the training and validation data in the training process, this is the data set to use for evaluating out-of-sample performance.
#' @param TargetColumnName Either supply the target column name OR the column number where the target is located (but not mixed types).
#' @param FeatureColNames Either supply the feature column names OR the column numbers where the features are located (but not mixed types)
#' @param PrimaryDateColumn Supply a date or datetime column for catboost to utilize time as its basis for handling categorical features, instead of random shuffling
#' @param WeightsColumnName Supply a column name for your weights column. Leave NULL otherwise
#' @param IDcols A vector of column names or column numbers to keep in your data but not include in the modeling.
#' @param EncodeMethod 'binary', 'm_estimator', 'credibility', 'woe', 'target_encoding', 'poly_encode', 'backward_difference', 'helmert'
#' @param TransformNumericColumns Set to NULL to do nothing; otherwise supply the column names of numeric variables you want transformed
#' @param Methods Choose from 'YeoJohnson', 'BoxCox', 'Asinh', 'Log', 'LogPlus1', 'Sqrt', 'Asin', or 'Logit'. If more than one is selected, the one with the best normalization Pearson statistic will be used (see the illustration at the end of the examples). Identity is automatically included in the comparison.
#' @param task_type Set to 'GPU' to utilize your GPU for training. Default is 'CPU'.
#' @param NumGPUs Set to 1, 2, 3, etc.
#' @param eval_metric Select from 'RMSE', 'MAE', 'MAPE', 'R2', 'Poisson', 'MedianAbsoluteError', 'SMAPE', 'MSLE', 'NumErrors', 'FairLoss', 'Tweedie', 'Huber', 'LogLinQuantile', 'Quantile', 'Lq', 'Expectile', 'MultiRMSE'
#' @param eval_metric_value Used with the specified eval_metric. See https://catboost.ai/docs/concepts/loss-functions-regression.html
#' @param loss_function Used in model training for model fitting. 'MAPE', 'MAE', 'RMSE', 'Poisson', 'Tweedie', 'Huber', 'LogLinQuantile', 'Quantile', 'Lq', 'Expectile', 'MultiRMSE'
#' @param loss_function_value Used with the specified loss function if an associated value is required. 'Tweedie', 'Huber', 'LogLinQuantile', 'Quantile' 'Lq', 'Expectile'. See https://catboost.ai/docs/concepts/loss-functions-regression.html
#' @param grid_eval_metric Choose from 'mae', 'mape', 'rmse', 'r2'. Case sensitive
#' @param model_path A character string of your path file to where you want your output saved
#' @param metadata_path A character string of your path file to where you want your model evaluation output saved. If left NULL, all output will be saved to model_path.
#' @param SaveInfoToPDF Set to TRUE to save modeling information to PDF. If neither model_path nor metadata_path is defined, output will be saved to the working directory
#' @param ModelID A character string to name your model and output
#' @param NumOfParDepPlots Tell the function the number of partial dependence calibration plots you want to create. Calibration boxplots will only be created for numerical features (not dummy variables)
#' @param ReturnModelObjects Set to TRUE to output all modeling objects (e.g. plots and evaluation metrics)
#' @param SaveModelObjects Set to TRUE to save all modeling objects to file in model_path (and metadata_path if supplied)
#' @param PassInGrid Defaults to NULL. Pass in a single row of grid from a previous output as a data.table (they are collected as data.tables)
#' @param GridTune Set to TRUE to run a grid tuning procedure. Set a number in MaxModelsInGrid to tell the procedure how many models you want to test.
#' @param BaselineComparison Set to either 'default' or 'best'. Default is to compare each successive model build to the baseline model using max trees (from function args). Best makes the comparison to the current best model.
#' @param MaxModelsInGrid Number of models to test from grid options
#' @param MaxRunMinutes Maximum number of minutes to let this run
#' @param MaxRunsWithoutNewWinner Number of models built without a new winner before the grid tuning stops
#' @param MetricPeriods Number of iterations between CatBoost metric evaluations
#' @param langevin Set to TRUE to enable stochastic gradient Langevin boosting (CPU only)
#' @param diffusion_temperature Diffusion temperature for Langevin boosting. Defaults to 10000
#' @param Trees Standard + Grid Tuning. Bandit grid partitioned. The maximum number of trees you want in your models
#' @param Depth Standard + Grid Tuning. Bandit grid partitioned. Number, or vector of depths to test. For running grid tuning, a NULL value supplied will mean these values are tested seq(4L, 16L, 2L)
#' @param L2_Leaf_Reg Standard + Grid Tuning. Random testing. Supply a single value for non-grid tuning cases. Otherwise, supply a vector for the L2_Leaf_Reg values to test. For running grid tuning, a NULL value supplied will mean these values are tested seq(1.0, 10.0, 1.0)
#' @param RandomStrength Standard + Grid Tuning. A multiplier of randomness added to split evaluations. Default value is 1 which adds no randomness.
#' @param BorderCount Standard + Grid Tuning. Number of splits for numerical features. Catboost defaults to 254 for CPU and 128 for GPU
#' @param LearningRate Standard + Grid Tuning. Default varies if RMSE, MultiClass, or Logloss is utilized. Otherwise default is 0.03. Bandit grid partitioned. Supply a single value for non-grid tuning cases. Otherwise, supply a vector for the LearningRate values to test. For running grid tuning, a NULL value supplied will mean these values are tested c(0.01,0.02,0.03,0.04)
#' @param RSM CPU only. Standard + Grid Tuning. If GPU is set, this is turned off. Random testing. Supply a single value for non-grid tuning cases. Otherwise, supply a vector for the RSM values to test. For running grid tuning, a NULL value supplied will mean these values are tested c(0.80, 0.85, 0.90, 0.95, 1.0)
#' @param BootStrapType Standard + Grid Tuning. NULL value to default to catboost default (Bayesian for GPU and MVS for CPU). Random testing. Supply a single value for non-grid tuning cases. Otherwise, supply a vector for the BootStrapType values to test. For running grid tuning, a NULL value supplied will mean these values are tested c('Bayesian', 'Bernoulli', 'Poisson', 'MVS', 'No')
#' @param GrowPolicy Standard + Grid Tuning. Catboost default of SymmetricTree. Random testing. Default 'SymmetricTree', character, or vector for GrowPolicy to test. For grid tuning, supply a vector of values. For running grid tuning, a NULL value supplied will mean these values are tested c('SymmetricTree', 'Depthwise', 'Lossguide')
#' @param model_size_reg Defaults to 0.5. Set to 0 to allow for bigger models. This is for models with high cardinality categorical features. Values greater than 0 will shrink the model; quality will decline but models won't be huge.
#' @param feature_border_type Defaults to 'GreedyLogSum'. Other options include: Median, Uniform, UniformAndQuantiles, MaxLogSum, MinEntropy
#' @param sampling_unit Default is Group. Other option is Object. If GPU is selected, this will be turned off unless the loss_function is YetiRankPairWise
#' @param subsample Default is NULL. Catboost will turn this into 0.66 for BootStrapTypes Poisson and Bernoulli. 0.80 for MVS. Doesn't apply to others.
#' @param score_function Default is Cosine. CPU options are Cosine and L2. GPU options are Cosine, L2, NewtonL2, and NewtonCosine (not available for Lossguide)
#' @param min_data_in_leaf Default is 1. Cannot be used when GrowPolicy is SymmetricTree
#' @param DebugMode Set to TRUE to get a printout of which step the function is on. FALSE, otherwise
#' @examples
#' \dontrun{
#' # Create some dummy correlated data
#' data <- AutoQuant::FakeDataGenerator(
#'   Correlation = 0.85,
#'   N = 10000,
#'   ID = 2,
#'   ZIP = 0,
#'   AddDate = FALSE,
#'   Classification = FALSE,
#'   MultiClass = FALSE)
#'
#' # Run function
#' TestModel <- AutoQuant::AutoCatBoostRegression(
#'
#'   # GPU or CPU and the number of available GPUs
#'   TrainOnFull = FALSE,
#'   task_type = 'GPU',
#'   NumGPUs = 1,
#'   DebugMode = FALSE,
#'
#'   # Metadata args
#'   OutputSelection = c('Importances', 'EvalPlots', 'EvalMetrics', 'Score_TrainData'),
#'   ModelID = 'Test_Model_1',
#'   model_path = normalizePath('./'),
#'   metadata_path = normalizePath('./'),
#'   SaveModelObjects = FALSE,
#'   SaveInfoToPDF = FALSE,
#'   ReturnModelObjects = TRUE,
#'
#'   # Data args
#'   data = data,
#'   ValidationData = NULL,
#'   TestData = NULL,
#'   TargetColumnName = 'Adrian',
#'   FeatureColNames = names(data)[!names(data) %in%
#'     c('IDcol_1', 'IDcol_2','Adrian')],
#'   PrimaryDateColumn = NULL,
#'   WeightsColumnName = NULL,
#'   IDcols = c('IDcol_1','IDcol_2'),
#'   EncodeMethod = 'credibility',
#'   TransformNumericColumns = 'Adrian',
#'   Methods = c('BoxCox', 'Asinh', 'Asin', 'Log',
#'     'LogPlus1', 'Sqrt', 'Logit'),
#'
#'   # Model evaluation
#'   eval_metric = 'RMSE',
#'   eval_metric_value = 1.5,
#'   loss_function = 'RMSE',
#'   loss_function_value = 1.5,
#'   MetricPeriods = 10L,
#'   NumOfParDepPlots = ncol(data)-1L-2L,
#'
#'   # Grid tuning args
#'   PassInGrid = NULL,
#'   GridTune = FALSE,
#'   MaxModelsInGrid = 30L,
#'   MaxRunsWithoutNewWinner = 20L,
#'   MaxRunMinutes = 60*60,
#'   BaselineComparison = 'default',
#'
#'   # ML args
#'   langevin = FALSE,
#'   diffusion_temperature = 10000,
#'   Trees = 1000,
#'   Depth = 9,
#'   L2_Leaf_Reg = NULL,
#'   RandomStrength = 1,
#'   BorderCount = 128,
#'   LearningRate = NULL,
#'   RSM = 1,
#'   BootStrapType = NULL,
#'   GrowPolicy = 'SymmetricTree',
#'   model_size_reg = 0.5,
#'   feature_border_type = 'GreedyLogSum',
#'   sampling_unit = 'Object',
#'   subsample = NULL,
#'   score_function = 'Cosine',
#'   min_data_in_leaf = 1)
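#'
#' # Minimal scoring sketch (illustration only; assumes NewData is a
#' # data.table carrying the same feature columns and encodings used in
#' # training, which bypasses this function's internal encoding pipeline)
#' ScorePool <- catboost::catboost.load_pool(NewData)
#' Preds <- catboost::catboost.predict(
#'   model = TestModel$Model,
#'   pool = ScorePool,
#'   prediction_type = 'RawFormulaVal')
#'
#' # Illustration of the Methods selection idea (not the package's internal
#' # code): the transformation whose sorted values correlate best with
#' # normal quantiles, i.e. the best normalization Pearson statistic, wins
#' x <- stats::rlnorm(1000)
#' Candidates <- list(
#'   Identity = function(z) z,
#'   Log = function(z) log(z),
#'   Sqrt = function(z) sqrt(z),
#'   Asinh = function(z) asinh(z))
#' PearsonStat <- sapply(Candidates, function(f) {
#'   y <- sort(f(x))
#'   stats::cor(y, stats::qnorm(stats::ppoints(length(y))))
#' })
#' names(which.max(PearsonStat))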
#' }
#' @return Saves to file and returned in list: VariableImportance.csv, Model, ValidationData.csv, EvaluationPlot.png, EvaluationBoxPlot.png, EvaluationMetrics.csv, ParDepPlots.R (a named list of features with partial dependence calibration plots), ParDepBoxPlots.R, GridCollect, catboostgrid, and a transformation details file.
#' @export
AutoCatBoostRegression <- function(OutputSelection = c('Importances', 'EvalPlots', 'EvalMetrics', 'Score_TrainData'),
                                   ReturnShap = TRUE,
                                   data = NULL,
                                   ValidationData = NULL,
                                   TestData = NULL,
                                   TargetColumnName = NULL,
                                   FeatureColNames = NULL,
                                   PrimaryDateColumn = NULL,
                                   WeightsColumnName = NULL,
                                   IDcols = NULL,
                                   EncodeMethod = 'credibility',
                                   TransformNumericColumns = NULL,
                                   Methods = c('BoxCox', 'Asinh', 'Log', 'LogPlus1', 'Sqrt', 'Asin', 'Logit'),
                                   TrainOnFull = FALSE,
                                   task_type = 'GPU',
                                   NumGPUs = 1,
                                   DebugMode = FALSE,
                                   ReturnModelObjects = TRUE,
                                   SaveModelObjects = FALSE,
                                   ModelID = 'FirstModel',
                                   model_path = NULL,
                                   metadata_path = NULL,
                                   SaveInfoToPDF = FALSE,
                                   eval_metric = 'RMSE',
                                   eval_metric_value = 1.5,
                                   loss_function = 'RMSE',
                                   loss_function_value = 1.5,
                                   grid_eval_metric = 'r2',
                                   NumOfParDepPlots = 0L,
                                   PassInGrid = NULL,
                                   GridTune = FALSE,
                                   MaxModelsInGrid = 30L,
                                   MaxRunsWithoutNewWinner = 20L,
                                   MaxRunMinutes = 24L*60L,
                                   BaselineComparison = 'default',
                                   MetricPeriods = 10L,
                                   Trees = 500L,
                                   Depth = 9,
                                   L2_Leaf_Reg = 3.0,
                                   RandomStrength = 1,
                                   BorderCount = 254,
                                   LearningRate = NULL,
                                   RSM = 1,
                                   BootStrapType = NULL,
                                   GrowPolicy = 'SymmetricTree',
                                   langevin = FALSE,
                                   diffusion_temperature = 10000,
                                   model_size_reg = 0.5,
                                   feature_border_type = 'GreedyLogSum',
                                   sampling_unit = 'Object',
                                   subsample = NULL,
                                   score_function = 'Cosine',
                                   min_data_in_leaf = 1) {
  # Load catboost ----
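  # catboost is not distributed on CRAN, so it is loaded at run time here
  # rather than declared as a hard dependency (see the install note in the
  # function description)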
  loadNamespace(package = 'catboost')

  # Args Checking (ensure args are set consistently) ----
  if(DebugMode) print('Running CatBoostArgsCheck()')
  Output <- CatBoostArgsCheck(ModelType=if(loss_function == 'MultiRMSE') 'vector' else 'regression', data.=data, FeatureColNames.=FeatureColNames, PrimaryDateColumn.=PrimaryDateColumn, GridTune.=GridTune, model_path.=model_path, metadata_path.=metadata_path, ClassWeights.=NULL, LossFunction.=NULL, loss_function.=loss_function, loss_function_value.=loss_function_value, eval_metric.=eval_metric, eval_metric_value.=eval_metric_value, task_type.=task_type, NumGPUs.=NumGPUs, MaxModelsInGrid.=MaxModelsInGrid, NumOfParDepPlots.=NumOfParDepPlots,ReturnModelObjects.=ReturnModelObjects, SaveModelObjects.=SaveModelObjects, PassInGrid.=PassInGrid, MetricPeriods.=MetricPeriods, langevin.=langevin, diffusion_temperature.=diffusion_temperature, Trees.=Trees, Depth.=Depth, LearningRate.=LearningRate, L2_Leaf_Reg.=L2_Leaf_Reg,RandomStrength.=RandomStrength, BorderCount.=BorderCount, RSM.=RSM, BootStrapType.=BootStrapType, GrowPolicy.=GrowPolicy, model_size_reg.=model_size_reg, feature_border_type.=feature_border_type, sampling_unit.=sampling_unit, subsample.=subsample, score_function.=score_function, min_data_in_leaf.=min_data_in_leaf)
  score_function <- Output$score_function
  BootStrapType <- Output$BootStrapType
  sampling_unit <- Output$sampling_unit
  LossFunction <- Output$LossFunction
  GrowPolicy <- Output$GrowPolicy
  EvalMetric <- Output$EvalMetric
  task_type <- Output$task_type
  GridTune <- Output$GridTune
  HasTime <- Output$HasTime
  NumGPUs <- Output$NumGPUs
  Depth <- Output$Depth
  RSM <- Output$RSM; rm(Output)

  # Grab all official parameters and their evaluated arguments ----
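  # as.list(environment()) snapshots every argument by name; the data sets
  # are dropped next so the saved ArgsList stays lightweight for reuse as
  # scoring/retraining metadata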
  ArgsList <- c(as.list(environment()))
  ArgsList[['data']] <- NULL
  ArgsList[['ValidationData']] <- NULL
  ArgsList[['TestData']] <- NULL
  ArgsList[['Algo']] <- "CatBoost"
  ArgsList[['TargetType']] <- "Regression"
  ArgsList[['PredictionColumnName']] <- "Predict"
  if(SaveModelObjects) {
    if(!is.null(metadata_path)) {
      save(ArgsList, file = file.path(metadata_path, paste0(ModelID, "_ArgsList.Rdata")))
    } else if(!is.null(model_path)) {
      save(ArgsList, file = file.path(model_path, paste0(ModelID, "_ArgsList.Rdata")))
    }
  }

  # Data Prep (model data prep, dummify, create sets) ----
  if(DebugMode) print('Running CatBoostDataPrep()')
  Output <- CatBoostDataPrep(OutputSelection.=OutputSelection, EncodeMethod. = EncodeMethod, ModelType='regression', data.=data, ValidationData.=ValidationData, TestData.=TestData, TargetColumnName.=TargetColumnName, FeatureColNames.=FeatureColNames, PrimaryDateColumn.=PrimaryDateColumn, WeightsColumnName.=WeightsColumnName, IDcols.=IDcols,TrainOnFull.=TrainOnFull, SaveModelObjects.=SaveModelObjects, TransformNumericColumns.=TransformNumericColumns, Methods.=Methods, model_path.=metadata_path, ModelID.=ModelID, LossFunction.=LossFunction, EvalMetric.=EvalMetric)
  TransformationResults <- Output$TransformationResults; Output$TransformationResults <- NULL
  FactorLevelsList <- Output$FactorLevelsList; Output$FactorLevelsList <- NULL
  FinalTestTarget <- Output$FinalTestTarget; Output$FinalTestTarget <- NULL
  FeatureColNames <- Output$FeatureColNames; Output$FeatureColNames <- NULL
  UseBestModel <- Output$UseBestModel; Output$UseBestModel <- NULL
  TrainTarget <- Output$TrainTarget; Output$TrainTarget <- NULL
  TrainTargetMerge <- Output$TrainTargetMerge; Output$TrainTargetMerge <- NULL
  CatFeatures <- Output$CatFeatures; Output$CatFeatures <- NULL
  if(length(CatFeatures) == 0) CatFeatures <- NULL
  TestTarget <- Output$TestTarget; Output$TestTarget <- NULL
  TrainMerge <- Output$TrainMerge; Output$TrainMerge <- NULL
  dataTrain <- Output$dataTrain; Output$dataTrain <- NULL
  TestMerge <- Output$TestMerge; Output$TestMerge <- NULL
  dataTest <- Output$dataTest; Output$dataTest <- NULL
  TestData <- Output$TestData; Output$TestData <- NULL
  Names <- Output$Names; Output$Names <- NULL; rm(Output)

  # Create catboost data objects ----
  if(DebugMode) print('Running CatBoostDataConversion()')
  Output <- CatBoostDataConversion(CatFeatures.=CatFeatures, dataTrain.=dataTrain, dataTest.=dataTest, TestData.=TestData, TrainTarget.=TrainTarget, TestTarget.=TestTarget, FinalTestTarget.=FinalTestTarget, TrainOnFull.=TrainOnFull, Weights.=WeightsColumnName)
  FinalTestPool <- Output$FinalTestPool; Output$FinalTestPool <- NULL
  TrainPool <- Output$TrainPool; Output$TrainPool <- NULL
  TestPool <- Output$TestPool; Output$TestPool <- NULL; rm(Output)

  # Bring into existence ----
  ExperimentalGrid <- NULL; BestGrid <- NULL

  # Grid tuning ----
  if(GridTune) {
    Output <- CatBoostGridTuner(ModelType='regression', TrainOnFull.=TrainOnFull, HasTime=HasTime, BaselineComparison.=BaselineComparison, TargetColumnName.=TargetColumnName, DebugMode.=DebugMode, task_type.=task_type, Trees.=Trees, Depth.=Depth, LearningRate.=LearningRate, L2_Leaf_Reg.=L2_Leaf_Reg, BorderCount.=BorderCount, RandomStrength.=RandomStrength, RSM.=RSM, BootStrapType.=BootStrapType, GrowPolicy.=GrowPolicy, NumGPUs=NumGPUs, LossFunction=LossFunction, EvalMetric=EvalMetric, MetricPeriods=MetricPeriods, ClassWeights=NULL, CostMatrixWeights=NULL, data=data, TrainPool.=TrainPool, TestPool.=TestPool, FinalTestTarget.=FinalTestTarget, TestTarget.=TestTarget, FinalTestPool.=FinalTestPool, TestData.=TestData, TestMerge.=TestMerge, TargetLevels.=NULL, MaxRunsWithoutNewWinner=MaxRunsWithoutNewWinner, MaxModelsInGrid=MaxModelsInGrid, MaxRunMinutes=MaxRunMinutes, SaveModelObjects=SaveModelObjects, metadata_path=metadata_path, model_path=model_path, ModelID=ModelID, grid_eval_metric.=grid_eval_metric)
    ExperimentalGrid <- Output$ExperimentalGrid
    BestGrid <- Output$BestGrid
  }

  # Final Parameters (put parameters in list to pass into catboost) ----
  if(DebugMode) print('Running CatBoostFinalParams()')
  base_params <- CatBoostFinalParams(ModelType='regression', UseBestModel.=UseBestModel, ClassWeights.=NULL, PassInGrid. = PassInGrid, BestGrid.=BestGrid, ExperimentalGrid. = ExperimentalGrid, GridTune.=GridTune, TrainOnFull.=TrainOnFull, MetricPeriods.=MetricPeriods, LossFunction.=LossFunction, EvalMetric.=EvalMetric, score_function.=score_function, HasTime.=HasTime, task_type.=task_type, NumGPUs.=NumGPUs, NTrees.=Trees, Depth.=Depth, LearningRate.=LearningRate, L2_Leaf_Reg.=L2_Leaf_Reg, langevin.=langevin, diffusion_temperature.=diffusion_temperature, sampling_unit.=sampling_unit, RandomStrength.=RandomStrength, BorderCount.=BorderCount, RSM.=RSM, GrowPolicy.=GrowPolicy, BootStrapType.=BootStrapType, model_size_reg.=model_size_reg, feature_border_type.=feature_border_type, subsample.=subsample, min_data_in_leaf.=min_data_in_leaf)

  # Regression Train Final Model ----
  if(DebugMode) print('Running catboost.train')
  if(!TrainOnFull && length(TestPool) > 0L) {
    model <- catboost::catboost.train(learn_pool = TrainPool, test_pool = TestPool, params = base_params)
  } else {
    model <- catboost::catboost.train(learn_pool = TrainPool, params = base_params)
  }

  # Regression Save Model ----
  if(DebugMode) print('Running catboost.save_model')
  if(SaveModelObjects) catboost::catboost.save_model(model = model, model_path = file.path(model_path, ModelID))

  # TrainData + ValidationData Scoring + Shap ----
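  # Predictions are generated for the training pool and, when a validation
  # pool exists, stacked beneath the validation predictions so downstream
  # evaluation covers both partitions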
  if('score_traindata' %chin% tolower(OutputSelection) && !TrainOnFull) {
    predict <- data.table::as.data.table(catboost::catboost.predict(model = model, pool = TrainPool, prediction_type = 'RawFormulaVal', thread_count = parallel::detectCores()))
    if(!is.null(TestPool)) {
      predict_validate <- data.table::as.data.table(catboost::catboost.predict(model = model, pool = TestPool, prediction_type = 'RawFormulaVal', thread_count = parallel::detectCores()))
      predict <- data.table::rbindlist(list(predict, predict_validate))
      if(ncol(predict) > 1L) {
        data.table::setnames(predict, names(predict), paste0('Predict.', names(predict)))
      } else {
        data.table::setnames(predict, names(predict), 'Predict')
      }
      rm(predict_validate)
    }
    if(!is.null(TestPool)) {
      TrainData <- CatBoostValidationData(ModelType='regression', TrainOnFull.=TRUE, TestDataCheck=FALSE, FinalTestTarget.=FinalTestTarget, TestTarget.=TestTarget, TrainTarget.=TrainTarget, TrainMerge.=TrainMerge, TestMerge.=TestMerge, dataTest.=NULL, data.=dataTrain, predict.=predict, TargetColumnName.=TargetColumnName, SaveModelObjects. = SaveModelObjects, metadata_path.=metadata_path, model_path.=metadata_path, ModelID.=ModelID, LossFunction.=NULL, TransformNumericColumns.=TransformNumericColumns, GridTune.=GridTune, TransformationResults.=TransformationResults, TargetLevels.=NULL)
    } else {
      TrainData <- CatBoostValidationData(ModelType='regression', TrainOnFull.=TRUE, TestDataCheck=FALSE, FinalTestTarget.=FinalTestTarget, TestTarget.=TestTarget, TrainTarget.=TrainTargetMerge, TrainMerge.=TrainMerge, TestMerge.=TestMerge, dataTest.=NULL, data.=dataTrain, predict.=predict, TargetColumnName.=TargetColumnName, SaveModelObjects. = SaveModelObjects, metadata_path.=metadata_path, model_path.=metadata_path, ModelID.=ModelID, LossFunction.=NULL, TransformNumericColumns.=TransformNumericColumns, GridTune.=GridTune, TransformationResults.=TransformationResults, TargetLevels.=NULL)
    }

    if(ncol(predict) > 1L) {
      if(!'Predict.V1' %chin% names(TrainData)) data.table::setnames(TrainData, c('V1','V2'), paste0('Predict.', c('V1','V2')))
    } else {
      if(!'Predict' %chin% names(TrainData)) data.table::setnames(TrainData, 'V1', 'Predict')
    }
  } else {
    TrainData <- NULL
  }

  # Regression Score Final Test Data ----
  if(DebugMode) print('Running catboost.predict')
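  # Pool priority for final scoring: the TestData holdout when supplied,
  # otherwise the full training pool when TrainOnFull, otherwise the
  # validation pool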
  predict <- catboost::catboost.predict(model = model, pool = if(!is.null(TestData)) FinalTestPool else if(TrainOnFull) TrainPool else TestPool, prediction_type = 'RawFormulaVal', thread_count = parallel::detectCores())

  # Regression Validation Data (generate validation data, back transform, save to file) ----
  if(DebugMode) print('Running CatBoostValidationData()')
  ValidationData <- CatBoostValidationData(ModelType='regression', TrainOnFull.=TrainOnFull, TestDataCheck=!is.null(TestData), FinalTestTarget.=FinalTestTarget, TestTarget.=TestTarget, TrainTarget.=TrainTarget, TestMerge.=TestMerge, dataTest.=dataTest, data.=data, predict.=predict, TargetColumnName.=TargetColumnName, SaveModelObjects. = SaveModelObjects, metadata_path.=metadata_path, model_path.=metadata_path, ModelID.=ModelID, LossFunction.=LossFunction, TransformNumericColumns. = TransformNumericColumns, GridTune. = GridTune, TransformationResults. = TransformationResults, TargetLevels.=NULL)

  # Gather importance and shap values ----
  if(DebugMode) print('Running CatBoostImportances()')
  if(any(c('importances','importance') %chin% tolower(OutputSelection))) {
    Output <- tryCatch({CatBoostImportances(ModelType='regression', ReturnShap = ReturnShap, TargetColumnName.=TargetColumnName, TrainPool.=TrainPool, TestPool.=TestPool, FinalTestPool.=FinalTestPool, TrainData.=TrainData, ValidationData.=ValidationData, SaveModelObjects.=SaveModelObjects, model.=model, ModelID.=ModelID, model_path.=model_path, metadata_path.=metadata_path, GrowPolicy.=GrowPolicy)}, error = function(x) list(Interaction = NULL, VariableImportance = NULL, ShapValues = NULL))
    Interaction <- Output$Interaction; Output$Interaction <- NULL
    VariableImportance <- Output$VariableImportance; Output$VariableImportance <- NULL
    ShapValues <- Output$ShapValues; Output$ShapValues <- NULL; rm(Output)
  }

  # Regression Metrics ----
  if(DebugMode) print('Running RegressionMetrics()')
  EvalMetricsList <- list()
  if('evalmetrics' %chin% tolower(OutputSelection)) {
    if('score_traindata' %chin% tolower(OutputSelection) && !TrainOnFull) {
      EvalMetricsList[['TrainData']] <- RegressionMetrics(SaveModelObjects.=FALSE, data.=data, ValidationData.=TrainData, TrainOnFull.=TrainOnFull, LossFunction.=LossFunction, EvalMetric.=EvalMetric, TargetColumnName.=TargetColumnName, ModelID.=ModelID, model_path.=model_path, metadata_path.=metadata_path)
      if(SaveModelObjects) {
        if(!is.null(metadata_path)) {
          data.table::fwrite(EvalMetricsList[['TrainData']], file = file.path(metadata_path, paste0(ModelID, "_Train_EvaluationMetrics.csv")))
        } else if(!is.null(model_path)) {
          data.table::fwrite(EvalMetricsList[['TrainData']], file = file.path(model_path, paste0(ModelID, "_Train_EvaluationMetrics.csv")))
        }
      }
    }
    EvalMetricsList[['TestData']] <- RegressionMetrics(SaveModelObjects.=FALSE, data.=data, ValidationData.=ValidationData, TrainOnFull.=TrainOnFull, LossFunction.=LossFunction, EvalMetric.=EvalMetric, TargetColumnName.=TargetColumnName, ModelID.=ModelID, model_path.=model_path, metadata_path.=metadata_path)
    if(SaveModelObjects) {
      if(!is.null(metadata_path)) {
        data.table::fwrite(EvalMetricsList[['TestData']], file = file.path(metadata_path, paste0(ModelID, "_Test_EvaluationMetrics.csv")))
      } else if(!is.null(model_path)) {
        data.table::fwrite(EvalMetricsList[['TestData']], file = file.path(model_path, paste0(ModelID, "_Test_EvaluationMetrics.csv")))
      }
    }
  }

  # Regression Plots ----
  if(DebugMode) print('Running ML_EvalPlots()')
  PlotList <- list()
  if('evalplots' %chin% tolower(OutputSelection)) {
    if('score_traindata' %chin% tolower(OutputSelection) && !TrainOnFull) {
      Output <- ML_EvalPlots(ModelType='regression', DataType = 'Train', TrainOnFull.=TrainOnFull, LossFunction.=LossFunction, EvalMetric.=EvalMetric, EvaluationMetrics.=EvalMetricsList, ValidationData.=TrainData, NumOfParDepPlots.=NumOfParDepPlots, VariableImportance.=VariableImportance, TargetColumnName.=TargetColumnName, FeatureColNames.=FeatureColNames, SaveModelObjects.=SaveModelObjects, ModelID.=ModelID, metadata_path.=metadata_path, model_path.=metadata_path, predict.=NULL, DateColumnName.=PrimaryDateColumn)
      PlotList[['Train_EvaluationPlot']] <- Output$EvaluationPlot; Output$EvaluationPlot <- NULL
      PlotList[['Train_EvaluationBoxPlot']] <- Output$EvaluationBoxPlot; Output$EvaluationBoxPlot <- NULL
      PlotList[['Train_ParDepPlots']] <- Output$ParDepPlots;  Output$ParDepPlots <- NULL
      PlotList[['Train_ParDepBoxPlots']] <- Output$ParDepBoxPlots; Output$ParDepBoxPlots <- NULL
      PlotList[['Train_ResidualsHistogram']] <- Output$ResidualsHistogram; Output$ResidualsHistogram <- NULL
      PlotList[['Train_ResidualTime']] <- Output$ResidualTime; Output$ResidualTime <- NULL
      PlotList[['Train_ScatterPlot']] <- Output$ScatterPlot; Output$ScatterPlot <- NULL
      PlotList[['Train_CopulaPlot']] <- Output$CopulaPlot; rm(Output)
      if(!is.null(VariableImportance$Train_Importance) && "plotly" %chin% installed.packages()) PlotList[['Train_VariableImportance']] <- plotly::ggplotly(VI_Plot(Type = 'catboost', VariableImportance$Train_Importance)) else if(!is.null(VariableImportance$Train_Importance)) PlotList[['Train_VariableImportance']] <- VI_Plot(Type = 'catboost', VariableImportance$Train_Importance)
      if(!is.null(VariableImportance$Validation_Importance) && "plotly" %chin% installed.packages()) PlotList[['Validation_VariableImportance']] <- plotly::ggplotly(VI_Plot(Type = 'catboost', VariableImportance$Validation_Importance)) else if(!is.null(VariableImportance$Validation_Importance)) PlotList[['Validation_VariableImportance']] <- VI_Plot(Type = 'catboost', VariableImportance$Validation_Importance)
    }
    Output <- ML_EvalPlots(ModelType='regression', DataType = 'Test', TrainOnFull.=TrainOnFull, LossFunction.=LossFunction, EvalMetric.=EvalMetric, EvaluationMetrics.=EvalMetricsList, ValidationData.=ValidationData, NumOfParDepPlots.=NumOfParDepPlots, VariableImportance.=VariableImportance, TargetColumnName.=TargetColumnName, FeatureColNames.=FeatureColNames, SaveModelObjects.=SaveModelObjects, ModelID.=ModelID, metadata_path.=metadata_path, model_path.=metadata_path, predict.=NULL, DateColumnName.=PrimaryDateColumn)
    PlotList[['Test_EvaluationPlot']] <- Output$EvaluationPlot; Output$EvaluationPlot <- NULL
    PlotList[['Test_EvaluationBoxPlot']] <- Output$EvaluationBoxPlot; Output$EvaluationBoxPlot <- NULL
    PlotList[['Test_ParDepPlots']] <- Output$ParDepPlots;  Output$ParDepPlots <- NULL
    PlotList[['Test_ParDepBoxPlots']] <- Output$ParDepBoxPlots; Output$ParDepBoxPlots <- NULL
    PlotList[['Test_ResidualsHistogram']] <- Output$ResidualsHistogram; Output$ResidualsHistogram <- NULL
    PlotList[['Test_ResidualTime']] <- Output$ResidualTime; Output$ResidualTime <- NULL
    PlotList[['Test_ScatterPlot']] <- Output$ScatterPlot; Output$ScatterPlot <- NULL
    PlotList[['Test_CopulaPlot']] <- Output$CopulaPlot; rm(Output)
    if(!is.null(VariableImportance[['Test_VariableImportance']]) && "plotly" %chin% installed.packages()) PlotList[['Test_VariableImportance']] <- plotly::ggplotly(VI_Plot(Type = 'catboost', VariableImportance[['Test_VariableImportance']])) else if(!is.null(VariableImportance[['Test_VariableImportance']])) PlotList[['Test_VariableImportance']] <- VI_Plot(Type = 'catboost', VariableImportance[['Test_VariableImportance']])
  }

  # Subset Transformation Object ----
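  # Keep only user-supplied columns in the returned transformation details:
  # drop the 'Predict' record, and the 'Target' record unless 'Target' is
  # the actual target column name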
  if(!is.null(TransformNumericColumns) && !((!is.null(LossFunction) && LossFunction == 'MultiRMSE') || (!is.null(EvalMetric) && EvalMetric == 'MultiRMSE'))) {
    if(TargetColumnName == 'Target') {
      TransformationResults <- TransformationResults[!(ColumnName %chin% c('Predict'))]
    } else {
      TransformationResults <- TransformationResults[!(ColumnName %chin% c('Predict', 'Target'))]
    }
  }

  # Remove external files if GridTune is TRUE ----
  if(DebugMode) print('Running CatBoostRemoveFiles()')
  CatBoostRemoveFiles(GridTune. = GridTune, model_path. = model_path)

  # Final Garbage Collection ----
  if(tolower(task_type) == 'gpu') gc()

  # Regression Return Model Objects ----
  if(DebugMode) print('Return Model Objects')
  if(ReturnModelObjects) {
    outputList <- list()
    outputList[["Model"]] <- model
    outputList[["TrainData"]] <- if(exists('ShapValues') && !is.null(ShapValues[['Train_Shap']])) ShapValues[['Train_Shap']] else if(exists('TrainData')) TrainData else NULL
    outputList[["TestData"]] <- if(exists('ShapValues') && !is.null(ShapValues[['Test_Shap']])) ShapValues[['Test_Shap']] else if(exists('ValidationData')) ValidationData else NULL
    outputList[["PlotList"]] <- if(exists('PlotList')) PlotList else NULL
    outputList[["EvaluationMetrics"]] <- if(exists('EvalMetricsList')) EvalMetricsList else NULL
    outputList[["VariableImportance"]] <- if(exists('VariableImportance')) VariableImportance else NULL
    outputList[["InteractionImportance"]] <- if(exists('Interaction')) Interaction else NULL
    outputList[["GridMetrics"]] <- if(exists('ExperimentalGrid') && !is.null(ExperimentalGrid)) ExperimentalGrid else NULL
    outputList[["ColNames"]] <- if(exists('Names')) Names else NULL
    outputList[["TransformationResults"]] <- if(exists('TransformationResults')) TransformationResults else NULL
    outputList[["FactorLevelsList"]] <- if(exists('FactorLevelsList')) FactorLevelsList else NULL
    outputList[["ArgsList"]] <- ArgsList
    return(outputList)
  }
}