
# This file is auto-generated by h2o-3/h2o-bindings/bin/gen_R.py
# Copyright 2016 H2O.ai;  Apache License Version 2.0 (see LICENSE for details) 
#'
# -------------------------- Gradient Boosting Machine -------------------------- #
#'
#' Build gradient boosted classification or regression trees
#' 
#' Builds gradient boosted classification trees and gradient boosted regression trees on a parsed data set.
#' The default distribution function will guess the model type based on the response column type.
#' In order to run properly, the response column must be numeric for "gaussian" or an
#' enum for "bernoulli" or "multinomial".
#'
#' @param x (Optional) A vector containing the names or indices of the predictor variables to use in building the model.
#'        If x is missing, then all columns except y are used.
#' @param y The name or column index of the response variable in the data. 
#'        The response must be either a numeric or a categorical/factor variable. 
#'        If the response is numeric, then a regression model will be trained, otherwise it will train a classification model.
#' @param training_frame Id of the training data frame.
#' @param model_id Destination id for this model; auto-generated if not specified.
#' @param validation_frame Id of the validation data frame.
#' @param nfolds Number of folds for K-fold cross-validation (0 to disable or >= 2). Defaults to 0.
#' @param keep_cross_validation_models \code{Logical}. Whether to keep the cross-validation models. Defaults to TRUE.
#' @param keep_cross_validation_predictions \code{Logical}. Whether to keep the predictions of the cross-validation models. Defaults to FALSE.
#' @param keep_cross_validation_fold_assignment \code{Logical}. Whether to keep the cross-validation fold assignment. Defaults to FALSE.
#' @param score_each_iteration \code{Logical}. Whether to score during each iteration of model training. Defaults to FALSE.
#' @param score_tree_interval Score the model after every so many trees. Disabled if set to 0. Defaults to 0.
#' @param fold_assignment Cross-validation fold assignment scheme, if fold_column is not specified. The 'Stratified' option will
#'        stratify the folds based on the response variable, for classification problems. Must be one of: "AUTO",
#'        "Random", "Modulo", "Stratified". Defaults to AUTO.
#' @param fold_column Column with cross-validation fold index assignment per observation.
#' @param ignore_const_cols \code{Logical}. Ignore constant columns. Defaults to TRUE.
#' @param offset_column Offset column. This will be added to the combination of columns before applying the link function.
#' @param weights_column Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from
#'        the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative
#'        weights are not allowed. Note: Weights are per-row observation weights and do not increase the size of the
#'        data frame. This is typically the number of times a row is repeated, but non-integer values are supported as
#'        well. During training, rows with higher weights matter more, due to the larger loss function pre-factor. If
#'        you set weight = 0 for a row, the returned prediction frame at that row is zero and this is incorrect. To get
#'        an accurate prediction, remove all rows with weight == 0.
#' @param balance_classes \code{Logical}. Balance training data class counts via over/under-sampling (for imbalanced data). Defaults to
#'        FALSE.
#' @param class_sampling_factors Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will
#'        be automatically computed to obtain class balance during training. Requires balance_classes.
#' @param max_after_balance_size Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires
#'        balance_classes. Defaults to 5.0.
#' @param ntrees Number of trees. Defaults to 50.
#' @param max_depth Maximum tree depth (0 for unlimited). Defaults to 5.
#' @param min_rows Fewest allowed (weighted) observations in a leaf. Defaults to 10.
#' @param nbins For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best point.
#'        Defaults to 20.
#' @param nbins_top_level For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then
#'        decrease by a factor of two per level. Defaults to 1024.
#' @param nbins_cats For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher
#'        values can lead to more overfitting. Defaults to 1024.
#' @param r2_stopping r2_stopping is no longer supported and will be ignored if set - please use stopping_rounds, stopping_metric
#'        and stopping_tolerance instead. Previous versions of H2O would stop making trees when the R^2 metric equaled or
#'        exceeded this value. Defaults to 1.797693135e+308.
#' @param stopping_rounds Early stopping based on convergence of stopping_metric. Stop if the simple moving average of length k of the
#'        stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable). Defaults to 0.
#' @param stopping_metric Metric to use for early stopping (AUTO: logloss for classification, deviance for regression and anomaly_score
#'        for Isolation Forest). Note that custom and custom_increasing can only be used in GBM and DRF with the Python
#'        client. Must be one of: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR",
#'        "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing". Defaults to
#'        AUTO.
#' @param stopping_tolerance Relative tolerance for the metric-based stopping criterion (stop if the relative improvement is not at least this
#'        much). Defaults to 0.001.
#' @param max_runtime_secs Maximum allowed runtime in seconds for model training. Use 0 to disable. Defaults to 0.
#' @param seed Seed for random numbers (affects certain parts of the algo that are stochastic and those might or might not be enabled by default).
#'        Defaults to -1 (time-based random number).
#' @param build_tree_one_node \code{Logical}. Run on one node only; no network overhead but fewer CPUs used. Suitable for small datasets.
#'        Defaults to FALSE.
#' @param learn_rate Learning rate (from 0.0 to 1.0). Defaults to 0.1.
#' @param learn_rate_annealing Scale the learning rate by this factor after each tree (e.g., 0.99 or 0.999). Defaults to 1.
#' @param distribution Distribution function. Must be one of: "AUTO", "bernoulli", "quasibinomial", "multinomial", "gaussian",
#'        "poisson", "gamma", "tweedie", "laplace", "quantile", "huber", "custom". Defaults to AUTO.
#' @param quantile_alpha Desired quantile for Quantile regression, must be between 0 and 1. Defaults to 0.5.
#' @param tweedie_power Tweedie power for Tweedie regression, must be between 1 and 2. Defaults to 1.5.
#' @param huber_alpha Desired quantile for Huber/M-regression (threshold between quadratic and linear loss, must be between 0 and
#'        1). Defaults to 0.9.
#' @param checkpoint Model checkpoint to resume training with.
#' @param sample_rate Row sample rate per tree (from 0.0 to 1.0). Defaults to 1.
#' @param sample_rate_per_class A list of row sample rates per class (relative fraction for each class, from 0.0 to 1.0), for each tree.
#' @param col_sample_rate Column sample rate (from 0.0 to 1.0). Defaults to 1.
#' @param col_sample_rate_change_per_level Relative change of the column sampling rate for every level (must be > 0.0 and <= 2.0). Defaults to 1.
#' @param col_sample_rate_per_tree Column sample rate per tree (from 0.0 to 1.0). Defaults to 1.
#' @param min_split_improvement Minimum relative improvement in squared error reduction for a split to happen. Defaults to 1e-05.
#' @param histogram_type What type of histogram to use for finding optimal split points. Must be one of: "AUTO", "UniformAdaptive",
#'        "Random", "QuantilesGlobal", "RoundRobin", "UniformRobust". Defaults to AUTO.
#' @param max_abs_leafnode_pred Maximum absolute value of a leaf node prediction. Defaults to 1.797693135e+308.
#' @param pred_noise_bandwidth Bandwidth (sigma) of Gaussian multiplicative noise ~N(1,sigma) for tree node predictions. Defaults to 0.
#' @param categorical_encoding Encoding scheme for categorical features. Must be one of: "AUTO", "Enum", "OneHotInternal", "OneHotExplicit",
#'        "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited". Defaults to AUTO.
#' @param calibrate_model \code{Logical}. Use Platt Scaling (default) or Isotonic Regression to calculate calibrated class
#'        probabilities. Calibration can provide more accurate estimates of class probabilities. Defaults to FALSE.
#' @param calibration_frame Data for model calibration.
#' @param calibration_method Calibration method to use. Must be one of: "AUTO", "PlattScaling", "IsotonicRegression". Defaults to AUTO.
#' @param custom_metric_func Reference to a custom evaluation function; format: `language:keyName=funcName`.
#' @param custom_distribution_func Reference to a custom distribution; format: `language:keyName=funcName`.
#' @param export_checkpoints_dir Automatically export generated models to this directory.
#' @param in_training_checkpoints_dir Create checkpoints in the defined directory while the training process is still running. In case of a
#'        cluster shutdown, these checkpoints can be used to restart training.
#' @param in_training_checkpoints_tree_interval Checkpoint the model after every so many trees. This parameter is used only when
#'        in_training_checkpoints_dir is defined. Defaults to 1.
#' @param monotone_constraints A mapping representing monotonic constraints. Use +1 to enforce an increasing constraint and -1 to specify a
#'        decreasing constraint (a commented usage sketch follows the function definition below).
#' @param check_constant_response \code{Logical}. Check if the response column is constant. If enabled, an exception is thrown when the
#'        response column is a constant value. If disabled, the model will train regardless of whether the response
#'        column is constant. Defaults to TRUE.
#' @param gainslift_bins Gains/Lift table number of bins. 0 means disabled. The default value of -1 means automatic binning. Defaults to -1.
#' @param auc_type Set default multinomial AUC type. Must be one of: "AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO",
#'        "WEIGHTED_OVO". Defaults to AUTO.
#' @param interaction_constraints A set of allowed column interactions.
#' @param auto_rebalance \code{Logical}. Allow automatic rebalancing of training and validation datasets. Defaults to TRUE.
#' @param verbose \code{Logical}. Print scoring history to the console (Metrics per tree). Defaults to FALSE.
#' @seealso \code{\link{predict.H2OModel}} for prediction
#' @examples
#' \dontrun{
#' library(h2o)
#' h2o.init()
#' 
#' # Run regression GBM on australia data
#' australia_path <- system.file("extdata", "australia.csv", package = "h2o")
#' australia <- h2o.uploadFile(path = australia_path)
#' independent <- c("premax", "salmax", "minairtemp", "maxairtemp", "maxsst",
#'                  "maxsoilmoist", "Max_czcs")
#' dependent <- "runoffnew"
#' h2o.gbm(y = dependent, x = independent, training_frame = australia,
#'         ntrees = 3, max_depth = 3, min_rows = 2)
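#'
#' # A further sketch (assuming the prostate.csv file bundled with the h2o
#' # package and its CAPSULE/AGE/RACE/PSA/GLEASON columns): converting the
#' # response to a factor yields a bernoulli classification GBM, while
#' # stopping_rounds/stopping_metric enable early stopping.
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.uploadFile(path = prostate_path)
#' prostate$CAPSULE <- as.factor(prostate$CAPSULE)
#' h2o.gbm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "GLEASON"),
#'         training_frame = prostate, ntrees = 100, nfolds = 3,
#'         stopping_rounds = 3, stopping_metric = "logloss",
#'         stopping_tolerance = 1e-3, seed = 1234)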
#' }
#' @export
h2o.gbm <- function(x,
                    y,
                    training_frame,
                    model_id = NULL,
                    validation_frame = NULL,
                    nfolds = 0,
                    keep_cross_validation_models = TRUE,
                    keep_cross_validation_predictions = FALSE,
                    keep_cross_validation_fold_assignment = FALSE,
                    score_each_iteration = FALSE,
                    score_tree_interval = 0,
                    fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                    fold_column = NULL,
                    ignore_const_cols = TRUE,
                    offset_column = NULL,
                    weights_column = NULL,
                    balance_classes = FALSE,
                    class_sampling_factors = NULL,
                    max_after_balance_size = 5.0,
                    ntrees = 50,
                    max_depth = 5,
                    min_rows = 10,
                    nbins = 20,
                    nbins_top_level = 1024,
                    nbins_cats = 1024,
                    r2_stopping = 1.797693135e+308,
                    stopping_rounds = 0,
                    stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                    stopping_tolerance = 0.001,
                    max_runtime_secs = 0,
                    seed = -1,
                    build_tree_one_node = FALSE,
                    learn_rate = 0.1,
                    learn_rate_annealing = 1,
                    distribution = c("AUTO", "bernoulli", "quasibinomial", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace", "quantile", "huber", "custom"),
                    quantile_alpha = 0.5,
                    tweedie_power = 1.5,
                    huber_alpha = 0.9,
                    checkpoint = NULL,
                    sample_rate = 1,
                    sample_rate_per_class = NULL,
                    col_sample_rate = 1,
                    col_sample_rate_change_per_level = 1,
                    col_sample_rate_per_tree = 1,
                    min_split_improvement = 1e-05,
                    histogram_type = c("AUTO", "UniformAdaptive", "Random", "QuantilesGlobal", "RoundRobin", "UniformRobust"),
                    max_abs_leafnode_pred = 1.797693135e+308,
                    pred_noise_bandwidth = 0,
                    categorical_encoding = c("AUTO", "Enum", "OneHotInternal", "OneHotExplicit", "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited"),
                    calibrate_model = FALSE,
                    calibration_frame = NULL,
                    calibration_method = c("AUTO", "PlattScaling", "IsotonicRegression"),
                    custom_metric_func = NULL,
                    custom_distribution_func = NULL,
                    export_checkpoints_dir = NULL,
                    in_training_checkpoints_dir = NULL,
                    in_training_checkpoints_tree_interval = 1,
                    monotone_constraints = NULL,
                    check_constant_response = TRUE,
                    gainslift_bins = -1,
                    auc_type = c("AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO", "WEIGHTED_OVO"),
                    interaction_constraints = NULL,
                    auto_rebalance = TRUE,
                    verbose = FALSE)
{
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         x <- setdiff(col(training_frame), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Validate other args
  # Required maps for different names params, including deprecated params
  .gbm.map <- c("x" = "ignored_columns",
                "y" = "response_column")

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  if( !missing(offset_column) && !is.null(offset_column))  args$x_ignore <- args$x_ignore[!( offset_column == args$x_ignore )]
  if( !missing(weights_column) && !is.null(weights_column)) args$x_ignore <- args$x_ignore[!( weights_column == args$x_ignore )]
  if( !missing(fold_column) && !is.null(fold_column)) args$x_ignore <- args$x_ignore[!( fold_column == args$x_ignore )]
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y

  if (!missing(model_id))
    parms$model_id <- model_id
  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(nfolds))
    parms$nfolds <- nfolds
  if (!missing(keep_cross_validation_models))
    parms$keep_cross_validation_models <- keep_cross_validation_models
  if (!missing(keep_cross_validation_predictions))
    parms$keep_cross_validation_predictions <- keep_cross_validation_predictions
  if (!missing(keep_cross_validation_fold_assignment))
    parms$keep_cross_validation_fold_assignment <- keep_cross_validation_fold_assignment
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_tree_interval))
    parms$score_tree_interval <- score_tree_interval
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(ntrees))
    parms$ntrees <- ntrees
  if (!missing(max_depth))
    parms$max_depth <- max_depth
  if (!missing(min_rows))
    parms$min_rows <- min_rows
  if (!missing(nbins))
    parms$nbins <- nbins
  if (!missing(nbins_top_level))
    parms$nbins_top_level <- nbins_top_level
  if (!missing(nbins_cats))
    parms$nbins_cats <- nbins_cats
  if (!missing(r2_stopping))
    parms$r2_stopping <- r2_stopping
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(build_tree_one_node))
    parms$build_tree_one_node <- build_tree_one_node
  if (!missing(learn_rate))
    parms$learn_rate <- learn_rate
  if (!missing(learn_rate_annealing))
    parms$learn_rate_annealing <- learn_rate_annealing
  if (!missing(distribution))
    parms$distribution <- distribution
  if (!missing(quantile_alpha))
    parms$quantile_alpha <- quantile_alpha
  if (!missing(tweedie_power))
    parms$tweedie_power <- tweedie_power
  if (!missing(huber_alpha))
    parms$huber_alpha <- huber_alpha
  if (!missing(checkpoint))
    parms$checkpoint <- checkpoint
  if (!missing(sample_rate))
    parms$sample_rate <- sample_rate
  if (!missing(sample_rate_per_class))
    parms$sample_rate_per_class <- sample_rate_per_class
  if (!missing(col_sample_rate))
    parms$col_sample_rate <- col_sample_rate
  if (!missing(col_sample_rate_change_per_level))
    parms$col_sample_rate_change_per_level <- col_sample_rate_change_per_level
  if (!missing(col_sample_rate_per_tree))
    parms$col_sample_rate_per_tree <- col_sample_rate_per_tree
  if (!missing(min_split_improvement))
    parms$min_split_improvement <- min_split_improvement
  if (!missing(histogram_type))
    parms$histogram_type <- histogram_type
  if (!missing(max_abs_leafnode_pred))
    parms$max_abs_leafnode_pred <- max_abs_leafnode_pred
  if (!missing(pred_noise_bandwidth))
    parms$pred_noise_bandwidth <- pred_noise_bandwidth
  if (!missing(categorical_encoding))
    parms$categorical_encoding <- categorical_encoding
  if (!missing(calibrate_model))
    parms$calibrate_model <- calibrate_model
  if (!missing(calibration_frame))
    parms$calibration_frame <- calibration_frame
  if (!missing(calibration_method))
    parms$calibration_method <- calibration_method
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(custom_distribution_func))
    parms$custom_distribution_func <- custom_distribution_func
  if (!missing(export_checkpoints_dir))
    parms$export_checkpoints_dir <- export_checkpoints_dir
  if (!missing(in_training_checkpoints_dir))
    parms$in_training_checkpoints_dir <- in_training_checkpoints_dir
  if (!missing(in_training_checkpoints_tree_interval))
    parms$in_training_checkpoints_tree_interval <- in_training_checkpoints_tree_interval
  if (!missing(monotone_constraints))
    parms$monotone_constraints <- monotone_constraints
  if (!missing(check_constant_response))
    parms$check_constant_response <- check_constant_response
  if (!missing(gainslift_bins))
    parms$gainslift_bins <- gainslift_bins
  if (!missing(auc_type))
    parms$auc_type <- auc_type
  if (!missing(interaction_constraints))
    parms$interaction_constraints <- interaction_constraints
  if (!missing(auto_rebalance))
    parms$auto_rebalance <- auto_rebalance

  # Error check and build model
  model <- .h2o.modelJob('gbm', parms, h2oRestApiVersion=3, verbose=verbose)
  return(model)
}
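
# A minimal sketch (not run) of the monotone_constraints mapping documented
# above: in R it is a named list of column name -> direction, with +1 for a
# monotonically increasing constraint and -1 for a decreasing one. The frame
# `train` and the column names here are illustrative assumptions, not part of
# this file.
#
#   gbm_mono <- h2o.gbm(y = "CAPSULE", x = c("AGE", "PSA"),
#                       training_frame = train, distribution = "bernoulli",
#                       monotone_constraints = list(AGE = 1, PSA = -1),
#                       seed = 1234)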
.h2o.train_segments_gbm <- function(x,
                                    y,
                                    training_frame,
                                    validation_frame = NULL,
                                    nfolds = 0,
                                    keep_cross_validation_models = TRUE,
                                    keep_cross_validation_predictions = FALSE,
                                    keep_cross_validation_fold_assignment = FALSE,
                                    score_each_iteration = FALSE,
                                    score_tree_interval = 0,
                                    fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                                    fold_column = NULL,
                                    ignore_const_cols = TRUE,
                                    offset_column = NULL,
                                    weights_column = NULL,
                                    balance_classes = FALSE,
                                    class_sampling_factors = NULL,
                                    max_after_balance_size = 5.0,
                                    ntrees = 50,
                                    max_depth = 5,
                                    min_rows = 10,
                                    nbins = 20,
                                    nbins_top_level = 1024,
                                    nbins_cats = 1024,
                                    r2_stopping = 1.797693135e+308,
                                    stopping_rounds = 0,
                                    stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                                    stopping_tolerance = 0.001,
                                    max_runtime_secs = 0,
                                    seed = -1,
                                    build_tree_one_node = FALSE,
                                    learn_rate = 0.1,
                                    learn_rate_annealing = 1,
                                    distribution = c("AUTO", "bernoulli", "quasibinomial", "multinomial", "gaussian", "poisson", "gamma", "tweedie", "laplace", "quantile", "huber", "custom"),
                                    quantile_alpha = 0.5,
                                    tweedie_power = 1.5,
                                    huber_alpha = 0.9,
                                    checkpoint = NULL,
                                    sample_rate = 1,
                                    sample_rate_per_class = NULL,
                                    col_sample_rate = 1,
                                    col_sample_rate_change_per_level = 1,
                                    col_sample_rate_per_tree = 1,
                                    min_split_improvement = 1e-05,
                                    histogram_type = c("AUTO", "UniformAdaptive", "Random", "QuantilesGlobal", "RoundRobin", "UniformRobust"),
                                    max_abs_leafnode_pred = 1.797693135e+308,
                                    pred_noise_bandwidth = 0,
                                    categorical_encoding = c("AUTO", "Enum", "OneHotInternal", "OneHotExplicit", "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited"),
                                    calibrate_model = FALSE,
                                    calibration_frame = NULL,
                                    calibration_method = c("AUTO", "PlattScaling", "IsotonicRegression"),
                                    custom_metric_func = NULL,
                                    custom_distribution_func = NULL,
                                    export_checkpoints_dir = NULL,
                                    in_training_checkpoints_dir = NULL,
                                    in_training_checkpoints_tree_interval = 1,
                                    monotone_constraints = NULL,
                                    check_constant_response = TRUE,
                                    gainslift_bins = -1,
                                    auc_type = c("AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO", "WEIGHTED_OVO"),
                                    interaction_constraints = NULL,
                                    auto_rebalance = TRUE,
                                    segment_columns = NULL,
                                    segment_models_id = NULL,
                                    parallelism = 1)
{
  # formally define variables that were excluded from function parameters
  model_id <- NULL
  verbose <- NULL
  destination_key <- NULL
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         x <- setdiff(col(training_frame), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Validate other args
  # Required maps for different names params, including deprecated params
  .gbm.map <- c("x" = "ignored_columns",
                "y" = "response_column")

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  if( !missing(offset_column) && !is.null(offset_column))  args$x_ignore <- args$x_ignore[!( offset_column == args$x_ignore )]
  if( !missing(weights_column) && !is.null(weights_column)) args$x_ignore <- args$x_ignore[!( weights_column == args$x_ignore )]
  if( !missing(fold_column) && !is.null(fold_column)) args$x_ignore <- args$x_ignore[!( fold_column == args$x_ignore )]
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y

  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(nfolds))
    parms$nfolds <- nfolds
  if (!missing(keep_cross_validation_models))
    parms$keep_cross_validation_models <- keep_cross_validation_models
  if (!missing(keep_cross_validation_predictions))
    parms$keep_cross_validation_predictions <- keep_cross_validation_predictions
  if (!missing(keep_cross_validation_fold_assignment))
    parms$keep_cross_validation_fold_assignment <- keep_cross_validation_fold_assignment
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_tree_interval))
    parms$score_tree_interval <- score_tree_interval
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(ntrees))
    parms$ntrees <- ntrees
  if (!missing(max_depth))
    parms$max_depth <- max_depth
  if (!missing(min_rows))
    parms$min_rows <- min_rows
  if (!missing(nbins))
    parms$nbins <- nbins
  if (!missing(nbins_top_level))
    parms$nbins_top_level <- nbins_top_level
  if (!missing(nbins_cats))
    parms$nbins_cats <- nbins_cats
  if (!missing(r2_stopping))
    parms$r2_stopping <- r2_stopping
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(build_tree_one_node))
    parms$build_tree_one_node <- build_tree_one_node
  if (!missing(learn_rate))
    parms$learn_rate <- learn_rate
  if (!missing(learn_rate_annealing))
    parms$learn_rate_annealing <- learn_rate_annealing
  if (!missing(distribution))
    parms$distribution <- distribution
  if (!missing(quantile_alpha))
    parms$quantile_alpha <- quantile_alpha
  if (!missing(tweedie_power))
    parms$tweedie_power <- tweedie_power
  if (!missing(huber_alpha))
    parms$huber_alpha <- huber_alpha
  if (!missing(checkpoint))
    parms$checkpoint <- checkpoint
  if (!missing(sample_rate))
    parms$sample_rate <- sample_rate
  if (!missing(sample_rate_per_class))
    parms$sample_rate_per_class <- sample_rate_per_class
  if (!missing(col_sample_rate))
    parms$col_sample_rate <- col_sample_rate
  if (!missing(col_sample_rate_change_per_level))
    parms$col_sample_rate_change_per_level <- col_sample_rate_change_per_level
  if (!missing(col_sample_rate_per_tree))
    parms$col_sample_rate_per_tree <- col_sample_rate_per_tree
  if (!missing(min_split_improvement))
    parms$min_split_improvement <- min_split_improvement
  if (!missing(histogram_type))
    parms$histogram_type <- histogram_type
  if (!missing(max_abs_leafnode_pred))
    parms$max_abs_leafnode_pred <- max_abs_leafnode_pred
  if (!missing(pred_noise_bandwidth))
    parms$pred_noise_bandwidth <- pred_noise_bandwidth
  if (!missing(categorical_encoding))
    parms$categorical_encoding <- categorical_encoding
  if (!missing(calibrate_model))
    parms$calibrate_model <- calibrate_model
  if (!missing(calibration_frame))
    parms$calibration_frame <- calibration_frame
  if (!missing(calibration_method))
    parms$calibration_method <- calibration_method
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(custom_distribution_func))
    parms$custom_distribution_func <- custom_distribution_func
  if (!missing(export_checkpoints_dir))
    parms$export_checkpoints_dir <- export_checkpoints_dir
  if (!missing(in_training_checkpoints_dir))
    parms$in_training_checkpoints_dir <- in_training_checkpoints_dir
  if (!missing(in_training_checkpoints_tree_interval))
    parms$in_training_checkpoints_tree_interval <- in_training_checkpoints_tree_interval
  if (!missing(monotone_constraints))
    parms$monotone_constraints <- monotone_constraints
  if (!missing(check_constant_response))
    parms$check_constant_response <- check_constant_response
  if (!missing(gainslift_bins))
    parms$gainslift_bins <- gainslift_bins
  if (!missing(auc_type))
    parms$auc_type <- auc_type
  if (!missing(interaction_constraints))
    parms$interaction_constraints <- interaction_constraints
  if (!missing(auto_rebalance))
    parms$auto_rebalance <- auto_rebalance

  # Build segment-models specific parameters
  segment_parms <- list()
  if (!missing(segment_columns))
    segment_parms$segment_columns <- segment_columns
  if (!missing(segment_models_id))
    segment_parms$segment_models_id <- segment_models_id
  segment_parms$parallelism <- parallelism

  # Error check and build segment models
  segment_models <- .h2o.segmentModelsJob('gbm', segment_parms, parms, h2oRestApiVersion=3)
  return(segment_models)
}
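
# A minimal usage sketch for segmented training: this internal helper is
# normally reached through the public h2o.train_segments() wrapper, which
# builds one GBM per distinct combination of values in `segment_columns` and
# returns an H2OSegmentModels object. The frame `train` and the column names
# below are illustrative assumptions.
#
#   segments <- h2o.train_segments(algorithm = "gbm",
#                                  segment_columns = "RACE",
#                                  x = c("AGE", "PSA"), y = "CAPSULE",
#                                  training_frame = train, ntrees = 50,
#                                  parallelism = 2)
#   as.data.frame(segments)  # one row per segment with its model id and status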
