R/modelselection.R

# This file is auto-generated by h2o-3/h2o-bindings/bin/gen_R.py
# Copyright 2016 H2O.ai;  Apache License Version 2.0 (see LICENSE for details) 
#'
# -------------------------- Model Selection -------------------------- #
#'
#' H2O ModelSelection is used to build the best model with one predictor, two predictors, and so on up to the
#' max_predictor_number specified in the algorithm parameters when mode=allsubsets; the best model is the one with the
#' highest R2 value.  When mode=maxr, the returned model is no longer guaranteed to have the best R2 value.
#'
#' @param x (Optional) A vector containing the names or indices of the predictor variables to use in building the model.
#'        If x is missing, then all columns except y are used.
#' @param y The name or column index of the response variable in the data. 
#'        The response must be either a numeric or a categorical/factor variable. 
#'        If the response is numeric, then a regression model will be trained, otherwise it will train a classification model.
#' @param training_frame Id of the training data frame.
#' @param model_id Destination id for this model; auto-generated if not specified.
#' @param validation_frame Id of the validation data frame.
#' @param nfolds Number of folds for K-fold cross-validation (0 to disable or >= 2). Defaults to 0.
#' @param seed Seed for random numbers (affects certain parts of the algo that are stochastic and those might or might not be enabled by default).
#'        Defaults to -1 (time-based random number).
#' @param fold_assignment Cross-validation fold assignment scheme, if fold_column is not specified. The 'Stratified' option will
#'        stratify the folds based on the response variable, for classification problems. Must be one of: "AUTO",
#'        "Random", "Modulo", "Stratified". Defaults to AUTO.
#' @param fold_column Column with cross-validation fold index assignment per observation.
#' @param ignore_const_cols \code{Logical}. Ignore constant columns. Defaults to TRUE.
#' @param score_each_iteration \code{Logical}. Whether to score during each iteration of model training. Defaults to FALSE.
#' @param score_iteration_interval Perform scoring for every score_iteration_interval iterations. Defaults to 0.
#' @param offset_column Offset column. This will be added to the combination of columns before applying the link function.
#' @param weights_column Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from
#'        the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative
#'        weights are not allowed. Note: Weights are per-row observation weights and do not increase the size of the
#'        data frame. This is typically the number of times a row is repeated, but non-integer values are supported as
#'        well. During training, rows with higher weights matter more, due to the larger loss function pre-factor. If
#'        you set weight = 0 for a row, the returned prediction for that row is zero, which is incorrect. To get
#'        accurate predictions, remove all rows with weight == 0.
#' @param family Family. For maxr/maxrsweep, only gaussian is supported.  For backward, the ordinal and multinomial
#'        families are not supported.  Must be one of: "AUTO", "gaussian", "binomial", "fractionalbinomial",
#'        "quasibinomial", "poisson", "gamma", "tweedie", "negativebinomial". Defaults to AUTO.
#' @param link Link function. Must be one of: "family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit".
#'        Defaults to family_default.
#' @param tweedie_variance_power Tweedie variance power. Defaults to 0.
#' @param tweedie_link_power Tweedie link power. Defaults to 0.
#' @param theta Theta. Defaults to 0.
#' @param solver AUTO will set the solver based on the given data and the other parameters. IRLSM is fast on problems
#'        with a small number of predictors and for lambda search with an L1 penalty; L_BFGS scales better for datasets
#'        with many columns. Must be one of: "AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT",
#'        "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR". Defaults to IRLSM.
#' @param alpha Distribution of regularization between the L1 (Lasso) and L2 (Ridge) penalties. A value of 1 for alpha
#'        represents Lasso regression, a value of 0 produces Ridge regression, and anything in between specifies the
#'        amount of mixing between the two. Default value of alpha is 0 when SOLVER = 'L-BFGS'; 0.5 otherwise.
#' @param lambda Regularization strength. Defaults to c(0.0).
#' @param lambda_search \code{Logical}. Use lambda search starting at lambda max; the given lambda is then interpreted
#'        as lambda min. Defaults to FALSE.
#' @param early_stopping \code{Logical}. Stop early when there is no more relative improvement on the training or
#'        validation data (if provided). Defaults to FALSE.
#' @param nlambdas Number of lambdas to be used in a search. Default indicates: If alpha is zero, with lambda search set
#'        to True, the value of nlambdas is set to 30 (fewer lambdas are needed for ridge regression); otherwise it is
#'        set to 100. Defaults to 0.
#' @param standardize \code{Logical}. Standardize numeric columns to have zero mean and unit variance. Defaults to TRUE.
#' @param missing_values_handling Handling of missing values. Either MeanImputation, Skip or PlugValues. Must be one of: "MeanImputation",
#'        "Skip", "PlugValues". Defaults to MeanImputation.
#' @param plug_values Plug Values (a single-row frame containing values that will be used to impute missing values of the
#'        training/validation frame; use in conjunction with missing_values_handling = PlugValues).
#' @param compute_p_values \code{Logical}. Request p-values computation; p-values work only with the IRLSM solver and no
#'        regularization. Defaults to FALSE.
#' @param remove_collinear_columns \code{Logical}. In case of linearly dependent columns, remove some of the dependent
#'        columns. Defaults to FALSE.
#' @param intercept \code{Logical}. Include a constant term in the model. Defaults to TRUE.
#' @param non_negative \code{Logical}. Restrict coefficients (not intercept) to be non-negative. Defaults to FALSE.
#' @param max_iterations Maximum number of iterations. Defaults to 0.
#' @param objective_epsilon Converge if the objective value changes less than this. Default (of -1.0) indicates: If
#'        lambda_search is set to True, the value of objective_epsilon is set to .0001. If lambda_search is set to False
#'        and lambda is equal to zero, the value of objective_epsilon is set to .000001; for any other value of lambda,
#'        the default value of objective_epsilon is set to .0001. Defaults to -1.
#' @param beta_epsilon Converge if beta changes less (using L-infinity norm) than beta epsilon; ONLY applies to the IRLSM
#'        solver. Defaults to 0.0001.
#' @param gradient_epsilon Converge if the objective changes less (using L-infinity norm) than this; ONLY applies to the
#'        L-BFGS solver. Default (of -1.0) indicates: If lambda_search is set to False and lambda is equal to zero, the
#'        default value of gradient_epsilon is equal to .000001; otherwise the default value is .0001. If lambda_search
#'        is set to True, the conditional values above are 1E-8 and 1E-6 respectively. Defaults to -1.
#' @param startval Double array to initialize the fixed and random coefficients for HGLM, or the coefficients for GLM.
#' @param prior Prior probability for y==1. To be used only for logistic regression, if and only if the data has been
#'        sampled and the mean of the response does not reflect reality. Defaults to 0.
#' @param cold_start \code{Logical}. Only applicable to multiple alpha/lambda values.  If false, build the next model for next set
#'        of alpha/lambda values starting from the values provided by current model.  If true will start GLM model from
#'        scratch. Defaults to FALSE.
#' @param lambda_min_ratio Minimum lambda used in lambda search, specified as a ratio of lambda_max (the smallest lambda that drives all
#'        coefficients to zero). Default indicates: if the number of observations is greater than the number of
#'        variables, then lambda_min_ratio is set to 0.0001; if the number of observations is less than the number of
#'        variables, then lambda_min_ratio is set to 0.01. Defaults to 0.
#' @param beta_constraints Beta constraints
#' @param max_active_predictors Maximum number of active predictors during computation. Use as a stopping criterion to prevent expensive model
#'        building with many predictors. Default indicates: If the IRLSM solver is used, the value of
#'        max_active_predictors is set to 5000 otherwise it is set to 100000000. Defaults to -1.
#' @param obj_reg Likelihood divider in objective value computation; the default (of -1.0) will set it to 1/nobs.
#'        Defaults to -1.
#' @param stopping_rounds Early stopping based on convergence of stopping_metric. Stop if the simple moving average of
#'        length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable).
#'        Defaults to 0.
#' @param stopping_metric Metric to use for early stopping (AUTO: logloss for classification, deviance for regression and anomaly_score
#'        for Isolation Forest). Note that custom and custom_increasing can only be used in GBM and DRF with the Python
#'        client. Must be one of: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR",
#'        "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing". Defaults to
#'        AUTO.
#' @param stopping_tolerance Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this
#'        much) Defaults to 0.001.
#' @param balance_classes \code{Logical}. Balance training data class counts via over/under-sampling (for imbalanced data). Defaults to
#'        FALSE.
#' @param class_sampling_factors Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will
#'        be automatically computed to obtain class balance during training. Requires balance_classes.
#' @param max_after_balance_size Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires
#'        balance_classes. Defaults to 5.0.
#' @param max_runtime_secs Maximum allowed runtime in seconds for model training. Use 0 to disable. Defaults to 0.
#' @param custom_metric_func Reference to custom evaluation function, format: `language:keyName=funcName`
#' @param nparallelism Number of models to build in parallel.  Defaults to 0, which is adaptive to the system
#'        capability.
#' @param max_predictor_number Maximum number of predictors to be considered when building GLM models.  Defaults to 1.
#' @param min_predictor_number For mode = 'backward' only.  Minimum number of predictors to be considered when building
#'        GLM models, starting with all predictors included.  Defaults to 1.
#' @param mode Mode: Used to choose model selection algorithms to use.  Options include 'allsubsets' for all subsets, 'maxr'
#'        that uses sequential replacement and GLM to build all models, slow but works with cross-validation, validation
#'        frames for more robust results, 'maxrsweep' that uses sequential replacement and sweeping action, much faster
#'        than 'maxr', 'backward' for backward selection. Must be one of: "allsubsets", "maxr", "maxrsweep", "backward".
#'        Defaults to maxr.
#' @param build_glm_model \code{Logical}. For maxrsweep mode only.  If true, will return full-blown GLM models with the
#'        desired predictor subsets.  If false, only the predictor subsets and predictor coefficients are returned.  This
#'        speeds up the model selection process.  Users can choose to build the GLM models themselves from the predictor
#'        subsets.  Defaults to FALSE.
#' @param p_values_threshold For mode='backward' only.  If specified, will stop the model building process when all
#'        coefficients' p-values drop below this threshold.  Defaults to 0.
#' @param influence If set to dfbetas, will calculate the difference in beta when a data row is included and excluded from
#'        the dataset. Must be one of: "dfbetas".
#' @param multinode_mode \code{Logical}. For maxrsweep only.  If enabled, will attempt to perform the sweeping action
#'        using multiple nodes in the cluster.  Defaults to FALSE.
#' @examples
#' \dontrun{
#' library(h2o)
#' h2o.init()
#' # Run ModelSelection of VOL ~ a subset of predictors
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.uploadFile(path = prostate_path)
#' prostate$CAPSULE <- as.factor(prostate$CAPSULE)
#' model <- h2o.modelSelection(y = "VOL", x = c("RACE", "AGE", "DPROS"), training_frame = prostate)
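#' # Inspect the results: best R2 value and best predictor subset per subset
#' # size (accessors defined later in this file)
#' h2o.get_best_r2_values(model)
#' h2o.get_best_model_predictors(model)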
#' }
#' @export
h2o.modelSelection <- function(x,
                               y,
                               training_frame,
                               model_id = NULL,
                               validation_frame = NULL,
                               nfolds = 0,
                               seed = -1,
                               fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                               fold_column = NULL,
                               ignore_const_cols = TRUE,
                               score_each_iteration = FALSE,
                               score_iteration_interval = 0,
                               offset_column = NULL,
                               weights_column = NULL,
                               family = c("AUTO", "gaussian", "binomial", "fractionalbinomial", "quasibinomial", "poisson", "gamma", "tweedie", "negativebinomial"),
                               link = c("family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit"),
                               tweedie_variance_power = 0,
                               tweedie_link_power = 0,
                               theta = 0,
                               solver = c("AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT", "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR"),
                               alpha = NULL,
                               lambda = c(0.0),
                               lambda_search = FALSE,
                               early_stopping = FALSE,
                               nlambdas = 0,
                               standardize = TRUE,
                               missing_values_handling = c("MeanImputation", "Skip", "PlugValues"),
                               plug_values = NULL,
                               compute_p_values = FALSE,
                               remove_collinear_columns = FALSE,
                               intercept = TRUE,
                               non_negative = FALSE,
                               max_iterations = 0,
                               objective_epsilon = -1,
                               beta_epsilon = 0.0001,
                               gradient_epsilon = -1,
                               startval = NULL,
                               prior = 0,
                               cold_start = FALSE,
                               lambda_min_ratio = 0,
                               beta_constraints = NULL,
                               max_active_predictors = -1,
                               obj_reg = -1,
                               stopping_rounds = 0,
                               stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                               stopping_tolerance = 0.001,
                               balance_classes = FALSE,
                               class_sampling_factors = NULL,
                               max_after_balance_size = 5.0,
                               max_runtime_secs = 0,
                               custom_metric_func = NULL,
                               nparallelism = 0,
                               max_predictor_number = 1,
                               min_predictor_number = 1,
                               mode = c("allsubsets", "maxr", "maxrsweep", "backward"),
                               build_glm_model = FALSE,
                               p_values_threshold = 0,
                               influence = c("dfbetas"),
                               multinode_mode = FALSE)
{
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         # y is a column index; use numeric indices for the remaining columns
         x <- setdiff(seq_len(ncol(training_frame)), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y

  if (!missing(model_id))
    parms$model_id <- model_id
  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(nfolds))
    parms$nfolds <- nfolds
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_iteration_interval))
    parms$score_iteration_interval <- score_iteration_interval
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(family))
    parms$family <- family
  if (!missing(link))
    parms$link <- link
  if (!missing(tweedie_variance_power))
    parms$tweedie_variance_power <- tweedie_variance_power
  if (!missing(tweedie_link_power))
    parms$tweedie_link_power <- tweedie_link_power
  if (!missing(theta))
    parms$theta <- theta
  if (!missing(solver))
    parms$solver <- solver
  if (!missing(alpha))
    parms$alpha <- alpha
  if (!missing(lambda))
    parms$lambda <- lambda
  if (!missing(lambda_search))
    parms$lambda_search <- lambda_search
  if (!missing(early_stopping))
    parms$early_stopping <- early_stopping
  if (!missing(nlambdas))
    parms$nlambdas <- nlambdas
  if (!missing(standardize))
    parms$standardize <- standardize
  if (!missing(missing_values_handling))
    parms$missing_values_handling <- missing_values_handling
  if (!missing(plug_values))
    parms$plug_values <- plug_values
  if (!missing(compute_p_values))
    parms$compute_p_values <- compute_p_values
  if (!missing(remove_collinear_columns))
    parms$remove_collinear_columns <- remove_collinear_columns
  if (!missing(intercept))
    parms$intercept <- intercept
  if (!missing(non_negative))
    parms$non_negative <- non_negative
  if (!missing(max_iterations))
    parms$max_iterations <- max_iterations
  if (!missing(objective_epsilon))
    parms$objective_epsilon <- objective_epsilon
  if (!missing(beta_epsilon))
    parms$beta_epsilon <- beta_epsilon
  if (!missing(gradient_epsilon))
    parms$gradient_epsilon <- gradient_epsilon
  if (!missing(startval))
    parms$startval <- startval
  if (!missing(prior))
    parms$prior <- prior
  if (!missing(cold_start))
    parms$cold_start <- cold_start
  if (!missing(lambda_min_ratio))
    parms$lambda_min_ratio <- lambda_min_ratio
  if (!missing(beta_constraints))
    parms$beta_constraints <- beta_constraints
  if (!missing(max_active_predictors))
    parms$max_active_predictors <- max_active_predictors
  if (!missing(obj_reg))
    parms$obj_reg <- obj_reg
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(nparallelism))
    parms$nparallelism <- nparallelism
  if (!missing(max_predictor_number))
    parms$max_predictor_number <- max_predictor_number
  if (!missing(min_predictor_number))
    parms$min_predictor_number <- min_predictor_number
  if (!missing(mode))
    parms$mode <- mode
  if (!missing(build_glm_model))
    parms$build_glm_model <- build_glm_model
  if (!missing(p_values_threshold))
    parms$p_values_threshold <- p_values_threshold
  if (!missing(influence))
    parms$influence <- influence
  if (!missing(multinode_mode))
    parms$multinode_mode <- multinode_mode

  # Error check and build model
  model <- .h2o.modelJob('modelselection', parms, h2oRestApiVersion=3, verbose=FALSE)
  return(model)
}
.h2o.train_segments_modelselection <- function(x,
                                               y,
                                               training_frame,
                                               validation_frame = NULL,
                                               nfolds = 0,
                                               seed = -1,
                                               fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                                               fold_column = NULL,
                                               ignore_const_cols = TRUE,
                                               score_each_iteration = FALSE,
                                               score_iteration_interval = 0,
                                               offset_column = NULL,
                                               weights_column = NULL,
                                               family = c("AUTO", "gaussian", "binomial", "fractionalbinomial", "quasibinomial", "poisson", "gamma", "tweedie", "negativebinomial"),
                                               link = c("family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit"),
                                               tweedie_variance_power = 0,
                                               tweedie_link_power = 0,
                                               theta = 0,
                                               solver = c("AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT", "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR"),
                                               alpha = NULL,
                                               lambda = c(0.0),
                                               lambda_search = FALSE,
                                               early_stopping = FALSE,
                                               nlambdas = 0,
                                               standardize = TRUE,
                                               missing_values_handling = c("MeanImputation", "Skip", "PlugValues"),
                                               plug_values = NULL,
                                               compute_p_values = FALSE,
                                               remove_collinear_columns = FALSE,
                                               intercept = TRUE,
                                               non_negative = FALSE,
                                               max_iterations = 0,
                                               objective_epsilon = -1,
                                               beta_epsilon = 0.0001,
                                               gradient_epsilon = -1,
                                               startval = NULL,
                                               prior = 0,
                                               cold_start = FALSE,
                                               lambda_min_ratio = 0,
                                               beta_constraints = NULL,
                                               max_active_predictors = -1,
                                               obj_reg = -1,
                                               stopping_rounds = 0,
                                               stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                                               stopping_tolerance = 0.001,
                                               balance_classes = FALSE,
                                               class_sampling_factors = NULL,
                                               max_after_balance_size = 5.0,
                                               max_runtime_secs = 0,
                                               custom_metric_func = NULL,
                                               nparallelism = 0,
                                               max_predictor_number = 1,
                                               min_predictor_number = 1,
                                               mode = c("allsubsets", "maxr", "maxrsweep", "backward"),
                                               build_glm_model = FALSE,
                                               p_values_threshold = 0,
                                               influence = c("dfbetas"),
                                               multinode_mode = FALSE,
                                               segment_columns = NULL,
                                               segment_models_id = NULL,
                                               parallelism = 1)
{
  # formally define variables that were excluded from function parameters
  model_id <- NULL
  verbose <- NULL
  destination_key <- NULL
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         # y is a column index; use numeric indices for the remaining columns
         x <- setdiff(seq_len(ncol(training_frame)), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y

  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(nfolds))
    parms$nfolds <- nfolds
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_iteration_interval))
    parms$score_iteration_interval <- score_iteration_interval
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(family))
    parms$family <- family
  if (!missing(link))
    parms$link <- link
  if (!missing(tweedie_variance_power))
    parms$tweedie_variance_power <- tweedie_variance_power
  if (!missing(tweedie_link_power))
    parms$tweedie_link_power <- tweedie_link_power
  if (!missing(theta))
    parms$theta <- theta
  if (!missing(solver))
    parms$solver <- solver
  if (!missing(alpha))
    parms$alpha <- alpha
  if (!missing(lambda))
    parms$lambda <- lambda
  if (!missing(lambda_search))
    parms$lambda_search <- lambda_search
  if (!missing(early_stopping))
    parms$early_stopping <- early_stopping
  if (!missing(nlambdas))
    parms$nlambdas <- nlambdas
  if (!missing(standardize))
    parms$standardize <- standardize
  if (!missing(missing_values_handling))
    parms$missing_values_handling <- missing_values_handling
  if (!missing(plug_values))
    parms$plug_values <- plug_values
  if (!missing(compute_p_values))
    parms$compute_p_values <- compute_p_values
  if (!missing(remove_collinear_columns))
    parms$remove_collinear_columns <- remove_collinear_columns
  if (!missing(intercept))
    parms$intercept <- intercept
  if (!missing(non_negative))
    parms$non_negative <- non_negative
  if (!missing(max_iterations))
    parms$max_iterations <- max_iterations
  if (!missing(objective_epsilon))
    parms$objective_epsilon <- objective_epsilon
  if (!missing(beta_epsilon))
    parms$beta_epsilon <- beta_epsilon
  if (!missing(gradient_epsilon))
    parms$gradient_epsilon <- gradient_epsilon
  if (!missing(startval))
    parms$startval <- startval
  if (!missing(prior))
    parms$prior <- prior
  if (!missing(cold_start))
    parms$cold_start <- cold_start
  if (!missing(lambda_min_ratio))
    parms$lambda_min_ratio <- lambda_min_ratio
  if (!missing(beta_constraints))
    parms$beta_constraints <- beta_constraints
  if (!missing(max_active_predictors))
    parms$max_active_predictors <- max_active_predictors
  if (!missing(obj_reg))
    parms$obj_reg <- obj_reg
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(nparallelism))
    parms$nparallelism <- nparallelism
  if (!missing(max_predictor_number))
    parms$max_predictor_number <- max_predictor_number
  if (!missing(min_predictor_number))
    parms$min_predictor_number <- min_predictor_number
  if (!missing(mode))
    parms$mode <- mode
  if (!missing(build_glm_model))
    parms$build_glm_model <- build_glm_model
  if (!missing(p_values_threshold))
    parms$p_values_threshold <- p_values_threshold
  if (!missing(influence))
    parms$influence <- influence
  if (!missing(multinode_mode))
    parms$multinode_mode <- multinode_mode

  # Build segment-models specific parameters
  segment_parms <- list()
  if (!missing(segment_columns))
    segment_parms$segment_columns <- segment_columns
  if (!missing(segment_models_id))
    segment_parms$segment_models_id <- segment_models_id
  segment_parms$parallelism <- parallelism

  # Error check and build segment models
  segment_models <- .h2o.segmentModelsJob('modelselection', segment_parms, parms, h2oRestApiVersion=3)
  return(segment_models)
}
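# A minimal usage sketch for the internal segment-training helper above;
# assumption: it is normally invoked through h2o.train_segments() rather than
# called directly, e.g.:
#   segment_models <- h2o.train_segments(algorithm = "modelselection",
#                                        segment_columns = "RACE",
#                                        y = "VOL", training_frame = prostate)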


#' Extracts the best R2 values for all predictor subset sizes.
#'
#' @param model an H2OModel with algorithm name of modelselection
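#' @examples
#' \dontrun{
#' # Minimal sketch, assuming `model` was fit by h2o.modelSelection as in the
#' # example above
#' h2o.get_best_r2_values(model)
#' }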
#' @export   
h2o.get_best_r2_values <- function(model) {
  if (is(model, "H2OModel") && (model@algorithm == 'modelselection'))
    return(model@model$best_r2_values)
}

#' Extracts the predictor added to the model at each step.
#'
#' @param model an H2OModel with algorithm name of modelselection
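#' @examples
#' \dontrun{
#' # Sketch: valid for mode = "allsubsets", "maxr", or "maxrsweep"; stops with
#' # an error when the model was built with mode = "backward"
#' h2o.get_predictors_added_per_step(model)
#' }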
#' @export   
h2o.get_predictors_added_per_step <- function(model) {
  if (is(model, "H2OModel") && (model@algorithm == 'modelselection')) {
    if (model@allparameters$mode != 'backward') {
      return(model@model$predictors_added_per_step)
    } else {
      stop("h2o.get_predictors_added_per_step cannot be called when mode = 'backward'")
    }
  }
}

#' Extracts the predictor removed from the model at each step.
#'
#' @param model an H2OModel with algorithm name of modelselection
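#' @examples
#' \dontrun{
#' # Sketch, e.g. for mode = "backward", where one predictor is removed at
#' # each step
#' h2o.get_predictors_removed_per_step(model)
#' }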
#' @export   
h2o.get_predictors_removed_per_step <- function(model) {
  if (is(model, "H2OModel") && (model@algorithm == 'modelselection')) {
    return(model@model$predictors_removed_per_step)
  }
}

#' Extracts the subset of predictor names that yields the best R2 value for each predictor subset size.
#'
#' @param model an H2OModel with algorithm name of modelselection
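#' @examples
#' \dontrun{
#' # Sketch, assuming `model` was fit by h2o.modelSelection as in the example
#' # above
#' h2o.get_best_model_predictors(model)
#' }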
#' @export 
h2o.get_best_model_predictors <- function(model) {
  if (is(model, "H2OModel") && (model@algorithm == 'modelselection'))
    return(model@model$best_predictors_subset)
}

    
