R/glm.R

# This file is auto-generated by h2o-3/h2o-bindings/bin/gen_R.py
# Copyright 2016 H2O.ai;  Apache License Version 2.0 (see LICENSE for details) 
#'
# -------------------------- H2O Generalized Linear Models -------------------------- #
#'
#' Fit a generalized linear model
#' 
#' Fits a generalized linear model, specified by a response variable, a set of predictors, and a
#' description of the error distribution.
#'
#' @param x (Optional) A vector containing the names or indices of the predictor variables to use in building the model.
#'        If x is missing, then all columns except y are used.
#' @param y The name or column index of the response variable in the data. 
#'        The response must be either a numeric or a categorical/factor variable. 
#'        If the response is numeric, then a regression model will be trained, otherwise it will train a classification model.
#' @param training_frame Id of the training data frame.
#' @param model_id Destination id for this model; auto-generated if not specified.
#' @param validation_frame Id of the validation data frame.
#' @param nfolds Number of folds for K-fold cross-validation (0 to disable or >= 2). Defaults to 0.
#' @param checkpoint Model checkpoint to resume training with.
#' @param export_checkpoints_dir Automatically export generated models to this directory.
#' @param seed Seed for random numbers (affects certain parts of the algo that are stochastic and those might or might not be enabled by default).
#'        Defaults to -1 (time-based random number).
#' @param keep_cross_validation_models \code{Logical}. Whether to keep the cross-validation models. Defaults to TRUE.
#' @param keep_cross_validation_predictions \code{Logical}. Whether to keep the predictions of the cross-validation models. Defaults to FALSE.
#' @param keep_cross_validation_fold_assignment \code{Logical}. Whether to keep the cross-validation fold assignment. Defaults to FALSE.
#' @param fold_assignment Cross-validation fold assignment scheme, if fold_column is not specified. The 'Stratified' option will
#'        stratify the folds based on the response variable, for classification problems. Must be one of: "AUTO",
#'        "Random", "Modulo", "Stratified". Defaults to AUTO.
#' @param fold_column Column with cross-validation fold index assignment per observation.
#' @param random_columns Indices of the random columns for HGLM.
#' @param ignore_const_cols \code{Logical}. Ignore constant columns. Defaults to TRUE.
#' @param score_each_iteration \code{Logical}. Whether to score during each iteration of model training. Defaults to FALSE.
#' @param score_iteration_interval Perform scoring for every score_iteration_interval iterations. Defaults to -1.
#' @param offset_column Offset column. This will be added to the combination of columns before applying the link function.
#' @param weights_column Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from
#'        the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative
#'        weights are not allowed. Note: Weights are per-row observation weights and do not increase the size of the
#'        data frame. This is typically the number of times a row is repeated, but non-integer values are supported as
#'        well. During training, rows with higher weights matter more, due to the larger loss function pre-factor. If
#'        you set weight = 0 for a row, the returned prediction frame at that row is zero and this is incorrect. To get
#'        an accurate prediction, remove all rows with weight == 0.
#' @param family Family. Use binomial for classification with logistic regression; the others are for regression problems. Must be
#'        one of: "AUTO", "gaussian", "binomial", "fractionalbinomial", "quasibinomial", "ordinal", "multinomial",
#'        "poisson", "gamma", "tweedie", "negativebinomial". Defaults to AUTO.
#' @param rand_family Random Component Family array, one for each random component. Only gaussian is supported for
#'        now. Must be one of: "[gaussian]".
#' @param tweedie_variance_power Tweedie variance power. Defaults to 0.
#' @param tweedie_link_power Tweedie link power. Defaults to 1.
#' @param theta Theta. Defaults to 1e-10.
#' @param solver AUTO will set the solver based on the given data and other parameters. IRLSM is fast on problems with
#'        a small number of predictors and for lambda search with L1 penalty; L_BFGS scales better for datasets with
#'        many columns. Must be one of: "AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT",
#'        "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR". Defaults to AUTO.
#' @param alpha Distribution of regularization between the L1 (Lasso) and L2 (Ridge) penalties. A value of 1 for alpha
#'        represents Lasso regression, a value of 0 produces Ridge regression, and anything in between specifies the
#'        amount of mixing between the two. Default value of alpha is 0 when SOLVER = 'L-BFGS'; 0.5 otherwise.
#' @param lambda Regularization strength.
#' @param lambda_search \code{Logical}. Use lambda search starting at lambda max; the given lambda is then interpreted
#'        as lambda min. Defaults to FALSE.
#' @param early_stopping \code{Logical}. Stop early when there is no more relative improvement on train or validation (if provided)
#'        Defaults to TRUE.
#' @param nlambdas Number of lambdas to be used in a search. Default indicates: if alpha is zero and lambda_search is
#'        set to TRUE, the value of nlambdas is set to 30 (fewer lambdas are needed for ridge regression); otherwise
#'        it is set to 100. Defaults to -1.
#' @param standardize \code{Logical}. Standardize numeric columns to have zero mean and unit variance. Defaults to TRUE.
#' @param missing_values_handling Handling of missing values. Either MeanImputation, Skip or PlugValues. Must be one of: "MeanImputation",
#'        "Skip", "PlugValues". Defaults to MeanImputation.
#' @param plug_values Plug values (a single-row frame containing values that will be used to impute missing values of
#'        the training/validation frame; use in conjunction with missing_values_handling = PlugValues).
#' @param compute_p_values \code{Logical}. Request p-values computation; p-values work only with the IRLSM solver and
#'        no regularization. Defaults to FALSE.
#' @param dispersion_parameter_method Method used to estimate the dispersion parameter for Tweedie, Gamma and Negative Binomial only. Must be one
#'        of: "deviance", "pearson", "ml". Defaults to pearson.
#' @param init_dispersion_parameter Only used for Tweedie, Gamma and Negative Binomial GLM. Stores the initial value of
#'        the dispersion parameter. If fix_dispersion_parameter is set, this value will be used in the calculation of
#'        p-values. Defaults to 1.
#' @param remove_collinear_columns \code{Logical}. In case of linearly dependent columns, remove some of the dependent columns. Defaults to FALSE.
#' @param intercept \code{Logical}. Include constant term in the model. Defaults to TRUE.
#' @param non_negative \code{Logical}. Restrict coefficients (not intercept) to be non-negative. Defaults to FALSE.
#' @param max_iterations Maximum number of iterations. Defaults to -1.
#' @param objective_epsilon Converge if the objective value changes less than this. Default (of -1.0) indicates: if
#'        lambda_search is set to TRUE, the value of objective_epsilon is set to 0.0001. If lambda_search is set to
#'        FALSE and lambda is equal to zero, the value of objective_epsilon is set to 0.000001; for any other value of
#'        lambda the default value of objective_epsilon is set to 0.0001. Defaults to -1.
#' @param beta_epsilon Converge if beta changes less than beta_epsilon (using L-infinity norm); ONLY applies to IRLSM
#'        solver. Defaults to 0.0001.
#' @param gradient_epsilon Converge if the objective changes less than this (using L-infinity norm); ONLY applies to
#'        L-BFGS solver. Default (of -1.0) indicates: if lambda_search is set to FALSE and lambda is equal to zero,
#'        the default value of gradient_epsilon is equal to 0.000001, otherwise the default value is 0.0001. If
#'        lambda_search is set to TRUE, the conditional values above are 1E-8 and 1E-6 respectively. Defaults to -1.
#' @param link Link function. Must be one of: "family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit".
#'        Defaults to family_default.
#' @param rand_link Link function array for random component in HGLM. Must be one of: "[identity]", "[family_default]".
#' @param startval Double array to initialize fixed and random coefficients for HGLM, or coefficients for GLM.
#' @param calc_like \code{Logical}. If true, will return the likelihood function value. Defaults to FALSE.
#' @param HGLM \code{Logical}. If set to true, will return an HGLM model. Otherwise, a normal GLM model will be
#'        returned. Defaults to FALSE.
#' @param prior Prior probability for y==1. To be used only for logistic regression, if the data has been sampled and
#'        the mean of the response does not reflect reality. Defaults to -1.
#' @param cold_start \code{Logical}. Only applicable to multiple alpha/lambda values. If false, build the next model
#'        for the next set of alpha/lambda values starting from the values provided by the current model. If true, the
#'        GLM model will be built from scratch. Defaults to FALSE.
#' @param lambda_min_ratio Minimum lambda used in lambda search, specified as a ratio of lambda_max (the smallest lambda that drives all
#'        coefficients to zero). Default indicates: if the number of observations is greater than the number of
#'        variables, then lambda_min_ratio is set to 0.0001; if the number of observations is less than the number of
#'        variables, then lambda_min_ratio is set to 0.01. Defaults to -1.
#' @param beta_constraints Beta constraints
#' @param max_active_predictors Maximum number of active predictors during computation. Use as a stopping criterion to prevent expensive model
#'        building with many predictors. Default indicates: If the IRLSM solver is used, the value of
#'        max_active_predictors is set to 5000 otherwise it is set to 100000000. Defaults to -1.
#' @param interactions A list of predictor column indices to interact. All pairwise combinations will be computed for the list.
#' @param interaction_pairs A list of pairwise (first order) column interactions.
#' @param obj_reg Likelihood divider in objective value computation; default (of -1.0) will set it to 1/nobs. Defaults to -1.
#' @param stopping_rounds Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the
#'        stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable). Defaults to 0.
#' @param stopping_metric Metric to use for early stopping (AUTO: logloss for classification, deviance for regression and anomaly_score
#'        for Isolation Forest). Note that custom and custom_increasing can only be used in GBM and DRF with the Python
#'        client. Must be one of: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR",
#'        "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing". Defaults to
#'        AUTO.
#' @param stopping_tolerance Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this
#'        much). Defaults to 0.001.
#' @param balance_classes \code{Logical}. Balance training data class counts via over/under-sampling (for imbalanced data). Defaults to
#'        FALSE.
#' @param class_sampling_factors Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will
#'        be automatically computed to obtain class balance during training. Requires balance_classes.
#' @param max_after_balance_size Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires
#'        balance_classes. Defaults to 5.0.
#' @param max_runtime_secs Maximum allowed runtime in seconds for model training. Use 0 to disable. Defaults to 0.
#' @param custom_metric_func Reference to custom evaluation function, format: `language:keyName=funcName`
#' @param generate_scoring_history \code{Logical}. If set to true, will generate scoring history for GLM.  This may significantly slow down the
#'        algo. Defaults to FALSE.
#' @param auc_type Set default multinomial AUC type. Must be one of: "AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO",
#'        "WEIGHTED_OVO". Defaults to AUTO.
#' @param dispersion_epsilon If the change in the dispersion parameter estimate or the loglikelihood value is smaller
#'        than dispersion_epsilon, the maximum-likelihood dispersion estimation loop will terminate. Defaults to 0.0001.
#' @param tweedie_epsilon In estimating the Tweedie dispersion parameter using maximum likelihood, this is used to choose the lower and
#'        upper indices in approximating the infinite series summation. Defaults to 8e-17.
#' @param max_iterations_dispersion Control the maximum number of iterations in the dispersion parameter estimation loop using maximum likelihood.
#'        Defaults to 3000.
#' @param build_null_model \code{Logical}. If set, will build a model with only the intercept. Defaults to FALSE.
#' @param fix_dispersion_parameter \code{Logical}. Only used for Tweedie, Gamma and Negative Binomial GLM. If set, will use the dispersion
#'        parameter in init_dispersion_parameter as the standard error and use it to calculate the p-values. Defaults
#'        to FALSE.
#' @param generate_variable_inflation_factors \code{Logical}. If true, will generate variable inflation factors for numerical predictors.
#'        Defaults to FALSE.
#' @param fix_tweedie_variance_power \code{Logical}. If true, will fix the Tweedie variance power to the value set in tweedie_variance_power.
#'        Defaults to TRUE.
#' @param dispersion_learning_rate Dispersion learning rate is only valid for Tweedie family dispersion parameter estimation using ml. It must be
#'        > 0. This controls how much the dispersion parameter estimate is changed when the calculated loglikelihood
#'        actually decreases with the new dispersion. In this case, instead of setting new dispersion = dispersion +
#'        change, we set new dispersion = dispersion + dispersion_learning_rate * change. Defaults to 0.5.
#' @param influence If set to dfbetas, will calculate the difference in beta when a data row is included in and
#'        excluded from the dataset. Must be one of: "dfbetas".
#' @return A subclass of \code{\linkS4class{H2OModel}} is returned. The specific subclass depends on the machine
#'         learning task at hand (if it's binomial classification, then an \code{\linkS4class{H2OBinomialModel}} is
#'         returned, if it's regression then a \code{\linkS4class{H2ORegressionModel}} is returned). The default
#'         print-out of the models is shown, but further GLM-specific information can be queried out of the object. To access
#'         these various items, please refer to the seealso section below. Upon completion of the GLM, the resulting
#'         object has coefficients, normalized coefficients, residual/null deviance, aic, and a host of model metrics
#'         including MSE, AUC (for logistic regression), degrees of freedom, and confusion matrices. Please refer to the
#'         more in-depth GLM documentation available here:
#'         \url{https://h2o-release.s3.amazonaws.com/h2o-dev/rel-shannon/2/docs-website/h2o-docs/index.html#Data+Science+Algorithms-GLM}
#' @seealso \code{\link{predict.H2OModel}} for prediction, \code{\link{h2o.mse}}, \code{\link{h2o.auc}},
#'          \code{\link{h2o.confusionMatrix}}, \code{\link{h2o.performance}}, \code{\link{h2o.giniCoef}},
#'          \code{\link{h2o.logloss}}, \code{\link{h2o.varimp}}, \code{\link{h2o.scoreHistory}}
#' @examples
#' \dontrun{
#' h2o.init()
#' 
#' # Run GLM of CAPSULE ~ AGE + RACE + PSA + DCAPS
#' prostate_path = system.file("extdata", "prostate.csv", package = "h2o")
#' prostate = h2o.importFile(path = prostate_path)
#' h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"), training_frame = prostate,
#'         family = "binomial", nfolds = 0, alpha = 0.5, lambda_search = FALSE)
#' 
#' # Run GLM of VOL ~ CAPSULE + AGE + RACE + PSA + GLEASON
#' predictors = setdiff(colnames(prostate), c("ID", "DPROS", "DCAPS", "VOL"))
#' h2o.glm(y = "VOL", x = predictors, training_frame = prostate, family = "gaussian",
#'         nfolds = 0, alpha = 0.1, lambda_search = FALSE)
#' 
#' 
#' # GLM variable importance
#' # Also see:
#' #   https://github.com/h2oai/h2o/blob/master/R/tests/testdir_demos/runit_demo_VI_all_algos.R
#' bank = h2o.importFile(
#'   path="https://s3.amazonaws.com/h2o-public-test-data/smalldata/demos/bank-additional-full.csv"
#' )
#' predictors = 1:20
#' target = "y"
#' glm = h2o.glm(x = predictors, 
#'               y = target, 
#'               training_frame = bank, 
#'               family = "binomial", 
#'               standardize = TRUE,
#'               lambda_search = TRUE)
#' h2o.std_coef_plot(glm, num_of_features = 20)
#' }
#' @export
h2o.glm <- function(x,
                    y,
                    training_frame,
                    model_id = NULL,
                    validation_frame = NULL,
                    nfolds = 0,
                    checkpoint = NULL,
                    export_checkpoints_dir = NULL,
                    seed = -1,
                    keep_cross_validation_models = TRUE,
                    keep_cross_validation_predictions = FALSE,
                    keep_cross_validation_fold_assignment = FALSE,
                    fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                    fold_column = NULL,
                    random_columns = NULL,
                    ignore_const_cols = TRUE,
                    score_each_iteration = FALSE,
                    score_iteration_interval = -1,
                    offset_column = NULL,
                    weights_column = NULL,
                    family = c("AUTO", "gaussian", "binomial", "fractionalbinomial", "quasibinomial", "ordinal", "multinomial", "poisson", "gamma", "tweedie", "negativebinomial"),
                    rand_family = c("[gaussian]"),
                    tweedie_variance_power = 0,
                    tweedie_link_power = 1,
                    theta = 1e-10,
                    solver = c("AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT", "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR"),
                    alpha = NULL,
                    lambda = NULL,
                    lambda_search = FALSE,
                    early_stopping = TRUE,
                    nlambdas = -1,
                    standardize = TRUE,
                    missing_values_handling = c("MeanImputation", "Skip", "PlugValues"),
                    plug_values = NULL,
                    compute_p_values = FALSE,
                    dispersion_parameter_method = c("deviance", "pearson", "ml"),
                    init_dispersion_parameter = 1,
                    remove_collinear_columns = FALSE,
                    intercept = TRUE,
                    non_negative = FALSE,
                    max_iterations = -1,
                    objective_epsilon = -1,
                    beta_epsilon = 0.0001,
                    gradient_epsilon = -1,
                    link = c("family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit"),
                    rand_link = c("[identity]", "[family_default]"),
                    startval = NULL,
                    calc_like = FALSE,
                    HGLM = FALSE,
                    prior = -1,
                    cold_start = FALSE,
                    lambda_min_ratio = -1,
                    beta_constraints = NULL,
                    max_active_predictors = -1,
                    interactions = NULL,
                    interaction_pairs = NULL,
                    obj_reg = -1,
                    stopping_rounds = 0,
                    stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                    stopping_tolerance = 0.001,
                    balance_classes = FALSE,
                    class_sampling_factors = NULL,
                    max_after_balance_size = 5.0,
                    max_runtime_secs = 0,
                    custom_metric_func = NULL,
                    generate_scoring_history = FALSE,
                    auc_type = c("AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO", "WEIGHTED_OVO"),
                    dispersion_epsilon = 0.0001,
                    tweedie_epsilon = 8e-17,
                    max_iterations_dispersion = 3000,
                    build_null_model = FALSE,
                    fix_dispersion_parameter = FALSE,
                    generate_variable_inflation_factors = FALSE,
                    fix_tweedie_variance_power = TRUE,
                    dispersion_learning_rate = 0.5,
                    influence = c("dfbetas"))
{
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         x <- setdiff(col(training_frame), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Validate other args
  # if (!is.null(beta_constraints)) {
  #     if (!inherits(beta_constraints, 'data.frame') && !is.H2OFrame(beta_constraints))
  #       stop(paste('`beta_constraints` must be an H2OFrame or R data.frame. Got: ', class(beta_constraints)))
  #     if (inherits(beta_constraints, 'data.frame')) {
  #       beta_constraints <- as.h2o(beta_constraints)
  #     }
  # }
  if (inherits(beta_constraints, 'data.frame')) {
    beta_constraints <- as.h2o(beta_constraints)
  }

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  if (HGLM && is.null(random_columns)) stop("HGLM: must specify random effect column!")
  if (HGLM && (!is.null(random_columns))) {
    temp <- .verify_dataxy(training_frame, random_columns, y)
    random_columns <- temp$x_i-1  # change column index to numeric column indices starting from 0
  }
  if( !missing(offset_column) && !is.null(offset_column))  args$x_ignore <- args$x_ignore[!( offset_column == args$x_ignore )]
  if( !missing(weights_column) && !is.null(weights_column)) args$x_ignore <- args$x_ignore[!( weights_column == args$x_ignore )]
  if( !missing(fold_column) && !is.null(fold_column)) args$x_ignore <- args$x_ignore[!( fold_column == args$x_ignore )]
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y    

  if (!missing(model_id))
    parms$model_id <- model_id
  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(checkpoint))
    parms$checkpoint <- checkpoint
  if (!missing(export_checkpoints_dir))
    parms$export_checkpoints_dir <- export_checkpoints_dir
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(keep_cross_validation_models))
    parms$keep_cross_validation_models <- keep_cross_validation_models
  if (!missing(keep_cross_validation_predictions))
    parms$keep_cross_validation_predictions <- keep_cross_validation_predictions
  if (!missing(keep_cross_validation_fold_assignment))
    parms$keep_cross_validation_fold_assignment <- keep_cross_validation_fold_assignment
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(random_columns))
    parms$random_columns <- random_columns
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_iteration_interval))
    parms$score_iteration_interval <- score_iteration_interval
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(family))
    parms$family <- family
  if (!missing(rand_family))
    parms$rand_family <- rand_family
  if (!missing(tweedie_variance_power))
    parms$tweedie_variance_power <- tweedie_variance_power
  if (!missing(tweedie_link_power))
    parms$tweedie_link_power <- tweedie_link_power
  if (!missing(theta))
    parms$theta <- theta
  if (!missing(solver))
    parms$solver <- solver
  if (!missing(alpha))
    parms$alpha <- alpha
  if (!missing(lambda))
    parms$lambda <- lambda
  if (!missing(lambda_search))
    parms$lambda_search <- lambda_search
  if (!missing(early_stopping))
    parms$early_stopping <- early_stopping
  if (!missing(nlambdas))
    parms$nlambdas <- nlambdas
  if (!missing(standardize))
    parms$standardize <- standardize
  if (!missing(plug_values))
    parms$plug_values <- plug_values
  if (!missing(compute_p_values))
    parms$compute_p_values <- compute_p_values
  if (!missing(dispersion_parameter_method))
    parms$dispersion_parameter_method <- dispersion_parameter_method
  if (!missing(init_dispersion_parameter))
    parms$init_dispersion_parameter <- init_dispersion_parameter
  if (!missing(remove_collinear_columns))
    parms$remove_collinear_columns <- remove_collinear_columns
  if (!missing(intercept))
    parms$intercept <- intercept
  if (!missing(non_negative))
    parms$non_negative <- non_negative
  if (!missing(max_iterations))
    parms$max_iterations <- max_iterations
  if (!missing(objective_epsilon))
    parms$objective_epsilon <- objective_epsilon
  if (!missing(beta_epsilon))
    parms$beta_epsilon <- beta_epsilon
  if (!missing(gradient_epsilon))
    parms$gradient_epsilon <- gradient_epsilon
  if (!missing(link))
    parms$link <- link
  if (!missing(rand_link))
    parms$rand_link <- rand_link
  if (!missing(startval))
    parms$startval <- startval
  if (!missing(calc_like))
    parms$calc_like <- calc_like
  if (!missing(HGLM))
    parms$HGLM <- HGLM
  if (!missing(prior))
    parms$prior <- prior
  if (!missing(cold_start))
    parms$cold_start <- cold_start
  if (!missing(lambda_min_ratio))
    parms$lambda_min_ratio <- lambda_min_ratio
  if (!missing(max_active_predictors))
    parms$max_active_predictors <- max_active_predictors
  if (!missing(interaction_pairs))
    parms$interaction_pairs <- interaction_pairs
  if (!missing(obj_reg))
    parms$obj_reg <- obj_reg
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(generate_scoring_history))
    parms$generate_scoring_history <- generate_scoring_history
  if (!missing(auc_type))
    parms$auc_type <- auc_type
  if (!missing(dispersion_epsilon))
    parms$dispersion_epsilon <- dispersion_epsilon
  if (!missing(tweedie_epsilon))
    parms$tweedie_epsilon <- tweedie_epsilon
  if (!missing(max_iterations_dispersion))
    parms$max_iterations_dispersion <- max_iterations_dispersion
  if (!missing(build_null_model))
    parms$build_null_model <- build_null_model
  if (!missing(fix_dispersion_parameter))
    parms$fix_dispersion_parameter <- fix_dispersion_parameter
  if (!missing(generate_variable_inflation_factors))
    parms$generate_variable_inflation_factors <- generate_variable_inflation_factors
  if (!missing(fix_tweedie_variance_power))
    parms$fix_tweedie_variance_power <- fix_tweedie_variance_power
  if (!missing(dispersion_learning_rate))
    parms$dispersion_learning_rate <- dispersion_learning_rate
  if (!missing(influence))
    parms$influence <- influence

  if( !missing(interactions) ) {
    # interactions are column names => as-is
    if( is.character(interactions) )       parms$interactions <- interactions
    else if( is.numeric(interactions) )    parms$interactions <- names(training_frame)[interactions]
    else stop("Don't know what to do with interactions. Supply vector of indices or names")
  }
  # For now, accept nfolds in the R interface if it is 0 or 1, since those values really mean do nothing.
  # For any other value, error out.
  # Expunge nfolds from the message sent to H2O, since H2O doesn't understand it.
  if (!missing(nfolds) && nfolds > 1)
    parms$nfolds <- nfolds
  if(!missing(beta_constraints))
    parms$beta_constraints <- beta_constraints
  if (!missing(missing_values_handling))
    parms$missing_values_handling <- missing_values_handling

  # Error check and build model
  model <- .h2o.modelJob('glm', parms, h2oRestApiVersion=3, verbose=FALSE)

  model@model$coefficients <- model@model$coefficients_table[,2]
  names(model@model$coefficients) <- model@model$coefficients_table[,1]
  if (!(is.null(model@model$random_coefficients_table))) {
      model@model$random_coefficients <- model@model$random_coefficients_table[,2]
      names(model@model$random_coefficients) <- model@model$random_coefficients_table[,1]
  }
  return(model)
}
.h2o.train_segments_glm <- function(x,
                                    y,
                                    training_frame,
                                    validation_frame = NULL,
                                    nfolds = 0,
                                    checkpoint = NULL,
                                    export_checkpoints_dir = NULL,
                                    seed = -1,
                                    keep_cross_validation_models = TRUE,
                                    keep_cross_validation_predictions = FALSE,
                                    keep_cross_validation_fold_assignment = FALSE,
                                    fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
                                    fold_column = NULL,
                                    random_columns = NULL,
                                    ignore_const_cols = TRUE,
                                    score_each_iteration = FALSE,
                                    score_iteration_interval = -1,
                                    offset_column = NULL,
                                    weights_column = NULL,
                                    family = c("AUTO", "gaussian", "binomial", "fractionalbinomial", "quasibinomial", "ordinal", "multinomial", "poisson", "gamma", "tweedie", "negativebinomial"),
                                    rand_family = c("[gaussian]"),
                                    tweedie_variance_power = 0,
                                    tweedie_link_power = 1,
                                    theta = 1e-10,
                                    solver = c("AUTO", "IRLSM", "L_BFGS", "COORDINATE_DESCENT_NAIVE", "COORDINATE_DESCENT", "GRADIENT_DESCENT_LH", "GRADIENT_DESCENT_SQERR"),
                                    alpha = NULL,
                                    lambda = NULL,
                                    lambda_search = FALSE,
                                    early_stopping = TRUE,
                                    nlambdas = -1,
                                    standardize = TRUE,
                                    missing_values_handling = c("MeanImputation", "Skip", "PlugValues"),
                                    plug_values = NULL,
                                    compute_p_values = FALSE,
                                    dispersion_parameter_method = c("deviance", "pearson", "ml"),
                                    init_dispersion_parameter = 1,
                                    remove_collinear_columns = FALSE,
                                    intercept = TRUE,
                                    non_negative = FALSE,
                                    max_iterations = -1,
                                    objective_epsilon = -1,
                                    beta_epsilon = 0.0001,
                                    gradient_epsilon = -1,
                                    link = c("family_default", "identity", "logit", "log", "inverse", "tweedie", "ologit"),
                                    rand_link = c("[identity]", "[family_default]"),
                                    startval = NULL,
                                    calc_like = FALSE,
                                    HGLM = FALSE,
                                    prior = -1,
                                    cold_start = FALSE,
                                    lambda_min_ratio = -1,
                                    beta_constraints = NULL,
                                    max_active_predictors = -1,
                                    interactions = NULL,
                                    interaction_pairs = NULL,
                                    obj_reg = -1,
                                    stopping_rounds = 0,
                                    stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing"),
                                    stopping_tolerance = 0.001,
                                    balance_classes = FALSE,
                                    class_sampling_factors = NULL,
                                    max_after_balance_size = 5.0,
                                    max_runtime_secs = 0,
                                    custom_metric_func = NULL,
                                    generate_scoring_history = FALSE,
                                    auc_type = c("AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO", "WEIGHTED_OVO"),
                                    dispersion_epsilon = 0.0001,
                                    tweedie_epsilon = 8e-17,
                                    max_iterations_dispersion = 3000,
                                    build_null_model = FALSE,
                                    fix_dispersion_parameter = FALSE,
                                    generate_variable_inflation_factors = FALSE,
                                    fix_tweedie_variance_power = TRUE,
                                    dispersion_learning_rate = 0.5,
                                    influence = c("dfbetas"),
                                    segment_columns = NULL,
                                    segment_models_id = NULL,
                                    parallelism = 1)
{
  # formally define variables that were excluded from function parameters
  model_id <- NULL
  verbose <- NULL
  destination_key <- NULL
  # Validate required training_frame first and other frame args: should be a valid key or an H2OFrame object
  training_frame <- .validate.H2OFrame(training_frame, required=TRUE)
  validation_frame <- .validate.H2OFrame(validation_frame, required=FALSE)

  # Validate other required args
  # If x is missing, then assume user wants to use all columns as features.
  if (missing(x)) {
     if (is.numeric(y)) {
         x <- setdiff(col(training_frame), y)
     } else {
         x <- setdiff(colnames(training_frame), y)
     }
  }

  # Validate other args
  # if (!is.null(beta_constraints)) {
  #     if (!inherits(beta_constraints, 'data.frame') && !is.H2OFrame(beta_constraints))
  #       stop(paste('`beta_constraints` must be an H2OFrame or R data.frame. Got: ', class(beta_constraints)))
  #     if (inherits(beta_constraints, 'data.frame')) {
  #       beta_constraints <- as.h2o(beta_constraints)
  #     }
  # }
  if (inherits(beta_constraints, 'data.frame')) {
    beta_constraints <- as.h2o(beta_constraints)
  }

  # Build parameter list to send to model builder
  parms <- list()
  parms$training_frame <- training_frame
  args <- .verify_dataxy(training_frame, x, y)
  if (HGLM && is.null(random_columns)) stop("HGLM: must specify random effect column!")
  if (HGLM && (!is.null(random_columns))) {
    temp <- .verify_dataxy(training_frame, random_columns, y)
    random_columns <- temp$x_i-1  # change column index to numeric column indices starting from 0
  }
  if( !missing(offset_column) && !is.null(offset_column))  args$x_ignore <- args$x_ignore[!( offset_column == args$x_ignore )]
  if( !missing(weights_column) && !is.null(weights_column)) args$x_ignore <- args$x_ignore[!( weights_column == args$x_ignore )]
  if( !missing(fold_column) && !is.null(fold_column)) args$x_ignore <- args$x_ignore[!( fold_column == args$x_ignore )]
  parms$ignored_columns <- args$x_ignore
  parms$response_column <- args$y    

  if (!missing(validation_frame))
    parms$validation_frame <- validation_frame
  if (!missing(checkpoint))
    parms$checkpoint <- checkpoint
  if (!missing(export_checkpoints_dir))
    parms$export_checkpoints_dir <- export_checkpoints_dir
  if (!missing(seed))
    parms$seed <- seed
  if (!missing(keep_cross_validation_models))
    parms$keep_cross_validation_models <- keep_cross_validation_models
  if (!missing(keep_cross_validation_predictions))
    parms$keep_cross_validation_predictions <- keep_cross_validation_predictions
  if (!missing(keep_cross_validation_fold_assignment))
    parms$keep_cross_validation_fold_assignment <- keep_cross_validation_fold_assignment
  if (!missing(fold_assignment))
    parms$fold_assignment <- fold_assignment
  if (!missing(fold_column))
    parms$fold_column <- fold_column
  if (!missing(random_columns))
    parms$random_columns <- random_columns
  if (!missing(ignore_const_cols))
    parms$ignore_const_cols <- ignore_const_cols
  if (!missing(score_each_iteration))
    parms$score_each_iteration <- score_each_iteration
  if (!missing(score_iteration_interval))
    parms$score_iteration_interval <- score_iteration_interval
  if (!missing(offset_column))
    parms$offset_column <- offset_column
  if (!missing(weights_column))
    parms$weights_column <- weights_column
  if (!missing(family))
    parms$family <- family
  if (!missing(rand_family))
    parms$rand_family <- rand_family
  if (!missing(tweedie_variance_power))
    parms$tweedie_variance_power <- tweedie_variance_power
  if (!missing(tweedie_link_power))
    parms$tweedie_link_power <- tweedie_link_power
  if (!missing(theta))
    parms$theta <- theta
  if (!missing(solver))
    parms$solver <- solver
  if (!missing(alpha))
    parms$alpha <- alpha
  if (!missing(lambda))
    parms$lambda <- lambda
  if (!missing(lambda_search))
    parms$lambda_search <- lambda_search
  if (!missing(early_stopping))
    parms$early_stopping <- early_stopping
  if (!missing(nlambdas))
    parms$nlambdas <- nlambdas
  if (!missing(standardize))
    parms$standardize <- standardize
  if (!missing(plug_values))
    parms$plug_values <- plug_values
  if (!missing(compute_p_values))
    parms$compute_p_values <- compute_p_values
  if (!missing(dispersion_parameter_method))
    parms$dispersion_parameter_method <- dispersion_parameter_method
  if (!missing(init_dispersion_parameter))
    parms$init_dispersion_parameter <- init_dispersion_parameter
  if (!missing(remove_collinear_columns))
    parms$remove_collinear_columns <- remove_collinear_columns
  if (!missing(intercept))
    parms$intercept <- intercept
  if (!missing(non_negative))
    parms$non_negative <- non_negative
  if (!missing(max_iterations))
    parms$max_iterations <- max_iterations
  if (!missing(objective_epsilon))
    parms$objective_epsilon <- objective_epsilon
  if (!missing(beta_epsilon))
    parms$beta_epsilon <- beta_epsilon
  if (!missing(gradient_epsilon))
    parms$gradient_epsilon <- gradient_epsilon
  if (!missing(link))
    parms$link <- link
  if (!missing(rand_link))
    parms$rand_link <- rand_link
  if (!missing(startval))
    parms$startval <- startval
  if (!missing(calc_like))
    parms$calc_like <- calc_like
  if (!missing(HGLM))
    parms$HGLM <- HGLM
  if (!missing(prior))
    parms$prior <- prior
  if (!missing(cold_start))
    parms$cold_start <- cold_start
  if (!missing(lambda_min_ratio))
    parms$lambda_min_ratio <- lambda_min_ratio
  if (!missing(max_active_predictors))
    parms$max_active_predictors <- max_active_predictors
  if (!missing(interaction_pairs))
    parms$interaction_pairs <- interaction_pairs
  if (!missing(obj_reg))
    parms$obj_reg <- obj_reg
  if (!missing(stopping_rounds))
    parms$stopping_rounds <- stopping_rounds
  if (!missing(stopping_metric))
    parms$stopping_metric <- stopping_metric
  if (!missing(stopping_tolerance))
    parms$stopping_tolerance <- stopping_tolerance
  if (!missing(balance_classes))
    parms$balance_classes <- balance_classes
  if (!missing(class_sampling_factors))
    parms$class_sampling_factors <- class_sampling_factors
  if (!missing(max_after_balance_size))
    parms$max_after_balance_size <- max_after_balance_size
  if (!missing(max_runtime_secs))
    parms$max_runtime_secs <- max_runtime_secs
  if (!missing(custom_metric_func))
    parms$custom_metric_func <- custom_metric_func
  if (!missing(generate_scoring_history))
    parms$generate_scoring_history <- generate_scoring_history
  if (!missing(auc_type))
    parms$auc_type <- auc_type
  if (!missing(dispersion_epsilon))
    parms$dispersion_epsilon <- dispersion_epsilon
  if (!missing(tweedie_epsilon))
    parms$tweedie_epsilon <- tweedie_epsilon
  if (!missing(max_iterations_dispersion))
    parms$max_iterations_dispersion <- max_iterations_dispersion
  if (!missing(build_null_model))
    parms$build_null_model <- build_null_model
  if (!missing(fix_dispersion_parameter))
    parms$fix_dispersion_parameter <- fix_dispersion_parameter
  if (!missing(generate_variable_inflation_factors))
    parms$generate_variable_inflation_factors <- generate_variable_inflation_factors
  if (!missing(fix_tweedie_variance_power))
    parms$fix_tweedie_variance_power <- fix_tweedie_variance_power
  if (!missing(dispersion_learning_rate))
    parms$dispersion_learning_rate <- dispersion_learning_rate
  if (!missing(influence))
    parms$influence <- influence

  if( !missing(interactions) ) {
    # interactions are column names => as-is
    if( is.character(interactions) )       parms$interactions <- interactions
    else if( is.numeric(interactions) )    parms$interactions <- names(training_frame)[interactions]
    else stop("Don't know what to do with interactions. Supply vector of indices or names")
  }
  # For now, accept nfolds in the R interface if it is 0 or 1, since those values really mean do nothing.
  # For any other value, error out.
  # Expunge nfolds from the message sent to H2O, since H2O doesn't understand it.
  if (!missing(nfolds) && nfolds > 1)
    parms$nfolds <- nfolds
  if(!missing(beta_constraints))
    parms$beta_constraints <- beta_constraints
  if (!missing(missing_values_handling))
    parms$missing_values_handling <- missing_values_handling

  # Build segment-models specific parameters
  segment_parms <- list()
  if (!missing(segment_columns))
    segment_parms$segment_columns <- segment_columns
  if (!missing(segment_models_id))
    segment_parms$segment_models_id <- segment_models_id
  segment_parms$parallelism <- parallelism

  # Error check and build segment models
  segment_models <- .h2o.segmentModelsJob('glm', segment_parms, parms, h2oRestApiVersion=3)
  return(segment_models)
}


#' Set betas of an existing H2O GLM Model
#'
#' This function allows you to set the betas (coefficients) of an existing GLM model.
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
#' @param beta a new set of betas (a named vector).
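#' @return an \linkS4class{H2OModel} built from the supplied betas.
#' @examples
#' \dontrun{
#' # A minimal sketch of typical usage, using the bundled prostate dataset as in
#' # the h2o.glm examples; the choice of predictors and the coefficient override
#' # are illustrative only.
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial")
#' betas <- h2o.coef(model)
#' betas["AGE"] <- 0  # override a single coefficient
#' custom_model <- h2o.makeGLMModel(model, betas)
#' }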
#' @export
h2o.makeGLMModel <- function(model,beta) {
  res = .h2o.__remoteSend(method="POST", .h2o.__GLMMakeModel, model=model@model_id, names = paste("[",paste(paste("\"",names(beta),"\"",sep=""), collapse=","),"]",sep=""), beta = paste("[",paste(as.vector(beta),collapse=","),"]",sep=""))
  m <- h2o.getModel(model_id=res$model_id$name)
  m@model$coefficients <- m@model$coefficients_table[,2]
  names(m@model$coefficients) <- m@model$coefficients_table[,1]
  m
}

#' Extract best lambda value found from glm model.
#'
#' This function returns the lambda value that gave the best model performance during lambda search.
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
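#' @return The best lambda value as a numeric value.
#' @examples
#' \dontrun{
#' # A minimal sketch, assuming a model trained with lambda_search = TRUE on the
#' # bundled prostate dataset (as in the h2o.glm examples).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial",
#'                  lambda_search = TRUE)
#' h2o.getLambdaBest(model)
#' }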
#' @export
h2o.getLambdaBest <- function(model) {
  model@model$lambda_best
}

#' Extract the maximum lambda value used during lambda search from glm model.
#'
#' This function returns the maximum lambda value used during lambda search (the smallest lambda that drives all
#' coefficients to zero).
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
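#' @return The maximum lambda value as a numeric value; signals an error if the model
#'         was built without lambda search.
#' @examples
#' \dontrun{
#' # A minimal sketch; h2o.getLambdaMax() requires a model trained with
#' # lambda_search = TRUE (prostate dataset as in the h2o.glm examples).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial",
#'                  lambda_search = TRUE)
#' h2o.getLambdaMax(model)
#' }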
#' @export
h2o.getLambdaMax <- function(model) {
  lambdaMax <- model@model$lambda_max
  if (lambdaMax < 0) # -1 if lambda_search=FALSE
    stop("getLambdaMax(model) can only be called when lambda_search=True or when you have multiple lambda values to try.")
  else 
    lambdaMax
}

#' Extract best alpha value found from glm model.
#'
#' This function returns the alpha value that gave the best model performance.
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
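#' @return The best alpha value as a numeric value.
#' @examples
#' \dontrun{
#' # A minimal sketch; alpha_best is most informative when several alpha values are
#' # tried (prostate dataset as in the h2o.glm examples; the alpha grid is illustrative).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial",
#'                  alpha = c(0, 0.5, 1), lambda_search = TRUE)
#' h2o.getAlphaBest(model)
#' }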
#' @export
h2o.getAlphaBest <- function(model) {
  model@model$alpha_best
}

#' Extract the minimum lambda value calculated during lambda search from glm model.
#' Note that due to early stopping, this minimum lambda value may not be used in the actual lambda search.
#'
#' This function returns the minimum lambda value calculated during lambda search.
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
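#' @return The minimum lambda value as a numeric value; signals an error if the model
#'         was built without lambda search.
#' @examples
#' \dontrun{
#' # A minimal sketch; h2o.getLambdaMin() requires a model trained with
#' # lambda_search = TRUE (prostate dataset as in the h2o.glm examples).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial",
#'                  lambda_search = TRUE)
#' h2o.getLambdaMin(model)
#' }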
#' @export
h2o.getLambdaMin <- function(model) {
  lambdaMin <- model@model$lambda_min # will be -1 if lambda_search=FALSE
  if (lambdaMin < 0)
    stop("getLambdaMin(model) can only be called when lambda_search=True or when you have multiple lambda values to try.")
  else 
    lambdaMin
}

#' Extract full regularization path from a GLM model
#'
#' Extract the full regularization path from a GLM model (assuming it was run with the lambda search option).
#'
#' @param model an \linkS4class{H2OModel} returned by a \code{h2o.glm} call.
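#' @return A list describing the regularization path, including a coefficients matrix with
#'         one row per lambda tried (columns named after the coefficients) and, when
#'         available, the corresponding standardized coefficients.
#' @examples
#' \dontrun{
#' # A minimal sketch, assuming a model trained with lambda_search = TRUE
#' # (prostate dataset as in the h2o.glm examples).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' model <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"),
#'                  training_frame = prostate, family = "binomial",
#'                  lambda_search = TRUE)
#' reg_path <- h2o.getGLMFullRegularizationPath(model)
#' head(reg_path$coefficients)  # one row of coefficients per lambda tried
#' }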
#' @export
h2o.getGLMFullRegularizationPath <- function(model) {
  res = .h2o.__remoteSend(method="GET", .h2o.__GLMRegPath, model=model@model_id)
  colnames(res$coefficients) <- res$coefficient_names
  if(!is.null(res$coefficients_std) && length(res$coefficients_std) > 0L) {
    colnames(res$coefficients_std) <- res$coefficient_names
  }
  res
}

#' Compute weighted gram matrix.
#'
#' @param X an \linkS4class{H2OFrame} containing the data for which to compute the gram matrix.
#' @param weights character string giving the name of the weights column in the frame.
#' @param use_all_factor_levels logical flag telling h2o whether or not to skip the first level of categorical variables during one-hot encoding.
#' @param standardize logical flag telling h2o whether or not to standardize the data.
#' @param skip_missing logical flag telling h2o whether to skip rows with missing data or impute them with the mean.
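#' @return an \linkS4class{H2OFrame} containing the weighted gram matrix.
#' @examples
#' \dontrun{
#' # A minimal sketch: compute the gram matrix of a few numeric columns of the
#' # bundled prostate dataset (the column choice is illustrative only).
#' h2o.init()
#' prostate_path <- system.file("extdata", "prostate.csv", package = "h2o")
#' prostate <- h2o.importFile(path = prostate_path)
#' gram <- h2o.computeGram(prostate[, c("AGE", "PSA", "VOL", "GLEASON")])
#' gram
#' }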
#' @export
h2o.computeGram <- function(X,weights="", use_all_factor_levels=FALSE,standardize=TRUE,skip_missing=FALSE) {
  res = .h2o.__remoteSend(method="GET", .h2o.__ComputeGram, X=h2o.getId(X),W=weights,use_all_factor_levels=use_all_factor_levels,standardize=standardize,skip_missing=skip_missing)
  h2o.getFrame(res$destination_frame$name)
}

##' Start an H2O Generalized Linear Model Job
##'
##' Creates a background H2O GLM job.
##' @inheritParams h2o.glm
##' @return Returns a \linkS4class{H2OModelFuture} class object.
##' @export
#h2o.startGLMJob <- function(x, y, training_frame, model_id, validation_frame,
#                    #AUTOGENERATED Params
#                    max_iterations = 50,
#                    beta_epsilon = 0,
#                    solver = c("IRLSM", "L_BFGS"),
#                    standardize = TRUE,
#                    family = c("gaussian", "binomial", "poisson", "gamma", "tweedie"),
#                    link = c("family_default", "identity", "logit", "log", "inverse", "tweedie"),
#                    tweedie_variance_power = NaN,
#                    tweedie_link_power = NaN,
#                    alpha = 0.5,
#                    prior = 0.0,
#                    lambda = 1e-05,
#                    lambda_search = FALSE,
#                    nlambdas = -1,
#                    lambda_min_ratio = 1.0,
#                    nfolds = 0,
#                    beta_constraints = NULL,
#                    ...
#                    )
#{
#  # if (!is.null(beta_constraints)) {
#  #     if (!inherits(beta_constraints, "data.frame") && !is.H2OFrame("H2OFrame"))
#  #       stop(paste("`beta_constraints` must be an H2OH2OFrame or R data.frame. Got: ", class(beta_constraints)))
#  #     if (inherits(beta_constraints, "data.frame")) {
#  #       beta_constraints <- as.h2o(beta_constraints)
#  #     }
#  # }
#
#  if (!is.H2OFrame(training_frame))
#      tryCatch(training_frame <- h2o.getFrame(training_frame),
#               error = function(err) {
#                 stop("argument 'training_frame' must be a valid H2OFrame or model ID")
#              })
#
#    parms <- list()
#    args <- .verify_dataxy(training_frame, x, y)
#    parms$ignored_columns <- args$x_ignore
#    parms$response_column <- args$y
#    parms$training_frame  <- training_frame
#    parms$beta_constraints <- beta_constraints
#    if(!missing(model_id))
#      parms$model_id <- model_id
#    if(!missing(validation_frame))
#      parms$validation_frame <- validation_frame
#    if(!missing(max_iterations))
#      parms$max_iterations <- max_iterations
#    if(!missing(beta_epsilon))
#      parms$beta_epsilon <- beta_epsilon
#    if(!missing(solver))
#      parms$solver <- solver
#    if(!missing(standardize))
#      parms$standardize <- standardize
#    if(!missing(family))
#      parms$family <- family
#    if(!missing(link))
#      parms$link <- link
#    if(!missing(tweedie_variance_power))
#      parms$tweedie_variance_power <- tweedie_variance_power
#    if(!missing(tweedie_link_power))
#      parms$tweedie_link_power <- tweedie_link_power
#    if(!missing(alpha))
#      parms$alpha <- alpha
#    if(!missing(prior))
#      parms$prior <- prior
#    if(!missing(lambda))
#      parms$lambda <- lambda
#    if(!missing(lambda_search))
#      parms$lambda_search <- lambda_search
#    if(!missing(nlambdas))
#      parms$nlambdas <- nlambdas
#    if(!missing(lambda_min_ratio))
#      parms$lambda_min_ratio <- lambda_min_ratio
#    if(!missing(nfolds))
#      parms$nfolds <- nfolds
#
#    .h2o.startModelJob('glm', parms, h2oRestApiVersion=.h2o.__REST_API_VERSION)
#}
