gpe_rules_pre: Get rule learner for gpe which mimics behavior of pre

View source: R/pre.R

gpe_rules_pre R Documentation

Get rule learner for gpe which mimics behavior of pre


gpe_rules_pre returns a rule learner that generates rules in the same way as pre. The resulting learner can be supplied to the base_learners argument of gpe.


Usage

gpe_rules_pre(
  learnrate = 0.01,
  par.init = FALSE,
  mtry = Inf,
  maxdepth = 3L,
  ntrees = 500,
  tree.control = ctree_control(),
  use.grad = TRUE,
  removeduplicates = TRUE,
  removecomplements = TRUE,
  tree.unbiased = TRUE
)



Arguments

learnrate: numeric value > 0. Learning rate or boosting parameter.


par.init: logical. Should parallel foreach be used to generate the initial ensemble? Only used when learnrate == 0. Note: a parallel backend (e.g., doMC) must be registered beforehand. Furthermore, setting par.init = TRUE will likely only increase computation time for smaller datasets.
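As a minimal sketch of registering a backend before enabling par.init (assuming the doParallel package is installed; doMC or another foreach backend would work equally well):

```r
## Register a parallel backend before requesting parallel ensemble
## generation; adjust the number of workers to your machine.
library(doParallel)
registerDoParallel(cores = 2)

## par.init is only effective when learnrate == 0:
# lrn <- gpe_rules_pre(learnrate = 0, par.init = TRUE)
```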


mtry: positive integer. Number of randomly selected predictor variables for creating each split in each tree. Ignored when tree.unbiased = FALSE.


maxdepth: positive integer. Maximum number of conditions in rules. If length(maxdepth) == 1, it specifies the maximum depth of each tree grown. If length(maxdepth) == ntrees, it specifies the maximum depth of each consecutive tree grown. Alternatively, a random sampling function may be supplied, which takes argument ntrees and returns integer values. See also maxdepth_sampler.
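A sketch of the three ways maxdepth can be specified (assuming the pre package is loaded; the tree-depth values are illustrative):

```r
## A single depth applied to every tree:
gpe_rules_pre(maxdepth = 4L)

## One depth per tree; the vector length must equal ntrees (500 by default):
gpe_rules_pre(maxdepth = rep(2:3, length.out = 500))

## A random sampling function, e.g. the one returned by maxdepth_sampler():
gpe_rules_pre(maxdepth = maxdepth_sampler())
```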


ntrees: positive integer value. Number of trees to generate for the initial ensemble.


tree.control: list with control parameters to be passed to the tree fitting function, generated using ctree_control, mob_control (if use.grad = FALSE), or rpart.control (if tree.unbiased = FALSE).


use.grad: logical. Should gradient boosting with regression trees be employed when learnrate > 0? If TRUE, trees fitted by ctree or rpart are used, as in Friedman (2001), but without the line search. If use.grad = FALSE, glmtree instead of ctree will be employed for rule induction, yielding longer computation times and higher complexity, but possibly higher predictive accuracy. See Details for supported combinations of family, use.grad and learnrate.


removeduplicates: logical. Remove rules from the ensemble which are identical to an earlier rule?


removecomplements: logical. Remove rules from the ensemble which are identical to (1 - an earlier rule)?


tree.unbiased: logical. Should an unbiased tree generation algorithm be employed for rule generation? Defaults to TRUE; if set to FALSE, rules will be generated employing the CART algorithm (which suffers from biased variable selection) as implemented in rpart. See details below for possible combinations with family, use.grad and learnrate.
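A hedged sketch of switching rule generation to CART (assuming the pre and rpart packages are loaded; note that tree.control must then be built with rpart.control rather than ctree_control):

```r
library(rpart)

## Generate rules with rpart's CART implementation instead of ctree:
cart_learner <- gpe_rules_pre(tree.unbiased = FALSE,
                              tree.control = rpart.control(maxdepth = 3L))
```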


Examples

## Obtain same fits with pre and gpe
gpe.mod <- gpe(Ozone ~ ., data = airquality[complete.cases(airquality), ],
               base_learners = list(gpe_rules_pre(), gpe_linear()))
pre.mod <- pre(Ozone ~ ., data = airquality[complete.cases(airquality), ])

pre documentation built on June 11, 2022, 1:10 a.m.