s_LIHADBoost: R Documentation
Boost a Linear Hard Additive Tree (LIHAD, i.e. LINAD with hard splits)
s_LIHADBoost(
  x,
  y = NULL,
  x.test = NULL,
  y.test = NULL,
  resid = NULL,
  boost.obj = NULL,
  learning.rate = 0.5,
  case.p = 1,
  max.depth = 5,
  gamma = 0.1,
  alpha = 0,
  lambda = 1,
  lambda.seq = NULL,
  minobsinnode = 2,
  minobsinnode.lin = 10,
  shrinkage = 1,
  part.minsplit = 2,
  part.xval = 0,
  part.max.depth = 1,
  part.cp = 0,
  part.minbucket = 5,
  lin.type = c("glmnet", "cv.glmnet", "lm.ridge", "allSubsets", "forwardStepwise",
    "backwardStepwise", "glm", "sgd", "solve", "none"),
  cv.glmnet.nfolds = 5,
  which.cv.glmnet.lambda = "lambda.min",
  max.iter = 10,
  tune.n.iter = TRUE,
  earlystop.params = setup.earlystop(),
  lookback = TRUE,
  init = NULL,
  .gs = FALSE,
  grid.resample.params = setup.resample("kfold", 5),
  gridsearch.type = "exhaustive",
  metric = NULL,
  maximize = NULL,
  cxrcoef = FALSE,
  print.progress.every = 5,
  print.error.plot = "final",
  x.name = NULL,
  y.name = NULL,
  question = NULL,
  base.verbose = FALSE,
  verbose = TRUE,
  grid.verbose = FALSE,
  trace = 0,
  prefix = NULL,
  plot.fitted = NULL,
  plot.predicted = NULL,
  plot.theme = rtTheme,
  print.plot = FALSE,
  print.base.plot = FALSE,
  print.tune.plot = TRUE,
  plot.type = "l",
  save.gridrun = FALSE,
  outdir = NULL,
  n.cores = rtCores,
  save.mod = ifelse(!is.null(outdir), TRUE, FALSE),
  ...
)
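A minimal usage sketch, assuming the rtemis package is installed and loaded; the synthetic data, the train/test split, and the learning.rate and max.iter values are illustrative only, not recommended settings. The arguments are described below the sketch.

library(rtemis)

# Synthetic regression data (illustrative only)
set.seed(2021)
x <- data.frame(matrix(rnorm(400 * 5), nrow = 400))
y <- x[, 1] + x[, 2]^2 + rnorm(400)

# Simple train/test split
test.idx <- sample(400, 100)
x.train <- x[-test.idx, ]; y.train <- y[-test.idx]
x.test  <- x[test.idx, ];  y.test  <- y[test.idx]

# Fit a boosted LIHAD model, tracking error on the held-out set
mod <- s_LIHADBoost(
  x.train, y.train,
  x.test = x.test, y.test = y.test,
  learning.rate = 0.1,
  max.iter = 20
)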
x: Numeric vector or matrix / data frame of features, i.e. independent variables
y: Numeric vector of outcome, i.e. dependent variable
x.test: Numeric vector or matrix / data frame of testing set features. Columns must correspond to columns in x
y.test: Numeric vector of testing set outcome
learning.rate: Float (0, 1]: Learning rate for the additive steps (see the sketch after this list)
max.iter: Integer: Maximum number of iterations (additive steps) to perform. Default = 10
init: Float: Initial value for prediction. Default = mean(y)
print.error.plot: String or Integer: "final" plots a training and validation (if available) error curve at the end of training; if an integer, plot the training and validation error curves every that many iterations during training
x.name: Character: Name for feature set
y.name: Character: Name for outcome
question: Character: The question you are attempting to answer with this model, in plain language
base.verbose: Logical: verbose argument passed to the base learner
verbose: Logical: If TRUE, print summary to screen
trace: Integer: If > 0, print diagnostic info to console
plot.fitted: Logical: If TRUE, plot True (y) vs Fitted
plot.predicted: Logical: If TRUE, plot True (y.test) vs Predicted. Requires x.test and y.test
plot.theme: Character: "zero", "dark", "box", "darkbox"
print.plot: Logical: If TRUE, produce plot using mplot3
print.base.plot: Logical: Passed to the base learner's print.plot argument
outdir: Path to output directory. If defined, will save the Predicted vs. True plot, if available, as well as full model output, if save.mod is TRUE
save.mod: Logical: If TRUE, save all output to an RDS file in outdir
...: Additional parameters to be passed to the base learner
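The sketch below is a conceptual illustration, not the rtemis implementation, of how init, learning.rate, and max.iter interact in the additive steps; fit_base_learner is a hypothetical placeholder standing in for fitting one LIHAD tree to the current residuals.

# Conceptual sketch of the boosting loop (NOT the package implementation)
boost_sketch <- function(x, y, learning.rate = 0.5, max.iter = 10, init = mean(y)) {
  Fx <- rep(init, length(y))                  # start from the initial prediction
  for (i in seq_len(max.iter)) {
    resid <- y - Fx                           # current residuals
    h <- fit_base_learner(x, resid)           # hypothetical base learner fit
    Fx <- Fx + learning.rate * predict(h, x)  # shrunken additive step
  }
  Fx
}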
By default, early stopping works by checking training loss.
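Because training loss is what early stopping monitors by default, supplying x.test and y.test (as in the usage sketch above) provides an independent error curve to inspect. The call below is an illustration only, reusing x.train/y.train from that sketch and arbitrary values for the tuning-related parameters shown in the usage.

# Illustrative only: tune the number of iterations with 10-fold resampling
# instead of the default 5-fold, and plot error curves every 2 iterations
mod.tuned <- s_LIHADBoost(
  x.train, y.train,
  x.test = x.test, y.test = y.test,
  tune.n.iter = TRUE,
  grid.resample.params = setup.resample("kfold", 10),
  print.error.plot = 2
)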
Author: E.D. Gennatas