GBlockBoost: Computation of the GBlockBoost Algorithm or Componentwise Boosting

Description

This function fits a GLM by penalized likelihood inference using the GBlockBoost algorithm. It is primarily intended for internal use; you can access it via the argument setting method = "GBlockBoost" in lqa, cv.lqa or plot.lqa. If componentwise = TRUE, componentwise boosting is applied instead.

Usage

   GBlockBoost (x, y, family = NULL, penalty = NULL, intercept = TRUE,
       weights = rep (1, nobs), control = lqa.control (),
       componentwise, ...)
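
A minimal sketch of a direct call (the function is normally invoked internally via method = "GBlockBoost" in lqa). The simulated data, the binomial family and the lambda value are purely illustrative assumptions; lasso () is the penalty constructor named under the penalty argument below.

   ## Not run: direct call on simulated, standardized data (illustrative only)
   library (lqa)
   set.seed (1)
   n <- 100
   x <- scale (matrix (rnorm (n * 5), nrow = n))   # standardized regressors, no column of ones
   eta <- 0.5 + x %*% c (1, -0.8, 0, 0, 0.6)       # linear predictor
   y <- rbinom (n, size = 1, prob = plogis (eta))  # binary response
   fit <- GBlockBoost (x, y, family = binomial (),
       penalty = lasso (lambda = 1.7), intercept = TRUE)
   fit$coefficients   # standardized coefficient estimates
   fit$m.stop         # iteration at which the AIC is minimal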

Arguments

x

matrix of standardized regressors. This matrix does not need to include a first column of ones when a GLM with intercept is to be fitted.

y

vector of observed response values.

family

a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. See family() for further details.

penalty

a description of the penalty to be used in the fitting procedure, e.g. penalty = lasso (lambda = 1.7).

intercept

a logical object indicating whether the model should include an intercept (this is recommended) or not. The default value is intercept = TRUE.

weights

an optional vector of observation weights; defaults to rep (1, nobs).

control

a list of parameters for controlling the fitting process. See lqa.control.

componentwise

if TRUE then componentwise boosting is applied, i.e. only a single regressor is updated during each iteration; otherwise GBlockBoost is applied. If this argument is missing and the penalty is ridge, it is set to componentwise = TRUE, which coincides with an application of the RidgeBoost algorithm (see the sketch after this list). In all other cases a missing argument defaults to componentwise = FALSE.

...

further arguments.
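
As a brief sketch of the componentwise special case described above: with a ridge penalty and componentwise left missing, componentwise = TRUE is chosen automatically, so the fit coincides with RidgeBoost. The call reuses the illustrative x and y from the sketch above and assumes the package's ridge () penalty constructor; the lambda value is arbitrary.

   ## Not run: ridge penalty with componentwise left missing (RidgeBoost)
   fit.rb <- GBlockBoost (x, y, family = binomial (),
       penalty = ridge (lambda = 2))   # componentwise defaults to TRUE here
   fit.rb$stop.at                      # iterations until convergence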

Details

The GBlockBoost algorithm was introduced in Ulbricht & Tutz (2008). For a more detailed technical description, also covering componentwise boosting, see Ulbricht (2010).

Value

GBlockBoost returns a list containing the following elements:

coefficients

the vector of standardized estimated coefficients.

beta.mat

matrix of the estimated coefficients from all iterations (one row per iteration).

m.stop

the number of iterations until AIC reaches its minimum.

stop.at

the number of iterations until convergence.

aic.vec

vector of the AIC criterion values through all iterations.

bic.vec

vector of the BIC criterion values through all iterations.

converged

a logical value; TRUE if the algorithm has converged.

min.aic

minimum value of the AIC criterion.

min.bic

minimum value of the BIC criterion.

tr.H

the trace of the hat matrix.

tr.Hatmat

vector of hat matrix traces through all iterations.

dev.m

vector of deviances through all iterations.
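
The following sketch (reusing the fit object from the first example) shows one way these components might be inspected; the plotting choices are illustrative only.

   ## Not run: inspecting the returned components of 'fit'
   plot (fit$aic.vec, type = "l", xlab = "iteration", ylab = "AIC")   # AIC path
   abline (v = fit$m.stop, lty = 2)        # AIC-optimal stopping iteration
   matplot (fit$beta.mat, type = "l",      # coefficient paths, one row per iteration
       xlab = "iteration", ylab = "coefficient")
   fit$converged                           # TRUE if the algorithm converged
   c (fit$min.aic, fit$min.bic)            # minimal AIC and BIC values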

Author(s)

Jan Ulbricht

References

Ulbricht, J. (2010). Variable Selection in Generalized Linear Models. Ph.D. thesis, LMU Munich.

Ulbricht, J. & Tutz, G. (2008). Boosting correlation based penalization in generalized linear models. In Shalabh & C. Heumann (eds.), Recent Advances in Linear Models and Related Areas. Heidelberg: Springer.

See Also

lqa, ForwardBoost

