Description
xtune uses an Empirical Bayes approach to integrate external information into penalized linear regression models. It fits models with a differing amount of shrinkage for each regression coefficient based on the external information.
Usage

xtune(X, Y, Z = NULL, family = c("linear", "binary"), sigma.square = NULL,
      method = c("lasso", "ridge"), message = TRUE, control = list())
Arguments

X
Numeric design matrix of explanatory variables (n observations in rows, p predictors in columns), without an intercept.

Y
Outcome vector of length n. Quantitative for family = "linear"; a 0/1 binary outcome for family = "binary".

Z
Numeric information matrix about the predictors (p rows, each corresponding to a predictor in X; q columns of external information about the predictors, such as prior biological importance). If Z encodes a grouping of the predictors, it is best coded as dummy variables (i.e., each column indicating whether a predictor belongs to a specific group).

family
Response type. "linear" for a continuous outcome, "binary" for a 0/1 binary outcome.

sigma.square
A user-supplied noise variance estimate. Typically this is left unspecified, and the function automatically computes an estimate of sigma square using the R package selectiveInference.

method
The type of regularization applied in the model: method = "lasso" for Lasso regression, method = "ridge" for Ridge regression.

message
Whether to generate diagnostic messages during model fitting. Default is TRUE.

control
A list of control parameters for the model-fitting algorithm.
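As a hedged illustration of the sigma.square argument (not part of the original help page, and using the packaged example data), a user-supplied noise variance estimate could be passed in as follows:

## Minimal sketch: supplying a user-specified noise variance estimate.
## The crude estimate below (the marginal variance of Y) is only an
## illustration; under the linear model it upper-bounds the true noise variance.
library(xtune)
data(example)
sigma2.hat <- var(example$Y)
fit <- xtune(example$X, example$Y, sigma.square = sigma2.hat)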
Details

xtune has two main usages:

The basic usage is to choose the tuning parameter λ in Lasso and Ridge regression using an Empirical Bayes approach, as an alternative to the widely used cross-validation. This is done by calling xtune without specifying the external information matrix Z.

More importantly, if an external information matrix Z about the predictors X is provided, xtune allows differential shrinkage parameters for the regression coefficients in penalized regression models. The idea is that Z might be informative about the effect sizes of the regression coefficients, so Z can be used to guide the penalized regression model.

Please note that the number of rows in Z must match the number of columns in X, since each row of Z corresponds to a predictor in X and each column of Z is a feature describing the predictors.

A majorization-minimization procedure is employed to fit xtune.
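To make the dummy coding of Z concrete, here is a small hypothetical sketch (not from the original page) for four predictors split into two groups:

## Hypothetical Z: each row corresponds to a predictor in X,
## each column indicates membership in one group.
Z <- matrix(c(1, 0,
              1, 0,
              0, 1,
              0, 1),
            nrow = 4, byrow = TRUE,
            dimnames = list(paste0("predictor", 1:4),
                            c("group1", "group2")))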
Value

An object with S3 class "xtune" containing:

beta.est
The fitted vector of coefficients.

penalty.vector
The estimated penalty vector applied to each regression coefficient. Similar to the penalty.factor argument in glmnet.

lambda
The estimated λ value. Note that the lambda value is calculated to reflect the fact that penalty factors are internally rescaled to sum to nvars in glmnet.

n_iter
Number of iterations used until convergence.

method
Same as in the argument above.

sigma.square
The estimated sigma square value, computed as described for the sigma.square argument above.

family
Same as in the argument above.

likelihood
A vector containing the marginal likelihood value of the fitted model at each iteration.
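To make the rescaling note above concrete, here is a minimal sketch (not from the original page; pf is a hypothetical penalty vector) of how glmnet rescales penalty factors to sum to nvars:

pf <- c(0.5, 1, 2, 4)                  # hypothetical per-coefficient penalty factors
nvars <- length(pf)
pf.rescaled <- pf * nvars / sum(pf)    # glmnet's internal rescaling
sum(pf.rescaled)                       # equals nvars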
Author(s)

Chubing Zeng
See Also

predict.xtune, as well as glmnet.
Examples

## use simulated example data
set.seed(9)
data(example)
X <- example$X
Y <- example$Y
Z <- example$Z

## Empirical Bayes tuning to estimate the tuning parameter, as an alternative to cross-validation:
fit.eb <- xtune(X, Y)
fit.eb$lambda

## compare with the tuning parameter chosen by cross-validation, using glmnet
## Not run:
fit.cv <- cv.glmnet(X, Y, alpha = 1)
fit.cv$lambda.min
## End(Not run)

## Differential shrinkage based on external information Z:
fit.diff <- xtune(X, Y, Z)
fit.diff$penalty.vector
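Since penalty.vector plays the same role as glmnet's penalty.factor argument, one could, as a hedged illustration (not from the original page, and assuming fit.diff from the example above), refit with glmnet using the estimated penalties:

## Sketch: pass the estimated penalty vector to glmnet as penalty factors.
library(glmnet)
fit.glmnet <- glmnet(X, Y, penalty.factor = fit.diff$penalty.vector)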