xtune R Documentation
xtune
uses an Empirical Bayes approach to integrate external information into regularized regression models for both linear and categorical outcomes. It fits models with feature-specific penalty parameters guided by that external information.
xtune(
X,
Y,
Z = NULL,
U = NULL,
family = c("linear", "binary", "multiclass"),
c = 0.5,
epsilon = 5,
sigma.square = NULL,
message = TRUE,
control = list()
)
X |
Numeric design matrix of explanatory variables (n observations in rows, p predictors in columns). |
Y |
Outcome vector of dimension n. |
Z |
Numeric information matrix about the predictors (p rows, one per predictor in X; each column is an external feature). Default is NULL, in which case no external information is incorporated. |
U |
Covariates to be adjusted in the model (matrix with n rows). Default is NULL. |
family |
The model family, determined by the outcome type: "linear", "binary", or "multiclass". |
c |
The elastic-net mixing parameter ranging from 0 to 1. When c = 1, the penalty is the lasso; when c = 0, it is ridge. Default is 0.5. |
epsilon |
The parameter that controls the boundary of the second-level coefficient alpha. Default is 5. |
sigma.square |
A user-supplied noise variance estimate. Typically, this is left unspecified (NULL), and the function automatically computes an estimated sigma square value using the R package selectiveInference. |
message |
Generates diagnostic messages during model fitting. Default is TRUE. |
control |
Specifies the model-fitting control options. See xtune.control for details. |
xtune
has two main usages:
The basic usage is to choose the tuning parameter \lambda
in elastic-net regression using an
Empirical Bayes approach, as an alternative to the widely used cross-validation. This is done by calling xtune
without specifying the external information matrix Z.
More importantly, if external information Z about the predictors X is provided, xtune
allows predictor-specific shrinkage
parameters for the regression coefficients in penalized regression models. The idea is that Z might be informative for the effect sizes of the regression coefficients, so Z can guide the amount of shrinkage applied to each coefficient.
Please note that the number of rows in Z should match the number of columns in X, since each row of Z corresponds to one predictor and each column of Z is a feature about X. See the package documentation for more details on how to specify Z.
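As a sketch of this dimension requirement (illustrative sizes and simulated values only, not the package's example data): if X has p columns, Z must have p rows, with each column of Z holding one external feature.

```r
## Illustrative check of the X/Z dimension requirement (base R only)
set.seed(1)
n <- 50; p <- 10; q <- 3             # hypothetical sizes
X <- matrix(rnorm(n * p), nrow = n)  # n observations x p predictors
Z <- matrix(rbinom(p * q, 1, 0.5),   # p rows: one per predictor (column of X)
            nrow = p)                # q columns: external features, e.g. annotations
stopifnot(nrow(Z) == ncol(X))        # the requirement stated above
```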
A majorization-minimization procedure is employed to fit xtune.
An object with S3 class xtune containing:
beta.est |
The fitted vector of coefficients. |
penalty.vector |
The estimated penalty vector applied to each regression coefficient, similar to the penalty.factor argument in glmnet. |
lambda |
The estimated \lambda value. |
alpha.est |
The estimated second-level coefficient for prior covariate Z. The first value is the intercept of the second-level coefficient. |
n_iter |
Number of iterations used until convergence. |
method |
Same as in the argument above. |
sigma.square |
The estimated sigma square value, computed using the R package selectiveInference if not supplied by the user. |
family |
Same as in the argument above. |
likelihood.score |
A vector containing the marginal likelihood value of the fitted model at each iteration. |
Jingxuan He and Chubing Zeng
predict_xtune, as well as glmnet.
## use simulated example data
set.seed(1234567)
data(example)
X <- example$X
Y <- example$Y
Z <- example$Z
## Empirical Bayes tuning to estimate tuning parameter, as an alternative to cross-validation:
fit.eb <- xtune(X=X,Y=Y, family = "linear")
fit.eb$lambda
### compare with tuning parameter chosen by cross-validation, using glmnet
fit.cv <- glmnet::cv.glmnet(x=X,y=Y,alpha = 0.5)
fit.cv$lambda.min
## Feature-specific penalties based on external information Z:
fit.diff <- xtune(X=X,Y=Y,Z=Z, family = "linear")
fit.diff$penalty.vector
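A possible continuation of the examples (assuming the xtune package is installed, and assuming predict_xtune, listed above, takes the fitted object and a newX matrix) would obtain fitted values:

```r
## Hypothetical continuation: predictions with predict_xtune
library(xtune)
data(example)
fit.eb <- xtune(X = example$X, Y = example$Y, family = "linear")
pred <- predict_xtune(fit.eb, newX = example$X)
length(pred)   # one fitted value per row of newX
```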