xtune: Tuning differential shrinkage parameters in penalized...

Description Usage Arguments Details Value Author(s) See Also Examples

Description

xtune uses an Empirical Bayes approach to integrate external information into penalized linear regression models. It fits models with differential amounts of shrinkage for each regression coefficient based on external information.

Usage

xtune(X, Y, Z = NULL, family = c("linear", "binary"), sigma.square = NULL,
  method = c("lasso", "ridge"), message = TRUE, control = list())

Arguments

X

Numeric design matrix of explanatory variables (n observations in rows, p predictors in columns), without an intercept. xtune includes an intercept by default.

Y

Outcome vector of dimension n. Quantitative for family="linear"; a 0/1 binary outcome variable for family="binary".

Z

Numeric information matrix about the predictors (p rows, each corresponding to a predictor in X; q columns of external information about the predictors, such as prior biological importance). If Z encodes a grouping of the predictors, it is best if the user codes it as dummy variables (i.e. each column indicating whether a predictor belongs to a specific group).

family

Response type. "linear" for continuous outcome, "binary" for 0/1 binary outcome.

sigma.square

A user-supplied noise variance estimate. Typically, this is left unspecified, and the function automatically computes an estimated sigma square value using the R package selectiveInference.

method

The type of regularization applied in the model: method = 'lasso' for Lasso regression, method = 'ridge' for Ridge regression.

message

Generates diagnostic messages during model fitting. Default is TRUE.

control

Specifies xtune control object. See xtune.control for more details.

Details

xtune has two main usages: if no external information Z is provided, it estimates a single tuning parameter by Empirical Bayes, as an alternative to cross-validation; if an external information matrix Z is provided, it estimates a separate shrinkage parameter for each regression coefficient, informed by Z.

Please note that the number of rows in Z should match the number of columns in X, since each row of Z corresponds to a predictor in X and each column of Z is a feature describing the predictors.
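
For illustration, a minimal sketch (with hypothetical variable names, not from the package) of coding Z as dummy variables when the external information is a grouping of the predictors:

## Hypothetical example: 6 predictors falling into 3 groups
grp <- factor(c(1, 1, 2, 2, 3, 3))
## one indicator column per group; rows of Z correspond to the columns of X
Z_group <- model.matrix(~ grp - 1)
dim(Z_group)  # 6 rows (p predictors) by 3 columns (q groups)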

A majorization-minimization procedure is employed to fit xtune.
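
For background, the general majorization-minimization (MM) principle, stated here generically (the exact surrogate used internally is not documented on this page): at iteration t a surrogate g(theta | theta^(t)) is constructed that majorizes the objective f(theta), i.e. g(theta | theta^(t)) >= f(theta) for all theta with equality at theta = theta^(t), and the next iterate minimizes the surrogate,

    theta^(t+1) = argmin_theta g(theta | theta^(t)),

which guarantees monotone progress, f(theta^(t+1)) <= g(theta^(t+1) | theta^(t)) <= g(theta^(t) | theta^(t)) = f(theta^(t)). When the quantity of interest is a marginal likelihood to be maximized (see the likelihood component below), the equivalent minorize-maximize form applies.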

Value

An object with S3 class xtune containing:

beta.est

The fitted vector of coefficients.

penalty.vector

The estimated penalty vector applied to each regression coefficient. Similar to the penalty.factor argument in glmnet.

lambda

The estimated λ value. Note that the lambda value is calculated to reflect the fact that penalty factors are internally rescaled to sum to nvars in glmnet. Similar to the lambda argument in glmnet.

n_iter

Number of iterations used until convergence.

method

Same as the method argument above.

sigma.square

The estimated sigma square value using estimateVariance, if sigma.square is left unspecified.

family

Same as the family argument above.

likelihood

A vector containing the marginal likelihood value of the fitted model at each iteration.

Author(s)

Chubing Zeng

See Also

predict.xtune, as well as glmnet.

Examples

## use simulated example data
set.seed(9)
data(example)
X <- example$X
Y <- example$Y
Z <- example$Z

## Empirical Bayes tuning to estimate the tuning parameter, as an alternative to cross-validation:
fit.eb <- xtune(X,Y)
fit.eb$lambda

## Compare with the tuning parameter chosen by cross-validation, using glmnet
## Not run: 
library(glmnet)
fit.cv <- cv.glmnet(X, Y, alpha = 1)
fit.cv$lambda.min

## End(Not run)
## Differential shrinkage based on external information Z:
fit.diff <- xtune(X,Y,Z)
fit.diff$penalty.vector
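
The penalty.vector and lambda components are described in the Value section as analogues of glmnet's penalty.factor and lambda arguments. Below is a hedged sketch (not an officially documented workflow) of passing them back to glmnet, followed by a hypothetical call to the predict method listed in See Also; it assumes the glmnet package is loaded as above and that predict.xtune accepts a new design matrix as its second argument.

## Not run: 
## sketch: refit with glmnet using the estimated differential penalties
fit.glmnet <- glmnet(X, Y, alpha = 1,
                     penalty.factor = fit.diff$penalty.vector,
                     lambda = fit.diff$lambda)
coef(fit.glmnet)

## hypothetical call to the predict method from See Also
predict(fit.diff, X)

## End(Not run)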
