View source: R/cv.pemultinom.R
cv.pemultinom | R Documentation |
Fit a multinomial regression model with Lasso penalty. This function implements the l1-penalized multinomial regression model (parameterized with a reference level). A cross-validation procedure is applied to choose the tuning parameter. See Tian et al. (2024) for details.
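The reference-level parameterization can be sketched as follows. This `softmax_probs` helper is illustrative only (it is not part of pemultinom): with K classes and the K-th class as the reference, the model assigns P(y = k | x) via a softmax over K-1 linear predictors, with the reference class fixed at linear predictor 0.

```r
# Illustrative helper (not part of pemultinom): beta is a (p+1) x (K-1)
# matrix whose first row holds the intercepts; the reference class
# contributes a linear predictor of 0.
softmax_probs <- function(x, beta) {
  eta <- c(beta[1, ] + x %*% beta[-1, , drop = FALSE], 0)
  exp(eta) / sum(exp(eta))
}
# With all coefficients zero, every class gets probability 1/K
probs <- softmax_probs(c(0.5, -1), matrix(0, nrow = 3, ncol = 2))
```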
cv.pemultinom(
x,
y,
ref = NULL,
nfolds = 5,
nlambda = 100,
max_iter = 200,
tol = 0.001,
ncores = 1,
standardized = TRUE,
weights = NULL,
info = TRUE,
lambda_min_ratio = 0.01
)
x |
the design/predictor matrix, each row of which is an observation vector. |
y |
the response variable. Can be of one type from factor/integer/character. |
ref |
the reference level. Default = NULL, which sets the reference level to the last category (in alphabetical order). |
nfolds |
the number of cross-validation folds to use. Default = 5. |
nlambda |
the number of penalty parameter candidates. Default = 100. |
max_iter |
the maximum number of iterations in each round of the coordinate descent algorithm. Default = 200. |
tol |
convergence threshold (tolerance level) for coordinate descent. Default = 1e-3. |
ncores |
the number of cores to use for parallel computing. Default = 1. |
standardized |
logical flag for x variable standardization, prior to fitting the model sequence. Default = TRUE. Note that the fitting results will be translated to the original scale before output. |
weights |
observation weights. Should be a vector of non-negative numbers of length n (the number of observations). Default = NULL, which sets equal weights for all observations. |
info |
whether to print progress information. Default = TRUE. |
lambda_min_ratio |
the ratio between lambda.min and lambda.max, where lambda.max is determined automatically from the data; the lambda sequence is then 'exp(seq(log(lambda.max), log(lambda.min), len = nlambda))'. Default = 0.01. |
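The lambda sequence described above can be constructed as in the sketch below; `lambda.max <- 1` is an assumed placeholder value, since cv.pemultinom computes lambda.max from the data.

```r
# Illustrative construction of the lambda sequence (lambda.max = 1 is an
# assumed placeholder; cv.pemultinom determines it automatically).
lambda.max <- 1
lambda_min_ratio <- 0.01
nlambda <- 100
lambda.min <- lambda_min_ratio * lambda.max
# log-equally-spaced grid from lambda.max down to lambda.min
lambda_seq <- exp(seq(log(lambda.max), log(lambda.min), length.out = nlambda))
```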
A list with the following components.
beta.list |
the estimates of coefficients. It is a list of which the k-th component is the contrast coefficient between class k and the reference class corresponding to different lambda values. The j-th column of each list component corresponds to the j-th lambda value. |
beta.1se |
the coefficient estimate corresponding to lambda.1se. It is a matrix, and the k-th column is the contrast coefficient between class k and the reference class. |
beta.min |
the coefficient estimate corresponding to lambda.min. It is a matrix, and the k-th column is the contrast coefficient between class k and the reference class. |
lambda.1se |
the largest value of lambda such that error is within 1 standard error of the minimum. See Chapter 2.3 of Hastie et al. (2015) for more details. |
lambda.min |
the value of lambda that gives minimum cvm. |
cvm |
the mean cross-validated error for each lambda value. |
cvsd |
the estimated standard error of cvm. |
lambda |
lambda values considered in the cross-validation process. |
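As an illustration of how lambda.min and lambda.1se relate to cvm, cvsd, and lambda (this is the one-standard-error rule in general, not the package's internal code; the cvm/cvsd values below are made-up toy numbers):

```r
# Toy cross-validation results (decreasing lambda grid, made-up errors)
lambda <- exp(seq(log(1), log(0.01), length.out = 10))
cvm  <- c(1.0, 0.8, 0.6, 0.5, 0.45, 0.44, 0.46, 0.5, 0.55, 0.6)
cvsd <- rep(0.05, 10)
i.min <- which.min(cvm)
lambda.min <- lambda[i.min]
# one-standard-error rule: largest lambda whose error is within
# 1 SE of the minimum cross-validated error
within.1se <- cvm <= cvm[i.min] + cvsd[i.min]
lambda.1se <- max(lambda[within.1se])
```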
Hastie, T., Tibshirani, R., & Wainwright, M. (2015). Statistical learning with sparsity. Monographs on statistics and applied probability, 143.
Tian, Y., Rusinek, H., Masurkar, A. V., & Feng, Y. (2024). L1‐Penalized Multinomial Regression: Estimation, Inference, and Prediction, With an Application to Risk Factor Identification for Different Dementia Subtypes. Statistics in Medicine, 43(30), 5711-5747.
See also: debiased_lasso, predict_pemultinom.
# generate data from a multinomial logistic regression model with n = 100, p = 50, and K = 3
set.seed(0, kind = "L'Ecuyer-CMRG")
n <- 100
p <- 50
K <- 3
Sigma <- outer(1:p, 1:p, function(x,y) {
0.9^(abs(x-y))
})
R <- chol(Sigma)
s <- 3
beta_coef <- matrix(0, nrow = p+1, ncol = K-1)
beta_coef[1+1:s, 1] <- c(3, 3, 3)
beta_coef[1+1:s+s, 2] <- c(3, 3, 3)
x <- matrix(rnorm(n*p), ncol = p) %*% R
y <- sapply(1:n, function(j){
prob_i <- c(sapply(1:(K-1), function(k){
exp(sum(x[j, ]*beta_coef[-1, k]))
}), 1)
prob_i <- prob_i/sum(prob_i)
sample(1:K, size = 1, replace = TRUE, prob = prob_i)
})
# fit the l1-penalized multinomial regression model
fit <- cv.pemultinom(x, y, ncores = 2)
beta <- fit$beta.min
# generate test data from the same model
x.test <- matrix(rnorm(n*p), ncol = p) %*% R
y.test <- sapply(1:n, function(j){
prob_i <- c(sapply(1:(K-1), function(k){
exp(sum(x.test[j, ]*beta_coef[-1, k]))
}), 1)
prob_i <- prob_i/sum(prob_i)
sample(1:K, size = 1, replace = TRUE, prob = prob_i)
})
# predict labels of test data and calculate the misclassification error rate (using beta.min)
ypred.min <- predict_pemultinom(fit$beta.min, ref = 3, xnew = x.test, type = "class")
mean(ypred.min != y.test)
# predict labels of test data and calculate the misclassification error rate (using beta.1se)
ypred.1se <- predict_pemultinom(fit$beta.1se, ref = 3, xnew = x.test, type = "class")
mean(ypred.1se != y.test)
# predict posterior probabilities of test data
ypred.prob <- predict_pemultinom(fit$beta.min, ref = 3, xnew = x.test, type = "prob")
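With type = "prob", each row of the returned matrix is a posterior distribution over the K classes, so it should sum to 1, and the class prediction corresponds to the column with the largest probability. A toy consistency check (on a made-up probability matrix, independent of the fit above):

```r
# Toy check: rows of a probability matrix sum to 1, and the predicted
# class is the column with the largest posterior probability
prob <- matrix(c(0.7, 0.2, 0.1,
                 0.1, 0.3, 0.6), nrow = 2, byrow = TRUE)
pred_class <- max.col(prob)
```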