Description Usage Arguments Value References Examples
This function implements group-regularized negative binomial regression with a known size parameter α and the log link. In negative binomial regression, we assume that y_i \sim NB(α, μ_i), where
f(y_i | α, μ_i) = \frac{Γ(y_i+α)}{y_i! Γ(α)} (\frac{μ_i}{μ_i+α})^{y_i}(\frac{α}{μ_i +α})^{α}, \quad y_i = 0, 1, 2, ...
Then E(y_i) = μ_i, and we relate μ_i to a set of p covariates x_i through the log link,
\log(μ_i) = β_0 + x_i^T β, i=1,..., n
If the covariates in each x_i are grouped according to known groups g=1, ..., G, then this function may estimate some of the G groups of coefficients as all zero, depending on the amount of regularization.
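As an aside, the pmf and mean identity above can be checked numerically. The sketch below is in Python purely for illustration (it is not part of this package); it assumes SciPy's nbinom, whose (n, p) parametrization matches the one above with n = α and p = α/(α+μ):

```python
import numpy as np
from scipy.stats import nbinom
from scipy.special import gammaln

def nb_pmf(y, alpha, mu):
    """NB pmf in the (alpha, mu) parametrization above, computed on the log scale."""
    logp = (gammaln(y + alpha) - gammaln(y + 1) - gammaln(alpha)
            + y * np.log(mu / (mu + alpha))
            + alpha * np.log(alpha / (mu + alpha)))
    return np.exp(logp)

alpha, mu = 2.0, 3.5
ys = np.arange(0, 200)

# SciPy's nbinom(n, p) agrees with this pmf when n = alpha, p = alpha/(alpha + mu)
p = alpha / (alpha + mu)
print(np.allclose(nb_pmf(ys, alpha, mu), nbinom.pmf(ys, alpha, p)))  # True

# E(y) = mu: the mean of the pmf recovers mu
print(np.sum(ys * nb_pmf(ys, alpha, mu)))  # ~3.5
```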
Our implementation for regularized negative binomial regression is based on the least squares approximation (LSA) approach of Wang and Leng (2007); hence, the function does not allow the total number of covariates p to exceed the sample size n.
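In the least squares approximation, the negative log-likelihood is replaced by its second-order Taylor expansion around the unpenalized MLE (which is only well-defined when p ≤ n), so the group penalty can then be applied to a quadratic objective. The Python sketch below illustrates the idea only; it is not the package's implementation, and all names and settings in it are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulate a small NB regression problem with known size alpha (hypothetical settings)
rng = np.random.default_rng(0)
n, p_dim, alpha = 200, 3, 1.0
X = rng.normal(size=(n, p_dim))
beta_true = np.array([1.0, 0.0, -0.5])
mu = np.exp(X @ beta_true)
# NumPy's negative_binomial(n, p) matches NB(alpha, mu) with p = alpha/(alpha + mu)
y = rng.negative_binomial(alpha, alpha / (alpha + mu))

def nll(beta):
    """NB negative log-likelihood with known size alpha and log link (no intercept)."""
    eta = X @ beta
    return -np.sum(gammaln(y + alpha) - gammaln(y + 1) - gammaln(alpha)
                   + y * eta + alpha * np.log(alpha)
                   - (y + alpha) * np.logaddexp(eta, np.log(alpha)))

# Unpenalized MLE
fit = minimize(nll, np.zeros(p_dim), method="BFGS")
beta_hat = fit.x

# Hessian of the nll at the MLE, by central finite differences
h = 1e-3
H = np.zeros((p_dim, p_dim))
for i in range(p_dim):
    for j in range(p_dim):
        ei = np.zeros(p_dim); ei[i] = h
        ej = np.zeros(p_dim); ej[j] = h
        H[i, j] = (nll(beta_hat + ei + ej) - nll(beta_hat + ei - ej)
                   - nll(beta_hat - ei + ej) + nll(beta_hat - ei - ej)) / (4 * h**2)

# LSA: nll(beta) ~ nll(beta_hat) + 0.5 (beta - beta_hat)' H (beta - beta_hat);
# a group penalty would then be added to this quadratic surrogate.
d = np.array([0.05, -0.03, 0.02])
lsa = fit.fun + 0.5 * d @ H @ d
print(abs(nll(beta_hat + d) - lsa) / abs(fit.fun))  # small near the MLE
```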
y 
n \times 1 vector of responses for training data. 
X 
n \times p design matrix for training data, where the jth column of X contains the n observations for the jth covariate. 
X.test 
n_{test} \times p design matrix for test data to calculate predictions. 
groups 
p-dimensional vector of group labels. The jth entry in groups gives the group label of the jth column of X. 
nb.size 
known size parameter α in the NB(α, μ_i) distribution for the responses. Default is 
penalty 
group regularization method to use on the groups of coefficients. The options are 
weights 
group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes. 
taper 
tapering term γ in group SCAD and group MCP controlling how rapidly the penalty tapers off. Default is 
nlambda 
number of regularization parameters L. Default is 
lambda 
grid of L regularization parameters. The user may specify either a scalar or a vector. If the user does not provide this, the program chooses the grid automatically. 
max.iter 
maximum number of iterations in the algorithm. Default is 
tol 
convergence threshold for the algorithm. Default is 
The function returns a list containing the following components:
lambda 
L \times 1 vector of regularization parameters 
beta0 
L \times 1 vector of estimated intercepts. The kth entry in beta0 corresponds to the kth regularization parameter in lambda. 
beta 
p \times L matrix of estimated regression coefficients. The kth column in beta corresponds to the kth regularization parameter in lambda. 
mu.pred 
n_{test} \times L matrix of predicted mean response values μ_{test} = E(Y_{test}) based on the test data in X.test. 
classifications 
G \times L matrix of classifications, where G is the number of groups. An entry of "1" indicates that the group was classified as nonzero, and an entry of "0" indicates that the group was classified as zero. The kth column of classifications corresponds to the kth regularization parameter in lambda. 
loss 
L \times 1 vector of the negative log-likelihood of the fitted models. The kth entry in loss corresponds to the kth regularization parameter in lambda. 
Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.
Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.
## Generate training data
set.seed(1234)
X = matrix(runif(100*16), nrow=100)
n = dim(X)[1]
groups = c("A","A","A","B","B","B","C","C","D","E","E","F","G","H","H","H")
groups = as.factor(groups)
true.beta = c(2,2,2,0,0,0,0,0,0,1.5,1.5,0,0,2,2,2)
## Generate count responses from negative binomial regression
eta = X %*% true.beta
y = rnbinom(n,size=1, mu=exp(eta))
## Generate test data
n.test = 50
X.test = matrix(runif(n.test*16), nrow=n.test)
## Fit negative binomial regression models with the group SCAD penalty
nb.mod = grpreg.nb(y, X, X.test, groups, penalty="gSCAD")
## Tuning parameters used to fit models
nb.mod$lambda
# Predicted n.test-dimensional vectors mu = E(Y.test) based on the test data, X.test.
# The kth column of 'mu.pred' corresponds to the kth entry in 'lambda.'
nb.mod$mu.pred
# Classifications of the 8 groups. The kth column of 'classifications'
# corresponds to the kth entry in 'lambda'.
nb.mod$classifications
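For reference, each column of mu.pred follows from inverting the log link: exp(beta0[k] + X.test %*% beta[,k]). A tiny Python illustration with made-up coefficient values (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
X_test = rng.uniform(size=(5, 4))        # stand-in for a small test design matrix
beta0 = 0.2                              # hypothetical intercept for one lambda
beta = np.array([1.0, 0.0, -0.5, 0.3])   # hypothetical coefficient column for that lambda

# Inverting the log link gives the predicted mean response for each test row
mu_pred = np.exp(beta0 + X_test @ beta)
print(mu_pred.shape)  # (5,)
```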
