# SFGAM: Sparse Frequentist Generalized Additive Models

In sparseGAM: Sparse Generalized Additive Models

## Description

This function implements sparse frequentist generalized additive models (GAMs) with the group LASSO, group SCAD, and group MCP penalties. Let y_i denote the ith response and x_i denote a p-dimensional vector of covariates. GAMs are of the form,

g(E(y_i)) = β_0 + ∑_{j=1}^{p} f_j (x_{ij}), i = 1, ..., n,

where g is a monotone increasing link function. The identity link function is used for Gaussian regression, the logit link is used for binomial regression, and the log link is used for Poisson, negative binomial, and gamma regression. The univariate functions are estimated using linear combinations of B-spline basis functions. Under group regularization of the basis coefficients, some of the univariate functions f_j(x_j) will be estimated as \hat{f}_j(x_j) = 0, depending on the size of the regularization parameter λ.

For implementation of sparse Bayesian GAMs with the SSGL penalty, use the SBGAM function.
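The basis-expansion-plus-group-penalty idea described above can be sketched with the `splines` and `grpreg` packages (`grpreg` implements the group descent algorithms of Breheny and Huang (2015), listed in the References). This is an illustration of the general technique under assumed toy data, not the internals of `SFGAM`: each covariate is expanded into `df` B-spline basis functions, and the group labels tie each covariate's basis coefficients together so that whole functions f_j are shrunk to zero.

```r
library(splines)   # bs() for B-spline basis expansions
library(grpreg)    # group LASSO / group SCAD / group MCP via group descent

set.seed(1)
n <- 100; p <- 5; df <- 6
X <- matrix(runif(n * p), nrow = n)
y <- 5 * sin(2 * pi * X[, 1]) + rnorm(n)   # only f_1 is truly nonzero

## Expand each covariate into df B-spline basis functions
B <- do.call(cbind, lapply(1:p, function(j) bs(X[, j], df = df)))
group <- rep(1:p, each = df)   # ties each f_j's coefficients into one group

## Group LASSO over the basis coefficients: entire groups (hence entire
## functions f_j) are set to zero as the regularization parameter grows
fit <- grpreg(B, y, group = group, penalty = "grLasso")
```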

## Usage

```r
SFGAM(y, X, X.test, df=6,
      family=c("gaussian","binomial","poisson","negativebinomial","gamma"),
      nb.size=1, gamma.shape=1, penalty=c("gLASSO","gMCP","gSCAD"),
      taper, nlambda=100, lambda, max.iter=10000, tol=1e-4)
```

## Arguments

- `y`: n × 1 vector of responses for the training data.
- `X`: n × p design matrix for the training data, where the jth column of `X` corresponds to the jth overall covariate.
- `X.test`: n_test × p design matrix of test data for calculating predictions. `X.test` must have the same number of columns as `X`, but not necessarily the same number of rows. If no test data is provided, or if in-sample predictions are desired, the function automatically sets `X.test=X`.
- `df`: number of B-spline basis functions to use in each basis expansion. Default is `df=6`, but the user may specify the degrees of freedom as any integer greater than or equal to 3.
- `family`: exponential dispersion family. Allows for `"gaussian"`, `"binomial"`, `"poisson"`, `"negativebinomial"`, and `"gamma"`. Note that for `"negativebinomial"` the size parameter must be specified, while for `"gamma"` the shape parameter must be specified.
- `nb.size`: known size parameter α in the NB(α, μ_i) distribution for negative binomial responses. Default is `nb.size=1`. Ignored if `family` is not `"negativebinomial"`.
- `gamma.shape`: known shape parameter ν in the Gamma(μ_i, ν) distribution for gamma responses. Default is `gamma.shape=1`. Ignored if `family` is not `"gamma"`.
- `penalty`: group regularization method to apply to the groups of basis coefficients. The options are `"gLASSO"`, `"gSCAD"`, and `"gMCP"`. To implement sparse GAMs with the SSGL penalty, use the `SBGAM` function.
- `taper`: tapering term γ in group SCAD and group MCP that controls how rapidly the penalty tapers off. Default is `taper=4` for group SCAD and `taper=3` for group MCP. Ignored if `"gLASSO"` is specified as the penalty.
- `nlambda`: number of regularization parameters L. Default is `nlambda=100`.
- `lambda`: grid of L regularization parameters. The user may specify either a scalar or a vector. If not provided, the program chooses the grid automatically.
- `max.iter`: maximum number of iterations in the algorithm. Default is `max.iter=10000`.
- `tol`: convergence threshold for the algorithm. Default is `tol=1e-4`.
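For the non-Gaussian families the auxiliary parameters above come into play. A hedged sketch of such calls follows; the response vectors `y.count` and `y.pos` are hypothetical count-valued and positive-valued training responses, and the parameter values are arbitrary choices for illustration:

```r
## Hypothetical data: y.count (nonnegative counts), y.pos (positive reals),
## X an n x p design matrix of covariates

## Negative binomial regression with known size parameter alpha = 2
nb.mod <- SFGAM(y.count, X, family = "negativebinomial",
                nb.size = 2, penalty = "gLASSO")

## Gamma regression with known shape parameter nu = 5
gamma.mod <- SFGAM(y.pos, X, family = "gamma",
                   gamma.shape = 5, penalty = "gSCAD")
```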

## Value

The function returns a list containing the following components:

- `lambda`: L × 1 vector of the regularization parameters used to fit the model, displayed in descending order.
- `f.pred`: list of L n_test × p matrices, where the kth matrix in the list corresponds to the kth regularization parameter in `lambda`. The jth column of each matrix is the estimate of the jth function f_j evaluated on the test data in `X.test` (or the training data `X` if `X.test` was not specified).
- `mu.pred`: n_test × L matrix of predicted mean response values μ_test = E(Y_test) based on the test data in `X.test` (or the training data `X` if no argument was specified for `X.test`). The kth column corresponds to the predictions for the kth regularization parameter in `lambda`.
- `classifications`: p × L matrix of classifications. An entry of 1 indicates that the corresponding function was classified as nonzero, and an entry of 0 indicates that the function was classified as zero. The kth column corresponds to the kth regularization parameter in `lambda`.
- `beta0`: L × 1 vector of estimated intercepts. The kth entry corresponds to the kth regularization parameter in `lambda`.
- `beta`: dp × L matrix of estimated basis coefficients. The kth column corresponds to the kth regularization parameter in `lambda`.
- `loss`: vector of either the residual sum of squares (`"gaussian"`) or the negative log-likelihood (`"binomial"`, `"poisson"`, `"negativebinomial"`, `"gamma"`) of the fitted model. The kth entry corresponds to the kth regularization parameter in `lambda`.
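As a sketch of how these components fit together (assuming a hypothetical fitted object `mod` returned by `SFGAM` over an L-point `lambda` grid):

```r
## mod <- SFGAM(y, X, X.test, penalty = "gLASSO")   # hypothetical fit

## Index of the regularization parameter with the smallest training loss
k <- which.min(mod$loss)

## Covariates whose functions were classified as nonzero at that lambda
nonzero <- which(mod$classifications[, k] == 1)

## Predicted mean responses for the test data at that lambda
mu.hat <- mod$mu.pred[, k]
```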

## References

Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.

Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.

Yuan, M. and Lin, Y. (2006). "Model selection and estimation in regression with grouped variables." Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:49-67.

## Examples

```r
## Generate data
set.seed(12345)
X = matrix(runif(100*20), nrow=100)
n = dim(X)[1]
y = 5*sin(2*pi*X[,1])-5*cos(2*pi*X[,2]) + rnorm(n)

## Test data with 50 observations
X.test = matrix(runif(50*20), nrow=50)

## K-fold cross-validation with group MCP penalty
cv.mod = cv.SFGAM(y, X, family="gaussian", penalty="gMCP")

## Plot CVE curve
plot(cv.mod$lambda, cv.mod$cve, type="l", xlab="lambda", ylab="CVE")

## lambda which minimizes cross-validation error
lambda.opt = cv.mod$lambda.min

## Fit a single model with lambda.opt
SFGAM.mod = SFGAM(y, X, X.test, penalty="gMCP", lambda=lambda.opt)

## Classifications
SFGAM.mod$classifications

## Predicted function evaluations on test data
f.pred = SFGAM.mod$f.pred

## Plot estimated first function
x1 = X.test[,1]
f1.hat = f.pred[,1]

## Plot x_1 against f_1(x_1)
plot(x1[order(x1)], f1.hat[order(x1)], xlab=expression(x[1]),
     ylab=expression(f[1](x[1])))
```

sparseGAM documentation built on May 31, 2021, 5:09 p.m.