Fit a generalized linear model over a grid of tuning parameters via penalized maximum likelihood. The regularization path is computed for a combination of a sparse penalty and a smooth penalty over two grids of values for the regularization parameters `lambda1` (lasso or MCP penalty) and `lambda2` (Laplacian penalty). Fits linear and logistic regression models.
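For orientation, the objective can be sketched as follows. This is the standard graph-constrained formulation (shown here with the lasso sparse penalty); the exact form used by the package should be checked against the paper cited below:

$$
\hat\beta = \arg\min_{\beta} \; -\ell(\beta) \;+\; \lambda_1 \sum_{j=1}^{p} |\beta_j| \;+\; \lambda_2\, \beta^{\top} L\, \beta
$$

where $\ell(\beta)$ is the GLM log-likelihood, the $\lambda_1$ term induces sparsity (lasso, or replaced by MCP), and the Laplacian quadratic term $\beta^{\top} L \beta$ encourages coefficients of neighboring variables in the graph to be smooth.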

| | |
|---|---|
| Package: | glmgraph |
| Type: | Package |
| Version: | 1.0-0 |
| Date: | 2015-03-11 |
| License: | GPL-2 |

The algorithm accepts a design matrix `X`, a vector of responses `Y`, and a Laplacian matrix `L`, and produces the regularization path over the grids of tuning parameters `lambda1` and `lambda2`.
It consists of the following main functions:

- `glmgraph`
- `cv.glmgraph`
- `plot.glmgraph`
- `coef.glmgraph`
- `predict.glmgraph`

Li Chen <li.chen@emory.edu>, Jun Chen <jun.chen2@mayo.edu>

Li Chen, Han Liu, Hongzhe Li and Jun Chen (2015). glmgraph: Graph-constrained Regularization for Sparse Generalized Linear Models. (Working paper)

```r
set.seed(1234)
library(glmgraph)
## Simulate n observations on p predictors; the first p1 carry signal
n <- 100
p1 <- 10
p2 <- 90
p <- p1 + p2
X <- matrix(rnorm(n*p), n, p)
magnitude <- 1
## Construct adjacency matrix: two fully connected blocks of predictors
A <- matrix(0, p, p)
A[1:p1, 1:p1] <- 1
A[(p1+1):p, (p1+1):p] <- 1
diag(A) <- 0
## Laplacian L = D - A, where D is the diagonal degree matrix
diagL <- apply(A, 1, sum)
L <- -A
diag(L) <- diagL
## True coefficients: magnitude on the first p1 predictors, zero elsewhere
btrue <- c(rep(magnitude, p1), rep(0, p2))
intercept <- 0
eta <- intercept + X %*% btrue
Y <- eta + rnorm(n)
## Fit the regularization path and inspect it
obj <- glmgraph(X, Y, L, family="gaussian")
plot(obj)
betas <- coef(obj)                        # coefficients along the whole path
betas <- coef(obj, lambda1=c(0.1, 0.2))   # coefficients at specific lambda1 values
yhat <- predict(obj, X, type="response")
## Select lambda1 and lambda2 by cross-validation
cv.obj <- cv.glmgraph(X, Y, L)
plot(cv.obj)
beta.min <- coef(cv.obj)        # coefficients at the CV-selected lambdas
yhat.min <- predict(cv.obj, X)
```
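Since the package also fits logistic regression, a binary response is handled by switching the `family` argument. A minimal sketch, reusing `X`, `L`, `n`, and the linear predictor `eta` from the example above; `family="binomial"` follows from the description and is assumed to work analogously to the Gaussian case:

```r
## Simulate a binary response from the same linear predictor
prob <- 1/(1 + exp(-eta))
Ybin <- rbinom(n, size=1, prob=prob)

## Fit and cross-validate exactly as in the Gaussian case
obj.bin <- glmgraph(X, Ybin, L, family="binomial")
cv.bin <- cv.glmgraph(X, Ybin, L, family="binomial")
phat <- predict(cv.bin, X, type="response")  # fitted probabilities
```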
