# genlasso: Compute the generalized lasso solution path for arbitrary penalty matrix

In glmgen/genlasso: Path Algorithm for Generalized Lasso Problems


## Compute the generalized lasso solution path for arbitrary penalty matrix

### Description

This function computes the solution path of the generalized lasso problem for an arbitrary penalty matrix. Speciality functions exist for the trend filtering and fused lasso problems; see trendfilter and fusedlasso.

### Usage

```r
genlasso(y, X, D, approx = FALSE, maxsteps = 2000, minlam = 0,
         rtol = 1e-07, btol = 1e-07, eps = 1e-4, verbose = FALSE,
         svd = FALSE)
```


### Arguments

- `y`: a numeric response vector.
- `X`: an optional matrix of predictor variables, with observations along the rows and variables along the columns. If missing, `X` is assumed to be the identity matrix. If the passed `X` does not have full column rank, then a warning is given, and a small ridge penalty is added to the generalized lasso criterion before the path is computed.
- `D`: a penalty matrix. Its number of columns must be equal to the number of columns of `X`, or, if no `X` is given, the length of `y`. This can be a sparse matrix from the Matrix package, but its sparsity will be ignored (it is converted to a dense matrix) if `D` is row rank deficient or if `X` is specified. See "Details" below.
- `approx`: a logical variable indicating whether the approximate solution path should be used (with no dual coordinates leaving the boundary). Default is FALSE.
- `maxsteps`: an integer specifying the maximum number of steps for the algorithm to take before termination. Default is 2000.
- `minlam`: a numeric variable indicating the value of lambda at which the path should terminate. Default is 0.
- `rtol`: a numeric variable giving the tolerance for determining the rank of a matrix: if a diagonal value in the R factor of a QR decomposition is less than `rtol` in absolute value, then it is considered zero. Hence making `rtol` larger means being less stringent with the determination of matrix rank. In general, do not change this unless you know what you are getting into! Default is 1e-7.
- `btol`: a numeric variable giving the tolerance for accepting "late" hitting and leaving times: future hitting and leaving times should always be less than the current knot in the path, but sometimes, for numerical reasons, they are larger; any computed hitting or leaving time larger than the current knot + `btol` is thrown away. Hence making `btol` larger means being less stringent with the determination of hitting and leaving times. Again, in general, do not change this unless you know what you are getting into! Default is 1e-7.
- `eps`: a numeric variable indicating the multiplier for the ridge penalty, in the case that `X` is column rank deficient. Default is 1e-4.
- `verbose`: a logical variable indicating whether progress should be reported after each knot in the path. Default is FALSE.
- `svd`: a logical variable indicating whether the genlasso function should use a singular value decomposition to solve the least squares problems at each path step; this is slower, but should be more stable. Default is FALSE.

### Details

The generalized lasso estimate minimizes the criterion

1/2 \|y - X β\|_2^2 + λ \|D β\|_1.

The solution \hat{β} is computed as a function of the regularization parameter λ. The advantage of the genlasso function lies in its flexibility, i.e., the user can specify any penalty matrix D of their choosing. However, for a trend filtering problem or a fused lasso problem, it is strongly recommended to use one of the speciality functions, trendfilter or fusedlasso. When compared to these functions, genlasso is not as numerically stable and much less efficient.
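For reference, the criterion above is straightforward to evaluate directly in base R. The following sketch defines a hypothetical helper `genlasso_criterion` (not part of the package) and evaluates it at an arbitrary coefficient vector, purely to make the objective concrete:

```r
# Evaluate the generalized lasso criterion
#   1/2 ||y - X beta||_2^2 + lambda ||D beta||_1
# (illustrative helper; not a function exported by the genlasso package)
genlasso_criterion <- function(y, X, D, beta, lambda) {
  0.5 * sum((y - X %*% beta)^2) + lambda * sum(abs(D %*% beta))
}

set.seed(1)
n <- 20; p <- 5
X <- matrix(rnorm(n * p), nrow = n)
y <- rnorm(n)
D <- diag(p)        # identity penalty matrix: plain lasso objective
beta <- rnorm(p)
genlasso_criterion(y, X, D, beta, lambda = 1)
```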

Note that, when D is passed as a sparse matrix, the linear systems that arise at each step of the path algorithm are solved separately via a sparse solver. The usual strategy (when D is simply a matrix) is to maintain a matrix factorization of D, and solve these systems by updating (or downdating) this factorization, as these linear systems are highly related. Therefore, when D is sufficiently sparse and structured, it can be advantageous to pass it as a sparse matrix; but if D is truly dense, passing it as a sparse matrix will be highly inefficient.
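For instance, a sparse first-difference penalty matrix (the fused lasso penalty on a chain) can be built with the Matrix package and passed directly. This is a sketch, not the recommended approach for this problem (use fusedlasso instead); it assumes the genlasso and Matrix packages are installed and uses the signal-approximation form, where X is taken to be the identity:

```r
library(Matrix)
library(genlasso)

set.seed(1)
n <- 50
y <- c(rep(0, 25), rep(2, 25)) + rnorm(n)

# Sparse (n-1) x n first-difference matrix: row i is e_{i+1} - e_i,
# so D has full row rank and its sparsity is preserved by genlasso
D <- bandSparse(n - 1, n, k = c(0, 1),
                diagonals = list(rep(-1, n - 1), rep(1, n - 1)))

out <- genlasso(y, D = D)  # sparse solver used at each path step
```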

### Value

Returns an object of class "genlasso", a list with at least the following components:

- `lambda`: values of lambda at which the solution path changes slope, i.e., kinks or knots.
- `beta`: a matrix of primal coefficients, each column corresponding to a knot in the solution path.
- `fit`: a matrix of fitted values, each column corresponding to a knot in the solution path.
- `u`: a matrix of dual coefficients, each column corresponding to a knot in the solution path.
- `hit`: a vector of logical values indicating if a new variable in the dual solution hit the box constraint boundary. A value of FALSE indicates a variable leaving the boundary.
- `df`: a vector giving an unbiased estimate of the degrees of freedom of the fit at each knot in the solution path.
- `y`: the observed response vector. Useful for plotting and other methods.
- `completepath`: a logical variable indicating whether the complete path was computed (terminating the path early with the maxsteps or minlam options results in a value of FALSE).
- `bls`: the least squares solution, i.e., the solution at lambda = 0.
- `call`: the matched call.
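As a sketch of how these components are typically accessed (assumes the genlasso package is installed; the identity penalty and random response here are purely illustrative):

```r
library(genlasso)

set.seed(1)
y <- rnorm(20)
out <- genlasso(y, D = diag(20))  # X missing: signal approximation

head(out$lambda)   # knots, in decreasing order of lambda
dim(out$beta)      # one column of primal coefficients per knot
out$completepath   # whether the path was computed down to lambda = 0
```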

### Author(s)

Taylor B. Arnold and Ryan J. Tibshirani

### References

Tibshirani, R. J. and Taylor, J. (2011), "The solution path of the generalized lasso", Annals of Statistics 39 (3) 1335–1371.

Arnold, T. B. and Tibshirani, R. J. (2014), "Efficient implementations of the generalized lasso dual path algorithm", arXiv: 1405.3222.

### See Also

trendfilter, fusedlasso, coef.genlasso, predict.genlasso, plot.genlasso

### Examples

```r
# Using the generalized lasso to run a standard lasso regression
# (for example purposes only! for pure lasso problems, use LARS)
set.seed(1)
n = 100
p = 10
X = matrix(rnorm(n*p), nrow=n)
y = 3*X[,1] + rnorm(n)
D = diag(1, p)
out = genlasso(y, X, D)
coef(out, lambda=sqrt(n*log(p)))
```


glmgen/genlasso documentation built on Jan. 2, 2023, 7:01 a.m.