QR.lasso.admm: Quantile Regression (QR) with Adaptive Lasso Penalty (lasso) Using the Alternating Direction Method of Multipliers (ADMM) Algorithm

View source: R/QR.lasso.admm.R

QR.lasso.admm (R Documentation)

Quantile Regression (QR) with Adaptive Lasso Penalty (lasso) Using the Alternating Direction Method of Multipliers (ADMM) Algorithm

Description

The adaptive lasso weights are based on the coefficients estimated without a penalty function. The problem is well suited to distributed convex optimization and is solved with the Alternating Direction Method of Multipliers (ADMM) algorithm.
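The objective being minimized can be sketched as follows. This is the standard adaptive-lasso quantile regression formulation; the exact weighting used by the package is an assumption inferred from the description above, with the weights taken from the unpenalized estimate \tilde{\beta}:

```latex
\min_{\beta}\;\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right)
  + \lambda \sum_{j=1}^{p} \frac{|\beta_j|}{|\tilde{\beta}_j|},
\qquad
\rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
```

where \rho_\tau is the quantile check loss at level \tau and \tilde{\beta} is the coefficient vector estimated without a penalty.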

Usage

QR.lasso.admm(X, y, tau, lambda, rho, beta, maxit)

Arguments

X

the design matrix

y

response variable

tau

quantile level

lambda

the constant coefficient of the penalty function (default lambda = 1)

rho

augmented Lagrangian parameter

beta

initial value of the coefficient estimate (default: a naive guess from least-squares estimation)

maxit

maximum number of iterations (default 200)
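The rho argument is the standard ADMM augmented-Lagrangian parameter. For context, the generic scaled-form ADMM iterations from the Boyd et al. reference below are sketched here; how QR.lasso.admm maps the quantile-regression problem onto f, g, A, B, and c is not documented on this page:

```latex
\begin{aligned}
x^{k+1} &:= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
z^{k+1} &:= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```

Larger rho weights the feasibility (consensus) term more heavily in each subproblem, which typically affects the convergence speed rather than the solution itself.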

Value

a list with components

beta

the vector of estimated coefficients

b

intercept

Note

QR.lasso.admm(X, y, tau) works properly only if the least-squares estimate (used for the adaptive weights and the default initial value) is good.
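Since the note above says the result is reliable only when the least-squares estimate is good, it can help to inspect that fit first. A minimal base-R sanity check (the data-generating code mirrors the Examples section; the interpretation of a "low" R-squared is an illustrative assumption):

```r
set.seed(1)
n <- 100
p <- 2
x <- matrix(2 * rnorm(n * 2 * p, mean = 1, sd = 1), n, 2 * p)
beta <- rbind(matrix(2 * rnorm(p, 1, 1), p, 1), matrix(0, p, 1))
y <- x %*% beta - matrix(rnorm(n, 0.1, 1), n, 1)

# Fit ordinary least squares: this is the same kind of naive guess that
# seeds the adaptive weights and the default initial beta.
ols <- lm(y ~ x)
summary(ols)$r.squared  # a very low value warns that the ADMM result may be unreliable
```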

References

S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1), 1-122.

Wu, Yichao and Liu, Yufeng (2009). Variable selection in quantile regression. Statistica Sinica, 19, 801–817.

Examples

library(cqrReg)
set.seed(1)
n = 100
p = 2
a = 2*rnorm(n*2*p, mean = 1, sd = 1)
x = matrix(a, n, 2*p)
beta = 2*rnorm(p, 1, 1)
beta = rbind(matrix(beta, p, 1), matrix(0, p, 1))
y = x %*% beta - matrix(rnorm(n, 0.1, 1), n, 1)
# x is a 100 x 4 matrix, y is a 100 x 1 vector, and beta is a 4 x 1 vector
# whose last two elements are zero.
fit = QR.lasso.admm(x, y, 0.1)
fit$beta  # estimated coefficients
fit$b     # intercept

cqrReg documentation built on June 7, 2022, 9:06 a.m.
