LassoSIR-Package | R Documentation
Estimate the sufficient dimension reduction space using sparse sliced inverse regression via Lasso (Lasso-SIR), introduced in Lin, Zhao, and Liu (2019) <doi:10.1080/01621459.2018.1520115>. Lasso-SIR is consistent and achieves the optimal convergence rate under certain sparsity conditions for multiple-index models.
Author(s): Zhigen Zhao [aut, cre], Qian Lin [aut], Jun Liu [aut]
Maintainer: Zhigen Zhao <zhigen.zhao@gmail.com>
References:
Lin, Q., Zhao, Z., and Liu, J. (2018). On consistency and sparsity for sliced inverse regression in high dimension. Annals of Statistics, 46(2), 580-610.
Lin, Q., Zhao, Z., and Liu, J. (2019). Sparse sliced inverse regression for high dimensional data. Journal of the American Statistical Association, 114(528), 1726-1739.
Examples:
library(LassoSIR)

## Simulation setup: a single-index model with a sparse direction.
p <- 10     # number of predictors
n <- 200    # sample size
H <- 20     # number of slices
m <- n / H  # observations per slice

## True direction: only the first three coordinates are nonzero.
beta <- array(0, c(p, 1))
beta[1:3, 1] <- rnorm(3, 0, 1)

## AR(1)-type covariance with entries rho^|i-j|.
rho <- 0.3
Sigma <- diag(p)
elements <- rho^(c((p - 1):0, 1:(p - 1)))
for (i in 1:p) {
    Sigma[i, ] <- elements[(p + 1 - i):(2 * p - i)]
}

## Generate predictors with covariance Sigma and the response.
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
X <- X %*% chol(Sigma)
Y <- (X %*% beta)^3 / 2 + rnorm(n, 0, 1)

## Fit Lasso-SIR, choosing the dimension automatically.
sir.lasso <- LassoSIR(X, Y, H, choosing.d = "automatic",
                      solution.path = FALSE, categorical = FALSE,
                      nfolds = 10, screening = FALSE)

## Normalize the estimated direction to unit length.
beta.hat <- sir.lasso$beta / sqrt(sum(sir.lasso$beta^2))
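A short follow-up sketch, not part of the package example: because a single-index direction is identified only up to sign and scale, one informal check is the absolute inner product between the leading estimated direction and the normalized true direction, together with a plot of Y against the estimated index. The names beta.true and b1 below are introduced only for this illustration, and the check assumes the automatically chosen dimension equals one.

## Illustrative follow-up, reusing objects from the example above.
beta.true <- beta / sqrt(sum(beta^2))          # normalized true direction
b1 <- as.matrix(beta.hat)[, 1]                 # leading estimated direction
abs(crossprod(b1, beta.true))                  # near 1 when the sparse direction is recovered
plot(X %*% b1, Y, xlab = "estimated index", ylab = "Y")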