Estimate the sufficient dimension reduction space using sparse sliced inverse regression via the Lasso (Lasso-SIR), introduced in Lin, Zhao, and Liu (2017) <arXiv:1611.06655>. Lasso-SIR is consistent and achieves the optimal convergence rate under certain sparsity conditions for multiple-index models.
LassoSIR
Authors: Zhigen Zhao, Qian Lin, Jun Liu
Maintainer: Zhigen Zhao <zhigen.zhao@gmail.com>
Qian Lin, Zhigen Zhao, Jun S. Liu (2017). On consistency and sparsity for sliced inverse regression in high dimensions. Annals of Statistics. https://arxiv.org/abs/1507.03895
Qian Lin, Zhigen Zhao, Jun S. Liu (2017). Sparse Sliced Inverse Regression for High Dimensional Data. https://arxiv.org/abs/1611.06655
library(LassoSIR)

## Simulate a single-index model: only the first 3 of p = 10
## predictors carry signal.
p <- 10
n <- 200
H <- 20          # number of slices
m <- n / H       # observations per slice

beta <- array(0, c(p, 1))
beta[1:3, 1] <- rnorm(3, 0, 1)

## AR(1)-type covariance: Sigma[i, j] = rho^|i - j|
rho <- 0.3
Sigma <- diag(p)
elements <- rho^(c((p - 1):0, 1:(p - 1)))
for (i in 1:p)
    Sigma[i, ] <- elements[(p + 1 - i):(2 * p - i)]

## Draw correlated predictors and a cubic single-index response
X <- matrix(rnorm(p * n), n, p)
X <- X %*% chol(Sigma)
Y <- (X %*% beta)^3 / 2 + rnorm(n, 0, 1)

sir.lasso <- LassoSIR(X, Y, H, choosing.d = "automatic",
                      solution.path = FALSE, categorical = FALSE,
                      nfolds = 10, screening = FALSE)

## Normalize the estimated direction to unit length
beta.hat <- sir.lasso$beta / sqrt(sum(sir.lasso$beta^2))
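Because the index direction is identified only up to sign and scale, a natural check on the fit is the absolute cosine between the true and estimated unit directions. A minimal follow-on sketch, assuming the simulation above has been run (so `X`, `Y`, `beta`, and `beta.hat` are in scope) and that the automatically chosen dimension is one:

```r
## Compare the estimated direction with the truth, up to sign/scale.
beta.true <- beta / sqrt(sum(beta^2))        # normalize the true direction
alignment <- abs(sum(beta.true * beta.hat))  # |cos(angle)|; values near 1 indicate a good fit

## Form the one-dimensional reduced predictor and inspect the link function.
X.reduced <- X %*% beta.hat                  # n x 1 sufficient predictor
plot(X.reduced, Y, xlab = "X %*% beta.hat", ylab = "Y")
```

A scatter of `Y` against `X.reduced` should trace the cubic link used in the simulation.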