fosr.vs: R Documentation
Implements an iterative algorithm for function-on-scalar regression with variable selection, alternately updating the coefficients and the covariance structure.
fosr.vs(
formula,
data,
nbasis = 10,
method = c("ls", "grLasso", "grMCP", "grSCAD"),
epsilon = 1e-05,
max.iter_num = 100
)
formula
an object of class "formula": a symbolic description of the model to be fitted.

data
a data frame that contains the variables in the model.

nbasis
number of B-spline basis functions used.

method
group variable selection method to be used ("grLasso", "grMCP", "grSCAD" refer to group Lasso, group MCP, and group SCAD, respectively) or "ls" for least squares estimation.

epsilon
the convergence criterion.

max.iter_num
maximum number of iterations.
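
For concreteness, a minimal sketch of a fully specified call; the data frame dat and functional response Y below are hypothetical placeholders, and the argument values simply mirror the signature shown above.

# hypothetical call spelling out every argument (dat and Y are placeholders)
fit <- fosr.vs(Y ~ ., data = dat, nbasis = 10, method = "grLasso",
               epsilon = 1e-05, max.iter_num = 100)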
A fitted fosr.vs-object, which is a list with the following elements:
formula
an object of class "formula": a symbolic description of the model that was fitted.

coefficients
the estimated coefficient functions.

fitted.values
the fitted curves.

residuals
the residual curves.

vcov
the estimated variance-covariance matrix when convergence is achieved.

method
the group variable selection method that was used, or "ls" for least squares estimation.
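
Because the fitted object is a list with the elements above, its components can be extracted by name; a brief sketch, assuming a fitted object called fit:

# pulling out the pieces of a fitted fosr.vs object (fit is assumed to exist)
beta.hat <- fit$coefficients   # estimated coefficient functions
y.hat    <- fit$fitted.values  # fitted curves
res      <- fit$residuals      # residual curves
Sigma    <- fit$vcov           # estimated variance-covariance matrix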
Yakuan Chen yc2641@cumc.columbia.edu
Chen, Y., Goldsmith, J., and Ogden, T. (2016). Variable selection in function-on-scalar regression. Stat, 5, 88-101.
See also: grpreg.
## Not run:
library(refund)

# simulation settings: I curves, p scalar covariates, D grid points
set.seed(100)
I = 100
p = 20
D = 50
grid = seq(0, 1, length = D)

# true coefficient functions: only the first three covariates are active
beta.true = matrix(0, p, D)
beta.true[1,] = sin(2*grid*pi)
beta.true[2,] = cos(2*grid*pi)
beta.true[3,] = 2

# functional principal components and the standard deviations of their scores
psi.true = matrix(NA, 2, D)
psi.true[1,] = sin(4*grid*pi)
psi.true[2,] = cos(4*grid*pi)
lambda = c(3, 1)

# scalar covariates and PC scores
set.seed(100)
X = matrix(rnorm(I*p), I, p)
C = cbind(rnorm(I, mean = 0, sd = lambda[1]), rnorm(I, mean = 0, sd = lambda[2]))

# fixed effects, PCA effects, and noise
fixef = X %*% beta.true
pcaef = C %*% psi.true
error = matrix(rnorm(I*D), I, D)

Yi.true = fixef
Yi.pca = fixef + pcaef
Yi.obs = fixef + pcaef + error

# assemble the data frame: scalar covariates plus the functional response Y
data = as.data.frame(X)
data$Y = Yi.obs

# fit function-on-scalar regression with group MCP variable selection
fit.fosr.vs = fosr.vs(Y ~ ., data = data, method = "grMCP")
plot(fit.fosr.vs)

## End(Not run)
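
As a possible follow-up to the example (a sketch, not part of the original help page), the fitted curves could be overlaid on the simulated observations, assuming fitted.values is returned as an I x D matrix on the same grid:

## Not run:
# overlay a few observed curves (dotted) with their fitted counterparts (solid);
# assumes fit.fosr.vs$fitted.values has one row per subject and one column per grid point
matplot(grid, t(Yi.obs[1:5, ]), type = "l", lty = 3, col = 1:5,
        xlab = "t", ylab = "Y")
matlines(grid, t(fit.fosr.vs$fitted.values[1:5, ]), lty = 1, col = 1:5)
## End(Not run)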