cv.msof: Cross-validation for linear multivariate scalar-on-function regression


Description

This function is used to perform cross-validation and build the final model using the signal compression approach for the following linear multivariate scalar-on-function regression model:

Y = \mu + \int X_1(s)\beta_1(s)ds + \cdots + \int X_p(s)\beta_p(s)ds + \epsilon,

where Y is an m-dimensional multivariate response variable and μ is the m-dimensional intercept vector. The {X_i(s), 1≤ i≤ p} are p functional predictors and {β_i(s), 1≤ i≤ p} are the corresponding m-dimensional vectors of coefficient functions, where p is a positive integer. ε is the random noise vector.

We require that all the sample curves of each functional predictor are observed on a common dense grid of time points, though the grid may differ across predictors.

Usage

cv.msof(X, Y, t.x.list, nbasis = 50, K.cv = 5, upper.comp = 10,
        thresh = 0.001)

Arguments

X

a list of length p, the number of functional predictors. Its i-th element is the n*m_i data matrix for the i-th functional predictor X_i(s), where n is the sample size and m_i is the number of observation time points for X_i(s). (A construction sketch of X, Y and t.x.list is given after this argument list.)

Y

an n*q data matrix for the response Y, or an n-dimensional vector if there is only one scalar response, where n is the sample size and q is the number of scalar response variables.

t.x.list

a list of length p. Its i-th element is the vector of observation time points of the i-th functional predictor X_i(s), 1≤ i≤ p.

nbasis

the number of basis functions used for estimating the vector of functions ψ_{ik}(s)'s (see the reference for details). Default is 50.

K.cv

the number of CV folds. Default is 5.

upper.comp

the upper bound for the maximum number of components to be calculated. Default is 10.

thresh

a number between 0 and 1 used to determine the maximum number of components to calculate. This maximum lies between one and upper.comp above. The optimal number of components is then chosen between 1 and this maximum, together with the other tuning parameters, by cross-validation. A smaller thresh value leads to a larger maximum number of components and a longer running time; a larger thresh value needs less running time but may miss some important components and lead to a larger prediction error. Default is 0.001.
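Because X and t.x.list are lists whose elements must align (ncol(X[[i]]) must equal length(t.x.list[[i]])), a minimal construction sketch may help. The simulated data and names below (n, m1, m2, the grids) are purely illustrative and not taken from the package:

n  <- 100                                # sample size
m1 <- 80; m2 <- 120                      # numbers of observation points
t1 <- seq(0, 1, length.out = m1)         # grid for X_1(s)
t2 <- seq(0, 2, length.out = m2)         # grid for X_2(s) (may differ)
X <- list(matrix(rnorm(n * m1), n, m1),  # i-th element: n x m_i matrix
          matrix(rnorm(n * m2), n, m2))
t.x.list <- list(t1, t2)                 # matching list of time points
Y <- matrix(rnorm(n * 3), n, 3)          # n x q response matrix (here q = 3)
# fit.cv <- cv.msof(X, Y, t.x.list)      # cross-validated fit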

Details

We use the decomposition \bold{\beta}_i(s) = \sum_{k=1}^K \alpha_{ki}(s)\bold{w}_k, 1 \le i \le p, based on the KL expansion of \sum_{i=1}^p \int X_i(s)\bold{\beta}_i(s)ds. Let \bold{Y}_{\ell} = (Y_{\ell,1}, ..., Y_{\ell,m})^T and \bold{X}_{\ell}(s) = (X_{\ell,1}(s), ..., X_{\ell,p}(s))^T, 1 \le \ell \le n, denote the n independent samples. We estimate \bold{\alpha}_k(s) = (\alpha_{k1}(s), ..., \alpha_{kp}(s))^T for each k by solving the penalized generalized functional eigenvalue problem

\max_{\bold{\alpha}} \frac{\int\int \bold{\alpha}(s)^T \hat{\bold{B}}(s,s') \bold{\alpha}(s') ds ds'}{\int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s,s') \bold{\alpha}(s') ds ds' + P(\bold{\alpha})}

{\rm{s.t.}} \quad \int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s,s') \bold{\alpha}(s') ds ds' = 1

{\rm{and}} \quad \int\int \bold{\alpha}(s)^T \hat{\bold{\Sigma}}(s,s') \bold{\alpha}_{k'}(s') ds ds' = 0 \quad {\rm{for}} \quad k' < k,

where \hat{\bold{B}}(s,s') = \sum_{\ell=1}^n \sum_{\ell'=1}^n \{\bold{X}_{\ell}(s) - \bar{\bold{X}}(s)\}\{\bold{Y}_{\ell} - \bar{\bold{Y}}\}^T \{\bold{Y}_{\ell'} - \bar{\bold{Y}}\}\{\bold{X}_{\ell'}(s') - \bar{\bold{X}}(s')\}^T / n^2, \hat{\bold{\Sigma}}(s,s') = \sum_{\ell=1}^n \{\bold{X}_{\ell}(s) - \bar{\bold{X}}(s)\}\{\bold{X}_{\ell}(s') - \bar{\bold{X}}(s')\}^T / n, and the penalty is

P(\bold{\alpha}) = \lambda ||\bold{\alpha}||^2 + \lambda\tau ||\bold{\alpha}''||^2.

Then we estimate \bold{w}_k, k > 0, by regressing \{\bold{Y}_{\ell}\} on the scores \{\hat{z}_{\ell,1}, ..., \hat{z}_{\ell,K}\} using the least squares method. Here \hat{z}_{\ell,k} = \int (\bold{X}_{\ell}(s) - \bar{\bold{X}}(s))^T \hat{\bold{\alpha}}_k(s) ds.
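The second step above amounts to computing each score by numerical integration over the corresponding grid and then running an ordinary (multivariate) least-squares regression. The sketch below is a hypothetical illustration of that step only, assuming the estimated weight functions are available as alpha.hat[[i]] (a K x m_i matrix evaluated on t.x.list[[i]]); the names alpha.hat, z.hat and the trapezoidal rule used here are illustrative and are not the package's internal implementation.

trap.weights <- function(t) {            # trapezoidal quadrature weights
  d <- diff(t)
  c(d[1], d[-1] + d[-length(d)], d[length(d)]) / 2
}
get.scores <- function(X, t.x.list, alpha.hat) {
  n <- nrow(X[[1]]); K <- nrow(alpha.hat[[1]])
  z <- matrix(0, n, K)
  for (i in seq_along(X)) {
    Xc <- scale(X[[i]], center = TRUE, scale = FALSE)  # X_i(s) - mean curve
    w  <- trap.weights(t.x.list[[i]])
    z  <- z + Xc %*% (t(alpha.hat[[i]]) * w)           # adds \int (X_i - bar X_i) alpha_ki ds
  }
  z
}
# z.hat <- get.scores(X, t.x.list, alpha.hat)
# w.hat <- lm(Y ~ z.hat)                 # least-squares fit of Y on the scores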

Value

An object of the “cv.msof” class, which is used in the function pred.msof for prediction.

fitted_model

a list containing information about the fitted model.

is_Y_vector

a logical value indicating whether Y is a vector.

Y

input data Y.

x.smooth.params

a list for internal use.

Author(s)

Ruiyan Luo and Xin Qi

References

Ruiyan Luo and Xin Qi (Submitted)

Examples

#########################################################################
# Example: multiple scalar-on-function regression
#########################################################################


ptm <- proc.time()
library(FRegSigCom)
data(corn)
X=corn$X
Y=corn$Y
ntrain=60 # in paper, we use 80 observations as training data
xtrange=c(0,1) # the range of t in x(t).
t.x.list=list(seq(0,1,length.out=ncol(X)))
train.index=sample(1:nrow(X), ntrain)
X.train <- X.test <- list()
X.train[[1]]=X[train.index,]
X.test[[1]]=X[-(train.index),]
Y.train <- Y[train.index,]
Y.test <- Y[-(train.index),]

fit.cv.1=cv.msof(X.train, Y.train, t.x.list)# the cv procedure for our method
Y.pred=pred.msof(fit.cv.1, X.test) # make prediction on the test data

pred.error=mean((Y.pred-Y.test)^2)
print(c("pred.error=",pred.error))

print(proc.time()-ptm)
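Since Y here has multiple columns, pred.error above averages over all responses. A per-response breakdown can be obtained with, for example (assuming Y.pred and Y.test are matrices of matching dimensions, as in the example above):

print(colMeans((Y.pred - Y.test)^2))     # prediction error per response variable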
