fregre.pls: Functional Penalized PLS regression with scalar response

Functional Penalized PLS regression with scalar response

Description

Computes functional linear regression between a functional explanatory variable X(t) and a scalar response Y using penalized Partial Least Squares (PLS):

Y=<\tilde{X},β>+ε

where <.,.> denotes the inner product on L_2 and ε are random errors with mean zero, finite variance σ^2 and E[X(t)ε]=0.
The PLS components ν_1,...,ν_∞ form an orthonormal basis used to represent the functional data as X(t) = ∑_{k=1}^{∞} γ_k ν_k.
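
This decomposition can be inspected with fdata2pls (the function fregre.pls uses to build the components, per the Details section). The sketch below is illustrative only: the ncomp argument and the element names $x (scores γ_ik) and $rotation (components ν_k) are assumptions based on the analogous fdata2pc output.

library(fda.usc)
data(tecator)
X <- tecator$absorp.fdata               # functional covariate X(t)
y <- tecator$y$Fat                      # scalar response Y
pls.basis <- fdata2pls(X, y, ncomp = 3) # first 3 penalized PLS components
str(pls.basis$x)                        # assumed: n x 3 matrix of scores gamma_ik
plot(pls.basis$rotation)                # assumed: fdata object with the components nu_k(t)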

Usage

fregre.pls(fdataobj, y = NULL, l = NULL, lambda = 0, P = c(0, 0, 1), ...)

Arguments

fdataobj

fdata class object.

y

Scalar response with length n.

l

Index of components to include in the model.

lambda

Amount of penalization. Default value is 0, i.e. no penalization is used.

P

If P is a vector, its entries are the coefficients that define the penalty matrix (see P.penalty). The default P=c(0,0,1) penalizes the second derivative (curvature or acceleration). If P is a matrix, it is used directly as the penalty matrix. Both forms are illustrated in the sketch after this argument list.

...

Further arguments passed to or from other methods.
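
A hedged sketch of the two ways to supply P, using the tecator data from the Examples; the vector form and the explicit matrix built with P.penalty are intended to define the same second-derivative penalty (object names are for illustration only):

data(tecator)
x <- tecator$absorp.fdata
y <- tecator$y$Fat
# P as a vector of coefficients: penalize the second derivative
res.vec <- fregre.pls(x, y, l = 1:3, lambda = 10, P = c(0, 0, 1))
# P as a matrix: build the penalty explicitly on the observation grid
Pmat <- P.penalty(x$argvals, P = c(0, 0, 1))
res.mat <- fregre.pls(x, y, l = 1:3, lambda = 10, P = Pmat)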

Details

The functional penalized PLS (FPLS) algorithm maximizes the covariance between X(t) and the scalar response Y via the partial least squares (PLS) components. The penalized PLS components are computed in fdata2pls using an alternative formulation of the NIPALS algorithm proposed by Kraemer and Sugiyama (2011).
Let {ν_k}_{k=1}^{∞} be the functional PLS components, with X_i(t) = ∑_{k=1}^{∞} γ_{ik} ν_k and β(t) = ∑_{k=1}^{∞} β_k ν_k. The functional linear model is then estimated by:

y.est = <X, β.est> ≈ ∑_{k=1}^{k_n} γ_k β_k


The response can be fitted by:

  • λ=0, no penalization:

    y.est = ν(ν'ν)^{-1}ν'y

  • Penalized regression, λ>0 and P≠0. For example, P=c(0,0,1) penalizes the second derivative (curvature) via P=P.penalty(fdataobj["argvals"],P) (a worked sketch of this algebra follows the list):

    y.est = ν(ν'ν + λ ν'Pν)^{-1}ν'y
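
A minimal base-R sketch of the penalized least-squares algebra behind the second formula, using made-up component scores and an arbitrary penalty matrix (illustrative only, not the package's internal code; in fregre.pls the penalty term is built from P via P.penalty):

set.seed(1)
n <- 50; k <- 3
nu <- matrix(rnorm(n * k), n, k)                        # made-up component scores (n x k)
y <- drop(nu %*% c(1, -0.5, 0.25)) + rnorm(n, 0, 0.1)   # simulated scalar response
D <- diag(k)                                            # illustrative penalty in component space
lambda <- 0.5
b.hat <- solve(t(nu) %*% nu + lambda * D, t(nu) %*% y)  # penalized coefficients
y.hat <- drop(nu %*% b.hat)                             # fitted values
H <- nu %*% solve(t(nu) %*% nu + lambda * D) %*% t(nu)  # hat matrix (cf. H in Value)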

Value

Return:

  • call The matched call of fregre.pls function.

  • beta.est Estimated beta coefficient, an object of class fdata.

  • coefficients A named vector of coefficients.

  • fitted.values Estimated scalar response.

  • residuals y minus fitted.values.

  • H Hat matrix.

  • df.residual The residual degrees of freedom.

  • r2 Coefficient of determination.

  • GCV GCV criterion.

  • sr2 Residual variance.

  • l Index of components to include in the model.

  • lambda Amount of shrinkage.

  • fdata.comp Fitted object returned by the fdata2pls function.

  • lm Fitted object returned by the lm function.

  • fdataobj Functional explanatory data.

  • y Scalar response.

Author(s)

Manuel Febrero-Bande, Manuel Oviedo de la Fuente manuel.oviedo@udc.es

References

Preda C. and Saporta G. PLS regression on a stochastic process. Comput. Statist. Data Anal. 48 (2005): 149-158.

N. Kraemer, A.-L. Boulesteix, and G. Tutz (2008). Penalized Partial Least Squares with Applications to B-Spline Transformations and Functional Data. Chemometrics and Intelligent Laboratory Systems, 94, 60-69. doi: 10.1016/j.chemolab.2008.06.009

Martens, H., Naes, T. (1989) Multivariate calibration. Chichester: Wiley.

Kraemer, N., Sugiyama M. (2011). The Degrees of Freedom of Partial Least Squares Regression. Journal of the American Statistical Association. Volume 106, 697-705.

Febrero-Bande, M., Oviedo de la Fuente, M. (2012). Statistical Computing in Functional Data Analysis: The R Package fda.usc. Journal of Statistical Software, 51(4), 1-28. https://www.jstatsoft.org/v51/i04/

See Also

See Also as: P.penalty and fregre.pls.cv.
Alternative method: fregre.pc.

Examples

## Not run: 
data(tecator)
x <- tecator$absorp.fdata
y <- tecator$y$Fat
res <- fregre.pls(x,y,c(1:4))
summary(res)
res1 <- fregre.pls(x,y,l=1:4,lambda=100,P=c(1))
res4 <- fregre.pls(x,y,l=1:4,lambda=1,P=c(0,0,1))
summary(res4)
plot(res$beta.est)
lines(res1$beta.est,col=4)
lines(res4$beta.est,col=2)
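
# A hedged extension (not part of the original example): fregre.pls.cv,
# listed in See Also, can be used to select the components and lambda.
# Its result structure is not documented on this page, so only the element
# names are inspected here.
res.cv <- fregre.pls.cv(x, y)
names(res.cv)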

## End(Not run)
