fpcr                                                          R Documentation

Functional principal component regression

Description

Implements functional principal component regression (Reiss and Ogden, 2007, 2010) for generalized linear models with scalar responses and functional predictors.

Usage
fpcr(
y,
xfuncs = NULL,
fdobj = NULL,
ncomp = NULL,
pve = 0.99,
nbasis = NULL,
basismat = NULL,
penmat = NULL,
argvals = NULL,
covt = NULL,
mean.signal.term = FALSE,
spline.order = NULL,
family = "gaussian",
method = "REML",
sp = NULL,
pen.order = 2,
cv1 = FALSE,
nfold = 5,
store.cv = FALSE,
store.gam = TRUE,
...
)
Arguments

y: scalar outcome vector.
xfuncs: for 1D predictors, an n x p matrix of signals/functional predictors, with rows corresponding to subjects; for 2D predictors, an n x d1 x d2 array comprising n images of dimension d1 x d2.
fdobj: functional data object (class "fd") giving the functional predictors.
ncomp: number of principal components. If NULL, the number of components is chosen so that the proportion of variance explained is at least pve.
pve: proportion of variance explained: used to choose the number of principal components.
nbasis: number(s) of B-spline basis functions: either a scalar, or a vector of values to be compared. For 2D predictors, tensor product B-splines are used, with the given basis dimension(s) in each direction; alternatively, a two-column matrix may be supplied, each row of which gives a pair of basis dimensions. Ignored if fdobj is supplied.
basismat: a matrix of values of the basis functions at the points given by argvals, with one column per basis function (e.g., as produced by eval.basis in the fda package).
penmat: a penalty matrix for the basis coefficients (e.g., as produced by getbasispenalty in the fda package).
argvals: points at which the functional predictors and the coefficient function are evaluated. By default, if 1D functional predictors are given by the n x p matrix xfuncs, an equally spaced grid of p points is used.
covt: covariates: an n-row matrix, or a vector of length n.
mean.signal.term: logical: should the mean of each subject's signal be included as a covariate?
spline.order: order of B-splines used, if fdobj is not supplied; defaults to 4, i.e., cubic B-splines.
family: generalized linear model family. Current version supports "gaussian" (the default) and "binomial".
method: smoothing parameter selection method, passed to function gam of the mgcv package; see Wood (2006).
sp: a fixed smoothing parameter; if NULL, an optimal value is chosen (see method).
pen.order: order of derivative penalty applied when estimating the coefficient function; defaults to 2.
cv1: logical: should cross-validation be performed to select the best model if only one set of tuning parameters is provided? By default, FALSE.
nfold: the number of validation sets ("folds") into which the data are divided; by default, 5.
store.cv: logical: should a CV result table be included in the output? By default, FALSE.
store.gam: logical: should the gam object be included in the output? Defaults to TRUE.
...: other arguments passed to function gam.
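The interplay between ncomp and pve can be illustrated in base R: when ncomp is not supplied, the number of components is the smallest number whose cumulative proportion of variance explained reaches pve. A toy sketch of this selection rule (not fpcr's internal code), using prcomp on simulated signals:

```r
# Illustrative sketch (not fpcr's internals): choose the number of
# principal components so that at least `pve` of the variance is explained.
set.seed(1)
xfuncs <- matrix(rnorm(100 * 20), 100, 20)    # toy n x p signal matrix
pca <- prcomp(xfuncs, center = TRUE)
prop.var <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
ncomp <- min(which(prop.var >= 0.99))         # analogous to pve = 0.99
ncomp
```

With highly correlated signals (as in real spectra), far fewer components than p are typically needed to reach the pve threshold.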
Details

One-dimensional functional predictors can be given either in functional data object form, using argument fdobj (see the fda package of Ramsay, Hooker and Graves, 2009, and Method 1 in the example below), or explicitly, using xfuncs (see Method 2 in the example). In the latter case, arguments basismat and penmat can also be used to specify the basis and/or penalty matrices (see Method 3).
For two-dimensional predictors, functional data object form is not supported. Instead of radial B-splines as in Reiss and Ogden (2010), this implementation employs tensor product cubic B-splines, sometimes known as bivariate O-splines (Ormerod, 2008).
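The tensor product construction can be sketched in base R: given univariate basis matrices B1 (d1 x K1) and B2 (d2 x K2), the bivariate basis evaluated on the full d1 x d2 grid is their Kronecker product. A toy sketch (not fpcr's internal code), with simple polynomial columns standing in for the cubic B-splines the package actually uses:

```r
# Illustrative sketch (not fpcr's internals): a tensor product basis on a
# 2D grid is the Kronecker product of two univariate basis matrices.
d1 <- 5; d2 <- 4; K1 <- 3; K2 <- 2
s1 <- seq(0, 1, length.out = d1)
s2 <- seq(0, 1, length.out = d2)
B1 <- outer(s1, 0:(K1 - 1), `^`)  # d1 x K1 basis (polynomial stand-in)
B2 <- outer(s2, 0:(K2 - 1), `^`)  # d2 x K2 basis
Btensor <- kronecker(B1, B2)      # (d1*d2) x (K1*K2) bivariate basis
dim(Btensor)                      # 20 x 6
```

Row (i - 1) * d2 + j of Btensor corresponds to grid point (s1[i], s2[j]), and each column is a product of one basis function in each direction.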
For purposes of interpreting the fitted coefficients, please note that the functional predictors are decorrelated from the scalar predictors before fitting the model (when there are no scalar predictors other than the intercept, this just means the columns of the functional predictor matrix are de-meaned); see section 3.2 of Reiss (2006) for details.
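The decorrelation step described above can be sketched in base R: each column of the functional predictor matrix is replaced by its residuals after regressing on the scalar covariates (with an intercept only, this reduces to de-meaning the columns). A toy sketch, not the package's internal code:

```r
# Illustrative sketch (not fpcr's internals): project the functional
# predictors onto the orthogonal complement of the scalar covariates.
set.seed(2)
n <- 50; p <- 10
xfuncs <- matrix(rnorm(n * p), n, p)
Z <- cbind(1, rnorm(n))                  # intercept plus one scalar covariate
H <- Z %*% solve(crossprod(Z), t(Z))     # hat matrix of the covariates
xfuncs.decor <- xfuncs - H %*% xfuncs    # residualized functional predictors
# Each column is now orthogonal to the covariates:
max(abs(crossprod(Z, xfuncs.decor)))
```

Because the intercept is among the covariates, the decorrelated columns also have mean zero, matching the de-meaning special case noted above.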
Value

A list with components
gamObject: if store.gam = TRUE, an object of class gam (see gamObject in the mgcv package).
fhat: coefficient function estimate.
se: pointwise Bayesian standard error.
undecor.coef: undecorrelated coefficients for the covariates.
argvals: the supplied (or default) value of argvals.
cv.table: a table giving the CV criterion for each combination of nbasis and ncomp, if store.cv = TRUE; otherwise, the CV criterion only for the optimized combination of these parameters.
nbasis, ncomp: when CV is performed, the values of nbasis and ncomp that minimize the CV criterion.
Author(s)

Philip Reiss phil.reiss@nyumc.org, Lan Huo lan.huo@nyumc.org, and Lei Huang huangracer@gmail.com
References

Ormerod, J. T. (2008). On semiparametric regression and data mining. Ph.D. dissertation, School of Mathematics and Statistics, University of New South Wales.
Ramsay, J. O., Hooker, G., and Graves, S. (2009). Functional Data Analysis with R and MATLAB. New York: Springer.
Reiss, P. T. (2006). Regression with signals and images as predictors. Ph.D. dissertation, Department of Biostatistics, Columbia University.
Reiss, P. T., and Ogden, R. T. (2007). Functional principal component regression and functional partial least squares. Journal of the American Statistical Association, 102, 984–996.
Reiss, P. T., and Ogden, R. T. (2010). Functional generalized linear models with images as predictors. Biometrics, 66, 61–69.
Wood, S. N. (2006). Generalized Additive Models: An Introduction with R. Boca Raton, FL: Chapman & Hall.
Examples

require(fda)
### 1D functional predictor example ###
######### Octane data example #########
data(gasoline)
# Create the requisite functional data objects
bbasis = create.bspline.basis(c(900, 1700), 40)
wavelengths = 2*450:850
nir <- t(gasoline$NIR)
gas.fd = smooth.basisPar(wavelengths, nir, bbasis)$fd
# Method 1: Call fpcr with fdobj argument
gasmod1 = fpcr(gasoline$octane, fdobj = gas.fd, ncomp = 30)
plot(gasmod1, xlab="Wavelength")
## Not run:
# Method 2: Call fpcr with explicit signal matrix
gasmod2 = fpcr(gasoline$octane, xfuncs = gasoline$NIR, ncomp = 30)
# Method 3: Call fpcr with explicit signal, basis, and penalty matrices
gasmod3 = fpcr(gasoline$octane, xfuncs = gasoline$NIR,
basismat = eval.basis(wavelengths, bbasis),
penmat = getbasispenalty(bbasis), ncomp = 30)
# Check that all 3 calls yield essentially identical estimates
all.equal(gasmod1$fhat, gasmod2$fhat)
all.equal(gasmod1$fhat, gasmod3$fhat)
# But note that, in general, you'd have to specify argvals in Method 1
# to get the same coefficient function values as with Methods 2 & 3.
## End(Not run)
### 2D functional predictor example ###
n = 200; d = 70
# Create true coefficient function
ftrue = matrix(0,d,d)
ftrue[40:46,34:38] = 1
# Generate random functional predictors, and scalar responses
ii = array(rnorm(n*d^2), dim=c(n,d,d))
iimat = ii; dim(iimat) = c(n,d^2)
yy = iimat %*% as.vector(ftrue) + rnorm(n, sd=.3)
mm = fpcr(yy, ii, ncomp=40)
image(ftrue)
contour(mm$fhat, add=TRUE)
## Not run:
### Cross-validation ###
cv.gas = fpcr(gasoline$octane, xfuncs = gasoline$NIR,
nbasis=seq(20,40,5), ncomp = seq(10,20,5), store.cv = TRUE)
image(seq(20,40,5), seq(10,20,5), cv.gas$cv.table, xlab="Basis size",
ylab="Number of PCs", xaxp=c(20,40,4), yaxp=c(10,20,2))
## End(Not run)