pls: Partial Least Squares (PLS) Regression


View source: R/pls.R

Description

Function to perform Partial Least Squares (PLS) regression.

Usage

pls(X = NULL, Y = NULL, ncomp = 2, scale = TRUE,
  mode = c("regression", "canonical", "invariant", "classic"),
  tol = 1e-06, max.iter = 100, near.zero.var = FALSE,
  logratio = "none", multilevel = NULL, all.outputs = TRUE,
  data = NULL, formula = NULL)

Arguments

X

Numeric matrix of predictors, or the name of such an assay from data. NAs are allowed.

Y

Numeric vector or matrix of responses (for multi-response models), or name of such an assay or colData from data. NAs are allowed.

ncomp

The number of components to include in the model. Defaults to 2.

scale

Boolean. If scale = TRUE, each block is standardized to zero mean and unit variance (default: TRUE).

mode

Character string. What type of algorithm to use, (partially) matching one of "regression", "canonical", "invariant" or "classic". See Details.

tol

Convergence stopping value.

max.iter

Integer, the maximum number of iterations.

near.zero.var

Boolean, see the internal nearZeroVar function (should be set to TRUE in particular for data with many zero values). Setting this argument to FALSE (when appropriate) will speed up the computations. Defaults to FALSE.

logratio

Character, one of 'none' or 'CLR'. Defaults to 'none'.

multilevel

Design matrix for repeated measurement analysis, where multilevel decomposition is required. For a one-factor decomposition, the repeated measures on each individual, i.e. the individual's ID, are input as the first column. For a two-factor decomposition, the 2nd and 3rd columns indicate those factors. See examples in ?spls.

all.outputs

Boolean. Computation can be faster when some specific (and non-essential) outputs are not calculated. Default = TRUE.

data

A MultiAssayExperiment object.

formula

(X and Y must be NULL.) A formula of the form LHS ~ RHS (names of objects without quotation marks), where LHS and RHS (in effect Y and X, respectively) are numeric matrices. LHS and RHS can also be assay names from data. LHS can also be a numeric colData name from data. See Examples.

Details

The pls function fits PLS models with 1, … ,ncomp components. Multi-response models are fully supported. The X and Y datasets can contain missing values.

The type of algorithm to use is specified with the mode argument. Four PLS algorithms are available: PLS regression ("regression"), PLS canonical analysis ("canonical"), redundancy analysis ("invariant") and the classical PLS algorithm ("classic") (see References). The modes differ in how the Y matrix is deflated across the iterations of the algorithm, i.e. across the successive components.

- Regression mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on X. Here the goal is to predict Y from X (Y and X play an asymmetric role). Consequently the latent variables computed to predict Y from X are different from those computed to predict X from Y.

- Canonical mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on Y. Here X and Y play a symmetric role and the goal is similar to a Canonical Correlation type of analysis.

- Invariant mode: the Y matrix is not deflated.

- Classic mode: similar to regression mode. It gives identical results for the variates and loadings associated with the X data set, but differs for the loading vectors associated with the Y data set (different normalisations are used). Classic mode is the PLS2 model as defined by Tenenhaus (1998), Chap 9.

Note that in all cases the results are the same on the first component, as deflation only starts after component 1; the sketch below illustrates this.
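
A minimal sketch comparing two modes on the linnerud data used in the Examples (assuming the mixOmics.data package is available): the first-component variates coincide, while later components generally differ because Y is deflated differently.

library(mixOmics.data)
X <- linnerud$exercise
Y <- linnerud$physiological

## fit the same two-component model under two deflation modes
pls.reg <- pls(X = X, Y = Y, ncomp = 2, mode = "regression")
pls.can <- pls(X = X, Y = Y, ncomp = 2, mode = "canonical")

## identical on component 1; component 2 typically differs
all.equal(pls.reg$variates$X[, 1], pls.can$variates$X[, 1])
all.equal(pls.reg$variates$X[, 2], pls.can$variates$X[, 2])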

Missing values can be estimated by reconstituting the data matrix with the nipals function. Otherwise, missing values are handled by casewise deletion within the pls algorithm, without having to delete the rows with missing data.
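
A hedged sketch of the missing-value handling on the linnerud data, with a few NAs introduced artificially (the nipals-based reconstruction in the comments assumes a reconst argument and rec output, and is only one option):

library(mixOmics.data)
X <- as.matrix(linnerud$exercise)
Y <- linnerud$physiological

## introduce a few artificial missing values into the predictor matrix
set.seed(1)
X.na <- X
X.na[sample(length(X.na), 3)] <- NA

## pls handles the NAs internally (casewise deletion within the algorithm)
pls.na <- pls(X = X.na, Y = Y, ncomp = 2)

## alternatively, reconstitute X first (assuming nipals(..., reconst = TRUE)$rec)
# X.rec <- nipals(X.na, ncomp = 2, reconst = TRUE)$rec
# pls.rec <- pls(X = X.rec, Y = Y, ncomp = 2)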

The logratio transform and multilevel analysis are performed sequentially as internal pre-processing steps, through logratio.transfo and withinVariation respectively.
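
A hedged sketch of both pre-processing options on simulated data (the data below are made up purely for illustration; see the examples in ?spls for the package's own multilevel use cases):

## compositional-type data for the CLR log-ratio transform (strictly positive proportions)
set.seed(42)
counts <- matrix(rpois(20 * 10, lambda = 50) + 1, nrow = 20)
props  <- counts / rowSums(counts)
y      <- rnorm(20)
pls.clr <- pls(X = props, Y = y, ncomp = 2, logratio = "CLR")

## one-factor multilevel decomposition: subject IDs go in the first column of the design
subject <- rep(1:10, each = 2)                 # two repeated measures per subject
X.rep   <- matrix(rnorm(20 * 5), nrow = 20)
pls.ml  <- pls(X = X.rep, Y = y, ncomp = 2, multilevel = data.frame(sample = subject))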

Value

pls returns an object of class "mixo_pls", a list that contains the following components:

X

the centered and standardized original predictor matrix.

Y

the centered and standardized original response vector or matrix.

ncomp

the number of components included in the model.

mode

the algorithm used to fit the model.

variates

list containing the variates.

loadings

list containing the estimated loadings for the X and Y variates.

names

list containing the names to be used for individuals and variables.

tol

the tolerance used in the iterative algorithm, used for subsequent S3 methods.

iter

the number of iterations of the algorithm for each component.

max.iter

the maximum number of iterations, used for subsequent S3 methods.

nzv

list containing the zero- or near-zero predictors information.

scale

whether scaling was applied per predictor.

logratio

whether log ratio transformation for relative proportion data was applied, and if so, which type of transformation.

explained_variance

amount of variance explained per component (note that contrary to PCA, this amount may not decrease as the aim of the method is not to maximise the variance, but the covariance between data sets).

input.X

numeric matrix of predictors in X that was input, before any scaling / log ratio / multilevel transformation.

mat.c

matrix of coefficients from the regression of X / residual matrices X on the X-variates, to be used internally by predict.

defl.matrix

residual matrices X for each dimension.
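
A brief sketch of how the returned components can be inspected, with names following the list above (linnerud comes from mixOmics.data as in the Examples):

library(mixOmics.data)
fit <- pls(X = linnerud$exercise, Y = linnerud$physiological, ncomp = 2)

fit$explained_variance   # variance explained per component, for the X and Y blocks
fit$loadings$X           # loading vectors for the X block
head(fit$variates$Y)     # Y variates (component scores)
fit$iter                 # number of iterations per component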

Author(s)

Sébastien Déjean, Ignacio González, Kim-Anh Lê Cao, Al J Abadi.

References

Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris: Editions Technic.

Wold H. (1966). Estimation of principal components and related models by iterative least squares. In: Krishnaiah, P. R. (editors), Multivariate Analysis. Academic Press, N.Y., 391-420.

Abdi H (2010). Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics, 2(1), 97-106.

See Also

spls, summary, plotIndiv, plotVar, predict, perf and http://www.mixOmics.org for more details.

Examples

## Not run:
library(mixOmics.data)

## ---------------- with X and Y as matrices
X <- linnerud$exercise
Y <- linnerud$physiological
pls.res1 <- pls(X=X, Y=Y)
plotVar(pls.res1)

## ---------------- formula method for matrices
## 'formula' argument should be explicitly mentioned (formula = ...)
## for correct method dispatch
pls.res2 <- pls(formula = Y ~ X)
## exclude calls and see if all outputs  are identical
identical(pls.res1[-1], pls.res2[-1])
#> TRUE
## ---------------- MultiAssayExperiment and assay names as X and Y
## 'data' argument should be explicitly mentioned for correct method dispatch
pls.res3 <- pls(X='exercise', Y='physiological', data = linnerud.mae)
identical(pls.res1[-1], pls.res3[-1])
#> TRUE

## ---------------- MultiAssayExperiment and formula with assay names
pls.res4 <- pls(formula = physiological ~ exercise, data = linnerud.mae)
identical(pls.res1[-1], pls.res4[-1])
#> TRUE

## ---------------- MultiAssayExperiment; X=assay and Y=colData
toxicity.pls1 <- pls(data = liver.toxicity.mae,  formula = Dose.Group~gene, ncomp = 3)
toxicity.pls2 <- pls(data = liver.toxicity.mae,  Y='Dose.Group', X='gene', ncomp = 3)
identical(toxicity.pls1[-1], toxicity.pls2[-1])
#> TRUE

## End(Not run)
