pls: Partial Least Squares (PLS) Regression


View source: R/pls.R

Description

Function to perform Partial Least Squares (PLS) regression.

Usage

pls(
  X,
  Y,
  ncomp = 2,
  scale = TRUE,
  mode = c("regression", "canonical", "invariant", "classic"),
  tol = 1e-06,
  max.iter = 100,
  near.zero.var = FALSE,
  logratio = "none",
  multilevel = NULL,
  all.outputs = TRUE
)

Arguments

X

Numeric matrix of predictors, with rows as individual observations. Missing values (NAs) are allowed.

Y

Numeric matrix of response(s), with rows as individual observations matching those of X. Missing values (NAs) are allowed.

ncomp

Positive integer. The number of components to include in the model. Defaults to 2.

scale

Logical. If scale = TRUE, each block is standardized to zero means and unit variances (default: TRUE).

mode

Character string indicating the type of PLS algorithm to use. One of "regression", "canonical", "invariant" or "classic". See Details.

tol

Positive numeric used as the convergence criterion/tolerance during the iterative process. Defaults to 1e-06.

max.iter

Integer, the maximum number of iterations. Defaults to 100.

near.zero.var

Logical, see the internal nearZeroVar function (should be set to TRUE in particular for data with many zero values). Setting this argument to FALSE (when appropriate) will speed up the computations. Default value is FALSE.

logratio

Character, one of 'none' or 'CLR', specifying the log ratio transformation used to deal with compositional values that may arise from specific normalisations of sequencing data. Defaults to 'none'. See ?logratio.transfo for details.

multilevel

Numeric, design matrix for repeated measurement analysis, where multilevel decomposition is required. For a one-factor decomposition, the repeated measures on each individual (i.e. the individual IDs) are input as the first column. For a two-factor decomposition, the 2nd and 3rd columns indicate these factors. See examples in ?spls.

all.outputs

Logical. Computation can be faster when some specific (and non-essential) outputs are not calculated. Default = TRUE.

Details

The pls function fits PLS models with 1, …, ncomp components. Multi-response models are fully supported. The X and Y datasets can contain missing values.

The type of algorithm to use is specified with the mode argument. Four PLS algorithms are available: PLS regression ("regression"), PLS canonical analysis ("canonical"), redundancy analysis ("invariant") and the classical PLS algorithm ("classic") (see References). The modes differ in how the Y matrix is deflated across the iterations of the algorithm, i.e. across the successive components.

- Regression mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on X. Here the goal is to predict Y from X (Y and X play an asymmetric role). Consequently the latent variables computed to predict Y from X are different from those computed to predict X from Y.

- Canonical mode: the Y matrix is deflated with respect to the information extracted/modelled from the local regression on Y. Here X and Y play a symmetric role and the goal is similar to a Canonical Correlation type of analysis.

- Invariant mode: the Y matrix is not deflated.

- Classic mode: similar to the regression mode. It gives identical results for the variates and loadings associated with the X data set, but differs in the loading vectors associated with the Y data set (different normalisations are used). Classic mode is the PLS2 model as defined by Tenenhaus (1998), Chap 9.

Note that in all cases the results are the same on the first component, as deflation only starts after component 1 (illustrated in the sketch below).
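A minimal sketch comparing the four modes on the linnerud data also used in the Examples below (object names are illustrative); on the first X-variate the four fits should coincide:

library(mixOmics)
data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological

## fit the same model under each deflation mode
modes <- c("regression", "canonical", "invariant", "classic")
fits <- lapply(modes, function(m) pls(X, Y, ncomp = 2, mode = m))
names(fits) <- modes

## first X-variate: identical across modes, since deflation only starts at component 2
sapply(fits, function(fit) head(fit$variates$X[, 1], 3))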

Value

pls returns an object of class "pls", a list that contains the following components (a short sketch of accessing them follows the list):

X

the centered and standardized original predictor matrix.

Y

the centered and standardized original response vector or matrix.

ncomp

the number of components included in the model.

mode

the algorithm used to fit the model.

variates

list containing the variates.

loadings

list containing the estimated loadings for the X and Y variates. The loading weights multiplied with their associated deflated (residual) matrix gives the variate.

loadings.stars

list containing the estimated weighted loadings for the X and Y variates. The loading weights are projected so that when multiplied with their associated original matrix we obtain the variate.

names

list containing the names to be used for individuals and variables.

tol

the tolerance used in the iterative algorithm, used for subsequent S3 methods.

iter

the number of iterations of the algorithm for each component.

max.iter

the maximum number of iterations, used for subsequent S3 methods.

nzv

list containing the zero- or near-zero predictors information.

scale

whether scaling was applied per predictor.

logratio

whether log ratio transformation for relative proportion data was applied, and if so, which type of transformation.

prop_expl_var

The proportion of variance explained by each variate/component, divided by the total variance in the data (after removing any missing values), using the definition of 'redundancy'. Note that, contrary to PCA, this amount may not decrease with subsequent components, as the aim of the method is not to maximise the variance but the covariance between data sets (including the dummy matrix representation of the outcome variable in the case of supervised approaches).

input.X

numeric matrix of predictors in X that was input, before any scaling / logratio / multilevel transformation.

mat.c

matrix of coefficients from the regression of X / residual matrices X on the X-variates, to be used internally by predict.

defl.matrix

residual matrices X for each dimension.
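As an illustration of accessing these components, a short sketch using the liver.toxicity fit from the Examples below; it assumes that variates, loadings and prop_expl_var are each lists with X and Y elements:

library(mixOmics)
data(liver.toxicity)
toxicity.pls <- pls(liver.toxicity$gene, liver.toxicity$clinic, ncomp = 3)

toxicity.pls$ncomp                  # number of components fitted
head(toxicity.pls$variates$X[, 1])  # sample scores on the first X-variate
head(toxicity.pls$loadings$X[, 1])  # loading weights of the first X-variate
toxicity.pls$prop_expl_var$X        # proportion of variance explained per component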

missing values

Missing values can be estimated beforehand using the impute.nipals function. Otherwise, missing values are handled by element-wise deletion within the pls function, without having to delete the rows with missing data.
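A sketch of both routes on the linnerud data, with a few artificial NAs introduced for illustration (the call to impute.nipals assumes the usage documented in ?impute.nipals):

library(mixOmics)
data(linnerud)
X <- as.matrix(linnerud$exercise)
Y <- linnerud$physiological

## introduce artificial missing values in a copy of X
set.seed(42)
X.na <- X
X.na[sample(length(X.na), 5)] <- NA

## option 1: estimate the missing values with NIPALS, then fit the model
X.imputed <- impute.nipals(X.na, ncomp = 2)
pls.imputed <- pls(X.imputed, Y, ncomp = 2)

## option 2: let pls handle the NAs internally (element-wise deletion)
pls.na <- pls(X.na, Y, ncomp = 2)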

multilevel

Multilevel (s)PLS enables the integration of two data sets measured on the same individuals. This approach differs from multilevel sPLS-DA, as the aim is to select subsets of variables from both data sets that are highly positively or negatively correlated across samples. The approach is unsupervised, i.e. no prior knowledge about the sample groups is included.
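A minimal sketch on simulated data (object names and dimensions are purely illustrative); see also the examples in ?spls:

library(mixOmics)
set.seed(42)

## simulated repeated-measures design: 10 subjects, each measured twice
subject.id <- rep(1:10, each = 2)
X <- matrix(rnorm(20 * 50), nrow = 20)  # e.g. 50 expression variables
Y <- matrix(rnorm(20 * 10), nrow = 20)  # e.g. 10 clinical variables

## one-factor decomposition: the subject ID is the first column of the design
design <- data.frame(sample = subject.id)
multilevel.pls <- pls(X, Y, ncomp = 2, multilevel = design)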

logratio and multilevel

The logratio transform and the multilevel analysis are performed sequentially as internal pre-processing steps, through logratio.transfo and withinVariation respectively.
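A conceptual sketch of these two pre-processing steps applied explicitly, in that order, on simulated compositional data (names and dimensions are illustrative; equivalence with the internal computation is assumed here, not guaranteed):

library(mixOmics)
set.seed(42)

## simulated count-like compositional data with repeated measures
X.counts <- matrix(rpois(20 * 30, lambda = 50) + 1, nrow = 20)
design <- data.frame(sample = rep(1:10, each = 2))

## step 1: centered log ratio transformation
X.clr <- logratio.transfo(X.counts, logratio = "CLR")
## step 2: within-subject (multilevel) variation
X.within <- withinVariation(as.matrix(X.clr), design = design)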

Author(s)

Sébastien Déjean, Ignacio González, Florian Rohart, Kim-Anh Lê Cao, Al J Abadi

References

Tenenhaus, M. (1998). La régression PLS: théorie et pratique. Paris: Editions Technip.

Wold H. (1966). Estimation of principal components and related models by iterative least squares. In: Krishnaiah, P. R. (editors), Multivariate Analysis. Academic Press, N.Y., 391-420.

Abdi H (2010). Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics, 2(1), 97-106.

See Also

spls, summary, plotIndiv, plotVar, predict, perf and http://www.mixOmics.org for more details.

Examples

data(linnerud)
X <- linnerud$exercise
Y <- linnerud$physiological
linn.pls <- pls(X, Y, mode = "classic")

## Not run: 
data(liver.toxicity)
X <- liver.toxicity$gene
Y <- liver.toxicity$clinic
toxicity.pls <- pls(X, Y, ncomp = 3)

## End(Not run)
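
## Not run: 
## A follow-up sketch on the liver.toxicity fit above, illustrating the
## downstream functions listed in See Also (cross-validation settings are illustrative)
plotIndiv(toxicity.pls, comp = c(1, 2))  # sample plot on the first two components
plotVar(toxicity.pls, comp = c(1, 2))    # correlation circle plot of the variables
toxicity.perf <- perf(toxicity.pls, validation = "Mfold", folds = 5, nrepeat = 10)
plot(toxicity.perf)

## End(Not run)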
