SVD.pls (R Documentation)
Description:

Performs Partial Least Squares (PLS) regression using the Singular Value Decomposition (SVD) of the cross-covariance matrix. The method estimates the latent components by identifying directions in the predictor and response spaces that maximize their covariance, using the leading singular vectors of the matrix R = X^T Y.

This function is called internally by pls.regression and is not intended to be used directly. Use pls.regression(..., calc.method = "SVD") instead.
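The key identity is that the leading left and right singular vectors of R = X^T Y are the unit-norm weight pair maximizing the covariance between the projected blocks. A minimal standalone R sketch (illustrative only, not the package's code) demonstrates this:

```r
# The first left/right singular vectors of t(X) %*% Y maximize
# cov(X w, Y q) over unit-norm w and q; the achieved value is the
# leading singular value.
set.seed(1)
X <- scale(matrix(rnorm(50 * 4), 50, 4))
Y <- scale(matrix(rnorm(50 * 2), 50, 2))

R  <- crossprod(X, Y)       # p x q cross-product t(X) %*% Y
sv <- svd(R)
w  <- sv$u[, 1]             # first predictor weight vector
q  <- sv$v[, 1]             # first response weight vector

# Covariance achieved by this pair equals the leading singular value
cov_wq <- drop(crossprod(X %*% w, Y %*% q))
all.equal(cov_wq, sv$d[1])  # TRUE
```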
Usage:

SVD.pls(x, y, n.components = NULL)
Arguments:

x: A numeric matrix or data frame of predictors (X), with dimensions n × p.

y: A numeric matrix or data frame of response variables (Y), with dimensions n × q.

n.components: Integer specifying the number of PLS components to extract. If NULL, defaults to
Details:

The algorithm begins by z-scoring both x and y (centering and scaling to unit variance). The initial residual matrices are set to the scaled values: E = X_scaled, F = Y_scaled.

For each component h = 1, ..., H:

1. Compute the cross-covariance matrix R = E^T F.
2. Perform the SVD R = U D V^T.
3. Extract the leading singular vectors: w = U[, 1], q = V[, 1].
4. Compute scores: t = E w (normalized to unit length) and u = F q.
5. Compute the loading p = E^T t and the regression scalar b = t^T u.
6. Deflate the residuals: E <- E - t p^T, F <- F - b t q^T.
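The per-component loop can be sketched in base R as follows (variable names are illustrative, not the function's internals):

```r
# Sketch of the SVD-PLS component loop: extract H components from
# z-scored blocks, deflating after each one.
set.seed(1)
X  <- matrix(rnorm(60 * 5), 60, 5)
Y  <- matrix(rnorm(60 * 2), 60, 2)
E  <- scale(X); Fm <- scale(Y)          # z-scored residual blocks
H  <- 2
Tm <- matrix(0, nrow(E), H)             # X-scores
P  <- matrix(0, ncol(E), H)             # X-loadings
Qm <- matrix(0, ncol(Fm), H)            # Y-weights
b  <- numeric(H)                        # scalar regression weights
for (h in seq_len(H)) {
  R  <- crossprod(E, Fm)                # R = E'F
  sv <- svd(R)
  w  <- sv$u[, 1]; q <- sv$v[, 1]       # leading singular vectors
  th <- E %*% w
  th <- th / sqrt(sum(th^2))            # normalized score t
  uh <- Fm %*% q                        # Y-score u
  p  <- crossprod(E, th)                # loading p = E't
  b[h] <- drop(crossprod(th, uh))       # scalar weight b = t'u
  E  <- E - th %*% t(p)                 # deflate X block
  Fm <- Fm - b[h] * th %*% t(q)         # deflate Y block
  Tm[, h] <- th; P[, h] <- p; Qm[, h] <- q
}
```

Deflating E against t p^T makes each subsequent score orthogonal to the previous ones, which is what makes the later pseudoinverse step well behaved.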
After all components are extracted, a post-processing step removes components with zero regression
weight. The scaled regression coefficients are computed using the Moore–Penrose pseudoinverse of the
loading matrix P, and then rescaled to the original variable units.
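One way to realize this step in base R is via an SVD-based pseudoinverse; the exact composition below (coefficients as pinv(t(P)) combined with diag(b) and t(Q), then rescaled by the original standard deviations) is an assumption about the described procedure, not the package's verbatim code:

```r
# Hedged sketch: Moore-Penrose pseudoinverse in base R, used to map
# the scaled loadings back to regression coefficients.
pinv <- function(A, tol = 1e-10) {
  s <- svd(A)
  d <- ifelse(s$d > tol * max(s$d), 1 / s$d, 0)  # invert nonzero d
  s$v %*% (d * t(s$u))                           # V D^+ U'
}
# Given loadings P (p x H), response weights Q (q x H), and scalar
# weights b (length H) from the component loop, one plausible form:
#   B_scaled <- pinv(t(P)) %*% diag(b, length(b)) %*% t(Q)
# Rescaling to original units (sd() of each column, assumed):
#   B[j, k]  <- B_scaled[j, k] * sd(Y[, k]) / sd(X[, j])
```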
Value:

A list containing:

- Character string indicating the model type ("PLS Regression").
- Matrix of predictor scores (n × H).
- Matrix of response scores (n × H).
- Matrix of predictor weights (p × H).
- Matrix of normalized response weights (q × H).
- Matrix of predictor loadings (p × H).
- Matrix of response loadings (q × H).
- Vector of scalar regression weights (length H).
- Matrix of final regression coefficients in the original scale (p × q).
- Vector of intercepts (length q); all zeros because the data are centered.
- Percent of total X variance explained by each component.
- Percent of total Y variance explained by each component.
- Cumulative percent of X variance explained.
- Cumulative percent of Y variance explained.
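Under the setup described in Details (z-scored blocks, unit-norm scores t_h, unit-norm singular vectors q_h), the variance captured by component h reduces to ||p_h||^2 for X and b_h^2 for Y, relative to the total sum of squares of each scaled block. A hedged sketch of how such percentages could be computed (the helper names are hypothetical, not the package's):

```r
# Percent variance explained per component, assuming unit-norm scores
# and z-scored blocks. A z-scored n x p block has total sum of
# squares (n - 1) * p, since scale() uses the n - 1 denominator.
explained_x <- function(P, n, p) 100 * colSums(P^2) / ((n - 1) * p)
explained_y <- function(Q, b, n, q) 100 * b^2 * colSums(Q^2) / ((n - 1) * q)
```

For unit-norm q_h, colSums(Q^2) is 1, so the Y share per component is simply b_h^2 over the total.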
References:

Abdi, H., & Williams, L. J. (2013). Partial least squares methods: Partial least squares correlation and partial least square regression. Methods in Molecular Biology, 930, 549–579. doi:10.1007/978-1-62703-059-5_23

de Jong, S. (1993). SIMPLS: An alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18(3), 251–263. doi:10.1016/0169-7439(93)85002-X
Examples:

## Not run:
set.seed(123)  # for reproducible simulated data
X <- matrix(rnorm(100 * 10), 100, 10)
Y <- matrix(rnorm(100 * 2), 100, 2)
model <- pls.regression(X, Y, n.components = 3, calc.method = "SVD")
model$coefficients
## End(Not run)