gplssvd

Description

gplssvd takes in left (XLW, YLW) and right (XRW, YRW) constraints (usually diagonal matrices, but any positive semi-definite matrix is fine) that are applied to each data matrix (X and Y), respectively. The right constraints for each data matrix are used for the orthogonality conditions.
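As a rough sketch of what these constraints do (an illustration in base R, not the package's internal code): with diagonal positive semi-definite weights, the GPLSSVD amounts to a plain SVD of the cross-product of the weighted data matrices. With identity weights, as below, this reduces to ordinary PLS-correlation.

```r
# Minimal base-R sketch of the GPLSSVD core idea (an assumption of this
# note, not the package's own implementation).
set.seed(42)
X <- scale(matrix(rnorm(20 * 3), 20, 3))
Y <- scale(matrix(rnorm(20 * 4), 20, 4))

# Diagonal constraint matrices (here: identity, i.e., plain PLS-correlation).
XLW <- diag(20); YLW <- diag(20)
XRW <- diag(3);  YRW <- diag(4)

# Apply square roots of the constraints to each data matrix ...
X_tilde <- sqrt(XLW) %*% X %*% sqrt(XRW)  # sqrt() is elementwise, which is
Y_tilde <- sqrt(YLW) %*% Y %*% sqrt(YRW)  # the matrix sqrt for diagonal weights

# ... then decompose the weighted cross-product.
res <- svd(t(X_tilde) %*% Y_tilde)
# res$d holds the singular values; res$u and res$v the singular vectors.
```

With identity constraints this produces the same singular values as gplssvd(X, Y) with no weights supplied.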
Usage

gplssvd(X, Y, XLW, YLW, XRW, YRW, k = 0, tol = .Machine$double.eps)
Arguments

X     a data matrix
Y     a data matrix
XLW   X's Left Weights – the constraints applied to the left side (rows) of X
YLW   Y's Left Weights – the constraints applied to the left side (rows) of Y
XRW   X's Right Weights – the constraints applied to the right side (columns) of X
YRW   Y's Right Weights – the constraints applied to the right side (columns) of Y
k     total number of components to return, though the full variance will still be returned (see d_full and l_full)
tol   default is .Machine$double.eps. A tolerance level for eliminating effectively zero (small variance), negative, or imaginary eigen/singular value components.
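To illustrate how a tolerance like tol screens components (a base-R sketch, not the package's internal code): singular values at or below the threshold are treated as numerically zero and dropped.

```r
# Hypothetical singular values; the last is numerically zero.
d_candidates <- c(2.5, 1.1, 3e-17)
keep <- d_candidates > .Machine$double.eps  # the default tolerance
d_retained <- d_candidates[keep]            # only the meaningful components
```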
Value

A list with thirteen elements:

d_full   A vector containing the singular values of DAT above the tolerance threshold (based on eigenvalues).
l_full   A vector containing the eigenvalues of DAT above the tolerance threshold (i.e., the squared singular values).
d        A vector of length min(length(d_full), k) containing the retained singular values.
l        A vector of length min(length(l_full), k) containing the retained eigenvalues.
u        Left (rows) singular vectors.
p        Left (rows) generalized singular vectors.
fi       Left (rows) component scores.
lx       Latent variable scores for rows of X.
v        Right (columns) singular vectors.
q        Right (columns) generalized singular vectors.
fj       Right (columns) component scores.
ly       Latent variable scores for rows of Y.
Author(s)

Derek Beaton

See Also

tolerance_svd, geigen, and gsvd
Examples

# Three "two-table" technique examples
data(wine)
X <- scale(wine$objective)
Y <- scale(wine$subjective)
## Partial least squares (correlation)
pls.res <- gplssvd(X, Y)
## Canonical correlation analysis (CCA)
### NOTE:
#### This is not "traditional" CCA because of the generalized inverse.
#### However, the results are the same as standard CCA when the data are not rank deficient,
#### and this particular version uses tricks to minimize memory & computation.
cca.res <- gplssvd(
  X = MASS::ginv(t(X)),
  Y = MASS::ginv(t(Y)),
  XRW = crossprod(X),
  YRW = crossprod(Y)
)
## Reduced rank regression (RRR) a.k.a. redundancy analysis (RDA)
### NOTE:
#### This is not "traditional" RRR because of the generalized inverse.
#### However, the results are the same as standard RRR when the data are not rank deficient.
rrr.res <- gplssvd(
  X = MASS::ginv(t(X)),
  Y = Y,
  XRW = crossprod(X)
)
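A useful property to sanity-check after a PLS run: the latent variable scores for X and Y have cross-products equal to the singular values. The base-R demonstration below uses the plain PLS-correlation reduction (SVD of the cross-product) rather than gplssvd itself, so it runs without the package installed.

```r
# Demonstration (not package output): latent variable cross-products
# recover the singular values under PLS-correlation.
set.seed(1)
X <- scale(matrix(rnorm(30 * 3), 30, 3))
Y <- scale(matrix(rnorm(30 * 5), 30, 5))
dec <- svd(t(X) %*% Y)  # PLS-correlation: SVD of the cross-product
lx <- X %*% dec$u       # latent variable scores for rows of X
ly <- Y %*% dec$v       # latent variable scores for rows of Y
# Off-diagonal cross-products vanish; the diagonal recovers dec$d.
round(t(lx) %*% ly, 10) == round(diag(dec$d), 10)  # all TRUE
```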