cca: R Documentation
Performs a canonical correlation (and canonical redundancy) analysis on two sets of variables.
cca(x, y, xlab = colnames(x), ylab = colnames(y), xcenter = TRUE,
    ycenter = TRUE, xscale = FALSE, yscale = FALSE,
    standardize.scores = TRUE, use = "complete.obs", na.rm = TRUE,
    use.eigs = FALSE, max.dim = Inf, reg.param = NULL)

## S3 method for class 'cca'
plot(x, ...)

## S3 method for class 'cca'
print(x, ...)

## S3 method for class 'cca'
summary(object, ...)
x: a single vector or a matrix whose columns contain the x variables; for the print and plot methods, an object of class cca.
y: a single vector or a matrix whose columns contain the y variables.
xlab: an optional vector of labels for the x variables.
ylab: an optional vector of labels for the y variables.
xcenter: boolean; demean the x variables?
ycenter: boolean; demean the y variables?
xscale: boolean; scale the x variables to unit variance?
yscale: boolean; scale the y variables to unit variance?
standardize.scores: boolean; rescale scores (and coefficients) to produce scores of unit variance?
use: the method for handling missing observations when computing covariances (see cov).
na.rm: boolean; remove missing values during redundancy analysis?
use.eigs: boolean; use eigs rather than eigen to perform the underlying eigendecomposition?
max.dim: maximum number of canonical variates to extract (only relevant if less than the minimum of the number of columns of x and y).
reg.param: an optional L2 regularization parameter (or vector thereof).
object: an object of class cca.
...: additional arguments.
Canonical correlation analysis (CCA) is a form of linear subspace analysis, involving the projection of two sets of vectors (here, the variable sets x and y) onto a joint subspace. The goal of CCA is to find a sequence of linear transformations of each variable set such that the correlations between the transformed variables are maximized (under the proviso that each transformed variable must be orthogonal to those preceding it). These transformed variables, known as "canonical variates" (CVs), can be thought of as expressing the common variation across the data sets, in a manner analogous to the role of principal components in within-set analysis (see, e.g., princomp). Since the rank of the joint subspace is equal to the minimum of the ranks of the two spaces spanned by the initial data vectors, the number of CVs will usually be equal to the minimum of the number of x and y variables (perhaps fewer, if the sets are not of full rank or if max.dim is used to constrain the number of variates extracted).
Formally, we may describe the CCA solution as follows. Given data matrices X and Y, let Cxx, Cxy, Cyx, and Cyy be the respective sample covariance matrices of X versus itself, X versus Y, Y versus X, and Y versus itself. Now, for some i less than or equal to the minimum rank of X and Y, let u_i be the ith eigenvector of Cxx^-1 %*% Cxy %*% Cyy^-1 %*% Cyx, with corresponding eigenvalue λ_i. Then u_i contains the coefficients projecting X onto the ith canonical variate; the corresponding scores are given by X %*% u_i. Similarly, let v_i be the ith eigenvector of Cyy^-1 %*% Cyx %*% Cxx^-1 %*% Cxy. Then v_i contains the coefficients projecting Y onto the ith canonical variate (with scores Y %*% v_i). The eigenvalue in the second case is the same as in the first, and corresponds to the square of the ith canonical correlation for the CCA solution, that is, the correlation between the X and Y scores on the ith canonical variate. Since the canonical correlation structure is unaffected by rescaling of the canonical variate scores, it is common to adjust the coefficients u_i and v_i so that the resulting scores have unit variance; this option is controlled here via the standardize.scores argument.
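For exposition, the following sketch reproduces this construction directly via eigen on simulated data. This is illustrative only (the variable names are arbitrary, and cca itself additionally handles scaling, regularization, and rank-deficient cases):

# Illustrative sketch of the eigendecomposition above (not the cca
# implementation itself); assumes full-rank, centered data matrices
set.seed(10)
X <- scale(matrix(rnorm(300), 100, 3), scale = FALSE)
Y <- scale(matrix(rnorm(200), 100, 2), scale = FALSE)
Cxx <- cov(X); Cyy <- cov(Y)
Cxy <- cov(X, Y); Cyx <- t(Cxy)
ex <- eigen(solve(Cxx) %*% Cxy %*% solve(Cyy) %*% Cyx)
ey <- eigen(solve(Cyy) %*% Cyx %*% solve(Cxx) %*% Cxy)
u1 <- Re(ex$vectors[, 1]) # coefficients for the first x canonical variate
v1 <- Re(ey$vectors[, 1]) # coefficients for the first y canonical variate
# The first canonical correlation is the sqrt of the shared leading
# eigenvalue, and matches the correlation between the resulting scores
sqrt(Re(ex$values[1]))
abs(cor(X %*% u1, Y %*% v1))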
CCA output can be fairly complex. Quantities of particular interest include the correlations between the original variables in each set and their respective canonical variates (structural correlations or loadings), the coefficients which take the original variables into the CVs, and of course the correlations between the CV scores in one set and their corresponding scores in the opposite set (the canonical correlations). The canonical correlations provide a basic measure of concordance between the transformed variables, but are surprisingly uninformative by themselves; canonical redundancies (see below) are of more typical interest. Interpretation of CVs is usually performed by inspection of loadings, which reveal the extent to which each CV is associated with particular variables in each set. The squared loadings, in particular, convey the fraction of variance in each original variable which is accounted for by a given CV (though not necessarily by the variables in the opposite set!).
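As a quick illustration (a sketch, assuming the element names listed under Value below), the loadings can be recovered by correlating the raw variables with their own set's canonical variate scores:

# Sketch: loadings are correlations between raw variables and CV scores
# (uses the LifeCycleSavings data from the Examples below)
data(LifeCycleSavings)
pop <- LifeCycleSavings[, 2:3]
oec <- LifeCycleSavings[, -(2:3)]
cca.fit <- cca(pop, oec)
all.equal(cor(pop, cca.fit$canvarx), cca.fit$xstructcorr,
          check.attributes = FALSE)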
A common interest in the context of CCA is the extent to which the variance of one set of variables can be accounted for by the other (in the usual least squares sense). While it is tempting to interpret the squared canonical correlations in this manner, this is incorrect: the squared canonical correlations convey the fraction of variance in the CV scores from one variable set which can be accounted for by scores from the other, but say nothing about the extent to which the CVs themselves account for variation in the original variables. The variance in one set explainable by the other is instead expressed via the so-called redundancy index, which combines the squared canonical correlations with the canonical adequacy (within-set variance accounted for) for each CV. The use of the redundancy index in this way is sometimes called “(canonical) redundancy analysis”, although it is simply an alternate means of presenting CCA results.
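In terms of the returned elements (see Value below), the per-variate redundancy for a set is its adequacy multiplied by the corresponding squared canonical correlation. A sketch of this relationship, reusing cca.fit from the preceding snippet:

# Sketch: redundancy = within-set adequacy * squared canonical correlation
rd.x <- cca.fit$xcanvad * cca.fit$corrsq # x variance explained via y, per CV
all.equal(as.vector(rd.x), as.vector(cca.fit$xvrd))
sum(rd.x) # total redundancy for the x set (cf. cca.fit$xrd)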
As the name of the technique implies, CCA is a symmetric procedure: the designation of one variable set as x and the other as y is arbitrary, and may be reversed without incident. (Note, however, that the coefficients and redundancies are set-specific, and will also be reversed in this case.) CCA with a single x or y variable is equivalent to OLS regression (with the squared canonical correlation corresponding to the R^2), and CCA on a single variable pair yields the familiar Pearson product-moment correlation. Centering and scaling the data prior to analysis is equivalent to working with correlation matrices in the underlying analysis (with interpretation/effects analogous to the principal components case).
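These special cases are easy to verify numerically; a sketch using the LifeCycleSavings data (exact agreement assumes the default settings):

# Sketch: one-variable special cases of CCA
data(LifeCycleSavings)
# With a single y variable, the squared canonical correlation is the
# R^2 from the corresponding OLS regression
fit <- cca(LifeCycleSavings[, 2:3], LifeCycleSavings[, 1, drop = FALSE])
ols <- summary(lm(sr ~ pop15 + pop75, data = LifeCycleSavings))
all.equal(as.vector(fit$corrsq), ols$r.squared)
# With one variable per set, the canonical correlation is |Pearson's r|
fit2 <- cca(LifeCycleSavings$pop15, LifeCycleSavings$pop75)
all.equal(as.vector(fit2$corr),
          abs(cor(LifeCycleSavings$pop15, LifeCycleSavings$pop75)))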
Finding the CCA solution can pose numerical challenges, ironically more so when the degree of potential dimension reduction is highest. In recalcitrant cases, it can be useful to apply regularization to stabilize the solution. The optional reg.param argument can be used for this purpose: if given as a single numeric value, it adds an L2 (aka "ridge") penalty to each variable set with the corresponding multiplier value. reg.param can also be given as a vector of length 2, in which case the first value is applied to the x variables and the second to the y variables. Relatedly, in high-dimension/low-rank problems it can be useful to extract far fewer canonical variates than the nominal maximum. This can be controlled by max.dim, though the default diagonalization method computes the entire eigendecomposition prior to canonical variate extraction. In such cases, it can be helpful to employ the alternative diagonalization method controlled by the use.eigs argument, which computes only those dimensions that are actually required. Experience suggests that this method (eigs) is less stable than the base eigen, but it can be much faster in high-dimensional settings.
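For instance, set-specific penalties can be supplied as a length-2 vector (a sketch with arbitrary penalty values):

# Sketch: separate ridge penalties for the x and y variable sets
data(LifeCycleSavings)
fit.reg <- cca(LifeCycleSavings[, 2:3], LifeCycleSavings[, -(2:3)],
               reg.param = c(0.5, 2)) # first value for x, second for y
fit.reg$corr # regularized canonical correlations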
An object of class cca, whose elements are as follows:
corr: Canonical correlations.
corrsq: Squared canonical correlations (shared variance across canonical variates).
xcoef: Coefficients for the x variables on each canonical variate.
ycoef: Coefficients for the y variables on each canonical variate.
canvarx: Canonical variate scores for the x variables.
canvary: Canonical variate scores for the y variables.
xstructcorr: Structural correlations (loadings) for the x variables on each canonical variate.
ystructcorr: Structural correlations (loadings) for the y variables on each canonical variate.
xstructcorrsq: Squared structural correlations for the x variables.
ystructcorrsq: Squared structural correlations for the y variables.
xcrosscorr: Canonical cross-loadings for the x variables.
ycrosscorr: Canonical cross-loadings for the y variables.
xcrosscorrsq: Squared canonical cross-loadings for the x variables.
ycrosscorrsq: Squared canonical cross-loadings for the y variables.
xcancom: Canonical communalities for the x variables.
ycancom: Canonical communalities for the y variables.
xcanvad: Canonical variate adequacies for the x variables.
ycanvad: Canonical variate adequacies for the y variables.
xvrd: Canonical redundancies for the x variables.
yvrd: Canonical redundancies for the y variables.
xrd: Total canonical redundancy for the x variables.
yrd: Total canonical redundancy for the y variables.
chisq: Sequential chi-squared values for tests of each respective canonical variate using Bartlett's omnibus statistic.
df: Degrees of freedom for Bartlett's test.
xlab: Variable names for x.
ylab: Variable names for y.
reg.param: Regularization parameter (if any).
Carter T. Butts <buttsc@uci.edu>
Mardia, K. V.; Kent, J. T.; and Bibby, J. M. 1979. Multivariate Analysis. London: Academic Press.
See also F.test.cca, cancor, princomp.
#Example parallels the R builtin cancor example
data(LifeCycleSavings)
pop <- LifeCycleSavings[, 2:3]
oec <- LifeCycleSavings[, -(2:3)]
cca.fit <- cca(pop, oec)
cca.regfit <- cca(pop, oec, reg.param = 1) # Some minimal regularization

#View the results
cca.fit
summary(cca.fit)
plot(cca.fit)
cca.regfit #Not a vast difference, usually....