%% template for Sweave vignettes
%\VignetteIndexEntry{HE plot examples}
%\VignetteDepends{candisc,car,lattice,reshape2}
%\VignetteKeywords{MANOVA, ANOVA, Canonical discriminant analysis, effect plots, HE plots}
%\VignettePackage{heplots}

\documentclass[11pt]{article}
\usepackage{float,graphicx,geometry}
\usepackage{amssymb, amsmath, amsfonts}
\usepackage{latexsym}

\usepackage{fancyvrb}
\usepackage{color}
\usepackage{url}
\usepackage[round]{natbib}
\usepackage{hyperref}

%\bibpunct{(}{)}{;}{a}{,}{,}
\usepackage{bm}
\usepackage{upquote}
%\usepackage{sfheaders}
\usepackage{xspace}
%\usepackage{multicol}       % for table of contents
\usepackage[toc]{multitoc}   % for table of contents

\geometry{left=1.2in, right=1.2in, top=1in, bottom=1in}

% math stuff
\newcommand{\given}{\ensuremath{\, | \,}}
\renewcommand{\vec}[1]{\ensuremath{\bm{#1}}}
\newcommand{\mat}[1]{\ensuremath{\bm{#1}}}
\newcommand{\trans}{\ensuremath{^\mathsf{T}}}
\newcommand{\diag}[1]{\ensuremath{\mathrm{diag} (#1)}}
%\def\binom#1#2{{#1 \choose #2}}%
%\newcommand{\implies}{ \ensuremath{\mapsto} }

\newcommand{\sizedmat}[2]{% \mathord{\mathop{$\mathbf{#1}$}\limits_{(#2)}}% }

\newcommand{\LM}{LM} \newcommand{\MLM}{MLM\xspace} \newcommand{\MLMs}{MLMs\xspace} \newcommand{\Var}{\ensuremath{\mathsf{Var}}}

%\newenvironment{equation*}{\displaymath}{\enddisplaymath}%

\newcommand{\tabref}[1]{Table~\ref{#1}}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\newcommand{\secref}[1]{Section~\ref{#1}}
\newcommand{\loglin}{loglinear }

% R stuff \newcommand{\pkg}[1]{{\textsf{#1} package}} \newcommand{\Rpackage}[1]{{\textsf{#1}}} \newcommand{\R}{\textsf{R}\xspace} \newcommand{\code}[1]{{#1}} \newcommand{\func}[1]{{#1()}}

\title{HE Plot Examples}
\author{Michael Friendly}
\date{\footnotesize{Using \Rpackage{heplots} version `r packageDescription("heplots")[["Version"]]` and
  \Rpackage{candisc} version `r packageDescription("candisc")[["Version"]]`; Date: `r Sys.Date()`}}

\definecolor{Soutput}{rgb}{0,0,0.56} \definecolor{Sinput}{rgb}{0.56,0,0} \DefineVerbatimEnvironment{Sinput}{Verbatim}{formatcom={\color{Sinput}},fontsize=\footnotesize, baselinestretch=0.75} \DefineVerbatimEnvironment{Soutput}{Verbatim}{formatcom={\color{Soutput}},fontsize=\footnotesize, baselinestretch=0.75}

\includegraphics[width=.3\linewidth, trim=0 30 0 30]{fig/plot-plastic2}
\includegraphics[width=.3\linewidth, trim=0 30 0 30]{fig/plot-skulls-can2}
\includegraphics[width=.3\linewidth, trim=0 30 0 30]{fig/plot-rohwer-HE1}

This vignette provides some worked examples of the analysis of multivariate linear models
(\MLMs)
with graphical methods for visualizing results using the \Rpackage{heplots} and \Rpackage{candisc} packages.
The emphasis here is on using these methods in \R, and understanding how they help reveal
aspects of these models that might not be apparent from other graphical displays.

No attempt is made here to describe the theory of \MLMs or the statistical details behind
HE plots and their reduced-rank canonical cousins.
For that, see @FoxFriendlyMonette:09:compstat, @Friendly:07:manova, and @Friendly:06:hesoft.

{\small
\tableofcontents
}

library(knitr)
knitr::opts_chunk$set(
  message = FALSE,
  warning = FALSE,
  fig.height=5,
  fig.width=5,
  # results='hide',
  # fig.keep='none',
  fig.path='fig/plot',
  echo=TRUE
  )
set.seed(1071)
old.opts <- options(width=85, digits=5, useFancyQuotes = FALSE, continue="  ")
library(heplots)
library(candisc)

MANOVA Designs

Plastic film data

An experiment was conducted to determine the optimum conditions for extruding plastic film. Three responses, tear resistance (tear), film gloss (gloss), and film opacity (opacity), were measured in relation to two factors, rate of extrusion (rate) and amount of an additive (additive), both of which were set to two values, High and Low. The design is thus a $2\times 2$ MANOVA, with $n=5$ per cell. This example illustrates 2D and 3D HE plots, the difference between "effect" scaling and "evidence" (significance) scaling, and visualizing composite linear hypotheses.

We begin with an overall MANOVA for the two-way MANOVA model. Because each effect has 1 df, all of the multivariate statistics are equivalent, but we specify test.statistic="Roy" because Roy's test has a natural visual interpretation in HE plots.

plastic.mod <- lm(cbind(tear, gloss, opacity) ~ rate*additive, data=Plastic)
Anova(plastic.mod, test.statistic="Roy")

For the three responses jointly, the main effects of rate and additive are significant, while their interaction is not. In some approaches to testing effects in multivariate linear models (\MLMs), significant multivariate tests are often followed by univariate tests on each of the responses separately to determine which responses contribute to each significant effect. In \R, these analyses are most conveniently performed using the update() method for the mlm object plastic.mod.

Anova(update(plastic.mod, tear ~ .))
Anova(update(plastic.mod, gloss ~ .))
Anova(update(plastic.mod, opacity ~ .))

The results above show significant main effects for tear, a significant main effect of rate for gloss, and no significant effects for opacity, but they don't shed light on the nature of these effects. Traditional univariate plots of the means for each variable separately are useful, but they don't allow visualization of the relations among the response variables.
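Such univariate plots of the cell means are easy to produce with base \R; the following is only a minimal sketch using interaction.plot(), one response at a time.

# univariate plots of cell means, one response at a time (a simple sketch)
op <- par(mfrow=c(1, 3))
for (resp in c("tear", "gloss", "opacity")) {
  interaction.plot(Plastic$rate, Plastic$additive, Plastic[[resp]],
                   xlab="rate", trace.label="additive",
                   ylab=paste("mean of", resp))
}
par(op)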

We can visualize these effects for pairs of variables in an HE plot, showing the "size" and orientation of hypothesis variation ($\mathbf{H}$) in relation to error variation ($\mathbf{E}$) as ellipsoids. When, as here, the model terms have 1 degree of freedom, the $\mathbf{H}$ ellipsoids degenerate to a line.

# Compare evidence and effect scaling 
colors = c("red", "darkblue", "darkgreen", "brown")
heplot(plastic.mod, size="evidence", col=colors, cex=1.25)
heplot(plastic.mod, size="effect", add=TRUE, lwd=4, term.labels=FALSE, col=colors)

With effect scaling, the $\mathbf{H}$ and $\mathbf{E}$ sums of squares and products matrices are both divided by the error df, giving multivariate analogs of univariate measures of effect size, e.g., $(\bar{y}_1-\bar{y}_2) / s$. With significance scaling, the $\mathbf{H}$ ellipse is further divided by $\lambda_\alpha$, the critical value of Roy's largest root statistic. This scaling has the property that an $\mathbf{H}$ ellipse will protrude somewhere outside the $\mathbf{E}$ ellipse iff the multivariate test is significant at level $\alpha$. \@ref{fig:plastic1} shows both scalings, using a thinner line for significance scaling. Note that the (degenerate) ellipse for additive is significant, but does not protrude outside the $\mathbf{E}$ ellipse in this view. All that is guaranteed is that it will protrude somewhere in the 3D space of the responses.
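The $\mathbf{E}$ matrix behind these plots can also be computed directly from the residuals of the model; a small sketch of the effect-size scaling just described:

# error (E) SSP matrix and its effect-size scaling, E / df_e (sketch)
E <- crossprod(residuals(plastic.mod))
E / df.residual(plastic.mod)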

By design, means for the levels of interaction terms are not shown in the HE plot, because doing so in general can lead to messy displays. We can add them here for the term rate:additive as follows:

## add interaction means
intMeans <- termMeans(plastic.mod, 'rate:additive', abbrev.levels=2)
#rownames(intMeans) <- apply(expand.grid(c('Lo','Hi'), c('Lo', 'Hi')), 1, paste, collapse=':')
points(intMeans[,1], intMeans[,2], pch=18, cex=1.2, col="brown")
text(intMeans[,1], intMeans[,2], rownames(intMeans), adj=c(0.5,1), col="brown")
lines(intMeans[c(1,3),1], intMeans[c(1,3),2], col="brown")
lines(intMeans[c(2,4),1], intMeans[c(2,4),2], col="brown")


\includegraphics[width=.6\textwidth, clip]{plot-plastic1}

\caption{HE plot for effects on tear and gloss according to the factors rate, additive and their interaction, rate:additive. The thicker lines show effect size scaling, the thinner lines show significance scaling.} {#fig:plastic1}

The factor means in this plot (\@ref{fig:plastic1}) have a simple interpretation: The high rate level yields greater tear resistance but lower gloss than the low level. The high additive amount produces greater tear resistance and greater gloss.

The rate:additive interaction is not significant overall, though it approaches significance for gloss. The cell means for the combinations of rate and additive shown in this figure suggest an explanation (for tutorial purposes): at the low level of rate, there is little difference in gloss between the levels of additive; at the high level of rate, the difference in gloss is larger. The $\mathbf{H}$ ellipse for the rate:additive interaction therefore "points" in the direction of gloss, indicating that this variable contributes to the interaction in the multivariate tests.

In some MANOVA models, it is of interest to test sub-hypotheses of a given main effect or interaction, or conversely to test composite hypotheses that pool together certain effects to test them jointly. All of these tests (and, indeed, the tests of terms in a given model) are carried out as tests of general linear hypotheses in the MLM.

In this example, it might be useful to test two composite hypotheses: one corresponding to both main effects jointly, and another corresponding to no difference among the means of the four groups (equivalent to a joint test for the overall model). These tests are specified in terms of subsets or linear combinations of the model parameters.

plastic.mod

Thus, for example, the joint test of both main effects tests the parameters rateHigh and additiveHigh.

print(linearHypothesis(plastic.mod, c("rateHigh", "additiveHigh"), title="Main effects"), 
      SSP=FALSE)

print(linearHypothesis(plastic.mod, c("rateHigh", "additiveHigh", "rateHigh:additiveHigh"), 
                       title="Groups"), SSP=FALSE)

Correspondingly, we can display these tests in the HE plot by specifying them in the hypotheses argument to heplot(), as shown in \@ref{fig:plastic2}.

heplot(plastic.mod, hypotheses=list("Group" = 
       c("rateHigh", "additiveHigh", "rateHigh:additiveHigh ")),
       col=c(colors, "purple"),
       lwd=c(2, 3, 3, 3, 2), cex=1.25)
heplot(plastic.mod, hypotheses=list("Main effects" = 
       c("rateHigh", "additiveHigh")), add=TRUE,
       col=c(colors, "darkgreen"), cex=1.25)

\caption{HE plot for tear and gloss, supplemented with ellipses representing the joint tests of main effects and all group differences} {#fig:plastic2}

Finally, a 3D HE plot can be produced with heplot3d(), giving \@ref{fig:plastic1-HE3D}. This plot was rotated interactively to a view that shows both main effects protruding outside the error ellipsoid.

colors = c("pink", "darkblue", "darkgreen", "brown")
heplot3d(plastic.mod, col=colors)


\includegraphics[clip]{plastic1-HE3D} \caption{3D HE plot for the plastic film data} {#fig:plastic1-HE3D}

Effects of physical attractiveness on mock jury decisions

In a social psychology study of influences on jury decisions by @Plaster:89, male participants (prison inmates) were shown a picture of one of three young women.
Pilot work had indicated that one woman was beautiful, another of average physical attractiveness, and the third unattractive. Participants rated the woman they saw on each of twelve attributes on scales of 1--9. These measures were used to check on the manipulation of "attractiveness" by the photo.

Then the participants were told that the person in the photo had committed a Crime, and were asked to rate the seriousness of the crime and to recommend a prison sentence, in Years. The data are contained in the data frame MockJury.\footnote{The data were made available courtesy of Karl Wuensch, from \url{http://core.ecu.edu/psyc/wuenschk/StatData/PLASTER.dat}.}

str(MockJury)

Sample sizes were roughly balanced across the three conditions for attractiveness of the photo, and across the combinations of this factor with Crime:

table(MockJury$Attr)
table(MockJury$Attr, MockJury$Crime)

The main questions of interest were: (a) Does attractiveness of the "defendant" influence the sentence or perceived seriousness of the crime?
(b) Does attractiveness interact with the nature of the crime?

But first, we try to assess the ratings of the photos in relation to the presumed categories of the independent variable Attr. The questions here are (a) do the ratings of the photos on physical attractiveness (phyattr) confirm the original classification? (b) how do other ratings differentiate the photos? To keep things simple, we consider only a few of the other ratings in a one-way MANOVA.

(jury.mod1 <- lm( cbind(phyattr, happy, independent, sophisticated) ~ Attr, data=MockJury))
Anova(jury.mod1, test="Roy")

Note that Beautiful is the baseline category of Attr, so the intercept term gives the means for this level. We see that the means are significantly different on all four variables collectively, by a joint multivariate test. A traditional analysis might follow up with univariate ANOVAs for each measure separately.
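A sketch of such follow-up tests, using the same update() idiom shown earlier for the plastic film data (only two of the four responses are shown here):

# univariate follow-up ANOVAs for individual responses (sketch)
Anova(update(jury.mod1, phyattr ~ .))
Anova(update(jury.mod1, happy ~ .))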

As an aid to interpretation of the MANOVA results, we can examine the test of Attr in this model with an HE plot for pairs of variables, e.g., for phyattr and happy (\@ref{fig:jury-mod1-HE}). The means in this plot show that Beautiful is rated higher on physical attractiveness than the other two photos, while Unattractive is rated less happy than the other two. Comparing the sizes of the ellipses, differences among group means on physical attractiveness contribute more to significance than do ratings on happy.

heplot(jury.mod1, main="HE plot for manipulation check")


\includegraphics{fig/plot-jury-mod1-HE}

\caption{HE plot for ratings of phyattr and happy according to the classification of photos on Attr} {#fig:jury-mod1-HE}

The HE plot for all pairs of variables (\@ref{fig:jury-mod1-pairs}) shows that the means for happy and independent are highly correlated, as are the means for phyattr and sophisticated. In most of these pairwise plots, the means form a triangle rather than a line, suggesting that these attributes are indeed measuring different aspects of the photos.


pairs(jury.mod1)

\caption{HE plots for all pairs of ratings according to the classification of photos on Attr} {#fig:jury-mod1-pairs}

With 3 groups and 4 variables, the $\mathbf{H}$ ellipsoid has only $s=\min(df_h, p)=2$ dimensions. candisc() carries out a canonical discriminant analysis for the MLM and returns an object that can be used to show an HE plot in the space of the canonical dimensions. This is plotted in \@ref{fig:jury-can1}.

jury.can <- candisc(jury.mod1)
jury.can


opar <- par(xpd=TRUE)
heplot(jury.can, prefix="Canonical dimension", main="Canonical HE plot")
par(opar)

\caption{Canonical discriminant HE plot for the MockJury data} {#fig:jury-can1}

From this we can see that 91% of the variation among group means is accounted for by the first dimension, and this is nearly completely aligned with phyattr. The second dimension, accounting for the remaining 9%, is determined nearly entirely by ratings on happy and independent. This display gives a relatively simple account of the results of the MANOVA and the relations of each of the ratings to discrimination among the photos.
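These alignments can also be checked numerically from the structure (correlation) coefficients; a minimal sketch, assuming the structure component of the "candisc" object:

# correlations of the ratings with the canonical scores (sketch)
round(jury.can$structure, 2)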

Proceeding to the main questions of interest, we carry out a two-way MANOVA of the responses Years and Serious in relation to the independent variables Attr and Crime.

# influence of Attr of photo and nature of crime on Serious and Years
jury.mod2 <- lm( cbind(Serious, Years) ~ Attr * Crime, data=MockJury)
Anova(jury.mod2, test="Roy")

We see that there is a nearly significant interaction between Attr and Crime and a strong effect of Attr.


heplot(jury.mod2)

\caption{HE plot for the two-way MANOVA for Years and Serious} {#fig:jury-mod2-HE}

The HE plot shows that the nearly significant interaction of Attr:Crime is mainly in terms of differences among the groups on the response of Years of sentence, with very little contribution of Serious. We explore this interaction in a bit more detail below. The main effect of Attr is also dominated by differences among groups on Years.

If we assume that Years of sentence is the main outcome of interest, it also makes sense to carry out a step-down test of this variable by itself, controlling for the rating of seriousness (Serious) of the crime. The model jury.mod3 below is equivalent to an ANCOVA for Years.

# stepdown test (ANCOVA), controlling for Serious
jury.mod3 <- lm( Years ~ Serious + Attr * Crime, data=MockJury)
t(coef(jury.mod3))
Anova(jury.mod3)

Thus, even when adjusting for Serious rating, there is still a significant main effect of Attr of the photo, but also a hint of an interaction of Attr with Crime. The coefficient for Serious indicates that participants awarded 0.84 additional years of sentence for each 1 unit step on the scale of seriousness of crime.
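The precision of this estimate can be checked with a conventional confidence interval; a minimal sketch:

# approximate 95% confidence interval for the Serious slope (sketch)
confint(jury.mod3, parm="Serious")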

A particularly useful method for visualizing the fitted effects in such univariate response models is provided by the effects package. By default, allEffects() calculates the predicted values for all high-order terms in a given model, and the plot method produces plots of these values for each term. The statements below produce \@ref{fig:jury-mod3-eff}.

\setkeys{Gin}{width=1\textwidth}

library(effects)
jury.eff <- allEffects(jury.mod3)
plot(jury.eff, ask=FALSE)


\includegraphics[width=\textwidth]{fig/plot-jury-mod3-eff}

\caption{Effect plots for Serious and the Attr*Crime interaction in the ANCOVA model jury.mod3.} {#fig:jury-mod3-eff}

The effect plot for Serious shows the expected linear relation between that variable and Years. Of greater interest here is the nature of the possible interaction of Attr and Crime on Years of sentence, controlling for Serious. The effect plot shows that for the crime of Swindle, there is a much greater Years of sentence awarded to Unattractive defendants.
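The same pattern can be verified numerically from the cell means of Years classified by Attr and Crime; a minimal sketch:

# cell means of Years by Attr and Crime (sketch)
aggregate(Years ~ Attr + Crime, data=MockJury, FUN=mean)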

Egyptian skulls from five epochs

This example examines physical measurements of size and shape made on 150 Egyptian skulls from five epochs ranging from 4000 BC to 150 AD. The measures are: maximal breadth (mb), basibregmatic height (bh), basialveolar length (bl), and nasal height (nh) of each skull. See \url{http://www.redwoods.edu/instruct/agarwin/anth_6_measurements.htm} for the definitions of these measures, and \@ref{fig:skulls} for a diagram. The question of interest is whether and how these measurements change over time. Systematic changes over time are of interest because this would indicate interbreeding with immigrant populations.

#| out.width="60%",
#| fig.cap='Diagram of the skull measurements. Maximal breadth and basibregmatic height are the basic measures of "size" of a skull.  Basialveolar length and nasal height are important anthropometric measures of "shape".'
knitr::include_graphics("fig/skulls.jpg")
data(Skulls)
str(Skulls)
table(Skulls$epoch)

Note that epoch is an ordered factor, so the default contrasts will be orthogonal polynomials. This assumes that epoch values are equally spaced, which they are not. However, examining the linear and quadratic trends is useful to a first approximation.
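If one wanted to respect the actual spacing of the epochs, the polynomial contrasts could be constructed from the approximate epoch midpoints implied by the level labels using contr.poly(); this is only a sketch and is not used in the analyses below.

# orthogonal polynomial contrasts based on approximate epoch midpoints (sketch; not used below)
contr.poly(5, scores=c(-4000, -3300, -1850, -200, 150))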

For ease of labeling various outputs, it is useful to trim the epoch values and assign more meaningful variable labels.

# make shorter labels for epochs
Skulls$epoch <- factor(Skulls$epoch, labels=sub("c","",levels(Skulls$epoch)))
# assign better variable labels
vlab <- c("maxBreadth", "basibHeight", "basialLength", "nasalHeight")

We start with some simple displays of the means by epoch. From the numbers, the means don't seem to vary much.
A pairs plot, \@ref{fig:skulls4}, joining points by epoch is somewhat more revealing for the bivariate relations among means.

means <- aggregate(cbind(mb, bh, bl, nh) ~ epoch, data=Skulls, FUN=mean)[,-1]
rownames(means) <- levels(Skulls$epoch)
means
pairs(means, vlab,
      panel = function(x, y) {
          text(x, y, levels(Skulls$epoch))
          lines(x,y)
      })


\includegraphics[width=.8\textwidth]{fig/plot-skulls4}

\caption{Pairs plot of means of Skulls data, by epoch} {#fig:skulls4}

Perhaps better for visualizing the trends over time is a set of boxplots, joining means over epoch. Using bwplot() from the lattice package requires reshaping the data from wide to long format. The following code produces \@ref{fig:skulls-bwplot}.

library(lattice)
library(reshape2)
sklong <- melt(Skulls, id="epoch")

bwplot(value ~ epoch | variable, data=sklong, scales="free", 
    ylab="Variable value", xlab="Epoch",
    strip=strip.custom(factor.levels=paste(vlab, " (", levels(sklong$variable), ")", sep="")),
    panel = function(x,y, ...) {
        panel.bwplot(x, y, ...)
        panel.linejoin(x,y, col="red", ...)
    }) 


\includegraphics[width=.8\textwidth]{fig/plot-skulls-bwplot}

\caption{Boxplots of Skulls data, by epoch, for each variable} {#fig:skulls-bwplot}

Now, fit the MANOVA model, and test the effect of epoch with Manova(). We see that the multivariate means differ substantially.

# fit manova model
sk.mod <- lm(cbind(mb, bh, bl, nh) ~ epoch, data=Skulls)
Manova(sk.mod)

Perhaps of greater interest are the more focused tests of trends over time. These are based on tests of the coefficients in the model sk.mod being jointly equal to zero, for subsets of the (polynomial) contrasts in epoch.

coef(sk.mod)

We use linearHypothesis() for a multivariate test of the epoch.L linear effect. The linear trend is highly significant. It is not obvious from \@ref{fig:skulls4} that maximal breadth (mb) and nasal height (nh) are increasing over time, while the other two measurements have negative slopes.

coef(sk.mod)["epoch.L",]
print(linearHypothesis(sk.mod, "epoch.L"), SSP=FALSE) # linear component

linearHypothesis() can also be used to test composite hypotheses. Here we test all non-linear coefficients jointly. The result indicates that, collectively, all non-linear terms are not significantly different from zero.

print(linearHypothesis(sk.mod, c("epoch.Q", "epoch.C", "epoch^4")), SSP=FALSE)

Again, HE plots can show the patterns of these tests of multivariate hypotheses. With four response variables, it is easiest to look at all pairwise HE plots with the pairs.mlm() function. The statement below produces \@ref{fig:skulls-HE-pairs}. In this plot, we show the hypothesis ellipsoids for the overall effect of epoch, as well as those for the tests just shown for the linear trend component epoch.L as well as the joint test of all non-linear terms.

pairs(sk.mod, variables=c(1,4,2,3),
    hypotheses=list(Lin="epoch.L", NonLin=c("epoch.Q", "epoch.C", "epoch^4")), 
    var.labels=vlab[c(1,4,2,3)])


\includegraphics[width=.8\textwidth]{fig/plot-skulls-HE-pairs}

\caption{Pairs HE plot of Skulls data, showing multivariate tests of epoch, as well as tests of linear and nonlinear trends.} {#fig:skulls-HE-pairs}

These plots have an interesting geometric interpretation: the $\mathbf{H}$ ellipses for the overall effect of epoch are representations of the additive decomposition of this effect into
$\mathbf{H}$ ellipses for the linear and nonlinear hypothesis tests according to

$$\mathbf{H}_{\textrm{epoch}} = \mathbf{H}_{\textrm{linear}} + \mathbf{H}_{\textrm{nonlinear}}$$

where the linear term has rank 1 (and so plots as a line), while the nonlinear term has rank 3. In each panel, it can be seen that the major direction of $\mathbf{H}_{\textrm{epoch}}$ leading to significance of this effect corresponds essentially to the linear contrast. $\mathbf{H}_{\textrm{nonlinear}}$ is the orthogonal complement of $\mathbf{H}_{\textrm{linear}}$ in the space of $\mathbf{H}_{\textrm{epoch}}$, but nowhere does it protrude beyond the boundary of the $\mathbf{E}$ ellipsoid.
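This decomposition can be verified numerically by adding the hypothesis SSP matrices from the two linearHypothesis() tests and comparing their sum to the SSP matrix for epoch from Anova(). The component names used below (SSPH from linearHypothesis(), SSP from Anova()) are assumptions about the structure of those objects; this is only a sketch.

# numeric check of H_epoch = H_linear + H_nonlinear (sketch; component names assumed)
H.lin    <- linearHypothesis(sk.mod, "epoch.L")$SSPH
H.nonlin <- linearHypothesis(sk.mod, c("epoch.Q", "epoch.C", "epoch^4"))$SSPH
all.equal(H.lin + H.nonlin, Anova(sk.mod)$SSP$epoch)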

These relations can be seen somewhat more easily in 3D, as produced using heplot3d() by the following statement. The resulting plot is better viewed interactively in R and is not reproduced here. It would be seen there that the ellipsoid for the nonlinear terms is nearly flat in one direction, corresponding to the panel for (mb, hb) in \@ref{fig:skulls-HE-pairs}.

heplot3d(sk.mod, hypotheses=list(Lin="epoch.L", Quad="epoch.Q", 
                            NonLin=c("epoch.Q", "epoch.C", "epoch^4")), 
    col=c("pink", "blue"))

Finally, a simpler view of these effects can be shown in the canonical space corresponding to the canonical discriminant analysis for epoch. The computations are performed by candisc() which returns a class "candisc" object.

library(candisc)
sk.can <- candisc(sk.mod)
sk.can

The output above shows that, although $\mathbf{H}_{\textrm{epoch}}$ is of rank 4, two dimensions account for 96% of the between-epoch variation. By the likelihood ratio test, only the canonical correlation for the first dimension can be considered non-zero.

The canonical HE plot is produced by plotting the sk.can object, giving \@ref{fig:skulls-can2}.

heplot(sk.can, prefix="Canonical dimension")


\includegraphics[width=.8\textwidth]{fig/plot-skulls-can2}

\caption{Canonical discriminant HE plot of the Skulls data} {#fig:skulls-can2}

In this plot, the first canonical dimension (88%) essentially corresponds to the ordered values of epoch, from right to left. The variable vectors for maximal breadth (mb), basialveolar length (bl) and nasal height (nh) are largely aligned with this dimension, indicating that they distinguish the groups of skulls according to time. The lengths of these vectors indicate their relative contribution to discrimination among the group means. Only the variable vector for basibregmatic height (bh) points in the direction of the second canonical dimension, corresponding to higher means in the middle of the range of epochs. We leave it to forensic anthropologists to determine if this has any meaning.

Multivariate Multiple Regression Designs

The ideas behind HE plots extend naturally to multivariate multiple regression (MMRA) and multivariate analysis of covariance (MANCOVA). In MMRA, the $\mathbf{X}$ matrix contains only quantitative predictors, while in MANCOVA designs, there is a mixture of factors and quantitative predictors (covariates).

In the MANCOVA case, there is often a subtle difference in emphasis: true MANCOVA analyses focus on the differences among groups defined by the factors, adjusting for (or controlling for) the quantitative covariates. Analyses concerned with homogeneity of regression focus on quantitative predictors and attempt to test whether the regression relations are the same for all groups defined by the factors.

Rohwer data

To illustrate the homogeneity of regression flavor, we use data from a study by Rohwer (given in Timm, 1975: Ex.\ 4.3, 4.7, and 4.23)\nocite{Timm:75} on kindergarten children, designed to determine how well a set of paired-associate (PA) tasks predicted performance on the Peabody Picture Vocabulary Test (PPVT), a student achievement test (SAT), and the Raven Progressive Matrices test (Raven). The PA tasks varied in how the stimuli were presented, and are called named (n), still (s), named still (ns), named action (na), and sentence still (ss).

Two groups were tested: a group of $n=37$ children from a low socioeconomic status (SES) school, and a group of $n=32$ high SES children from an upper-class, white residential school. The data are in the data frame Rohwer in the heplots package:

some(Rohwer,n=6)

At one extreme, we might be tempted to fit separate regression models for each of the High and Low SES groups. This approach is not recommended because it lacks power and does not allow hypotheses about equality of slopes and intercepts to be tested directly.

rohwer.ses1 <- lm(cbind(SAT, PPVT, Raven) ~ n + s + ns + na + ss, data=Rohwer, subset=SES=="Hi")
Anova(rohwer.ses1)

rohwer.ses2 <- lm(cbind(SAT, PPVT, Raven) ~ n + s + ns + na + ss, data=Rohwer, subset=SES=="Lo")
Anova(rohwer.ses2)

This allows separate slopes and intercepts for each of the two groups, but it is difficult to compare the coefficients numerically.

coef(rohwer.ses1)
coef(rohwer.ses2)
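For a single response, say SAT, the two sets of coefficients can at least be lined up side by side; a minimal sketch:

# side-by-side comparison of the SAT coefficients for the two SES groups (sketch)
round(cbind(Hi=coef(rohwer.ses1)[, "SAT"], Lo=coef(rohwer.ses2)[, "SAT"]), 3)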

Nevertheless, we can visualize the results with HE plots, and here we make use of the fact that several HE plots can be overlaid using the option add=TRUE as shown in \@ref{fig:rohwer-HE1}.

heplot(rohwer.ses1, ylim=c(40,110),col=c("red", "black"), lwd=2, cex=1.2)
heplot(rohwer.ses2, add=TRUE, col=c("blue", "black"), grand.mean=TRUE, error.ellipse=TRUE, lwd=2, cex=1.2)
means <- aggregate(cbind(SAT,PPVT)~SES, data=Rohwer,  mean)
text(means[,2], means[,3], labels=means[,1], pos=3, cex=2, col=c("red", "blue"))


\includegraphics[width=.6\textwidth, trim=0 30 0 30]{fig/plot-rohwer-HE1}

\caption{HE plot for SAT and PPVT, showing the effects for the PA predictors for the High and Low SES groups separately} {#fig:rohwer-HE1}

We can readily see the difference in means for the two SES groups (High greater on both variables) and it also appears that the slopes of the predictor ellipses are shallower for the High than the Low group, indicating greater relation with the SAT score.

Alternatively (and optimistically), we can fit a MANCOVA model that allows different means for the two SES groups on the responses, but constrains the slopes for the PA covariates to be equal.

# MANCOVA, assuming equal slopes
rohwer.mod <- lm(cbind(SAT, PPVT, Raven) ~ SES + n + s + ns + na + ss, 
                 data=Rohwer)
Anova(rohwer.mod)

Note that, although the multivariate tests for two of the covariates (ns and na) are highly significant, univariate multiple regression tests for the separate responses [from summary(rohwer.mod)] are relatively weak. We can also test the global 5 df hypothesis, $\mathbf{B}$=$\mathbf{0}$, that all covariates have null effects for all responses as a linear hypothesis (suppressing display of the error and hypothesis SSP matrices),

(covariates <- rownames(coef(rohwer.mod))[-(1:2)])
Regr<-linearHypothesis(rohwer.mod, covariates)
print(Regr, digits=5, SSP=FALSE)

Then 2D views of the additive MANCOVA model rohwer.mod and the overall test for all covariates are produced as follows, giving the plots in \@ref{fig:rohwer-HE2}.

colors <- c("red", "blue", rep("black",5), "#969696")
heplot(rohwer.mod, col=colors,
      hypotheses=list("Regr" = c("n", "s", "ns", "na", "ss")),
      cex=1.5, lwd=c(2, rep(3,5), 4),
      main="(SAT, PPVT, Raven) ~ SES + n + s + ns + na + ss")
heplot(rohwer.mod, col=colors,  variables=c(1,3),
      hypotheses=list("Regr" = c("n", "s", "ns", "na", "ss")),
      cex=1.5, lwd=c(2, rep(3,5), 4),
      main="(SAT, PPVT, Raven) ~ SES + n + s + ns + na + ss")


\includegraphics[width=.48\textwidth, trim=0 30 0 30]{fig/plot-rohwer-HE2}
\includegraphics[width=.48\textwidth, trim=0 30 0 30]{fig/plot-rohwer-HE3}

\caption{HE plot for SAT and PPVT (left) and for SAT and Raven (right) using the MANCOVA model } {#fig:rohwer-HE2}

The positive orientation of the Regr ellipses shows that the predicted values for all three responses are positively correlated (more so for SAT and PPVT). As well, the High SES group is higher on all responses than the Low SES group.
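This claim can be checked directly from the correlations among the fitted values of the model; a minimal sketch:

# correlations among the fitted values for the three responses (sketch)
round(cor(fitted(rohwer.mod)), 2)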

Alternatively, all pairwise plots among these responses could be drawn using the pairs function (figure not shown),

pairs(rohwer.mod, col=colors,
      hypotheses=list("Regr" = c("n", "s", "ns", "na", "ss")),
      cex=1.3, lwd=c(2, rep(3,5), 4))

or as a 3D plot, using heplot3d() as shown in \@ref{fig:rohwer-HE3D}.

colors <- c("pink", "blue", rep("black",5), "#969696")
if(requireNamespace("rgl")){
heplot3d(rohwer.mod, col=colors,
    hypotheses=list("Regr" = c("n", "s", "ns", "na", "ss")))
}


\includegraphics[width=.6\textwidth]{rohwer-HE3D}

\caption{3D HE plot for the MANCOVA model fit to the Rohwer data} {#fig:rohwer-HE3D}

The MANCOVA model, rohwer.mod, has relatively simple interpretations (large effect of SES, with ns and na as the major predictors), but the validity of these tests relies on the assumption of homogeneity of slopes for the predictors across the SES groups. We can test this as follows, adding interactions of SES with each of the covariates:

rohwer.mod2 <- lm(cbind(SAT, PPVT, Raven) ~ SES * (n + s + ns + na + ss),
                  data=Rohwer)
Anova(rohwer.mod2)

It appears from the above that there is only weak evidence of unequal slopes from the separate SES:covariate interaction terms. The evidence for heterogeneity is stronger, however, when these terms are tested collectively using the linearHypothesis() function:

(coefs <- rownames(coef(rohwer.mod2)))
print(linearHypothesis(rohwer.mod2, coefs[grep(":", coefs)]), SSP=FALSE)

This model (rohwer.mod2) is similar in spirit to the two models (rohwer.ses1 and rohwer.ses2) fit for the two SES groups separately and shown in \@ref{fig:rohwer-HE1}, except that model rohwer.mod2 assumes a common within-groups error covariance matrix and allows overall tests.

To illustrate model rohwer.mod2, we construct an HE plot for SAT and PPVT shown in \@ref{fig:rohwer-HE4}. To simplify this display, we show the hypothesis ellipses for the overall effects of the PA tests in the baseline high-SES group, and a single combined ellipse for all the SESLo: interaction terms tested previously, representing differences in slopes between the low- and high-SES groups.

Because SES is "treatment-coded" in this model, the ellipse for each covariate represents the hypothesis that the slopes for that covariate are zero in the high-SES baseline category. With this parameterization, the ellipse for Slopes represents the joint hypothesis that slopes for the covariates do not differ in the low-SES group.

colors <- c("red", "blue", rep("black",5), "#969696")
heplot(rohwer.mod2, col=c(colors, "brown"), 
      terms=c("SES", "n", "s", "ns", "na", "ss"), 
      hypotheses=list("Regr" = c("n", "s", "ns", "na", "ss"),
                      "Slopes" = coefs[grep(":", coefs)]))


\includegraphics[width=.6\textwidth, trim=0 30 0 30]{fig/plot-rohwer-HE4}

\caption{HE plot for SAT and PPVT, fitting the model rohwer.mod2 that allows unequal slopes for the covariates. } {#fig:rohwer-HE4}

Comparing \@ref{fig:rohwer-HE4} for the heterogeneous slopes model with \@ref{fig:rohwer-HE2} (left) for the homogeneous slopes model, it can be seen that most of the covariates have ellipses of similar size and orientation, reflecting similar evidence against the respective null hypotheses, as does the effect of SES, showing the greater performance of the high-SES group on all response measures. Somewhat more subtle, the error ellipse is noticeably smaller in \@ref{fig:rohwer-HE4}, reflecting the additional variation accounted for by differences in slopes.
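One way to quantify the smaller error ellipse is to compare the residual standard deviations of the responses under the two models; a small sketch:

# residual standard deviations under the equal- and unequal-slopes models (sketch)
sapply(list(equal.slopes=rohwer.mod, unequal.slopes=rohwer.mod2),
       function(m) sqrt(diag(crossprod(residuals(m))) / df.residual(m)))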

Recovery from hernia repair

This example uses the Hernior data, comprising data on measures of post-operative recovery of 32 patients undergoing an elective herniorrhaphy operation, in relation to pre-operative measures.

The response measures are the patient's condition upon leaving the recovery room (leave, a 1--4 scale, 1=best), level of nursing required one week after the operation (nurse, a 1--5 scale, 1=worst), and length of stay in hospital after the operation (los, in days).

The predictor variables are patient age, sex, physical status (pstat, a 1--5 scale, with 1=perfect health, 5=very poor health), body build (build, a 1--5 scale, with 1=emaciated, ..., 5=obese), and pre-operative complications with the heart (cardiac) and respiration (resp).

We begin with a model fitting all predictors. Note that the ordinal predictors, pstat, build, cardiac and resp, could arguably be treated as factors, rather than as linear regression terms. We ignore this possibility in this example.

Hern.mod <- lm(cbind(leave, nurse, los) ~ age + sex +  pstat +  build + cardiac + resp, 
               data=Hernior)
Anova(Hern.mod) 

The results of the multivariate tests above are somewhat disappointing. Only the physical status predictor (pstat) appears to be significant at conventional levels.

The univariate models for each response are implicit in the MLM Hern.mod. These can be printed using summary(), or, as we do here, selected statistics (such as $R^2$) can be extracted from the components of the summary() result for each response.

Hern.summary <- summary(Hern.mod)
unlist(lapply(Hern.summary, function(x) x$r.squared))

The univariate tests for predictors in each of these models (not shown here) are hard to interpret, and largely show only significant effects for the leave variable. Yet, the $R^2$ values for the other responses are slightly promising. We proceed to an overall test of $\mathbf{B} = 0$ for all predictors.

# test overall regression
Regr <- linearHypothesis(Hern.mod, c("age", "sexm", "pstat", "build", "cardiac", "resp"))
print(Regr, digits=5, SSP=FALSE)
#| fig.width=7,
#| fig.height=7,
#| fig.cap="HE pairs plot for Hernior data"
clr <- c("red", "darkgray", "blue", "darkgreen", "magenta", "brown", "black")
vlab <- c("LeaveCondition\n(leave)", 
          "NursingCare\n(nurse)", 
          "LengthOfStay\n(los)")

hyp <- list("Regr" = c("age", "sexm", "pstat", "build", "cardiac", "resp"))
pairs(Hern.mod, hypotheses=hyp, col=clr, var.labels=vlab)

A pairs() plot for the \MLM, shown in \@ref{fig:hern-pairs}, helps to interpret the relations among the predictors that lead to the highly significant overall test. Among the predictors, age and sex have small and non-significant effects on the outcome measures jointly. The other predictors all contribute to the overall test of $\mathbf{B} = \mathbf{0}$, though in different ways for the various responses. For example, in the panel for (leave, los) in \@ref{fig:hern-pairs}, it can be seen that while only pstat individually extends outside the $\mathbf{E}$ ellipse, build and resp contribute to the overall test in an opposite direction.

In this multivariate regression example, all of the terms in the model Hern.mod have 1 df, and so plot as lines in HE plots. An alternative view of these effects can be seen in canonical discriminant space, which, for each predictor shows the scores on the linear combination of the responses that contributes most to the multivariate test of that effect, together with the weights for the responses. We use candiscList() to calculate the canonical analyses for all terms in Hern.mod.

Hern.canL <- candiscList(Hern.mod)

1D canonical discriminant plots for all terms can be obtained interactively with a menu, simply by plotting the Hern.canL object.

plot(Hern.canL)

Plots for separate terms are produced by the lines below, and shown in \@ref{fig:hern-can1} and \@ref{fig:hern-can2}.

plot(Hern.canL, term="pstat")
plot(Hern.canL, term="build")
plot(Hern.canL, term="age")
plot(Hern.canL, term="cardiac")


\includegraphics[width=.48\textwidth]{fig/plot-hern-can-pstat}
\includegraphics[width=.48\textwidth]{fig/plot-hern-can-build}

\caption{1D Canonical discriminant plots for pstat and build. The canonical scores are such that better outcomes are associated with smaller scores.} {#fig:hern-can1}


\includegraphics[width=.48\textwidth]{fig/plot-hern-can-age}
\includegraphics[width=.48\textwidth]{fig/plot-hern-can-cardiac}

\caption{1D Canonical discriminant plots for age and cardiac. } {#fig:hern-can2}

In these plots, the canonical scores panel shows the linear combinations of the response variables which have the largest possible $R^2$. Better outcomes correspond to numerically smaller canonical scores. The arrows in the structure panel are proportional to the canonical weights.
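The canonical $R^2$ values underlying these plots can be extracted from the "candiscList" object; a minimal sketch:

# canonical R^2 for each term in Hern.mod (sketch)
unlist(lapply(Hern.canL, function(x) x$canrsq))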

These plots provide simple interpretations of the results for the canonical combinations of the responses. Better physical status, smaller body build, lower age and absence of cardiac complications are all positively related to better outcomes.

%\SweaveInput{SocGrades.Rnw}

Grades in a Sociology Course

The data set SocGrades contains four outcome measures on student performance in an introductory sociology course together with six potential predictors. These data were used by Marascuilo and Levin (1983) for an example of canonical correlation analysis, but are also suitable as examples of multivariate multiple regression, MANOVA, MANCOVA and step-down analysis in multivariate linear models.

The outcome measures used here are three test scores during the course, midterm1, midterm2, final, and a course evaluation (eval).\footnote{It is arguable that the students' course evaluation should not be considered a response variable here. It could be used as a predictor in a follow-up, step-down analysis, which would address the separate question of whether the effects on exam grades remain when eval is controlled for.} Predictor variables are the student's social class (class, an ordered factor with levels 1 > 2 > 3), sex, grade point average (gpa), College Board test scores (boards), whether the student had a previous high school unit in sociology (hssoc: no, yes), and score on a course pretest (pretest).

The basic MLM is fit below as grades.mod.

data(SocGrades)
grades.mod <- lm(cbind(midterm1, midterm2, final, eval) ~ 
    class + sex + gpa + boards + hssoc + pretest, data=SocGrades)
Anova(grades.mod, test="Roy")

In both univariate and multivariate response models, it is often useful to screen for higher-order terms (interactions, non-linear predictors). This can most easily be done using update(), as we do below. First, try the extended model with all pairwise interactions of the predictors.

grades.mod2 <- update(grades.mod, . ~ .^2)
Anova(grades.mod2, test="Roy")

In the results above, only the interaction of class:sex is significant, and the main effects of hssoc and pretest remain insignificant. A revised model to explore is grades.mod3,

grades.mod3 <- update(grades.mod, . ~ . + class:sex - hssoc - pretest)
Anova(grades.mod3, test="Roy")

A pairwise HE plot for all responses (\@ref{fig:grades-pairs}) shows that nearly all effects are in the expected directions: higher gpa, boards, and class lead to better performance on most outcomes. The interaction of class:sex seems to be confined largely to midterm1.

pairs(grades.mod3)


\includegraphics[width=.8\textwidth]{fig/plot-grades-pairs}

\caption{HE pairs plot for SocGrades} {#fig:grades-pairs}

These effects are easier to appreciate for the three exam grades jointly in a 3D HE plot. A snapshot is shown in \@ref{fig:grades-HE3D}.

heplot3d(grades.mod3, wire=FALSE)


\includegraphics[width=.7\textwidth]{grades-HE3D}

\caption{3D HE plot for SocGrades, model grades.mod3} {#fig:grades-HE3D}

Interactive rotation of this plot shows that the effect of class is only two dimensional, and of these, one is very small. The major axis of the class ellipsoid is aligned with increasing performance on all three grades, with the expected ordering of the three social classes.

The representation of these effects in canonical space is particularly useful here. Again, use candiscList() to compute the canonical decompositions for all terms in the model, and extract the canonical $R^2$ from the terms in the result.

# calculate canonical results for all terms
grades.can <- candiscList(grades.mod3)
# extract canonical R^2s
unlist(lapply(grades.can, function(x) x$canrsq))

We use heplot() on the "candiscList" object to show the effects of class in canonical space, giving \@ref{fig:grades-can-class}.

# plot class effect in canonical space
 op <- par(xpd=TRUE)
 heplot(grades.can, term="class", scale=4, fill=TRUE, var.col="black", var.lwd=2)
 par(op)


\includegraphics[width=.7\textwidth]{fig/plot-grades-can-class}

\caption{Canonical HE plot for class effect in grades.mod3} {#fig:grades-can-class}

It can be seen in \@ref{fig:grades-can-class} that nearly all variation in exam performance due to class is aligned with the first canonical dimension. The three tests and course evaluation all have similar weights on this dimension, but the course evaluation differs from the rest along a second, very small dimension.

1D plots of the canonical scores for other effects in the model are also of interest, and provide simple interpretations of these effects on the response variables. The statements below produce the plots shown in \@ref{fig:grades-can1}.

plot(grades.can, term="sex")
plot(grades.can, term="gpa")
plot(grades.can, term="sex")
plot(grades.can, term="gpa")


\includegraphics[width=.48\textwidth]{fig/plot-grades-can-sex}
\includegraphics[width=.48\textwidth]{fig/plot-grades-can-gpa}

\caption{1D Canonical discriminant plots for sex and gpa. Higher canonical scores reflect better course performance.} {#fig:grades-can1}

It is readily seen that males perform better overall, but the effect of sex is strongest for midterm2. As well, increasing course performance on the tests is strongly associated with gpa.

options(old.opts )


References


