# Hypothesis Testing {#cha-hypothesis-testing}

```r
#    IPSUR: Introduction to Probability and Statistics Using R
#    Copyright (C) 2018  G. Jay Kerns
#
#    Chapter: Hypothesis Testing
#
#    This file is part of IPSUR.
#
#    IPSUR is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#
#    IPSUR is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with IPSUR.  If not, see <http://www.gnu.org/licenses/>.
# This chapter's package dependencies
library(TeachingDemos)
library(HH)
# need this for plotting hypothesis tests
# based on work with R. Heiberger in 2009-10

plot.htest <- function (x, hypoth.or.conf = 'Hypoth',...) {
  require(HH)
  if (x$method == "1-sample proportions test with continuity correction" || x$method == "1-sample proportions test without continuity correction"){
    mu <- x$null.value
    obs.mean <- x$estimate
    n <- NA
    std.dev <- abs(obs.mean - mu)/sqrt(x$statistic)
    deg.freedom <- NA
    if(x$alternative == "two.sided"){
      alpha.right <- (1 - attr(x$conf.int, "conf.level"))/2
      Use.alpha.left <- TRUE
      Use.alpha.right <- TRUE
    } else if (x$alternative == "less") {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- TRUE
      Use.alpha.right <- FALSE
    } else {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- FALSE
      Use.alpha.right <- TRUE
    }

  } else if (x$method == "One Sample z-test"){
    mu <- x$null.value
    obs.mean <- x$estimate
    n <- x$parameter[1]
    std.dev <- x$parameter[2]
    deg.freedom <- NA
    if(x$alternative == "two.sided"){
      alpha.right <- (1 - attr(x$conf.int, "conf.level"))/2
      Use.alpha.left <- TRUE
      Use.alpha.right <- TRUE
    } else if (x$alternative == "less") {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- TRUE
      Use.alpha.right <- FALSE
    } else {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- FALSE
      Use.alpha.right <- TRUE
    }
  } else if (x$method == "One Sample t-test" || x$method == "Paired t-test"){
    mu <- x$null.value
    obs.mean <- x$estimate
    n <- x$parameter + 1
    std.dev <- x$estimate/x$statistic*sqrt(n)
    deg.freedom <- x$parameter
    if(x$alternative == "two.sided"){
      alpha.right <- (1 - attr(x$conf.int, "conf.level"))/2
      Use.alpha.left <- TRUE
      Use.alpha.right <- TRUE
    } else if (x$alternative == "less") {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- TRUE
      Use.alpha.right <- FALSE
    } else {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- FALSE
      Use.alpha.right <- TRUE
    }
  } else if (x$method == "Welch Two Sample t-test"){
    mu <- x$null.value
    obs.mean <- -diff(x$estimate)
    n <- x$parameter + 2
    std.dev <- obs.mean/x$statistic*sqrt(n)
    deg.freedom <- x$parameter
    if(x$alternative == "two.sided"){
      alpha.right <- (1 - attr(x$conf.int, "conf.level"))/2
      Use.alpha.left <- TRUE
      Use.alpha.right <- TRUE
    } else if (x$alternative == "less") {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- TRUE
      Use.alpha.right <- FALSE
    } else {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- FALSE
      Use.alpha.right <- TRUE
    }
  } else if (x$method == " Two Sample t-test"){
    mu <- x$null.value
    obs.mean <- -diff(x$estimate)
    n <- x$parameter + 2
    std.dev <- obs.mean/x$statistic*sqrt(n)
    deg.freedom <- x$parameter
    if(x$alternative == "two.sided"){
      alpha.right <- (1 - attr(x$conf.int, "conf.level"))/2
      Use.alpha.left <- TRUE
      Use.alpha.right <- TRUE
    } else if (x$alternative == "less") {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- TRUE
      Use.alpha.right <- FALSE
    } else {
      alpha.right <- 1 - attr(x$conf.int, "conf.level")
      Use.alpha.left <- FALSE
      Use.alpha.right <- TRUE
    }
  }
  return(normal.and.t.dist(
    mu.H0 = mu,
    obs.mean = obs.mean,
    std.dev = std.dev,
    n = n,
    deg.freedom = deg.freedom,
    alpha.right = alpha.right,
    Use.obs.mean = TRUE,
    Use.alpha.left = Use.alpha.left,
    Use.alpha.right = Use.alpha.right,
    hypoth.or.conf = hypoth.or.conf)
  )
}
```

**What do I want them to know?**

## Introduction {#sec-introduction-hypothesis}

I spent a week during the summer of 2005 at the University of Nebraska at Lincoln grading Advanced Placement Statistics exams, and while I was there I attended a presentation by Dr. Roxy Peck. At the end of her talk she described an activity she had used with students to introduce the basic concepts of hypothesis testing. I was impressed by the activity and have used it in my own classes several times since.

```{block, type="quote"}
The instructor (with a box of cookies in hand) enters a class of fifteen or more students and produces a brand-new, sealed deck of ordinary playing cards. The instructor asks for a student volunteer to break the seal, and then the instructor prominently shuffles the deck[^hypoth01] several times in front of the class, after which time the students are asked to line up in a row. They are going to play a game. Each student will draw a card from the top of the deck, in turn. If the card is black, then the lucky student will get a cookie. If the card is red, then the unlucky student will sit down empty-handed. Let the game begin.

The first student draws a card: red. There are jeers and outbursts, and the student slinks off to his/her chair. (S)he is disappointed, of course, but not really. After all, (s)he had a 50-50 chance of getting black, and it did not happen. Oh well.

The second student draws a card: red, again. There are more jeers, and the second student slips away. This student is also disappointed, but again, not so much, because it is probably his/her unlucky day. On to the next student.

The third student draws: red again! There are a few wiseguys who yell (happy to make noise, more than anything else), but there are a few other students who are not yelling any more -- they are thinking. This is the third red in a row, which is possible, of course, but what is going on, here? They are not quite sure. They are now concentrating on the next card... it is bound to be black, right?

The fourth student draws: red. Hmmm... now there are groans instead of outbursts. A few of the students at the end of the line shrug their shoulders and start to make their way back to their desk, complaining that the teacher does not want to give away any cookies. There are still some students in line though, salivating, waiting for the inevitable black to appear.

The fifth student draws red. Now it isn't funny any more. As the remaining students make their way back to their seats an uproar ensues, from an entire classroom demanding cookies.
```

[^hypoth01]: The jokers are removed before shuffling.


Keep the preceding experiment in the back of your mind as you read the
following sections. When you have finished the entire chapter, come
back and read this introduction again. All of the mathematical jargon
that follows is connected to the above paragraphs. In the meantime, I
will get you started:

* **Null hypothesis:** it is an ordinary deck of playing cards, shuffled thoroughly.
* **Alternative hypothesis:** something funny is going on. Either it is a trick deck of cards, or the instructor is doing some fancy shufflework.
* **Observed data:** the sequence of draws from the deck, five reds in a row.

If it were truly an ordinary, well-shuffled deck of cards, the
probability of observing zero blacks out of a sample of size five
(without replacement) from a deck with 26 black cards and 26 red cards
would be

```r
# probability of 0 black cards in 5 draws (without replacement)
# from a deck with 26 black and 26 red cards
dhyper(0, m = 26, n = 26, k = 5)
```

There are two very important final thoughts. First, everybody gets a cookie in the end. Second, the students invariably (and aggressively) attempt to get me to open up the deck and reveal the true nature of the cards. I never do.

## Tests for Proportions {#sec-tests-for-proportions}

```{example, label="widget-machine"}
We have a machine that makes widgets. Under normal operation about 10% of the widgets produced are defective. We install a torque converter on the machine, observe a sample of widgets, and let \(Y\) denote the number of defectives in the sample.
```

If

* \(Y=0\), then the torque converter is great!
* \(Y=4\), then the torque converter seems to be helping. 
* \(Y=9\), then there is not much evidence that the torque converter helps.
* \(Y=17\), then throw away the torque converter.

Let \(p\) denote the proportion of defectives produced by the
machine. Before the installation of the torque converter \(p\) was
\(0.10\). Then we installed the torque converter. Did \(p\) change?
Did it go up or down? We use statistics to decide. Our method is to
observe data and construct a 95% confidence interval for \(p\),
\begin{equation}
\hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}.
\end{equation}
If the confidence interval is 

* \([0.01,\,0.05]\), then we are 95% confident that \(0.01\leq
  p \leq 0.05\), so there is evidence that the torque converter is
  helping.
* \([0.15,\,0.19]\), then we are 95% confident that \(0.15\leq
  p \leq 0.19\), so there is evidence that the torque converter is
  hurting.
* \([0.07,\,0.11]\), then there is not enough evidence to conclude
  that the torque converter is doing anything at all, positive or
  negative.
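
For concreteness, here is a minimal sketch in R (the counts are made up purely for illustration) of how such an interval might be computed:

```r
# hypothetical data: 8 defective widgets observed in a sample of n = 100
x <- 8
n <- 100
phat <- x/n
se <- sqrt(phat * (1 - phat)/n)        # estimated standard error of phat
phat + c(-1, 1) * qnorm(0.975) * se    # approximate 95% confidence interval
```

The `prop.test` function used later in this chapter computes a related (but not identical) interval.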


### Terminology

The *null hypothesis* \(H_{0}\) is a "nothing" hypothesis, whose
interpretation could be that nothing has changed, there is no
difference, there is nothing special taking place, *etc*. In Example
\@ref(ex:widget-machine) the null hypothesis would be \(H_{0}:\, p = 0.10.\)
The *alternative hypothesis* \(H_{1}\) is the hypothesis that
something has changed, in this case, \(H_{1}:\, p \neq 0.10\). Our
goal is to statistically *test* the hypothesis \(H_{0}:\, p = 0.10\)
versus the alternative \(H_{1}:\, p \neq 0.10\). Our procedure will
be:

1. Go out and collect some data, in particular, a simple random sample
   of observations from the machine.
2. Suppose that \(H_{0}\) is true and construct a \(100(1-\alpha)\%\)
   confidence interval for \(p\).
3. If the confidence interval does not cover \(p = 0.10\), then we
   *reject* \(H_{0}\). Otherwise, we *fail to reject* \(H_{0}\).

\bigskip

```{block, type="remark"}
Every time we make a decision it is possible to be wrong, and there
are two possible mistakes we can make. We have committed a

* **Type I Error**  if we reject \(H_{0}\) when in fact \(H_{0}\) is
                  true. This would be akin to convicting an innocent
                  person for a crime (s)he did not commit.
* **Type II Error** if we fail to reject \(H_{0}\) when in fact
                   \(H_{1}\) is true. This is analogous to a guilty
                   person escaping conviction.
```

Type I Errors are usually considered worse[^hypoth02], and we design our statistical procedures to control the probability of making such a mistake. We define the
\begin{equation}
\mbox{significance level of the test} = \mathbb{P}(\mbox{Type I Error}) = \alpha.
\end{equation}
We want \(\alpha\) to be small, which conventionally means, say, \(\alpha = 0.05\), \(\alpha = 0.01\), or \(\alpha = 0.005\) (but could mean anything, in principle).

[^hypoth02]: There is no mathematical difference between the errors, however. The bottom line is that we choose one type of error to control with an iron fist, and we try to minimize the probability of the other type. That being said, null hypotheses are often designed to correspond to the "simpler" model, so it is often easier to analyze (and thereby control) the probabilities associated with Type I Errors.

We are ready for tests of hypotheses for one proportion. We know from Section BLANK that when \(H_{0}:\,p = p_{0}\) is true and \(n\) is large,
\begin{equation}
\hat{p} \sim \mathsf{norm}(\mathtt{mean} = p_{0}, \mathtt{sd} = \sqrt{p_{0}(1 - p_{0})/n}),
\end{equation}
approximately, and the approximation gets better as the sample size gets bigger. Another way to write this is
\begin{equation}
Z = \frac{\hat{p} - p_{0}}{\sqrt{p_{0}(1 - p_{0})/n}} \sim \mathsf{norm}(\mathtt{mean} = 0, \mathtt{sd} = 1).
\end{equation}

(ref:tab-ztest-one-sample-prop)

| \(H_{0}\)     | \(H_{a}\)         | Rejection Region                  |
|---------------+-------------------+-----------------------------------|
| \(p = p_{0}\) | \(p > p_{0}\)     | \(z > z_{\alpha}\)                |
| \(p = p_{0}\) | \(p < p_{0}\)     | \(z < -z_{\alpha}\)               |
| \(p = p_{0}\) | \(p \neq p_{0}\)  | \(\vert z \vert > z_{\alpha/2}\)  |

Table: Hypothesis tests, population proportion, large sample.

Assumptions for a valid test:

\bigskip

1. Find the null and alternative hypotheses.
2. Check your assumptions.
3. Define a critical region with an \(\alpha = 0.05\) significance level.
4. Calculate the value of the test statistic and state your conclusion.

\bigskip

```{example, label="prop-test-pvalue-a"}
Suppose \(p = \mathrm{the\ proportion\ of\ students}\) who are admitted to the graduate school of the University of California at Berkeley, and suppose that a public relations officer boasts that UCB has historically had a 40% acceptance rate for its graduate school. Consider the data stored in the table `UCBAdmissions` from 1973. Assuming these observations constituted a simple random sample, are they consistent with the officer's claim, or do they provide evidence that the acceptance rate was significantly less than 40%? Use an \(\alpha = 0.01\) significance level.
```

Our null hypothesis in this problem is \(H_{0}:\,p = 0.4\) and the
alternative hypothesis is \(H_{1}:\,p < 0.4\). We reject the null
hypothesis if \(\hat{p}\) is too small, that is, if
\begin{equation}
\frac{\hat{p} - 0.4}{\sqrt{0.4(1 - 0.4)/n}} < -z_{\alpha},
\end{equation}
where \(\alpha = 0.01\) and \(-z_{0.01}\) is 

```r
# the critical value -z_{0.01}
-qnorm(0.99)
```

Our only remaining task is to find the value of the test statistic and see where it falls relative to the critical value. We can find the number of people admitted and not admitted to the UCB graduate school with the following.

```r
A <- as.data.frame(UCBAdmissions)
head(A)
# total numbers of admitted and rejected applicants
xtabs(Freq ~ Admit, data = A)
```

Now we calculate the value of the test statistic.

```r
# observed sample proportion and the value of the test statistic
phat <- 1755/(1755 + 2771)
(phat - 0.4)/sqrt(0.4 * 0.6/(1755 + 2771))
```

Our test statistic is not less than \(-2.32\), so it does not fall into the critical region. Therefore, we fail to reject the null hypothesis that the true proportion of students admitted to graduate school is 40% and say that the observed data are consistent with the officer's claim at the \(\alpha = 0.01\) significance level.

\bigskip

```{example, label="prop-test-pvalue-b"}
We are going to do Example \@ref(ex:prop-test-pvalue-a) all over again. Everything will be exactly the same except for one change. Suppose we choose significance level \(\alpha = 0.05\) instead of \(\alpha = 0.01\). Are the 1973 data consistent with the officer's claim?
```

Our null and alternative hypotheses are the same. Our observed test
statistic is the same: it was approximately \(-1.68\). But notice that
our critical value has changed: \(\alpha = 0.05\) and \(-z_{0.05}\) is


```r
# the critical value -z_{0.05}
-qnorm(0.95)
```

Our test statistic is less than \(-1.64\) so it now falls into the critical region! We now reject the null hypothesis and conclude that the 1973 data provide evidence that the true proportion of students admitted to the graduate school of UCB in 1973 was significantly less than 40%. The data are not consistent with the officer's claim at the \(\alpha = 0.05\) significance level.

What is going on, here? If we choose \(\alpha = 0.05\) then we reject the null hypothesis, but if we choose \(\alpha = 0.01\) then we fail to reject the null hypothesis. Our final conclusion seems to depend on our selection of the significance level. This is bad; for a particular test, we never know whether our conclusion would have been different if we had chosen a different significance level.

Or do we?

Clearly, for some significance levels we reject, and for some significance levels we do not. Where is the boundary? That is, what is the significance level for which we would reject at any significance level bigger, and we would fail to reject at any significance level smaller? This boundary value has a special name: it is called the *\(p\)-value* of the test.

\bigskip

The *p-value*, or *observed significance level*, of a hypothesis test
is the probability when the null hypothesis is true of obtaining the
observed value of the test statistic (such as \(\hat{p}\)) or values
more extreme -- meaning, in the direction of the alternative
hypothesis[^hypoth03].

[^hypoth03]: Bickel and Doksum [@Bickel2001] state the definition particularly well: the \(p\)-value is "the smallest level of significance \(\alpha\) at which an experimenter using the test statistic \(T\) would reject \(H_{0}\) on the basis of the observed sample outcome \(x\)".

\bigskip

Calculate the \(p\)-value for the test in Examples
\@ref(ex:prop-test-pvalue-a) and \@ref(ex:prop-test-pvalue-b).

The \(p\)-value for this test is the probability of obtaining a \(z\)-score equal to our observed test statistic (which had \(z\)-score \(\approx -1.680919\)) or more extreme, which in this example is less than the observed test statistic. In other words, we want to know the area under a standard normal curve on the interval \((-\infty,\,-1.680919]\). We can get this easily with

```r
# the p-value: area to the left of the observed test statistic
pnorm(-1.680919)
```

We see that the \(p\)-value is strictly between the significance levels \(\alpha = 0.01\) and \(\alpha = 0.05\). This makes sense: it has to be bigger than \(\alpha = 0.01\) (otherwise we would have rejected \(H_{0}\) in Example \@ref(ex:prop-test-pvalue-a)) and it must also be smaller than \(\alpha = 0.05\) (otherwise we would not have rejected \(H_{0}\) in Example \@ref(ex:prop-test-pvalue-b)). Indeed, \(p\)-values are a characteristic indicator of whether or not we would have rejected at assorted significance levels, and for this reason a statistician will often skip the calculation of critical regions and critical values entirely. If (s)he knows the \(p\)-value, then (s)he knows immediately whether or not (s)he would have rejected at any given significance level.

Thus, another way to phrase our significance test procedure is: we will reject \(H_{0}\) at the \(\alpha\)-level of significance if the \(p\)-value is less than \(\alpha\).
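
In R this decision rule is a one-line comparison. Using the test statistic from the UCB example:

```r
p.value <- pnorm(-1.680919)   # the p-value computed above
p.value < 0.01                # FALSE: fail to reject H0 at alpha = 0.01
p.value < 0.05                # TRUE:  reject H0 at alpha = 0.05
```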

\bigskip

```{block, type="remark"}
If we have two populations with proportions \(p_{1}\) and \(p_{2}\) then we can test the null hypothesis \(H_{0}:\,p_{1} = p_{2}\). In what follows, the large-sample test statistic is
\begin{equation}
z = \frac{\hat{p}_{1} - \hat{p}_{2}}{\sqrt{\hat{p}(1 - \hat{p})\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}},
\end{equation}
where \(\hat{p}\) is the pooled proportion of successes from the two samples combined; when \(H_{0}\) is true and both samples are large, \(z\) has approximately a \(\mathsf{norm}(\mathtt{mean} = 0,\,\mathtt{sd} = 1)\) distribution.
```

\bigskip

(ref:tab-ztest-two-sample-prop)

| \(H_{0}\)         | \(H_{a}\)                | Rejection Region                  |
|-------------------+--------------------------+-----------------------------------|
| \(p_{1} = p_{2}\) | \(p_{1} - p_{2} > 0\)    | \(z > z_{\alpha}\)                |
| \(p_{1} = p_{2}\) | \(p_{1} - p_{2} < 0\)    | \(z < -z_{\alpha}\)               |
| \(p_{1} = p_{2}\) | \(p_{1} - p_{2} \neq 0\) | \(\vert z \vert > z_{\alpha/2}\)  |

Table: Hypothesis tests, difference in population proportions, large sample.
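
A minimal sketch of how such a two-sample comparison might be carried out in R with `prop.test`; the counts here are made up purely for illustration:

```r
# hypothetical data: 37 successes in 100 trials versus 58 successes in 120 trials
successes <- c(37, 58)
trials    <- c(100, 120)
prop.test(successes, trials, alternative = "two.sided", correct = FALSE)
```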


BLANK Example


#### How to do it with R

The following does the test.

```r
prop.test(1755, 1755 + 2771, p = 0.4, alternative = "less", 
          conf.level = 0.99, correct = FALSE)
```

Do the following to make the plot.
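
One possibility -- assuming the `plot.htest` method defined at the beginning of this chapter has been sourced -- is a sketch along these lines:

```r
# plot the rejection region for the proportion test above
# (dispatches to the plot.htest function defined at the top of the chapter)
plot(prop.test(1755, 1755 + 2771, p = 0.4, alternative = "less",
               conf.level = 0.99, correct = FALSE))
```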

\bigskip

```{block, type="remark"}
In the above we set `correct = FALSE` to tell the computer that we did not want to use Yates' continuity correction (reference BLANK). It is customary to use Yates when the expected frequency of successes is less than 10 (reference BLANK). You can use it all of the time, but you will have a decrease in power. For large samples the correction does not matter.
```

#### With the R Commander

If you already know the number of successes and failures, then you can
use the menu `Statistics` \(\triangleright\) `Proportions`
\(\triangleright\) `IPSUR Enter table for single sample...`

Otherwise, your data -- the raw successes and failures -- should be in
a column of the Active Data Set. Furthermore, the data must be stored
as a "factor" internally. If the data are not a factor but are numeric
then you can use the menu `Data` \(\triangleright\) 
`Manage variables in active data set`  \(\triangleright\) 
`Convert numeric variables to factors...` to 
convert the variable to a factor. Or, you can always
use the `factor` function.

Once your unsummarized data are in a column, you can use the menu
`Statistics` \(\triangleright\) `Proportions` \(\triangleright\)
`Single-sample proportion test...`

## One Sample Tests for Means and Variances {#sec-one-sample-tests}


### For Means

Here, \(X_{1}\), \(X_{2}\), ..., \(X_{n}\) are a \(SRS(n)\) from a
\(\mathsf{norm}(\mathtt{mean} = \mu,\,\mathtt{sd} = \sigma)\)
distribution. We would like to test \(H_{0}:\mu = \mu_{0}\).


**Case A:**  Suppose \(\sigma\) is known. Then under \(H_{0}\),
\begin{equation}
   Z = \frac{\overline{X} - \mu_{0}}{\sigma/\sqrt{n}} \sim \mathsf{norm}(\mathtt{mean} = 0,\,\mathtt{sd} = 1).
\end{equation}

(ref:tab-ztest-one-sample)

| \(H_{0}\)         | \(H_{a}\)            | Rejection Region                  |
|-------------------+----------------------+-----------------------------------|
| \(\mu = \mu_{0}\) | \(\mu > \mu_{0}\)    | \(z > z_{\alpha}\)                |
| \(\mu = \mu_{0}\) | \(\mu < \mu_{0}\)    | \(z < -z_{\alpha}\)               |
| \(\mu = \mu_{0}\) | \(\mu \neq \mu_{0}\) | \(\vert z \vert > z_{\alpha/2}\)  |

Table: Hypothesis tests, population mean, large sample.


**Case B:** When \(\sigma\) is unknown, under \(H_{0}\),
\begin{equation}
   T = \frac{\overline{X} - \mu_{0}}{S/\sqrt{n}} \sim \mathsf{t}(\mathtt{df} = n - 1).
\end{equation}


(ref:tab-ttest-one-sample)

| \(H_{0}\)         | \(H_{a}\)            | Rejection Region                        |
|-------------------+----------------------+-----------------------------------------|
| \(\mu = \mu_{0}\) | \(\mu > \mu_{0}\)    | \(t > t_{\alpha}(n - 1)\)               |
| \(\mu = \mu_{0}\) | \(\mu < \mu_{0}\)    | \(t < -t_{\alpha}(n - 1)\)              |
| \(\mu = \mu_{0}\) | \(\mu \neq \mu_{0}\) | \(\vert t \vert > t_{\alpha/2}(n - 1)\) |

Table: Hypothesis tests, population mean, small sample.
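
As a check on the mechanics, here is a minimal sketch (with simulated data, for illustration only) that computes the \(T\) statistic and two-sided \(p\)-value by hand and compares them with `t.test`:

```r
set.seed(42)
x <- rnorm(15, mean = 3, sd = 2)             # simulated sample
mu0 <- 2                                     # hypothesized mean under H0
n <- length(x)
T <- (mean(x) - mu0)/(sd(x)/sqrt(n))         # the t statistic
2 * pt(-abs(T), df = n - 1)                  # two-sided p-value
t.test(x, mu = mu0)$p.value                  # should agree with the line above
```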

\bigskip

```{block, type="remark"}
If \(\sigma\) is unknown but \(n\) is large then we can use the
\(z\)-test.
```

\bigskip

In this example we

1. Find the null and alternative hypotheses.
2. Choose a test and find the critical region.
3. Calculate the value of the test statistic and state the conclusion.
4. Find the \(p\)-value.

The quantity \(\sigma/\sqrt{n}\), when \(\sigma\) is known, is called the standard error of the sample mean. In general, if we have an estimator \(\hat{\theta}\) then \(\sigma_{\hat{\theta}}\) is called the standard error of \(\hat{\theta}\). We usually need to estimate \(\sigma_{\hat{\theta}}\) with \(\widehat{\sigma_{\hat{\theta}}}\).
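
For example, the standard error of the sample mean is usually estimated by replacing \(\sigma\) with the sample standard deviation; a quick sketch (with simulated data):

```r
x <- rnorm(37, mean = 2, sd = 3)   # simulated sample, for illustration
sd(x)/sqrt(length(x))              # estimated standard error of the sample mean
```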

#### How to do it with R

I am thinking `z.test` \index{z.test@\texttt{z.test}} in `TeachingDemos`, and `t.test` \index{t.test@\texttt{t.test}} in base R.

```r
# one-sample z-test with known sd = 3 (z.test is in the TeachingDemos package)
x <- rnorm(37, mean = 2, sd = 3)
z.test(x, mu = 1, sd = 3, conf.level = 0.90)
```

The RcmdrPlugin.IPSUR package [@RcmdrPlugin.IPSUR] does not have a menu for z.test yet.

```r
# one-sided, one-sample t-test, plotted via the plot.htest function defined above
x <- rnorm(13, mean = 2, sd = 3)
t.test(x, mu = 0, conf.level = 0.90, alternative = "greater")
plot(t.test(x, mu = 0, conf.level = 0.90, alternative = "greater"))
```

(ref:cap-ttest-plot) \small A plot of the results from a Student's t test. This graph was generated by code based on joint work with Prof Richard Heiberger and uses the normal.and.t.dist function in the HH package [@HH].

#### With the R Commander

Your data should be in a single numeric column (a variable) of the Active Data Set. Use the menu `Statistics` \(\triangleright\) `Means` \(\triangleright\) `Single-sample t-test...`

### Tests for a Variance

Here, \(X_{1}\), \(X_{2}\), ..., \(X_{n}\) are a \(SRS(n)\) from a \(\mathsf{norm}(\mathtt{mean} = \mu,\,\mathtt{sd} = \sigma)\) distribution. We would like to test \(H_{0}:\,\sigma^{2} = \sigma_{0}^{2}\). We know that under \(H_{0}\),
\begin{equation}
X^{2} = \frac{(n - 1)S^{2}}{\sigma_{0}^{2}} \sim \mathsf{chisq}(\mathtt{df} = n - 1).
\end{equation}

(ref:tab-sigmatest-one-sample)

| \(H_{0}\)                        | \(H_{a}\)                           | Rejection Region                                                                      |
|----------------------------------+-------------------------------------+---------------------------------------------------------------------------------------|
| \(\sigma^{2} = \sigma_{0}^{2}\)  | \(\sigma^{2} > \sigma_{0}^{2}\)     | \(X^{2} > \chi_{\alpha}^{2}(n - 1)\)                                                  |
| \(\sigma^{2} = \sigma_{0}^{2}\)  | \(\sigma^{2} < \sigma_{0}^{2}\)     | \(X^{2} < \chi_{1 - \alpha}^{2}(n - 1)\)                                              |
| \(\sigma^{2} = \sigma_{0}^{2}\)  | \(\sigma^{2} \neq \sigma_{0}^{2}\)  | \(X^{2} > \chi_{\alpha/2}^{2}(n - 1)\) or \(X^{2} < \chi_{1 - \alpha/2}^{2}(n - 1)\)  |

Table: Hypothesis tests, population variance, normal population.

Give some data and a hypothesis. BLANK

* Give an \(\alpha\)-level and test the critical region way.
* Find the \(p\)-value for the test.
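
In lieu of the missing example, here is a minimal sketch (simulated data, with a made-up null value \(\sigma_{0} = 2\)) of both the critical-region and \(p\)-value calculations for the one-sided alternative \(\sigma^{2} > \sigma_{0}^{2}\):

```r
set.seed(1)
x <- rnorm(25, mean = 10, sd = 2.5)           # simulated data, for illustration
sigma0 <- 2                                   # hypothesized value of sigma under H0
n <- length(x)
X2 <- (n - 1) * var(x)/sigma0^2               # chi-square test statistic
X2 > qchisq(0.95, df = n - 1)                 # falls in the critical region at alpha = 0.05?
pchisq(X2, df = n - 1, lower.tail = FALSE)    # one-sided p-value
```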

#### How to do it with R

I am thinking about sigma.test \index{sigma.test@\texttt{sigma.test}} in the TeachingDemos package [@TeachingDemos].

```r
# test H0: sigma = 8 (two-sided) for the height data
sigma.test(women$height, sigma = 8)
```

## Two-Sample Tests for Means and Variances {#sec-two-sample-tests-for-means}

The basic idea for this section is the following. We have \(X\sim\mathsf{norm}(\mathtt{mean} = \mu_{X},\,\mathtt{sd} = \sigma_{X})\) and \(Y\sim\mathsf{norm}(\mathtt{mean} = \mu_{Y},\,\mathtt{sd} = \sigma_{Y})\) distributed independently. We would like to know whether \(X\) and \(Y\) come from the same population distribution, in other words, we would like to know:
\begin{equation}
\mbox{Does }X\overset{\mathrm{d}}{=}Y?
\end{equation}
where the symbol \(\overset{\mathrm{d}}{=}\) means equality of probability distributions. Since both \(X\) and \(Y\) are normal, we may rephrase the question:
\begin{equation}
\mbox{Does }\mu_{X} = \mu_{Y}\mbox{ and }\sigma_{X} = \sigma_{Y}?
\end{equation}
Suppose first that we do not know the values of \(\sigma_{X}\) and \(\sigma_{Y}\), but we know that they are equal, \(\sigma_{X}=\sigma_{Y}\). Our test would then simplify to \(H_{0}:\,\mu_{X} = \mu_{Y}\). We collect data \(X_{1}\), \(X_{2}\), ..., \(X_{n}\) and \(Y_{1}\), \(Y_{2}\), ..., \(Y_{m}\), both simple random samples of size \(n\) and \(m\) from their respective normal distributions. Then under \(H_{0}\) (that is, assuming \(H_{0}\) is true) we have \(\mu_{X} = \mu_{Y}\), or rewriting, \(\mu_{X} - \mu_{Y} = 0\), so
\begin{equation}
T = \frac{\overline{X} - \overline{Y}}{S_{p}\sqrt{\frac{1}{n} + \frac{1}{m}}} = \frac{\overline{X} - \overline{Y} - (\mu_{X} - \mu_{Y})}{S_{p}\sqrt{\frac{1}{n} + \frac{1}{m}}}\sim\mathsf{t}(\mathtt{df} = n + m - 2).
\end{equation}

### Independent Samples

```{block, type="remark"}
If the values of \(\sigma_{X}\) and \(\sigma_{Y}\) are known, then we can plug them in to our statistic:
\begin{equation}
Z = \frac{\overline{X} - \overline{Y}}{\sqrt{\sigma_{X}^{2}/n + \sigma_{Y}^{2}/m}};
\end{equation}
the result will have a \(\mathsf{norm}(\mathtt{mean} = 0,\,\mathtt{sd} = 1)\) distribution when \(H_{0}:\,\mu_{X} = \mu_{Y}\) is true.
```

\bigskip

```{block, type="remark"}
Even if the values of \(\sigma_{X}\) and \(\sigma_{Y}\) are not known,
if both \(n\) and \(m\) are large then we can plug in the sample
estimates and the result will have approximately a
\(\mathsf{norm}(\mathtt{mean} = 0,\,\mathtt{sd} = 1)\) distribution when
\(H_{0}:\,\mu_{X} = \mu_{Y}\) is true.
\begin{equation} 
Z = \frac{\overline{X} - \overline{Y}}{\sqrt{S_{X}^{2}/n + S_{Y}^{2}/m}}.
\end{equation}
```

\bigskip

```{block, type="remark"}
It is often helpful to construct side-by-side boxplots and other visual displays in concert with the hypothesis test. This gives a visual comparison of the samples and helps to identify departures from the test's assumptions -- such as outliers.
```

\bigskip

```{block, type="remark"}
WATCH YOUR ASSUMPTIONS.

* The normality assumption can be relaxed as long as the population
  distributions are not highly skewed.
* The equal variance assumption can be relaxed as long as both sample
  sizes \(n\) and \(m\) are large. However, if one (or both) samples
  is small, then the test does not perform well; we should instead use
  the methods of Chapter \@ref(cha-resampling-methods).
```

For a nonparametric alternative to the two-sample \(F\) test see Chapter \@ref(cha-nonparametric-statistics).

(ref:tab-two-t-test-pooled)

| \(H_{0}\)            | \(H_{a}\)               | Rejection Region                             |
|----------------------+-------------------------+----------------------------------------------|
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} > D_{0}\)     | \(t > t_{\alpha}(n + m - 2)\)                |
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} < D_{0}\)     | \(t < -t_{\alpha}(n + m - 2)\)               |
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} \neq D_{0}\)  | \(\vert t \vert > t_{\alpha/2}(n + m - 2)\)  |

Table: Rejection regions, difference in means, pooled t-test.
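
Here is a minimal sketch (with simulated samples) of the pooled statistic from the beginning of this section, where the pooled variance \(S_{p}^{2}\) is the usual degrees-of-freedom-weighted average of the two sample variances; the result is checked against `t.test` with `var.equal = TRUE`:

```r
set.seed(2)
x <- rnorm(12, mean = 5, sd = 2)     # simulated samples, for illustration
y <- rnorm(15, mean = 4, sd = 2)
n <- length(x); m <- length(y)
sp2 <- ((n - 1)*var(x) + (m - 1)*var(y))/(n + m - 2)     # pooled variance
T <- (mean(x) - mean(y))/(sqrt(sp2) * sqrt(1/n + 1/m))   # pooled t statistic
2 * pt(-abs(T), df = n + m - 2)                          # two-sided p-value
t.test(x, y, var.equal = TRUE)$p.value                   # should agree
```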

### Paired Samples

(ref:tab-two-t-test-paired)

| \(H_{0}\)            | \(H_{a}\)               | Rejection Region                             |
|----------------------+-------------------------+----------------------------------------------|
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} > D_{0}\)     | \(t > t_{\alpha}(n_{D} - 1)\)                |
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} < D_{0}\)     | \(t < -t_{\alpha}(n_{D} - 1)\)               |
| \(\mu_{D} = D_{0}\)  | \(\mu_{D} \neq D_{0}\)  | \(\vert t \vert > t_{\alpha/2}(n_{D} - 1)\)  |

Table: Rejection regions, difference in means, paired t-test.

#### How to do it with R

```r
# paired t-test for the sleep data
t.test(extra ~ group, data = sleep, paired = TRUE)
```

## Other Hypothesis Tests {#sec-other-hypothesis-tests}

### Kolmogorov-Smirnov Goodness-of-Fit Test {#sub-kolmogorov-smirnov-goodness-of-fit-test}

#### How to do it with R

```r
# Kolmogorov-Smirnov test of the uniform(0,1) hypothesis for the randu data
with(randu, ks.test(x, "punif"))
```

### Shapiro-Wilk Normality Test

Most of the small-sample procedures we have studied thus far require that the target population be normally distributed. In later chapters we will be assuming that errors from a proposed model are normally distributed, and one method we will use to assess the model's adequacy will be to investigate the plausibility of that assumption. So, you see, determining whether or not a random sample comes from a normal distribution is an important problem for us. We have already learned graphical methods to address the question, but here we will study a formal hypothesis test of the same.

The test statistic we will use is Wilk's \(W\), which looks like this:
\begin{equation}
W = \frac{\left(\sum_{i = 1}^{n} a_{i}x_{(i)} \right)^{2}}{\sum_{i = 1}^{n}(x_{i} - \overline{x})^{2}},
\end{equation}
where the \(x_{(i)}\)'s are the order statistics (see Section BLANK) and the constants \(a_{i}\) form the components of a vector \(\mathbf{a}_{1\times n}\) defined by
\begin{equation}
\mathbf{a} = \frac{\mathbf{m}^{\mathrm{T}}\mathbf{V}^{-1}}{\sqrt{\mathbf{m}^{\mathrm{T}}\mathbf{V}^{-1}\mathbf{V}^{-1}\mathbf{m}}},
\end{equation}
where \(\mathbf{m}_{n\times 1}\) and \(\mathbf{V}_{n \times n}\) are the mean and covariance matrix, respectively, of the order statistics from an \(\mathsf{mvnorm}\left(\mathtt{mean} = \mathbf{0},\,\mathtt{sigma} = \mathbf{I}\right)\) distribution. This test statistic \(W\) was published in 1965 by (you guessed it): Shapiro and Wilk [@Wilk1965]. In contrast to most other test statistics we know, we reject \(H_{0}\) if \(W\) is too small.

#### How to do it with R

```r
# Shapiro-Wilk test of normality for the height data
with(women, shapiro.test(height))
```
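
To get a feel for the statistic, a small sketch (simulated data) comparing a sample that really is normal with one that is not:

```r
set.seed(3)
shapiro.test(rnorm(50))   # normal data: W close to 1, typically a large p-value
shapiro.test(rexp(50))    # skewed (exponential) data: W typically much smaller
```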

## Analysis of Variance {#sec-analysis-of-variance}

#### How to do it with R

I am thinking

```r
# check normality of the response within each feed group
with(chickwts, by(weight, feed, shapiro.test))
```

and

```r
# fit the one-way ANOVA model
temp <- lm(weight ~ feed, data = chickwts)
```

and

```r
# the ANOVA table
anova(temp)
```

BLANK

Plot for the intuition of between versus within group variation.

```r
# sketch of between-group versus within-group variation
y1 <- rnorm(300, mean = c(2, 8, 22))
plot(y1, xlim = c(-1, 25), ylim = c(0, 0.45), type = "n")
f <- function(x){dnorm(x, mean = 2)}
curve(f, from = -1, to = 5, add = TRUE, lwd = 2)
f <- function(x){dnorm(x, mean = 8)}
curve(f, from = 5, to = 11, add = TRUE, lwd = 2)
f <- function(x){dnorm(x, mean = 22)}
curve(f, from = 19, to = 25, add = TRUE, lwd = 2)
rug(y1)
```

(ref:cap-between-versus-within) \small A plot of between group versus within group variation.

```r
# an F distribution plot, using helper functions from the HH package
old.omd <- par(omd = c(.05, .88, .05, 1))
F.setup(df1 = 5, df2 = 30)
F.curve(df1 = 5, df2 = 30, col = 'blue')
F.observed(3, df1 = 5, df2 = 30)
par(old.omd)
```

(ref:cap-some-f-plots-hh) \small Some \(F\) plots from the \texttt{HH} package.

## Sample Size and Power {#sec-sample-size-and-power}

The power function of a test for a parameter \(\theta\) is
\begin{equation}
\beta(\theta) = \mathbb{P}_{\theta}(\mbox{Reject }H_{0}), \quad -\infty < \theta < \infty.
\end{equation}
Here are some properties of power functions:

1. \(\beta(\theta)\leq\alpha\) for any \(\theta\in\Theta_{0}\), and \(\beta(\theta_{0})=\alpha\). We interpret this by saying that no matter what value \(\theta\) takes inside the null parameter space, there is never more than a chance of \(\alpha\) of rejecting the null hypothesis. We have controlled the Type I error rate to be no greater than \(\alpha\).
2. \(\lim_{n\to\infty}\beta(\theta)=1\) for any fixed \(\theta\in\Theta_{1}\). In other words, as the sample size grows without bound we are able to detect a nonnull value of \(\theta\) with increasing accuracy, no matter how close it lies to the null parameter space (the simulation sketch below illustrates this). This may appear to be a good thing at first glance, but it often turns out to be a curse, because with a large enough sample the test will reject \(H_{0}\) for departures from the null that are far too small to be of any practical importance.
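
The author mentions `replicate` in the next subsection; here is a minimal simulation sketch of property 2 above, under the assumptions of a one-sided one-sample \(z\)-test with \(\sigma = 1\) known, \(\mu_{0} = 0\), true mean \(\mu_{1} = 1\), and \(\alpha = 0.05\) (the helper name `est.power` is made up for this illustration):

```r
# estimate the power of a one-sided z-test by simulation:
# the proportion of simulated samples for which H0: mu = 0 is rejected
est.power <- function(n, mu1 = 1, alpha = 0.05, nsim = 10000) {
  rejections <- replicate(nsim, {
    x <- rnorm(n, mean = mu1, sd = 1)   # data generated under the alternative
    z <- mean(x)/(1/sqrt(n))            # z statistic for H0: mu = 0, sigma = 1 known
    z > qnorm(1 - alpha)                # one-sided rejection rule
  })
  mean(rejections)
}
sapply(c(1, 2, 5, 10), est.power)       # power increases with the sample size
```

With \(n = 1\) the estimate should land near the power of approximately 0.26 reported by `power.examp` below, and the estimates climb toward 1 as \(n\) grows.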

#### How to do it with R

I am thinking about replicate \index{replicate@\texttt{replicate}} here, and also power.examp \index{power.examp@\texttt{power.examp}} from the TeachingDemos package [@TeachingDemos]. There is an even better plot in upcoming work from the HH package [@HH].

```r
power.examp()
```

(ref:cap-power-examp) \small A plot of significance level and power. This graph was generated by the power.examp function from the TeachingDemos package.

The plot corresponds to the hypothesis test \(H_{0}:\,\mu=\mu_{0}\) versus \(H_{1}:\,\mu=\mu_{1}\) (where \(\mu_{0}=0\) and \(\mu_{1}=1\), by default) based on a single observation \(X\sim\mathsf{norm}(\mathtt{mean}=\mu,\,\mathtt{sd}=\sigma)\). The top graph is of the \(H_{0}\) density while the bottom is of the \(H_{1}\) density. The significance level is set at \(\alpha=0.05\), the sample size is \(n=1\), and the standard deviation is \(\sigma=1\). The pink area is the significance level, and the critical value \(z_{0.05}\approx1.645\) is marked at the left boundary -- this defines the rejection region. When \(H_{0}\) is true, the probability of falling in the rejection region is exactly \(\alpha=0.05\). The same rejection region is marked on the bottom graph, and the probability of falling in it (when \(H_{1}\) is true) is the blue area shown at the top of the display to be approximately \(0.26\). This probability represents the power to detect a non-null mean value of \(\mu=1\).

With the command `run.power.examp()` at the command line the same plot opens, but in addition, there are sliders available that allow the user to interactively change the sample size \(n\), the standard deviation \(\sigma\), the true difference between the means \(\mu_{1}-\mu_{0}\), and the significance level \(\alpha\). By playing around, the student can investigate the effect each of the aforementioned parameters has on the statistical power. Note that you need the `tkrplot` package [@tkrplot] for `run.power.examp`.

## Exercises


