Power Calculation for the Cochran-Mantel-Haenszel Test

Calculating Post Hoc Power

The Data Set

We will use the Titanic{.r} dataset in the datasets{.r} package to demonstrate a post hoc power calculation. This dataset is explored in greater detail in the introductory vignette.

Let $X$ = sex, $Y$ = survival, and $Z$ = class. As in the introductory vignette, we will load the data and then create a set of partial tables stratified at fixed levels of class.

# Load the Titanic data and collapse to Sex x Survived partial tables,
# stratified by Class
data(Titanic, package = "datasets")
partial_tables <- margin.table(Titanic, c(2, 4, 1))
ftable(partial_tables)
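
As a quick check, the dimension names of the new table should list sex, survival status, and class, in that order:

# Margins: X = Sex, Y = Survived, stratified by Z = Class
dimnames(partial_tables)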

The Test Statistic

Performing a CMH test on the partial tables above shows that there is strong evidence that the common odds ratio is greater than one.

mantelhaen.test(partial_tables)
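
If we assign the test to an object (cmh_result{.r} is purely an illustrative name), the estimated common odds ratio and p-value can be extracted from the components of the returned htest{.r} object:

# Store the test result to inspect its components
cmh_result <- mantelhaen.test(partial_tables)
cmh_result$estimate
cmh_result$p.value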

The Power Calculation

Using the power.cmh.test(){.r} function, we may calculate the post hoc power of this test, that is, the probability of rejecting the null hypothesis given the observed proportions, stratum sizes, and total sample size.

library(samplesizeCMH)

power.cmh.test(
  # p1: proportion of males among non-survivors within each class
  p1 = apply(partial_tables, 3, prop.table, 2)[1,],
  # p2: proportion of males among survivors within each class
  p2 = apply(partial_tables, 3, prop.table, 2)[3,],
  # N: total sample size
  N = sum(Titanic),
  # power = NULL tells the function to solve for power
  power = NULL,
  # s: proportion of non-survivors within each class stratum
  s = apply(partial_tables[,1,], 2, sum) / apply(partial_tables, 3, sum),
  # t: proportion of all subjects falling in each class stratum
  t = apply(partial_tables, 3, sum) / sum(Titanic)
)

Perhaps unsurprisingly, the power of this test approaches 100%.

Power and Effect Size in a Negative Result

It does not make sense to calculate post hoc power when the null hypothesis has not been rejected. However, it may be of interest to the researcher to determine the minimum effect size that could have been detected, given the sample size of the study and a specified power. We use the example of the Nurses' Health Study explored by @Munoz1984. This study looked for a link between oral contraceptives and breast cancer, stratified by age [@Barton1980; @Hennekens1984]. The data for this illustration are included in the samplesizeCMH package.

data(contraceptives, package = "samplesizeCMH")
ftable(contraceptives)

Here, as shown in @Hennekens1984, the Mantel-Haenszel test gives a non-significant result, with an estimated common odds ratio of approximately 1.

mantelhaen.test(contraceptives)
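
As a sketch, the estimate and confidence interval components of the test object (stored here as mh_result{.r}, an illustrative name) make the near-unity odds ratio easy to verify:

# Mantel-Haenszel estimate of the common odds ratio and its confidence interval
mh_result <- mantelhaen.test(contraceptives)
mh_result$estimate
mh_result$conf.int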

The question now is what effect size would be detectable if an effect truly did exist in the data. That is to say, what is the smallest effect size the study could correctly detect at the specified significance level and power? We may vectorize the power.cmh.test(){.r} function to evaluate a range of effect sizes and see where the power becomes adequate.

# Vectorize over the effect size argument, returning the full result objects
vpower <- Vectorize(power.cmh.test, "theta", SIMPLIFY = FALSE)

# Candidate common odds ratios to evaluate
thetas <- seq(1.05, 1.5, 0.05)

power_list <- vpower(
  theta = thetas,
  # p2: proportion exposed among controls within each age stratum;
  # p1 is determined implicitly by p2 and theta
  p2 = contraceptives[1,2,] / apply(contraceptives[,2,], 2, sum),
  N = sum(contraceptives),
  # Solve for power
  power = NULL,
  # s: proportion of cases within each stratum
  s = 1/11,
  # t: proportion of subjects in each age stratum
  t = apply(contraceptives, 3, sum) / sum(contraceptives),
  alternative = "greater"
)

# Extract the computed power from each result and label by effect size
powers <- sapply(power_list, "[[", "power")
names(powers) <- thetas

powers

Using the data above, we may create a power curve.

plot(
  x = thetas, y = powers,
  xlab = "Common odds ratio", ylab = "Power",
  main = "Power curve as a function of effect size"
)
abline(h = 0.95, col = "gold")
abline(h = 0.90, col = "red")
abline(h = 0.80, col = "blue")

legend(
  "bottomright",
  legend = c("95%", "90%", "80%"),
  col = c("gold", "red", "blue"),
  bty = "n",
  lty = 1,
  title = "Power level"
)

As the curve shows, 90% power would have been achieved for a true common odds ratio of approximately 1.25.
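
We can confirm this reading from the computed values themselves; the sketch below simply picks the smallest odds ratio in our grid whose power reaches 90%.

# Smallest candidate odds ratio achieving at least 90% power
min(thetas[powers >= 0.90])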

References


