cohens_g | R Documentation |

Cohen's *g* is an effect size of asymmetry (or marginal heterogeneity) for
dependent (paired) contingency tables, ranging between 0 (perfect symmetry)
and 0.5 (perfect asymmetry) (see `stats::mcnemar.test()`). (Note that this is
*not* a measure of (dis)agreement between the pairs, but of (a)symmetry.)
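For a 2×2 paired table, *g* reduces to a simple function of the discordant (off-diagonal) cells. A minimal sketch in base R with made-up counts (the names `n12` and `n21` are illustrative, not from the package):

```r
# Discordant (off-diagonal) cells of a hypothetical 2x2 paired table:
# n12 = pairs positive on measurement 1 only,
# n21 = pairs positive on measurement 2 only
n12 <- 30
n21 <- 10

# Cohen's g: the larger discordant proportion minus 0.5
P <- max(n12, n21) / (n12 + n21)
g <- P - 0.5
g  # 0.25: 75% of the disagreements fall in one direction
```

Under perfect symmetry (`n12 == n21`) the proportion is 0.5 and *g* is 0; when all disagreements fall in one direction, *g* reaches 0.5.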

```
cohens_g(x, y = NULL, ci = 0.95, alternative = "two.sided", ...)
```

`x` | a numeric vector or matrix.
`y` | a numeric vector; ignored if `x` is a matrix.
`ci` | Confidence Interval (CI) level.
`alternative` | a character string specifying the alternative hypothesis; controls the type of CI returned: `"two.sided"` (default, two-sided CI), `"greater"` or `"less"` (one-sided CI).
`...` | Ignored.

A data frame with the effect size (`Cohens_g`, `Risk_ratio` (possibly with
the prefix `log_`), `Cohens_h`) and its CIs (`CI_low` and `CI_high`).

Confidence intervals are based on the proportion (`P = g + 0.5`) confidence
intervals returned by `stats::prop.test()` (minus 0.5), which give a close
approximation.
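This construction can be sketched directly: take the `stats::prop.test()` interval for the underlying proportion `P` and shift it by 0.5. A rough illustration with made-up discordant counts, not the package's exact code:

```r
# Discordant cell counts from a hypothetical 2x2 paired table
n12 <- 30
n21 <- 10

P  <- max(n12, n21) / (n12 + n21)  # proportion underlying g (P = g + 0.5)
pt <- stats::prop.test(max(n12, n21), n12 + n21, conf.level = 0.95)

g    <- P - 0.5            # point estimate of Cohen's g
ci_g <- pt$conf.int - 0.5  # shifted proportion CI approximates the CI for g
c(g, ci_g)
```

The shift works because *g* is just a linear transformation of `P`, so the interval endpoints transfer directly.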

"Confidence intervals on measures of effect size convey all the information
in a hypothesis test, and more." (Steiger, 2004). Confidence (compatibility)
intervals and p values are complementary summaries of parameter uncertainty
given the observed data. A dichotomous hypothesis test could be performed
with either a CI or a p value. The 100 (1 - `\alpha`

)% confidence
interval contains all of the parameter values for which *p* > `\alpha`

for the current data and model. For example, a 95% confidence interval
contains all of the values for which p > .05.

Note that a confidence interval including 0 *does not* indicate that the null
(no effect) is true. Rather, it suggests that the observed data together with
the model and its assumptions do not provide clear evidence against a
parameter value of 0 (the same as for any other value in the interval), with
the level of this evidence defined by the chosen `\alpha` level (Rafi &
Greenland, 2020; Schweder & Hjort, 2016; Xie & Singh, 2013). To infer no
effect, additional judgments about what parameter values are "close enough"
to 0 to be negligible are needed ("equivalence testing"; Bauer & Kieser,
1996).

The `see` package contains relevant plotting functions. See the plotting
vignette in the `see` package.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd Ed.). New York: Routledge.

Other effect sizes for contingency tables:
`oddsratio()`, `phi()`

```
data("screening_test")
phi(screening_test$Diagnosis, screening_test$Test1)
phi(screening_test$Diagnosis, screening_test$Test2)
# Both tests seem comparable - but are the tests actually different?
(tests <- table(Test1 = screening_test$Test1, Test2 = screening_test$Test2))
mcnemar.test(tests)
cohens_g(tests)
# Test 2 gives a negative result more than test 1!
```

effectsize documentation built on Sept. 14, 2023, 5:07 p.m.
