Automated Interpretation of Indices of Effect Size

library(knitr)
options(knitr.kable.NA = "")
knitr::opts_chunk$set(comment = ">")
options(digits = 2)

Why?

The metrics used in statistics (indices of fit, model performance, or parameter estimates) can be very abstract: long experience is required to intuitively feel the meaning of their values. In order to facilitate the understanding of the results they obtain, many scientists use (often implicitly) some set of rules of thumb. Some of these rules of thumb have been standardized, validated, and subsequently published as guidelines. Understandably, such rules of thumb are just suggestions, and there is nothing universal about them. The interpretation of any effect size measure is always relative to the discipline, the specific data, and the aims of the analyst. This is important because what might be considered a small effect in psychology might be large in another field, such as public health.

One of the most famous interpretation grids was proposed by Cohen (1988) for a series of widely used indices, such as the correlation r (r = .10, small; r = .30, moderate; r = .50, large) or the standardized difference (Cohen's d). However, there is now clear evidence that Cohen's guidelines, which he himself later disavowed, are much too stringent and not particularly meaningful taken out of context [@funder2019evaluating]. This led to the emergence of a literature discussing and creating new sets of rules of thumb.

Although everybody agrees that effect size interpretation in a study should be justified with a rationale (and depend on the context, the field, the literature, the hypothesis, etc.), these pre-baked rules can nevertheless be useful as a rough frame of reference for understanding scientific results.

The effectsize package catalogs such sets of rules of thumb for a variety of indices in a flexible and explicit fashion, helping you understand and report your results in a scientific yet meaningful way. Again, readers should keep in mind that these thresholds, as ubiquitous as they may be, remain arbitrary. Thus, their use should be discussed on a case-by-case basis depending on the field, hypotheses, prior results, and so on, to avoid their crystallization, as happened with the infamous $p < .05$ criterion of hypothesis testing.

Moreover, some authors suggest the counter-intuitive idea that a very large effect, especially in the context of psychological research, is likely to be a "gross overestimate that will rarely be found in a large sample or in a replication" [@funder2019evaluating]. They suggest that smaller effect sizes are more believable and worth taking seriously, as they can still be consequential.

Correlation r

These rules can be used to interpret not only Pearson's correlation coefficient, but also Spearman's, $\phi$ (phi), Cramér's V, and Tschuprow's T. Although Cohen's w and Pearson's C are not correlation coefficients, they are often interpreted as such.
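To illustrate how such an interpretation grid works, here is a minimal base-R sketch using Funder and Ozer's (2019) proposed thresholds of .05, .10, .20, .30, and .40; the `interpret_r_sketch()` name is hypothetical, and `interpret_r()` implements this grid (and the others below) for you:

```r
# Hypothetical sketch: classify |r| against Funder & Ozer's thresholds
interpret_r_sketch <- function(r) {
  cut(abs(r),
      breaks = c(0, 0.05, 0.10, 0.20, 0.30, 0.40, 1),
      labels = c("tiny", "very small", "small", "medium", "large", "very large"),
      right = FALSE, include.lowest = TRUE)
}

interpret_r_sketch(c(0.02, 0.15, 0.45))
# tiny, small, very large
```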

@funder2019evaluating

interpret_r(x, rules = "funder2019")

@gignac2016effect

Gignac's rules of thumb are one of the few interpretation grids justified by and based on actual data: the distribution of effect magnitudes in the literature.

interpret_r(x, rules = "gignac2016")

@cohen1988statistical

interpret_r(x, rules = "cohen1988")

@evans1996straightforward

interpret_r(x, rules = "evans1996")

@lovakov2021empirically

interpret_r(x, rules = "lovakov2021")

Standardized Difference d (Cohen's d)

The standardized difference can be obtained by standardizing a linear model's parameters or data, in which case they can be used as indices of effect size.
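As a reminder of what the index measures, here is a base-R sketch of the classic pooled-SD formulation of the standardized difference between two groups; the `cohens_d_sketch()` helper is hypothetical, and `cohens_d()` in effectsize computes this with more options (paired designs, Hedges' correction, etc.):

```r
# Hypothetical sketch: (mean difference) / (pooled standard deviation)
cohens_d_sketch <- function(x, y) {
  sp <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
               (length(x) + length(y) - 2))
  (mean(x) - mean(y)) / sp
}

cohens_d_sketch(c(5, 6, 7, 8), c(1, 2, 3, 4))
```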

@cohen1988statistical

interpret_cohens_d(x, rules = "cohen1988")

@sawilowsky2009new

interpret_cohens_d(x, rules = "sawilowsky2009")

@gignac2016effect

Gignac's rules of thumb are one of the few interpretation grids justified by and based on actual data: the distribution of effect magnitudes in the literature. This is in fact the same grid as the one used for r, based on the conversion of r to d:

interpret_cohens_d(x, rules = "gignac2016")

@lovakov2021empirically

interpret_cohens_d(x, rules = "lovakov2021")

Odds Ratio (OR)

Odds ratios, and log odds ratios, are often found in epidemiological studies. However, they are also the parameters of logistic regressions, where they can be used as indices of effect size. Note that (log) odds ratios obtained from logistic regression coefficients are unstandardized, as they depend on the scale of the predictor. In order to apply the following guidelines, make sure you standardize your predictors!
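One way to do so is to scale the predictor when fitting the model, so the odds ratio is expressed per standard deviation of the predictor. A sketch using the built-in `mtcars` data (the model itself is purely illustrative):

```r
# Logistic regression with a standardized predictor: the exponentiated
# coefficient is the odds ratio per SD of mpg
m <- glm(am ~ scale(mpg), data = mtcars, family = binomial)
exp(coef(m))[["scale(mpg)"]]
```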

Keep in mind that these guidelines apply to the magnitude of the odds ratio, so an odds ratio of 10 is as extreme as an odds ratio of 0.1 (1/10).

@chen2010big

interpret_oddsratio(x, rules = "chen2010")

@cohen1988statistical

interpret_oddsratio(x, rules = "cohen1988")

This converts the (log) odds ratio to a standardized difference d using the following formula [@cohen1988statistical; @sanchez2003effect]:

$$ d = \log(OR) \times \frac{\sqrt{3}}{\pi} $$
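As a quick numeric check of this conversion (the one-liner below is a sketch; effectsize also provides conversion helpers such as `oddsratio_to_d()`):

```r
# d = log(OR) * sqrt(3) / pi: an odds ratio of 3 maps to d of about 0.61
oddsratio_to_d_sketch <- function(OR) log(OR) * sqrt(3) / pi
oddsratio_to_d_sketch(3)
```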

Coefficient of determination (R2)

For Linear Regression

@cohen1988statistical

interpret_r2(x, rules = "cohen1988")

@falk1992primer

interpret_r2(x, rules = "falk1992")

For PLS / SEM R-Squared of latent variables

@chin1998partial

interpret_r2(x, rules = "chin1998")

@hair2011pls

interpret_r2(x, rules = "hair2011")

Omega / Eta / Epsilon Squared

Omega squared is a measure of effect size used in ANOVAs. It is an estimate of how much variance in the response variable is accounted for by the explanatory variables. Omega squared is widely viewed as a less biased alternative to eta squared, especially when sample sizes are small.
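For a one-way ANOVA, omega squared can be sketched directly from the ANOVA table as $(SS_{effect} - df_{effect} \times MS_{error}) / (SS_{total} + MS_{error})$; the `omega_squared()` function in effectsize handles the general (including multi-way and partial) case:

```r
# Sketch: omega squared for a one-way ANOVA on the built-in mtcars data
m <- aov(mpg ~ factor(cyl), data = mtcars)
tab <- summary(m)[[1]]  # row 1: effect, row 2: residuals

ss_effect <- tab[1, "Sum Sq"]
df_effect <- tab[1, "Df"]
ms_error  <- tab[2, "Mean Sq"]
ss_total  <- sum(tab[, "Sum Sq"])

omega2 <- (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
omega2
```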

@field2013discovering

interpret_omega_squared(x, rules = "field2013")

@cohen1992power

These rules are applicable to one-way ANOVAs, or to partial Eta / Omega / Epsilon Squared in multi-way ANOVAs.

interpret_omega_squared(x, rules = "cohen1992")

Kendall's coefficient of concordance

Kendall's coefficient of concordance (W) is a measure of effect size used in non-parametric ANOVAs (e.g., the Friedman rank sum test). It is an estimate of agreement among multiple raters.
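As an illustration of the index itself, here is a base-R sketch of Kendall's W for a raters-by-objects matrix of ranks without ties, using the classic formula $W = 12S / (m^2(n^3 - n))$, where $S$ is the sum of squared deviations of the rank sums (the `kendalls_w_sketch()` name is hypothetical):

```r
# Hypothetical sketch: Kendall's W (no ties); rows = raters, cols = objects
kendalls_w_sketch <- function(ranks) {
  m <- nrow(ranks)
  n <- ncol(ranks)
  R <- colSums(ranks)            # rank sums per object
  S <- sum((R - mean(R))^2)      # spread of the rank sums
  12 * S / (m^2 * (n^3 - n))
}

# Three raters ranking four objects in perfect agreement -> W = 1
kendalls_w_sketch(rbind(1:4, 1:4, 1:4))
```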

@landis1977measurement

interpret_kendalls_w(w, rules = "landis1977")

Cohen's g

Cohen's g is a measure of effect size used for McNemar's test of agreement in selection: when repeating a multiple-choice selection, is the percentage of matches (the first response equal to the second response) different from 50%?
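Since Cohen's g is simply the deviation of an observed proportion from chance (.5), it can be sketched in one line; the `cohens_g_sketch()` helper below is hypothetical, assuming a vector of match indicators, and effectsize's `cohens_g()` computes the index from the actual paired data:

```r
# Hypothetical sketch: Cohen's g as the distance of the proportion of
# matches from .5
cohens_g_sketch <- function(match) mean(match) - 0.5

cohens_g_sketch(c(TRUE, TRUE, TRUE, FALSE))  # 3/4 matches -> g = 0.25
```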

@cohen1988statistical

interpret_cohens_g(x, rules = "cohen1988")

Interpretation of other Indices

effectsize also offers functions for interpreting other statistical indices; see the package documentation for details.

References




effectsize documentation built on Sept. 14, 2023, 5:07 p.m.