Automated Interpretation of Indices of Effect Size

library(knitr)
options(knitr.kable.NA = '')          # display NA cells as empty in kable tables
knitr::opts_chunk$set(comment = ">")  # prefix chunk output with ">"
options(digits = 2)                   # print numbers with 2 significant digits

Why?

The metrics used in statistics (indices of fit, model performance, or parameter estimates) can be very abstract: long experience is required to intuitively "feel" the meaning of their values. To make the results they encounter easier to understand, many scientists rely (often implicitly) on rules of thumb. In order to standardize such interpretation grids, some authors have validated and published them in the form of guidelines.

One of the most famous interpretation grids was proposed by Cohen (1988) for a series of widely used indices, such as the correlation r (r = .20, small; r = .40, moderate; r = .60, large) or the standardized difference (Cohen's d). However, there is now clear evidence that Cohen's guidelines (which he himself later disavowed; Funder, 2019) are much too stringent and not particularly meaningful taken out of context [@funder2019evaluating]. This has led to the emergence of a literature discussing and creating new sets of rules of thumb.

Although everybody agrees that effect size interpretation in a study should be justified with a rationale (and depends on the context, the field, the literature, the hypothesis, etc.), these pre-baked rules can nevertheless be useful to give a rough idea or frame of reference for understanding scientific results.

The package effectsize implements such sets of rules of thumb for a variety of indices in a flexible and explicit fashion, helping you understand and report your results in a scientific yet meaningful way. Again, readers should keep in mind that these thresholds, although "validated", remain arbitrary. Thus, their use should be discussed on a case-by-case basis, depending on the field, hypotheses, prior results and so on, to avoid their crystallization, as happened with the infamous p < .05 threshold.

Moreover, some authors suggest the counter-intuitive idea that a very large effect, especially in the context of psychological research, is likely to be a "gross overestimate that will rarely be found in a large sample or in a replication" [@funder2019evaluating]. They suggest that smaller effect sizes are worth taking seriously (as they can be potentially consequential), as well as being more believable.
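
To give a flavor of the API, the interpret_*() functions take a value and the name of a set of rules (a minimal example; output formatting may vary across versions):

library(effectsize)

# Interpret a correlation of 0.35 using the package's default rules
interpret_r(0.35)

# The same value under Cohen's (1988) guidelines
interpret_r(0.35, rules = "cohen1988")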

Supported Indices

Coefficient of determination (R2)

@falk1992primer

interpret_r2(x, rules = "falk1992")

@cohen1988statistical

interpret_r2(x, rules = "cohen1988")

@chin1998partial

interpret_r2(x, rules = "chin1998")

@hair2011pls

interpret_r2(x, rules = "hair2011")
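
To see how the grids differ, the same (hypothetical) value can be passed to each set of rules:

x <- 0.45  # a hypothetical R2 value

interpret_r2(x, rules = "falk1992")
interpret_r2(x, rules = "cohen1988")
interpret_r2(x, rules = "chin1998")
interpret_r2(x, rules = "hair2011")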

Correlation r

@funder2019evaluating

interpret_r(x, rules = "funder2019")

@gignac2016effect

Gignac's rules of thumb are one of the few interpretation grids that are justified by and based on actual data, in this case the distribution of effect magnitudes in the literature.

interpret_r(x, rules = "gignac2016")

@cohen1988statistical

interpret_r(x, rules = "cohen1988")

@evans1996straightforward

interpret_r(x, rules = "evans1996")

Standardized Difference d (Cohen's d)

The standardized difference can be obtained through the standardization of a linear model's parameters or data, in which case it can be used as an index of effect size.

@funder2019evaluating

interpret_d(x, rules = "funder2019")

@gignac2016effect

Gignac's rules of thumb are one of the few interpretation grids that are justified by and based on actual data, in this case the distribution of effect magnitudes in the literature.

interpret_d(x, rules = "gignac2016")

@cohen1988statistical

interpret_d(x, rules = "cohen1988")

@sawilowsky2009new

interpret_d(x, rules = "sawilowsky2009")
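
For instance, with a hypothetical value of d:

d <- 0.65  # a hypothetical standardized difference

interpret_d(d, rules = "cohen1988")   # 0.65 falls in Cohen's "medium" range (0.5-0.8)
interpret_d(d, rules = "gignac2016")  # the label may differ under a data-driven grid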

Odds ratio

Odds ratios, and log odds ratios, are often found in epidemiological studies. However, they are also the parameters of logistic regressions, where they can be used as indices of effect size. Note that the (log) odds ratios obtained from logistic regression coefficients are unstandardized, as they depend on the scale of the predictor. In order to apply the following guidelines, make sure you standardize your predictors!
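
A minimal sketch of that workflow, using the built-in mtcars data for illustration (the scale() call is what standardizes the predictor):

# Standardize the predictor before fitting a logistic regression, then
# interpret the exponentiated coefficient (an odds ratio)
model <- glm(am ~ scale(wt), data = mtcars, family = "binomial")
odds_ratio <- exp(coef(model)["scale(wt)"])
interpret_odds(odds_ratio, rules = "chen2010")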

@chen2010big

interpret_odds(x, rules = "chen2010")

@cohen1988statistical

interpret_odds(x, rules = "cohen1988")

This converts the (log) odds ratio to a standardized difference d using the following formula [@cohen1988statistical; @sanchez2003effect]:

d <- log_odds * (sqrt(3) / pi)
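
Applied to a hypothetical odds ratio of 3, this gives:

log_odds <- log(3)              # log odds ratio for a (hypothetical) OR of 3
d <- log_odds * (sqrt(3) / pi)  # ~0.61, a "medium" effect under Cohen's guidelines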

Omega Squared

Omega squared is a measure of effect size used in ANOVAs. It is an estimate of how much of the variance in the response variable is accounted for by the explanatory variables. Omega squared is widely viewed as a less biased alternative to eta squared, especially when sample sizes are small.

@field2013discovering

interpret_omega_squared(x, rules = "field2013")
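
The value itself can be estimated from a fitted ANOVA model, for example with effectsize's omega_squared() (a minimal sketch; the resulting estimate is what gets passed to interpret_omega_squared()):

# One-way ANOVA: how much variance in mpg do cylinders account for?
model <- aov(mpg ~ factor(cyl), data = mtcars)
omega_squared(model)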

Bayes Factor (BF)

Bayes factors (BF) are continuous measures of relative evidence: a Bayes factor greater than 1 gives evidence in favor of one of the models (the numerator), and a Bayes factor smaller than 1 gives evidence in favor of the other model (the denominator). Still, it is common to interpret the magnitude of relative evidence based on conventional intervals, such as the rules presented below for a BF10 (comparing the alternative to the null).

For human readability, it is recommended to report BFs so that the ratios are larger than 1. For example, it is harder to understand a BF10 = 0.07 (indicating the data are 0.07 times as probable under the alternative as under the null) than the equivalent BF01 = 1/0.07 = 14.3 (indicating the data are 14.3 times more probable under the null). BFs between 0 and 1, indicating evidence against the hypothesis, can thus be converted via BF01 = 1 / BF10.
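
For example:

bf10 <- 0.07      # evidence expressed against the numerator model
bf01 <- 1 / bf10  # ~14.3: the same evidence, expressed in favor of the denominator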

One can report Bayes factors using the following sentence:

There is strong evidence against the null hypothesis (BF = 12.2).

@jeffreys1961theory

interpret_bf(x, rules = "jeffreys1961")

@raftery1995bayesian

interpret_bf(x, rules = "raftery1995")

Bayesian Convergence Diagnostics (Rhat and Effective Sample Size)

Experts have suggested threshold values to help interpret convergence and sampling quality. As such, Rhat should not be larger than 1.1 [@gelman1992inference] or 1.01 [@vehtari2019rank]. An effective sample size (ESS) greater than 1,000 is sufficient for stable estimates [@burkner2017brms].
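
These thresholds are exposed through interpret_rhat() and interpret_ess() (a sketch; availability and default rules may depend on your version of effectsize):

# Assumes interpret_rhat() and interpret_ess() are available in your version
interpret_rhat(1.005, rules = "vehtari2019")
interpret_ess(1200)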

Other Bayesian Indices (% in ROPE, pd)

The interpretation of Bayesian indices is detailed in a dedicated article.

References


