
Calculate fully standardised effects (model coefficients) in standard deviation units, adjusted for multicollinearity.


`mod`: A fitted model object, or a list or nested list of such objects.

`weights`: An optional numeric vector of weights to use for model averaging, or a named list of such vectors. The former should be supplied when `mod` is a list of candidate models, the latter when it is a nested list.

`data`: An optional dataset, used to first refit the model(s).

`term.names`: An optional vector of names used to extract and/or sort effects from the output.

`unique.eff`: Logical, whether unique effects should be calculated (adjusted for multicollinearity among predictors).

`cen.x, cen.y`: Logical, whether effects should be calculated as if from mean-centred variables.

`std.x, std.y`: Logical, whether effects should be scaled by the standard deviations of variables.

`refit.x`: Logical, whether the model should be refit with mean-centred predictor variables (see Details).

`incl.raw`: Logical, whether to append the raw (unstandardised) effects to the output.

`R.squared`: Logical, whether R-squared values should also be calculated (via `R2()`).

`R2.arg`: A named list of additional arguments to `R2()`.

`env`: Environment in which to look for model data (if none supplied). Defaults to the model formula environment.

`stdEff()` will calculate fully standardised effects (coefficients) in standard deviation units for a fitted model or list of models. It achieves this by adjusting the 'raw' model coefficients, so no standardisation of input variables is required beforehand. Users can simply specify the model with all variables in their original units and the function will do the rest. However, the user is free to scale and/or centre any input variables should they choose, which should not affect the outcome of standardisation (provided any scaling is by standard deviations). This may be desirable in some cases, such as to increase numerical stability during model fitting when variables are on widely different scales.

If arguments `cen.x` or `cen.y` are `TRUE`, effects will be calculated as if all predictors (x) and/or the response variable (y) were mean-centred prior to model fitting (including any dummy variables arising from categorical predictors). Thus, for an ordinary linear model where centring of x and y is specified, the intercept will be zero — the mean (or weighted mean) of y. In addition, if `cen.x = TRUE` and there are interacting terms in the model, all effects for lower order terms of the interaction are adjusted using an expression which ensures that each main effect or lower order term is estimated at the mean values of the terms they interact with (zero in a 'centred' model) — typically improving the interpretation of effects. The expression used comprises a weighted sum of all the effects that contain the lower order term, with the weight for the term itself being one and those for 'containing' terms being the product of the means of the other variables involved in that term (i.e. those not in the lower order term itself). For example, for a three-way interaction (x1 * x2 * x3), the expression for the main effect *β1* would be:

*β1 + (β12 × x̄2) + (β13 × x̄3) + (β123 × x̄2 × x̄3)*

(adapted from here)
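The same adjustment can be checked with base R on simulated data for the simpler two-way case (a minimal sketch; all data and variable names are hypothetical, and this is not the package's internal code). The main effect of x1 estimated at the mean of x2, i.e. b1 + b12 × x̄2, equals the x1 coefficient from a model with x2 mean-centred:

```r
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100, mean = 5))
d$y <- 1 + 2 * d$x1 + 0.5 * d$x2 + 0.3 * d$x1 * d$x2 + rnorm(100)

# raw coefficients from the uncentred interaction model
b <- coef(lm(y ~ x1 * x2, data = d))

# weighted-sum expression: main effect + interaction effect * mean of x2
adj <- b["x1"] + b["x1:x2"] * mean(d$x2)

# refit with x2 mean-centred; its x1 coefficient matches the expression
d$x2c <- d$x2 - mean(d$x2)
b.cen <- coef(lm(y ~ x1 * x2c, data = d))
all.equal(unname(adj), unname(b.cen["x1"]))  # TRUE
```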

In addition, if `std.x = TRUE` or `unique.eff = TRUE` (see below), product terms for interactive effects will be recalculated using mean-centred variables, to ensure that standard deviations and variance inflation factors (VIF) for predictors are calculated correctly (the model must be refit for this latter purpose, to recalculate the variance-covariance matrix).

If `std.x = TRUE`, effects are scaled by multiplying by the standard deviations of predictor variables (or terms), while if `std.y = TRUE` they are divided by the standard deviation of the response variable (minus any offsets). If the model is a GLM, the latter is calculated using the link-transformed response (or an estimate of same), generated using the function `glt()`. If both arguments are `TRUE`, the effects are regarded as 'fully' standardised in the traditional sense, often referred to as 'betas'.
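For an ordinary linear model, this scaling can be sketched in base R on simulated data (names hypothetical; not the package's internal code): multiplying each coefficient by sd(x) and dividing by sd(y) reproduces the coefficients of the same model fit to pre-standardised variables.

```r
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = runif(100, 0, 10))
d$y <- 2 * d$x1 - 0.3 * d$x2 + rnorm(100)

# scale raw slopes by sd(x) / sd(y)
b <- coef(lm(y ~ x1 + x2, data = d))[-1]
beta <- b * sapply(d[c("x1", "x2")], sd) / sd(d$y)

# equivalent: fit to variables standardised beforehand
d.std <- data.frame(scale(d))
beta2 <- coef(lm(y ~ x1 + x2, data = d.std))[-1]
all.equal(beta, beta2)  # TRUE
```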

If `unique.eff = TRUE` (default), effects are adjusted for multicollinearity among predictors by dividing by the square root of the VIFs (Dudgeon 2016, Thompson *et al.* 2017; see `RVIF()`). If they have also been scaled by the standard deviations of x and y, this converts them to semipartial correlations, i.e. the correlation between the unique components of predictors (residualised on other predictors) and the response variable. This measure of effect size is arguably much more interpretable and useful than the traditional standardised coefficient, as it always represents the unique effects of predictors and so can more readily be compared both within and across models. Values range from zero to +/- one rather than +/- infinity (as in the case of betas) — putting them on the same scale as the bivariate correlation between predictor and response. In the case of GLMs, however, the measure is analogous but not exactly equal to the semipartial correlation, so its values may not always be bound between +/- one (such cases are likely rare). Importantly, for ordinary linear models, the square of the semipartial correlation equals the increase in R-squared when that variable is included last in the model — directly linking the measure to unique variance explained. See here for additional arguments in favour of the use of semipartial correlations.
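For ordinary linear models, the division by the square root of the VIF can be verified directly in base R; a minimal sketch on simulated data with two correlated predictors (names hypothetical; not the package's internal code). The adjusted, fully standardised effect equals the correlation between y and the residualised predictor:

```r
set.seed(1)
x2 <- rnorm(100)
x1 <- x2 + rnorm(100)              # x1 is collinear with x2
y <- x1 + x2 + rnorm(100)

# fully standardised effect (beta) for x1
b1 <- coef(lm(y ~ x1 + x2))["x1"]
beta1 <- b1 * sd(x1) / sd(y)

# VIF for x1, from its R-squared on the other predictor
vif1 <- 1 / (1 - summary(lm(x1 ~ x2))$r.squared)

# dividing by sqrt(VIF) gives the semipartial correlation
sp1 <- beta1 / sqrt(vif1)
all.equal(unname(sp1), cor(y, resid(lm(x1 ~ x2))))  # TRUE
```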

If `refit.x`, `cen.x`, and `unique.eff` are `TRUE` and there are interaction terms in the model, the model will be refit with any (newly-)centred continuous predictors, in order to calculate correct VIFs from the variance-covariance matrix. However, refitting may not be necessary in some circumstances, for example where predictors have already been mean-centred and their values will not subsequently be resampled (e.g. via parametric bootstrap). Setting `refit.x = FALSE` in such cases will save time, especially with larger/more complex models and/or many bootstrap runs.

If `incl.raw = TRUE`, raw (unstandardised) effects can also be appended, i.e. those with all centring and scaling options set to `FALSE` (though still adjusted for multicollinearity, where applicable). These may be of interest in some cases, for example to compare their bootstrapped distributions with those of standardised effects.

If `R.squared = TRUE`, model R-squared values are appended to effects via the `R2()` function, with any additional arguments passed via `R2.arg`.

Finally, if `weights` are specified, the function calculates a weighted average of standardised effects across a set (or sets) of different candidate models for a particular response variable(s) (Burnham & Anderson 2002), via the `avgEst()` function.
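As a rough illustration of what such averaging amounts to (not the `avgEst()` internals; data, candidate models, and Akaike-type weights are all hypothetical), a model-averaged effect is a weighted mean of that term's effect across the candidate models:

```r
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- d$x1 + 0.5 * d$x2 + rnorm(50)

# two candidate models for the same response
m <- list(lm(y ~ x1 + x2, data = d), lm(y ~ x1, data = d))

# Akaike-type weights from AIC differences
aic <- sapply(m, AIC)
w <- exp(-0.5 * (aic - min(aic)))
w <- w / sum(w)

# weighted average of the x1 effect across candidates
b1 <- sapply(m, function(i) coef(i)["x1"])
weighted.mean(b1, w)
```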

A numeric vector of the standardised effects, or a list or nested list of such vectors.

Burnham, K. P., & Anderson, D. R. (2002). *Model Selection and
Multimodel Inference: A Practical Information-Theoretic Approach* (2nd
ed.). New York: Springer-Verlag. Retrieved from
https://www.springer.com/gb/book/9780387953649

Dudgeon, P. (2016). A Comparative Investigation of Confidence Intervals for
Independent Variables in Linear Regression. *Multivariate Behavioral
Research*, **51**(2-3), 139-153. doi: 10/gfww3f

Thompson, C. G., Kim, R. S., Aloe, A. M., & Becker, B. J. (2017).
Extracting the Variance Inflation Factor and Other Multicollinearity
Diagnostics from Typical Regression Results. *Basic and Applied Social
Psychology*, **39**(2), 81-90. doi: 10/gfww2w

```
library(lme4)
# Standardised (direct) effects for SEM
m <- shipley.sem
stdEff(m)
stdEff(m, cen.y = FALSE, std.y = FALSE) # x-only
stdEff(m, std.x = FALSE, std.y = FALSE) # centred only
stdEff(m, cen.x = FALSE, cen.y = FALSE) # scaled only
stdEff(m, unique.eff = FALSE) # no adjustment for multicollinearity
stdEff(m, R.squared = TRUE) # add R-squared
stdEff(m, incl.raw = TRUE) # add unstandardised
# Demonstrate equality with effects from manually-standardised variables
# (gaussian models only)
m <- shipley.growth[[3]]
d <- data.frame(scale(na.omit(shipley)))
e1 <- stdEff(m, unique.eff = FALSE)
e2 <- coef(summary(update(m, data = d)))[, 1]
stopifnot(all.equal(e1, e2))
# Demonstrate equality with square root of increment in R-squared
# (ordinary linear models only)
m <- lm(Growth ~ Date + DD + lat, data = shipley)
r2 <- summary(m)$r.squared
e1 <- stdEff(m)[-1]
en <- names(e1)
e2 <- sapply(en, function(i) {
f <- reformulate(en[!en %in% i])
r2i <- summary(update(m, f))$r.squared
sqrt(r2 - r2i)
})
stopifnot(all.equal(e1, e2))
# Model-averaged standardised effects
m <- shipley.growth # candidate models
w <- runif(length(m), 0, 1) # weights
stdEff(m, w)
```
