compare_lm: Compare lm()'s fitted outputs using PRE and R-squared.

View source: R/compare_lm.R


Compare lm()'s fitted outputs using PRE and R-squared.

Description

Compare lm()'s fitted outputs using PRE and R-squared.

Usage

compare_lm(
  fitC = NULL,
  fitA = NULL,
  n = NULL,
  PC = NULL,
  PA = NULL,
  SSEC = NULL,
  SSEA = NULL
)

Arguments

fitC

The result of lm() of the Compact model (model C).

fitA

The result of lm() of the Augmented model (model A).

n

Sample size of model C or model A. Model C and model A must use the same sample and hence have the same sample size. A non-integer n will be converted to an integer using as.integer().

PC

The number of parameters in model C. A non-integer PC will be converted to an integer using as.integer().

PA

The number of parameters in model A. A non-integer PA will be converted to an integer using as.integer(). as.integer(PA) should be larger than as.integer(PC).

SSEC

The Sum of Squared Errors (SSE) of model C.

SSEA

The Sum of Squared Errors of model A.

Details

compare_lm() compares model A with model C using PRE (Proportional Reduction in Error), R-squared, f_squared, and post-hoc power. PRE is the partial R-squared (also called partial Eta-squared in ANOVA). There are two ways of using compare_lm(). The first is to give compare_lm() fitC and fitA; the second is to give n, PC, PA, SSEC, and SSEA. The first way is more convenient, and it minimizes precision loss by avoiding copying and pasting intermediate results. Note that the F-test for PRE and the F-test for the R-squared change are equivalent. Please refer to Judd et al. (2017) for more details about PRE, and to Aberson (2019) for more details about f_squared and post-hoc power.
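
As a by-hand sketch (not part of Keng; pre_by_hand() is a hypothetical helper), the PRE, f_squared, and F-test for model A vs. model C can be reconstructed from the two SSEs with base R alone:

# Hypothetical helper: reconstruct PRE, f_squared, and the F-test from the SSEs.
pre_by_hand <- function(n, PC, PA, SSEC, SSEA) {
  PRE <- (SSEC - SSEA) / SSEC                        # proportional reduction in error
  f_squared <- PRE / (1 - PRE)                       # Cohen's f-squared for the change
  F_value <- (PRE / (PA - PC)) / ((1 - PRE) / (n - PA))
  p_value <- pf(F_value, df1 = PA - PC, df2 = n - PA, lower.tail = FALSE)
  c(PRE = PRE, f_squared = f_squared, F = F_value, p = p_value)
}
# Illustrative (made-up) numbers:
pre_by_hand(n = 100, PC = 1, PA = 2, SSEC = 120, SSEA = 100)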

Value

A matrix with 12 rows and 4 columns. The 1st column reports information for the baseline (intercept-only) model, the 2nd for model C, the 3rd for model A, and the 4th for the change (model A vs. model C). SSE (Sum of Squared Errors), sample size n, df of SSE, and the number of parameters for the baseline model, model C, model A, and the change (model A vs. model C) are reported in rows 1-3. The information in the 4th column is all about the change; put differently, these results quantify the effect of the new parameter(s) that model A has but model C does not. If fitC and fitA are not inferior to the intercept-only model, R-squared, Adjusted R-squared, PRE, PRE_adjusted, and f_squared for the full model (compared with the baseline model) are reported for model C and model A. If model C or model A has at least one predictor, the F-test with its p-value and the post-hoc power are computed for the corresponding full model.
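
For instance, with fit2 and fit5 from the Examples below, the change column can be pulled out of the returned matrix directly (a sketch, assuming the 12 x 4 layout described above):

res <- compare_lm(fit2, fit5)
res[, 4]  # 4th column: statistics for the change, model A (fit5) vs. model C (fit2)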

References

Aberson, C. L. (2019). Applied power analysis for the behavioral sciences. Routledge.

Judd, C. M., McClelland, G. H., & Ryan, C. S. (2017). Data analysis: A model comparison approach to regression, ANOVA, and beyond. Routledge.

Examples

x1 <- rnorm(193)
x2 <- rnorm(193)
y <- 0.3 + 0.2*x1 + 0.1*x2 + rnorm(193)
dat <- data.frame(y, x1, x2)
# Fix the intercept to constant 1 using I().
fit1 <- lm(I(y - 1) ~ 0, dat)
# Free the intercept.
fit2 <- lm(y ~ 1, dat)
compare_lm(fit1, fit2)
# One predictor.
fit3 <- lm(y ~ x1, dat)
compare_lm(fit2, fit3)
# Fix the intercept to 0.3 using offset().
intercept <- rep(0.3, 193)
fit4 <- lm(y ~ 0 + x1 + offset(intercept), dat)
compare_lm(fit4, fit3)
# Two predictors.
fit5 <- lm(y ~ x1 + x2, dat)
compare_lm(fit2, fit5)
compare_lm(fit3, fit5)
# Fix the slope of x2 to 0.05 using offset().
fit6 <- lm(y ~ x1 + offset(0.05*x2), dat)
compare_lm(fit6, fit5)
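
# The second calling convention (a sketch): supply n, PC, PA, SSEC, and SSEA
# directly, here computed from fit3 (model C, 2 parameters) and
# fit5 (model A, 3 parameters) fitted above.
compare_lm(n = 193, PC = 2, PA = 3,
           SSEC = sum(residuals(fit3)^2),
           SSEA = sum(residuals(fit5)^2))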
