gof: Goodness of fit tests for binomial regression


View source: R/gof.R

Description

Goodness of fit tests for binomial regression

Usage

gof(x, ...)

## S3 method for class 'glm'
gof(x, ..., g = 10, plotROC = TRUE)

Arguments

x

A regression model with class glm and x$family$family == "binomial".

...

Additional arguments used when plotting the receiver-operating curve. See ?pROC::roc and ?pROC::plot.roc.

g

Number of groups (quantiles) into which to split observations for the Hosmer-Lemeshow and the modified Hosmer-Lemeshow tests.

plotROC

Plot a receiver operating curve?
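
For illustration, a minimal call might look as follows. This is a sketch: 'fit' is an assumed fitted binomial glm, and print.auc is an argument of pROC::plot.roc passed on via '...'.

library(LogisticDx)
res <- gof(fit, g = 10, plotROC = TRUE,
           print.auc = TRUE)     # extra arguments are passed on when plotting the ROC
res$gof                          # the goodness-of-fit test table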

Details

Details of the elements in the returned list follow below:

ct:
A contingency table, similar to the output of dx.
The following are given per covariate group:

n number of observations
y1hat predicted number of observations with y=1
y1 actual number of observations with y=1
y0hat predicted number of observations with y=0
y0 actual number of observations with y=0

chiSq:
Chi-squared tests of the significance of the model.
Pearson's test and the deviance D test are given.
These are calculated by individual I, by covariate group G and also from the contingency table CT above. They are calculated as:

chiSq = SUM Pr[i]^2

and

D = SUM dr[i]^2

The statistics should follow a chiSq distribution with n - p degrees of freedom.
Here, n is the number of observations (taken individually or by covariate group) and p is the number of predictors in the model.
A low p value for either test suggests that the model is a poor fit.
The assumption of a chiSq distribution is most valid when observations are considered by group.
The statistics from the contingency table should be similar to those obtained when calculated by group.
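
The by-individual versions can be reproduced directly from a fitted model. A minimal sketch, assuming 'fit' is a fitted binomial glm (LogisticDx computes these internally):

PrI <- sum(residuals(fit, type = "pearson")^2)    # Pearson chi-squared statistic
drI <- sum(residuals(fit, type = "deviance")^2)   # deviance D
pchisq(c(PrI, drI), df = df.residual(fit),        # df = n - p
       lower.tail = FALSE)                        # low p suggests poor fit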

ctHL:
The contingency table for the Hosmer-Lemeshow test.
The observations are ordered by probability, then grouped into g groups of approximately equal size.
The columns are:

P the probability
y1 the actual number of observations with y=1
y1hat the predicted number of observations with y=1
y0 the actual number of observations with y=0
y0hat the predicted number of observations with y=0
n the number of observations
Pbar the mean probability, which is (n * P) / SUM(n)
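
The construction of this table can be sketched by hand as below. This is a rough sketch for individual 0/1 data with g = 10; 'fit' is an assumed binomial glm and the column names follow the list above.

P   <- fitted(fit)
y   <- fit$y
grp <- cut(P, quantile(P, probs = seq(0, 1, length.out = 11)),
           include.lowest = TRUE)                 # 10 groups of roughly equal size
ctHL <- data.frame(y1    = tapply(y, grp, sum),   # observed y = 1
                   y1hat = tapply(P, grp, sum),   # expected y = 1
                   n     = as.vector(table(grp)))
ctHL$y0    <- ctHL$n - ctHL$y1
ctHL$y0hat <- ctHL$n - ctHL$y1hat
ctHL$Pbar  <- ctHL$y1hat / ctHL$n                 # mean probability per group
ctHL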

gof:
All of these tests rely on assessing the effect of adding an additional variable to the model.
Thus a low p value for any of these tests implies that the model is a poor fit.

Hosmer and Lemeshow tests

Hosmer and Lemeshow's C statistic is based on y[k], the number of observations with y=1 in group k; n[k], the number of observations in group k; and Pbar[k], the average probability in group k:

Pbar[k] = SUM(i) n[i]P[i] / n[k], k=1,2...g

The statistic is:

C = SUM(k=1...g) (y[k] - n[k]Pbar[k])^2 / (n[k]Pbar[k](1 - Pbar[k]))

This should follow a chiSq distribution with g - 2 degrees of freedom.
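
Continuing the sketch above, the C statistic and its p value could be computed as follows (assuming the 'ctHL' data frame from the earlier sketch):

C <- with(ctHL, sum((y1 - n * Pbar)^2 / (n * Pbar * (1 - Pbar))))
pchisq(C, df = nrow(ctHL) - 2, lower.tail = FALSE)   # low p suggests poor fit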

The modified Hosmer and Lemeshow test assesses the change in model deviance D when G is added as a predictor. That is, a linear model is fit as:

dr[i] ~ G, dr[i] = deviance residual

and the effect of adding G assessed with anova(lm(dr ~ G)).
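
A minimal sketch of this step, reusing 'fit' and the grouping 'grp' from the earlier sketch:

dr <- residuals(fit, type = "deviance")
anova(lm(dr ~ grp))   # a significant group effect suggests poor fit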

Osius and Rojek's tests

These are based on a power-divergence statistic PD[l] (l=1 for Pearson's test), its expected value lbar, and the standard deviation (herein, of a binomial distribution) SD. The statistic is:

Z[OR] = (PD[l] - lbar) / SD[l]


For logistic regression, it is calculated as:

Z[OR] = (chiSq - (n - p)) / (2 * (n - SUM 1/n[i]) + RSS)^0.5

where RSS is the residual sum-of-squares from a weighted linear regression:

(1 - 2 * P[i]) / SD[i] ~ X, weights = SD[i]

Here X is the matrix of model predictors.
A two-tailed test against a standard normal distribution N(0, 1) should not be significant.
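
The two-tailed p value for a computed statistic can be obtained as in the sketch below, where Z_OR is only a placeholder value for illustration:

Z_OR <- 1.2                                  # placeholder value, not a real result
2 * pnorm(abs(Z_OR), lower.tail = FALSE)     # two-tailed test against N(0, 1)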

Stukel's tests

These are based on the addition of the vectors:

z[1] = Pgeq0.5 = sign(P[i] >= 0.5)

and / or

z[2] = Pl0.5 = sign(P[i] < 0.5)

to the existing model predictors.
The model fit is compared to the original using the score (e.g. SstPgeq0.5) and likelihood-ratio (e.g. SllPl0.5) tests. These models should not be a significantly better fit to the data.
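
As an illustration, a common formulation of Stukel's test (not necessarily the exact internal implementation) multiplies the indicators above by half the squared linear predictor. A sketch of the likelihood-ratio version under that assumption, for a binomial glm 'fit' on individual 0/1 responses:

eta <- predict(fit, type = "link")                 # linear predictor
P   <- fitted(fit)
z1  <- 0.5 * eta^2 * (P >= 0.5)
z2  <- -0.5 * eta^2 * (P < 0.5)
f0  <- glm(fit$y ~ -1 + offset(eta), family = binomial)            # reproduces 'fit'
f1  <- glm(fit$y ~ -1 + z1 + z2 + offset(eta), family = binomial)  # adds z1 and z2
anova(f0, f1, test = "Chisq")                      # significant -> poor fit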

R2:

Pseudo-R^2 comparisons of the predicted values from the fitted model vs. an intercept-only model.

sum-of-squares

The sum-of-squares (linear-regression) measure, based on the squared Pearson correlation coefficient, is calculated by individual using the mean probability:

Pbar = SUM(y[i]) / n

and is given by:

R2[ssI] = 1 - SUM(y[i] - P[i])^2 / SUM(y[i] - Pbar)^2

The same measure, by covariate group, is:

R2[ssG] = 1 - SUM(y[i] - n[i] * P[i])^2 / SUM(y[i] - n[i] * Pbar)^2
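
The by-individual measure can be sketched directly (assuming 'fit' is a binomial glm on individual 0/1 data):

y    <- fit$y
P    <- fitted(fit)
Pbar <- mean(y)                               # SUM(y[i]) / n
1 - sum((y - P)^2) / sum((y - Pbar)^2)        # R2[ssI]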

log-likelihood

The log-likelihood based R^2 measure per individual is based on ll[0], the log-likelihood of the intercept-only model, and ll[p], the log-likelihood of the fitted model.

It is calculated as:

R2[llI] = (ll[0] - ll[p]) / ll[0]

This measure per covariate group is based on ll[s], the log-likelihood for the saturated model, which is calculated from the model deviance D:

ll[s] = ll[p] + D / 2

It is calculated as:

R2[llG] = (ll[0] - ll[p]) / (ll[0] - ll[s])
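
A sketch of both log-likelihood measures, assuming 'fit' as above. Note that for an individual-data fit ll[s] is 0, so the two measures coincide here; the package computes llG by covariate group.

ll_p <- as.numeric(logLik(fit))                 # fitted model
ll_0 <- as.numeric(logLik(update(fit, . ~ 1)))  # intercept-only model
ll_s <- ll_p + deviance(fit) / 2                # saturated model, from D
(ll_0 - ll_p) / ll_0                            # R2[llI]
(ll_0 - ll_p) / (ll_0 - ll_s)                   # R2[llG]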

auc:

The area under the receiver-operating curve.
This may broadly be interpreted as:

auc Discrimination
auc=0.5 useless
0.7 <= auc < 0.8 acceptable
0.8 <= auc < 0.9 excellent
auc >= 0.9 outstanding

auc >= 0.9 occurs rarely, as this requires almost complete separation / perfect classification.
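
The AUC and its confidence interval can also be obtained directly from pROC; a minimal sketch, assuming 'fit' as above:

library(pROC)
r <- roc(fit$y, fitted(fit))     # response, predictor
auc(r)                           # area under the ROC curve
ci.auc(r)                        # 95% confidence interval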

Value

A list of data.tables as follows:

ct

Contingency table.

chiSq

Chi-squared tests of the significance of the model. The tests are:

PrI test of the Pearson residuals, calculated by individual
drI test of the deviance residuals, calculated by individual
PrG test of the Pearson residuals, calculated by covariate group
drG test of the deviance residuals, calculated by covariate group
PrCT test of the Pearson residuals, calculated from the contingency table
drCT test of the deviance residuals, calculated from the contingency table
ctHL

Contingency table for the Hosmer-Lemeshow test.

gof

Goodness-of-fit tests. These are:

  • HL Hosmer-Lemeshow's C statistic.

  • mHL The modified Hosmer-Lemeshow test.

  • OsRo Osius and Rojek's test of the link function.

  • S Stukel's tests:

    SstPgeq0.5 score test for addition of vector z1
    SstPl0.5 score test for addition of vector z2
    SstBoth score test for addition of vectors z1 and z2
    SllPgeq0.5 log-likelihood test for addition of vector z1
    SllPl0.5 log-likelihood test for addition of vector z2
    SllBoth log-likelihood test for addition of vectors z1 and z2
R2

R-squared like tests:

ssI sum-of-squares, by individual
ssG sum-of-squares, by covariate group
llI log-likelihood, by individual
llG log-likelihood, by covariate group.
auc

Area under the receiver-operating curve (ROC) with 95% CIs.

Additionally, if plotROC=TRUE, a plot of the ROC.

Note

The returned list has the additional class of "gof.glm".
The print method for this class shows only those results which have a p value.

Author(s)

Modified Hosmer & Lemeshow goodness of fit test: adapted from existing work by Yongmei Ni. Code at GitHub.

References

Osius G & Rojek D (1992). Normal goodness-of-fit tests for multinomial models with large degrees of freedom. Journal of the American Statistical Association. 87(420):1145-52. doi: 10.1080/01621459.1992.10476271. Also available from JSTOR: https://www.jstor.org/stable/2290653

Hosmer D, Hosmer T, Le Cessie S & Lemeshow S (1997). A comparison of goodness-of-fit tests for the logistic regression model. Statistics in Medicine. 16(9):965-80. doi: 10.1002/(SICI)1097-0258(19970515)16:9<965::AID-SIM509>3.0.CO;2-O

Mittlböck M & Schemper M (1996). Explained variation for logistic regression. Statistics in Medicine. 15(19):1987-97. doi: 10.1002/(SICI)1097-0258(19961015)15:19<1987::AID-SIM318>3.0.CO;2-9. Also available from CiteSeerX / Penn State University (free).

Examples

## H&L 2nd ed. Sections 5.2.2, 5.2.4, 5.2.5. Pages 147-167.
## Not run: 
data(uis)
uis$NDRGFP1 <- 10 / (uis$NDRGTX + 1)
uis$NDRGFP2 <- uis$NDRGFP1 * log((uis$NDRGTX + 1) / 10)
g1 <- glm(DFREE ~ AGE + NDRGFP1 + NDRGFP2 + IVHX +
              RACE + TREAT + SITE +
              AGE:NDRGFP1 + RACE:SITE,
              family=binomial, data=uis)
## store the result to examine its components
g1.gof <- gof(g1, plotROC=FALSE)
g1.gof
unclass(g1.gof)
attributes(g1.gof$gof)

## End(Not run)
