Run the Metropolis-Hastings algorithm using a Markov basis computed with 4ti2 to sample from the conditional distribution of the data given the sufficient statistics of a hierarchical model.
hierarchical(formula, data, iter = 10000, burn = 1000, thin = 10,
  engine = c("Cpp", "R"), method = c("ipf", "mcmc"), moves)
formula: formula for the hierarchical log-linear model.
data: data, typically as a table, but other formats are accepted.
iter: number of chain iterations.
burn: burn-in.
thin: thinning.
engine: C++ or R? (C++ yields roughly a 20-25x speedup.)
method: should the expected table (exp) be fit using iterative proportional fitting (via loglin) or as the average of the MCMC steps?
moves: the Markov moves for the MCMC.
hierarchical fits and tests a hierarchical log-linear model on a contingency table. In many ways, hierarchical is like stats::loglin or MASS::loglm; however, there are a few key differences in its functionality.
The first difference is methodological. The tests conducted with hierarchical are exact tests based on the conditional distribution of the data given the sufficient statistics for the model. In other words, they are Fisher's exact test analogues for log-linear models. These tests are made possible by advances in algebraic statistics; see the first and second references below. In particular, hierarchical leverages Markov bases through the software 4ti2 to construct a Metropolis-Hastings algorithm to sample from the conditional distribution of the table given the sufficient statistics.
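To fix ideas, here is a conceptual sketch of that walk; it is not algstat's implementation, and mh_walk, x, and moves are illustrative names (x is the table flattened to an integer vector, and moves holds one Markov move per column, as in the moves element returned below):

# conceptual sketch of the Diaconis-Sturmfels Metropolis-Hastings walk;
# the target is the conditional (hypergeometric) distribution
# P(x) proportional to 1/prod(x!) on the fiber of the observed table
mh_walk <- function(x, moves, iter = 10000) {
  steps <- matrix(0L, nrow = length(x), ncol = iter)
  for (k in seq_len(iter)) {
    # propose: add a uniformly chosen move with a uniformly chosen sign
    prop <- x + sample(c(-1L, 1L), 1) * moves[, sample(ncol(moves), 1)]
    if (all(prop >= 0)) {
      # log acceptance ratio log[prod(x!)/prod(prop!)], via lgamma for stability
      if (log(runif(1)) < sum(lgamma(x + 1)) - sum(lgamma(prop + 1))) x <- prop
    }
    steps[, k] <- x  # burn-in and thinning omitted for clarity
  }
  steps
}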
A second way that hierarchical differs from stats::loglin or MASS::loglm is in generalizing the kinds of tests performed. While those functions allow for asymptotic unconditional testing using Pearson's X^2 test and the likelihood ratio test (MASS::loglm is simply a wrapper for stats::loglin), hierarchical gives several test statistics: Pearson's X^2, the likelihood ratio G^2, Freeman-Tukey, Cressie-Read (lambda = 2/3), and Neyman's modified X^2; see the last reference. In other words, to compute the exact p value, iter = 1e4 samples are drawn from the conditional distribution of the table given the sufficient statistics, and the proportion of sampled tables with X^2, G^2, etc. values greater than or equal to that of the observed table is the p value of the (conditional) exact test. A similar, but perhaps preferable, approach simply adds up the probabilities of the tables that have probabilities less than or equal to that of the observed table; this is the first line of output in hierarchical and does not use a test statistic.
Some authors (see the third reference) suggest that for discrete problems a "mid p value" is preferable to the traditional p value, and that when presented it should be interpreted in the same way. If the p value is defined to be, say, P(samps >= obs), the mid p value is defined to be P(samps > obs) + P(samps == obs)/2. The mid p value is computed for each test.
Since the tests make use of Monte Carlo sampling, standard errors (SE) are reported for each statistic. For the test statistics, this is just the standard deviation of the samples divided by the square root of the sample size, iter; they are computed and returned by the print method. The standard errors of the p values use the CLT asymptotic approximation and, therefore, warrant greater consideration when the p value is close to 0 or 1.
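Concretely, the Monte Carlo estimates described above amount to the following sketch, in which sampsX2, obsX2, sampProbs, and obsProb are hypothetical names for the sampled and observed statistics and table probabilities:

# hypothetical names: sampsX2/obsX2 are sampled/observed X^2 values,
# sampProbs/obsProb the corresponding conditional table probabilities
p      <- mean(sampsX2 >= obsX2)                # exact p value for X^2
p_prob <- mean(sampProbs <= obsProb)            # statistic-free p value (first line of output)
mid_p  <- mean(sampsX2 > obsX2) + mean(sampsX2 == obsX2) / 2
se_X2  <- sd(sampsX2) / sqrt(length(sampsX2))   # SE of the test statistic
se_p   <- sqrt(p * (1 - p) / length(sampsX2))   # CLT-based SE of the p value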
A list containing the named elements:

steps: an integer matrix whose columns represent individual samples from the MCMC.
moves: the moves used for the proposal distribution in the MCMC, computed with 4ti2 (note that only the positive moves are given).
acceptProb: the average acceptance probability of the moves, including the thinned moves.
param: the fitted parameters of the log-linear model.
df: parameters per term in the model.
quality: model selection statistics AIC, AICc, and BIC.
residuals: the (unstandardized) Pearson residuals, (O - E) / sqrt(E).
call: the call.
obs: the contingency table given.
exp: the fit contingency table as an integer array.
A: the sufficient statistics computing matrix (from Tmaker).
p.value: the exact p values of individual tests, accurate to Monte Carlo error. These are computed as the proportion of samples with statistics equal to or larger than the observed statistic.
mid.p.value: the mid p values; see Agresti pp. 20-21.
statistic: Pearson's chi-squared (X2), likelihood ratio (G2), Freeman-Tukey (FT), Cressie-Read (CR), and Neyman's modified chi-squared (NM) statistics computed for the table given; see the sketch following this list.
sampsStats: the statistics computed for each MCMC sample.
cells: the number of cells in the table.
method: the method used to estimate the table.
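For reference, these statistics can be written as in the following sketch, where o and e are the observed and expected tables as numeric vectors; the Freeman-Tukey form shown is one of several variants in the literature, so treat these as illustrative rather than as algstat's exact definitions:

# o = observed counts, e = expected counts, as numeric vectors
X2 <- sum((o - e)^2 / e)                         # Pearson's chi-squared
G2 <- 2 * sum(ifelse(o > 0, o * log(o / e), 0))  # likelihood ratio (0*log(0) = 0)
FT <- 4 * sum((sqrt(o) - sqrt(e))^2)             # a common Freeman-Tukey form
lambda <- 2/3                                    # Cressie-Read power parameter
CR <- 2 / (lambda * (lambda + 1)) * sum(o * ((o / e)^lambda - 1))
NM <- sum((o - e)^2 / o)                         # Neyman's modified chi-squared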
David Kahle
Diaconis, P. and B. Sturmfels (1998). Algebraic Algorithms for Sampling from Conditional Distributions. The Annals of Statistics 26(1), pp. 363-397.
Drton, M., B. Sturmfels, and S. Sullivant (2009). Lectures on Algebraic Statistics. Basel: Birkhäuser Verlag AG.
Agresti, A. (2002). Categorical Data Analysis, 2nd ed. John Wiley & Sons.
Agresti, A. (1992). A Survey of Exact Inference for Contingency Tables. Statistical Science 7(1), pp. 131-153.
Read, T. and Cressie, N. (1988). Goodness-of-Fit Statistics for Discrete Multivariate Data. Springer-Verlag.
## Not run:
## handedness introductory example
############################################################
data(handy)
(out <- hierarchical(~ Gender + Handedness, data = handy))
# hierarchical performs the same tasks as loglin and loglm,
# but hierarchical gives the exact test p values and more statistics
statsFit <- stats::loglin(handy, list(c(1),c(2)), fit = TRUE, param = TRUE)
massFit <- MASS::loglm(~ Gender + Handedness, data = handy)
# loglm is just a wrapper of loglin
# comparisons between hierarchical and loglin
############################################################
# the expected table given the sufficient statistics can be computed
# via two methods, iterative proportional fitting, and the mcmc itself:
out$exp # ipf
hierarchical(~ Gender + Handedness, data = handy, method = "mcmc")$exp
statsFit$fit # the equivalent in loglin; this is used by default in hierarchical
# the parameter values of the loglinear model can be accessed
out$param
statsFit$param
# the p-value for the overall model is available as well
# hierarchical gives the exact conditional p-value
# (conditional on the sufficient statistics)
# the five numbers correspond to the probability of tables that are
# "more weird" than the observed table, where "more weird" is determined
# by having a larger X2 value (or G2, FT, CR, or NM)
out$p.value
fisher.test(handy)$p.value # out$p.value["X2"] is accurate to monte carlo error
# loglin gives the p-values using the unconditional asymptotic distributions
c(
"X2" = pchisq(statsFit$pearson, df = statsFit$df, lower.tail = FALSE),
"G2" = pchisq(statsFit$lrt, df = statsFit$df, lower.tail = FALSE)
)
out$mid.p.value # the mid (exact conditional) p-value is also available
# the test statistics based on the observed table and the expected
# table under the model are available
out$statistic
c(statsFit$pearson, statsFit$lrt) # loglin only gives X2 and G2
# the Markov basis used for the proposal distribution of the Metropolis-Hastings
# algorithm is returned. the proposal distribution is uniform on +/-
# the moves added to the current table
out$moves
# they are more easily understood as tables
vec2tab(out$moves, dim(handy))
# notice that the marginals stay fixed:
handy + vec2tab(out$moves, dim(handy))
# these were computed as the markov basis of the integer matrix
out$A
markov(out$A)
out$moves
# the moves are also sometimes written in tableau form (LAS p.13)
tableau(out$moves, dim(handy))
# that's +1 in the table elements [1,1] and [2,2]
# and -1 in the table elements [1,2] and [2,1]
# the acceptance probability of the MCMC is retained
out$acceptProb
# various model assessment measures are also available
out$quality
# the number of independent parameters per term are in df
out$df
# as an added help, you may find the visuals in vcd useful:
# library(vcd)
# mosaic(~ Gender + Handedness, data = handy, shade = TRUE, legend = TRUE)
## politics example - with computing the exact p value by hand
############################################################
data(politics)
(out <- hierarchical(~ Personality + Party, data = politics))
statsFit <- stats::loglin(politics, as.list(1:2), fit = TRUE, param = TRUE)
out$p.value
# exact without monte-carlo error
sum(dhyper(c(0:3,6:9), 10, 10, 9))
fisher.test(politics)$p.value
round(dhyper(0:9, 10, 10, 9), 4)
# comparisons :
out$exp
statsFit$fit
out$param
statsFit$param
out$p.value # exact
c(
"X2" = pchisq(statsFit$pearson, df = statsFit$df, lower.tail = FALSE),
"G2" = pchisq(statsFit$lrt, df = statsFit$df, lower.tail = FALSE)
) # asymptotic approximation
fisher.test(politics)$p.value # accurate to monte carlo error
out$statistic # accurate to monte carlo error
c(statsFit$pearson, statsFit$lrt)
# mosaic(~ Personality + Party, data = politics, shade = TRUE, legend = TRUE)
## eyeHairColor from the Diaconis and Sturmfels reference
############################################################
data(HairEyeColor)
eyeHairColor <- margin.table(HairEyeColor, 2:1)
outC <- hierarchical(~ Eye + Hair, data = eyeHairColor)
outR <- hierarchical(~ Eye + Hair, data = eyeHairColor, engine = "R")
# doesn't work even with workspace = 2E9 (with over 4.5Gb in memory)
#fisher.test(eyeHairColor, hybrid = TRUE, workspace = 2E9)
tableau(outC$moves, dim(eyeHairColor))
# library(microbenchmark)
# microbenchmark(
# hierarchical(~ Eye + Hair, data = eyeHairColor),
# hierarchical(~ Eye + Hair, data = eyeHairColor, engine = "R")
# )
# 5-10 times faster; much faster with increased iter
# mosaic(~ Eye + Hair, data = HairEyeColor, shade = TRUE, legend = TRUE)
## abortion preference example from the
## Diaconis and Sturmfels reference pp. 379--381
## a no 3-way interaction model
############################################################
data(abortion)
out <- hierarchical(
~ Education*Abortion + Abortion*Denomination + Education*Denomination,
data = abortion,
iter = 10000, burn = 50000, thin = 50
)
out$p.value
vec2tab(rowMeans(out$steps), dim(abortion)) # cf. p. 380
loglin(abortion, subsets(1:3, 2), fit = TRUE)$fit
out$param
loglin(abortion, subsets(1:3, 2), param = TRUE)$param
qqplot(rchisq(1055, df = 8), out$sampsStats$X2s)
curve(1*x, from = 0, to = 30, add = TRUE, col = "red")
( nMoves <- 2*ncol(out$moves) ) # DS uses 110
# the markov basis is larger than it needs to be
## loglin no three-way interaction model example
############################################################
# the help page for loglin fits the no three-way interaction model on HairEyeColor,
# finds a .66196 p-value using the asymptotic distribution, and concludes
# a good fit:
data(HairEyeColor)
fit <- loglin(HairEyeColor, subsets(1:3, 2), fit = TRUE, param = TRUE)
mod <- hierarchical(~ Eye*Hair + Hair*Sex + Eye*Sex, data = HairEyeColor)
# p values
pchisq(fit$lrt, fit$df, lower.tail = FALSE) # see ?loglin
mod$p.value
# test statistics
c(fit$pearson, fit$lrt)
mod$statistic
# fits (estimated tables)
fit$fit
mod$exp
mod$obs
# checking the autocorrelation
acf(mod$sampsStats$PRs)
mod <- hierarchical(~ Eye*Hair + Hair*Sex + Eye*Sex, data = HairEyeColor, thin = 100)
acf(mod$sampsStats$PRs) # got it!
# the slight differences in fit$fit and mod$exp (both done with ipf from loglin)
# are due to differences in variable order:
loglin(HairEyeColor, subsets(1:3, 2), fit = TRUE)$fit
loglin(HairEyeColor, subsets(1:3, 2)[c(1,3,2)], fit = TRUE)$fit
# a few model moves
vec2tab(mod$moves[,1], dim(HairEyeColor))
vec2tab(mod$moves[,50], dim(HairEyeColor))
-vec2tab(mod$moves[,50], dim(HairEyeColor))
# they contribute 0 to the marginals of the table (the sufficient statistics)
vec2tab(mod$moves[,50], dim(HairEyeColor))
mod$A %*% mod$moves[,50] # all zeros
HairEyeColor
HairEyeColor + vec2tab(mod$moves[,50], dim(HairEyeColor))
## a table with positive marginals but no MLE for
## the no-three way interaction model
############################################################
data(haberman)
mod <- hierarchical(~ X1*X2 + X2*X3 + X1*X3, data = haberman)
statsFit <- loglin(haberman, subsets(1:3, 2), param = TRUE, fit = TRUE)
statsFit$fit
statsFit$param
c(statsFit$pearson, statsFit$lrt)
algstatFit <- hierarchical(~ X1*X2 + X2*X3 + X1*X3, data = haberman, method = "mcmc")
algstatFit$exp
algstatFit$param
algstatFit$statistic
## an example from agresti, p.322
############################################################
data(drugs)
ftable(aperm(drugs, c(3, 1, 2))) # = table 8.3
out <- hierarchical(~Alcohol + Cigarette + Marijuana, data = drugs)
matrix(round(aperm(out$exp, c(2,1,3)), 1), byrow = FALSE)
loglin(drugs, as.list(1:3), fit = TRUE)$fit
loglin(drugs, as.list(1:3), param = TRUE)$param
# # the saturated model issues a warning from markov, but works :
# out <- hierarchical(~Alcohol * Cigarette * Marijuana, data = drugs)
# matrix(round(aperm(out$exp, c(2,1,3)), 1), byrow = FALSE)
ftable(aperm(out$exp, c(3,1,2)))
stats <- loglin(drugs, as.list(1:3), fit = TRUE, param = TRUE)
# the same model considered via glm
mod <- glm(
  Freq ~ Alcohol + Cigarette + Marijuana,
  data = as.data.frame(drugs), family = poisson
)
summary(mod)
matrix(round(mod$fitted.values[c(1,3,2,4,5,7,6,8)], 1))
mod <- glm(
Freq ~ Alcohol * Cigarette + Marijuana,
data = as.data.frame(drugs), family = poisson
)
summary(mod)
matrix(round(mod$fitted.values[c(1,3,2,4,5,7,6,8)],1))
mod <- glm(
Freq ~ Alcohol * Cigarette * Marijuana,
data = as.data.frame(drugs), family = poisson
)
summary(mod)
matrix(round(mod$fitted.values[c(1,3,2,4,5,7,6,8)],1))
## End(Not run)