# Overview and Setup

The rmsb package is the Bayesian companion to the rms package, and uses many of its functions for post-model-fitting analysis and graphics. The sole fitting function in rmsb at present is blrm, which implements Bayesian binary and ordinal proportional odds logistic regression and the @pet90par partial proportional odds model that relaxes the proportional odds assumption. It is the analog and generalization of rms::lrm, and for ordinal responses it is intended for outcomes with up to several hundred ordinal levels. The Bayesian approach has a number of advantages over traditional frequentist models, including

1. the use of exact calculations (to within simulation error) instead of large sample (e.g., normal theory) approximations to p-values and confidence intervals
2. exact and more intuitive inference when random effects are included in the model
3. the ability to make probability statements about parameters and combinations of parameters, which includes computations of the probabilities of assertions such as "the effect of $x_1$ exceeds 1.2 and the effect of $x_2$ exceeds 0.7"
4. capturing more sources of uncertainty. For example, the blrm function automatically computes highest posterior density intervals for a variety of statistical indexes such as the Brier score and AUROC ($c$-index). Note: by default these intervals are computed using only 400 posterior draws to save time. For a blrm fit object f you can draw more samples, to get more accurate intervals, with for example print(f, ns=2000).
5. the ability to incorporate external information or beliefs about parameters using prior distributions
6. by using the Stan language to specify the likelihood components, one can not only do posterior distribution sampling but also quickly compute maximum likelihood estimates (Bayesian posterior modes)

blrm uses normal priors for the $\beta$ parameters, and the user may specify the standard deviation of these priors separately for each model parameter. The default is $\sigma=100$, which is nearly flat. When there are random effects, the analyst may specify the mean of the exponential distribution that serves as the prior for the standard deviation of the random effects; the default mean is 1.0, a reasonable number for the logit scale. You can also use a half-$t$ distribution for the SD of random effects.

Regarding uncertainties about statistical indexes of model performance such as AUROC, the Bayesian posterior distributions computed by rmsb account for coefficient uncertainty: the ability of the sample at hand to uncover the true unknown data-generating coefficients, and how strongly those coefficients can predict outcomes in the sample. The posterior distributions do not pertain to sample-to-sample variability of model fit or model validation.

blrm uses Stan code written by Ben Goodrich of Columbia University and Frank Harrell. This code is precompiled when the rmsb package is built. You must install the rstan package to use rmsb.

Here is a typical setup code chunk for using rmsb. Using options(auto_write=TRUE) is not recommended: when running a series of regressions, the rstan package tends to recompile Stan code even when the code hasn't changed.

```r
require(rmsb)
knitrSet(lang='markdown', w=7, h=7, fig.path='png/')
options(prType='html')     # for rms output: print, anova, summary, etc.
options(mc.cores = parallel::detectCores())   # use max # CPUs
```

```r
# Function to print an object, inside a verbatim quote if
# results='asis' in effect
# Works for all output formats in R markdown
vprint <- function(x, ...) {
  r    <- knitr::opts_current$get('results')
  asis <- length(r) && r == 'asis'
  if(! asis) return(invisible(print(x, ...)))
  r <- c('\n\n', capture.output(print(x, ...)), '\n\n')
  cat(r, sep='\n')
  invisible()
}
```

## Running Fits Only When Something Changes

You'll see file='...' in the longer-running blrm calls below. If the file already exists and none of the data, the options sent to rstan, nor the underlying Stan code have changed from what was used to create the fit object stored in that file (as judged by their md5 hash), that saved fit object is returned immediately without running the rstan code, often saving a great deal of execution time. This works well with the workflow of long R markdown reports, making it so that only the portions of the Bayesian analysis that have changed are run. Note that using the knitr cache=TRUE option does not work well: the cache files for this script were about a GB in size, and the system does not accurately recognize when a model fit hasn't changed and doesn't need to be re-run when rstan is involved.

A restored fit object does not contain the rstan object, saving tens of megabytes of storage. Standard Stan diagnostics are stored in the fit object separately, and it is assumed that if the user wanted to run rstan::loo, loo=TRUE would have been specified to blrm so that loo is run and its (small) result stored in the blrm fit object. If you want to run pairs to get more graphical diagnostics, instead of relying on the rstan object always being available, specify pairs=TRUE or pairs='filename.png' to blrm to graph the pair plots. The latter is recommended, and one can put knitr::include_graphics('filename.png') in an R code chunk to render the graph in the report even if blrm was not re-run.
When file='filename.rds' is specified to blrm and the file does not exist or analysis inputs have changed since the file was created, the blrm fit object will be saved with saveRDS in .rds binary format in your current working directory at the completion of blrm's work. The rstan component is omitted from the saved file. The utility function fitIf available from here is another way to efficiently manage Bayesian simulation workflow. The fitIf approach runs the analysis only if the file doesn't already exist. If the file exists but the data or the model have changed, the fitIf approach is not intelligent enough to re-run the analysis (unlike the file='...' approach above). Whether using fitIf or file=, you lose the ability to run stanDxplot(..., rstan=TRUE) on restored objects, so the stanDxplot function tries to find an existing trace-plot image file corresponding to the current R markdown chunk when the fit object no longer has the rstan component. For most purposes this doesn't matter, because running stanDxplot with the defaults shows the needed non-burn-in samples, which are always stored in fit objects. The file='filename.rds' approach is preferred.

# Proportional Odds and Binary Logistic Regression

## Example: 10-level Ordinal Outcome

Simulate a dataset with three predictors and one 10-level ordinal outcome. Run the frequentist proportional odds model then the Bayesian one.

```r
set.seed(1)
n  <- 500
x1 <- runif(n, -1, 1)
x2 <- runif(n, -1, 1)
x3 <- sample(0 : 1, n, TRUE)
y  <- x1 + 0.5 * x2 + x3 + rnorm(n)
y  <- as.integer(cut2(y, g=10))
dd <- datadist(x1, x2, x3);  options(datadist='dd')
f  <- lrm(y ~ x1 + pol(x2, 2) + x3, eps=1e-7)   # eps to check against rstan
f
```

Before getting posterior distributions of parameters, use rstan to just get maximum likelihood estimates and compare them with those from lrm. Do this for increasingly flat priors for the $\beta$s. The default prior SD for blrm is 100.
Running method='optimizing' is a quick way to study the effect of priors on the posterior modes for non-intercepts when there are no random effects in the model. It is also a way to get a sense of the scaling of parameters used in the actual calculations (but see a later section for how to show standard deviations of variables on the transformed scale given to Stan). Except for variables listed in the keepsep argument to blrm, prior standard deviations apply to the QR orthonormalized version of the design matrix.

```r
for(psd in c(0.25, 1, 10, 100, 10000)) {
  cat('\nPrior SD:', psd, '\n')
  g <- blrm(y ~ x1 + pol(x2, 2) + x3, method='optimizing', priorsd=psd)
  cat('-2 log likelihood:', g$deviance, '\n')
  print(g$coefficients)
}
# Compare with ordinary MLEs and deviance
f$deviance
coef(f)
```


Fit the model with Dirichlet priors on intercepts and wide normal priors on the $\beta$s. Show the model fit summary. Note that the indexes of predictive discrimination/accuracy include 0.95 highest posterior density intervals. In frequentist inference we pretend that quantities such as AUROC and $R^2$ are estimated without error, which is far from the case.

In several places you will see an index named Symmetry. This is a measure of the symmetry of a posterior distribution. Values farther from 1.0 indicate asymmetry, in which case the use of standard errors and of a normal approximation for the posterior distribution is not justified. The symmetry index is the ratio of the gap between the posterior mean and the 0.95 quantile of the posterior distribution to the gap between the 0.05 quantile and the mean.
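As a concrete illustration of that definition, the index can be computed directly from a vector of posterior draws. The symIndex helper below is hypothetical (rmsb computes the index internally), shown only to make the formula explicit.

```r
# Hypothetical helper mirroring the definition in the text:
# sym = (q0.95 - mean) / (mean - q0.05)
symIndex <- function(draws) {
  q <- quantile(draws, c(0.05, 0.95), names=FALSE)
  m <- mean(draws)
  (q[2] - m) / (m - q[1])
}
set.seed(1)
symIndex(rnorm(50000))   # symmetric posterior: close to 1
symIndex(rexp(50000))    # right-skewed posterior: well above 1
```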

```r
bs <- blrm(y ~ x1 + pol(x2, 2) + x3, file='bs.rds')
bs

# Show more detailed analysis of model performance measures
blrmStats(bs, pl=TRUE)
```


Show basic Stan diagnostics. Had stanDxplot(bs, rstan=TRUE) been used, intercepts would have been shifted from those shown in bs because of subtraction of covariate means before passing the data to rstan.

```r
stanDxplot(bs)
stanDx(bs)
```


Here are the posterior distributions, calculated using kernel density estimates from posterior draws. Posterior modes, shown as vertical lines, are parameter values that maximize the log posterior density (using rstan::optimizing in the original model fit), so they do not necessarily coincide with the peaks of the kernel density estimates.

```r
plot(bs)
# Also show 2-d posterior density contour for two collinear terms
plot(bs, c('x2', 'x2^2'), bivar=TRUE)   # assumes ellipse
plot(bs, c('x2', 'x2^2'), bivar=TRUE, bivarmethod='kernel')   # kernel density
```

```r
# Print frequentist side-by-side with Bayesian posterior mean, median, mode
cbind(MLE=coef(f), t(bs$param))
# Compare covariance matrix of posterior draws with MLE
round(diag(vcov(f)) / diag(vcov(bs)), 2)
range(vcov(f) / vcov(bs))
```

Next show frequentist and Bayesian contrasts. For the Bayesian contrast the point estimate is the posterior mean, and the 0.95 highest posterior density interval is computed. Instead of a p-value, the posterior probability that the contrast is positive is computed.

```r
contrast(f, list(x1=0, x3=1), list(x1=.25, x3=0))
k <- contrast(bs, list(x1=0:1, x3=1), list(x1=.25, x3=0))
k
```

For Bayesian contrasts we can also plot the posterior densities for the contrasts, and a 2-d highest-density contour.

```r
plot(k)
plot(k, bivar=TRUE)   # applicable when exactly 2 contrasts
plot(k, bivar=TRUE, bivarmethod='kernel')
```

Compute posterior probabilities for various assertions about unknown true parameter values. The PostF function is a function generator that effectively evaluates the assertion to a 0/1 value and computes the mean of these binary values over posterior draws. As is the case with inference about the quadratic effect of x2 below, when the assertion does not evaluate to a binary 0/1 or logical TRUE/FALSE value, it is taken as a quantity derived from one or more model parameters, and a posterior density is drawn for the derived parameter. We use that to get a posterior distribution for the vertex of the quadratic x2 effect.

```r
P <- PostF(bs, pr=TRUE)   # pr=TRUE: show new short legal R names
P(b3 > 0 & b1 > 1.5)
P(b3 > 0)
P(abs(b3) < 0.25)   # evidence for small |nonlinearity|
mean(bs$draws[, 'x2^2'] > 0, na.rm=TRUE)   # longhand calculation

# Plot posterior distribution for the vertex of the quadratic x2 effect
# This distribution should be wide because the relationship is linear
# (true value of b3 is zero)
plot(P(-b2 / (2 * b3)))

# Recreate the P function using original parameter names
# (which may not be legal R names)
P <- PostF(bs, name='orig')
P(x2^2 > 0)
P(x2^2 > 0 & x1 > 1.5)

# Remove rstan results from fit.  Compute savings in object size.
# Note: this will only be accurate when running the fits for
# the first time (not when restoring shortened forms of them from
# disc)
# Result: 33.8MB before, 0.5MB after
s1 <- format(object.size(bs), 'MB')
bs$rstan <- NULL
s2 <- format(object.size(bs), 'MB')
cat('Before:', s1, ' After:', s2, '\n')
```

## Bayesian Wilcoxon Test

Since the proportional odds (PO) ordinal logistic model is a generalization of the Wilcoxon/Kruskal-Wallis tests, one can use Bayesian proportional odds regression to get the Bayesian equivalent of the Wilcoxon test. Even when not adjusting for covariates (impossible with the Wilcoxon test) there are advantages to putting this in a modeling framework, as detailed in Section 7.6 of BBR. A major advantage is estimation ability. One can estimate group-specific means, quantiles, and exceedance probabilities, and Bayesian inference provides exact uncertainty intervals (highest posterior density intervals in what follows) for these. The PO model does not require there to be any ties among the Y values, so it handles continuous data very well (the orm function in the rms package efficiently handles many thousands of distinct Y levels, requiring many thousands of intercepts in the model). Let's re-analyze the calprotectin data in Section 7.3.1 of BBR to mimic the frequentist PO analysis in Section 7.6.
```r
# Fecal Calprotectin: 2500 is above detection limit
# When detection limits occur at a single value, the PO
# model easily handles this in terms of estimating group
# differences (but not for estimating the mean Y)
calpro <- c(2500, 244, 2500, 726, 86, 2500, 61, 392, 2500, 114, 1226,
            2500, 168, 910, 627, 2500, 781, 57, 483, 30, 925, 1027,
            2500, 2500, 38, 18)

# Endoscopy score: 1 = No/Mild, 2 = Mod/Severe Disease
# Would have been far better to code dose as 4 ordinal levels
endo <- c(2, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2,
          2, 2, 2, 2, 2, 1, 1)
endo <- factor(endo, 1 : 2,
               c("No or Mild Activity", "Moderate or Severe Activity"))
dd <- datadist(endo, calpro);  options(datadist='dd')
bcalpro <- blrm(calpro ~ endo, file='bcalpro.rds')
# print.blrm defaults to not showing intercepts if more than 9 of them
print(bcalpro, intercepts=TRUE)
summary(bcalpro)
```

One can see that the posterior probability of a positive group difference exceeds 0.99. Now compute the posterior mean estimates of the mean and median calprotectin levels for the unknown data-generating process, stratified by group, and compare them with sample estimates.

```r
# Sample estimates
tapply(calpro, endo, mean)
tapply(calpro, endo, median)

# Now compute estimates and 0.95 HPD intervals assuming PO
# The first method is exact
newdata <- data.frame(endo=levels(endo))
bar <- Mean(bcalpro)
predict(bcalpro, newdata, fun=bar)
quant <- Quantile(bcalpro)
med <- function(lp, ...) quant(lp=lp, ...)
Predict(bcalpro, endo, fun=med)
```

The contrast function in rms now allows one to get posterior distributions of differences in nonlinearly transformed parameters, as follows.

```r
k <- contrast(bcalpro, list(endo=levels(endo)[1]),
                       list(endo=levels(endo)[2]), fun=bar)
k
plot(k, which='diff')
```

Peeking ahead to a later section, we can use the constrained partial proportional odds model to assess the proportional odds assumption.
Let's assume that departures from proportional odds (constant increments in log odds) are modeled as linear in the square root of the calprotectin level. There is only one predictor (endo), so there is only one variable that might act non-proportionally. Hence the second formula has the same right-hand side as the first formula in the blrm call.

```r
bcalp <- blrm(calpro ~ endo, ~ endo, cppo=sqrt)
# use cppo=function(y) y for linear x by y interaction
bcalp
```

The probability that the effect of moderate or severe activity decreases for higher levels of calprotectin is 0.99. Let's graph the logit of the group-stratified empirical CDFs to see some visual evidence for this.

```r
Ecdf(~ calpro, group=endo, fun=qlogis)
```

Non-parallelism is readily seen, indicating non-proportional odds. Note that allowing for a systematic departure from proportional odds is like having unequal group variances in a 2-sample $t$-test, but is more general. With the constrained partial proportional odds model one can examine, for a given y, the evidence for a group effect in the tail of the distribution of Y beyond y. For ordinal logistic models the group effect is captured by the relative odds that Y $\geq$ y. We can use the rms contrast function to do the calculations for varying y. contrast will evaluate the cppo function as needed (which is a scaled and centered $\sqrt{}$).

```r
ys <- seq(100, 2500, by=100)
k <- contrast(bcalp, list(endo='Moderate or Severe Activity'),
                     list(endo='No or Mild Activity'), ycut=ys)
xl <- 'Calprotectin Cutoff'
par(mfrow=c(1,2))
with(k, plot(ycut, Contrast, xlab=xl, ylab='log OR', type='l'))
with(k, plot(ycut, PP, xlab=xl, ylab='P(OR > 1)', type='l'))
```

## Binary Regression with Restricted Cubic Splines

Turn to the support dataset and fit a binary logistic model to predict the probability of in-hospital death of critically ill adults. blrm keeps posterior sampling efficient by orthonormalizing the design matrix before doing the sampling (this is done internally in the Stan code).
This allows for arbitrary collinearities, for example in the basis functions used in restricted cubic splines. When there are such collinearities, expect to see some disagreement in estimates between blrm and lrm, because the latter does not do orthonormalization (only normalization to mean 0 and variance 1). Collinearity implies that there are many different solutions to the equations, all giving almost the same predicted values.

```r
getHdata(support)
dd <- datadist(support);  options(datadist='dd')
f <- lrm(hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 5),
         data=support, eps=1e-4, x=TRUE)
vprint(specs(f))
f
# Compute the apparent standard error of Dxy (not accounting for overfitting)
# for comparison with the Bayesian HPD interval for Dxy
vprint(rcorr.cens(predict(f), support$hospdead))
vprint(Function(f))
bsup <- blrm(hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 5),
             data=support, file='bsup.rds')
vprint(specs(bsup))
vprint(Function(bsup))   # by default uses posterior mode parameter values
# To add an intercept use e.g. Function(bsup, intercept=coef(bsup, 'mode')[5])
bsup
```

```r
vprint(stanDx(bsup))
stanDxplot(bsup)
plot(bsup)
```


Show approximate relative explained variation (REV) and compare this with Wald statistics from the frequentist lrm model. REV is less accurate the more the multivariate posterior distribution differs from a multivariate normal distribution. On a given posterior draw, REV for a term in the model is the Wald $\chi^2$ statistic divided by the Wald statistic for the whole model.

```r
a <- anova(bsup)
a
plot(a)
anova(f)
```


Note: To get a Bayesian equivalent of a likelihood ratio test for comparing two models, use the rmsb function compareBmods.

Now compute odds ratios over default inter-quartile ranges for continuous predictors, based on posterior mode parameters. Also show 0.95 HPD intervals. Note that unlike the print method, the plot method for summary doesn't actually compute HPD intervals, but approximates them by assuming normality and using the standard deviation of the posterior samples. Compare the plot with the ordinary lrm result.

```r
s <- summary(bsup)
s
plot(s)
plot(summary(f))   # frequentist counterpart for comparison
```


Draw partial effect plots with 0.95 HPD intervals. Point estimates are posterior modes (which can be easily changed).

```r
ggplot(Predict(bsup))
```


Compute estimated mortality probabilities at all levels of dzgroup adjusting covariates to medians/modes. Need funint=FALSE to tell Predict that fun is a simple 1-1 function of the linear predictor (unlike Mean, Quantile, etc.).

```r
Predict(bsup, dzgroup, fun=plogis, funint=FALSE)
```


Draw a nomogram from posterior mode parameter values.

```r
p <- nomogram(bsup, fun=plogis, funlabel='P(death)')
plot(p)
```


For comparison here is a nomogram based on maximum likelihood estimates of parameters rather than posterior modes.

```r
plot(nomogram(f, fun=plogis, funlabel='P(death)'))
```


# Partial Proportional Odds Model

The proportional odds (PO) assumption is a parallelism assumption reflecting the belief that the effect of baseline variables on, say, $Y \geq 3$ is the same as the effect on $Y \geq 4$. To relax that assumption, @pet90par developed the partial proportional odds (PPO) model. The blrm function accepts a second model formula in the argument named ppo that specifies the subset of predictors for which PO is not to be assumed but for which the model is effectively polytomous (multinomial). Note that for frequentist modeling the R VGAM package handles the PPO model (as will be shown below). VGAM is more flexible than blrm in allowing for all sorts of model restrictions.

The presence of the second formula triggers fitting a PPO model. The default is the unconstrained PPO model, which has $(k - 2) \times q$ extra parameters for a $k$-category $Y$, where $q$ is the number of columns in the design matrix induced by the second formula. This is normally too many parameters. More typically, the constrained PPO model (@pet90par Eq. 6) is used. This model is fitted when a function is provided as the cppo argument. Generalizing Peterson and Harrell Eq. 6, the cppo function can be a continuous function of $Y$ as well as a discontinuous function such as an indicator variable that allows a category of $Y$ (typically the first or last) to have a special effect of selected covariates. When $Y$ is continuous, cppo would typically be the continuous value of $Y$ or a monotonic function of it. This induces a model that is akin to having a treatment $\times$ time interaction in a Cox proportional hazards model, or systematic heteroskedasticity in a linear model. When Y is very skewed, it may be more reasonable to use something like cppo = function(y) y^(1/3). Note that blrm internally centers (by the mean) and scales (by the SD) the function value, so the user need not be concerned with the location or scale of the PO-modification function. Note also that post-fitting functions for estimation and prediction are not implemented for the unconstrained partial PO model.
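Conceptually, the centering and scaling can be sketched as follows. This is an illustrative sketch only, not rmsb's internal code; the choice of reference distribution for y (distinct levels here) is an assumption, and rmsb's exact convention may differ.

```r
# Illustrative sketch of normalizing a cppo function so the user's choice
# of location/scale doesn't matter (assumption: normalize over the
# distinct outcome levels; rmsb's internal convention may differ)
g  <- sqrt                                     # user-supplied cppo
gy <- g(1:5)                                   # evaluated at the Y levels
h  <- function(y) (g(y) - mean(gy)) / sd(gy)   # normalized: mean 0, SD 1
round(h(1:5), 3)
```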

Note that the cppo function used to trigger the use of the constrained PPO model is never evaluated at the lowest value of Y. That is because in the data likelihood, the probability element for observations at the minimum Y is one minus the probability element at the second-lowest value of Y. So you can't give a special effect at the minimum Y. If, for example, you have a scale ordered as dead, sick, well and you want to allow for a special effect of a covariate on death, reverse the order of levels in the factor variable representing Y and specify cppo=function(y) y == 'dead'.
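Reversing the factor levels so that the absorbing category becomes the highest level is a one-liner. The blrm call below is commented out and is only a sketch of the form just described; treat is a hypothetical covariate.

```r
# Reverse the ordering so 'dead' becomes the highest level of Y,
# allowing cppo to give it a special covariate effect
y <- factor(c('dead', 'sick', 'well', 'sick'),
            levels=c('dead', 'sick', 'well'))
y <- factor(y, levels=rev(levels(y)))   # now well < sick < dead
levels(y)
# Sketch of the fit ('treat' is a hypothetical covariate):
# fit <- blrm(y ~ treat, ~ treat, cppo=function(y) y == 'dead')
```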

## Unconstrained Partial PO Model

Let's look first at the unconstrained PPO model, which is more suitable for categorical Y with not too many categories. Consider a $2\times 3$ table of proportions (2 treatment groups, 3 ordered outcome levels) where the treatment effect is not in PO. We will fit a PO model and see how well it reconstructs the 6 proportions, then fit a PPO model. As of 2020-05-11 blrm has not implemented predict-type functions for PPO models, so you will see predictions done the long way (which better shows how PPO works). Note: the VGAM vgam function parameterizes PPO effects by $y$-specific covariate effects, whereas blrm, like Peterson and Harrell, parameterizes the model by estimating increments in the covariate's log odds effect for $Y \geq y$ over and above its effect for $Y \geq 2$, if $Y$ has values 1, 2, ..., $k$. Bayesian model specification is detailed here.

Not shown here is that blrm also allows for random effects with PPO models to handle longitudinal data if a compound symmetry correlation pattern is reasonable.

```r
p0 <- c(.4, .2, .4)
p1 <- c(.3, .1, .6)
m  <- 50             # observations per cell
m0 <- p0 * m         # from proportions to frequencies
m1 <- p1 * m
x  <- c(rep(0, m), rep(1, m))
y0 <- c(rep(1, m0[1]), rep(2, m0[2]), rep(3, m0[3]))
y1 <- c(rep(1, m1[1]), rep(2, m1[2]), rep(3, m1[3]))
y  <- c(y0, y1)
table(x, y)

# A PO model cannot reproduce the original proportions
f <- lrm(y ~ x)
f

predict(f, data.frame(x=c(0, 1)), type='fitted.ind')
require(VGAM)
fv <- vgam(y ~ x, cumulative(reverse=TRUE, parallel=TRUE))
coef(fv)
predict(fv, data.frame(x=c(0, 1)), type='response')

# Now fit a PPO model that will reproduce all cell proportions
fvppo <- vgam(y ~ x, cumulative(reverse=TRUE, parallel=FALSE))
coef(fvppo)
predict(fvppo, data.frame(x=c(0, 1)), type='response')  # perfect recovery
```

```r
# Function to manually compute cell probabilities
pprop <- function(co, type, centered=FALSE) {
  x <- if(centered) c(-0.5, 0.5) else 0:1
  switch(type,
         vgam = {
           pge2 <- plogis(co[1] + x * co[3])
           peq3 <- plogis(co[2] + x * co[4])
           rbind(c(1 - pge2[1], pge2[1] - peq3[1], peq3[1]),
                 c(1 - pge2[2], pge2[2] - peq3[2], peq3[2]))
         },
         blrm = {
           pge2 <- plogis(co[1] + x * co[3])
           peq3 <- plogis(co[2] + x * (co[3] + co[4]))
           rbind(c(1 - pge2[1], pge2[1] - peq3[1], peq3[1]),
                 c(1 - pge2[2], pge2[2] - peq3[2], peq3[2]))
         } )
}

co <- coef(vgam(y ~ x, cumulative(reverse=TRUE, parallel=FALSE)))
pprop(co, type='vgam')
```

```r
# Now try blrm
# First mimic PO model by penalizing PPO term to nearly zero
# Quickly get maximum likelihood estimates (posterior modes)
b <- blrm(y ~ x, ~ x, priorsd=1000, priorsdppo=0.001, method='opt')
coef(b)
pprop(coef(b), type='blrm')

# Now really fit PPO model, at first only getting MLE
b <- blrm(y ~ x, ~ x, priorsd=1000, priorsdppo=1000, method='opt')
coef(b)   # also the posterior mode
coef(b)[3] + coef(b)[4]

# Do full posterior sampling
bppo <- blrm(y ~ x, ~ x, priorsd=1000, priorsdppo=1000, file='bppo.rds')
# take differences in last 2 coefficients to get our scheme
# Check recovery of proportions, using posterior mode/mean/median
pprop(coef(bppo, 'mode'),   type='blrm')
pprop(coef(bppo, 'mean'),   type='blrm')
pprop(coef(bppo, 'median'), type='blrm')

bppo
```


## Constrained Partial PO Model

Consider the same dataset analyzed above. Specify a constrained PPO model that in this particular case is really unconstrained because it has a total of two parameters to handle the group effect, and there are only $k=3$ levels of Y.

```r
# cppo function specifies that there is a special effect of x for y=3
bcppo <- blrm(y ~ x, ~ x, cppo=function(y) y == 3, file='bcppo.rds')
cppo  <- bcppo$cppo   # automatically normalized to have mean 0 SD 1
cppo
b <- coef(bcppo, 'mode')
rbind(Mode=b, Mean=coef(bcppo))
# Compute the 4 cumulative probabilities using the posterior mode (MLE)
# b[3] and b[4] are multiplied by x (hence drop out of the x=0 row)
L <- rbind('x=0' = c(b[1], b[2]),
           'x=1' = c(b[1] + b[3] + cppo(2) * b[4],
                     b[2] + b[3] + cppo(3) * b[4]))
plogis(L)
```

Now consider the severity of nausea data from @pet90par.

```r
d0 <- data.frame(tx=0, y=c(rep(0, 43), rep(1, 39), rep(2, 13), rep(3, 22),
                           rep(4, 15), rep(5, 29)))
d1 <- data.frame(tx=1, y=c(rep(0, 7), rep(1, 7), rep(2, 3), rep(3, 12),
                           rep(4, 15), rep(5, 14)))
d <- rbind(d0, d1)
d$tx <- factor(d$tx, 0:1, c('No cisplatin', 'cisplatin'))
dd <- datadist(d);  options(datadist='dd')
with(d, table(tx, y))
# Allow for a different effect of tx at y=5
g <- function(y) y == 5   # max(y)=5 and y is discrete
# Check against maximum likelihood estimates in Peterson & Harrell
f <- blrm(y ~ tx, ~ tx, cppo=g, data=d, method='opt')
# Compute the treatment effect log(OR) for y=1, 2, 3, 4, 5
h <- f$cppo   # normalized version of g (mean 0 SD 1)
k <- coef(f)
k
k[6] + h(1:5) * k[7]   # matches paper

# Now get posterior distributions of parameters
fp <- blrm(y ~ tx, ~ tx, cppo=g, data=d, file='bnausea.rds')
rbind(coef(f), coef(fp, 'mode'), coef(fp, 'mean'))
k <- coef(fp)          # posterior means
k[6] + h(1:5) * k[7]   # close to paper
```

## Simulated Random Effects Longitudinal Data

Let's generate some data with a repeatedly measured binary outcome per subject, where the random effects have a $N(0, 0.25^2)$ distribution. 500 subjects have 10 measurements each.

```r
n <- 500   # subjects
set.seed(2)
re <- rnorm(n) * 0.25   # worked fine also with rnorm(n) * 4
X  <- runif(n)   # baseline covariate, will be duplicated over repeats
m  <- 10         # measurements per subject

id <- rep(1 : n, each = m)
x  <- X[id]
L  <- x + re[id]   # actual logit
y  <- ifelse(runif(n * m) <= plogis(L), 1, 0)
f  <- lrm(y ~ x, x=TRUE, y=TRUE)   # ordinary fit
f
# Now use the cluster sandwich covariance estimator
g <- robcov(f, id)   # covariance matrix adjusted for clustering
g
```


We first fit an inappropriate Bayesian model in which the random effects are omitted.

```r
# Note: loo defaults to FALSE when n > 1000 as in this case
# Need loo for compareBmods
breo <- blrm(y ~ x, loo=TRUE, file='breo.rds')
breo
```


Now use a proper Bayesian random effects model. The prior distribution for the standard deviation $\sigma_{\gamma}$ of the random effects ($\gamma$s) is assumed to be exponential when psigma=2, and we will use the default mean of 1.0.

```r
bre <- blrm(y ~ x + cluster(id), psigma=2, loo=TRUE, file='bre.rds')
bre
plot(bre)
```


Before delving more into the random effects model, let's compare this new model with the previous model that erroneously omitted the random effects.

```r
compareBmods(breo, bre)
```


Roughly speaking, of the two models, the one with random effects has a probability of 0.93 of being the correct one. See rstan::loo and loo::loo.array for details.

Now let's get into more details from the random effects model fit.

```r
# Plot distribution of the 500 estimated random effects (posterior medians)
hist(bre$gammas, xlab='Estimated Random Effects', nclass=40)
```

Now generate similar data except with a bimodal random effects distribution. This will fool the random effects normal prior into having a wider variance for a single normal distribution, but will still result in estimated random effects that are somewhat realistic.

```r
n <- 500
set.seed(3)
re <- c(rnorm(n/2, mean=-1.75), rnorm(n/2, mean=1.75)) * 0.25
cat('SD of real random effects:', round(sd(re), 4), '\n')
X <- runif(n)   # baseline covariate, will be duplicated over repeats
m <- 10         # measurements per subject
id <- rep(1 : n, each = m)
x  <- X[id]
L  <- x + re[id]   # actual logit
y  <- ifelse(runif(n * m) <= plogis(L), 1, 0)
breb <- blrm(y ~ x + cluster(id), file='breb.rds')
breb
par(mfrow=c(2, 2))
hist(breb$gammas, xlab='Estimated Random Effects', nclass=40, main='')
hist(re,          xlab='Real Random Effects',      nclass=40, main='')
plot(re, breb$gammas, xlab='Real', ylab='Estimated')
abline(a=0, b=1)
```

## Absorbing State in Mixed Effects Ordinal Regression

blrm is not designed to handle this situation, but let's see how it performs. For an ordinal outcome y=0, 1, 2, 3, 4, 5, suppose that y=5 represents an absorbing state such as death. Suppose that subjects are observed for 10 days, and if death occurs within those days, all later values of y for that subject are set to 5. Generate repeated outcomes under a $N(0, 0.25^2)$ random effects model with two treatments: a and b. The b:a odds ratio is 0.65 and the cell probabilities are 0.3, 0.3, 0.1, 0.1, 0.1, 0.1 corresponding to y=0-5, when the random effect is zero.

```r
# Generate data as if there is no absorbing state
n <- 1000
set.seed(6)
pa <- c(.3, .3, .1, .1, .1, .1)       # P(Y=0-5 | tx=a, random effect=0)
pb <- pomodm(p=pa, odds.ratio=0.65)   # P(Y=0-5 | tx=b, re=0)  (Hmisc)
round(pb, 3)
re <- rnorm(n) * 0.25
tx <- c(rep('a', n/2), rep('b', n/2))   # will be duplicated over repeats
m  <- 10                                # measurements per subject
id <- rep(1 : n, each = m)
time <- rep(1 : m, n)
or <- exp(log(0.65) * (tx[id] == 'b') + re[id])
y  <- integer(n * m)
for(j in 1 : (n * m)) {
  p <- pomodm(p=pa, odds.ratio=or[j])
  y[j] <- sample(0:5, 1, p, replace=TRUE)
}
Tx <- tx[id]
table(Tx, y)
```

The first Bayesian proportional odds model fitted is the one that exactly matches the data generation model, as we have not yet imposed an absorbing state, so outcomes with y < 5 can appear after a y=5 outcome for the same subject.

```r
bst <- blrm(y ~ Tx + cluster(id), file='bst.rds')
bst

stanDx(bst)
stanDxplot(bst, 'ALL')
```

If time were added to the above model, you would see that its regression coefficient is very small ($\hat{\beta}=0.009$ in this case), in alignment with the data-generating model. Now assume that state y=5 is an absorbing state. Change observations after the first y=5 within subject to also have y=5.
```r
require(data.table)
g <- function(x) if(length(x)) min(x, na.rm=TRUE) else 99L
u <- data.table(id, time, Tx, y, key='id')
# Add variable 'first': time of first y=5 for subject (99 if never)
w <- u[, .(first=g(time[y == 5])), by=id]
d <- u[w]
# Show distribution of first time of y=5
table(d[time == 1, first])
# Set all observations after the first y=5 to also have y=5
z <- d
z[time > first, y := 5]
table(u$y); table(d$y); table(z$y)
```

```r
bcf <- blrm(y ~ Tx + cluster(id), data=z, file='bcf.rds')
bcf
hist(bcf$gammas, xlab='Estimated Random Effects', nclass=40, main='')
```

The regression coefficient for treatment is too large (the true value is log(0.65) = `r round(log(0.65), 3)`). The standard deviation of random effects is large (the true value is 0.25), reflecting increased dependence of outcomes within subject due to the duplication of y=5 records. However, the data being analyzed were not formally generated from the model that has a treatment odds ratio of 0.65. Repeated correlated ordinal outcomes were generated with that odds ratio and with a random effect standard deviation of 0.25, but then the outcomes were overridden in the following fashion: after the first y=5 within a subject, all later records are set to y=5. The histogram of estimated subject random effects (posterior medians) shows some bimodality with a heavy right tail due to the y=5 absorbing state. Let's also plot the random effects against the time of death (99 if the subject did not die, recoded here to 15).

```r
t5 <- subset(z, time == 1)$first
t5 <- ifelse(t5 == 99, 15, t5)
plot(t5, bcf$gammas, xlab='Time of y=5', ylab='Random Effect')
```

What happens when time is added to this model?

```r
bcft <- blrm(y ~ Tx + time + cluster(id), data=z, file='bcft.rds')
bcft
```

We see that the slope of time is very large, but the treatment effect and the random effect standard deviation are still very large. Look at the random effects again.

```r
hist(bcft$gammas, xlab='Estimated Random Effects', nclass=40, main='')
```

# References

rmsb documentation built on April 12, 2022, 5:06 p.m.