hmr: Bayesian meta-analysis for cross-design synthesis.

Description Usage Arguments Details Value References Examples

View source: R/hmr.R

Description

This function performs a Bayesian cross-design synthesis. It fits a hierarchical meta-regression model based on a bivariate random-effects model. The numbers of events in the control and treated groups are modeled with two conditional Binomial distributions, and the random effects follow a bivariate scale mixture of Normals. The individual participant data are modeled with a Bayesian logistic regression for the participants in the control group, and the regression coefficients are modeled as exchangeable.
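
The following sketch summarizes the model structure in the notation of the Arguments below; it is consistent with the simulation in the Examples, but it is a summary rather than the exact JAGS code used internally.

  yc[i] ~ Binomial(nc[i], pc[i]),  logit(pc[i]) = theta.1[i]
  yt[i] ~ Binomial(nt[i], pt[i]),  logit(pt[i]) = theta.1[i] + theta.2[i]
  (theta.1[i], theta.2[i]) ~ bivariate Normal(mu.1, mu.2, sigma.1, sigma.2, rho)
                             (or a bivariate scale mixture of Normals when re = "sm")
  IPD control group:  logit(Pr(y[j] = 1)) = mu.1 + mu.phi + x[j]' beta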

Usage

hmr(data, two.by.two = TRUE, dataIPD, re = "normal", link = "logit",
  mean.mu.1 = 0, mean.mu.2 = 0, mean.mu.phi = 0, sd.mu.1 = 1,
  sd.mu.2 = 1, sd.mu.phi = 1, sigma.1.upper = 5, sigma.2.upper = 5,
  sigma.beta.upper = 5, mean.Fisher.rho = 0,
  sd.Fisher.rho = 1/sqrt(2), df = 4, df.estimate = FALSE,
  df.lower = 3, df.upper = 20, split.w = FALSE, nr.chains = 2,
  nr.iterations = 10000, nr.adapt = 1000, nr.burnin = 1000,
  nr.thin = 1, be.quiet = FALSE, r2jags = TRUE)

Arguments

data

Aggregated data results: a data frame whose first four columns contain the number of events in the control group (yc), the number of patients in the control group (nc), the number of events in the treatment group (yt), and the number of patients in the treatment group (nt). If two.by.two = TRUE, a data frame where each row contains the trial results with the column names yc, nc, yt, nt.
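
A minimal sketch of the expected layout with hypothetical numbers (column order as described above):

AD <- data.frame(yc = c(12, 10), nc = c(55, 48),
                 yt = c( 7,  6), nt = c(53, 50))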

two.by.two

If TRUE, indicates that the trial results are given with the column names yc, nc, yt, nt.

dataIPD

Individual participant data: a data frame where the first column is the outcome variable and the remaining columns are the individual participant characteristics.
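
A minimal sketch of the expected layout with hypothetical variables (the first column must be the binary outcome):

IPD <- data.frame(outcome = rbinom(100, 1, 0.5),
                  age     = rnorm(100, 60, 10),
                  male    = rbinom(100, 1, 0.5))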

re

Random-effects distribution for the resulting model. Possible values are normal for a bivariate Normal distribution and sm for a bivariate scale mixture of Normals.

link

The link function used in the model. Possible values are logit, cloglog, and probit.

mean.mu.1

Prior mean of baseline risk, default value is 0.

mean.mu.2

Prior mean of treatment effect, default value is 0.

mean.mu.phi

Prior mean of the bias parameter, which measures the difference between the baseline mean mu.1 and the intercept of the logistic regression for the individual participant data. The default value is 0.

sd.mu.1

Prior standard deviation of mu.1; the default value is 1. The default prior for mu.1 is a logistic distribution with mean 0 and dispersion 1, so the implicit prior for mu.1 on the probability scale is uniform between 0 and 1.

sd.mu.2

Prior standard deviation of mu.2; the default value is 1. The default prior for mu.2 is a logistic distribution with mean 0 and dispersion 1, so the implicit prior for mu.2 on the probability scale is uniform between 0 and 1.
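
A quick base-R check of this implicit prior: drawing from a standard logistic distribution and transforming to the probability scale gives an approximately uniform sample.

set.seed(2019)
mu.prior <- rlogis(10000, location = 0, scale = 1)
hist(plogis(mu.prior), breaks = 30,
     main = "Implicit prior on the probability scale")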

sd.mu.phi

Prior standard deviation of mu.phi, default value is 1.

sigma.1.upper

Upper bound of the uniform prior of sigma.1, default value is 5.

sigma.2.upper

Upper bound of the uniform prior of sigma.2, default value is 5.

sigma.beta.upper

Upper bound of the uniform prior of sigma.beta, default value is 5.

mean.Fisher.rho

Prior mean of rho on the Fisher scale; the default value is 0.

sd.Fisher.rho

Prior standard deviation of rho on the Fisher scale; the default value is 1/sqrt(2).
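
The correlation rho is handled on the Fisher z scale, z = atanh(rho), and recovered as rho = tanh(z). A sketch of the prior this induces on rho, assuming a Normal prior on z with the two hyper-parameters above:

set.seed(2019)
z.prior   <- rnorm(10000, mean = 0, sd = 1/sqrt(2))
rho.prior <- tanh(z.prior)
hist(rho.prior, breaks = 40, main = "Induced prior on rho")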

df

If df.estimate = FALSE, then df gives the degrees of freedom of the scale mixture distribution; the default value is 4.
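
For intuition: a scale mixture of Normals with Gamma(df/2, df/2) mixing weights on the precision has Student-t marginals with df degrees of freedom. This is the standard construction assumed in the sketch below (see Verde, 2019, for the exact formulation used by hmr).

set.seed(2019)
w <- rgamma(10000, shape = 4/2, rate = 4/2)   # mixing weights, df = 4
x <- rnorm(10000, mean = 0, sd = 1/sqrt(w))   # scale mixture of Normals
qqplot(qt(ppoints(10000), df = 4), x,
       xlab = "Student-t(4) quantiles", ylab = "scale-mixture sample")
abline(0, 1)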

df.estimate

Estimate the posterior of df. The default value is FALSE.

df.lower

Lower bound of the prior of df. The default value is 3.

df.upper

Upper bound of the prior of df. The default value is 20.

split.w

Split the w parameter into two independent weights, one for each random effect. The default value is FALSE.

nr.chains

Number of chains for the MCMC computations; the default is 2.

nr.iterations

Number of iterations after adapting the MCMC, default is 10000. Some models may need more iterations.

nr.adapt

Number of iterations in the adaptation process, default is 1000. Some models may need more iterations during adaptation.

nr.burnin

Number of iterations discarded as the burn-in period, default is 1000. Some models may need a longer burn-in period.

nr.thin

Thinning rate; it must be a positive integer. The default value is 1.

be.quiet

Do not print a warning message if the model does not adapt; the default value is FALSE. If you are not sure about the adaptation period, choose be.quiet = FALSE.

r2jags

Which interface is used to link R to JAGS (rjags or R2jags); the default value is TRUE, which uses the R2jags interface.

Details

The function fits the implicit hierarchical meta-regression, in which the treatment effect is regressed on the baseline risk (the rate of events in the control group). The scale-mixture weights are used to adjust for internal validity and to identify structural outliers. The fitted model is then used to predict the treatment effect for subgroups of the individual participant data.
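
For the bivariate Normal case this implicit regression follows from the conditional mean of a bivariate Normal distribution,

  E(theta.2 | theta.1 = x) = mu.2 + rho * (sigma.2 / sigma.1) * (x - mu.1),

so the slope of the treatment effect on the baseline risk is rho * sigma.2 / sigma.1.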

Computations are done by calling JAGS (Just Another Gibbs Sampler) to perform MCMC (Markov Chain Monte Carlo) sampling and returning an object of the class mcmc.list.

Installation of JAGS: It is important to note that R 3.3.0 introduced a major change in the toolchain used for Windows. This new toolchain is incompatible with older packages written in C++. As a consequence, if the installed version of JAGS does not match the R installation, the rjags package will spontaneously crash. Therefore, users working with R version >= 3.3.0 must install JAGS with the installation program JAGS-4.2.0-Rtools33.exe. Users who continue with R 3.2.4 or an earlier version should use the default installer JAGS-4.2.0.exe.

Value

This function returns an object of the class "hmr". This object contains the MCMC output of each parameter and hyper-parameter in the model, the data frame used for fitting the model, the link function, the type of random-effects distribution, and the splitting information for the conflict-of-evidence analysis. The results stored in an object of the class "hmr" can be extracted with R2jags or with rjags. In addition, summary, print and plot functions are implemented for this type of object.
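
A sketch of working with a fitted object (here fit is a hypothetical object returned by hmr(); the extraction steps mirror the Examples below):

# summary(fit)                                 # posterior summaries
# plot(fit)                                    # plot method for "hmr" objects
# library(R2jags);   attach.jags(fit)          # access parameters, e.g. mu.phi
# library(mcmcplots); fit.mcmc <- as.mcmc.rjags(fit)  # rjags-style mcmc object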

References

Verde, P.E., Ohmann, C., Icks, A. and Morbach, S. (2016) Bayesian evidence synthesis and combining randomized and nonrandomized results: a case study in diabetes. Statistics in Medicine, 35(10), 1654-1675.

Verde, P.E. (2019) The hierarchical meta-regression approach and learning from clinical evidence. Biometrical Journal, 1-23.

Examples

## Not run: 

# Example: from Verde 2019, Section 5

library(jarbes)

data("healing")
AD <- healing[, c("y_c", "n_c", "y_t", "n_t")]

data("healingipd")

IPD <- healingipd[, c("healing.without.amp", "PAD", "neuropathy",
"first.ever.lesion", "no.continuous.care", "male", "diab.typ2",
"insulin", "HOCHD", "HOS", "CRF", "dialysis", "DNOAP", "smoking.ever",
"diabdur", "wagner.class")]

mx2 <- hmr(AD, two.by.two = FALSE,
           dataIPD = IPD,
           re = "sm",
           link = "logit",
           sd.mu.1 = 2,
           sd.mu.2 = 2,
           sd.mu.phi = 2,
           sigma.1.upper = 5,
           sigma.2.upper = 5,
           sigma.beta.upper = 5,
           sd.Fisher.rho = 1.25,
           df.estimate = FALSE,
           df.lower = 3,
           df.upper = 10,
           nr.chains = 1,
           nr.iterations = 1500,
           nr.adapt = 100,
           nr.thin = 1)

print(mx2)

# This experiment corresponds to Section 4 in Verde (2018).
#
# Experiment: Combining aggregated data from RCTs and a single
# observational study with individual participant data.
#
# In this experiment we assess conflict of evidence between the RCTs
# and the observational study with a partially identified parameter
# mu.phi.
#
# We generate two simulated data sets: 1) mu.phi = 0.5, which is
# difficult to identify; 2) mu.phi = 2, which can be identified. The
# simulations are used to check whether hmr() can recover mu.phi.



library(MASS)
library(ggplot2)
library(jarbes)
library(gridExtra)
library(mcmcplots)
library(R2jags)


# Simulation of the IPD data

invlogit <- function (x)
{
 1/(1 + exp(-x))
}

# Data set for mu.phi = 0.5 .........................................

# Parameters Aggregated Data:

# mean control
pc <- 0.7
# mean treatment
pt <- 0.4
# reduction of "amputations" odds ratio
OR <- (pt/(1-pt)) /(pc/(1-pc))
# mu_2: treatment effect ...
log(OR)
mu.2.true <- log(OR)
# mu_1
mu.1.true <- log(pc/(1-pc)) # Baseline risk
mu.1.true
#sigma_1 # Between studies variability
sigma.1.true <- 1
#sigma_2
sigma.2.true <- 0.5
# rho: correlation between treatment effect and baseline risk
rho.true <- -0.5

Sigma <- matrix(c(sigma.1.true^2, sigma.1.true*sigma.2.true*rho.true,
                  sigma.1.true*sigma.2.true*rho.true, sigma.2.true^2),
                  byrow = TRUE, ncol = 2)

# Parameters values IPD

# Parameters values
mu.phi.true <- 0.5
beta0 <- mu.1.true + mu.phi.true
beta1 <- 2.5
beta2 <- 2

# Regression variables

x1 <- rnorm(200)
x2 <- rbinom(200, 1, 0.5)

# Binary outcome as a function of "b0 + b1 * x1 + b2 * x2"

y <- rbinom(200, 1,
         invlogit(beta0 + beta1 * x1 + beta2 * x2))


# Preparing the plot to visualize the data
jitter.binary <- function(a, jitt = 0.05)

 ifelse(a==0, runif(length(a), 0, jitt),
        runif(length(a), 1-jitt, 1))



plot(x1, jitter.binary(y), xlab = "x1",
    ylab = "Success probability")

curve(invlogit(beta0 + beta1*x),
     from = -2.5, to = 2.5, add = TRUE, col ="blue", lwd = 2)
curve(invlogit(beta0 + beta1*x + beta2),
     from = -2.5, to = 2.5, add = TRUE, col ="red", lwd =2)
legend("bottomright", c("b2 = 0", "b2 = 2"),
      col = c("blue", "red"), lwd = 2, lty = 1)

noise <- rnorm(200*20)
dim(noise) <- c(200, 20)
n.names <- paste(rep("x", 20), seq(3, 22), sep="")
colnames(noise) <- n.names

data.IPD <- data.frame(y, x1, x2, noise)

# Aggregated Data

# Experiment 1: External validity bias
theta <- mvrnorm(35, mu = c(mu.1.true, mu.2.true),
                  Sigma = Sigma )

 plot(theta, xlim = c(-2,3))
 abline(v=mu.1.true, lty = 2)
 abline(h=mu.2.true, lty = 2)
 abline(a = mu.2.true, b=sigma.2.true/sigma.1.true * rho.true, col = "red")
 abline(lm(theta[,2]~theta[,1]), col = "blue")
 # Target group
 mu.T <- mu.1.true + 2 * sigma.1.true
 abline(v=mu.T, lwd = 3, col = "blue")
 eta.true <- mu.2.true + sigma.2.true/sigma.1.true*rho.true* mu.T
 eta.true
 exp(eta.true)
 abline(h = eta.true, lty =3, col = "blue")
 # Simulation of each primary study:
 n.c <- round(runif(35, min = 30, max = 60),0)
 n.t <- round(runif(35, min = 30, max = 60),0)
 y.c <- y.t <- rep(0, 35)
 p.c <- exp(theta[,1])/(1+exp(theta[,1]))
 p.t <- exp(theta[,2]+theta[,1])/(1+exp(theta[,2]+theta[,1]))
 for(i in 1:35)
 {
   y.c[i] <- rbinom(1, n.c[i], prob = p.c[i])
   y.t[i] <- rbinom(1, n.t[i], prob = p.t[i])
 }

 AD.s1 <- data.frame(yc=y.c, nc=n.c, yt=y.t, nt=n.t)


 incr.e <- 0.05
 incr.c <- 0.05
 n11 <- AD.s1$yt
 n12 <- AD.s1$yc
 n21 <- AD.s1$nt - AD.s1$yt
 n22 <- AD.s1$nc - AD.s1$yc
 AD.s1$TE <- log(((n11 + incr.e) * (n22 + incr.c))/((n12 + incr.e) * (n21 + incr.c)))
 AD.s1$seTE <- sqrt((1/(n11 + incr.e) + 1/(n12 + incr.e) +
                       1/(n21 + incr.c) + 1/(n22 + incr.c)))

 Pc <- ((n12 + incr.c)/(AD.s1$nc + 2*incr.c))

 AD.s1$logitPc <- log(Pc/(1-Pc))

 AD.s1$Ntotal <- AD.s1$nc + AD.s1$nt
  rm(list=c("Pc", "n11","n12","n21","n22","incr.c", "incr.e"))

# Application of HMR .......................................

res.s2 <- hmr(AD.s1, two.by.two = FALSE,
             dataIPD = data.IPD,
             sd.mu.1 = 2,
             sd.mu.2 = 2,
             sd.mu.phi = 2,
             sigma.1.upper = 5,
             sigma.2.upper = 5,
             sd.Fisher.rho = 1.5)


print(res.s2)

# Data set for mu.phi = 2 ..................................
# Parameters values

mu.phi.true <- 2
beta0 <- mu.1.true + mu.phi.true
beta1 <- 2.5
beta2 <- 2

# Regression variables
x1 <- rnorm(200)
x2 <- rbinom(200, 1, 0.5)
# Binary outcome as a function of "b0 + b1 * x1 + b2 * x2"
y <- rbinom(200, 1,
           invlogit(beta0 + beta1 * x1 + beta2 * x2))

# Preparing the plot to visualize the data
jitter.binary <- function(a, jitt = 0.05)

 ifelse(a==0, runif(length(a), 0, jitt),
        runif(length(a), 1-jitt, 1))

plot(x1, jitter.binary(y), xlab = "x1",
    ylab = "Success probability")

curve(invlogit(beta0 + beta1*x),
     from = -2.5, to = 2.5, add = TRUE, col ="blue", lwd = 2)
curve(invlogit(beta0 + beta1*x + beta2),
     from = -2.5, to = 2.5, add = TRUE, col ="red", lwd =2)
legend("bottomright", c("b2 = 0", "b2 = 2"),
      col = c("blue", "red"), lwd = 2, lty = 1)

noise <- rnorm(200*20)
dim(noise) <- c(200, 20)
n.names <- paste(rep("x", 20), seq(3, 22), sep="")
colnames(noise) <- n.names

data.IPD <- data.frame(y, x1, x2, noise)

# Application of HMR ................................................

res.s3 <- hmr(AD.s1, two.by.two = FALSE,
             dataIPD = data.IPD,
             sd.mu.1 = 2,
             sd.mu.2 = 2,
             sd.mu.phi = 2,
             sigma.1.upper = 5,
             sigma.2.upper = 5,
             sd.Fisher.rho = 1.5
)

print(res.s3)

# Posteriors for mu.phi ............................
attach.jags(res.s2)
mu.phi.0.5 <- mu.phi
df.phi.05 <- data.frame(x = mu.phi.0.5)

attach.jags(res.s3)
mu.phi.1 <- mu.phi
df.phi.1 <- data.frame(x = mu.phi.1)


p1 <- ggplot(df.phi.05, aes(x=x))+
 xlab(expression(mu[phi])) +
 ylab("Posterior distribution")+
 xlim(c(-7,7))+
 geom_histogram(aes(y=..density..),fill = "royalblue",
              colour = "black", alpha= 0.4, bins=60) +
 geom_vline(xintercept = 0.64, colour = "black", size = 1.7, lty = 2)+
 geom_vline(xintercept = 0.5, colour = "black", size = 1.7, lty = 1)+
 stat_function(fun = dlogis,
               n = 101,
               args = list(location = 0, scale = 1), size = 1.5) + theme_bw()

p2 <- ggplot(df.phi.1, aes(x=x))+
 xlab(expression(mu[phi])) +
 ylab("Posterior distribution")+
 xlim(c(-7,7))+
 geom_histogram(aes(y=..density..),fill = "royalblue",
                colour = "black", alpha= 0.4, bins=60) +
 geom_vline(xintercept = 2.2, colour = "black", size = 1.7, lty = 2)+
 geom_vline(xintercept = 2, colour = "black", size = 1.7, lty = 1)+
 stat_function(fun = dlogis,
               n = 101,
               args = list(location = 0, scale = 1), size = 1.5) + theme_bw()

grid.arrange(p1, p2, ncol = 2, nrow = 1)


# Caterpillar plots for regression coefficients ...........................
library(rjags)
var.names <- names(data.IPD[-1])
v <- paste("beta", names(data.IPD[-1]), sep = ".")
mcmc.x.2 <- as.mcmc.rjags(res.s2)
mcmc.x.3 <- as.mcmc.rjags(res.s3)

greek.names <- paste(paste("beta[",1:22, sep=""),"]", sep="")
par.names <- paste(paste("beta.IPD[",1:22, sep=""),"]", sep="")

caterplot(mcmc.x.2,
         parms = par.names,
         col = "black", lty = 1,
         labels = greek.names,
         greek = TRUE,
         labels.loc="axis", cex =0.7,
         style = "plain",reorder = FALSE,
         denstrip = FALSE)

caterplot(mcmc.x.3,
         parms = par.names,
         col = "grey", lty = 2,
         labels = greek.names,
         greek = TRUE,
         labels.loc="axis", cex =0.7,
         style = "plain", reorder = FALSE,
         denstrip = FALSE,
         add = TRUE,
         collapse=TRUE, cat.shift=-0.5)

abline(v=0, lty = 2, lwd = 2)
abline(v =2, lty = 2, lwd = 2)
abline(v =2.5, lty = 2, lwd = 2)

# End of the examples.


## End(Not run)
