Specifying a primary weight of the informative prior component

knitr::opts_chunk$set(
  echo = T, collapse = T, warning = F, message = F, 
  prompt = T, comment = "#", out.width = "100%"
)

Introduction

In a clinical trial project that uses Bayesian dynamic borrowing via meta-analytic predictive (MAP) priors, the weight of the informative component of the MAP prior should ideally be pre-specified (@Ionan2023). Expert elicitation formally incorporates expert judgement into statistical inference and decision making, and it can be used to determine this weight.

For reviews on expert elicitation see e.g. @Brownstein2019 and @OHagan2019, and for experiences with expert elicitation in drug development see e.g. @Dallow2018.

The Sheffield elicitation framework (SHELF) is an established approach to conducting expert elicitation (@OHagan2019, @Gosling2018, @Best2020), and the SHELF package facilitates its implementation in R (@SHELFv4).

This vignette briefly describes how expert elicitation can be used in a Bayesian borrowing analysis with robust MAP priors. It does not describe the expert elicitation process itself; rather, it shows how elicited data can be processed. The functionality is based on and closely resembles functions in the SHELF package, but is much more limited, in that it only considers the determination of a single weight parameter (a variable on the [0,1] scale).

The data in this example are hypothetical data.

Loading the tipmap package and setting a seed:

library(tipmap)
set.seed(123)

Expert weightings collected using the roulette method

Here, the expert data are assumed to be collected via the 'roulette method' (@Gosling2018, @Dallow2018). Each expert is asked to place 10 chips into a grid, creating histogram-like data that reflect their preferred weighting. No particular shape or symmetry is required.

Data from a single expert:

chips_1exp <- c(1, 3, 4, 2, 0, 0, 0, 0, 0, 0)
sum(chips_1exp)
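
The 10 chip positions can be read as bins of width 0.1 on (0, 1]; the implied probability per bin is then the chip count divided by the total number of chips. The bin boundaries below are an assumption made for illustration, not something returned by tipmap.

# Assumed bin boundaries for illustration: (0, 0.1], (0.1, 0.2], ..., (0.9, 1]
bin_upper <- seq(0.1, 1, by = 0.1)
# Implied per-bin probabilities for the single expert
data.frame(bin_upper = bin_upper, prob = chips_1exp / sum(chips_1exp))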

Fitting beta distributions to expert data

The roulette data are assumed to follow a beta distribution. The calculations and the fitting of a beta distribution below are similar to the implementation in SHELF::fitdist and yield identical results.

Data from a single expert:

# Compute cumulative probabilities
(x <- get_cum_probs_1exp(chips_1exp))
# Compute model inputs
(y <- get_model_input_1exp(x))
# Fit beta distribution
(fit_1exp <- fit_beta_1exp(df = y)$par)
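
As a quick sanity check, the cumulative distribution function of the fitted beta can be compared with the elicited cumulative probabilities (again assuming bins of width 0.1 on (0, 1]):

# Elicited cumulative probabilities at the assumed bin upper bounds
cum_elicited <- cumsum(chips_1exp) / sum(chips_1exp)
# Cumulative probabilities of the fitted beta at the same points
cum_fitted <- pbeta(seq(0.1, 1, by = 0.1), shape1 = fit_1exp[1], shape2 = fit_1exp[2])
round(cbind(cum_elicited, cum_fitted), 3)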

For multiple experts, the individual steps are handled by the fit_beta_mult_exp() function:

chips_mult <- rbind(
  c(1, 3, 4, 2, 0, 0, 0, 0, 0, 0),
  c(0, 2, 3, 2, 2, 1, 0, 0, 0, 0),
  c(0, 1, 3, 2, 2, 1, 1, 0, 0, 0),
  c(1, 3, 3, 2, 1, 0, 0, 0, 0, 0),
  c(0, 1, 4, 3, 2, 0, 0, 0, 0, 0)
)
beta_fits <- fit_beta_mult_exp(
  chips_mult = chips_mult
)
beta_fits
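
Since the first row of chips_mult equals chips_1exp, the first row of the fitted parameters should reproduce the single-expert fit above. (The first two columns of beta_fits are taken to contain the fitted alpha and beta parameters, as also assumed in the plotting code further below.)

# The first expert's parameters from fit_beta_mult_exp() should match fit_1exp
round(unlist(beta_fits[1, 1:2]), 4)
round(fit_1exp, 4)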

Summary statistics

Summary statistics for a single expert:

(alpha <- fit_1exp[1]); (beta <- fit_1exp[2])

# Mean
(beta_mean <- alpha/(alpha+beta))

# Standard deviation
beta_sd <- sqrt( (alpha*beta)/( (alpha+beta)^2 *(alpha+beta+1) ) )
beta_sd

# Mean absolute deviation around the mean
beta_mad_mean <- (2*(alpha^alpha)*(beta^beta))/( beta(alpha, beta) * (alpha+beta)^(alpha+beta+1) )
beta_mad_mean

# Mode
if (alpha > 1 & beta > 1) beta_mode <- (alpha - 1)/(alpha + beta - 2)
if (alpha == 1 & beta == 1) beta_mode <- 0.5  # uniform case: mode taken as 0.5 by convention
beta_mode

# Quantiles
qbeta(p = c(0.001, 0.025, 0.05, 0.1, 0.5, 0.9, 0.95, 0.975, 0.99),
      shape1 = alpha, shape2 = beta)

# Samples
x <- rbeta(n = 10^6, shape1 = alpha, shape2 = beta)
mean(x)
sd(x)
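
The calculations above can be collected in a small helper function. This is a plain base-R sketch; summarize_beta is a hypothetical name, not a tipmap function.

# Collect mean, standard deviation and selected quantiles of a Beta(alpha, beta) distribution
summarize_beta <- function(alpha, beta, probs = c(0.025, 0.5, 0.975)) {
  alpha <- as.numeric(alpha)
  beta <- as.numeric(beta)
  c(mean = alpha / (alpha + beta),
    sd = sqrt((alpha * beta) / ((alpha + beta)^2 * (alpha + beta + 1))),
    setNames(qbeta(probs, shape1 = alpha, shape2 = beta), paste0("q", probs)))
}
round(summarize_beta(fit_1exp[1], fit_1exp[2]), 3)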

Summary statistics for data from multiple experts:

expert_samples <- draw_beta_mixture_nsamples(
  n = 10^3, 
  chips_mult = chips_mult
) 
summary(expert_samples)
(mean_w <- round(mean(expert_samples), 2))
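
Conceptually, sampling from an equal-weight linear pool amounts to first picking one expert at random and then drawing from that expert's fitted beta distribution. The sketch below illustrates this pooling idea; it is not necessarily how draw_beta_mixture_nsamples is implemented internally.

# Manual sketch of sampling from an equal-weight linear pool:
# pick an expert uniformly at random, then draw from that expert's fitted beta
fits <- as.matrix(beta_fits[, 1:2])
idx <- sample(nrow(fits), size = 10^3, replace = TRUE)
manual_samples <- rbeta(10^3, shape1 = fits[idx, 1], shape2 = fits[idx, 2])
summary(manual_samples)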

The mean or median of the pooled distribution may be used as the primary weight of the informative component of the robust MAP prior when pre-specifying the Bayesian analysis.
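
To illustrate where this weight enters, a robust MAP prior is a two-component mixture in which the elicited weight multiplies the informative component. The normal components below are purely hypothetical placeholders, not output of tipmap or any MAP analysis.

# Hypothetical robust MAP prior density on a treatment effect scale:
# elicited weight w on a made-up informative N(0.3, 0.5^2) component,
# and weight 1 - w on a vague N(0, 2^2) component
w <- mean_w
robust_map_density <- function(theta) {
  w * dnorm(theta, mean = 0.3, sd = 0.5) + (1 - w) * dnorm(theta, mean = 0, sd = 2)
}
curve(robust_map_density, from = -3, to = 3, xlab = "Treatment effect", ylab = "Density")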

Figures

# Load libraries
packages <- c("magrittr", "ggplot2", "tibble", "dplyr")
invisible(lapply(packages, library, character.only = T))

Without linear pooling

# Create matrix
fits_mat <- as.matrix(beta_fits[,c(1,2)])
# Wide format
fit_beta_mult_plot_wide <- tibble::tibble(
 x = seq(0.001, 0.999, length = 200),
 Expert1 = dbeta(x, fits_mat[1,1], fits_mat[1,2]),
 Expert2 = dbeta(x, fits_mat[2,1], fits_mat[2,2]),
 Expert3 = dbeta(x, fits_mat[3,1], fits_mat[3,2]),
 Expert4 = dbeta(x, fits_mat[4,1], fits_mat[4,2]),
 Expert5 = dbeta(x, fits_mat[5,1], fits_mat[5,2])
)
# Long format
fit_beta_mult_plot_long <- fit_beta_mult_plot_wide %>%
  tidyr::pivot_longer(
    !x,
    names_to = "Expert",
    values_to = "dens")
# Plot without linear pool
fig_betas_1 <- ggplot(
  data = fit_beta_mult_plot_long,
  aes(x = x, y = dens, group = Expert)
  ) +
  geom_line(aes(color = Expert)) +
 ggtitle("Fitted beta distributions") +
 xlab("Weight") + ylab("Density") +
 scale_x_continuous(breaks = c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10) / 10) +
 theme_bw()
print(fig_betas_1)

With linear pooling

# Wide format
fit_beta_mult_plot_wide2 <- fit_beta_mult_plot_wide %>%
  mutate(linpool = (Expert1 + Expert2 + Expert3 + Expert4 + Expert5)/5)
# Long format
fit_beta_mult_plot_long2 <- fit_beta_mult_plot_wide2 %>%
  tidyr::pivot_longer(
    !x,
    names_to = "Expert",
    values_to = "dens")
# Plot
fig_betas_2 <- ggplot(
  data = fit_beta_mult_plot_long2,
  aes(x = x, y = dens, group = Expert)) +
  geom_line(aes(color = Expert ) ) +
  ggtitle("Fitted beta distributions and linear pool") +
  xlab("Weight") + ylab("Density") +
  scale_x_continuous(breaks = c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10) / 10) +
  theme_bw() +
  geom_line(data = fit_beta_mult_plot_wide2,
            aes(x = x, y = linpool, group = 1),
            linewidth = 1)
print(fig_betas_2)

References


