Reproducing Parts of Neuenschwander et al. (2016)

knitr::opts_chunk$set(
  eval     = TRUE,  # en- / disables R code evaluation globally
  cache    = FALSE,  # en- / disables R code caching globally
  collapse = TRUE,
  comment  = "#>"
)

This vignette demonstrates how to use the R package bhmbasket to reproduce parts of "Robust exchangeability designs for early phase clinical trials with multiple strata" by @exnex2016.

Two examples are shown in this vignette: the analysis of a basket trial's outcome with the ExNex approach, and the assessment of the operating characteristics of a planned basket trial under several scenarios.

library(bhmbasket)

rng_seed <- 123
set.seed(rng_seed)

Analysis of a Basket Trial's Outcome

Section 2 of @exnex2016 discusses the application of the ExNex approach to the results of the phase II sarcoma basket trial by @chugh2009. The prior specifications and the estimated response rates per cohort are shown in the Appendix of @exnex2016.

Creating the trial

A single trial with a given outcome can be created with the function createTrial(), which takes as arguments the number of subjects and the number of responders per cohort. The outcome of the basket trial by @chugh2009 can be set up as follows:

chugh_trial <- createTrial(
  n_subjects   = c(15, 13, 12, 28, 29, 29, 26, 5, 2, 20),
  n_responders = c(2, 0, 1, 6, 7, 3, 5, 1, 0, 3))

Running the model

The analysis of the trial is handled by the function performAnalyses(), to which the chugh_trial, the method name "exnex", and the prior parameters as shown in Appendix 5.3 of @exnex2016 are provided:

chugh_prior_parameters <- setPriorParametersExNex(
  mu_mean   = c(-1.735, 0.847),
  mu_sd     = c(0.146, 0.266)^-0.5,
  tau_scale = 1,
  mu_j      = rep(-1.734, 10),
  tau_j     = rep(0.128^-0.5, 10),
  w_j       = c(0.5, 0, 0.5))

if (file.exists("chugh_analysis.rds")) {

  chugh_analysis <- readRDS("chugh_analysis.rds")

} else {

  chugh_analysis <- performAnalyses(
    scenario_list         = chugh_trial,
    method_names          = "exnex",
    prior_parameters_list = chugh_prior_parameters,
    seed                  = rng_seed,
    n_mcmc_iterations     = 5e4,
    n_cores               = 2L,
    verbose               = FALSE)

  saveRDS(chugh_analysis, "chugh_analysis.rds")

}

Getting the estimates

One can use the function getEstimates() to get point estimates and credible intervals of the response rates:

getEstimates(analyses_list = chugh_analysis)

By comparing these values to the respective numbers provided by @exnex2016 in Appendix 5.3, one can see that the results agree to the second decimal place.
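
To ease this comparison, the estimates can also be rounded with scaleRoundList(), which is used again later in this vignette. This is just a convenience sketch: it assumes that scale_param acts as a simple multiplicative factor, so that scale_param = 1 keeps the estimates on their original scale.

## round the response rate estimates to two decimals for comparison
## with Appendix 5.3 of the publication (illustrative sketch)
scaleRoundList(
  list         = getEstimates(analyses_list = chugh_analysis),
  scale_param  = 1,
  round_digits = 2)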

Assessing the Operating Characteristics of a Basket Trial Design {#trialOC}

Section 3.1 of @exnex2016 presents the design and the operating characteristics of a phase II basket trial in four indications. In their publication, the authors present the scenarios with the respective true response rates per cohort in Table V along with the respective cohort-wise go probabilities, biases, and mean squared errors. The number of subjects is 20 each for the first two cohorts and 10 each for the last two cohorts.

Simulating the scenarios

The scenarios are simulated with the function simulateScenarios(). There are seven different scenarios presented in Table V. In this example, Scenarios 1, 3, and 4 are reproduced. For each scenario, 10,000 basket trial realizations are simulated:

scenarios_list <- simulateScenarios(
  n_subjects_list     = list(c(20, 20, 10, 10),
                             c(20, 20, 10, 10),
                             c(20, 20, 10, 10)),
  response_rates_list = list(c(0.1, 0.1, 0.1, 0.1),
                             c(0.1, 0.1, 0.3, 0.3),
                             c(0.1, 0.1, 0.1, 0.5)),
  scenario_numbers    = c(1, 3, 4),
  n_trials            = 1e4)

Running the model {#performAnalysis}

The analysis of each simulated basket trial is performed with the function performAnalyses(). @exnex2016 use the posterior means of the cohorts' response rates to calculate the biases and mean squared errors, as well as to derive cohort-wise decisions. Setting the argument post_mean to TRUE will save the posterior means along with the posterior probability thresholds, which are required for the decision making laid out below. The prior parameters for the ExNex model are taken from Section 3.1 and Appendix 5.1.2 of @exnex2016.

prior_parameters <- setPriorParametersExNex(
  mu_mean   = c(logit(0.1), logit(0.3)),
  mu_sd     = c(3.18, 1.94),
  tau_scale = 1,
  mu_j      = rep(logit(0.2), 4),
  tau_j     = rep(2.5, 4),
  w_j       = c(0.25, 0.25, 0.5))
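
The logit() helper used above is called unqualified after library(bhmbasket), so it is presumably provided by the package; it is assumed here to be the usual log-odds transformation, which can be checked quickly:

## sanity check (assumption): logit(p) = log(p / (1 - p)),
## so logit(0.1) is about -2.20 and logit(0.3) is about -0.85
logit(c(0.1, 0.3))
log(c(0.1, 0.3) / (1 - c(0.1, 0.3)))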

if (file.exists("analyses_list.rds")) {

  analyses_list <- readRDS("analyses_list.rds")

} else {

  analyses_list <- performAnalyses(
    scenario_list         = scenarios_list,
    method_names          = "exnex",
    prior_parameters_list = prior_parameters,
    seed                  = rng_seed,
    n_mcmc_iterations     = 5e4,
    n_cores               = 2L)

  saveRDS(analyses_list, "analyses_list.rds")

}
## check for unnecessary data
if (any(grepl("mu_",
              colnames(analyses_list$scenario_1$quantiles_list$exnex[[1]])))) {

  ## remove unnecessary data from the stored object
  for (scenario in c("scenario_1", "scenario_3", "scenario_4")) {

    ## ... in quantiles_list
    analyses_list[[scenario]]$quantiles_list$exnex <-
      lapply(analyses_list[[scenario]]$quantiles_list$exnex, function (x) {
        x[-c(2, 6), -c(1:6, 11, 12)]
      })

    ## ... on quantiles
    analyses_list[[scenario]]$analysis_parameters$quantiles <-
      c(0.025, 0.100, 0.200, 0.500, 0.975)

    ## ... on subjects and responders
    analyses_list[[scenario]]$scenario_data$n_subjects   <- NULL
    analyses_list[[scenario]]$scenario_data$n_responders <- NULL

  }

  analyses_list$scenario_1$quantiles_list$exnex[[1]]

  saveRDS(analyses_list, "analyses_list.rds", compress = "xz")
}

Note that although the total number of simulated trials is 3 × 10,000, there are only 2314 unique trial realizations among the three scenarios. As the function performAnalyses() takes advantage of this, its run time is reduced when compared to applying the model to all simulated trials. However, some additional time is required to map the results from the unique trials back to the simulated trials of each scenario.
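
As a rough illustration, the number of unique realizations could be counted directly from the simulated scenarios. The sketch below assumes that each scenario element of scenarios_list stores the simulated responder counts in a matrix n_responders with one row per trial; this field name is an assumption inferred from the scenario_data element used above, not a documented interface.

## count the unique simulated trial outcomes across the three scenarios
## (assumes scenarios_list$scenario_k$n_responders is a trials-by-cohorts matrix)
all_responders <- do.call(rbind, lapply(scenarios_list, function (x) x$n_responders))
nrow(unique(all_responders))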

Estimating the biases and MSEs

The bias and mean squared error (MSE) per cohort, as well as some posterior quantiles, are estimated with getEstimates(). The argument point_estimator = "mean" ensures that the mean instead of the median is used for calculating the biases and MSEs. To make the results easier to compare with the numbers presented by @exnex2016, they can be scaled and rounded with scaleRoundList():

estimates <- getEstimates(
    analyses_list   = analyses_list,
    point_estimator = "mean")

scaleRoundList(
  list         = estimates,
  scale_param  = 100,
  round_digits = 2)

Comparing the biases and MSEs with the respective numbers provided in Table V of @exnex2016, one can see that the results are rather similar.

Estimating the cohort-wise go probabilities

The cohort-wise go decisions are derived with the function getGoDecisions(), which applies the specified decision rules to each simulated basket trial for each scenario. The estimated go probabilities for each scenario can then be derived with the function getGoProbabilities().

The setup of the decision rules in getGoDecisions() warrants further explanation. Generally, a simple decision rule for a go in a single cohort $j$ can be written as $$P(p_j|\text{data} > \bar{p}_{j}) > \gamma_j,$$ where $p_j|\text{data}$ is the posterior response rate, $\bar{p}_{j}$ is the boundary response rate, and $\gamma_j$ is the posterior evidence level for a go decision. Thus, the decision rule consists of three parts: the posterior response rate, the boundary response rate, and the posterior evidence level. The arguments for setting up the decision rules correspond to these three parts: cohort_names picks the posterior response rates, boundary_rules specifies the boundary response rates and the type of comparison, and evidence_levels provides the posterior evidence levels. For example, the decision rule $P(p_1|\text{data} > 0.1) > 0.9$ would then be implemented as

cohort_names    = "p_1"
boundary_rules  = quote(c(x[1] > 0.1))
evidence_levels = 0.9

It is possible to specify different boundary rules and evidence levels for different analysis methods with list(). In this case, only the ExNex method is applied and no list is needed. Note that only evidence levels specified in performAnalyses() can be utilized.

It is possible to implement combined decision criteria, such as the one used by @exnex2016, who in Section 3.1 require for a go decision in the first cohort that $$P(p_1|\text{data} > 0.1) > 0.9 \quad\land\quad \text{Mean}(p_1|\text{data}) > 0.2$$ holds true. This combined decision rule would then be implemented as

cohort_names    = c("p_1", "p_1")
boundary_rules  = quote(c(x[1] > 0.1 & x[2] > 0.2))
evidence_levels = c(0.9, "mean")

Thus, the go probabilities for Scenarios 1, 3, and 4 can be estimated as follows:

decisions_list <- getGoDecisions(
  analyses_list   = analyses_list,
  cohort_names    = c("p_1", "p_1",
                      "p_2", "p_2",
                      "p_3", "p_3",
                      "p_4", "p_4"),
  evidence_levels = c(0.9, "mean",
                      0.9, "mean",
                      0.8, "mean",
                      0.8, "mean"),
  boundary_rules  = quote(c(x[1] > 0.1 & x[2] > 0.2,
                            x[3] > 0.1 & x[4] > 0.2,
                            x[5] > 0.1 & x[6] > 0.2,
                            x[7] > 0.1 & x[8] > 0.2)))

go_probabilities <- getGoProbabilities(decisions_list)

scaleRoundList(
  list         = go_probabilities,
  scale_param  = 100,
  round_digits = 0)

By comparing these go probabilities to the values provided in Table V of @exnex2016, one can see that the results differ by at most 1 percentage point.

References


