The adaptr
package simulates adaptive (multi-arm, multi-stage) clinical trials
using adaptive stopping, adaptive arm dropping and/or response-adaptive
randomisation.
The package has been developed as part of the INCEPT (Intensive Care Platform Trial) project, primarily supported by a grant from Sygeforsikringen "danmark".
Examples:

- Using adaptr to assess the performance of adaptive clinical trials according to different follow-up/data collection lags.
- Using adaptr to assess the performance of adaptive clinical trials according to different sceptical priors.

The easiest way to install the package is directly from CRAN:
install.packages("adaptr")
Alternatively, you can install the development version from GitHub; this requires the remotes package to be installed. The development version may contain additional features not yet available in the CRAN version, but may not be stable or fully documented:
# install.packages("remotes") remotes::install_github("INCEPTdk/adaptr@dev")
The central functionality of adaptr and the typical workflow are illustrated below.
First, the package is loaded and a cluster of parallel workers is initiated by
the setup_cluster()
function to facilitate parallel computing:
```r
library(adaptr)

setup_cluster(2)
```
Set up a trial specification (defining the trial design and scenario) using the general setup_trial() function, or one of the special-case variants with default priors: setup_trial_binom() (for binary, binomially distributed outcomes; used in this example) or setup_trial_norm() (for continuous, normally distributed outcomes).
```r
# Set up a trial using a binary, binomially distributed, undesirable outcome
binom_trial <- setup_trial_binom(
  arms = c("Arm A", "Arm B", "Arm C"),
  # Scenario with identical outcomes in all arms
  true_ys = c(0.25, 0.25, 0.25),
  # Response-adaptive randomisation with minimum 20% allocation in all arms
  min_probs = rep(0.20, 3),
  # Number of patients with data available at each analysis
  data_looks = seq(from = 300, to = 2000, by = 100),
  # Number of patients randomised at each analysis (higher than the numbers
  # with data, except at the last look, due to follow-up/data collection lag)
  randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000),
  # Stopping rules for inferiority/superiority not explicitly defined
  # Stop for equivalence at > 90% probability of differences < 5 %-points
  equivalence_prob = 0.9,
  equivalence_diff = 0.05
)

# Print trial specification
print(binom_trial, prob_digits = 3)
```
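For continuous, normally distributed outcomes, a specification can be set up analogously with setup_trial_norm(). The following is a minimal sketch only (not part of the example workflow); the arms, true means, standard deviations, and look structure are assumed values:

```r
# Minimal sketch - all values below are assumed for illustration
norm_trial <- setup_trial_norm(
  arms = c("Arm A", "Arm B"),
  true_ys = c(15, 17),  # Assumed true mean outcomes in each arm
  sds = c(10, 10),      # Assumed standard deviations in each arm
  data_looks = seq(from = 200, to = 1000, by = 200)
)
```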
In the example trial specification, there are no true between-arm differences,
and stopping rules for inferiority and superiority are not explicitly defined.
This is intentional, as these stopping rules will be calibrated to obtain a
desired probability of stopping for superiority in the scenario with no
between-arm differences (corresponding to the Bayesian type 1 error rate). Trial
specifications do not necessarily have to be calibrated, and simulations can be
run directly using the run_trials()
function covered below (or run_trial()
for a single simulation).
Calibration of a trial specification is done using the calibrate_trial() function, which by default calibrates constant, symmetrical stopping rules for inferiority and superiority (expecting a trial specification with identical outcomes in each arm), but can be used to calibrate any parameter in a trial specification towards any performance metric.
```r
# Calibrate the trial specification
calibrated_binom_trial <- calibrate_trial(
  trial_spec = binom_trial,
  n_rep = 1000,             # 1000 simulations for each step (more generally recommended)
  base_seed = 4131,         # Base random seed (for reproducible results)
  target = 0.05,            # Target value for calibrated metric (default value)
  search_range = c(0.9, 1), # Search range for superiority stopping threshold
  tol = 0.01,               # Tolerance range
  dir = -1                  # Tolerance range only applies below target
)

# Print result (to check if calibration is successful)
calibrated_binom_trial
```
The calibration is successful. The calibrated, constant stopping threshold for superiority is printed with the results and can be extracted using calibrated_binom_trial$best_x. With the default calibration functionality, the calibrated, constant stopping threshold for inferiority is symmetrical, i.e., 1 minus the stopping threshold for superiority. The calibrated trial specification may be extracted using calibrated_binom_trial$best_trial_spec and, if printed, will also include the calibrated stopping thresholds.
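For example, the calibrated thresholds and the calibrated specification may be pulled out as follows (a brief sketch using only the accessors mentioned above):

```r
# Extract the calibrated superiority threshold and the
# corresponding symmetrical inferiority threshold
superiority_threshold <- calibrated_binom_trial$best_x
inferiority_threshold <- 1 - calibrated_binom_trial$best_x

# Extract the calibrated trial specification (printing it also
# shows the calibrated stopping thresholds)
calibrated_binom_trial$best_trial_spec
```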
Calibration results may be saved (and reloaded) by using the path
argument, to
avoid unnecessary repeated simulations.
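A minimal sketch of this, assuming the path argument of calibrate_trial() saves results to the file on the first run and reloads them on later runs (the file name below is arbitrary):

```r
calibrated_binom_trial <- calibrate_trial(
  trial_spec = binom_trial,
  n_rep = 1000,
  base_seed = 4131,
  path = "binom_trial_calibration.rds"  # Arbitrary file name
)
```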
The results of the simulations using the calibrated trial specification
conducted during the calibration procedure may be extracted using
calibrated_binom_trial$best_sims
. These results can be summarised with several
functions. Most of these functions support different 'selection strategies' for
simulations not ending with superiority, i.e., performance metrics can be
calculated assuming different arms would be used in clinical practice if no arm
is ultimately superior.
The check_performance() function summarises performance metrics in a tidy data.frame, with uncertainty measures (bootstrapped confidence intervals) if requested. Here, performance metrics are calculated considering the 'best' arm (i.e., the one with the highest probability of being overall best) selected in simulations not ending with superiority:
```r
# Calculate performance metrics with uncertainty measures
binom_trial_performance <- check_performance(
  calibrated_binom_trial$best_sims,
  select_strategy = "best",
  uncertainty = TRUE,  # Calculate uncertainty measures
  n_boot = 1000,       # 1000 bootstrap samples (more typically recommended)
  ci_width = 0.95,     # 95% confidence intervals (default)
  boot_seed = "base"   # Use same random seed for bootstrapping as for simulations
)

# Print results
print(binom_trial_performance, digits = 2)
```
Similar results in list
format (without uncertainty measures) can be obtained
using the summary()
method, which comes with a print()
method providing
formatted results:
```r
binom_trial_summary <- summary(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)

print(binom_trial_summary)
```
Individual simulation results may be extracted in a tidy data.frame using extract_results().
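For instance (a brief sketch, reusing the select_strategy argument from above):

```r
# One row per simulation with key results
binom_trial_results <- extract_results(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)

head(binom_trial_results)
```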
Finally, the probabilities of different remaining arms and
their statuses (with uncertainty) at the last adaptive analysis can be
summarised using the check_remaining_arms()
function.
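A minimal sketch of such a call, using default settings:

```r
# Summarise the remaining arms and their statuses at the last adaptive analysis
check_remaining_arms(calibrated_binom_trial$best_sims)
```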
Several visualisation functions are included (all are optional, and all require the ggplot2 package to be installed).

Convergence and stability of one or more performance metrics may be visually assessed using the plot_convergence() function:
```r
plot_convergence(
  calibrated_binom_trial$best_sims,
  metrics = c("size mean", "prob_superior", "prob_equivalence")
  # select_strategy can be specified, but does not affect the chosen metrics
)
```
The empirical cumulative distribution functions for continuous performance metrics may also be visualised:
```r
plot_metrics_ecdf(
  calibrated_binom_trial$best_sims,
  metrics = "size"
)
```
The status probabilities for the overall trial (or for specific arms) according
to trial progress can be visualised using the plot_status()
function:
```r
# Overall trial status probabilities
plot_status(
  calibrated_binom_trial$best_sims,
  x_value = "total n"  # Total number of randomised patients on the x-axis
)
```
Finally, various metrics may be summarised over the progress of one or multiple trial simulations using the plot_history() function, which requires non-sparse results (the sparse argument must be FALSE in calibrate_trial(), run_trials(), or run_trial(), leading to additional results being saved).
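A brief sketch of this is shown below, assuming non-sparse results are generated first; plot_history() is called with its defaults here, which are assumed to show allocation probabilities at each adaptive analysis:

```r
# Re-run a limited number of simulations with non-sparse results
# (sparse = FALSE retains the additional data required by plot_history())
nonsparse_sims <- run_trials(
  calibrated_binom_trial$best_trial_spec,
  n_rep = 100,  # Fewer simulations used here only to limit runtime
  base_seed = 4131,
  sparse = FALSE
)

# Plot history of metrics over the simulated adaptive analyses
plot_history(nonsparse_sims)
```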
The calibrated stopping thresholds (calibrated in a scenario with no between-arm differences) may be used to run simulations with the same overall trial specification, but according to a different scenario (i.e., with between-arm differences present) to assess performance metrics (including the Bayesian analogue of power).
First, a new trial specification is set up using the same settings as before, except for between-arm differences and the calibrated stopping thresholds:
```r
binom_trial_calib_diff <- setup_trial_binom(
  arms = c("Arm A", "Arm B", "Arm C"),
  true_ys = c(0.25, 0.20, 0.30),  # Different outcomes in the arms
  min_probs = rep(0.20, 3),
  data_looks = seq(from = 300, to = 2000, by = 100),
  randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000),
  # Stopping rules for inferiority/superiority explicitly defined
  # using the calibration results
  inferiority = 1 - calibrated_binom_trial$best_x,
  superiority = calibrated_binom_trial$best_x,
  equivalence_prob = 0.9,
  equivalence_diff = 0.05
)
```
Simulations using the trial specification with calibrated stopping thresholds
and differences present can then be conducted using the run_trials()
function
and performance metrics calculated as above:
```r
binom_trial_diff_sims <- run_trials(
  binom_trial_calib_diff,
  n_rep = 1000,     # 1000 simulations (more generally recommended)
  base_seed = 1234  # Reproducible results
)

check_performance(
  binom_trial_diff_sims,
  select_strategy = "best",
  uncertainty = TRUE,
  n_boot = 1000,   # 1000 bootstrap samples (more typically recommended)
  ci_width = 0.95,
  boot_seed = "base"
)
```
Again, simulations may be saved and reloaded using the path
argument.
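For example (a brief sketch; the file name is arbitrary, and the path argument of run_trials() is assumed to save results on the first run and reload them afterwards):

```r
binom_trial_diff_sims <- run_trials(
  binom_trial_calib_diff,
  n_rep = 1000,
  base_seed = 1234,
  path = "binom_trial_diff_sims.rds"  # Arbitrary file name
)
```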
Similarly, overall trial statuses for the scenario with differences can be visualised:
```r
plot_status(binom_trial_diff_sims, x_value = "total n")
```
We use the GitHub issue tracker for all bug/issue reports and proposals for enhancements.
We welcome contributions directly to the code to improve performance as well as new functionality. For the latter, please first explain and motivate it in an issue.
Changes to the code base should follow these steps:

- Describe the changes in the NEWS.md file (check the file to see the formatting).
- Submit a pull request against the dev branch of adaptr.
If you use the package, please consider citing it:
citation(package = "adaptr")