Vaccine Efficacy Trial Design

knitr::opts_chunk$set(
  collapse = FALSE,
  comment = "#>",
  dev = "ragg_png",
  dpi = 96,
  fig.retina = 1,
  fig.width = 7.2916667,
  fig.asp = 0.618,
  fig.align = "center",
  out.width = "80%"
)

options(width = 58)
library(gsDesign)
library(gt)
library(dplyr)

Introduction

This article explores approximating a design based on the exact binomial method of @ChanBohidar with a time-to-event design using the method of @LachinFoulkes. This allows use of spending functions to derive boundaries for the exact method. The time-to-event design can not only be used to set boundaries for the @ChanBohidar method, but also to translate a specified enrollment duration and study duration into the enrollment rates and sample size required. This vignette also illustrates the concept of super-superiority often used in prevention studies. Finally, since this procedure is new as of November 2023, we suggest checks and potential revisions to spending function choices to optimize design boundaries.

Parameterization

We begin with the assumption that we will require a large sample size due to an endpoint with a small incidence rate. This could apply to a vaccine study or other prevention study with a relatively small number of events expected.

Exact binomial approach

Paralleling the notation of @ChanBohidar, we assume $N_C, P_C$ to be binomial sample size and probability of an event for each participant assigned to control; for the experimental treatment group, these are labeled $N_E, P_E$. Vaccine efficacy is defined as

$$\pi = 1 - P_E/P_C.$$

The parameter $\pi$ is often labeled as VE for vaccine efficacy. Taking into account the randomization ratio $r$ (experimental / control) the approximate probability that any given event is in the experimental group is

$$
\begin{aligned}
p &= rP_E/(rP_E + P_C)\\
&= r/(r + P_C/P_E)\\
&= r/(r + (1-\pi)^{-1}).
\end{aligned}
$$

As noted, this approximation is dependent on a large sample size and small probability of events. The above can be inverted to obtain

$$\pi = 1 - \frac{1}{r(1/p-1)}. $$

For our example of interest, we begin with an alternate hypothesis vaccine efficacy $\pi_1 = 0.7$ and experimental:control randomization ratio $r=3$. This converts to an alternate hypothesis (approximate) probability that any event is in the experimental group of

pi1 <- .7
ratio <- 3
p1 <- ratio / (ratio + 1 / (1 - pi1))
p1

We use the inversion formula to recover $\pi_1 = 0.7$:

1 - 1 / (ratio * (1 / p1 - 1))

Letting the null hypothesis vaccine efficacy be $\pi_0 = 0.3$, our exact binomial null hypothesis probability that an event is in the experimental group is

pi0 <- .3
p0 <- ratio / (ratio + 1 / (1 - pi0))
p0

We also translate several vaccine efficacy values to proportion of events in the experimental group:

ve <- c(.5, .6, .65, .7, .75, .8)
prob_experimental <- ratio / (ratio + 1 / (1 - ve))
tibble(VE = ve, "P(Experimental)" = prob_experimental) %>%
  gt() %>%
  tab_options(data_row.padding = px(1)) %>%
  fmt_number(columns = 2, decimals = 3)

Chapter 12 of @JTBook walks through how to design and analyze such a study using a fixed or group sequential design. The time-to-event approximation provides an initial approximation to computing bounds; more importantly, it provides sample size and study duration approximations that are not given by the Jennison and Turnbull approach.
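
As a rough orientation before deriving the full design, the usual normal approximation for a one-sample binomial test of $p_1$ versus $p_0$ gives a ballpark fixed-design event count. This is only a sketch for context; it is not the @LachinFoulkes calculation used below.

# Ballpark fixed-design event count from the normal approximation to a
# one-sample binomial test of p1 vs p0 (sketch only)
z_alpha <- qnorm(1 - 0.025) # one-sided Type I error used later
z_beta <- qnorm(1 - 0.1) # Type II error used later
n_fixed <- (z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1)))^2 /
  (p1 - p0)^2
ceiling(n_fixed)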

The time-to-event approach

For a time-to-event formulation with exponential failure rates $\lambda_C$ for control and $\lambda_E$ for experimental group assigned participants, we would define

$$\pi = 1 - \lambda_E / \lambda_C$$

which is 1 minus the hazard ratio often used in time-to-event studies. In the following we examine how closely the time-to-event method using asymptotic distributional assumptions can approximate an appropriate exact binomial design. We will also define a planned number of events at each of $K$ planned analyses by $D_k, 1\le k\le K$.
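
Before generating a design, a small numerical sketch (using the control failure rate and trial duration assumed below, and taking 24 months of follow-up purely for illustration) shows why the hazard-based and probability-based definitions of vaccine efficacy nearly coincide when event probabilities are small:

# Sketch: under the exponential model with small event probabilities,
# the probability-based VE is close to the hazard-based VE
lambdaC <- 0.002 # control failure rate per month (as assumed below)
lambdaE <- (1 - 0.7) * lambdaC # experimental rate under VE = 0.7 (HR = 0.3)
followup <- 24 # months of follow-up, for illustration
pC <- 1 - exp(-lambdaC * followup) # control event probability
pE <- 1 - exp(-lambdaE * followup) # experimental event probability
1 - pE / pC # probability-based VE; close to the hazard-based 0.7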

Generating a design

We begin by specifying parameters. The alpha and beta parameters will not be met exactly due to the discrete group sequential probability calculations performed. The current version includes only designs that use non-binding futility bounds or no futility bounds. The design is generated by first using asymptotic theory for a time-to-event design with specified spending functions. This design is then adapted to a design using the exact binomial method of @ChanBohidar. The randomization ratio (experimental/control) was assumed to be 3:1 as in the @SPUTNIK2021 trial.

alpha <- 0.025 # Type I error
beta <- 0.1 # Type II error (1 - power)
k <- 3 # number of analyses in group sequential design
timing <- c(.45, .7) # Relative timing of interim analyses compared to final
sfu <- sfHSD # Efficacy bound spending function (Hwang-Shih-DeCani)
sfupar <- -3 # Parameter for efficacy spending function
sfl <- sfHSD # Futility bound spending function (Hwang-Shih-DeCani)
sflpar <- -3 # Futility bound spending function parameter
timename <- "Month" # Time unit
failRate <- .002 # Exponential failure rate
dropoutRate <- .0001 # Exponential dropout rate
enrollDuration <- 8 # Enrollment duration
trialDuration <- 24 # Planned trial duration
VE1 <- .7 # Alternate hypothesis vaccine efficacy
VE0 <- .3 # Null hypothesis vaccine efficacy
ratio <- 3 # Experimental/Control enrollment ratio
test.type <- 4 # 1 for one-sided, 4 for non-binding futility

The time-to-event design

Now we generate the design. If the resulting alpha and beta do not satisfy requirements, adjust the parameters above until a satisfactory result is obtained.

# Derive Group Sequential Design
# This determines final sample size
x <- gsSurv(
  k = k, test.type = test.type, alpha = alpha, beta = beta, timing = timing,
  sfu = sfu, sfupar = sfupar, sfl = sfl, sflpar = sflpar,
  lambdaC = failRate, eta = dropoutRate,
  # Translate vaccine efficacy to HR
  hr = 1 - VE1, hr0 = 1 - VE0,
  R = enrollDuration, T = trialDuration,
  minfup = trialDuration - enrollDuration, ratio = ratio
)

Now we convert this to a design with integer event counts at analyses. This is achieved by rounding the interim analysis event counts from the above design and rounding up the final analysis event count. This results in a slight change in event fractions at interim analyses as well as a slight change from the targeted 90% power.

We now explain the rationale behind the spending function choices. Recall that the hazard ratio (HR) is 1 minus the VE. The ~HR at bound represents the approximate hazard ratio required to cross a bound. Thus, small HRs at the interim analyses along with small cumulative $\alpha$-spending suggest that crossing an interim efficacy bound would provide a result strong enough to potentially justify the new treatment. The hazard ratio of ~0.69 (VE ~ 0.31) at the interim 1 futility bound means that the efficacy trend would be essentially no better than the null hypothesis if the futility bound were crossed. The second interim futility bound with approximate VE of 0.5 would be worth discussion with a data monitoring committee as well as other planners for the trial; a custom spending function [@AndClark] could be used to set both the first and second interim futility bounds to desired levels.

xx <- toInteger(x)
gsBoundSummary(xx,
  tdigits = 1, logdelta = TRUE, deltaname = "HR", Nname = "Events",
  exclude = c("B-value", "CP", "CP H1", "PP")
) %>%
  gt() %>%
  tab_header(
    title = "Initial group sequential approximation",
    subtitle = "Integer event counts at analyses"
  ) %>%
  tab_options(data_row.padding = px(1))

A textual summary for the design is:

cat(summary(xx, timeunit = "months"))

Converting to an exact binomial design

We now convert this to an exact binomial design. The bound counts are described in the initial table displayed below. N is the total event count, and a is the maximum number of events in the experimental group that crosses the efficacy bound. For example, if 12 or fewer of 30 events at interim 1 are in the experimental group, the efficacy bound has been crossed. The futility bound is in b; at the first interim, if 21 or more of 30 total events are in the experimental group, then the futility bound would be crossed and the alternate hypothesis could be rejected. The second and third tables below give probabilities of crossing the upper (futility) and lower (efficacy) bounds under the null (theta = 0.6774) and alternate (theta = 0.4737) hypotheses, respectively; these calculations are done under the exact binomial distribution.

xb <- toBinomialExact(x)
xb
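
As a quick consistency check, the theta values quoted above should simply be the null and alternate probabilities that an event falls in the experimental group, $p_0$ and $p_1$, computed earlier:

# theta for the exact binomial design should match the null and alternate
# event-split probabilities derived above
xb$theta
c(p0, p1)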

Combined summary table

We produce a summary table. Code for this is provided in the R markdown for this vignette provided with the package. This combines information from the time-to-event design for calendar timing of analyses (Time) and expected sample size at each analysis (N) along with bounds and operating characteristics for the design.

# Function to print summary table for VE design
veTable <- function(xbDesign, tteDesign, ve) {
  # Analysis, N, Time, Total Cases, Success(Cases, VE, VE lower CI, Spend), Futility (Cases, VE, Spend), Type I Error, Power table
  ratio <- tteDesign$ratio
  prob_experimental <- ratio / (ratio + 1 / (1 - ve))
  power_table <- gsBinomialExact(
    k = xbDesign$k, theta = prob_experimental,
    n.I = xbDesign$n.I, a = xbDesign$lower$bound,
    b = xbDesign$upper$bound
  )$lower$prob
  # Cumulative sum over analyses (within each column)
  power_table <- apply(power_table, 2, cumsum)
  colnames(power_table) <- paste(ve * 100, "%", sep = "")
  out_tab <- tibble(
    Analysis = 1:tteDesign$k,
    Time = tteDesign$T,
    N = as.vector(round(tteDesign$eNC + tteDesign$eNE)),
    Cases = xbDesign$n.I,
    Success = xbDesign$lower$bound,
    Futility = xbDesign$upper$bound,
    ve_efficacy = 1 - 1 / (ratio * (xbDesign$n.I / xbDesign$lower$bound - 1)), # Efficacy bound
    ve_futility = 1 - 1 / (ratio * (xbDesign$n.I / xbDesign$upper$bound - 1)), # Futility bound
    alpha = as.vector(cumsum(
      gsBinomialExact(
        k = xbDesign$k, theta = xbDesign$theta[1], n.I = xbDesign$n.I,
        a = xbDesign$lower$bound, b = xbDesign$n.I + 1
      )$lower$prob
    )),
    beta = as.vector(cumsum(xbDesign$upper$prob[, 2]))
  )
  cbind(out_tab, power_table)
}
veTable(xb, x, ve) %>%
  gt() %>%
  fmt_number(columns = 2, decimals = 1) %>%
  fmt_number(columns = c(7:8, 11:16), decimals = 2) %>%
  fmt_number(columns = 9:10, decimals = 4) %>%
  tab_spanner(label = "Experimental Cases at Bound", columns = 5:6, id = "cases") %>%
  tab_spanner(label = "Power by Vaccine Efficacy", columns = 11:16, id = "power") %>%
  tab_spanner(label = "Error Spending", columns = 9:10, id = "spend") %>%
  tab_spanner(label = "Vaccine Efficacy at Bound", columns = 7:8, id = "vebound") %>%
  cols_label(
    ve_efficacy = "Efficacy",
    ve_futility = "Futility"
  ) %>%
  tab_footnote(
    footnote = "Cumulative spending at each analysis",
    locations = cells_column_spanners(spanners = "spend")
  ) %>%
  tab_footnote(
    footnote = "Experimental case counts to cross between success and futility counts do not stop trial",
    locations = cells_column_spanners(spanners = "cases")
  ) %>%
  tab_footnote(
    footnote = "Exact vaccine efficacy required to cross bound",
    locations = cells_column_spanners(spanners = "vebound")
  ) %>%
  tab_footnote(
    footnote = "Cumulative power at each analysis by underlying vaccine efficacy",
    locations = cells_column_spanners(spanners = "power")
  ) %>%
  tab_footnote(
    footnote = "alpha-spending for efficacy ignores non-binding futility bound",
    locations = cells_column_labels(columns = alpha)
  ) %>%
  tab_header("Design Bounds and Operating Characteristics")

The initial approximation of bounds for the exact binomial design was generated from the time-to-event design as follows. First, we computed nominal p-value 1-sided bounds under the null hypothesis for the efficacy bounds using the normal approximation that the time-to-event design used:

efficacyNominalPValue <- pnorm(-xx$upper$bound)
efficacyNominalPValue

Then we took the inverse binomial distribution for these p-values assuming the targeted total number of cases to obtain:

qbinom(p = efficacyNominalPValue, size = xx$n.I, prob = p0) - 1

This is actually the same as the final bounds computed above:

xb$lower$bound
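
A direct check with the exact binomial tail probability confirms that these counts stay at or below the nominal levels at every analysis (this compares analysis-specific nominal levels, not cumulative spending):

# Exact tail probability at each efficacy bound under the null hypothesis;
# these should not exceed the nominal p-values from the normal approximation
pbinom(xb$lower$bound, size = xb$n.I, prob = p0)
efficacyNominalPValue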

This and the initial approximation for the futility bound are returned from toBinomialExact():

xb$init_approx$a
xb$init_approx$b

For the futility bound, only a slight adjustment was required for the final bound:

xb$upper$bound

The vaccine efficacy at bounds should be checked to see if the evidence is convincing enough to be accepted as a clinically relevant benefit in addition to statistical benefit (efficacy bounds) or a less than relevant benefit for futility bounds.

Checking design properties

Next, we look at $\alpha$- and $\beta$-spending for the time-to-event design and compare to the exact binomial design, which is restricted to the discrete possible counts at bounds. For both $\alpha$- and $\beta$-spending, the exact binomial design has no more spending at each analysis than allowed by each spending function. We also confirm that changing any of the exact bounds by 1 exceeds the targeted spending.

$\alpha$-spending

# Exact design cumulative alpha-spending at efficacy bounds
# (non-binding)
nb <- gsBinomialExact(
  k = xb$k, theta = xb$theta, n.I = xb$n.I,
  a = xb$lower$bound, b = xb$n.I + 1
)
cumsum(nb$lower$prob[, 1])
# Targeted alpha-spending
xx$upper$sf(alpha, t = xx$timing, xx$upper$param)$spend

Above we see that the achieved $\alpha$-spending ignoring the futility bound is controlled at the targeted level at each analysis; because of the exact bound, control is below the target. For this particular example, we show below that increasing any efficacy bound by 1 not only exceeds the targeted cumulative spend at that analysis (diagonal elements), but also exceeds the targeted total spend of 0.025 at the final analysis (final column). The latter property will not always hold. Choosing your spending function carefully (in this case, via the spending function parameter) so that cumulative spending for the exact design is close to the targeted $\alpha$ is worth careful examination. For this case, had we chosen the Hwang-Shih-DeCani efficacy spending parameter as $\gamma=-4$ instead of $\gamma=-3$, the second property would not have held.

# Check that increasing any efficacy bound by 1 exceeds the targeted
# cumulative alpha-spending
excess_alpha_spend <- matrix(0, nrow = nb$k, ncol = nb$k)
for (i in 1:xb$k) {
  a <- xb$lower$bound
  a[i] <- a[i] + 1
  excess_alpha_spend[i, ] <- cumsum(
    gsBinomialExact(
      k = xb$k, theta = xb$theta, n.I = xb$n.I,
      a = a, b = xb$n.I + 1
    )$lower$prob[, 1]
  )
}
excess_alpha_spend

$\beta$-spending

# Cumulative beta-spending for exact design
cumsum(xb$upper$prob[,2])
# Targeted beta-spending
xx$lower$sf(beta, t = xx$timing, xx$lower$param)$spend

Since the futility bound at the final analysis is only 1 plus the efficacy bound, it cannot be lowered in the following. However, we see that lowering either interim futility bound by 1 would exceed targeted interim spending (diagonal elements) and total Type II error spending (third column). Again, the spending function had to be carefully chosen to ensure the second of these properties; e.g., with the O'Brien-Fleming-like spending function the second property did not hold.

# Check that lowering either interim futility bound by 1 exceeds the
# targeted cumulative beta-spending
excess_beta_spend <- matrix(0, nrow = nb$k - 1, ncol = nb$k)
for (i in 1:(xb$k - 1)) {
  b <- xb$upper$bound
  b[i] <- b[i] - 1
  excess_beta_spend[i, ] <- cumsum(as.numeric(
    gsBinomialExact(
      k = xb$k, theta = xb$theta, n.I = xb$n.I,
      a = xb$lower$bound, b = b
    )$upper$prob[, 2]
  ))
}
excess_beta_spend

Bound Update at Time of Analysis for Example 2

We now use the second interim analysis outcome from the SPUTNIK trial to show how to update bounds when event counts differ from those planned. The first database lock was on November 18, 2020 and included 20 endpoint cases. The second database lock, with 78 endpoint cases, was one week later on November 24, 2020. These counts are slightly different from the targeted event counts above, so bounds must be updated using the spending functions for the trial based on the altered information fraction compared to plan; see the table below.

At the time of the analysis of 78 endpoint events, 16 were in the experimental group, comfortably crossing the efficacy bound that required 44 or fewer experimental events. Given the rapid accrual of endpoints, the futility bound would likely have been irrelevant for the first interim. Since Type I error assumed a non-binding futility bound, the futility bound at the first analysis could be ignored for decision-making purposes without inflating Type I error. The VE = -0.33 at that futility bound favored placebo. Since we have overrun the targeted event count at the second analysis, we exceed the targeted power and never reach the allowed $\beta$-spending for Type II error. Note that for this table, the expected sample size and calendar timing are no longer needed.

ebUpdate <- toBinomialExact(xx, observedEvents = c(20, 78))
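
As a quick check on the narrative above (a sketch using the updated design object and the inversion formula from earlier), the observed 16 experimental cases out of 78 fall well below the updated efficacy bound, and the corresponding point estimate of vaccine efficacy is roughly 0.91:

# Updated efficacy (success) bounds at the two analyses
ebUpdate$lower$bound
# Observed experimental cases among 78 total at the second analysis
observed <- 16
observed <= ebUpdate$lower$bound[2] # efficacy bound crossed
# Point estimate of vaccine efficacy from the inversion formula
1 - 1 / (ratio * (78 / observed - 1))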

We hide the code to produce the table; this is available in package vignette code.

  ve <- c(.65, .75, .85)
  # Analysis, Total Cases, Success(Cases, VE, VE lower CI, Spend), Futility (Cases, VE, Spend), Type I Error, Power table
  ratio <- xx$ratio
  prob_experimental <- ratio / (ratio + 1 / (1 - ve))
  power_table <- gsBinomialExact(
    k = ebUpdate$k, theta = prob_experimental,
    n.I = ebUpdate$n.I, a = ebUpdate$lower$bound,
    b = ebUpdate$upper$bound
  )$lower$prob
  # Cumulative sum over analyses (within each column)
  power_table <- apply(power_table, 2, cumsum)
  colnames(power_table) <- paste(ve * 100, "%", sep = "")
  out_tab <- tibble(
    Analysis = 1:ebUpdate$k,
    Cases = ebUpdate$n.I,
    Success = ebUpdate$lower$bound,
    Futility = ebUpdate$upper$bound,
    ve_efficacy = 1 - 1 / (ratio * (ebUpdate$n.I / ebUpdate$lower$bound - 1)), # Efficacy bound
    ve_futility = 1 - 1 / (ratio * (ebUpdate$n.I / ebUpdate$upper$bound - 1)), # Futility bound
    alpha = as.vector(cumsum(
      gsBinomialExact(
        k = ebUpdate$k, theta = ebUpdate$theta[1], n.I = ebUpdate$n.I,
        a = ebUpdate$lower$bound, b = ebUpdate$n.I + 1
      )$lower$prob
    )),
    beta = as.vector(cumsum(ebUpdate$upper$prob[, 2]))
  )
  out_tab <- cbind(out_tab, power_table)
  out_tab %>% gt() %>%
  fmt_number(columns = c(5:6, 9:11), decimals = 2) %>%
  fmt_number(columns = 7:8, decimals = 4) %>%
  tab_spanner(label = "Cases at Bound", columns = 3:4, id = "cases") %>%
  tab_spanner(label = "Power by VE", columns = 9:11, id = "power") %>%
  tab_spanner(label = "Error Spending", columns = 7:8, id = "spend") %>%
  tab_spanner(label = "VE at Bound", columns = 5:6, id = "vebound") %>%
  cols_label(
    ve_efficacy = "Efficacy",
    ve_futility = "Futility"
  ) %>%
  tab_footnote(
    footnote = "Cumulative spending at each analysis",
    locations = cells_column_spanners(spanners = "spend")
  ) %>%
  tab_footnote(
    footnote = "Experimental case counts; counts between success and futility bounds do not stop trial",
    locations = cells_column_spanners(spanners = "cases")
  ) %>%
  tab_footnote(
    footnote = "Exact vaccine efficacy required to cross bound",
    locations = cells_column_spanners(spanners = "vebound")
  ) %>%
  tab_footnote(
    footnote = "Cumulative power at each analysis by underlying vaccine efficacy",
    locations = cells_column_spanners(spanners = "power")
  ) %>%
  tab_footnote(
    footnote = "Efficacy spending ignores non-binding futility bound",
    locations = cells_column_labels(columns = alpha)
  ) %>%
  tab_header(title = "Updated Bounds for Actual Analyses from SPUTNIK trial")

Summary

We have provided an extended example to show that a @ChanBohidar exact binomial design using spending function bounds can be derived in a two-step process that delivers sample size and bounds by 1) deriving a related time-to-event design using asymptotic methods and then 2) converting to an exact binomial design. Adjustments were made to target Type I and Type II error probabilities in the asymptotic approximation to ensure the exact binomial Type I and Type II error rates were achieved. The method seems a reasonable and straightforward approach to develop a complete design that accounts for the impact of enrollment, failure rates, dropout rates, and trial duration.

References


