corrPower: Power Calculation for Log-rank Tests in Overlapping Populations

View source: R/corrPower.R


Power Calculation for Log-rank Tests in Overlapping Populations

Description

This function calculates power at specified analysis times based on the asymptotic distribution of the log-rank test statistics in overlapping populations under H1. For a group sequential design, power is calculated for each analysis and for the overall study.

Usage

corrPower(
  T = c(24, 36),
  n = list(AandB = 300, AnotB = 0, BnotA = 450),
  r = list(AandB = 1/2, AnotB = 0, BnotA = 1/2),
  sf = list(sfuA = gsDesign::sfLDOF, sfuB = gsDesign::sfLDOF),
  h0 = list(AandB = function(t) log(2)/12,
            AnotB = function(t) log(2)/12,
            BnotA = function(t) log(2)/12),
  S0 = list(AandB = function(t) exp(-log(2)/12 * t),
            AnotB = function(t) exp(-log(2)/12 * t),
            BnotA = function(t) exp(-log(2)/12 * t)),
  h1 = list(AandB = function(t) log(2)/12 * 0.7,
            AnotB = function(t) log(2)/12 * 0.7,
            BnotA = function(t) log(2)/12 * 0.7),
  S1 = list(AandB = function(t) exp(-log(2)/12 * 0.7 * t),
            AnotB = function(t) exp(-log(2)/12 * 0.7 * t),
            BnotA = function(t) exp(-log(2)/12 * 0.7 * t)),
  strat.ana = c("Y", "N"),
  alpha = 0.025,
  w = c(1/3, 2/3),
  epsilon = list(epsA = c(NA, NA), epsB = c(1, 1)),
  method = c("Balanced Allocation", "Customized Allocation"),
  F.entry = function(t) (t/18)^1.5 * as.numeric(t <= 18) + as.numeric(t > 18),
  G.ltfu = function(t) 1 - exp(-0.03/12 * t),
  variance = "H1"
)

Arguments

T

A vector of analysis times for the interim and final analyses, measured from the time the first subject is randomized.

n

A list of total sample sizes (both arms combined) for subjects in both populations A and B (AandB), in A but not B (AnotB), and in B but not A (BnotA). See Usage for the default.

r

A list of proportions of experimental-arm subjects in each subgroup: both in A and B, A not B, and B not A. If randomization is stratified by A and B, then r$AandB = r$AnotB = r$BnotA.

sf

Spending functions for tests A and B. Default sf = list(sfuA = gsDesign::sfLDOF, sfuB = gsDesign::sfLDOF), i.e., both tests use the Lan-DeMets O'Brien-Fleming spending boundary. Refer to the gsDesign package for other choices of spending functions.
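
As an illustrative sketch, a different gsDesign spending function can be supplied for either test; for example, Lan-DeMets Pocock-type spending for test B:

  # O'Brien-Fleming-type spending for test A; Pocock-type for test B
  sf <- list(sfuA = gsDesign::sfLDOF, sfuB = gsDesign::sfLDPocock)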

h0

Hazard function of the control arm for subjects in both populations A and B, A not B, and B not A. h0(t) = log(2)/m0 corresponds to T ~ exponential with median m0. For a design that does not consider heterogeneous effects across strata in the control arm, specify the same h0(t) function for all strata.

S0

Survival function of the control arm for subjects in both populations A and B, A not B, and B not A. S0(t) = exp(-log(2)/m0 * t) corresponds to T ~ exponential with median m0. For a design that does not consider heterogeneous effects across strata in the control arm, specify the same S0(t) function for all strata. The density function is f0(t) = h0(t) * S0(t).
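
A minimal sketch of the exponential case described above (m0 = 12 is an illustrative median):

  m0 <- 12                               # median survival, months
  h0 <- function(t) log(2)/m0            # constant hazard
  S0 <- function(t) exp(-log(2)/m0 * t)  # survival function
  f0 <- function(t) h0(t) * S0(t)        # density: f0 = h0 * S0
  S0(m0)                                 # 0.5, by definition of the median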

h1

Hazard function of the experimental arm for subjects in both populations A and B, A not B, and B not A. For a design that does not consider heterogeneous effects across strata in the experimental arm, specify the same h1(t) function for all strata.

S1

Survival function of the experimental arm for subjects in both populations A and B, A not B, and B not A. For a design that does not consider heterogeneous effects across strata in the experimental arm, specify the same S1(t) function for all strata.

strat.ana

Stratified analysis flag, "Y" or "N"; default "Y". Stratified analysis means that the test of HA is stratified by B, and vice versa.

w

A vector of proportions for allocating the type I error across all primary hypotheses. The elements of w must sum to 1.
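
With the defaults shown in Usage, the implied per-hypothesis alpha split is:

  overall.alpha <- 0.025
  w <- c(1/3, 2/3)
  w * overall.alpha   # 0.00833 for HA, 0.01667 for HB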

method

The method for alpha adjustment: "Balanced Allocation" (the adjustment is allocated equally across all primary hypotheses) or "Customized Allocation" (the adjustment follows pre-specified levels for some hypotheses). Default "Balanced Allocation".

F.entry

Distribution function of enrollment. For uniform enrollment over an enrollment period A, F.entry(t) = t/A for 0 <= t <= A and F.entry(t) = 1 for t > A. For more general non-uniform enrollment with weight psi, F.entry(t) = (t/A)^psi * I(0 <= t <= A) + I(t > A). The default shown in Usage takes psi = 1.5 with an 18-month enrollment period.
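
As an illustration, the two forms can be compared directly (A = 18 months, matching the default; F.unif and F.wt are hypothetical helper names):

  A <- 18
  F.unif <- function(t) pmin(pmax(t, 0)/A, 1)                               # psi = 1
  F.wt   <- function(t, psi = 1.5) (t/A)^psi * as.numeric(t <= A) + as.numeric(t > A)
  F.unif(9)   # 0.5: half enrolled by month 9 under uniform entry
  F.wt(9)     # ~0.35: back-loaded enrollment with psi = 1.5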

G.ltfu

Distribution function of the loss-to-follow-up censoring process. The observed survival time is min(survival time, loss-to-follow-up time). G.ltfu = 0 means no loss to follow-up; the default shown in Usage corresponds to an exponential censoring process with a 3% drop-off rate per 12 months.
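
For example, evaluating the default censoring distribution gives the probability of being lost by a given time:

  G.ltfu <- function(t) 1 - exp(-0.03/12 * t)
  G.ltfu(24)   # ~0.058: probability of loss to follow-up by month 24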

variance

Option for the variance estimate, "H1" or "H0". Default "H1", which is usually more conservative than "H0".

incr.alpha

A vector of incremental alpha allocated to each analysis, with sum(incr.alpha) equal to overall.alpha. If sf is provided, incr.alpha is ignored; if sf is not provided, incr.alpha is required. In detail, if an alpha spending function a(t) is used with timing = c(t1, ..., tK = 1), then incr.alpha = c(a(t1), a(t2) - a(t1), ..., a(tK) - a(t(K-1))).
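
As a sketch, incr.alpha can be derived from a spending function via gsDesign (the timing values here are illustrative):

  timing <- c(2/3, 1)
  a <- gsDesign::sfLDOF(alpha = 0.025, t = timing)$spend   # cumulative a(t)
  incr.alpha <- diff(c(0, a))                              # c(a(t1), a(t2) - a(t1))
  sum(incr.alpha)                                          # equals overall.alpha = 0.025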

overall.alpha

Overall familywise one-sided alpha, default 0.025, for both tests.

epsA

A vector of efficiency factors for testing HA at all analyses.

epsB

A vector of efficiency factors for testing HB at all analyses. epsA and epsB are required when method = "Customized Allocation". At analysis k, either epsA[k] or epsB[k] must be specified, but not both; the unspecified one is determined by the function. For example, with epsA = c(1, NA) and epsB = c(NA, 1): at the first analysis, the HA rejection boundary equals the boundary from the alpha-splitting method, and the benefit from the correlation is fully given to HB to improve its rejection boundary; at the second analysis, the HB rejection boundary equals the alpha-splitting boundary, and the benefit from the correlation is fully given to HA. To ensure the improved rejection boundary is no worse than the alpha-splitting method, epsA and epsB must each be at least 1, and each is capped by the acceptable value obtained when the other equals 1.
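
A minimal sketch of the pattern described above, in the epsilon format shown in Usage (the NA entries are left for the function to determine):

  epsilon <- list(epsA = c(1, NA), epsB = c(NA, 1))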

Value

An object containing the data frames below.

  • overall.alpha: Family-wise type I error and each test's type I error

  • events: Number of events by analysis and by treatment arm

    • DCO: Data cutoff time for analysis, calculated from first subject randomized

    • n.events0: Expected events in control arm

    • n.events1: Expected events in experimental arm

    • n.events.total: Expected events in both control and experimental arms

    • n0: Number of subjects in the control arm

    • n1: Number of subjects in the experimental arm

    • n.total: Total number of subjects

    • maturity0: Maturity in the control arm (percent of events)

    • maturity1: Maturity in the experimental arm (percent of events)

    • maturity: Maturity in both arms (percent of events)

  • power: Power calculations, including variables:

    • timingA: Information fraction for test A

    • marg.powerA: Marginal power for test A, regardless of previous test results

    • incr.powerA: Incremental power for test A; the sum of all incremental powers is the overall power

    • cum.powerA: Cumulative power for test A

    • overall.powerA: Overall power for test A

    • marg.powerA0: Marginal power for test A, regardless of previous test results, using the alpha-splitting method

    • incr.powerA0: Incremental power for test A using the alpha-splitting method; the sum of all incremental powers is the overall power

    • cum.powerA0: Cumulative power for test A using the alpha-splitting method

    • overall.powerA0: Overall power for test A using the alpha-splitting method

    • timingB: Information fraction for test B

    • marg.powerB: Marginal power for test B, regardless of previous test results

    • incr.powerB: Incremental power for test B; the sum of all incremental powers is the overall power

    • cum.powerB: Cumulative power for test B

    • overall.powerB: Overall power for test B

    • marg.powerB0: Marginal power for test B, regardless of previous test results, using the alpha-splitting method

    • incr.powerB0: Incremental power for test B using the alpha-splitting method; the sum of all incremental powers is the overall power

    • cum.powerB0: Cumulative power for test B using the alpha-splitting method

    • overall.powerB0: Overall power for test B using the alpha-splitting method

  • bd: Rejection boundaries in z value and p value, including variables:

    • timingA: Information fraction for test A

    • incr.alphaA: Incremental alpha for test A

    • cum.alphaA: Cumulative alpha for test A

    • bd.pA0: p value boundary for test A based on alpha-splitting method

    • bd.zA0: z value boundary for test A based on alpha-splitting method

    • bd.pA: p value boundary for test A

    • bd.zA: z value boundary for test A

    • epsA: Efficiency factor for test A with correlation considered. Larger epsA indicates more improvement over the alpha-splitting method.

    • timingB: Information fraction for test B

    • incr.alphaB: Incremental alpha for test B

    • cum.alphaB: Cumulative alpha for test B

    • bd.pB0: p value boundary for test B based on alpha-splitting method

    • bd.zB0: z value boundary for test B based on alpha-splitting method

    • bd.pB: p value boundary for test B

    • bd.zB: z value boundary for test B

    • epsB: Efficiency factor for test B with correlation considered. Larger epsB indicates more improvement over the alpha-splitting method.

  • CV: Critical values on the hazard ratio (HR) and median scales

  • median: Medians by treatment arm (0 or 1) and test (A or B)

  • max.eps: Upper bound of acceptable epsA and epsB values when method = "Customized Allocation".

  • corr: Correlation matrix of log-rank test statistics vector for K analyses: (zA1, zB1, zA2, zB2, ..., zAK, zBK)

  • cov: Covariance matrix of log-rank test score statistics vector for K analyses: (uA1, uB1, uA2, uB2, ..., uAK, uBK)

  • method: Method of improvement allocation

  • strat.ana: Stratified analysis flag (Y/N)

Examples

 
#Example: 1:1 randomization; enrollment follows a non-uniform
#distribution with weight 1.5 over an 18-month enrollment period.
#Control arm ~ exponential distribution with median 12 months;
#experimental arm ~ exponential distribution (proportional hazards).
#Assumes 3% drop-off per 12 months of follow-up.
#350 PD-L1+ (A and B) subjects and 240 PD-L1- (B not A) subjects, 590 in total.
#Two analyses are planned: 24 and 36 months.
#Assumed HR: 0.60 for PD-L1+ and 0.80 for PD-L1-, so the HR for the
#overall population lies between the two (roughly 0.7).

pow <- corrPower(T = c(24, 36), n = list(AandB = 350, AnotB = 0, BnotA = 240),
           r = list(AandB = 1/2, AnotB = 0, BnotA = 1/2),
           sf = list(sfuA = gsDesign::sfLDOF, sfuB = gsDesign::sfLDOF),
           h0 = list(AandB = function(t){log(2)/12}, AnotB = function(t){log(2)/12},
                     BnotA = function(t){log(2)/12}),
           S0 = list(AandB = function(t){exp(-log(2)/12*t)}, AnotB = function(t){exp(-log(2)/12*t)},
                     BnotA = function(t){exp(-log(2)/12*t)}),
           h1 = list(AandB = function(t){log(2)/12*0.6}, AnotB = function(t){log(2)/12*0.6},
                     BnotA = function(t){log(2)/12*0.8}),
           S1 = list(AandB = function(t){exp(-log(2)/12*0.6*t)}, AnotB = function(t){exp(-log(2)/12*0.6*t)},
                     BnotA = function(t){exp(-log(2)/12*0.8*t)}),
           strat.ana = "Y",
           alpha = 0.025, w = c(1/3, 2/3), epsilon = list(epsA = c(NA, NA), epsB = c(1, 1)),
           method = "Balanced Allocation",
           F.entry = function(t){(t/18)^1.5*as.numeric(t <= 18) + as.numeric(t > 18)},
           G.ltfu = function(t){1 - exp(-0.03/12*t)}, variance = "H1")
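
#The returned components can then be inspected by name, per the Value
#section (a sketch assuming list-style access):
pow$overall.alpha   # family-wise and per-test type I error
pow$events          # expected events by analysis and arm
pow$power           # marginal, incremental, cumulative, and overall power
pow$bd              # rejection boundaries in z value and p value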

