```r
knitr::opts_chunk$set(collapse = TRUE, comment = "#")
library(PowerTOST) # attach the library
```
| Parameter | Argument | Purpose | Default |
|---|---|---|---|
| $\small{\alpha}$ | `alpha` | Nominal level of the test | `0.025` |
| $\small{\pi}$ | `targetpower` | Minimum desired power | `0.80` |
| logscale | `logscale` | Analysis on log-transformed or original scale? | `TRUE` |
| margin | `margin` | Non-inferiority margin | see below |
| $\small{\theta_0}$ | `theta0` | ‘True’ or assumed T/R ratio | see below |
| CV | `CV` | CV | none |
| design | `design` | Planned design | `"2x2"` |
| imax | `imax` | Maximum number of iterations | `100` |
| print | `print` | Show information in the console? | `TRUE` |
| details | `details` | Show details of the sample size search? | `FALSE` |
Note that contrary to the other functions of the package a one-sided t-test (instead of TOST) is employed. Hence, $\small{\alpha}$ defaults to 0.025.\
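Operationally, a one-sided test at $\small{\alpha=0.025}$ is equivalent to comparing the lower limit of a conventional two-sided 95% confidence interval with the margin. A base-R sketch with made-up numbers (the point estimate `pe`, standard error `se`, and degrees of freedom `df` are assumptions for illustration only):

```r
# Hypothetical example values (not taken from the package):
pe     <- 0.92  # observed T/R point estimate
se     <- 0.06  # standard error of the log-difference
df     <- 34    # residual degrees of freedom
margin <- 0.80  # non-inferiority margin

# Lower limit of the two-sided 95% CI on the back-transformed scale;
# equivalent to a one-sided test at alpha = 0.025
lower       <- exp(log(pe) - qt(0.975, df) * se)
noninferior <- lower > margin
round(lower, 4)
noninferior   # TRUE: the lower CL lies above the margin
```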
Defaults depending on the argument `logscale`:

| Parameter | Argument | `logscale = TRUE` | `logscale = FALSE` |
|---|---|---|---|
| margin | `margin` | 0.80 | –0.20 |
| $\small{\theta_0}$ | `theta0` | 0.95 | +0.05 |
Arguments `targetpower`, `margin`, `theta0`, and `CV` have to be given as fractions, not in percent.\
The CV is generally the within- (intra-) subject coefficient of variation. Only for `design = "parallel"` it is the total (a.k.a. pooled) CV.
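For orientation, on the log scale the CV is linked to the residual variance by $\small{CV=\sqrt{e^{\sigma^2}-1}}$. A quick base-R round trip (the helper names below are ours; PowerTOST provides `CV2mse()` / `mse2CV()` for the same purpose):

```r
# Round trip between the CV and the residual variance on the log scale
CV2mse <- function(CV)  log(1 + CV^2)     # CV -> variance
mse2CV <- function(mse) sqrt(exp(mse) - 1) # variance -> CV

mse <- CV2mse(0.25)
round(mse, 5)  # 0.06062
mse2CV(mse)    # recovers 0.25
```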
Designs with one (parallel), two (conventional crossover and paired), and three or four periods (replicates) are supported.
```
#    design                         name    df
# "parallel"            2 parallel groups   n-2
#      "2x2"                2x2 crossover   n-2
#    "2x2x2"              2x2x2 crossover   n-2
#    "2x2x3"    2x2x3 replicate crossover  2n-3
#    "2x2x4"    2x2x4 replicate crossover  3n-4
#    "2x4x4"    2x4x4 replicate crossover  3n-4
#    "2x3x3"    partial replicate (2x3x3)  2n-3
#    "2x4x2"             Balaam’s (2x4x2)   n-2
#   "2x2x2r"  Liu’s 2x2x2 repeated x-over  3n-2
#   "paired"                 paired means   n-1
```
The terminology of the `design` argument follows this pattern: treatments x sequences x periods. The conventional TR|RT (a.k.a. AB|BA) design can be abbreviated as `"2x2"`. Some call the `"parallel"` design a ‘one-sequence’ design. The design `"paired"` has two periods but no sequences, e.g., in studying linear pharmacokinetics a single dose is followed by multiple doses. A profile in steady state (T) is compared to the one after the single dose (R). Note that the underlying model assumes no period effects.
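The df column of the table above can be reproduced with a small helper (a sketch of our own, mapping the design codes of the table to residual degrees of freedom as a function of the total sample size `n`):

```r
# Residual degrees of freedom by design code,
# following the table of supported designs
df.design <- function(design, n) {
  switch(design,
         "parallel" = n - 2, "2x2"    = n - 2, "2x2x2" = n - 2,
         "2x2x3"    = 2 * n - 3, "2x3x3" = 2 * n - 3,
         "2x2x4"    = 3 * n - 4, "2x4x4" = 3 * n - 4,
         "2x4x2"    = n - 2, "2x2x2r" = 3 * n - 2,
         "paired"   = n - 1,
         stop("unknown design"))
}
df.design("2x2x4", n = 32)  # 3 * 32 - 4 = 92
```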
With `sampleN.noninf(..., details = FALSE, print = FALSE)` results are provided as a data frame^[R Documentation. Data Frames. 2020-10-26. R manual.] with eight columns: `Design`, `alpha`, `CV`, `theta0`, `Margin`, `Sample size`, `Achieved power`, and `Target power`. To access, e.g., the sample size use either `sampleN.noninf(...)[1, 6]` or `sampleN.noninf(...)[["Sample size"]]`. We suggest using the latter in scripts for clarity.
The estimated sample size always gives the total number of subjects (not subjects/sequence in crossovers or subjects/group in parallel designs, as in some other software packages).
If the supplied margin is < 1 (`logscale = TRUE`) or < 0 (`logscale = FALSE`), it is assumed that higher response values are better. The hypotheses are with

`logscale = TRUE`

$$\small{H_0:\theta_0 \leq \log({margin})\:vs\:H_1:\theta_0>\log({margin})}$$

where $\small{\theta_0=\mu_\textrm{T}/\mu_\textrm{R}}$, and with

`logscale = FALSE`

$$\small{H_0:\theta_0 \leq {margin}\:vs\:H_1:\theta_0>{margin}}$$

where $\small{\theta_0=\mu_\textrm{T}-\mu_\textrm{R}}$.
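These hypotheses lead to a one-sided t-test whose power can be sketched in base R for a balanced 2x2 crossover on the log scale (`power_ni_2x2` is our own helper mirroring the underlying noncentral-t formula; `power.noninf()` of the package handles all designs and unbalanced cases):

```r
# Power of the non-inferiority test, balanced 2x2 crossover,
# higher-is-better (margin < 1), analysis on the log scale
power_ni_2x2 <- function(alpha = 0.025, CV, n,
                         theta0 = 0.95, margin = 0.80) {
  mse <- log(1 + CV^2)      # residual variance on the log scale
  se  <- sqrt(2 * mse / n)  # SE of the T - R log-difference
  df  <- n - 2
  ncp <- (log(theta0) - log(margin)) / se
  # P(t-statistic exceeds the critical value) under H1
  1 - pt(qt(1 - alpha, df), df, ncp = ncp)
}
round(power_ni_2x2(CV = 0.25, n = 36), 4)
```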
Estimate the sample size for an assumed intra-subject CV of 0.25, employing the defaults (`margin` 0.80 and $\small{\theta_{0}}$ 0.95).

```r
sampleN.noninf(CV = 0.25)
```
To get only the sample size:

```r
sampleN.noninf(CV = 0.25, details = FALSE, print = FALSE)[["Sample size"]]
```
Note that the sample size is always rounded up to give balanced sequences (here a multiple of two). Since the achieved power is higher than our target, this was likely the case here. Let us assess that:\
Which power will we get with a sample size of 35?

```r
power.noninf(CV = 0.25, n = 35)
```
This confirms that with 35 subjects we already reach the target power. It also means that one dropout will not compromise power.
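The search performed by `sampleN.noninf()` can be emulated by stepping up the sample size in balanced increments until the target power is met. A self-contained sketch for the balanced 2x2 crossover (our own helper, not the package's internals):

```r
# Power of the non-inferiority test (balanced 2x2, log scale)
pwr <- function(CV, n, alpha = 0.025, theta0 = 0.95, margin = 0.80) {
  mse <- log(1 + CV^2)
  ncp <- (log(theta0) - log(margin)) / sqrt(2 * mse / n)
  1 - pt(qt(1 - alpha, n - 2), n - 2, ncp = ncp)
}
# Step up in multiples of two (balanced sequences)
# until the target power is reached
n <- 4
while (pwr(CV = 0.25, n = n) < 0.80) n <- n + 2
n  # total sample size
```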
If the supplied margin is > 1 (`logscale = TRUE`) or > 0 (`logscale = FALSE`), it is assumed that lower response values are better. The hypotheses are with

`logscale = TRUE`

$$\small{H_0:\theta_0 \geq \log({margin})\:vs\:H_1:\theta_0<\log({margin})}$$

where $\small{\theta_0=\mu_\textrm{T}/\mu_\textrm{R}}$, and with

`logscale = FALSE`

$$\small{H_0:\theta_0 \geq {margin}\:vs\:H_1:\theta_0<{margin}}$$

where $\small{\theta_0=\mu_\textrm{T}-\mu_\textrm{R}}$.
Estimate the sample size for an assumed intra-subject CV of 0.25.

```r
sampleN.noninf(CV = 0.25, margin = 1.25, theta0 = 1/0.95)
```
Same sample size as in the first example, since the reciprocals of both `margin` 0.80 and $\small{\theta_{0}}$ are specified.
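This equality is no coincidence: the power of the test depends only on the distance between $\small{\log{\theta_0}}$ and $\small{\log({margin})}$, and with reciprocal values that distance is identical (only the sign flips). A quick base-R check:

```r
# Distance between log(theta0) and log(margin) drives the power;
# reciprocal values give identical distances with opposite sign
d1 <- log(0.95)     - log(0.80)  # higher-is-better setting
d2 <- log(1 / 0.95) - log(1.25)  # lower-is-better setting
c(d1 = d1, d2 = d2)
all.equal(d1, -d2)               # TRUE
```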
Compare a new modified release formulation (regimen once a day) with an intermediate release formulation (twice a day).^[European Medicines Agency, Committee for Medicinal Products for Human Use. Guideline on the pharmacokinetic and clinical evaluation of modified release dosage forms. London. 20 November 2014. EMA/CPMP/EWP/280/96 Corr1. online.] C~min~ is the target metric for efficacy (non-inferiority) and C~max~ for safety (non-superiority). Margins are 0.80 for C~min~ and 1.25 for C~max~. CVs are 0.35 for C~min~ and 0.20 for C~max~; $\small{\theta_{0}}$ 0.95 for C~min~ and 1.05 for C~max~. A full replicate design is chosen due to the high variability of C~min~.\
Which PK metric leads the sample size?
```r
res <- data.frame(design = "2x2x4", metric = c("Cmin", "Cmax"),
                  margin = c(0.80, 1.25), CV = c(0.35, 0.20),
                  theta0 = c(0.95, 1.05), n = NA, power = NA,
                  stringsAsFactors = FALSE) # this line for R < 4.0.0
for (i in 1:2) {
  res[i, 6:7] <- sampleN.noninf(design = res$design[i],
                                margin = res$margin[i],
                                theta0 = res$theta0[i],
                                CV = res$CV[i],
                                details = FALSE,
                                print = FALSE)[6:7]
}
print(res, row.names = FALSE)
```
The sample size depends on C~min~. Hence, the study is ‘overpowered’ for C~max~.
```r
power.noninf(design = "2x2x4", margin = 1.25, CV = 0.20, theta0 = 1.05, n = 32)
```
That gives us some ‘safety margin’ for C~max~.

```r
power.noninf(design = "2x2x4", margin = 1.25, CV = 0.25,
             theta0 = 1.10, n = 32) # higher CV, worse theta0
```
The bracketing approach does not necessarily give lower sample sizes than tests for equivalence. In this example we could aim at reference-scaling (ABEL) for the highly variable C~min~ and at conventional ABE for C~max~.
```r
res <- data.frame(design = "2x2x4", intended = c("ABEL", "ABE"),
                  metric = c("Cmin", "Cmax"), CV = c(0.35, 0.20),
                  theta0 = c(0.90, 1.05), n = NA, power = NA,
                  stringsAsFactors = FALSE) # this line for R < 4.0.0
res[1, 6:7] <- sampleN.scABEL(CV = res$CV[1], theta0 = res$theta0[1],
                              design = res$design[1], print = FALSE,
                              details = FALSE)[8:9]
res[2, 6:7] <- sampleN.TOST(CV = res$CV[2], theta0 = res$theta0[2],
                            design = res$design[2], print = FALSE,
                            details = FALSE)[7:8]
print(res, row.names = FALSE)
```
Which method is optimal is a case-by-case decision. Although in this example the bracketing approach seems to be the ‘winner’ (32 subjects instead of 34), we might fail if the CV of C~min~ is larger than assumed, whereas in reference-scaling we might still pass due to the expanded limits.
```r
n <- sampleN.scABEL(CV = 0.35, theta0 = 0.90, design = "2x2x4",
                    print = FALSE, details = FALSE)[["Sample size"]]
# CV and theta0 of both metrics worse than assumed
res <- data.frame(design = "2x2x4", intended = c("ABEL", "ABE"),
                  metric = c("Cmin", "Cmax"), CV = c(0.50, 0.25),
                  theta0 = c(0.88, 1.12), n = n, power = NA,
                  stringsAsFactors = FALSE) # this line for R < 4.0.0
res[1, 7] <- power.scABEL(CV = res$CV[1], theta0 = res$theta0[1],
                          design = res$design[1], n = n)
res[2, 7] <- power.TOST(CV = res$CV[2], theta0 = res$theta0[2],
                        design = res$design[2], n = n)
print(res, row.names = FALSE)
```
See also the vignettes RSABE, ABE, and PA.
Detlew Labes, Helmut Schütz\
`r Sys.Date()`