View source: R/sampleN.scABEL.sdsims.R
sampleN.scABEL.sdsims (R Documentation)
This function performs sample size estimation via power calculations of the BE decision with scaled (expanded) BE acceptance limits, based on subject data simulations.
It has an alias, sampleN.scABEL.sds().
sampleN.scABEL.sdsims(alpha = 0.05, targetpower = 0.8, theta0, theta1,
                      theta2, CV, design = c("2x3x3", "2x2x4", "2x2x3"),
                      regulator, nsims = 1e5, nstart, imax = 100,
                      print = TRUE, details = TRUE,
                      setseed = TRUE, progress)
alpha
Type I error probability. Per convention mostly set to 0.05.
targetpower
Power to achieve at least. Must be >0 and <1.
theta0
‘True’ or assumed T/R ratio.
theta1
Conventional lower ABE limit to be applied in the mixed procedure if CVwR <= CVswitch. Also the lower limit for the point estimate constraint. Defaults to 0.8.
theta2
Conventional upper ABE limit to be applied in the mixed procedure if CVwR <= CVswitch. Also the upper limit for the point estimate constraint. Defaults to 1.25.
CV
Intra-subject coefficient(s) of variation as ratio (not percent). If given as a scalar, the same CV for Test and Reference is assumed (homoscedasticity). If given as a vector of length two, CV[1] is the CV of the Test and CV[2] that of the Reference treatment (heteroscedasticity).
design
Design of the study to be planned.
regulator
Regulatory settings for the widening of the BE acceptance limits.
nsims
Number of simulations to be performed to obtain the (empirical) power. The default value 100,000 = 1e5 is usually sufficient. Consider raising this value if theta0 lies close to the acceptance limits (see the notes on extreme theta0 below).
nstart
Set this to a starting value for the sample size search if a previous run failed.
imax
Maximum number of steps in the sample size search. Defaults to 100.
print
If TRUE (default), the function prints its results.
details
If set to TRUE (default), the steps during the sample size search are shown.
setseed
Simulations depend on the starting point of the (pseudo) random number generator. To avoid differences in power between runs, a fixed seed is set via set.seed() if setseed = TRUE (the default).
progress
Should a progress bar be shown? Defaults to TRUE only if nsims >= 5e5.
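For orientation, a minimal call relying on the defaults might look as follows (a sketch; the CV value is purely illustrative):

library(PowerTOST)
# partial replicate design "2x3x3", EMA settings, all other arguments at defaults
sampleN.scABEL.sdsims(CV = 0.45)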
The methods rely on the analysis of log-transformed data, i.e., assume a
log-normal distribution on the original scale.
The expanded BE acceptance limits are calculated by the formula

    [L, U] = exp(± r_const * sWR)

with r_const the regulatory constant and sWR the standard deviation of the within-subject variability of the Reference. r_const = 0.76 (~ log(1.25)/0.29356) is used in case of regulator = "EMA".
If CVwR < CVswitch = 0.30, the conventional ABE limits apply (mixed procedure).
In case of regulator = "EMA" a cap is placed on the widened limits if CVwR > 0.50, i.e., the widened limits are held at the value calculated for CVwR = 0.50.
In case of regulator = "GCC" fixed wider limits of 0.7500 – 1.3333 are applied for CVwR > 0.30, and the conventional limits otherwise.
The simulations are done by simulating subject data (all effects fixed except the residuals) and evaluating these data via an ANOVA of all data, to obtain the point estimate of T vs. R along with its 90% CI, and via an ANOVA of the data under R(eference) only, to obtain an estimate of s²wR.
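To illustrate this evaluation step, here is a rough sketch for a single simulated 2x3x3 data set. All object names are made up, and the package's internal code instead simulates and evaluates nsims studies in a vectorized fashion.

set.seed(123)
# one partial replicate study (TRR|RTR|RRT), 8 subjects per sequence
seqs <- c("TRR", "RTR", "RRT")
dat <- do.call(rbind, lapply(seq_along(seqs), function(i) {
  trts <- strsplit(seqs[i], "")[[1]]
  do.call(rbind, lapply(1:8, function(j)
    data.frame(subj = paste0("S", i, ".", j), per = 1:3, trt = trts)))
}))
# all effects fixed except the residuals: true GMR 0.90, CVwT 0.4, CVwR 0.3
sw <- ifelse(dat$trt == "T", sqrt(log(0.4^2 + 1)), sqrt(log(0.3^2 + 1)))
dat$logPK <- log(0.90) * (dat$trt == "T") + rnorm(nrow(dat), sd = sw)
dat$trt <- factor(dat$trt, levels = c("R", "T"))
# ANOVA of all data: point estimate T vs. R and its 90% CI
# (sequence is aliased with the fixed subject effects and therefore omitted)
m_all <- lm(logPK ~ factor(per) + factor(subj) + trt, data = dat)
exp(coef(m_all)[["trtT"]])                 # point estimate
exp(confint(m_all, "trtT", level = 0.90))  # 90% CI
# ANOVA of the data under Reference only: estimate of s2wR
m_ref <- lm(logPK ~ factor(per) + factor(subj),
            data = droplevels(subset(dat, trt == "R")))
summary(m_ref)$sigma^2                     # s2wR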
The estimated sample size always gives the total number of subjects (not subjects per sequence, as in some other software packages).
Returns a data.frame with the input settings and the sample size results.
The "Sample size" column contains the total sample size.
The "nlast" column contains the last n value, which may be useful for restarting.
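Assuming the column names given above, the results can be extracted like this (a short sketch):

library(PowerTOST)
res <- sampleN.scABEL.sdsims(CV = c(0.4, 0.3), print = FALSE, details = FALSE)
res[["Sample size"]]  # total number of subjects
res[["nlast"]]        # last n of the search, useful for a restart via nstart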
Although some designs are more ‘popular’ than others, sample size estimations are valid for all of the following designs:

"2x2x4"   TRTR | RTRT
          TRRT | RTTR
          TTRR | RRTT
"2x2x3"   TRT  | RTR
          TRR  | RTT
"2x3x3"   TRR  | RTR | RRT
The sample size estimation for very extreme theta0 (<0.83 or >1.21) may be very time consuming and will eventually also fail, since the start values chosen are not really reasonable in those ranges.
If you really need sample sizes in that range, be prepared to restart the sample size estimation via the argument nstart.
Since the dependence of power on n is very flat in the mentioned region, you may also consider adapting the number of simulations in order not to get caught in the simulation error trap.
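A hedged sketch of such a restart; the values are purely illustrative, and the run will be slow due to the increased nsims:

library(PowerTOST)
# extreme theta0: supply a start value and more simulations
sampleN.scABEL.sdsims(CV = 0.45, theta0 = 0.82, design = "2x3x3",
                      nstart = 120, nsims = 5e5)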
We do the sample size estimation only for balanced designs, since the breakdown of the total number of subjects in case of unbalanced sequence groups is not unique. Moreover, the formulas used are valid only for balanced designs.
The minimum sample size is 6, even if the power is higher than the intended targetpower.
Subject data simulations are easily more than 100 times slower than simulations based on the ‘key’ statistics. We recommend this function only for the partial replicate design (TRR|RTR|RRT), assuming heteroscedasticity in the case of CVwT > CVwR.
Thus be patient and go for a cup of coffee if you use this function with high sample sizes!
H. Schütz
Tóthfalusi L, Endrényi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharmaceut Sci. 2011;15(1):73–84. Open access.
power.scABEL.sdsims, sampleN.scABEL, reg_const
# using the defaults:
# partial replicate design, targetpower=80%,
# true assumed ratio = 0.90, 1E+5 simulated studies
# ABE limits, PE constraint 0.8 - 1.25
# EMA regulatory settings
# Heteroscedasticity (CVwT 0.4, CVwR 0.3)
# compare results and run times
CV <- c(0.4, 0.3)
expl <- data.frame(method = c("subject simulations", "'key' statistics"),
                   n = NA, power = NA, seconds = NA)
start <- proc.time()[[3]]
expl[1, 2:3] <- sampleN.scABEL.sdsims(CV = CV, print = FALSE,
                                      details = FALSE)[8:9]
expl[1, 4] <- proc.time()[[3]] - start
start <- proc.time()[[3]]
expl[2, 2:3] <- sampleN.scABEL(CV = CV, print = FALSE,
                               details = FALSE)[8:9]
expl[2, 4] <- proc.time()[[3]] - start
print(expl, row.names = FALSE)
# should result in a sample size n=69, power=0.80198 for
# the subject simulations and n=66, power=0.80775 for the
# 'key' statistics