dfboot: Generic percentile dose-response / dose-finding bootstrap routine

View source: R/bootstrap.r


Generic percentile dose-response / dose-finding bootstrap routine

Description

Bootstrap routine for resampling a dose-finding or dose-response experiment. The bootstrap replicates are generated from a centered-isotonic-regression (CIR) estimate of the dose-response function, rather than resampled directly.

Usage

dfboot(
  x,
  y,
  doses = NULL,
  estfun = dynamean,
  design,
  desArgs,
  target,
  balancePt = target,
  conf = 0.9,
  B = 1000,
  seed = NULL,
  randstart = TRUE,
  showdots = TRUE,
  full = FALSE,
  ...
)

Arguments

x

numeric vector: sequence of administered doses, treatments, stimuli, etc.

y

numeric vector: sequence of observed responses. Must be the same length as x, or shorter by 1, and must be coded TRUE/FALSE or 0/1.

doses

the complete set of dose values that could have been included in the experiment. Must include all unique values in x.

estfun

the estimation function to be bootstrapped. Default is dynamean.

design, desArgs

design details passed on to dfsim; the former is a function and the latter a list of its arguments and values. For self-consistent bootstrapping, this must specify the design used in the actual experiment. See dfsim.

target

The target percentile to be estimated (as a fraction); it must be the same percentile targeted in the actual experiment. Default 0.5.

balancePt

In case the design's inherent balance point differs somewhat from target, specify it here to improve estimation accuracy; see Details for further explanation. Otherwise, this argument defaults to target.

conf

the CI's confidence level, as a fraction in (0,1).

B

Size of bootstrap ensemble, default 1000.

seed

Random seed; default NULL which leads to a "floating" seed, varying between calls.

randstart

Logical: should the bootstrap runs randomize the starting dose, or use the same starting dose as the actual experiment? Default TRUE, which we expect to produce better properties. The randomization will be weighted by the real data's dose-specific sample sizes.

showdots

Logical: should "progress dots" be printed out as the bootstrap runs progress? Default TRUE.

full

Logical: controls how detailed the output is. Default (FALSE) is only the resulting interval bounds, while TRUE returns a list with the full bootstrap ensemble of doses, responses and estimates, as well as the generating dose-response curve and the bootstrap's dose set.

...

Additional parameters passed on to estimation functions.

Details

The function should be able to generate bootstrap resamples of any dose-finding design, as long as design, desArgs are specified correctly. For the "Classical" median-finding UDD, use design = krow, desArgs = list(k=1). For other UDDs, see dfsim.
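As a hedged illustration of the paragraph above: the snippet below sketches a self-consistent bootstrap call for a Classical median-finding UDD. The doses and responses are invented for illustration, and the call is guarded so it only runs if the upndown package is available.

```r
## Illustrative sketch only -- x and y below are made-up data, not from a
## real experiment. The call is skipped when 'upndown' is not installed.
if (requireNamespace("upndown", quietly = TRUE)) {
  library(upndown)

  x <- c(2, 3, 4, 3, 2, 3, 4, 3, 3, 4)   # administered doses (invented)
  y <- c(0, 0, 1, 1, 0, 0, 1, 0, 1, 1)   # binary responses (invented)

  ## Classical median-finding UDD: design = krow with k = 1,
  ## matching the design assumed to have generated x and y.
  ci <- dfboot(x, y, design = krow, desArgs = list(k = 1),
               target = 0.5, B = 200, seed = 42, showdots = FALSE)
  print(ci)
}
```

With the default full = FALSE, the returned object holds only the resulting interval bounds; set full = TRUE to inspect the bootstrap ensemble itself.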

Like Chao and Fuh (2001) and Stylianou et al. (2003), the bootstrap samples are generated indirectly: a dose-response curve F is estimated from the data, and an ensemble of bootstrap experiments is then generated from F using the same design as the original experiment. Unlike those two approaches, which used parametric and isotonic regression respectively, with no bias mitigation and no additional provisions to improve coverage, our implementation uses CIR with the Flournoy and Oron (2020) bias mitigation. When feasible, it also allows the bootstrap runs to extend up to 2 dose levels in each direction beyond the doses visited in the actual experiment.
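The final step of such a percentile bootstrap, turning the ensemble of B point estimates into a confidence interval, can be sketched in plain R, independently of this package. Note that percentile_ci is a hypothetical helper written here for illustration, not a function of upndown, and the ensemble below is simulated stand-in data.

```r
## Illustrative only: given a bootstrap ensemble of point estimates, a
## percentile interval at confidence level 'conf' takes the (1-conf)/2
## and 1-(1-conf)/2 quantiles of the ensemble.
percentile_ci <- function(estimates, conf = 0.9) {
  tail_prob <- (1 - conf) / 2
  quantile(estimates, probs = c(tail_prob, 1 - tail_prob), na.rm = TRUE)
}

set.seed(1)
boot_ests <- rnorm(1000, mean = 2.5, sd = 0.3)  # stand-in for B estimates
percentile_ci(boot_ests)  # lower and upper bounds at conf = 0.9
```

This is only the interval-extraction step; dfboot's added value lies upstream, in generating the ensemble from a bias-mitigated CIR curve under the original design.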

Note

This function can be run stand-alone, but it is mostly meant to be called in the backend, in case a dose-averaging estimate "wants" a confidence interval (which is default behavior for dynamean(), reversmean() at present). You are welcome to figure out how to run it stand-alone, but I do not provide example code here since we still recommend CIR and its analytically-informed intervals over dose-averaging with bootstrap intervals. If you would like to run general up-and-down or dose-finding simulations, see dfsim() and its code example.

Author(s)

Assaf P. Oron <assaf.oron.at.gmail.com>

References

  • Chao MT, Fuh CD. Bootstrap methods for the up and down test on pyrotechnics sensitivity analysis. Statistica Sinica. 2001:1-21.

  • Flournoy N, Oron AP. Bias induced by adaptive dose-finding designs. J Appl Stat. 2020;47(13-15):2431-2442.

  • Stylianou M, Proschan M, Flournoy N. Estimating the probability of toxicity at the target dose following an up-and-down design. Statistics in Medicine. 2003;22(4):535-543.

See Also

dfsim


upndown documentation built on April 3, 2025, 10:57 p.m.