ssize.pcc | R Documentation
Description

Calculate the sample size of the training set needed for developing classifiers using high-dimensional data. The calculation is based on the probability of correct classification (PCC).
Usage

ssize.pcc(gamma, stdFC, prev = 0.5, nrFeatures, sigFeatures = 20, verbose = FALSE)
Arguments

gamma: tolerance between PCC(infty) and PCC(n).

stdFC: expected standardized fold-change; that is, the expected fold-change divided by the within-class standard deviation.

prev: expected prevalence.

nrFeatures: number of features (variables) considered.

sigFeatures: number of significant features; the default (20) should be sufficient for most if not all cases.

verbose: if TRUE, print intermediate results.
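A minimal usage sketch (assuming the package providing ssize.pcc is loaded) that spells out all arguments and uses verbose = TRUE to print the intermediate results mentioned above; the parameter values are chosen only for illustration:

## Hedged sketch: all arguments written out explicitly; verbose = TRUE
## prints the intermediate results of the computation.
ssize.pcc(gamma = 0.1, stdFC = 1.6, prev = 0.5, nrFeatures = 22000,
          sigFeatures = 20, verbose = TRUE)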
Details

The computations are based on the algorithm provided in Section 4.2 of Dobbin and Simon (2007). Prevalence is incorporated by the simple rough approach given in Section 4.4 (ibid.).

The results for a prevalence of 50% are identical to the numbers computed by https://brb.nci.nih.gov/brb/samplesize/samplesize4GE.html. For other prevalences the numbers differ and are larger for our implementation.
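The following sketch illustrates the point about prevalence: with prev = 0.5 the result can be compared directly with the web calculator cited above, while other prevalences are handled by the rough approach of Section 4.4 and lead to larger sample sizes (parameter values chosen only for illustration):

## Compare the default prevalence with an unbalanced one; the second call
## is expected to yield a larger sample size (see Details above).
ssize.pcc(gamma = 0.1, stdFC = 1.6, prev = 0.5, nrFeatures = 22000)
ssize.pcc(gamma = 0.1, stdFC = 1.6, prev = 0.3, nrFeatures = 22000)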
Value

Object of class "power.htest", a list of the arguments (including the computed one) augmented with method and note elements.
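A brief sketch of inspecting the returned object, assuming the usual layout of a "power.htest" list as described above:

## The returned object can be inspected like any "power.htest" result.
res <- ssize.pcc(gamma = 0.1, stdFC = 1.6, nrFeatures = 22000)
class(res)   ## "power.htest"
names(res)   ## the arguments, the computed sample size, method and note
res$method   ## short description of the calculation
res$note     ## additional note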
Note

optimize is used to solve equation (4.3) of Dobbin and Simon (2007), so you may see errors from it.
Author(s)

Matthias Kohl Matthias.Kohl@stamats.de
References

K. Dobbin and R. Simon (2007). Sample size planning for developing classifiers using high-dimensional DNA microarray data. Biostatistics, 8(1):101-117.

K. Dobbin, Y. Zhao and R. Simon (2008). How large a training set is needed to develop a classifier for microarray data? Clin Cancer Res., 14(1):108-114.
See Also

optimize
Examples

## see Table 2 of Dobbin et al. (2008)
g <- 0.1
fc <- 1.6
ssize.pcc(gamma = g, stdFC = fc, nrFeatures = 22000)

## see Table 3 of Dobbin et al. (2008)
g <- 0.05
fc <- 1.1
ssize.pcc(gamma = g, stdFC = fc, nrFeatures = 22000)