DBR | R Documentation
Description

Applies the DBR-λ procedure, with or without computing the critical constants, to a set of p-values and their respective discrete supports.
Usage

DBR(test.results, ...)

## Default S3 method:
DBR(
  test.results,
  pCDFlist,
  alpha = 0.05,
  lambda = NULL,
  ret.crit.consts = FALSE,
  select.threshold = 1,
  pCDFlist.indices = NULL,
  ...
)

## S3 method for class 'DiscreteTestResults'
DBR(
  test.results,
  alpha = 0.05,
  lambda = NULL,
  ret.crit.consts = FALSE,
  select.threshold = 1,
  ...
)
Arguments

test.results
  either a numeric vector with raw p-values or an object of class 'DiscreteTestResults' (as produced by generate.pvalues(); see Examples).

...
  further arguments to be passed to or from other methods; they are ignored here.

pCDFlist
  list of the supports of the CDFs of the p-values; each list element must be a numeric vector sorted in increasing order.

alpha
  single real number strictly between 0 and 1 indicating the target FDR level.

lambda
  real number strictly between 0 and 1 specifying the DBR tuning parameter; if NULL (the default), it is set to alpha (see Details).

ret.crit.consts
  single boolean specifying whether critical constants are to be computed.

select.threshold
  single real number strictly between 0 and 1 indicating the largest raw p-value to be considered; only p-values not exceeding this threshold are selected, and the procedure takes this selection into account. If it equals 1 (the default), all p-values are selected.

pCDFlist.indices
  list of numeric vectors containing the test indices that indicate to which raw p-values each unique support in pCDFlist belongs.
Details

DBR-λ is the discrete version of the Blanchard-Roquain-λ procedure (see References). The authors of the latter suggest taking lambda = alpha (see their Proposition 17), which explains the choice of the default value here.

Computing critical constants (ret.crit.consts = TRUE) requires considerably more execution time, especially if the number of unique supports is large. We recommend computing them only when they are actually needed, e.g. for illustrating the rejection set in a plot or for other theoretical reasons.
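The runtime difference can be checked directly. A minimal sketch, assuming the DiscreteFDR package is installed and reusing the data construction from the Examples section:

```r
library(DiscreteFDR)

# Example data as in the Examples section
X1 <- c(4, 2, 2, 14, 6, 9, 4, 0, 1)
X2 <- c(0, 0, 1, 3, 2, 1, 2, 2, 2)
df <- data.frame(X1, Y1 = 148 - X1, X2, Y2 = 132 - X2)
test.result <- generate.pvalues(df, "fisher")
raw.pvalues <- test.result$get_pvalues()
pCDFlist    <- test.result$get_pvalue_supports()

# Without critical constants: fast
system.time(DBR(raw.pvalues, pCDFlist))

# With critical constants: noticeably slower when many unique supports exist
system.time(DBR(raw.pvalues, pCDFlist, ret.crit.consts = TRUE))
```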
Value

A 'DiscreteFDR' S3 class object whose elements are:

Rejected
  rejected raw p-values.

Indices
  indices of rejected hypotheses.

Num.rejected
  number of rejections.

Adjusted
  adjusted p-values.

Critical.constants
  critical values (only exists if computations were performed with ret.crit.consts = TRUE).

Data
  list with input data.

Data$Method
  character string describing the used algorithm, e.g. 'Discrete Benjamini-Hochberg procedure (step-up)'.

Data$Raw.pvalues
  observed p-values.

Data$pCDFlist
  list of the p-value supports.

Data$FDR.level
  FDR level alpha.

Data$DBR.Tuning
  value of the tuning parameter lambda.

Data$Data.name
  the respective variable names of the input data.

Select
  list with data related to p-value selection; only exists if select.threshold < 1.

Select$Threshold
  p-value selection threshold (select.threshold).

Select$Effective.Thresholds
  results of each p-value CDF at the threshold.

Select$Pvalues
  selected p-values.

Select$Indices
  indices of selected p-values.

Select$Scaled
  scaled selected p-values.

Select$Number
  number of selected p-values.
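A short sketch of accessing the listed elements (hypothetical session; assumes the DiscreteFDR package is installed and the data construction from the Examples section):

```r
library(DiscreteFDR)

# p-values and supports as in the Examples section
X1 <- c(4, 2, 2, 14, 6, 9, 4, 0, 1)
X2 <- c(0, 0, 1, 3, 2, 1, 2, 2, 2)
df <- data.frame(X1, Y1 = 148 - X1, X2, Y2 = 132 - X2)
test.result <- generate.pvalues(df, "fisher")

res <- DBR(test.result, ret.crit.consts = TRUE)

res$Num.rejected        # number of rejections
res$Indices             # indices of rejected hypotheses
res$Adjusted            # adjusted p-values
res$Critical.constants  # exists because ret.crit.consts = TRUE
res$Data$FDR.level      # alpha used (default 0.05)
```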
References

G. Blanchard and E. Roquain (2009). Adaptive false discovery rate control under independence and dependence. Journal of Machine Learning Research, 10, pp. 2837-2871. doi:10.48550/arXiv.0707.0536
See Also

discrete.BH(), DBH(), ADBH(), DBY()
Examples

X1 <- c(4, 2, 2, 14, 6, 9, 4, 0, 1)
X2 <- c(0, 0, 1, 3, 2, 1, 2, 2, 2)
N1 <- rep(148, 9)
N2 <- rep(132, 9)
Y1 <- N1 - X1
Y2 <- N2 - X2
df <- data.frame(X1, Y1, X2, Y2)
df
# Compute p-values and their supports for Fisher's exact test
test.result <- generate.pvalues(df, "fisher")
raw.pvalues <- test.result$get_pvalues()
pCDFlist <- test.result$get_pvalue_supports()
# DBR without critical values; using test results object
DBR.fast <- DBR(test.result)
summary(DBR.fast)
# DBR with critical values; using extracted p-values and supports
DBR.crit <- DBR(raw.pvalues, pCDFlist, ret.crit.consts = TRUE)
summary(DBR.crit)
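As the Details section notes, critical constants are mainly useful for illustrating the rejection set in a plot. Assuming the package provides a plot method for 'DiscreteFDR' objects (a sketch, not a verified call signature), the DBR.crit object above could be visualised like this:

```r
# Hypothetical continuation of the Examples above: plot raw p-values
# together with the critical constants; points at or below the
# critical-value curve correspond to rejected hypotheses.
plot(DBR.crit)
```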