run_evals: Main function to benchmark FDR methods on given simulations


View source: R/benchmarking.R

Description

run_evals: Main function to benchmark FDR methods on given simulations.

Usage

run_evals(sim_funs, fdr_methods, nreps, alphas, ...)

Arguments

sim_funs

List of simulation functions, one per simulation setting

fdr_methods

List of FDR controlling methods to be benchmarked

nreps

Integer, number of Monte Carlo replicates for the simulations

alphas

Numeric vector of nominal significance levels at which to apply the FDR controlling methods

...

Additional arguments passed to sim_fun_eval

Details

This is the main workhorse function which runs all simulation benchmarks for IHWpaper. It takes the inputs described above and returns a data.frame summarizing the results of the benchmark.

Value

A data.frame which summarizes the results of the numerical experiment

Examples

library(IHWpaper)

nreps <- 3    # Monte Carlo replicates
ms <- 5000    # number of hypothesis tests
eff_sizes <- c(2, 3)

# one t-test simulation setting per effect size
sim_funs <- lapply(eff_sizes,
                   function(x) du_ttest_sim_fun(ms, 0.95, x, uninformative_filter = FALSE))

# FDR controlling methods to be benchmarked, wrapped for use with a continuous covariate
continuous_methods_list <- list(bh,
                                lsl_gbh,
                                clfdr,
                                ddhf)
fdr_methods <- lapply(continuous_methods_list, continuous_wrap)

eval_table <- run_evals(sim_funs, fdr_methods, nreps, 0.1, BiocParallel = FALSE)
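
The returned table can then be aggregated over the Monte Carlo replicates. The following sketch is not part of the package examples; the column names it uses ("FDP", "fdr_method", "alpha") are assumptions made for illustration and may not match the actual columns produced by run_evals.

# Minimal follow-up sketch (assumed column names, not from the package docs):
# average the false discovery proportion per method and nominal level.
fdr_by_method <- aggregate(FDP ~ fdr_method + alpha, data = eval_table, FUN = mean)
print(fdr_by_method)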
