View source: R/evaluateResults.R
getBenchmarkResults (R Documentation)
Compute a benchmark table with the mean overall results.
getBenchmarkResults(
  errorList,
  nameVec,
  tableTCs,
  errorParam = "zzDevAbsCutoff_Ov",
  cutoffZ = 5,
  catList = c(
    "fractionPathol <= 0.20 & N <= 5000",
    "fractionPathol <= 0.20 & N > 5000",
    "fractionPathol > 0.20 & N <= 5000",
    "fractionPathol > 0.20 & N > 5000"
  ),
  catLabels = c("lowPlowN", "lowPhighN", "highPlowN", "highPhighN"),
  perfCombination = c("mean", "median", "sum")
)
errorList
(list) containing the computed errors for the different (indirect) methods/algorithms

nameVec
(character) vector specifying the names of the different (indirect) methods/algorithms

tableTCs
(data.frame) containing all information about the simulated test sets

errorParam
(character) specifying for which error parameter the data frame should be generated

cutoffZ
(integer) specifying whether, and if so which, cutoff for the absolute z-score deviation should be used to classify results as implausible and exclude them from the overall benchmark score (default: 5)

catList
(character) vector containing the categories used to split the dataset

catLabels
(character) vector containing the labels used for the categories

perfCombination
(character) specifying which measure should be used to compute the overall benchmark score; one of "mean" (default), "median", or "sum"
(data frame) containing the computed benchmark results
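A minimal usage sketch of the call above. The input objects (`errorsMethodA`, `errorsMethodB`, `tableTCs`) are hypothetical placeholders standing in for objects produced by the package's earlier error-computation and test-set-simulation steps; only the parameters documented here are used.

```r
# Sketch only: errorsMethodA / errorsMethodB are assumed to hold the
# per-method error results, and tableTCs the simulated test-set table.
benchmarkTable <- getBenchmarkResults(
  errorList  = list(errorsMethodA, errorsMethodB),  # one entry per (indirect) method
  nameVec    = c("MethodA", "MethodB"),             # names for the methods, same order
  tableTCs   = tableTCs,                            # info about the simulated test sets
  errorParam = "zzDevAbsCutoff_Ov",                 # default error parameter
  cutoffZ    = 5                                    # exclude results with absolute z-score deviation > 5
)
```

With the default settings, results are split into the four fractionPathol/N categories listed above and combined with the mean to yield the overall benchmark score.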
Tatjana Ammer tatjana.ammer@roche.com