calcExternalPerformance | R Documentation
If calcExternalPerformance is used, such as when a vector of known classes and a vector of predicted classes were determined outside of the ClassifyR package, a single metric value is calculated. If calcCVperformance is used, it annotates the results of calling crossValidate, runTests or runTest with one of the user-specified performance measures.
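For instance, known and predicted class vectors produced outside of ClassifyR can be compared directly. A minimal sketch, assuming ClassifyR is installed; the class vectors here are hypothetical:

```r
library(ClassifyR)

# Hypothetical known and predicted classes for eight samples.
known <- factor(c("A", "A", "A", "A", "B", "B", "B", "B"))
predicted <- factor(c("A", "A", "B", "A", "B", "B", "A", "B"))

# Two of the eight predictions disagree with the known classes,
# so the ordinary error rate is 2 / 8 = 0.25.
calcExternalPerformance(known, predicted, performanceTypes = "Error")
```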
## S4 method for signature 'factor,factor'
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

## S4 method for signature 'Surv,numeric'
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

## S4 method for signature 'factor,tabular'
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

## S4 method for signature 'ClassifyResult'
calcCVperformance(result, performanceTypes = "auto")

performanceTable(
  resultsList,
  performanceTypes = "auto",
  aggregate = c("median", "mean")
)
actualOutcome: A factor vector or survival information specifying each sample's known outcome.

predictedOutcome: A factor vector or survival information of the same length as actualOutcome.

performanceTypes: Default: "auto".

result: An object of class ClassifyResult.

resultsList: A list of modelling results. Each element must be of type ClassifyResult.

aggregate: Default: "median". Either "median" or "mean".
All metrics except Matthews Correlation Coefficient are suitable for evaluating classification scenarios with more than two classes and are reimplementations of those available from Intel DAAL.
If crossValidate, runTests or runTest was run in resampling mode, one performance measure is produced for every resampling. Otherwise, if the leave-k-out mode was used, the predictions are concatenated and one performance measure is calculated for all classifications.
"Balanced Error" calculates the balanced error rate and is better suited to class-imbalanced data sets than the ordinary error rate specified by "Error". "Sample Error" calculates the error rate of each sample individually, which may help to identify the samples that contribute most to the overall error rate and check them for confounding factors. Precision, recall and F1 score have micro and macro summary versions. The macro versions are preferable because the metric will not have a good score if there is substantial class imbalance and the classifier predicts all samples as belonging to the majority class.
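The advantage of the balanced error rate under class imbalance can be sketched with hypothetical vectors, assuming ClassifyR is installed:

```r
library(ClassifyR)

# Nine "Healthy" samples and one "Disease" sample; the classifier
# naively predicts the majority class for every sample.
actual <- factor(c(rep("Healthy", 9), "Disease"), levels = c("Healthy", "Disease"))
predicted <- factor(rep("Healthy", 10), levels = c("Healthy", "Disease"))

# The ordinary error rate looks good: only 1 of 10 samples is wrong (0.1).
calcExternalPerformance(actual, predicted, performanceTypes = "Error")

# The balanced error rate averages the per-class error rates,
# (0 + 1) / 2 = 0.5, exposing that the minority class is never
# predicted correctly.
calcExternalPerformance(actual, predicted, performanceTypes = "Balanced Error")
```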
If calcCVperformance was run, the value is an updated ClassifyResult object, with new metric values in the performance slot. If calcExternalPerformance was run, the value is the performance metric itself.
Dario Strbenac
library(ClassifyR)

# Predicted classes for 50 predictions of ten samples; the sample names
# are recycled across the predictions (five repeats of ten samples).
predictTable <- DataFrame(sample = paste("A", 1:10, sep = ''),
                          class = factor(sample(LETTERS[1:2], 50, replace = TRUE)))
actual <- factor(sample(LETTERS[1:2], 10, replace = TRUE))
result <- ClassifyResult(DataFrame(characteristic = "Data Set", value = "Example"),
                         paste("A", 1:10, sep = ''), paste("Gene", 1:50),
                         list(paste("Gene", 1:50), paste("Gene", 1:50)),
                         list(paste("Gene", 1:5), paste("Gene", 1:10)),
                         list(function(oracle){}), NULL, predictTable, actual)
result <- calcCVperformance(result)
performance(result)