Description
Calculate the relative number of correct/incorrect classifications and the following evaluation measures:
tpr True positive rate (Sensitivity, Recall)
fpr False positive rate (Fall-out)
fnr False negative rate (Miss rate)
tnr True negative rate (Specificity)
ppv Positive predictive value (Precision)
for False omission rate
lrp Positive likelihood ratio (LR+)
fdr False discovery rate
npv Negative predictive value
acc Accuracy
lrm Negative likelihood ratio (LR-)
dor Diagnostic odds ratio
For details on the measures used, see measures and also
https://en.wikipedia.org/wiki/Receiver_operating_characteristic.
The element for the false omission rate in the resulting object is not called for but fomr, since for is a reserved word in R and should never be used as a variable name in an object.
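To make the definitions concrete, here is a minimal sketch of how each measure is computed from the four cells of a confusion matrix. The counts tp, fp, fn, tn are hypothetical example values; the formulas are the standard ones described at the link above.

tp = 40; fp = 10; fn = 5; tn = 45      # hypothetical example counts
tpr = tp / (tp + fn)                   # true positive rate (sensitivity, recall)
fpr = fp / (fp + tn)                   # false positive rate (fall-out)
fnr = fn / (tp + fn)                   # false negative rate (miss rate)
tnr = tn / (fp + tn)                   # true negative rate (specificity)
ppv = tp / (tp + fp)                   # positive predictive value (precision)
fomr = fn / (fn + tn)                  # false omission rate (named fomr, not for)
fdr = fp / (tp + fp)                   # false discovery rate
npv = tn / (fn + tn)                   # negative predictive value
acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
lrp = tpr / fpr                        # positive likelihood ratio (LR+)
lrm = fnr / tnr                        # negative likelihood ratio (LR-)
dor = lrp / lrm                        # diagnostic odds ratio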
Usage

calculateROCMeasures(pred)

## S3 method for class 'ROCMeasures'
print(x, abbreviations = TRUE, digits = 2, ...)
Arguments

pred ([Prediction]): Prediction object.
x ([ROCMeasures]): Object created by calculateROCMeasures.
abbreviations (logical(1)): If TRUE, a short explanation of each abbreviated measure is printed additionally. Default is TRUE.
digits (integer(1)): Number of digits the measures are rounded to. Default is 2.
... (any): Currently not used.
Value

([ROCMeasures]). A list containing two elements: confusion.matrix, the 2 x 2 confusion matrix of relative frequencies, and measures, a list of the measures listed above.
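Accordingly, the returned object can be inspected directly. The element names below follow the structure described above; the individual measure names (tpr, fomr, ...) are assumed to match the abbreviations in the Description.

r = calculateROCMeasures(pred)
r$confusion.matrix   # 2 x 2 confusion matrix of relative frequencies
r$measures$tpr       # true positive rate
r$measures$fomr      # false omission rate (note: fomr, not for)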
Methods (by generic)

print: Prints the confusion matrix and the calculated measures; output is controlled by the abbreviations and digits arguments described above.
See Also

Other roc: asROCRPrediction, plotViperCharts

Other performance: ConfusionMatrix, calculateConfusionMatrix, estimateRelativeOverfitting, makeCostMeasure, makeCustomResampledMeasure, makeMeasure, measures, performance
Examples

lrn = makeLearner("classif.rpart", predict.type = "prob")
fit = train(lrn, sonar.task)
pred = predict(fit, task = sonar.task)
calculateROCMeasures(pred)
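Building on the example above, the print method's arguments can be used to adjust the output; this sketch uses the signature shown in the Usage section.

res = calculateROCMeasures(pred)
print(res, abbreviations = FALSE, digits = 3)  # suppress explanations, round to 3 digits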