Description

Calculate statistical measures of binary classification performance [1] from the confusion table output of table.evaluate.
Usage

confusion(input.table)
Arguments

input.table: the output confusion table from table.evaluate
Details

true positive: tp; false positive: fp; true negative: tn; false negative: fn;
positives in reference network: p; negatives in reference network: n;
true positive rate: tpr = recall = \frac{tp}{tp+fn}; false positive rate: fpr = \frac{fp}{fp+tn};
true negative rate: tnr = \frac{tn}{tn+fp}; false negative rate: fnr = \frac{fn}{fn+tp};
precision: precision = \frac{tp}{tp+fp}; negative predictive value: npv = \frac{tn}{tn+fn};
false discovery rate: fdr = \frac{fp}{fp+tp}; accuracy: accuracy = \frac{tp+tn}{p+n};
F1 score: f1 = \frac{2tp}{2tp+fp+fn};
Matthews correlation coefficient:
mcc = \frac{tp \times tn - fp \times fn}{\sqrt{(tp+fp) \times (tp+fn) \times (tn+fp) \times (tn+fn)}}
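As a language-agnostic illustration of the formulas above (the package itself is R), the measures can be computed directly from the four confusion-table counts. The helper name confusion_measures and the example counts are hypothetical, not part of the package:

```python
from math import sqrt

def confusion_measures(tp, fp, tn, fn):
    """Compute the performance measures listed in Details from confusion-table counts."""
    p = tp + fn  # positives in the reference network
    n = tn + fp  # negatives in the reference network
    mcc_denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "tpr": tp / (tp + fn),            # true positive rate (= recall)
        "fpr": fp / (fp + tn),            # false positive rate
        "tnr": tn / (tn + fp),            # true negative rate
        "fnr": fn / (fn + tp),            # false negative rate
        "precision": tp / (tp + fp),
        "npv": tn / (tn + fn),            # negative predictive value
        "fdr": fp / (fp + tp),            # false discovery rate
        "accuracy": (tp + tn) / (p + n),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "mcc": (tp * tn - fp * fn) / mcc_denom,
    }

measures = confusion_measures(tp=40, fp=10, tn=45, fn=5)
print(round(measures["f1"], 3))  # → 0.842
```

Note that several measures divide by a row or column total; with degenerate inputs (e.g. tp + fp == 0) those denominators are zero, and an implementation would need to guard against that case.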
Value

confusion returns a data frame of performance measures; see Details.
References

1. Powers DMW: Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation. Technical report, Adelaide, Australia; 2007.