confusion: Statistical derivations from a confusion table


Description

Calculate statistical measures of the performance of a binary classification [1] from the confusion table produced by table.evaluate.

Usage

confusion(input.table)

Arguments

input.table

the confusion table output by table.evaluate.
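
A minimal usage sketch is shown below. The 2x2 layout of the table is an assumption made for illustration; consult the table.evaluate help page for the exact structure it returns:

library(RLowPC)
## hypothetical counts arranged as a 2x2 confusion table; the actual
## layout returned by table.evaluate may differ
input.table <- as.table(matrix(c(40, 5, 10, 45), nrow = 2,
                               dimnames = list(predicted = c("1", "0"),
                                               reference = c("1", "0"))))
confusion(input.table)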

Details

true positive: tp; false positive: fp; true negative: tn; false negative: fn;
positives in reference network: p = tp + fn; negatives in reference network: n = tn + fp;
true positive rate: tpr = recall = \frac{tp}{tp+fn}; false positive rate: fpr = \frac{fp}{fp+tn};
true negative rate: tnr = \frac{tn}{tn+fp}; false negative rate: fnr = \frac{fn}{fn+tp};
precision: precision = \frac{tp}{tp+fp}; negative predictive value: npv = \frac{tn}{tn+fn};
false discovery rate: fdr = \frac{fp}{fp+tp}; accuracy: accuracy = \frac{tp+tn}{p+n};
F1 score: f1 = \frac{2tp}{2tp+fp+fn};
Matthews correlation coefficient:

mcc = \frac{tp \times tn - fp \times fn}{\sqrt{(tp+fp)\times(tp+fn)\times(tn+fp)\times(tn+fn)}}
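
All of these measures reduce to simple arithmetic on the four counts. The following plain-R sketch reproduces the formulas above from hypothetical counts; it illustrates the definitions and is not the package's implementation:

tp <- 40; fp <- 10; tn <- 45; fn <- 5    # hypothetical counts
p <- tp + fn; n <- tn + fp               # positives/negatives in reference network
tpr <- tp / (tp + fn)                    # true positive rate (recall)
fpr <- fp / (fp + tn)                    # false positive rate
tnr <- tn / (tn + fp)                    # true negative rate
fnr <- fn / (fn + tp)                    # false negative rate
precision <- tp / (tp + fp)
npv <- tn / (tn + fn)                    # negative predictive value
fdr <- fp / (fp + tp)                    # false discovery rate
accuracy <- (tp + tn) / (p + n)
f1 <- 2 * tp / (2 * tp + fp + fn)
mcc <- (tp * tn - fp * fn) /
  sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))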

Value

confusion returns a data frame of the performance measures listed in Details.

References

1. Powers DMW. Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation. Adelaide, Australia; 2007.

