Description

For a vector of classifications and truth labels, we create a confusion matrix. We allow binary and multi-class classifications and compute the following four measures for each class:

True Positives (TP)
True Negatives (TN)
False Positives (FP)
False Negatives (FN)

Usage

confusion(truthClass, predictedClass)

Arguments

truthClass: vector of ground truth classification labels
predictedClass: vector of predicted classification labels
Details

For multi-class classification, we consider each class in a binary (one-vs-rest) context. For example, suppose that we have three food condiment classes: ketchup, mustard, and other. When calculating the TP, TN, FP, and FN values for ketchup, we consider each observation as either 'ketchup' or 'not ketchup'. Similarly, for mustard we would consider 'mustard' and 'not mustard', and for other we would consider 'other' and 'not other'.
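The one-vs-rest counting described above can be sketched in base R. The vectors and the helper function below are illustrative, not part of the package:

```r
# Toy ground-truth and predicted labels for the condiment example.
truth     <- c("ketchup", "mustard", "other", "ketchup", "other")
predicted <- c("ketchup", "ketchup", "other", "mustard", "other")

# One-vs-rest counts for a single class (here, "ketchup").
class_counts <- function(truth, predicted, class) {
  is_pos  <- truth == class        # actually belongs to the class
  is_pred <- predicted == class    # predicted to belong to the class
  c(TP = sum(is_pos  & is_pred),   # correctly labeled as the class
    TN = sum(!is_pos & !is_pred),  # correctly labeled as not the class
    FP = sum(!is_pos & is_pred),   # labeled as the class, but is not
    FN = sum(is_pos  & !is_pred))  # is the class, but labeled otherwise
}

class_counts(truth, predicted, "ketchup")
# TP = 1, TN = 2, FP = 1, FN = 1
```

Repeating the same call with class = "mustard" or class = "other" gives the counts for the remaining classes.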
With the above counts for each class, we can quickly calculate a variety of class-specific and aggregate classification accuracy measures.
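As a sketch of how such measures follow from the four counts (the formulas are the standard definitions; the function name is illustrative, not from the package):

```r
# Common per-class accuracy measures derived from the four counts.
measures <- function(TP, TN, FP, FN) {
  c(sensitivity = TP / (TP + FN),                  # true positive rate (recall)
    specificity = TN / (TN + FP),                  # true negative rate
    precision   = TP / (TP + FP),                  # positive predictive value
    accuracy    = (TP + TN) / (TP + TN + FP + FN)) # overall agreement
}

measures(TP = 1, TN = 2, FP = 1, FN = 1)
# sensitivity = 0.5, specificity = 2/3, precision = 0.5, accuracy = 0.6
```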
Value

A list with the confusion matrix results for each class.
Examples

data(prediction_values)
confusion(prediction_values[,"Curated_Quality"], prediction_values[,"PredictClass"])