accuracy: Computes classification accuracy from the confusion matrix


Description

For each class, we calculate the classification accuracy to summarize the signature's performance on that class. We then compute one of two aggregate scores to summarize the overall performance of the signature.

Usage

accuracy(confusionSummary, aggregate = c("micro", "macro"))

Arguments

confusionSummary

list containing the confusion summary for a set of classifications

aggregate

string indicating the type of aggregation; by default, "micro". See Details.

Details

We define the accuracy as the proportion of correct classifications.

The two aggregate score options are the macro- and micro-aggregate (average) scores. The macro-aggregate score is the arithmetic mean of the binary scores for each class. The micro-aggregate score is a weighted average of each class's binary score, where the weights are determined by the sample sizes of the classes. By default, we use the micro-aggregate score because it is more robust, but the macro-aggregate score may be more intuitive to some users.

Note that the macro- and micro-aggregate scores are the same for classification accuracy.

The accuracy measure ranges from 0 to 1 with 1 being the optimal value.
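As a sketch of the aggregation described above, using hypothetical per-class binary accuracies and class sample sizes (not this package's internal confusion-summary representation):

```r
# Hypothetical per-class binary accuracies and class sample sizes
class_acc <- c(setosa = 0.95, versicolor = 0.90, virginica = 0.85)
class_n   <- c(setosa = 50,   versicolor = 30,   virginica = 20)

# Macro-aggregate: unweighted arithmetic mean of the per-class scores
macro <- mean(class_acc)                      # 0.9

# Micro-aggregate: mean weighted by each class's sample size
micro <- weighted.mean(class_acc, w = class_n)  # 0.915
```

Here the larger classes pull the micro-aggregate score toward their own accuracies, which is why the micro average is considered more robust when class sizes are unbalanced.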

Value

list with the accuracy measure for each class as well as the macro- and micro-averages (aggregate measures across all classes).

Examples

data(prediction_values)

# Create the confusion matrix from the curated and predicted classes
confmat <- confusion(prediction_values[,"Curated_Quality"], prediction_values[,"PredictClass"])

# Per-class accuracy plus the macro- and micro-aggregate scores
accuracy(confmat)
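Since accuracy is defined as the proportion of correct classifications, the overall value can be cross-checked in base R without this package. A minimal sketch, using hypothetical label vectors in place of the prediction_values dataset:

```r
# Hypothetical true and predicted class labels
truth     <- c("a", "a", "b", "b", "b", "c")
predicted <- c("a", "b", "b", "b", "c", "c")

# Base-R confusion matrix; correct classifications lie on the diagonal
cm <- table(truth, predicted)

# Overall accuracy: correct classifications over total classifications
overall_acc <- sum(diag(cm)) / sum(cm)  # 4/6
```

This base-R figure should agree with the micro-aggregate score reported by accuracy() for the same data.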

PNNL-Comp-Mass-Spec/glmnetGLR documentation built on May 28, 2019, 2:23 p.m.