specificity: Computes classification specificity from the confusion matrix...

Description

For each class, we calculate the classification specificity to summarize the signature's performance for that class. We then compute one of two aggregate scores to summarize the overall performance of the signature.

Usage

specificity(confusionSummary, aggregate = c("micro", "macro"))

Arguments

confusionSummary

list containing the confusion summary for a set of classifications

aggregate

string that indicates the type of aggregation; the default is "micro". See Details.

Details

To estimate specificity_j for the jth class, we compute

specificity_j = TN_j / (TN_j + FP_j),

where TN_j and FP_j are the true negatives and false positives, respectively. More specifically, TN_j is the number of observations that we correctly classified into classes other than the jth class, and FP_j is the number of observations that we incorrectly classified into the jth class.
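
For illustration, the per-class calculation can be sketched in base R (a minimal sketch, not the package's implementation), assuming a confusion matrix cm whose rows are the true classes and whose columns are the predicted classes:

per_class_specificity <- function(cm) {
  total <- sum(cm)
  sapply(seq_len(ncol(cm)), function(j) {
    fp <- sum(cm[-j, j])             # truly another class, predicted as class j
    tn <- total - sum(cm[j, ]) - fp  # neither truly class j nor predicted as class j
    tn / (tn + fp)                   # specificity_j = TN_j / (TN_j + FP_j)
  })
}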

The two aggregate score options are the macro- and micro-aggregate (average) scores. The macro-aggregate score is the arithmetic mean of the binary scores for each class. The micro-aggregate score is a weighted average of each class' binary score, where the weights are determined by the sample sizes for each class. By default, we use the micro-aggregate score because it is more robust, but the macro-aggregate score might be more intuitive to some users.
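
Continuing the sketch above (same assumptions), the two aggregates amount to:

spec  <- per_class_specificity(cm)
n_j   <- rowSums(cm)                 # sample size of each true class
macro <- mean(spec)                  # unweighted arithmetic mean of the binary scores
micro <- sum(spec * n_j) / sum(n_j)  # weighted by the class sample sizes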

Notice that the specificity is equal to the true negative rate (TNR).

The specificity measure ranges from 0 to 1 with 1 being the optimal value.

Value

list with the specificity measure for each class as well as the macro- and micro-averages (aggregate measures across all classes).

Examples

# Load the example data set of curated and predicted class labels
data(prediction_values)

# Summarize the confusion between the curated (Curated_Quality) and
# predicted (PredictClass) class labels
confmat <- confusion(prediction_values[,"Curated_Quality"],
                     prediction_values[,"PredictClass"])

# Per-class specificity with the default micro-aggregate score
specificity(confmat)
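
To request the macro-aggregate score instead, pass the aggregate argument explicitly:

specificity(confmat, aggregate = "macro")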
