```r
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library("riskyr")  # load the "riskyr" package
```
Behold the aptly named "confusion matrix":

| Decision | Condition present (TRUE) | Condition absent (FALSE) | Sum: | (b) by decision: |
|:---------|:------------------------:|:------------------------:|:----:|:----------------:|
| positive (TRUE): | `hi` | `fa` | `dec.pos` | `PPV = hi/dec.pos` |
| negative (FALSE): | `mi` | `cr` | `dec.neg` | `NPV = cr/dec.neg` |
| Sum: | `cond.true` | `cond.false` | `N` | |
| (a) by condition: | `sens = hi/cond.true` | `spec = cr/cond.false` | | `acc = dec.cor/N = (hi + cr)/N` |
Most people, including medical experts and social scientists, struggle to understand the implications of this matrix. This is no surprise when considering explanations like the corresponding article on Wikipedia, which squeezes more than a dozen metrics out of four essential frequencies (`hi`, `mi`, `fa`, and `cr`). While each particular metric is quite simple, their abundance and interdependence can be overwhelming.

Fortunately, the basic matrix is actually quite simple and its implications are rather straightforward. In the following, we aim to disentangle the profusion of measures and summarize those parts of the confusion matrix that a risk-literate person really needs to know.
Condensed to its core, the confusion matrix looks like this:

| Decision | Condition present (TRUE) | Condition absent (FALSE) |
|:---------|:------------------------:|:------------------------:|
| positive (TRUE): | `hi` | `fa` |
| negative (FALSE): | `mi` | `cr` |
This is not so confusing any more. And, perhaps surprisingly, all other metrics follow from this simple core in a straightforward way.

Essentially, the confusion matrix views a population of `N` individuals in different ways by adopting different perspectives. "Adopting a perspective" means that we can distinguish between individuals on the basis of some criterion. The two primary criteria used here are:

(a) each individual's condition, which can either be present (`TRUE`) or absent (`FALSE`), and

(b) each individual's decision, which can either be positive (`TRUE`) or negative (`FALSE`).
Numerically, the adoption of each of these two perspectives splits the population into two subgroups.^[To split a group into subgroups, some criterion for classifying the individuals of the group has to be used. If a criterion is binary (i.e., assigns only two different values), its application yields two subgroups. In the present case, both an individual's condition and the corresponding decision are binary criteria.] Applying two different splits of a population into two subgroups results in $2 \times 2 = 4$ cases, which form the core of the confusion matrix:
- `hi` represents hits (or true positives): condition present (`TRUE`) & decision positive (`TRUE`).
- `mi` represents misses (or false negatives): condition present (`TRUE`) & decision negative (`FALSE`).
- `fa` represents false alarms (or false positives): condition absent (`FALSE`) & decision positive (`TRUE`).
- `cr` represents correct rejections (or true negatives): condition absent (`FALSE`) & decision negative (`FALSE`).

Importantly, all frequencies required to understand and compute various metrics are combinations of these four frequencies, which is why we refer to them as the four essential frequencies (see the vignette on Data formats). For instance, adding up the columns and rows of the matrix yields the frequencies of the two subgroups that result from adopting our two perspectives on the population `N` (or splitting `N` into subgroups by applying two binary criteria):
(a) by condition (corresponding to the two columns of the confusion matrix):
$$
\begin{aligned}
\texttt{N} \ &= \ \texttt{cond.true} \ + \ \texttt{cond.false} & \textrm{(a)} \\
 \ &= \ (\texttt{hi} + \texttt{mi}) \ + \ (\texttt{fa} + \texttt{cr}) &
\end{aligned}
$$
(b) by decision (corresponding to the two rows of the confusion matrix):
$$
\begin{aligned}
\texttt{N} \ &= \ \texttt{dec.pos} \ + \ \texttt{dec.neg} & \textrm{(b)} \\
 \ &= \ (\texttt{hi} + \texttt{fa}) \ + \ (\texttt{mi} + \texttt{cr}) &
\end{aligned}
$$
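Both splits can be verified numerically. The following base R sketch (using four hypothetical frequency values, not functions of the riskyr package) computes the marginal sums from the four essential frequencies:

```r
# Four essential frequencies (hypothetical example values):
hi <- 40  # hits (true positives)
mi <- 10  # misses (false negatives)
fa <- 15  # false alarms (false positives)
cr <- 35  # correct rejections (true negatives)

# (a) Split by condition (column sums of the confusion matrix):
cond_true  <- hi + mi
cond_false <- fa + cr

# (b) Split by decision (row sums of the confusion matrix):
dec_pos <- hi + fa
dec_neg <- mi + cr

# Both perspectives partition the same population N:
N <- hi + mi + fa + cr
c(cond_true + cond_false, dec_pos + dec_neg, N)  # all equal 100
```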
To reflect these two perspectives in the confusion matrix, we only need to add the sums of columns (i.e., by condition) and rows (by decision):
| Decision | Condition present (TRUE) | Condition absent (FALSE) | Sum: |
|:---------|:------------------------:|:------------------------:|:----:|
| positive (TRUE): | `hi` | `fa` | `dec.pos` |
| negative (FALSE): | `mi` | `cr` | `dec.neg` |
| Sum: | `cond.true` | `cond.false` | `N` |
A third way of grouping the four essential frequencies results from asking the question: Which of the four essential frequencies are correct decisions and which are erroneous decisions? Crucially, this question about decision accuracy can neither be answered by only considering each individual's condition (i.e., the columns of the matrix), nor can it be answered by only considering each individual's decision (i.e., the rows of the matrix). Instead, the question requires considering the correspondence between condition and decision. Checking the correspondence between rows and columns for the four essential frequencies yields an important insight: The confusion matrix contains two types of correct decisions and two types of errors:
A decision is correct when it corresponds to the condition. This is the case for two cells in (or the "\\" diagonal of) the confusion matrix:

- `hi`: condition present (`TRUE`) & decision positive (`TRUE`)
- `cr`: condition absent (`FALSE`) & decision negative (`FALSE`)

A decision is incorrect or erroneous when it does not correspond to the condition. This also is the case for two cells in (or the "/" diagonal of) the confusion matrix:

- `mi`: condition present (`TRUE`) & decision negative (`FALSE`)
- `fa`: condition absent (`FALSE`) & decision positive (`TRUE`)

Splitting all `N` individuals into two subgroups of those with correct vs. those with erroneous decisions yields a third perspective on the population:
(c) by the correspondence of decisions to conditions (corresponding to the two diagonals of the confusion matrix):
$$
\begin{aligned}
\texttt{N} \ &= \ \texttt{dec.cor} \ + \ \texttt{dec.err} & \textrm{(c)} \\
 \ &= \ (\texttt{hi} + \texttt{cr}) \ + \ (\texttt{mi} + \texttt{fa}) &
\end{aligned}
$$
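This third split can be sketched in base R as well (again using hypothetical frequency values, not the riskyr API):

```r
# Four essential frequencies (hypothetical example values):
hi <- 40; mi <- 10; fa <- 15; cr <- 35
N  <- hi + mi + fa + cr

# (c) Split by correspondence of decisions to conditions (diagonals):
dec_cor <- hi + cr  # correct decisions (the "\" diagonal)
dec_err <- mi + fa  # erroneous decisions (the "/" diagonal)

dec_cor + dec_err == N  # TRUE: the third perspective also partitions N
```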
It may be instructive to point out two possible sources of confusion, so that they can be deliberately avoided:
Beware of alternative terms for `mi` and `cr`:

- Misses (`mi`) are often called "false negatives", but are nevertheless cases for which the condition is `TRUE` (i.e., in the `cond.true` column of the confusion table).
- Correct rejections (`cr`) are often called "true negatives", but are nevertheless cases for which the condition is `FALSE` (i.e., in the `cond.false` column of the confusion table).

Thus, the terms "true" and "false" are ambiguous by switching their referents. When used to denote the four essential frequencies (e.g., describing `mi` as "false negatives" and `cr` as "true negatives"), the terms refer to the correspondence of a decision to the condition, rather than to the condition itself. To avoid this source of confusion, we prefer the terms `mi` and `cr`, rather than "false negatives" and "true negatives".
Beware of alternative terms for `dec.cor` and `dec.err`: It may be tempting to refer to `dec.cor` and `dec.err` as "true decisions" and "false decisions". However, this would also invite conceptual confusion, as "true decisions" would include `cond.false` cases (`cr`) and "false decisions" would include `cond.true` cases (`mi`). Again, we prefer the less ambiguous terms "correct decisions" vs. "erroneous decisions".

The perspective of accuracy raises an important question: How good is a given decision (e.g., a clinical judgment or some diagnostic test) in capturing the true state of the condition? Different accuracy metrics provide different answers to this question, but share a common goal: measuring decision performance by capturing the correspondence of decisions to conditions in some quantitative fashion.^[It is convenient to think of accuracy metrics as outcomes of the confusion table. However, when designing tests or decision algorithms, accuracy measures also serve as inputs that are to be maximized by some process (see Phillips et al., 2017, for examples).]
While all accuracy metrics quantify the relationship between correct and erroneous decisions, different metrics emphasize different aspects or have different purposes. We distinguish between specific and general metrics.
The goal of a specific accuracy metric is to quantify some particular aspect of decision performance. For instance, how accurate is our decision or diagnostic test in correctly detecting `cond.true` cases? How accurate is it in detecting `cond.false` cases?

As we are dealing with two types of correct decisions (`hi` and `cr`) and two perspectives (by columns vs. by rows), we can provide four answers to these questions. To obtain a numeric quantity, we divide the frequency of correct cases (either `hi` or `cr`) by

(a) column sums (`cond.true` vs. `cond.false`): This yields the decision's sensitivity (`sens`) and specificity (`spec`):
$$ \begin{aligned} \texttt{sens} \ &= \frac{\texttt{hi}}{\texttt{cond.true}} & \ \ \textrm{(a1)} \ \ \ \texttt{spec} \ &= \frac{\texttt{cr}}{\texttt{cond.false}} & \ \ \textrm{(a2)} \ \end{aligned} $$
(b) row sums (`dec.pos` vs. `dec.neg`): This yields the decision's positive predictive value (`PPV`) and negative predictive value (`NPV`):
$$ \begin{aligned} \texttt{PPV} \ &= \frac{\texttt{hi}}{\texttt{dec.pos}} & \ \ \ \textrm{(b1)} \ \ \ \texttt{NPV} \ &= \frac{\texttt{cr}}{\texttt{dec.neg}} & \ \ \ \textrm{(b2)} \ \end{aligned} $$
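All four specific metrics follow directly from the four essential frequencies. A base R sketch (hypothetical frequency values, not the riskyr API):

```r
# Four essential frequencies (hypothetical example values):
hi <- 40; mi <- 10; fa <- 15; cr <- 35

# (a) Dividing correct cases by column sums (by condition):
sens <- hi / (hi + mi)  # sensitivity:  hi / cond.true
spec <- cr / (fa + cr)  # specificity:  cr / cond.false

# (b) Dividing correct cases by row sums (by decision):
PPV <- hi / (hi + fa)   # positive predictive value: hi / dec.pos
NPV <- cr / (mi + cr)   # negative predictive value: cr / dec.neg

round(c(sens = sens, spec = spec, PPV = PPV, NPV = NPV), 3)
```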
In contrast to these specific metrics, general metrics of accuracy aim to capture overall performance (i.e., summarize the four essential frequencies of the confusion matrix) in a single quantity. `riskyr` currently computes four general metrics (which are contained in `accu`):

acc

Overall accuracy (`acc`) divides the number of correct decisions (i.e., all `dec.cor` cases or the "\\" diagonal of the confusion table) by the number `N` of all decisions (or individuals for which decisions have been made). Thus,

Accuracy `acc` := Proportion or percentage of cases correctly classified.

Numerically, overall accuracy `acc` is computed as:
$$
\begin{aligned}
\texttt{acc} &= \frac{\texttt{hi} + \texttt{cr}}{\texttt{hi} + \texttt{mi} + \texttt{fa} + \texttt{cr}}
= \frac{\texttt{dec.cor}}{\texttt{dec.cor} + \texttt{dec.err}} = \frac{\texttt{dec.cor}}{\texttt{N}}
\end{aligned}
$$
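In base R (hypothetical frequency values, not the riskyr API), this is simply:

```r
# Four essential frequencies (hypothetical example values):
hi <- 40; mi <- 10; fa <- 15; cr <- 35

# Overall accuracy: correct decisions divided by all N decisions:
acc <- (hi + cr) / (hi + mi + fa + cr)
acc  # 0.75
```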
wacc

Whereas overall accuracy (`acc`) does not discriminate between different types of correct and incorrect cases, weighted accuracy (`wacc`) allows for taking into account the importance of errors. Essentially, `wacc` combines the sensitivity (`sens`) and specificity (`spec`), but multiplies `sens` by a weighting parameter `w` (ranging from 0 to 1) and `spec` by its complement `(1 - w)`:

Weighted accuracy `wacc` := the average of sensitivity (`sens`) weighted by `w`, and specificity (`spec`), weighted by `(1 - w)`.
$$
\begin{aligned}
\texttt{wacc} \ &= \texttt{w} \cdot \texttt{sens} \ + \ (1 - \texttt{w}) \cdot \texttt{spec}
\end{aligned}
$$
Three cases can be distinguished, based on the value of the weighting parameter `w`:

- If `w = .5`, `sens` and `spec` are weighted equally and `wacc` becomes balanced accuracy `bacc`.
- If `0 <= w < .5`, `sens` is less important than `spec` (i.e., instances of `fa` are considered more serious errors than instances of `mi`).
- If `.5 < w <= 1`, `sens` is more important than `spec` (i.e., instances of `mi` are considered more serious errors than instances of `fa`).
mcc

The Matthews correlation coefficient (`mcc`, with values ranging from $-1$ to $+1$) is computed as:

$$
\begin{aligned}
\texttt{mcc} \ &= \frac{(\texttt{hi} \cdot \texttt{cr}) \ - \ (\texttt{fa} \cdot \texttt{mi})}{\sqrt{(\texttt{hi} + \texttt{fa}) \cdot (\texttt{hi} + \texttt{mi}) \cdot (\texttt{cr} + \texttt{fa}) \cdot (\texttt{cr} + \texttt{mi})}}
\end{aligned}
$$

The `mcc` is a correlation coefficient specifying the correspondence between the actual and the predicted binary categories. A value of $0$ represents chance performance, a value of $+1$ represents perfect performance, and a value of $-1$ indicates complete disagreement between truth and predictions.
See Wikipedia: Matthews correlation coefficient for details.
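The formula translates directly into base R (hypothetical frequency values, not the riskyr API):

```r
# Four essential frequencies (hypothetical example values):
hi <- 40; mi <- 10; fa <- 15; cr <- 35

# Matthews correlation coefficient from the four frequencies:
mcc <- (hi * cr - fa * mi) /
  sqrt((hi + fa) * (hi + mi) * (cr + fa) * (cr + mi))
round(mcc, 3)  # 0.503
```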
For creatures who cannot live with only three general measures of accuracy, `accu` also provides the F1 score (`f1s`), which is the harmonic mean of `PPV` (a.k.a. precision) and `sens` (a.k.a. recall):
$$ \begin{aligned} \texttt{f1s} \ &= 2 \cdot \frac{\texttt{PPV} \cdot \texttt{sens}}{\texttt{PPV} + \texttt{sens}} \ \end{aligned} $$
See Wikipedia: F1 score for details.
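In base R (hypothetical frequency values, not the riskyr API), the F1 score follows from the two specific metrics it combines:

```r
# Four essential frequencies (hypothetical example values):
hi <- 40; mi <- 10; fa <- 15; cr <- 35
PPV  <- hi / (hi + fa)  # precision
sens <- hi / (hi + mi)  # recall

# F1 score as the harmonic mean of precision and recall:
f1s <- 2 * (PPV * sens) / (PPV + sens)
round(f1s, 3)  # 0.762
```

Note that the same value also results from the equivalent frequency form `2 * hi / (2 * hi + fa + mi)`.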
We appreciate your feedback, comments, or questions.
Please report any `riskyr`-related issues at https://github.com/hneth/riskyr/issues.
For general inquiries, please email us at contact.riskyr@gmail.com.
All `riskyr` vignettes:

| Nr. | Vignette | Content |
|:---:|:---------|:--------|
| A. | User guide | Motivation and general instructions |
| B. | Data formats | Data formats: Frequencies and probabilities |
| C. | Confusion matrix | Confusion matrix and accuracy metrics |
| D. | Functional perspectives | Adopting functional perspectives |
| E. | Quick start primer | Quick start primer |