confusion: Construct and analyze confusion matrices


Description

Confusion matrices compare two classifications (usually one produced automatically by a machine learning algorithm versus the true classification, represented by a manual classification made by a specialist... but one can also compare two automatic or two manual classifications against each other).
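A minimal sketch of the default method, assuming the mlearning package is installed; the two hand-made factors are purely illustrative:

```r
library(mlearning)

## Two classifications of the same six items, as factors with the same levels
actual    <- factor(c("cat", "cat", "dog", "dog", "dog", "cat"))
predicted <- factor(c("cat", "dog", "dog", "dog", "cat", "cat"))

## Cross-tabulate the two classifications into a 'confusion' object
conf <- confusion(predicted, actual)
conf            # print the confusion matrix with sums and class error
summary(conf)   # per-class statistics (recall, precision, F-score, ...)
```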

Usage

confusion(x, ...)
## Default S3 method:
confusion(x, y = NULL, vars = c("Actual", "Predicted"),
    labels = vars, merge.by = "Id", useNA = "ifany", prior, ...)
## S3 method for class 'mlearning'
confusion(x, y = response(x),
    labels = c("Actual", "Predicted"), useNA = "ifany", prior, ...)

## S3 method for class 'confusion'
print(x, sums = TRUE, error.col = sums, digits = 0,
    sort = "ward", ...)
## S3 method for class 'confusion'
summary(object, type = "all", sort.by = "Fscore",
    decreasing = TRUE, ...)
## S3 method for class 'summary.confusion'
print(x, ...)
## S3 method for class 'confusion'
plot(x, y = NULL, type = c("image", "barplot", "stars",
    "dendrogram"), stat1 = "Recall", stat2 = "Precision", names, ...)

confusionImage(x, y = NULL, labels = names(dimnames(x)), sort = "ward",
    numbers = TRUE, digits = 0, mar = c(3.1, 10.1, 3.1, 3.1), cex = 1, asp = 1,
    colfun, ncols = 41, col0 = FALSE, grid.col = "gray", ...)
confusionBarplot(x, y = NULL, col = c("PeachPuff2", "green3", "lemonChiffon2"),
    mar = c(1.1, 8.1, 4.1, 2.1), cex = 1, cex.axis = cex, cex.legend = cex,
    main = "F-score (precision versus recall)", numbers = TRUE, min.width = 17,
    ...)
confusionStars(x, y = NULL, stat1 = "Recall", stat2 = "Precision", names, main,
    col = c("green2", "blue2", "green4", "blue4"), ...)
confusionDendrogram(x, y = NULL, labels = rownames(x), sort = "ward",
    main = "Groups clustering", ...)

prior(object, ...)
## S3 method for class 'confusion'
prior(object, ...)
prior(object, ...) <- value
## S3 replacement method for class 'confusion'
prior(object, ...) <- value

Arguments

x

an object.

y

another object, from which to extract the second classification, or NULL if not used.

vars

the variables of interest in the first and second classifications, in the case the objects are lists or data frames. Otherwise, this argument is ignored and x and y must be factors of the same length with the same levels.

labels

labels to use for the two classifications. By default, the same as vars, or the labels stored in the confusion matrix object.

merge.by

a character string with the names of the variables used to merge the two data frames, or NULL.

useNA

do we keep NAs as a separate category? The default "ifany" creates this category only if there are missing values. Other possibilities are "no", or "always".

prior

class frequencies to use for the first classifier, which is tabulated in the rows of the confusion matrix. For accepted values, see the value argument below.

sums

should the confusion matrix be printed with row and column sums?

error.col

should a column with the class error for the first classifier be added (equivalent to the false negative rate, FNR)?

digits

the number of digits after the decimal point to print in the confusion matrix. The default of zero leads to the most compact presentation and is suitable for frequencies, but not for relative frequencies.

sort

should rows and columns of the confusion matrix be sorted so that classes with larger confusion are closer together? Sorting is done using hierarchical clustering with hclust(). The clustering method is the one provided in this argument ("ward" by default, but see the hclust() help for other options). If FALSE or NULL, no sorting is done.

object

a 'confusion' object.

sort.by

the statistic to use to sort the table (by default, Fscore, the F1 score for each class = 2 * recall * precision / (recall + precision)).

decreasing

do we sort in increasing or decreasing order?

type

the type of graph to plot (only "stars" if two confusion matrices are to be compared).

stat1

first statistic to compare in the stars plot.

stat2

second statistic to compare in the stars plot.

...

further arguments passed to the function. In particular for plot(), it can be all arguments for the corresponding plot.

numbers

should the actual numbers be displayed in the confusion matrix image?

mar

graph margins.

cex

text magnification factor.

cex.axis

idem for axes. If NULL, the axis is not drawn.

cex.legend

idem for legend text. If NULL, no legend is added.

asp

graph aspect ratio. There is little reason to change the default value of 1.

col

color(s) to use for the graph.

colfun

a function that generates a series of colors (e.g., cm.colors()) and that accepts a single argument: the number of colors to generate.

ncols

the number of colors to generate. It should preferably be 2 * levels + 1, where levels is the number of frequency levels you want to distinguish in the plot. Defaults to 41.

col0

should zero values be colored (no, by default)?

grid.col

color to use for grid lines, or NULL for not drawing grid lines.

names

names of the two classifiers to compare.

main

main title of the graph.

min.width

minimum bar width required to add numbers.

value

a single positive number to set all class frequencies to this value (use 1 for relative frequencies and 100 for relative frequencies in percent), or a vector of positive numbers of the same length as the number of levels in the object. If the vector is named, names must match the levels. Alternatively, providing NULL or a zero-length object resets row class frequencies to their initial values.
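A self-contained sketch of the value semantics for prior()<-, assuming the mlearning package is installed; the random data are purely illustrative:

```r
library(mlearning)

set.seed(42)
truth <- factor(sample(c("A", "B"), 40, replace = TRUE))
pred  <- factor(sample(c("A", "B"), 40, replace = TRUE))
conf  <- confusion(pred, truth)

prior(conf) <- 100               # single number: every row rescaled to sum to 100
prior(conf) <- c(A = 25, B = 75) # named vector: names must match the levels
prior(conf) <- NULL              # NULL resets rows to their initial frequencies
prior(conf)                      # query the current row class frequencies
```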

Value

A confusion matrix in a 'confusion' object. prior() returns the current class frequencies associated with the first classification, i.e., the one tabulated in the rows of the confusion matrix.

Author(s)

Philippe Grosjean <Philippe.Grosjean@umons.ac.be> and Kevin Denis <Kevin.Denis@umons.ac.be>

See Also

mlearning, hclust, cm.colors

Examples

data("Glass", package = "mlbench")
## Use a little bit more informative labels for Type
Glass$Type <- as.factor(paste("Glass", Glass$Type))

## Use learning vector quantization to classify the glass types
## (using default parameters)
summary(glassLvq <- mlLvq(Type ~ ., data = Glass))

## Calculate cross-validated confusion matrix and plot it in different ways
(glassConf <- confusion(cvpredict(glassLvq), Glass$Type))
## Raw confusion matrix: no sort and no margins
print(glassConf, sums = FALSE, sort = FALSE)
## Graphs
plot(glassConf) # Image by default
plot(glassConf, sort = FALSE) # No sorting
plot(glassConf, type = "barplot")
plot(glassConf, type = "stars")
plot(glassConf, type = "dendrogram")

summary(glassConf)
summary(glassConf, type = "Fscore")

## Build another classifier and make a comparison
summary(glassNaiveBayes <- mlNaiveBayes(Type ~ ., data = Glass))
(glassConf2 <- confusion(cvpredict(glassNaiveBayes), Glass$Type))

## Comparison plot for two classifiers
plot(glassConf, glassConf2)

## When the probabilities in each class do not match the proportions in the
## training set, all these calculations are useless. Having an idea of
## the real proportions (so-called, priors), one should first reweight the
## confusion matrix before calculating statistics, for instance:
prior1 <- c(10, 10, 10, 100, 100, 100) # Glass types 1-3 are rare
prior(glassConf) <- prior1
glassConf
summary(glassConf, type = c("Fscore", "Recall", "Precision"))
plot(glassConf)

## This is very different from when glass types 1-3 are abundant!
prior2 <- c(100, 100, 100, 10, 10, 10) # Glass types 1-3 are abundant
prior(glassConf) <- prior2
glassConf
summary(glassConf, type = c("Fscore", "Recall", "Precision"))
plot(glassConf)

## Weights can also be used to construct a matrix of relative frequencies
## In this case, all rows sum to one
prior(glassConf) <- 1
print(glassConf, digits = 2)
## However, it is easier to work with relative frequencies in percent
## and one gets a more compact presentation
prior(glassConf) <- 100
glassConf

## To reset row class frequencies to original proportions, just assign NULL
prior(glassConf) <- NULL
glassConf
prior(glassConf)

Example output

A mlearning object of class mlLvq (learning vector quantization):
Initial call: mlLvq.formula(formula = Type ~ ., data = Glass)
Codebook:
      Class       RI       Na          Mg        Al       Si           K
33  Glass 1 1.517317 12.85914  3.59890638 1.2467600 72.99825  0.60078606
69  Glass 1 1.521764 13.12777  3.94028626 0.7662097 72.24582  0.13959585
32  Glass 1 1.517744 12.76209  3.56673131 1.2316529 73.27689  0.56057829
19  Glass 1 1.520327 13.70504  3.97829919 1.0854126 71.92834  0.16477585
47  Glass 1 1.516863 13.32051  3.56418389 1.1125048 73.08200  0.53923290
58  Glass 1 1.518416 12.79598  3.79949806 1.0753073 73.05648  0.59309925
30  Glass 1 1.517533 13.33433  3.25018053 1.0661936 73.15664  0.59949454
40  Glass 1 1.522093 14.11397  3.95661943 0.5213497 71.81089  0.05502570
76  Glass 2 1.516318 13.04299  3.54115626 1.5081499 72.99635  0.52424462
113 Glass 2 1.528634 11.88672  0.00000000 1.1048070 71.73786  0.19105037
114 Glass 2 1.516951 13.58692  3.76117260 1.4554840 72.67885  0.28709186
121 Glass 2 1.517809 12.81165  4.04232923 0.9572273 72.30728  0.54023702
117 Glass 2 1.518044 13.36739  3.94141555 1.3324328 72.36158  0.53469621
139 Glass 2 1.516782 12.74646  3.46664004 1.7386492 73.17228  0.63122842
134 Glass 2 1.515413 14.16375  3.85774447 1.3912645 72.54854  0.19161185
144 Glass 2 1.518036 12.97606  4.44821000 1.5047000 73.38303 -0.89736000
147 Glass 3 1.515408 13.45211  3.55812419 1.3712625 72.61403  0.42488625
162 Glass 3 1.518738 13.61782  3.65487568 0.6600767 72.80729  0.12984963
168 Glass 5 1.521031 11.61363  0.93420197 2.0353640 73.13578  0.57337260
177 Glass 6 1.516987 14.47053  2.48088546 1.4272312 73.18558 -0.19939780
214 Glass 7 1.517538 12.52441  0.00000000 1.0474557 76.04187  0.84404027
205 Glass 7 1.515225 14.55433 -0.11089829 2.8042790 73.33751 -0.10408968
194 Glass 7 1.517971 14.60551  0.07568646 2.0038459 72.96277 -0.02348194
           Ca           Ba           Fe
33   8.502731  0.004559509  0.098219788
69   9.709894 -0.006055086  0.092747405
32   8.479045 -0.011963675  0.034439814
19   8.947139  0.047701058 -0.001919289
47   8.271788 -0.008806225  0.011591587
58   8.635486 -0.019603092  0.037407265
30   8.585616 -0.080441163 -0.034574544
40   9.437864 -0.034247530  0.017465328
76   8.154422 -0.042365005  0.002073014
113 14.467914  0.400214015  0.114335407
114  8.207301 -0.213231271  0.069836685
121  8.656392  0.000000000  0.131455906
117  8.257190 -0.038742756  0.113677883
139  8.117721 -0.042601494 -0.071169989
134  8.325948 -0.814863707  0.148782735
144  8.312780  0.000000000 -0.062790000
147  8.484285  0.000000000  0.026220884
162  8.864101  0.099228501  0.158765602
168 11.597804  0.000000000  0.026184233
177  8.801206 -0.187108461 -0.080022443
214  8.629069  1.035287273  0.000000000
205  8.595163  0.880022389  0.018772124
194  8.506778  1.798622558  0.020855154
214 items classified with 143 true positives (error rate = 33.2%)
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 6   4   0   2   0   2   1     9     56
  02 Glass 5   1   7   3   0   0   2    13     46
  03 Glass 7   1   2  23   0   1   2    29     21
  04 Glass 3   0   0   0   1  11   5    17     94
  05 Glass 1   0   0   0   1  60   9    70     14
  06 Glass 2   2   2   1   0  23  48    76     37
  (sum)        8  11  29   2  97  67   214     33
214 items classified with 143 true positives (error rate = 33.2%)
            Predicted
Actual       01 02 03 04 05 06
  01 Glass 1 60  9  1  0  0  0
  02 Glass 2 23 48  0  2  2  1
  03 Glass 3 11  5  1  0  0  0
  04 Glass 5  0  2  0  7  1  3
  05 Glass 6  2  1  0  0  4  2
  06 Glass 7  1  2  0  2  1 23
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
214 items classified with 143 true positives (error = 33.2%)

Global statistics on reweighted data:
Error rate: 33.2%, F(micro-average): 0.588, F(macro-average): 0.557

           Fscore     Recall Precision Specificity       NPV         FPR
Glass 7 0.7931034 0.79310345 0.7931034   0.9675676 0.9675676 0.032432432
Glass 1 0.7185629 0.85714286 0.6185567   0.7430556 0.9145299 0.256944444
Glass 2 0.6713287 0.63157895 0.7164179   0.8623188 0.8095238 0.137681159
Glass 5 0.5833333 0.53846154 0.6363636   0.9800995 0.9704433 0.019900498
Glass 6 0.4705882 0.44444444 0.5000000   0.9804878 0.9757282 0.019512195
Glass 3 0.1052632 0.05882353 0.5000000   0.9949239 0.9245283 0.005076142
              FNR       FDR        FOR      LRPT      LRNT      LRPS      LRNS
Glass 7 0.2068966 0.2068966 0.03243243 24.454023 0.2138316 24.454023 0.2138316
Glass 1 0.1428571 0.3814433 0.08547009  3.335907 0.1922563  7.237113 0.4170922
Glass 2 0.3684211 0.2835821 0.19047619  4.587258 0.4272446  3.761194 0.3503073
Glass 5 0.4615385 0.3636364 0.02955665 27.057692 0.4709098 21.530303 0.3747116
Glass 6 0.5555556 0.5000000 0.02427184 22.777778 0.5666114 20.600000 0.5124378
Glass 3 0.9411765 0.5000000 0.07547170 11.588235 0.9459784  6.625000 0.5408163
           BalAcc       MCC      Chisq        Bray Auto Manu A_M TP FP FN  TN
Glass 7 0.8803355 0.7606710 123.824764 0.000000000   29   29   0 23  6  6 179
Glass 1 0.8000992 0.5656481  68.470956 0.063084112   97   70  27 60 37 10 107
Glass 2 0.7469489 0.5096680  55.588951 0.021028037   67   76  -9 48 19 28 119
Glass 5 0.7592805 0.5609514  67.338623 0.004672897   11   13  -2  7  4  6 197
Glass 6 0.7124661 0.4496134  43.260578 0.002336449    8    9  -1  4  4  5 201
Glass 3 0.5268737 0.1510539   4.882899 0.035046729    2   17 -15  1  1 16 196
  Glass 7   Glass 1   Glass 2   Glass 5   Glass 6   Glass 3 
0.7931034 0.7185629 0.6713287 0.5833333 0.4705882 0.1052632 
attr(,"stat.type")
[1] "Fscore"
A mlearning object of class mlNaiveBayes (naive Bayes classifier):
Initial call: mlNaiveBayes.formula(formula = Type ~ ., data = Glass)

Naive Bayes Classifier for Discrete Predictors

Call:
e1071:::naiveBayes.default(x = train, y = response, laplace = laplace, 
    .args. = ..1)

A-priori probabilities:
response
   Glass 1    Glass 2    Glass 3    Glass 5    Glass 6    Glass 7 
0.32710280 0.35514019 0.07943925 0.06074766 0.04205607 0.13551402 

Conditional probabilities:
         RI
response      [,1]        [,2]
  Glass 1 1.518718 0.002268097
  Glass 2 1.518619 0.003802126
  Glass 3 1.517964 0.001916360
  Glass 5 1.518928 0.003345355
  Glass 6 1.517456 0.003115783
  Glass 7 1.517116 0.002545069

         Na
response      [,1]      [,2]
  Glass 1 13.24229 0.4993015
  Glass 2 13.11171 0.6641594
  Glass 3 13.43706 0.5068871
  Glass 5 12.82769 0.7770366
  Glass 6 14.64667 1.0840203
  Glass 7 14.44207 0.6863588

         Mg
response       [,1]      [,2]
  Glass 1 3.5524286 0.2470430
  Glass 2 3.0021053 1.2156615
  Glass 3 3.5435294 0.1627859
  Glass 5 0.7738462 0.9991458
  Glass 6 1.3055556 1.0971339
  Glass 7 0.5382759 1.1176828

         Al
response      [,1]      [,2]
  Glass 1 1.163857 0.2731581
  Glass 2 1.408158 0.3183403
  Glass 3 1.201176 0.3474889
  Glass 5 2.033846 0.6939205
  Glass 6 1.366667 0.5718610
  Glass 7 2.122759 0.4427261

         Si
response      [,1]      [,2]
  Glass 1 72.61914 0.5694842
  Glass 2 72.59803 0.7245726
  Glass 3 72.40471 0.5122758
  Glass 5 72.36615 1.2823191
  Glass 6 73.20667 1.0794675
  Glass 7 72.96586 0.9402337

         K
response       [,1]      [,2]
  Glass 1 0.4474286 0.2148790
  Glass 2 0.5210526 0.2137262
  Glass 3 0.4064706 0.2298897
  Glass 5 1.4700000 2.1386951
  Glass 6 0.0000000 0.0000000
  Glass 7 0.3251724 0.6684931

         Ca
response       [,1]      [,2]
  Glass 1  8.797286 0.5748066
  Glass 2  9.073684 1.9216353
  Glass 3  8.782941 0.3801112
  Glass 5 10.123846 2.1837908
  Glass 6  9.356667 1.4499483
  Glass 7  8.491379 0.9735052

         Ba
response         [,1]       [,2]
  Glass 1 0.012714286 0.08383769
  Glass 2 0.050263158 0.36234044
  Glass 3 0.008823529 0.03638034
  Glass 5 0.187692308 0.60825096
  Glass 6 0.000000000 0.00000000
  Glass 7 1.040000000 0.66534094

         Fe
response        [,1]       [,2]
  Glass 1 0.05700000 0.08907496
  Glass 2 0.07973684 0.10643275
  Glass 3 0.05705882 0.10786361
  Glass 5 0.06076923 0.15558821
  Glass 6 0.00000000 0.00000000
  Glass 7 0.01344828 0.02979404

214 items classified with 84 true positives (error rate = 60.7%)
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 3   3  13   0   0   1   0    17     82
  02 Glass 1  16  49   4   0   1   0    70     30
  03 Glass 2   9  46   9   5   7   0    76     88
  04 Glass 5   0   0   5   0   7   1    13    100
  05 Glass 6   0   0   0   0   8   1     9     11
  06 Glass 7   0   1   0   1  12  15    29     48
  (sum)       28 109  18   6  36  17   214     61
214 items classified with 143 true positives (error rate = 33.2%)
with initial row frequencies:
Glass 1 Glass 2 Glass 3 Glass 5 Glass 6 Glass 7 
     70      76      17      13       9      29 
Rescaled to:
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 2   6   0   0   0   3   0    10     37
  02 Glass 5  15  54  23   0   0   8   100     46
  03 Glass 7   7   7  79   0   3   3   100     21
  04 Glass 3   3   0   0   1   6   0    10     94
  05 Glass 1   1   0   0   0   9   0    10     14
  06 Glass 6  11   0  22   0  22  44   100     56
  (sum)       44  61 125   1  44  56   330     41
214 items classified with 143 true positives (error = 33.2%)

Global statistics on reweighted data:
Error rate: 41.5%, F(micro-average): 0.565, F(macro-average): 0.435

           Fscore     Recall Precision
Glass 7 0.7057931 0.79310345 0.6357998
Glass 5 0.6688720 0.53846154 0.8826390
Glass 6 0.5703556 0.44444444 0.7958082
Glass 1 0.3190032 0.85714286 0.1959684
Glass 2 0.2342002 0.63157895 0.1437532
Glass 3 0.1096319 0.05882353 0.8045977
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
214 items classified with 143 true positives (error rate = 33.2%)
with initial row frequencies:
Glass 1 Glass 2 Glass 3 Glass 5 Glass 6 Glass 7 
     70      76      17      13       9      29 
Rescaled to:
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 2  63  30   0   3   3   1   100     37
  02 Glass 1  13  86   1   0   0   0   100     14
  03 Glass 3  29  65   6   0   0   0   100     94
  04 Glass 6   1   2   0   4   0   2    10     56
  05 Glass 5   2   0   0   1   5   2    10     46
  06 Glass 7   1   0   0   0   1   8    10     21
  (sum)      109 183   7   8   9  14   330     48
214 items classified with 143 true positives (error = 33.2%)

Global statistics on reweighted data:
Error rate: 47.7%, F(micro-average): 0.575, F(macro-average): 0.509

           Fscore     Recall Precision
Glass 7 0.6671255 0.79310345 0.5756830
Glass 1 0.6052192 0.85714286 0.4677441
Glass 2 0.6050591 0.63157895 0.5806767
Glass 5 0.5757146 0.53846154 0.6185055
Glass 6 0.4886668 0.44444444 0.5426618
Glass 3 0.1096319 0.05882353 0.8045977
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
214 items classified with 143 true positives (error rate = 33.2%)
with initial row frequencies:
Glass 1 Glass 2 Glass 3 Glass 5 Glass 6 Glass 7 
     70      76      17      13       9      29 
Rescaled to:
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual          01    02    03    04    05    06 (sum) (FNR%)
  01 Glass 2  0.63  0.30  0.00  0.03  0.03  0.01  1.00  37.00
  02 Glass 1  0.13  0.86  0.01  0.00  0.00  0.00  1.00  14.00
  03 Glass 3  0.29  0.65  0.06  0.00  0.00  0.00  1.00  94.00
  04 Glass 6  0.11  0.22  0.00  0.44  0.00  0.22  1.00  56.00
  05 Glass 5  0.15  0.00  0.00  0.08  0.54  0.23  1.00  46.00
  06 Glass 7  0.07  0.03  0.00  0.03  0.07  0.79  1.00  21.00
  (sum)       1.39  2.06  0.07  0.58  0.63  1.26  6.00  45.00
214 items classified with 143 true positives (error rate = 33.2%)
with initial row frequencies:
Glass 1 Glass 2 Glass 3 Glass 5 Glass 6 Glass 7 
     70      76      17      13       9      29 
Rescaled to:
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 2  63  30   0   3   3   1   100     37
  02 Glass 1  13  86   1   0   0   0   100     14
  03 Glass 3  29  65   6   0   0   0   100     94
  04 Glass 6  11  22   0  44   0  22   100     56
  05 Glass 5  15   0   0   8  54  23   100     46
  06 Glass 7   7   3   0   3   7  79   100     21
  (sum)      139 206   7  58  63 126   600     45
214 items classified with 143 true positives (error rate = 33.2%)
The "ward" method has been renamed to "ward.D"; note new "ward.D2"
            Predicted
Actual        01  02  03  04  05  06 (sum) (FNR%)
  01 Glass 6   4   0   2   0   2   1     9     56
  02 Glass 5   1   7   3   0   0   2    13     46
  03 Glass 7   1   2  23   0   1   2    29     21
  04 Glass 3   0   0   0   1  11   5    17     94
  05 Glass 1   0   0   0   1  60   9    70     14
  06 Glass 2   2   2   1   0  23  48    76     37
  (sum)        8  11  29   2  97  67   214     33
Glass 1 Glass 2 Glass 3 Glass 5 Glass 6 Glass 7 
     70      76      17      13       9      29 

mlearning documentation built on May 2, 2019, 6:05 p.m.