plot: Plot performance evaluation measures

Description Usage Arguments Value See Also Examples

Description

The plot function creates a plot of performance evaluation measures.

Usage

## S3 method for class 'sscurves'
plot(x, y = NULL, ...)

## S3 method for class 'mscurves'
plot(x, y = NULL, ...)

## S3 method for class 'smcurves'
plot(x, y = NULL, ...)

## S3 method for class 'mmcurves'
plot(x, y = NULL, ...)

## S3 method for class 'sspoints'
plot(x, y = NULL, ...)

## S3 method for class 'mspoints'
plot(x, y = NULL, ...)

## S3 method for class 'smpoints'
plot(x, y = NULL, ...)

## S3 method for class 'mmpoints'
plot(x, y = NULL, ...)

Arguments

x

An S3 object generated by evalmod. The plot function accepts the following S3 objects.

  1. ROC and Precision-Recall curves (mode = "rocprc")

    S3 object    # of models    # of test datasets
    sscurves     single         single
    mscurves     multiple       single
    smcurves     single         multiple
    mmcurves     multiple       multiple

  2. Basic evaluation measures (mode = "basic")

    S3 object    # of models    # of test datasets
    sspoints     single         single
    mspoints     multiple       single
    smpoints     single         multiple
    mmpoints     multiple       multiple

See the Value section of evalmod for more details.
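As a quick sketch of the dispatch above (reusing the bundled P10N10 dataset from the Examples below), the class of the object returned by evalmod determines which plot method is called:

```r
## A minimal sketch: with a single model and a single test dataset,
## evalmod returns an sscurves object, so plot() dispatches to the
## sscurves method listed in Usage.
library(precrec)
data(P10N10)

sscurves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)
print(class(sscurves))  # the first class element is "sscurves"
```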

y

Equivalent to curvetype.

...

Any of the following optional arguments can be specified.

curvetype
  1. ROC and Precision-Recall curves (mode = "rocprc")

    curvetype    description
    ROC          ROC curve
    PRC          Precision-Recall curve

    Multiple curvetype values can be combined, such as c("ROC", "PRC").

  2. Basic evaluation measures (mode = "basic")

    curvetype      description
    error          Normalized ranks vs. error rate
    accuracy       Normalized ranks vs. accuracy
    specificity    Normalized ranks vs. specificity
    sensitivity    Normalized ranks vs. sensitivity
    precision      Normalized ranks vs. precision
    mcc            Normalized ranks vs. Matthews correlation coefficient
    fscore         Normalized ranks vs. F-score

    Multiple curvetype values can be combined, such as c("precision", "sensitivity").
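For example (a sketch reusing the bundled P10N10 dataset from the Examples below), several basic measures can be requested in a single call:

```r
## A minimal sketch: plot only selected basic evaluation measures.
library(precrec)
data(P10N10)

sspoints <- evalmod(mode = "basic", scores = P10N10$scores,
                    labels = P10N10$labels)

## Show only the precision and sensitivity panels
plot(sspoints, curvetype = c("precision", "sensitivity"))
```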

type

A character string to specify the plot type as follows.

  "l"    lines
  "p"    points
  "b"    both lines and points
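For instance (a sketch reusing the bundled P10N10 dataset from the Examples below), the curves can be drawn with points or with both lines and points:

```r
## A minimal sketch: change the drawing type of the curves.
library(precrec)
data(P10N10)

sscurves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)

## Points only
plot(sscurves, type = "p")

## Both lines and points
plot(sscurves, type = "b")
```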

show_cb

A Boolean value to specify whether point-wise confidence bounds are drawn. It is effective only when calc_avg of the evalmod function is set to TRUE.

raw_curves

A Boolean value to specify whether raw curves are shown instead of the average curve. It is effective only when raw_curves of the evalmod function is set to TRUE.

show_legend

A Boolean value to specify whether the legend is shown.

Value

The plot function shows a plot and returns NULL.

See Also

evalmod for generating an S3 object. autoplot for plotting the equivalent curves with ggplot2.

Examples

## Not run: 
##################################################
### Single model & single test dataset
###

## Load a dataset with 10 positives and 10 negatives
data(P10N10)

## Generate an sscurves object that contains ROC and Precision-Recall curves
sscurves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)

## Plot both ROC and Precision-Recall curves
plot(sscurves)

## Plot a ROC curve
plot(sscurves, curvetype = "ROC")

## Plot a Precision-Recall curve
plot(sscurves, curvetype = "PRC")

## Generate an sspoints object that contains basic evaluation measures
sspoints <- evalmod(mode = "basic", scores = P10N10$scores,
                    labels = P10N10$labels)

## Plot normalized ranks vs. basic evaluation measures
plot(sspoints)

## Plot normalized ranks vs. precision
plot(sspoints, curvetype = "precision")


##################################################
### Multiple models & single test dataset
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(1, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]])

## Generate an mscurves object that contains ROC and Precision-Recall curves
mscurves <- evalmod(mdat)

## Plot both ROC and Precision-Recall curves
plot(mscurves)

## Hide the legend
plot(mscurves, show_legend = FALSE)

## Generate an mspoints object that contains basic evaluation measures
mspoints <- evalmod(mdat, mode = "basic")

## Plot normalized ranks vs. basic evaluation measures
plot(mspoints)

## Hide the legend
plot(mspoints, show_legend = FALSE)


##################################################
### Single model & multiple test datasets
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(10, 100, 100, "good_er")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]],
               dsids = samps[["dsids"]])

## Generate an smcurves object that contains ROC and Precision-Recall curves
smcurves <- evalmod(mdat, raw_curves = TRUE)

## Plot average ROC and Precision-Recall curves
plot(smcurves, raw_curves = FALSE)

## Hide confidence bounds
plot(smcurves, raw_curves = FALSE, show_cb = FALSE)

## Plot raw ROC and Precision-Recall curves
plot(smcurves, raw_curves = TRUE, show_cb = FALSE)

## Generate an smpoints object that contains basic evaluation measures
smpoints <- evalmod(mdat, mode = "basic")

## Plot normalized ranks vs. average basic evaluation measures
plot(smpoints)


##################################################
### Multiple models & multiple test datasets
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(10, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]],
               dsids = samps[["dsids"]])

## Generate an mmcurves object that contains ROC and Precision-Recall curves
mmcurves <- evalmod(mdat, raw_curves = TRUE)

## Plot average ROC and Precision-Recall curves
plot(mmcurves, raw_curves = FALSE)

## Show confidence bounds
plot(mmcurves, raw_curves = FALSE, show_cb = TRUE)

## Plot raw ROC and Precision-Recall curves
plot(mmcurves, raw_curves = TRUE)

## Generate an mmpoints object that contains basic evaluation measures
mmpoints <- evalmod(mdat, mode = "basic")

## Plot normalized ranks vs. average basic evaluation measures
plot(mmpoints)


##################################################
### N-fold cross validation datasets
###

## Load test data
data(M2N50F5)

## Specify the necessary columns to create mdat
cvdat <- mmdata(nfold_df = M2N50F5, score_cols = c(1, 2),
                lab_col = 3, fold_col = 4,
                modnames = c("m1", "m2"), dsids = 1:5)

## Generate an mmcurves object that contains ROC and Precision-Recall curves
cvcurves <- evalmod(cvdat)

## Average ROC and Precision-Recall curves
plot(cvcurves)

## Show confidence bounds
plot(cvcurves, show_cb = TRUE)

## Generate an mmpoints object that contains basic evaluation measures
cvpoints <- evalmod(cvdat, mode = "basic")

## Normalized ranks vs. average basic evaluation measures
plot(cvpoints)


## End(Not run)

guillermozbta/precrec documentation built on May 11, 2019, 7:22 p.m.