autoplot: Plot performance evaluation measures with ggplot2


Description

The autoplot function plots performance evaluation measures by using ggplot2 instead of the general R plot.

Usage

## S3 method for class 'sscurves'
autoplot(object, curvetype = c("ROC", "PRC"), ...)

## S3 method for class 'mscurves'
autoplot(object, curvetype = c("ROC", "PRC"), ...)

## S3 method for class 'smcurves'
autoplot(object, curvetype = c("ROC", "PRC"), ...)

## S3 method for class 'mmcurves'
autoplot(object, curvetype = c("ROC", "PRC"), ...)

## S3 method for class 'sspoints'
autoplot(object, curvetype = .get_metric_names("basic"),
  ...)

## S3 method for class 'mspoints'
autoplot(object, curvetype = .get_metric_names("basic"),
  ...)

## S3 method for class 'smpoints'
autoplot(object, curvetype = .get_metric_names("basic"),
  ...)

## S3 method for class 'mmpoints'
autoplot(object, curvetype = .get_metric_names("basic"),
  ...)

Arguments

object

An S3 object generated by evalmod. The autoplot function accepts the following S3 objects for two different modes, "rocprc" and "basic".

  1. ROC and Precision-Recall curves (mode = "rocprc")

     S3 object   # of models   # of test datasets
     sscurves    single        single
     mscurves    multiple      single
     smcurves    single        multiple
     mmcurves    multiple      multiple

  2. Basic evaluation measures (mode = "basic")

     S3 object   # of models   # of test datasets
     sspoints    single        single
     mspoints    multiple      single
     smpoints    single        multiple
     mmpoints    multiple      multiple

See the Value section of evalmod for more details.
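
As a rough sketch, the class of the evalmod result follows the tables above; this assumes the simulated data helpers (create_sim_samples, mmdata) used in the Examples section below.

library(precrec)

## Single model & single test dataset -> the class should include "sscurves"
samp1 <- create_sim_samples(1, 100, 100)
class(evalmod(scores = samp1$scores, labels = samp1$labels))

## Multiple models & single test dataset -> the class should include "mscurves"
samps <- create_sim_samples(1, 100, 100, "all")
mdat <- mmdata(samps$scores, samps$labels, modnames = samps$modnames)
class(evalmod(mdat))

## Same input with mode = "basic" -> the class should include "mspoints"
class(evalmod(mdat, mode = "basic"))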

curvetype

A character vector with the following curve types.

  1. ROC and Precision-Recall curves (mode = "rocprc")

     curvetype   description
     ROC         ROC curve
     PRC         Precision-Recall curve

    Multiple curvetype values can be combined, such as c("ROC", "PRC").

  2. Basic evaluation measures (mode = "basic")

     curvetype     description
     error         Normalized ranks vs. error rate
     accuracy      Normalized ranks vs. accuracy
     specificity   Normalized ranks vs. specificity
     sensitivity   Normalized ranks vs. sensitivity
     precision     Normalized ranks vs. precision
     mcc           Normalized ranks vs. Matthews correlation coefficient
     fscore        Normalized ranks vs. F-score

    Multiple curvetype values can be combined, such as c("precision", "sensitivity"); see the sketch below.
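
A short sketch of selecting and combining curve types, assuming the sscurves and sspoints objects created in the Examples section below:

## Both ROC and Precision-Recall panels
autoplot(sscurves, curvetype = c("ROC", "PRC"))

## ROC curve only
autoplot(sscurves, curvetype = "ROC")

## Two selected basic measures from an sspoints object
autoplot(sspoints, curvetype = c("precision", "sensitivity"))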

...

The following additional arguments can be specified; see the usage sketch after this list.

type

A character to specify the line type as follows.

    "l"   lines
    "p"   points
    "b"   both lines and points

show_cb

A Boolean value to specify whether point-wise confidence bounds are drawn. It is effective only when calc_avg of the evalmod function is set to TRUE.

raw_curves

A Boolean value to specify whether raw curves are shown instead of the average curve. It is effective only when raw_curves of the evalmod function is set to TRUE.

show_legend

A Boolean value to specify whether the legend is shown.

ret_grob

A logical value to indicate whether autoplot returns a grob object. The grob object is internally generated by arrangeGrob. The grid.draw function takes a grob object and shows a plot. It is effective only when a multiple-panel plot is generated, for example, when curvetype is c("ROC", "PRC").

reduce_points

A Boolean value to decide whether the points should be reduced when mode = "rocprc". The points are reduced according to x_bins of the evalmod function. The default value is TRUE.
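
A usage sketch of these optional arguments, assuming the mscurves object created in the Examples section below:

## Lines and points for the ROC curve, with the legend hidden
autoplot(mscurves, curvetype = "ROC", type = "b", show_legend = FALSE)

## Return the two-panel ROC/PRC plot as a grob and draw it with grid
library(grid)
pp <- autoplot(mscurves, curvetype = c("ROC", "PRC"), ret_grob = TRUE)
plot.new()
grid.draw(pp)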

Value

The autoplot function returns a ggplot object for a single-panel plot and a frame-grob object for a multiple-panel plot.

See Also

evalmod for generating an S3 object. fortify for converting a curves and points object to a data frame. plot for plotting the equivalent curves with the general R plot.

Examples

## Not run: 

## Load libraries
library(ggplot2)
library(grid)

##################################################
### Single model & single test dataset
###

## Load a dataset with 10 positives and 10 negatives
data(P10N10)

## Generate an sscurves object that contains ROC and Precision-Recall curves
sscurves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)

## Plot both ROC and Precision-Recall curves
autoplot(sscurves)

## Reduced/Full supporting points
sampss <- create_sim_samples(1, 50000, 50000)
evalss <- evalmod(scores = sampss$scores, labels = sampss$labels)

# Reduced supporting points
system.time(autoplot(evalss))

# Full supporting points
system.time(autoplot(evalss, reduce_points = FALSE))

## Get a grob object for multiple plots
pp1 <- autoplot(sscurves, ret_grob = TRUE)
plot.new()
grid.draw(pp1)

## A ROC curve
autoplot(sscurves, curvetype = "ROC")

## A Precision-Recall curve
autoplot(sscurves, curvetype = "PRC")

## Generate an sspoints object that contains basic evaluation measures
sspoints <- evalmod(mode = "basic", scores = P10N10$scores,
                    labels = P10N10$labels)

## Normalized ranks vs. basic evaluation measures
autoplot(sspoints)

## Normalized ranks vs. precision
autoplot(sspoints, curvetype = "precision")


##################################################
### Multiple models & single test dataset
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(1, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]])

## Generate an mscurves object that contains ROC and Precision-Recall curves
mscurves <- evalmod(mdat)

## ROC and Precision-Recall curves
autoplot(mscurves)

## Reduced/Full supporting points
sampms <- create_sim_samples(5, 50000, 50000)
evalms <- evalmod(scores = sampms$scores, labels = sampms$labels)

# Reduced supporting points
system.time(autoplot(evalms))

# Full supporting points
system.time(autoplot(evalms, reduce_points = FALSE))

## Hide the legend
autoplot(mscurves, show_legend = FALSE)

## Generate an mspoints object that contains basic evaluation measures
mspoints <- evalmod(mdat, mode = "basic")

## Normalized ranks vs. basic evaluation measures
autoplot(mspoints)

## Hide the legend
autoplot(mspoints, show_legend = FALSE)


##################################################
### Single model & multiple test datasets
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(10, 100, 100, "good_er")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]],
               dsids = samps[["dsids"]])

## Generate an smcurves object that contains ROC and Precision-Recall curves
smcurves <- evalmod(mdat, raw_curves = TRUE)

## Average ROC and Precision-Recall curves
autoplot(smcurves, raw_curves = FALSE)

## Hide confidence bounds
autoplot(smcurves, raw_curves = FALSE, show_cb = FALSE)

## Raw ROC and Precision-Recall curves
autoplot(smcurves, raw_curves = TRUE, show_cb = FALSE)

## Reduced/Full supporting points
sampsm <- create_sim_samples(4, 5000, 5000)
mdatsm <- mmdata(sampsm$scores, sampsm$labels, expd_first = "dsids")
evalsm <- evalmod(mdatsm, raw_curves = TRUE)

# Reduced supporting points
system.time(autoplot(evalsm, raw_curves = TRUE))

# Full supporting points
system.time(autoplot(evalsm, raw_curves = TRUE, reduce_points = FALSE))

## Generate an smpoints object that contains basic evaluation measures
smpoints <- evalmod(mdat, mode = "basic")

## Normalized ranks vs. average basic evaluation measures
autoplot(smpoints)


##################################################
### Multiple models & multiple test datasets
###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(10, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]],
               dsids = samps[["dsids"]])

## Generate an mmcurves object that contains ROC and Precision-Recall curves
mmcurves <- evalmod(mdat, raw_curves = TRUE)

## Average ROC and Precision-Recall curves
autoplot(mmcurves, raw_curves = FALSE)

## Show confidence bounds
autoplot(mmcurves, raw_curves = FALSE, show_cb = TRUE)

## Raw ROC and Precision-Recall curves
autoplot(mmcurves, raw_curves = TRUE)

## Reduced/Full supporting points
sampmm <- create_sim_samples(4, 5000, 5000)
mdatmm <- mmdata(sampmm$scores, sampmm$labels, modnames = c("m1", "m2"),
                 dsids = c(1, 2), expd_first = "modnames")
evalmm <- evalmod(mdatmm, raw_curves = TRUE)

# Reduced supporting points
system.time(autoplot(evalmm, raw_curves = TRUE))

# Full supporting points
system.time(autoplot(evalmm, raw_curves = TRUE, reduce_points = FALSE))

## Generate an mmpoints object that contains basic evaluation measures
mmpoints <- evalmod(mdat, mode = "basic")

## Normalized ranks vs. average basic evaluation measures
autoplot(mmpoints)


##################################################
### N-fold cross validation datasets
###

## Load test data
data(M2N50F5)

## Specify the necessary columns to create mdat
cvdat <- mmdata(nfold_df = M2N50F5, score_cols = c(1, 2),
                lab_col = 3, fold_col = 4,
                modnames = c("m1", "m2"), dsids = 1:5)

## Generate an mmcurves object that contains ROC and Precision-Recall curves
cvcurves <- evalmod(cvdat)

## Average ROC and Precision-Recall curves
autoplot(cvcurves)

## Show confidence bounds
autoplot(cvcurves, show_cb = TRUE)

## Generate an mmpoints object that contains basic evaluation measures
cvpoints <- evalmod(cvdat, mode = "basic")

## Normalized ranks vs. average basic evaluation measures
autoplot(cvpoints)


## End(Not run)
