roc.test — R Documentation
This function compares two correlated (or paired) or uncorrelated (unpaired) ROC curves. The DeLong and bootstrap methods test for a difference in the (partial) AUC of the ROC curves. The Venkatraman method tests whether the two curves are perfectly superposed. The sensitivity and specificity methods test whether the sensitivity (respectively specificity) of the ROC curves differs at a given level of specificity (respectively sensitivity). Several syntaxes are available: two objects of class roc (which can be AUC or smoothed ROC), three vectors (response, predictor1, predictor2), or a response vector together with a matrix or data.frame with two columns (the predictors).
roc.test(...)
## S3 method for class 'roc'
roc.test(roc1, roc2, method=c("delong", "bootstrap",
"venkatraman", "sensitivity", "specificity"), sensitivity = NULL,
specificity = NULL, alternative = c("two.sided", "less", "greater"),
paired=NULL, reuse.auc=TRUE, boot.n=2000, boot.stratified=TRUE,
ties.method="first", progress=getOption("pROCProgress")$name,
parallel=FALSE, conf.level=0.95, ...)
## S3 method for class 'auc'
roc.test(roc1, roc2, ...)
## S3 method for class 'smooth.roc'
roc.test(roc1, roc2, ...)
## S3 method for class 'formula'
roc.test(formula, data, ...)
## Default S3 method:
roc.test(response, predictor1, predictor2=NULL,
na.rm=TRUE, method=NULL, ...)
roc1, roc2: the two ROC curves to compare. Either “roc”, “auc” or “smooth.roc” objects (types can be mixed).

response: a vector or factor, as for the roc function.

predictor1: a numeric or ordered vector as for the roc function, or a matrix or data.frame with two columns (the predictors).

predictor2: only if predictor1 was a vector, the second predictor as a numeric vector.

formula: a formula of the type response~predictor1+predictor2. Additional arguments such as data can be supplied.

data: a matrix or data.frame containing the variables in the formula. See model.frame for more details.

na.rm: if TRUE, the observations with NA values will be removed.

method: the method to use, either “delong”, “bootstrap”, “venkatraman”, “sensitivity” or “specificity”. The first letter is sufficient. If omitted, the appropriate method is selected as explained in the details.

sensitivity, specificity: if method="sensitivity" (respectively method="specificity"), the level of sensitivity (respectively specificity) at which to perform the test; must be a numeric of length 1.

alternative: specifies the alternative hypothesis. Either “two.sided”, “less” or “greater”. The first letter is sufficient. Default: “two.sided”. Only “two.sided” is available with method="venkatraman".

paired: a logical indicating whether you want a paired roc.test. If NULL, the paired status is auto-detected by are.paired.

reuse.auc: if TRUE (default) and the “roc” objects contain an “auc” field, re-use these specifications for the test. See the details on AUC specification below.

boot.n: for the resampling-based methods: the number of bootstrap replicates or permutations. Default: 2000.

boot.stratified: for method="bootstrap": should the bootstrap be stratified (same number of cases and controls in each replicate as in the original sample) or not. Default: TRUE.

ties.method: for method="venkatraman": the ties.method argument passed to rank, specifying how ties are handled. Default: "first".

progress: the name of the progress bar to display. Typically “none”, “win”, “tk” or “text” (see the name argument to create_progress_bar for more information).

parallel: if TRUE, the bootstrap is processed in parallel, using the parallel backend provided by plyr (foreach). A backend must be registered beforehand; see the sketch after this list.

conf.level: a numeric scalar between 0 and 1 (non-inclusive) which specifies the confidence level to use for any calculated CIs.

...: further arguments passed to or from other methods, especially arguments for roc and roc.test.roc when calling roc.test.default or roc.test.formula.
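Parallel processing requires a registered foreach backend. A minimal sketch, assuming the doParallel package is installed and roc1 and roc2 are two “roc” objects as in the Examples below:

library(doParallel)
cl <- makeCluster(2)  # assumption: 2 worker processes are available
registerDoParallel(cl)
roc.test(roc1, roc2, method="bootstrap", parallel=TRUE)
stopCluster(cl)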
This function compares two ROC curves. It is typically called with the two roc objects to compare. roc.test.default is provided as a convenience method and creates two roc objects before calling roc.test.roc.
Three methods are available: “delong”, “bootstrap” and “venkatraman” (see the “Computational details” section below). “delong” and “bootstrap” are tests over the AUC whereas “venkatraman” compares the ROC curves themselves.
The default is the “delong” method, except for the comparison of partial AUCs, smoothed curves and curves with different direction, where bootstrap is used. Using “delong” for partial AUCs and smoothed ROCs is not supported in pROC and results in an error. It is spurious to use “delong” for roc objects with different direction (a warning is issued but the spurious comparison is enforced). “venkatraman”'s test cannot be employed to compare smoothed ROC curves, or curves with partial AUC specifications. In addition, comparison of ROC curves with different direction should be done with care (a warning is produced as well).
If alternative="two.sided", a two-sided test for difference in AUC is performed. If alternative="less", the alternative is that the AUC of roc1 is smaller than the AUC of roc2. For method="venkatraman", only the “two.sided” test is available.
If the paired argument is not provided, the are.paired function is employed to detect the paired status of the ROC curves. It tests whether the original response is identical between the two ROC curves (this is always the case if the call is made with roc.test.default). This detection is unlikely to raise false positives, but this possibility cannot be excluded entirely: it would require identical sample sizes and identical response values in the same order in both ROC curves. If it happens to you, use paired=FALSE. If you know the ROC curves are paired you can pass paired=TRUE; however, this is unnecessary, as the paired status will be checked anyway.
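A minimal sketch of this detection, using the aSAH data from the Examples section:

# are.paired is the function roc.test uses to auto-detect the paired status
data(aSAH)
roc1 <- roc(aSAH$outcome, aSAH$s100b)
roc2 <- roc(aSAH$outcome, aSAH$wfns)
are.paired(roc1, roc2)              # TRUE: both built from the same response
roc.test(roc1, roc2, paired=FALSE)  # override the detection if needed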
For smoothed ROC curves, smoothing is performed again at each bootstrap replicate with the parameters originally provided. If a density smoothing was performed with user-provided density.cases or density.controls, the bootstrap cannot be performed and an error is issued.
A list of class "htest" with the following content:

p.value: the p-value of the test.

statistic: the value of the Z (method="delong") or D (method="bootstrap") statistic.

conf.int: the confidence interval of the test (currently only returned for the paired DeLong's test). Has an attribute conf.level giving the level of the confidence interval.

alternative: the alternative hypothesis.

method: the character string describing the test, for instance “DeLong's test for two correlated ROC curves” (if method="delong" and the curves are paired).

null.value: the expected value of the statistic under the null hypothesis, that is 0.

estimate: the AUC in the two ROC curves.

data.name: the names of the data that were used.

parameter: the parameters of the test, if any; typically the boot.n and boot.stratified values for the resampling-based methods.
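As the result is a standard “htest” object, its fields can be accessed directly. For example, with the roc1 and roc2 objects from the Examples section:

test <- roc.test(roc1, roc2)
test$p.value    # the p-value of the test
test$statistic  # the Z statistic (DeLong's test)
test$estimate   # the AUC of each of the two curves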
The comparison of the AUC of the ROC curves needs a specification of the AUC. The specification is defined by:

- the “auc” field in the “roc” objects, if reuse.auc is set to TRUE (default);
- passing the specification to auc with ... (arguments partial.auc, partial.auc.correct and partial.auc.focus). In this case, you must ensure either that the roc objects do not contain an auc field (if you called roc with auc=FALSE), or set reuse.auc=FALSE.
If reuse.auc=FALSE, the auc function will always be called with ... to determine the specification, even if the “roc” objects do contain an auc field. Likewise, if the “roc” objects do not contain an auc field, the auc function will always be called with ... to determine the specification.
The AUC specification is ignored in the Venkatraman test.
Warning: if the roc object passed to roc.test contains an auc field and reuse.auc=TRUE, auc is not called and arguments such as partial.auc are silently ignored.
With method="bootstrap"
, the processing is done as follow:
boot.n
bootstrap replicates are drawn from the
data. If boot.stratified
is TRUE, each replicate contains
exactly the same number of controls and cases than the original
sample, otherwise if FALSE the numbers can vary.
for each bootstrap replicate, the AUC of the two ROC curves are computed and the difference is stored.
The following formula is used:
D=\frac{AUC1-AUC2}{s}
where s is the standard deviation of the bootstrap differences and AUC1 and AUC2 the AUC of the two (original) ROC curves.
D is then compared to the normal distribution,
according to the value of alternative
.
See also the Bootstrap section in this package's documentation.
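The following sketch mirrors this procedure on the aSAH data; it is illustrative only and is not pROC's internal implementation:

# Stratified bootstrap of the AUC difference (illustrative sketch)
set.seed(42)
cases    <- which(aSAH$outcome == "Poor")
controls <- which(aSAH$outcome == "Good")
diffs <- replicate(2000, {
  idx <- c(sample(cases, replace=TRUE), sample(controls, replace=TRUE))
  auc(roc(aSAH$outcome[idx], aSAH$s100b[idx], quiet=TRUE)) -
    auc(roc(aSAH$outcome[idx], aSAH$wfns[idx], quiet=TRUE))
})
D <- (auc(roc1) - auc(roc2)) / sd(diffs)  # standardized difference
2 * pnorm(-abs(D))                        # two-sided p-value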
With method="delong"
, the processing is done as described in
DeLong et al. (1988) for paired ROC curves, using the algorithm
of Sun and Xu (2014). Only comparison of
two ROC curves is implemented. The method has been extended for
unpaired ROC curves where the p-value is computed with an unpaired
t-test with unequal sample size and unequal variance, with
D=\frac{V^r(\theta^r) - V^s(\theta^s) }{ \sqrt{S^r + S^s}}
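The variance and covariance terms entering this statistic can also be inspected with pROC's var and cov methods, for example:

var(roc1)        # DeLong variance of the AUC of roc1
cov(roc1, roc2)  # DeLong covariance between the two paired AUCs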
With method="venkatraman"
, the processing is done as described
in Venkatraman and Begg (1996) (for paired ROC curves) and Venkatraman
(2000) (for unpaired ROC curves) with boot.n
permutation of
sample ranks (with ties breaking). For consistency reasons, the same argument boot.n
as
in bootstrap defines the number of permutations to execute,
even though no bootstrap is performed.
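For instance, the number of permutations can be reduced for a faster, less precise test:

# boot.n sets the number of permutations here, despite its name
roc.test(roc1, roc2, method="venkatraman", boot.n=500)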
For method="specificity"
, the test assesses if the sensitivity of
the ROC curves are different at the level of specificity given by the
specificity
argument, which must be a numeric of length 1. Bootstrap is employed as with method="bootstrap"
and boot.n
and boot.stratified
are available. This is
identical to the test proposed by Pepe et al. (2009).
The method="sensitivity"
is very similar, but assesses if the specificity of
the ROC curves are different at the level of sensitivity given by the
sensitivity
argument.
If the “auc” specifications are different in the two roc objects, the warning “Different AUC specifications in the ROC curves. Enforcing the inconsistency, but unexpected results may be produced.” is issued.
If one or both ROC curves are “smooth.roc” objects with different smoothing specifications, the warning “Different smoothing parameters in the ROC curves. Enforcing the inconsistency, but unexpected results may be produced.” is issued. This warning can be benign, especially if ROC curves were generated with roc(..., smooth=TRUE) with different arguments to other functions (such as plot), or if you really want to compare two ROC curves smoothed differently.
If method="venkatraman"
, and alternative
is
“less” or “greater”, the warning “Only two-sided
tests are available for Venkatraman. Performing two-sided test instead.”
is produced and a two tailed test is performed.
Both DeLong's and Venkatraman's tests ignore the direction of the ROC curve, so that if the two ROC curves differ in the value of direction, the warning “(DeLong|Venkatraman)'s test should not be applied to ROC curves with different directions.” is printed. However, the spurious test is enforced.
If boot.stratified=FALSE and the sample has a large imbalance between cases and controls, it could happen that one or more of the replicates contains no case or control observation, or that there are not enough points for smoothing, producing a NA area. The warning “NA value(s) produced during bootstrap were ignored.” will be issued and the observation will be ignored. If you have a large imbalance in your sample, it could be safer to keep boot.stratified=TRUE.
When both ROC curves have an auc of 1 (or 100%), their variances and covariance will always be null, and therefore the p-value will always be 1. This is true for the “delong”, “bootstrap” and “venkatraman” methods alike. This result is misleading, as the variances and covariance are of course not truly null. A warning will be displayed to inform of this condition, and of the misleading output.
An error will also occur if you give a predictor2 when predictor1 is a matrix or a data.frame, if predictor1 has more than two columns, or if you do not give a predictor2 when predictor1 is a vector.
If density.cases
and density.controls
were provided
for smoothing, the error “Cannot compute the statistic on ROC
curves smoothed with density.controls and density.cases.” is
issued.
If method="venkatraman"
and one of the ROC curves is smoothed,
the error “Using Venkatraman's test for smoothed ROCs is not
supported.” is produced.
With method="specificity"
, the error “Argument
'specificity' must be numeric of length 1 for a specificity test.”
is given unless the specificity argument is specified as a numeric of
length 1. The “Argument 'sensitivity' must be numeric of length
1 for a sensitivity test.” message is given for
method="sensitivity"
under similar conditions.
We would like to thank E. S. Venkatraman and Colin B. Begg for their support in the implementation of their test.
Elisabeth R. DeLong, David M. DeLong and Daniel L. Clarke-Pearson (1988) “Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach”. Biometrics 44, 837–845.
James A. Hanley and Barbara J. McNeil (1982) “The meaning and use of the area under a receiver operating characteristic (ROC) curve”. Radiology 143, 29–36.
Margaret Pepe, Gary Longton and Holly Janes (2009) “Estimation and Comparison of Receiver Operating Characteristic Curves”. The Stata journal 9, 1.
Xavier Robin, Natacha Turck, Jean-Charles Sanchez and Markus Müller (2009) “Combination of protein biomarkers”. useR! 2009, Rennes. https://www.r-project.org/nosvn/conferences/useR-2009/abstracts/user_author.html
Xavier Robin, Natacha Turck, Alexandre Hainard, et al. (2011) “pROC: an open-source package for R and S+ to analyze and compare ROC curves”. BMC Bioinformatics, 12, 77. DOI: 10.1186/1471-2105-12-77.
Xu Sun and Weichao Xu (2014) “Fast Implementation of DeLong's Algorithm for Comparing the Areas Under Correlated Receiver Operating Characteristic Curves”. IEEE Signal Processing Letters, 21, 1389–1393. DOI: 10.1109/LSP.2014.2337313.
E. S. Venkatraman and Colin B. Begg (1996) “A distribution-free procedure for comparing receiver operating characteristic curves from a paired experiment”. Biometrika 83, 835–848. DOI: 10.1093/biomet/83.4.835.
E. S. Venkatraman (2000) “A Permutation Test to Compare Receiver Operating Characteristic Curves”. Biometrics 56, 1134–1138. DOI: 10.1111/j.0006-341X.2000.01134.x.
Hadley Wickham (2011) “The Split-Apply-Combine Strategy for Data Analysis”. Journal of Statistical Software, 40, 1–29. DOI: 10.18637/jss.v040.i01.
See also: roc, power.roc.test. The CRAN package plyr is employed in this function.
data(aSAH)
# Basic example with 2 roc objects
roc1 <- roc(aSAH$outcome, aSAH$s100b)
roc2 <- roc(aSAH$outcome, aSAH$wfns)
roc.test(roc1, roc2)
## Not run:
# The latter used DeLong's test. To use the bootstrap test:
roc.test(roc1, roc2, method="bootstrap")
# Increase boot.n for a more precise p-value:
roc.test(roc1, roc2, method="bootstrap", boot.n=10000)
## End(Not run)
# Alternative syntaxes
roc.test(aSAH$outcome, aSAH$s100b, aSAH$wfns)
roc.test(aSAH$outcome, data.frame(aSAH$s100b, aSAH$wfns))
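# The formula interface is equivalent:
roc.test(outcome ~ s100b + wfns, data = aSAH)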
# If we had a good a priori reason to think that wfns gives a
# better classification than s100b (in other words, AUC of roc1
# should be lower than AUC of roc2):
roc.test(roc1, roc2, alternative="less")
## Not run:
# Comparison can be done on smoothed ROCs
# Smoothing is re-done at each iteration, and execution is slow
roc.test(smooth(roc1), smooth(roc2))
# or:
roc.test(aSAH$outcome, aSAH$s100b, aSAH$wfns, smooth=TRUE, boot.n=100)
## End(Not run)
# or from an AUC (no smoothing)
roc.test(auc(roc1), roc2)
## Not run:
# Comparison of partial AUC:
roc3 <- roc(aSAH$outcome, aSAH$s100b, partial.auc=c(1, 0.8), partial.auc.focus="se")
roc4 <- roc(aSAH$outcome, aSAH$wfns, partial.auc=c(1, 0.8), partial.auc.focus="se")
roc.test(roc3, roc4)
# This is strictly equivalent to:
roc.test(roc3, roc4, method="bootstrap")
# Alternatively, we could re-use roc1 and roc2 to get the same result:
roc.test(roc1, roc2, reuse.auc=FALSE, partial.auc=c(1, 0.8), partial.auc.focus="se")
# Comparison on specificity and sensitivity
roc.test(roc1, roc2, method="specificity", specificity=0.9)
roc.test(roc1, roc2, method="sensitivity", sensitivity=0.9)
## End(Not run)
# Spurious use of DeLong's test with different direction:
roc5 <- roc(aSAH$outcome, aSAH$s100b, direction="<")
roc6 <- roc(aSAH$outcome, aSAH$s100b, direction=">")
roc.test(roc5, roc6, method="delong")
## Not run:
# Comparisons of the ROC curves
roc.test(roc1, roc2, method="venkatraman")
## End(Not run)
# Unpaired tests
roc7 <- roc(aSAH$outcome, aSAH$s100b)
# artificially create an roc8 unpaired with roc7
roc8 <- roc(aSAH$outcome[1:100], aSAH$s100b[1:100])
## Not run:
roc.test(roc7, roc8, paired=FALSE, method="delong")
roc.test(roc7, roc8, paired=FALSE, method="bootstrap")
roc.test(roc7, roc8, paired=FALSE, method="venkatraman")
roc.test(roc7, roc8, paired=FALSE, method="specificity", specificity=0.9)
## End(Not run)