Description
Evaluation of the performance of risk prediction models with binary status response variable (case/control or similar). Roc curves are either based on a single continuous marker, or on the probability prediction of an event. Probability predictions are extracted from a given (statistical) model, such as logistic regression, or algorithm, such as random forest. The area under the curve and the Brier score is used to summarize and compare the performance.
Usage

## S3 method for class 'list'
Roc(object, formula, data, splitMethod='noSplitMethod',
noinf.method=c('simulate'), simulate='reeval', B, M, breaks, cbRatio=1,
RocAverageMethod='vertical',
RocAverageGrid=switch(RocAverageMethod, 'vertical'=seq(0,1,.01),
'horizontal'=seq(1,0,.01)), model.args=NULL, model.parms=NULL,
keepModels=FALSE, keepSampleIndex=FALSE, keepCrossValRes=FALSE,
keepNoInfSimu, slaveseed, cores=1, na.accept=0, verbose=FALSE, ...)

Arguments

object
A named list of R objects that represent predictive markers, prediction models, or prediction algorithms. The function predictStatusProb is called on the R objects to extract the predicted risk (see details). For crossvalidation (e.g. when splitMethod is 'bootcv'), each object in the list must include a call that can be re-evaluated in a learning subset of the data.
formula
A formula whose left hand side is used to identify the binary outcome variable in data. If missing, the formula is extracted from the first element of object.
data
A data set in which to validate the prediction models. If missing, the function tries to extract the data from the call of the (first) model in object. The data set needs to have the same structure (variable names, factor levels, etc.) as the data in which the models were trained. If the subjects in data were not used to train the models given in object, this corresponds to external validation. However, note that if one of the elements in object is a formula, it is evaluated in this data set.

splitMethod
Method for estimating the generalization error. 'noSplitMethod' (the default) assesses the apparent performance in the data at hand; 'bootcv' uses bootstrap crossvalidation; 'boot632' and 'boot632+' compute Efron's bootstrap estimates, which linearly combine the apparent and the crossvalidated performance.

noinf.method
Experimental: for the .632+ method, the way to obtain the no-information performance. Either 'simulate' or 'none'.
simulate
Experimental: for the .632+ method. If 'reeval' (the default), the models are re-evaluated in permuted data when the no-information performance is simulated.

B
Number of repetitions for internal crossvalidation. Its meaning depends on the argument splitMethod: for the bootstrap-based methods it is the number of bootstrap samples.

M 
The size of the bootstrap samples for crossvalidation without replacement. 
breaks
Break points for computing the Roc curve. Defaults to the unique values of the marker or of the predicted risks.
cbRatio 
Experimental. Cost/benefit ratio. Default value is 1, meaning that misclassified cases are as bad as misclassified controls. 
RocAverageMethod
Method for averaging ROC curves across data splits. If 'vertical' (the default), sensitivities are averaged at a fixed grid of specificities; if 'horizontal', specificities are averaged at a fixed grid of sensitivities.
RocAverageGrid 
Grid points for the averaging of Roc curves. A sequence of values at which to compute averages across the ROC curves obtained for different data splits during crossvalidation. 
model.args
List of extra arguments that can be passed to the predictStatusProb methods, one entry for each element of object.
model.parms
List with exactly one entry for each entry in object. Each entry names parts of the fitted model that should be extracted and stored with the results.
keepModels
If TRUE, the fitted models are stored in the output; this can require a lot of memory.
keepSampleIndex
Logical. If FALSE, remove the resampling index (which subjects were used for training in each split) from the output.
keepCrossValRes
Logical. If TRUE, add the results from all B crossvalidation steps to the output.
keepNoInfSimu
Logical. If TRUE, keep the results of the no-information simulations used by the .632+ method.
slaveseed
Vector of seeds, as long as B, to be given to the slave processes in parallel computing so that results are reproducible.
cores
Number of cores for parallel computing. Passed as the value of the argument mc.cores when calling mclapply.
na.accept
For the 'bootcv' estimate of performance: the maximal number of bootstrap samples in which the training of the models may fail. This should usually be a small number relative to B.
verbose
If TRUE, report progress during the computation.
... 
Used to pass arguments to submodules. 
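To make the resampling arguments concrete, here is an illustrative sketch (not package code; the sample sizes are made up) of one bootstrap sample, as drawn for splitMethod='bootcv', and of one subsample of size M for crossvalidation without replacement:

```r
set.seed(1)
N <- 40          # number of subjects (illustrative)
M <- 30          # subsample size, as in the argument M
## bootstrap crossvalidation: draw N subjects with replacement,
## validate in the subjects not drawn (out-of-bag)
boot <- sample(1:N, size = N, replace = TRUE)
oob  <- setdiff(1:N, boot)
## crossvalidation without replacement: draw M subjects,
## validate in the remaining N - M subjects
learn <- sample(1:N, size = M, replace = FALSE)
test  <- setdiff(1:N, learn)
```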
Details

All functions work on a list of models to ease comparison.

Bootstrap crossvalidation techniques are implemented to estimate the generalization performance of the model(s), i.e., the performance that can be expected in new subjects.

By default, when crossvalidation is involved, the ROC curve is approximated on a grid of either sensitivities or specificities rather than computed at all unique changepoints of the crossvalidated ROC curves; see Fawcett (2006). The density of the grid can be controlled with the argument RocAverageGrid.

Missing values in the response or in the marker/predicted risk cause the function to fail.
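The averaging over the grid can be pictured as follows; a minimal sketch (with made-up curves, not package internals) of vertical averaging, i.e., averaging sensitivities at a common grid of 1 - specificity values, via linear interpolation:

```r
## Two made-up crossvalidated ROC curves as (1 - specificity, sensitivity) points
fpr1 <- c(0, .2, .5, 1); tpr1 <- c(0, .5, .8, 1)
fpr2 <- c(0, .3, .6, 1); tpr2 <- c(0, .4, .9, 1)
grid <- seq(0, 1, .01)   # plays the role of RocAverageGrid
## interpolate each curve on the common grid, then average pointwise
sens1 <- approx(fpr1, tpr1, xout = grid)$y
sens2 <- approx(fpr2, tpr2, xout = grid)$y
avgSens <- (sens1 + sens2) / 2
```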
For each R object which can potentially predict a probability for an event, there should be a corresponding predictStatusProb method: for example, to assess a prediction model which evaluates to a myclass object, one defines a function called predictStatusProb.myclass with arguments object, newdata, .... For example, the function predictStatusProb.lrm looks like this:

predictStatusProb.lrm <- function(object,newdata,...){
  p <- as.numeric(predict(object,newdata=newdata,type='fitted'))
  class(p) <- 'predictStatusProb'
  p
}
Currently implemented are predictStatusProb methods for the following R functions:

- numeric (marker values are passed on)
- formula (single predictor: extracted from newdata and passed on; multiple predictors: projected to a score by logistic regression)
- glm (from library(stats))
- lrm (from library(Design))
- rpart (from library(rpart))
- BinaryTree (from library(party))
- ElasticNet (a wrapper for glmnet from library(glmnet))
- randomForest (from library(randomForest))
- rfsrc (from library(randomForestSRC))
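For a model class that is not in this list, a user-defined method following the same pattern suffices. A minimal sketch, where the class name 'myclass' and its predict() interface are hypothetical stand-ins for a user's own model class:

```r
## Sketch only: 'myclass' and its predict() signature are hypothetical.
## The method must return one event probability per row of newdata,
## classed as 'predictStatusProb'.
predictStatusProb.myclass <- function(object, newdata, ...) {
  p <- as.numeric(predict(object, newdata = newdata, type = "response"))
  class(p) <- "predictStatusProb"
  p
}
```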
Value

Object of class Roc or class Brier. Depending on the splitMethod, the object includes the following components:

Roc, Brier, Auc
A list of Roc curve(s), Brier scores (BS), and areas under the curves (Auc), one for each element of argument object.

weight
The weight used to linearly combine the apparent and the crossvalidated performance (.632 and .632+ methods).

overfit
Estimated overfit of the model(s) (.632 and .632+ methods).

call
The call that produced the object.

models
See keepModels.

method
Summary of the splitMethod used.
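The weight and overfit components correspond to Efron & Tibshirani's .632+ construction. A sketch with made-up error values (not package output), where R denotes the relative overfit:

```r
## Illustrative error estimates (made-up numbers)
err.app    <- 0.10  # apparent error, models evaluated in the training data
err.bootcv <- 0.25  # bootstrap-crossvalidated error
err.noinf  <- 0.50  # no-information error (response permuted)
## relative overfit and the .632+ weight
R <- (err.bootcv - err.app) / (err.noinf - err.app)
w <- 0.632 / (1 - 0.368 * R)
## the .632+ estimate linearly combines apparent and crossvalidated error
err.632plus <- (1 - w) * err.app + w * err.bootcv
```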
Author(s)

Thomas Gerds [email protected]
References

Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27, 861–874.

Gerds, T. A., Cai, T. & Schumacher, M. (2008). The performance of risk prediction models. Biometrical Journal, 50(4), 457–479.

Efron, B. & Tibshirani, R. (1997). Improvements on cross-validation: The .632+ bootstrap method. Journal of the American Statistical Association, 92, 548–560.

Wehberg, S. & Schumacher, M. (2004). A comparison of nonparametric error rate estimation methods in classification problems. Biometrical Journal, 46, 35–47.
Examples

## Generate some data with binary response Y
## depending on X1, X2, X3 and X3*X2
set.seed(40)
N <- 40
X1 <- rnorm(N)
X2 <- abs(rnorm(N,4))
X3 <- rbinom(N,1,.4)
expit <- function(x) exp(x)/(1+exp(x))
lp <- expit(2 + X1 + X2 + X3 - X3*X2)
Y <- factor(rbinom(N,1,lp))
dat <- data.frame(Y=Y,X1=X1,X2=X2)
# single markers, one by one
r1 <- Roc(Y~X1,data=dat)
plot(r1,col=1)
r2 <- Roc(Y~X2,data=dat)
lines(r2,col=2)
# or, directly multiple in one
r12 <- Roc(list(Y~X1,Y~X2),data=dat)
plot(r12)
## compare logistic regression
lm1 <- glm(Y~X1,data=dat,family="binomial")
lm2 <- glm(Y~X1+X2,data=dat,family="binomial")
r1 <- Roc(list(LR.X1=lm1,LR.X1.X2=lm2))
summary(r1)
Brier(list(lm1,lm2))
# machine learning
library(randomForest)
dat$Y <- factor(dat$Y)
rf <- randomForest(Y~X2,data=dat)
rocCV <- Roc(list(RandomForest=rf,LogisticRegression=lm2),
             data=dat,
             splitMethod="bootcv",
             B=3,
             cbRatio=1)
plot(rocCV)
# compute .632+ estimate of Brier score
bs <- Brier(list(LR.X1=lm1,LR.X2=lm2),
            data=dat,
            splitMethod="boot632+",
            B=3)
bs
