cvAUC

The cvAUC R package provides a computationally efficient means of estimating confidence intervals (or variance) of cross-validated Area Under the ROC Curve (AUC) estimates. This allows you to generate confidence intervals in seconds, compared to other techniques that are many orders of magnitude slower.

In binary classification problems, the AUC is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance.

For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC can be used.

The primary functions of the package are ci.cvAUC() and ci.pooled.cvAUC(), which report cross-validated AUC and compute influence curve based confidence intervals for cross-validated AUC estimates on i.i.d. and pooled repeated measures data, respectively. A key benefit of influence curve based confidence intervals is that they require far less computation time than bootstrapping methods. The utility functions AUC() and cvAUC() are simple wrappers for functions from the ROCR package.
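As a quick sketch of the main interface, here is a minimal toy call to ci.cvAUC() on simulated data (the simulated labels, scores, and 2-fold assignment are assumptions for illustration only; the full demo below uses a real dataset):

```r
# Minimal sketch of ci.cvAUC() on simulated data (illustrative only).
library(cvAUC)

set.seed(1)
n <- 200
labels <- rbinom(n, 1, 0.5)             # simulated binary outcome
predictions <- 0.5 * labels + runif(n)  # noisy but informative scores
folds <- rep(1:2, length.out = n)       # vector of fold ids (2 folds)

out <- ci.cvAUC(predictions = predictions, labels = labels,
                folds = folds, confidence = 0.95)
out$cvAUC  # point estimate of cross-validated AUC
out$ci     # 95% confidence interval
```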

Erin LeDell, Maya L. Petersen & Mark J. van der Laan, "Computationally Efficient Confidence Intervals for Cross-validated Area Under the ROC Curve Estimates." (Electronic Journal of Statistics) - Open access article: https://doi.org/10.1214/15-EJS1035

Install cvAUC

You can install:
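For example, assuming the stable release is on CRAN and the development version lives on GitHub under ledell/cvAUC:

```r
# Stable release from CRAN:
install.packages("cvAUC")

# Or the development version from GitHub (requires the devtools package):
# install.packages("devtools")
devtools::install_github("ledell/cvAUC")
```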

Using cvAUC

Here is a demo of how you can use the package, along with some benchmarks of the speed of the method. For a simpler example that runs faster, you can check out the help files for the various functions inside the R package.

In this example of the ci.cvAUC() function, we do the following:

- Load a sample binary outcome training set (10,000 rows) into R.
- Create cross-validation folds, stratified by the outcome.
- In parallel, train a Random Forest on each set of training folds and generate predicted values on the corresponding held-out fold.
- Compute the cross-validated AUC and its influence curve based 95% confidence interval with ci.cvAUC().

First, we define a few utility functions:

.cvFolds <- function(Y, V){
  # Create CV folds (stratify by outcome)   
  Y0 <- split(sample(which(Y==0)), rep(1:V, length = length(which(Y==0))))
  Y1 <- split(sample(which(Y==1)), rep(1:V, length = length(which(Y==1))))
  folds <- vector("list", length = V)
  for (v in seq(V)) {folds[[v]] <- c(Y0[[v]], Y1[[v]])}     
  return(folds)
}
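A quick sanity check of the fold helper above (a small illustrative snippet using the .cvFolds() function just defined; the simulated outcome vector is an assumption): each fold should have a similar class balance, and together the folds should partition all row indices.

```r
set.seed(1)
Y <- rbinom(100, 1, 0.3)       # simulated binary outcome
folds <- .cvFolds(Y, V = 5)    # 5 stratified folds

sapply(folds, length)                       # roughly 20 indices per fold
sapply(folds, function(idx) mean(Y[idx]))   # similar class-1 proportion per fold
all(sort(unlist(folds)) == seq_along(Y))    # folds partition all rows: TRUE
```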

.doFit <- function(v, folds, train, y){
  # Train & test a model; return predicted values on test samples
  set.seed(v)
  ycol <- which(names(train) == y)
  params <- list(x = train[-folds[[v]], -ycol],
                 y = as.factor(train[-folds[[v]], ycol]),
                 xtest = train[folds[[v]], -ycol])
  fit <- do.call(randomForest, params)
  # Fraction of trees voting for class 1 on the held-out fold,
  # used as the predicted probability
  pred <- fit$test$votes[,2]
  return(pred)
}

This function will execute the example:

iid_example <- function(train, y = "response", V = 10, seed = 1) {

  # Create folds
  set.seed(seed)
  folds <- .cvFolds(Y = train[,c(y)], V = V)

  # Generate CV predicted values
  cl <- makeCluster(detectCores())
  registerDoParallel(cl)
  predictions <- foreach(v = 1:V, .combine = "c", 
    .packages = c("randomForest"),
    .export = c(".doFit")) %dopar% .doFit(v, folds, train, y)
  stopCluster(cl)
  # Re-order the fold-concatenated predictions back into original row order
  # (safe because the right-hand side is fully evaluated before assignment)
  predictions[unlist(folds)] <- predictions

  # Get CV AUC and 95% confidence interval
  runtime <- system.time(res <- ci.cvAUC(predictions = predictions, 
                                         labels = train[,c(y)],
                                         folds = folds, 
                                         confidence = 0.95))
  print(runtime)
  return(res)
}

Load a sample binary outcome training set into R with 10,000 rows:

train_csv <- "https://erin-data.s3.amazonaws.com/higgs/higgs_train_10k.csv"
train <- read.csv(train_csv, header = TRUE, sep = ",")

Run the example:

library(randomForest)
library(doParallel)  # to speed up the model training in the example
library(cvAUC)

res <- iid_example(train = train, y = "response", V = 10, seed = 1)
#   user  system elapsed 
#  0.096   0.005   0.102 

print(res)
# $cvAUC
# [1] 0.7818224
# 
# $se
# [1] 0.004531916
# 
# $ci
# [1] 0.7729400 0.7907048
# 
# $confidence
# [1] 0.95
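As a consistency check (a hand calculation using the numbers printed above), the reported interval is the standard normal approximation cvAUC ± z·se:

```r
est <- 0.7818224     # $cvAUC from the output above
se  <- 0.004531916   # $se from the output above
z   <- qnorm(0.975)  # ~1.96 for a 95% interval

round(c(est - z * se, est + z * se), 7)
# [1] 0.7729400 0.7907048  (matches the $ci component above)
```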

cvAUC Performance

For the example above (10,000 observations), it took ~0.1 seconds to calculate the cross-validated AUC and the influence curve based confidence intervals. This was benchmarked on a 3.1 GHz Intel Core i7 processor using cvAUC package version 1.1.3.

To benchmark bigger (i.i.d.) training sets yourself, replace the 10k-row training csv above with either of these files:

train_csv <- "https://erin-data.s3.amazonaws.com/higgs/higgs_train_100k.csv"
train_csv <- "https://erin-data.s3.amazonaws.com/higgs/higgs_train_1M.csv"  


ledell/cvAUC documentation built on Jan. 24, 2022, 5:37 p.m.