View source: R/ranger_RF_util.R
get.auprc | R Documentation
Calculates the area under the precision-recall curve (AUPRC).
get.auprc(predictor, y, positive_class)
predictor | A numeric predictor: for example, one column of the class-probability table, giving the probability that a given observation belongs to the positive class of the factor y. |
y | A binary factor vector of observed class labels. |
positive_class | The level of the factor y to treat as the positive class. |
It’s a bit trickier to interpret AUPRC than it is to interpret AUROC. The AUPRC of a random classifier is equal to the fraction of positives (Saito et al.), where the fraction of positives is calculated as (# positive examples / total # examples). That means that _different_ classes have _different_ AUPRC baselines. A class with 12% positives has a baseline AUPRC of 0.12, so obtaining an AUPRC of 0.40 on this class is great. However, a class with 98% positives has a baseline AUPRC of 0.98, so obtaining an AUPRC of 0.40 on this class is bad.
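The baseline claim above can be checked numerically. The sketch below is plain R and is not the package's get.auprc implementation; the helper name auprc_ap is hypothetical. It estimates AUPRC via average precision and shows that random scores land near the positive-class fraction.

```r
# Sketch only: average-precision estimate of AUPRC
# (auprc_ap is a hypothetical helper, not the package's get.auprc).
auprc_ap <- function(scores, labels, positive) {
  ord <- order(scores, decreasing = TRUE)        # rank observations, best score first
  hit <- labels[ord] == positive                 # TRUE where a positive appears in the ranking
  precision_at_k <- cumsum(hit) / seq_along(hit) # precision at each cutoff k
  mean(precision_at_k[hit])                      # average precision over positive ranks
}

set.seed(1)
y <- factor(c(rep("pos", 12), rep("neg", 88)))   # 12% positives
# Random scores: the AUPRC estimate should hover around the 0.12 baseline
baseline_est <- mean(replicate(500, auprc_ap(runif(100), y, "pos")))
baseline_est
```

Averaged over many random score vectors, the estimate settles near 0.12, the fraction of positives, consistent with the Saito et al. baseline; a perfect ranking (all positives scored above all negatives) gives an average precision of 1.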
auprc: a numeric value, the area under the precision-recall curve.
Shi Huang
y <- factor(c(rep("A", 10), rep("B", 50)))
y1 <- factor(c(rep("A", 18), rep("B", 20), rep("C", 22)))
y0 <- factor(rep("A", 60))
pred <- c(runif(10, 0.4, 0.9), runif(50, 0, 0.6))
prob <- data.frame(A = pred, B = 1 - pred)
positive_class <- "A"
get.auprc(predictor = prob[, positive_class], y, positive_class = "A")
get.auprc(predictor = prob[, positive_class], y, positive_class = "B")
get.auprc(predictor = prob[, positive_class], y0, positive_class = "A")
get.auprc(predictor = prob[, positive_class], y1, positive_class = "A")