classification | R Documentation

Description
Weighted versions of non-probabilistic and probabilistic classification metrics:

- accuracy(): Accuracy (higher is better).
- classification_error(): Classification error = 1 - Accuracy (lower is better).
- precision(): Precision (higher is better).
- recall(): Recall (higher is better).
- f1_score(): F1 score, the harmonic mean of precision and recall (higher is better).
- AUC(): Area under the ROC curve (higher is better).
- gini_coefficient(): Gini coefficient, equivalent to 2 * AUC - 1. Up to ties in predicted, equivalent to Somers' D (higher is better). See the sketch after this list.
- deviance_bernoulli(): Average Bernoulli deviance. Equals twice the log loss/binary cross entropy (smaller is better).
- logLoss(): Log loss/binary cross entropy. Equals half the average Bernoulli deviance (smaller is better).
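The following minimal sketch illustrates the identities stated above. It assumes the package exporting these functions is attached; the example values are made up for illustration only.

# Sketch only: illustrative data, assumes the package is attached.
y <- c(0, 0, 1, 1)
p <- c(0.2, 0.4, 0.3, 0.9)

# Gini coefficient equals 2 * AUC - 1
all.equal(gini_coefficient(y, p), 2 * AUC(y, p) - 1)

# Average Bernoulli deviance equals twice the log loss
all.equal(deviance_bernoulli(y, p), 2 * logLoss(y, p))

# F1 score is the harmonic mean of precision and recall (hard 0/1 predictions)
pred <- c(0, 1, 1, 1)
all.equal(f1_score(y, pred), 2 / (1 / precision(y, pred) + 1 / recall(y, pred)))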
Usage

accuracy(actual, predicted, w = NULL, ...)
classification_error(actual, predicted, w = NULL, ...)
precision(actual, predicted, w = NULL, ...)
recall(actual, predicted, w = NULL, ...)
f1_score(actual, predicted, w = NULL, ...)
AUC(actual, predicted, w = NULL, ...)
gini_coefficient(actual, predicted, w = NULL, ...)
deviance_bernoulli(actual, predicted, w = NULL, ...)
logLoss(actual, predicted, w = NULL, ...)
Arguments

- actual: Observed values.
- predicted: Predicted values.
- w: Optional case weights.
- ...: Further arguments passed to
Details

Note that the function AUC() was originally modified from the 'glmnet' package to ensure deterministic results. The unweighted version can differ from the weighted one with unit weights due to ties in predicted values.
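For intuition on how ties in predicted values enter, here is a hand-rolled, unweighted AUC based on the standard pairwise definition, in which a tied pair contributes 0.5. This is an illustration of the general concept only, not the package's implementation.

# Illustration only: standard pairwise (Mann-Whitney) AUC, not the package's code.
auc_by_pairs <- function(actual, predicted) {
  pos <- predicted[actual == 1]   # scores of positives
  neg <- predicted[actual == 0]   # scores of negatives
  wins <- outer(pos, neg, ">") + 0.5 * outer(pos, neg, "==")
  mean(wins)
}
auc_by_pairs(c(0, 1, 0, 1), c(0.1, 0.1, 0.9, 0.8))  # the tie at 0.1 contributes 0.5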
Value

A numeric vector of length one.
Input range

- For precision(), recall(), and f1_score(): The actual and predicted values need to be in {0, 1}.
- For accuracy() and classification_error(): Any discrete input.
- For AUC() and gini_coefficient(): Only actual must be in {0, 1}.
- For deviance_bernoulli() and logLoss(): The values of actual must be in {0, 1}, while predicted must be in the closed interval [0, 1].
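As a rough cross-check of these conventions, the usual weighted formulas can be written directly in base R. This is a sketch of the textbook definitions with made-up data; the package internals may handle edge cases differently.

# Textbook weighted definitions (sketch, not the package internals).
y <- c(0, 1, 1, 0)             # actual values in {0, 1}
p <- c(0.1, 0.8, 0.6, 0.3)     # predicted probabilities in [0, 1]
w <- c(1, 2, 1, 1)             # case weights

# Weighted log loss / binary cross entropy
weighted.mean(-(y * log(p) + (1 - y) * log(1 - p)), w)

# Weighted accuracy and classification error for hard 0/1 predictions
pred <- as.numeric(p > 0.5)
weighted.mean(y == pred, w)        # accuracy
1 - weighted.mean(y == pred, w)    # classification error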
Examples

y <- c(0, 0, 1, 1)
pred <- c(0, 0, 1, 0)
w <- y * 2
accuracy(y, pred)
classification_error(y, pred, w = w)
precision(y, pred, w = w)
recall(y, pred, w = w)
f1_score(y, pred, w = w)
y2 <- c(0, 1, 0, 1)
pred2 <- c(0.1, 0.1, 0.9, 0.8)
w2 <- 1:4
AUC(y2, pred2)
AUC(y2, pred2, w = rep(1, 4)) # Different due to ties in predicted
gini_coefficient(y2, pred2, w = w2)
logLoss(y2, pred2, w = w2)
deviance_bernoulli(y2, pred2, w = w2)
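Continuing with the same example data, the classification error can be cross-checked against 1 - accuracy, per the identity stated in the description (a sketch, reusing y, pred, and w from above).

# Cross-check: classification error equals 1 - accuracy (same data as above).
all.equal(classification_error(y, pred, w = w), 1 - accuracy(y, pred, w = w))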