Description
Log loss is an effective metric for measuring the performance of a classification model whose prediction output is a probability value between 0 and 1. Log loss quantifies the accuracy of a classifier by penalizing confident but incorrect predictions: a perfect model has a log loss of 0, and the loss grows as the predicted probability diverges from the actual label. It is the cost function used in logistic regression and in neural network classifiers.
mtr_log_loss computes the elementwise log loss between two numeric vectors, while mtr_mean_log_loss computes the average log loss between the two vectors.
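The relationship between the two variants can be sketched in plain R. This is a minimal sketch under the standard binary log loss definition with probability clipping, not the package source; `log_loss_sketch` and `mean_log_loss_sketch` are hypothetical names standing in for the mtr_* functions:

```r
## Sketch of elementwise binary log loss (assumption: standard definition
## -(y*log(p) + (1-y)*log(1-p)) with predictions clipped by eps).
log_loss_sketch <- function(actual, predicted, eps = 1e-15) {
  p <- pmin(pmax(predicted, eps), 1 - eps)  # clip so log() is always defined
  -(actual * log(p) + (1 - actual) * log(1 - p))
}

## The mean variant is simply the mean of the elementwise losses.
mean_log_loss_sketch <- function(actual, predicted, eps = 1e-15) {
  mean(log_loss_sketch(actual, predicted, eps))
}

log_loss_sketch(1, 0.9)                      # small: confident, correct
mean_log_loss_sketch(c(0, 1), c(0.2, 0.7))   # average over observations
```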
Usage

mtr_log_loss(actual, predicted, eps = 1e-15)
mtr_cross_entropy(actual, predicted, eps = 1e-15)
mtr_mean_log_loss(actual, predicted, eps = 1e-15)
mtr_mean_cross_entropy(actual, predicted, eps = 1e-15)
Arguments

actual	Numeric vector of observed labels (0 or 1).
predicted	Numeric vector of predicted probabilities, between 0 and 1.
eps	Small value used to clip predicted probabilities away from 0 and 1 so that the logarithm is always defined.
Value

A numeric vector: mtr_log_loss and mtr_cross_entropy return one value per observation, while mtr_mean_log_loss and mtr_mean_cross_entropy return a single averaged value.
Note

The logarithm used is the natural logarithm (base e).
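The base-e convention means losses are measured in nats. For example, with an actual label of 1 the log loss reduces to -log(predicted), so a predicted probability of exp(-1) costs about one nat:

```r
## With actual = 1, log loss is -log(predicted); under the natural-log
## convention a prediction of exp(-1) gives a loss of approximately 1.
-log(exp(-1))
```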
Author(s)

An Chu
Examples

## log loss for scalar inputs; see how the log loss converges to zero
mtr_log_loss(1, 0.1)
mtr_log_loss(1, 0.5)
mtr_log_loss(1, 0.9)
mtr_log_loss(1, 1)

## sample data
act <- c(0, 1, 1, 0, 0)
pred <- c(0.12, 0.45, 0.9, 0.3, 0.4)

## log loss vector
mtr_log_loss(act, pred)
mtr_cross_entropy(act, pred)

## mean log loss
mtr_mean_log_loss(act, pred)
mtr_mean_cross_entropy(act, pred)