Description

Taking as input an mldr object and a matrix with the predictions given by a classifier, this function evaluates the classifier's performance through several multilabel metrics.
Usage

mldr_evaluate(mldr, predictions, threshold = 0.5)
Arguments

mldr
    Object of "mldr" class.
predictions
    Matrix with the labels predicted for each instance in the mldr object.
threshold
    Threshold used to generate a bipartition of the labels. By default the value 0.5 is used, as sketched below.
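For illustration, here is a minimal sketch of how such a threshold turns a matrix of prediction scores into a bipartition. The scores matrix is a hypothetical stand-in, not part of the package API, and whether the boundary value itself counts as relevant is an assumption that may differ from mldr's internal rule:

# Hypothetical per-label scores in [0, 1], one row per instance
scores <- matrix(c(0.9, 0.2, 0.6,
                   0.4, 0.8, 0.1), nrow = 2, byrow = TRUE)

# Labels whose score reaches the threshold are predicted as relevant (1)
# (>= is an assumption; mldr's exact boundary handling may differ)
bipartition <- ifelse(scores >= 0.5, 1, 0)
bipartition
#      [,1] [,2] [,3]
# [1,]    1    0    1
# [2,]    0    1    0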
Value

A list with multilabel predictive performance measures. The items in the list will be the following (a hand-worked check of hamming_loss is sketched after the list):
accuracy
example_auc
average_precision
coverage
fmeasure
hamming_loss
macro_auc
macro_fmeasure
macro_precision
macro_recall
micro_auc
micro_fmeasure
micro_precision
micro_recall
one_error
precision
ranking_loss
recall
subset_accuracy
roc
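As a rough sanity check, some of these measures can be recomputed by hand from the bipartition. The sketch below does this for hamming_loss under its standard definition, the fraction of instance-label pairs that are misclassified; true_labels and scores are hypothetical stand-ins, and it is assumed, not verified here, that mldr's internal computation matches this definition:

# Hypothetical true labels and scores for 2 instances and 3 labels
true_labels <- matrix(c(1, 0, 1,
                        0, 0, 1), nrow = 2, byrow = TRUE)
scores      <- matrix(c(0.9, 0.2, 0.6,
                        0.4, 0.8, 0.1), nrow = 2, byrow = TRUE)
bipartition <- ifelse(scores >= 0.5, 1, 0)

# Standard Hamming loss: proportion of misclassified instance-label pairs
mean(bipartition != true_labels)  # 2 of 6 pairs differ -> 0.3333...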
The roc element corresponds to a "roc" object associated with the MicroAUC value. This object can be given as input to plot for plotting the ROC curve.
The example_auc, macro_auc, micro_auc and roc members will be NULL if the pROC package is not installed.
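Given that, it can be worth guarding against a missing pROC before touching these members. A minimal sketch, assuming res was returned by mldr_evaluate as in the Examples section; pROC::auc() is pROC's standard accessor for the area under a "roc" object:

res <- mldr_evaluate(emotions, predictions)
if (!is.null(res$roc)) {
  plot(res$roc)              # ROC curve associated with the MicroAUC value
  print(pROC::auc(res$roc))  # area under the curve
} else {
  message("Install the pROC package to obtain the AUC measures and ROC curve")
}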
See Also

mldr, Basic metrics, Averaged metrics, Ranking-based metrics, roc.mldr
Examples

## Not run:
library(mldr)

# Get the true labels in emotions (593 instances, 6 labels)
predictions <- as.matrix(emotions$dataset[, emotions$labels$index])

# and introduce some noise (alternatively, get the predictions from some classifier)
noised_labels <- cbind(sample(1:593, 200, replace = TRUE), sample(1:6, 200, replace = TRUE))
predictions[noised_labels] <- sample(0:1, 200, replace = TRUE)

# then evaluate the predictive performance
res <- mldr_evaluate(emotions, predictions)
str(res)
plot(res$roc, main = "ROC curve for emotions")
## End(Not run)