mldr_evaluate: Evaluate predictions made by a multilabel classifier

View source: R/evaluate.R

Description

Taking as input an mldr object and a matrix with the predictions given by a classifier, this function evaluates the classifier performance through several multilabel metrics.

Usage

mldr_evaluate(mldr, predictions, threshold = 0.5)

Arguments

mldr

Object of "mldr" class containing the instances to evaluate

predictions

Matrix with the labels predicted for each instance in the mldr parameter. Each element should be a value in the [0,1] range.

threshold

Threshold used to generate the bipartition of labels from the predicted scores. By default the value 0.5 is used (see the illustrative sketch below).
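
As a rough, hypothetical sketch (the exact comparison rule is an internal detail of the package), the bipartition derived from a score matrix and the threshold could be obtained like this:

# 'scores' is a made-up matrix of per-label prediction scores in the [0,1] range
scores <- matrix(runif(12), nrow = 4, ncol = 3)
threshold <- 0.5
# scores at or above the threshold become positive (1) labels, the rest negative (0)
bipartition <- ifelse(scores >= threshold, 1, 0)
bipartition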

Value

A list with multilabel predictive performance measures, covering the metric families referenced in the See Also section (basic, averaged and ranking-based metrics).

The roc element corresponds to a roc object associated with the MicroAUC value. This object can be passed to plot to draw the ROC curve. The example_auc, macro_auc, micro_auc and roc members will be NULL if the pROC package is not installed.
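
Because those entries can be NULL, a defensive check before plotting is a reasonable pattern (a minimal sketch, assuming emotions and a predictions matrix as in the Examples section below):

res <- mldr_evaluate(emotions, predictions)
if (!is.null(res$roc)) {
  # pROC is installed, so the roc object is available and can be plotted
  plot(res$roc, main = "ROC curve")
} else {
  message("Install the pROC package to obtain the AUC measures and the ROC curve")
}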

See Also

mldr, Basic metrics, Averaged metrics, Ranking-based metrics, roc.mldr

Examples

## Not run: 
library(mldr)

# Get the true labels in emotions
predictions <- as.matrix(emotions$dataset[, emotions$labels$index])
# and introduce some noise (alternatively get the predictions from some classifier)
noised_labels <- cbind(sample(1:593, 200, replace = TRUE), sample(1:6, 200, replace = TRUE))
predictions[noised_labels] <- sample(0:1, 200, replace = TRUE)
# then evaluate predictive performance
res <- mldr_evaluate(emotions, predictions)
str(res)
plot(res$roc, main = "ROC curve for emotions")

## End(Not run)
