Evaluate multi-label predictions

Description

This method is used to evaluate multi-label predictions. You can either create a confusion matrix object first or pass the test dataset and the predictions directly. You can also specify which measures you want to compute.
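As a rough sketch of the two call paths (assuming the utiml package, its bundled toyml dataset and the br() learner used in the Examples below), both paths should yield the same measures:

library(utiml)

pred <- predict(br(toyml), toyml)

# Path 1: pass the test dataset and the prediction directly
direct <- multilabel_evaluate(toyml, pred)

# Path 2: build an mlconfmat confusion matrix first, then evaluate it
viacm <- multilabel_evaluate(multilabel_confusion_matrix(toyml, pred))

# The two paths are expected to agree
all.equal(direct, viacm)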

Usage

multilabel_evaluate(object, ...)

## S3 method for class 'mldr'
multilabel_evaluate(object, mlresult, measures = c("all"), ...)

## S3 method for class 'mlconfmat'
multilabel_evaluate(object, measures = c("all"), ...)

Arguments

object

A mldr dataset or a mlconfmat confusion matrix

...

Extra parameters to specific measures.

mlresult

The prediction result (optional; required only when an mldr dataset is used).

measures

The names of the measures to be computed. Call multilabel_measures() to see the available measures. You can also use "bipartition", "ranking", "label-based", "example-based", "macro-based" and "micro-based" to include a whole group of measures. (Default: "all").
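For instance (a minimal sketch, assuming the utiml package and its toyml example dataset), you can list the available measure names and mix individual measures with a measure group:

library(utiml)

# List every measure name understood by multilabel_evaluate()
multilabel_measures()

# Mix individual measures with a whole measure group
pred <- predict(br(toyml), toyml)
multilabel_evaluate(toyml, pred, measures = c("hamming-loss", "macro-based"))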

Value

A vector with the values of the computed measures
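For instance (a sketch assuming the result comes back as a named numeric vector, using measure names that appear elsewhere on this page):

library(utiml)

pred <- predict(br(toyml), toyml)
result <- multilabel_evaluate(toyml, pred)

# Assuming a named numeric vector, individual measures can be picked out by name
result["accuracy"]
result[c("F1", "subset-accuracy")]

# or inspected at a glance
round(sort(result, decreasing = TRUE), 3)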

Methods (by class)

  • mldr: Default S3 method

  • mlconfmat: Default S3 method

References

Madjarov, G., Kocev, D., Gjorgjevikj, D., & Dzeroski, S. (2012). An extensive experimental comparison of methods for multi-label learning. Pattern Recognition, 45(9), 3084-3104.

Zhang, M.-L., & Zhou, Z.-H. (2014). A Review on Multi-Label Learning Algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8), 1819-1837.

Gibaja, E., & Ventura, S. (2015). A Tutorial on Multilabel Learning. ACM Comput. Surv., 47(3), 52:1-52:38.

Examples

## Not run: 
prediction <- predict(br(toyml), toyml)

# Compute all measures
multilabel_evaluate(toyml, prediction)

# Compute bipartition measures
multilabel_evaluate(toyml, prediction, "bipartition")

# Compute multiple measures
multilabel_evaluate(toyml, prediction, c("accuracy", "F1", "macro-based"))

# Compute the confusion matrix before the measures
cm <- multilabel_confusion_matrix(toyml, prediction)
multilabel_evaluate(cm)
multilabel_evaluate(cm, "example-based")
multilabel_evaluate(cm, c("hamming-loss", "subset-accuracy", "F1"))

## End(Not run)
