mlknn: Multi-label KNN (ML-KNN) for Multi-label Classification


View source: R/method_mlknn.R

Description

Create an ML-KNN classifier to predict multi-label data. ML-KNN is a multi-label lazy learning method derived from the traditional K-nearest neighbor (KNN) algorithm. For each unseen instance, its K nearest neighbors in the training set are identified; then, based on statistical information gained from the label sets of these neighboring instances, the maximum a posteriori (MAP) principle is used to determine the label set of the unseen instance.
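
To make the decision rule concrete, the sketch below (not the package implementation) applies the MAP principle for a single label: given the prior probability of the label, estimates of observing j of the k neighbors with the label under label presence and absence, and the observed neighbor count C, it picks the more probable hypothesis. All names and values are hypothetical.

## Minimal sketch of the per-label MAP rule used by ML-KNN (illustrative only).
## prior1: P(label present); post1[j + 1]: P(j of k neighbors carry the label | present);
## post0[j + 1]: the same probability given the label is absent; C: observed neighbor count.
map_decision <- function(prior1, post1, post0, C) {
  p_yes <- prior1 * post1[C + 1]        # P(H1) * P(E_C | H1)
  p_no  <- (1 - prior1) * post0[C + 1]  # P(H0) * P(E_C | H0)
  list(has_label = p_yes > p_no,
       confidence = p_yes / (p_yes + p_no))
}

## Hypothetical estimates for k = 3 and a test instance with C = 2 matching neighbors:
map_decision(prior1 = 0.3, post1 = c(0.1, 0.2, 0.3, 0.4),
             post0 = c(0.5, 0.3, 0.15, 0.05), C = 2)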

Usage

mlknn(
  mdata,
  k = 10,
  s = 1,
  distance = "euclidean",
  ...,
  cores = getOption("utiml.cores", 1),
  seed = getOption("utiml.seed", NA)
)

Arguments

mdata

A mldr dataset used to train the model.

k

The number of neighbors. (Default: 10)

s

Smoothing parameter controlling the strength of the uniform prior. When set to 1, Laplace smoothing is applied. (Default: 1)

distance

The name of the method used to compute the distance between instances. See dist for the list of options. (Default: "euclidean")

...

Not used.

cores

Ignored because this method does not support multi-core.

seed

Ignored because this method is deterministic.
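
As a usage sketch, the call below fits ML-KNN on the toyml dataset shipped with utiml using a larger neighborhood and a different distance; "manhattan" is one of the methods accepted by dist. The parameter values are illustrative, not recommendations.

library(utiml)
## Fit ML-KNN with 5 neighbors, Laplace smoothing and Manhattan distance (illustrative values)
model <- mlknn(toyml, k = 5, s = 1, distance = "manhattan")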

Value

An object of class MLKNNmodel containing the fitted model components, including:

labels

A vector with the label names.

prior

The prior probability of each label to occur.

posterior

The posterior probability of each label, given the number of its k nearest neighbors that contain it.
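
Assuming the documented components are stored as named elements of the returned object (as the element names above suggest), they could be inspected like this:

model <- mlknn(toyml, k = 3)
model$labels     # label names
model$prior      # prior probability of each label
model$posterior  # posterior probabilities given the neighbor counts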

References

Zhang, M.L. L., & Zhou, Z.H. H. (2007). ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7), 2038-2048.

Examples

model <- mlknn(toyml, k = 3)   # fit ML-KNN with 3 nearest neighbors
pred <- predict(model, toyml)  # predict the label sets of the same dataset
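
As a follow-up sketch, the probabilistic predictions could be converted to bipartitions and evaluated with the utiml helpers as.bipartition and multilabel_evaluate; the chosen measures are illustrative.

bipartition <- as.bipartition(pred)              # threshold probabilities into 0/1 labels
multilabel_evaluate(toyml, pred, c("hamming-loss", "F1"))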
