JMIM: Minimal joint mutual information maximisation filter


View source: R/algorithms.R

Description

The method starts with the attribute of maximal mutual information with the decision Y. Then, it greedily adds the attribute X with the maximal value of the following criterion:

J(X)=\min_{W\in S} I(X,W;Y),

where S is the set of already selected attributes.
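
For illustration, the greedy criterion can be sketched in plain R as below; mi, jmi and jmim_sketch are hypothetical helper names (not part of praznik, whose implementation is compiled code), and the sketch assumes X is a data frame of factors and Y is a factor:

# I(A;Y) estimated from a contingency table of two factors
mi <- function(a, y) {
  p <- table(a, y) / length(y)
  px <- rowSums(p); py <- colSums(p)
  sum(ifelse(p > 0, p * log(p / outer(px, py)), 0))
}
# I(X,W;Y): treat the pair (X,W) as a single joint variable
jmi <- function(x, w, y) mi(interaction(x, w, drop = TRUE), y)
jmim_sketch <- function(X, Y, k = 3) {
  S <- which.max(sapply(X, mi, y = Y))   # start with maximal I(X;Y)
  while (length(S) < k) {
    cand <- setdiff(seq_along(X), S)
    # J(X) = min over already selected W of I(X,W;Y)
    J <- sapply(cand, function(j) min(sapply(S, function(s) jmi(X[[j]], X[[s]], Y))))
    S <- c(S, cand[which.max(J)])
  }
  S
}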

Usage

JMIM(X, Y, k = 3, threads = 0)

Arguments

X

Attribute table, given as a data frame with either factors (preferred), booleans, integers (treated as categorical) or reals (which undergo automatic categorisation; see below for details). NAs are not allowed.

Y

Decision attribute; should be given as a factor, but other options are accepted, exactly like for attributes. NAs are not allowed.

k

Number of attributes to select. Must not exceed ncol(X).

threads

Number of threads to use; default value, 0, means all available to OpenMP.

Value

A list with two elements: selection, a vector of indices of the selected features in the selection order, and score, a vector of corresponding feature scores. Names of both vectors will correspond to the names of features in X. Both vectors will be of length at most k, as the selection stops as soon as all remaining features have a score of zero. This may happen during the initial selection, in which case both vectors will be empty.
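
For instance, assuming X and Y are prepared as described under Arguments, the returned list can be used to subset the attribute table; this is a sketch, not taken from the package documentation:

res <- JMIM(X, Y, k = 5)
res$selection                               # indices of selected features, in selection order
res$score                                   # corresponding criterion values, named after the features
X_sel <- X[, res$selection, drop = FALSE]   # keep only the selected attributes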

Note

NJMIM is a normalised version of JMIM; JMI and DISR are modifications of JMIM and NJMIM in which a sum of joint information over already selected attributes is used instead of a minimum.

The method requires the input to be discrete, so that empirical estimators of distributions, and consequently of information gain or entropy, can be used. To allow a smoother user experience, praznik automatically coerces non-factor vectors in X and Y, which requires additional time and space and may yield confusing results; the best practice is to convert data to factors before passing them to this function. Real attributes are cut into about 10 equally-spaced bins, following a heuristic often used in the literature. The precise number of cuts depends on the number of objects n; namely, it is n/3, but never less than 2 and never more than 10.

Integers (which technically are also numeric) are treated as categorical variables (for compatibility with similar software), hence in a very different way; be aware that a truly numeric attribute which happens to contain only integer values could be coerced into an n-level categorical variable, which would have a perfect mutual information score and would likely become a very disruptive false positive.
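
The recommended up-front conversion could look like the following sketch; to_factor and the choice of 10 equal-width bins are illustrative assumptions, not praznik's internal discretisation rule:

to_factor <- function(v, bins = 10) {
  # reals: equal-width bins; everything else: plain factor conversion
  if (is.numeric(v) && !is.integer(v)) cut(v, breaks = bins) else factor(v)
}
Xf <- as.data.frame(lapply(X, to_factor))
Yf <- factor(Y)
JMIM(Xf, Yf, k = 3)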

References

"Feature selection using Joint Mutual Information Maximisation" M. Bennasar, Y. Hicks and R. Setchi, (2015)

Examples

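A minimal illustrative call on a small toy data frame of factors (not the package's original example):

library(praznik)
X <- data.frame(a = factor(c(1, 1, 2, 2, 1, 2)),
                b = factor(c(1, 2, 1, 2, 1, 2)),
                c = factor(c(2, 2, 1, 1, 2, 1)))
Y <- factor(c(1, 1, 2, 2, 1, 2))
JMIM(X, Y, k = 2)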
