Evaluation and algorithms combination

```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```


The FSinR evaluation measure and algorithm functions are designed around a common interface, which makes them easier to use and combine.

In this example, a selection of evaluation measures (Inconsistent Examples consistency and Mutual Information) is used with the Sequential Forward Selection algorithm; other measures, such as Inconsistent Examples Pairs consistency and Binary consistency, can be combined in the same way.

```r
library(FSinR)

sequentialForwardSelection()(iris, 'Species', IEConsistency())
sequentialForwardSelection()(iris, 'Species', mutualInformation())
```
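
Each of these calls returns a list whose `bestFeatures` element (used throughout this vignette) describes the selected features. As a minimal sketch, assuming `bestFeatures` is a one-row 0/1 matrix whose column names are the feature names, the selection could be recovered by name like this:

```r
# Store the search result instead of letting it print
result <- sequentialForwardSelection()(iris, 'Species', IEConsistency())

# Assumption: bestFeatures is a one-row 0/1 matrix with the feature
# names as column names; keep only the names of the selected columns
selected <- colnames(result$bestFeatures)[result$bestFeatures[1, ] == 1]
print(selected)
```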

Since these functions share the same interface, a few lines of code can be saved by iterating over a list of measures:

```r
measures <- list(IEConsistency(), mutualInformation())
for (measure in measures) {
  result <- sequentialForwardSelection()(iris, 'Species', measure)
  print(attr(measure, 'name'))
  print(result$bestFeatures)
}
```
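
The other measures mentioned at the beginning of this section can be added to the same list. A sketch, assuming `IEPConsistency()` and `binaryConsistency()` are the constructors for the Inconsistent Examples Pairs and Binary consistency measures:

```r
# Sketch: the same loop over an extended list of measures.
# IEPConsistency() and binaryConsistency() are assumed constructor names.
measures <- list(IEConsistency(), IEPConsistency(),
                 mutualInformation(), binaryConsistency())
for (measure in measures) {
  result <- sequentialForwardSelection()(iris, 'Species', measure)
  print(attr(measure, 'name'))
  print(result$bestFeatures)
}
```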

Search algorithms also share the same interface, so they can be combined as well. In this example, Sequential Forward Selection and Las Vegas Wrapper are run with the previously mentioned evaluation measures:

```r
measures <- list(IEConsistency(), mutualInformation())
algorithms <- list(sequentialForwardSelection(), LasVegas())
for (algorithm in algorithms) {
  for (measure in measures) {
    result <- algorithm(iris, 'Species', measure)
    print(paste("Algorithm: ", attr(algorithm, 'name')))
    print(paste("Evaluation measure: ", attr(measure, 'name')))
    print(result$bestFeatures)
  }
}
```
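
Instead of printing each combination, the results can be kept for later inspection. A minimal sketch that stores every result in a named list, using only the fields shown above:

```r
# Sketch: keep every algorithm/measure result instead of printing it
results <- list()
for (algorithm in algorithms) {
  for (measure in measures) {
    key <- paste(attr(algorithm, 'name'), attr(measure, 'name'), sep = " / ")
    results[[key]] <- algorithm(iris, 'Species', measure)
  }
}

# e.g. inspect the features selected by one combination
results[[1]]$bestFeatures
```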

Wrapper evaluation measures can also be combined. In this example a knn model is used, since the iris problem is a classification problem. Depending on the metric, the FSinR package automatically detects whether the objective of the problem is to maximize or minimize it. To tune the model, the resampling method is set to 10-fold cross-validation, the dataset is centered and scaled, accuracy is used as the metric, and a grid search over the k parameter is performed. The wrapper evaluation measure function is then created and added to the list of measures:

```r
resamplingParams <- list(method = "cv", number = 10)
fittingParams <- list(preProc = c("center", "scale"),
                      metric = "Accuracy",
                      tuneGrid = expand.grid(k = c(1:20)))

wra <- wrapperEvaluator("knn", resamplingParams, fittingParams)

measures <- list(IEConsistency(), mutualInformation(), wra)
```
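
The extended list of measures can then be used exactly as before; a sketch that reuses the loop from the previous example, now including the knn wrapper measure:

```r
# Sketch: run each search algorithm with the filter measures and the
# knn wrapper measure defined above
algorithms <- list(sequentialForwardSelection(), LasVegas())
for (algorithm in algorithms) {
  for (measure in measures) {
    result <- algorithm(iris, 'Species', measure)
    print(paste("Algorithm: ", attr(algorithm, 'name')))
    print(paste("Evaluation measure: ", attr(measure, 'name')))
    print(result$bestFeatures)
  }
}
```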

