
gafs.default                R Documentation

Genetic algorithm feature selection


Description

Supervised feature selection using genetic algorithms


Usage

## Default S3 method:
gafs(
  x,
  y,
  iters = 10,
  popSize = 50,
  pcrossover = 0.8,
  pmutation = 0.1,
  elite = 0,
  suggestions = NULL,
  differences = TRUE,
  gafsControl = gafsControl(),
  ...
)

## S3 method for class 'recipe'
gafs(
  x,
  data,
  iters = 10,
  popSize = 50,
  pcrossover = 0.8,
  pmutation = 0.1,
  elite = 0,
  suggestions = NULL,
  differences = TRUE,
  gafsControl = gafsControl(),
  ...
)



Arguments

x: An object where samples are in rows and features are in columns. This could be a simple matrix, data frame or other type (e.g. sparse matrix). For the recipes method, x is a recipe object. See Details below.

y: a numeric or factor vector containing the outcome for each sample

iters: number of search iterations

popSize: number of subsets evaluated at each iteration

pcrossover: the crossover probability

pmutation: the mutation probability

elite: the number of best subsets to survive at each generation

suggestions: a binary matrix of subset strings to be included in the initial population. If provided, the number of columns must match the number of columns in x

differences: a logical: should the difference in fitness values with and without each predictor be calculated?

gafsControl: a list of values that define how this function acts. See gafsControl.

...: additional arguments to be passed to other methods

data: Data frame from which variables specified in the formula or recipe are preferentially to be taken.


Details

gafs conducts a supervised binary search of the predictor space using a genetic algorithm. See Mitchell (1996) and Scrucca (2013) for more details on genetic algorithms.

This function conducts the search of the feature space repeatedly within resampling iterations. First, the training data are split by whatever resampling method was specified in the control function. For example, if 10-fold cross-validation is selected, the entire genetic algorithm is conducted 10 separate times. For the first fold, nine tenths of the data are used in the search while the remaining tenth is used to estimate the external performance, since these data points were not used in the search.

During the genetic algorithm, a measure of fitness is needed to guide the search. This is the internal measure of performance. During the search, the data that are available are the instances selected by the top-level resampling (e.g. the nine tenths mentioned above). A common approach is to conduct another resampling procedure. Another option is to use a holdout set of samples to determine the internal estimate of performance (see the holdout argument of the control function). While this is faster, it is more likely to cause overfitting of the features and should only be used when a large amount of training data are available. Yet another idea is to use a penalized metric (such as the AIC statistic) but this may not exist for some metrics (e.g. the area under the ROC curve).
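As a sketch of the internal-estimate options above (the gafsControl arguments shown are real; the specific values are illustrative only):

```r
library(caret)

## External resampling (10-fold CV) with the default internal estimate
ctrl_cv <- gafsControl(functions = rfGA, method = "cv", number = 10)

## Same external scheme, but hold out 25% of the within-fold data to
## compute the internal fitness estimate (faster, but more prone to
## overfitting the selected features)
ctrl_hold <- gafsControl(functions = rfGA, method = "cv", number = 10,
                         holdout = 0.25)
```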

The internal estimates of performance will eventually overfit the subsets to the data. However, since the external estimate is not used by the search, it is able to make better assessments of overfitting. After resampling, this function determines the optimal number of generations for the GA.

Finally, the entire data set is used in the last execution of the genetic algorithm search and the final model is built on the predictor subset that is associated with the optimal number of generations determined by resampling (although the update function can be used to manually set the number of generations).

This is an example of the output produced when gafsControl(verbose = TRUE) is used:

Fold2 1 0.715 (13)
Fold2 2 0.715->0.737 (13->17, 30.4%) *
Fold2 3 0.737->0.732 (17->14, 24.0%)
Fold2 4 0.737->0.769 (17->23, 25.0%) *

For the second resample (e.g. fold 2), the best subset across all individuals tested in the first generation contained 13 predictors and was associated with a fitness value of 0.715. The second generation produced a better subset containing 17 predictors with an associated fitness value of 0.737 (an improvement symbolized by the *). The percentage listed is the Jaccard similarity between the previous best individual (with 13 predictors) and the new best. The third generation did not produce a better fitness value but the fourth generation did.
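The Jaccard similarity reported in the log can be computed directly from the two binary chromosomes; a small illustrative function (not part of caret's exported API):

```r
## Jaccard similarity between two binary subset encodings
## (1 = predictor selected, 0 = not selected)
jaccard <- function(a, b) sum(a & b) / sum(a | b)

old_best <- c(1, 1, 0, 1, 0)   # 3 predictors selected
new_best <- c(1, 0, 1, 1, 0)   # 3 predictors selected
jaccard(old_best, new_best)    # 2 shared / 4 in the union = 0.5
```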

The search algorithm can be parallelized in several places:

  1. each externally resampled GA can be run independently (controlled by the allowParallel option of gafsControl)

  2. within a GA, the fitness calculations at a particular generation can be run in parallel over the current set of individuals (see the genParallel option in gafsControl)

  3. if inner resampling is used, these can be run in parallel (controls depend on the function used. See, for example, trainControl)

  4. any parallelization of the individual model fits. This is also specific to the modeling function.

It is probably best to pick one of these areas for parallelization, and the first is likely to produce the largest decrease in run-time since it is the least likely to incur multiple re-starts of the worker processes. Keep in mind that if multiple levels of parallelization occur, this can affect the number of workers and the amount of memory required exponentially.
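For option 1, a typical setup registers a foreach backend before calling gafs; doParallel is shown here as one common choice, and the worker count is illustrative (allowParallel and genParallel are real gafsControl arguments):

```r
library(caret)
library(doParallel)

cl <- makeCluster(4)     # number of workers is illustrative
registerDoParallel(cl)

## Parallelize over the external resamples only (option 1); leave the
## within-GA fitness calculations sequential to avoid nested parallelism
ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    number = 10,
                    allowParallel = TRUE,
                    genParallel = FALSE)

## ... call gafs(..., gafsControl = ctrl) here ...

stopCluster(cl)
```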


Value

an object of class gafs


Author(s)

Max Kuhn, Luca Scrucca (for GA internals)


References

Kuhn M and Johnson K (2013), Applied Predictive Modeling, Springer, Chapter 19.

Scrucca L (2013). GA: A Package for Genetic Algorithms in R. Journal of Statistical Software, 53(4), 1-37.

Mitchell M (1996), An Introduction to Genetic Algorithms, MIT Press.

See Also

gafsControl, predict.gafs, caretGA, rfGA, treebagGA


Examples

## Not run: 
library(caret)
set.seed(10)

train_data <- twoClassSim(100, noiseVars = 10)
test_data  <- twoClassSim(10,  noiseVars = 10)

## A short example
ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    number = 3)

rf_search <- gafs(x = train_data[, -ncol(train_data)],
                  y = train_data$Class,
                  iters = 3,
                  gafsControl = ctrl)

## End(Not run)
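Continuing the example above, predictions on new samples use the final model fit on the selected subset (see predict.gafs); a sketch, assuming rf_search and test_data from the example:

```r
## Not run: 
## class predictions for the held-out test set
predict(rf_search, test_data[, -ncol(test_data)])

## End(Not run)
```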

caret documentation built on March 31, 2023, 9:49 p.m.