library("mlr")
library("BBmisc")
library("ParamHelpers")
library("ggplot2")

# show grouped code output instead of single lines
knitr::opts_chunk$set(collapse = TRUE)
set.seed(123)

In order to obtain honest performance estimates for a learner, all parts of the model building process, such as preprocessing and model selection steps, should be included in the resampling, i.e., repeated for every pair of training/test data. For steps that themselves require resampling, like parameter tuning or feature selection (via the wrapper approach), this results in two nested resampling loops.

knitr::include_graphics("../img/nested_resampling.png")

The graphic above illustrates nested resampling for parameter tuning with 3-fold cross-validation in the outer and 4-fold cross-validation in the inner loop.

In the outer resampling loop, we have three pairs of training/test sets. On each of these outer training sets parameter tuning is done, thereby executing the inner resampling loop. This way, we get one set of selected hyperparameters for each outer training set. Then the learner is fitted on each outer training set using the corresponding selected hyperparameters and its performance is evaluated on the outer test sets.
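
Conceptually, this corresponds to a manual double loop over resampling splits. The following is a rough sketch only (using iris.task, a tiny SVM grid and a 2-fold inner loop to keep it cheap; in the graphic above the inner loop would be 4-fold). It is meant to illustrate the logic, not for practical use:

# Rough sketch of nested resampling done by hand (for illustration only)
outer.inst = makeResampleInstance("CV", iters = 3, task = iris.task)
ps.sketch = makeParamSet(makeDiscreteParam("C", values = 2^(-1:1)))
ctrl.sketch = makeTuneControlGrid()
inner.sketch = makeResampleDesc("CV", iters = 2)

outer.perf = vapply(seq_len(outer.inst$desc$iters), function(i) {
  # inner loop: tune only on the i-th outer training set
  train.task = subsetTask(iris.task, subset = outer.inst$train.inds[[i]])
  tr = tuneParams("classif.ksvm", task = train.task, resampling = inner.sketch,
    par.set = ps.sketch, control = ctrl.sketch, show.info = FALSE)
  # refit with the selected hyperparameters and evaluate on the outer test set
  lrn.best = setHyperPars(makeLearner("classif.ksvm"), par.vals = tr$x)
  mod = train(lrn.best, iris.task, subset = outer.inst$train.inds[[i]])
  pred = predict(mod, task = iris.task, subset = outer.inst$test.inds[[i]])
  performance(pred, measures = mmce)
}, numeric(1))
mean(outer.perf)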

In mlr, you can get nested resampling for free without programming any looping by using the wrapper functionality. This works as follows:

  1. Generate a wrapped Learner (makeLearner()) via function makeTuneWrapper() or makeFeatSelWrapper(). Specify the inner resampling strategy using their resampling argument.
  2. Call function resample() (see also the section about resampling) and pass the outer resampling strategy to its resampling argument.

You can freely combine different inner and outer resampling strategies.

The outer strategy can be a resample description (ResampleDesc, see makeResampleDesc()) or a resample instance (makeResampleInstance()). A common setup is prediction and performance evaluation on a fixed outer test set. This can be achieved by using function makeFixedHoldoutInstance() to generate the outer resample instance.
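
For example, a fixed outer split could be created as sketched below (the index vectors are chosen purely for illustration and are not a sensible split for any of the example tasks):

# Hypothetical fixed outer split for a task with 150 observations:
# observations 1-100 form the outer training set, 101-150 the outer test set
outer.fixed = makeFixedHoldoutInstance(train.inds = 1:100, test.inds = 101:150, size = 150)
outer.fixed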

The inner resampling strategy should preferably be a ResampleDesc (makeResampleDesc()), as the sizes of the outer training sets may differ. By default, the inner resample description is instantiated once for every outer training set. This way, during tuning/feature selection all parameter or feature sets are compared on the same inner training/test sets, which reduces variance. You can turn this off using the same.resampling.instance argument of makeTuneControl* (TuneControl()) or makeFeatSelControl* (FeatSelControl()).
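
A minimal sketch of turning this behaviour off, so that a fresh inner resampling instance is drawn for every evaluated setting:

# Draw a new inner resampling instance for each parameter or feature set
ctrl = makeTuneControlGrid(same.resampling.instance = FALSE)
ctrl.fs = makeFeatSelControlSequential(method = "sfs", same.resampling.instance = FALSE)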

Nested resampling is computationally expensive. For this reason the examples shown below use relatively small search spaces and a low number of resampling iterations. In practice, you normally have to increase both. As this is computationally intensive, you might want to have a look at the section about parallelization.
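
As a sketch, the inner tuning loop could be parallelized with parallelMap, e.g. on two local workers; the level name restricts parallelization to the tuning level. This is an illustration only, adjust the backend and the number of workers to your setup:

library("parallelMap")
parallelStartSocket(2, level = "mlr.tuneParams")
# ... run resample() / benchmark() as shown below ...
parallelStop()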

Tuning

As you might recall from the tutorial page about tuning, you need to define a search space by function ParamHelpers::makeParamSet(), a search strategy by makeTuneControl* (TuneControl()), and a method to evaluate hyperparameter settings (i.e., the inner resampling strategy and a performance measure).

Below is a classification example. We evaluate the performance of a support vector machine (kernlab::ksvm()) with tuned cost parameter C and RBF kernel parameter sigma. We use 3-fold cross-validation in the outer and subsampling with 2 iterations in the inner loop. For tuning, a grid search is used to find the hyperparameters with the lowest error rate (mmce is the default measure for classification). The wrapped Learner (makeLearner()) is generated by calling makeTuneWrapper().

Note that in practice the parameter set should be larger. A common recommendation is 2^(-12:12) for both C and sigma.

# Tuning in inner resampling loop
ps = makeParamSet(
  makeDiscreteParam("C", values = 2^(-2:2)),
  makeDiscreteParam("sigma", values = 2^(-2:2))
)
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("Subsample", iters = 2)
lrn = makeTuneWrapper("classif.ksvm", resampling = inner, par.set = ps, control = ctrl, show.info = FALSE)

# Outer resampling loop
outer = makeResampleDesc("CV", iters = 3)
r = resample(lrn, iris.task, resampling = outer, extract = getTuneResult, show.info = FALSE)

r

## Resample Result
## Task: iris-example
## Learner: classif.ksvm.tuned
## Aggr perf: mmce.test.mean=0.0400000
## Runtime: 5.50584

You can obtain the error rates on the 3 outer test sets by:

r$measures.test
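
The aggregated value printed above is also stored directly in the result object:

r$aggr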

Accessing the tuning result

We have kept the results of the tuning for further evaluations. For example, one might want to find out whether the best obtained configurations vary across the different outer splits. As storing entire models can be expensive (but is possible by setting models = TRUE), we used the extract option of resample(). Function getTuneResult() returns, among other things, the optimal hyperparameter values and the optimization path (ParamHelpers::OptPath()) for each iteration of the outer resampling loop. Note that the performance values shown when printing r$extract are the aggregated performances resulting from inner resampling on the outer training set for the best hyperparameter configurations (not to be confused with r$measures.test shown above).

r$extract

names(r$extract[[1]])

We can compare the optimal parameter settings obtained in the 3 resampling iterations. As you can see, the optimal configuration usually depends on the data. You may be able to identify a range of parameter settings that achieve good performance though, e.g., the values for C should be at least 1 and the values for sigma should be between 0 and 1.

With function getNestedTuneResultsOptPathDf() you can extract the optimization paths for the 3 outer cross-validation iterations for further inspection and analysis. These are stacked in one data.frame with column iter indicating the resampling iteration.

opt.paths = getNestedTuneResultsOptPathDf(r)
head(opt.paths, 10)

##       C sigma mmce.test.mean dob eol error.message exec.time iter
## 1  0.25  0.25     0.10294118   1  NA          <NA>     1.463    1
## 2   0.5  0.25     0.11764706   2  NA          <NA>     0.036    1
## 3     1  0.25     0.07352941   3  NA          <NA>     0.043    1
## 4     2  0.25     0.07352941   4  NA          <NA>     0.040    1
## 5     4  0.25     0.08823529   5  NA          <NA>     0.042    1
## 6  0.25   0.5     0.13235294   6  NA          <NA>     0.041    1
## 7   0.5   0.5     0.07352941   7  NA          <NA>     0.043    1
## 8     1   0.5     0.07352941   8  NA          <NA>     0.042    1
## 9     2   0.5     0.07352941   9  NA          <NA>     0.037    1
## 10    4   0.5     0.10294118  10  NA          <NA>     0.042    1

Below we visualize the opt.paths for the 3 outer resampling iterations.

g = ggplot(opt.paths, aes(x = C, y = sigma, fill = mmce.test.mean))
g + geom_tile() + facet_wrap(~iter)

Another useful function is getNestedTuneResultsX(), which extracts the best found hyperparameter settings for each outer resampling iteration.

getNestedTuneResultsX(r)
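
For a quick overview, the selected settings can be placed next to the corresponding outer test performances (a small convenience sketch using base R, not an mlr function):

cbind(getNestedTuneResultsX(r), r$measures.test)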

You can furthermore access the resampling indices of the inner level using getResamplingIndices() if you used either extract = getTuneResult or extract = getFeatSelResult in the resample() call:

getResamplingIndices(r, inner = TRUE)

Feature selection

As you might recall from the section about feature selection, mlr supports the filter and the wrapper approach.

Wrapper methods

Wrapper methods use the performance of a learning algorithm to assess the usefulness of a feature set. In order to select a feature subset a learner is trained repeatedly on different feature subsets and the subset which leads to the best learner performance is chosen.

For feature selection in the inner resampling loop, you need to choose a search strategy (function makeFeatSelControl* (FeatSelControl())), a performance measure and the inner resampling strategy. Then use function makeFeatSelWrapper() to bind everything together.

Below we use sequential forward selection with linear regression on the BostonHousing (mlbench::BostonHousing()) data set (bh.task()).

# Feature selection in inner resampling loop
inner = makeResampleDesc("CV", iters = 3)
lrn = makeFeatSelWrapper("regr.lm",
  resampling = inner,
  control = makeFeatSelControlSequential(method = "sfs"), show.info = FALSE)

# Outer resampling loop
outer = makeResampleDesc("Subsample", iters = 2)
r = resample(
  learner = lrn, task = bh.task, resampling = outer, extract = getFeatSelResult,
  show.info = FALSE)

r

## Resample Result
## Task: BostonHousing-example
## Learner: regr.lm.featsel
## Aggr perf: mse.test.mean=24.8753005
## Runtime: 7.01506

r$measures.test

##   iter      mse
## 1    1 22.28967
## 2    2 27.46093

Accessing the selected features

The result of the feature selection can be extracted by function getFeatSelResult(). It is also possible to keep whole models (makeWrappedModel()) by setting models = TRUE when calling resample().

r$extract

# Selected features in the first outer resampling iteration
r$extract[[1]]$x

# Resampled performance of the selected feature subset on the first outer training set (from inner resampling)
r$extract[[1]]$y
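
If whole models had been kept instead, getFeatSelResult() could be applied to them directly. A sketch of this alternative route, reusing lrn and outer from above (note that it refits everything and is therefore just as expensive as the run above):

r2 = resample(lrn, bh.task, resampling = outer, models = TRUE, show.info = FALSE)
getFeatSelResult(r2$models[[1]])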

As for tuning, you can extract the optimization paths. The resulting data.frames contain, among others, binary columns for all features, indicating if they were included in the linear regression model, and the corresponding performances.

opt.paths = lapply(r$extract, function(x) as.data.frame(x$opt.path))
head(opt.paths[[1]])

##   crim zn indus chas nox rm age dis rad tax ptratio b lstat mse.test.mean
## 1    0  0     0    0   0  0   0   0   0   0       0 0     0      84.52018
## 2    1  0     0    0   0  0   0   0   0   0       0 0     0      95.46348
## 3    0  1     0    0   0  0   0   0   0   0       0 0     0      74.97858
## 4    0  0     1    0   0  0   0   0   0   0       0 0     0      66.35546
## 5    0  0     0    1   0  0   0   0   0   0       0 0     0      81.49228
## 6    0  0     0    0   1  0   0   0   0   0       0 0     0      67.72664
##   dob eol error.message exec.time
## 1   1   2          <NA>     0.031
## 2   2   2          <NA>     0.037
## 3   2   2          <NA>     0.027
## 4   2   2          <NA>     0.030
## 5   2   2          <NA>     0.032
## 6   2   2          <NA>     0.031

An easy-to-read version of the optimization path for sequential feature selection can be obtained with function analyzeFeatSelResult().

analyzeFeatSelResult(r$extract[[1]])

Filter methods with tuning

Filter methods assign an importance value to each feature. Based on these values you can select a feature subset by either keeping all features with importance higher than a certain threshold or by keeping a fixed number or percentage of the highest ranking features. Often, neither the threshold nor the number or percentage of features is known in advance and thus tuning is necessary.
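
As a sketch, these three ways of cutting off the filter ranking correspond to the fw.threshold, fw.abs and fw.perc arguments of makeFilterWrapper(); the concrete values below are arbitrary and only for illustration:

lrn.thr = makeFilterWrapper("regr.lm", fw.method = "FSelectorRcpp_information.gain", fw.threshold = 0.2)
lrn.abs = makeFilterWrapper("regr.lm", fw.method = "FSelectorRcpp_information.gain", fw.abs = 5)
lrn.perc = makeFilterWrapper("regr.lm", fw.method = "FSelectorRcpp_information.gain", fw.perc = 0.5)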

In the example below the threshold value (fw.threshold) is tuned in the inner resampling loop. For this purpose the base Learner (makeLearner()) "regr.lm" is wrapped twice. First, makeFilterWrapper() is used to fuse linear regression with a feature filtering preprocessing step. Then a tuning step is added by makeTuneWrapper().

# Tuning of the filter threshold in the inner loop
lrn = makeFilterWrapper(learner = "regr.lm", fw.method = "FSelectorRcpp_information.gain")
ps = makeParamSet(makeDiscreteParam("fw.threshold", values = seq(0, 1, 0.2)))
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("CV", iters = 3)
lrn = makeTuneWrapper(lrn, resampling = inner, par.set = ps, control = ctrl, show.info = FALSE)

# Outer resampling loop
outer = makeResampleDesc("CV", iters = 3)
r = resample(learner = lrn, task = bh.task, resampling = outer, models = TRUE, show.info = FALSE)
r

## Resample Result
## Task: BostonHousing-example
## Learner: regr.lm.filtered.tuned
## Aggr perf: mse.test.mean=23.5449481
## Runtime: 3.85235

Accessing the selected features and optimal threshold

In the above example we kept the complete models (makeWrappedModel()) by setting models = TRUE.

Below are some examples that show how to extract information from these models.

r$models

The result of the feature selection can be extracted by function getFilteredFeatures(). Since the learner was wrapped twice, the fitted filter model sits underneath the tuning wrapper and is accessed via $learner.model$next.model. Almost always all 13 features are selected.

lapply(r$models, function(x) getFilteredFeatures(x$learner.model$next.model))

Below the tune results (TuneResult()) and optimization paths (ParamHelpers::OptPath()) are accessed.

res = lapply(r$models, getTuneResult)
res

opt.paths = lapply(res, function(x) as.data.frame(x$opt.path))
opt.paths[[1]][, -ncol(opt.paths[[1]])]

Benchmark experiments

In a benchmark experiment multiple learners are compared on one or several tasks (see also the section about benchmarking). Nested resampling in benchmark experiments is achieved the same way as in resampling:

The inner resampling strategies should be resample descriptions (makeResampleDesc()). You can use different inner resampling strategies for different wrapped learners. For example it might be practical to do fewer subsampling or bootstrap iterations for slower learners.

If you have larger benchmark experiments, you might want to have a look at the section about parallelization.

As mentioned in the section about benchmark experiments, you can also use different resampling strategies for different learning tasks by passing a list of resampling descriptions or instances to benchmark().

We will see three examples to show different benchmark settings:

  1. Two data sets + two classification algorithms + tuning
  2. One data set + two regression algorithms + feature selection
  3. One data set + two regression algorithms + feature filtering + tuning

Example 1: Two tasks, two learners, tuning

Below is a benchmark experiment with two data sets, datasets::iris() and mlbench::Sonar(), and two Learners (makeLearner()), kernlab::ksvm() and kknn::kknn(), that are both tuned.

As inner resampling strategies we use holdout for kernlab::ksvm() and subsampling with 3 iterations for kknn::kknn(). As outer resampling strategies we take holdout for the datasets::iris() data (iris.task) and bootstrap with 2 iterations for the mlbench::Sonar() data (sonar.task()). We consider the accuracy (acc), which is used as tuning criterion, and also calculate the balanced error rate (ber).

# List of learning tasks
tasks = list(iris.task, sonar.task)

# Tune svm in the inner resampling loop
ps = makeParamSet(
  makeDiscreteParam("C", 2^(-1:1)),
  makeDiscreteParam("sigma", 2^(-1:1)))
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("Holdout")
lrn1 = makeTuneWrapper("classif.ksvm",
  resampling = inner, par.set = ps, control = ctrl,
  show.info = FALSE)

# Tune k-nearest neighbor in inner resampling loop
ps = makeParamSet(makeDiscreteParam("k", 3:5))
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("Subsample", iters = 3)
lrn2 = makeTuneWrapper("classif.kknn",
  resampling = inner, par.set = ps, control = ctrl,
  show.info = FALSE)

# Learners
lrns = list(lrn1, lrn2)

# Outer resampling loop
outer = list(makeResampleDesc("Holdout"), makeResampleDesc("Bootstrap", iters = 2))
res = benchmark(lrns, tasks, outer,
  measures = list(acc, ber), show.info = FALSE,
  keep.extract = TRUE)
res

The print method for the BenchmarkResult() shows the aggregated performances from the outer resampling loop.

As you might recall, mlr offers several accessor functions to extract information from the benchmark result. These are listed on the help page of BenchmarkResult() and many examples are shown on the tutorial page about benchmark experiments.

The performance values in individual outer resampling runs can be obtained by getBMRPerformances(). Note that, since we used different outer resampling strategies for the two tasks, the number of rows per task differs.

getBMRPerformances(res, as.df = TRUE)
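
The aggregated values shown by the print method can likewise be obtained as a data.frame, e.g. via:

getBMRAggrPerformances(res, as.df = TRUE)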

The results from the parameter tuning can be obtained through function getBMRTuneResults().

getBMRTuneResults(res)

As for several other accessor functions a clearer representation as data.frame can be achieved by setting as.df = TRUE.

getBMRTuneResults(res, as.df = TRUE)

It is also possible to extract the tuning results for individual tasks and learners and, as shown in earlier examples, inspect the optimization path (ParamHelpers::OptPath()).

tune.res = getBMRTuneResults(res,
  task.ids = "Sonar-example", learner.ids = "classif.ksvm.tuned",
  as.df = TRUE)
tune.res

getNestedTuneResultsOptPathDf(res$results[["Sonar-example"]][["classif.ksvm.tuned"]])

Example 2: One task, two learners, feature selection

Let's see how we can do feature selection in a benchmark experiment:

# Feature selection in inner resampling loop
ctrl = makeFeatSelControlSequential(method = "sfs")
inner = makeResampleDesc("Subsample", iters = 2)
lrn = makeFeatSelWrapper("regr.lm", resampling = inner, control = ctrl, show.info = FALSE)

# Learners
lrns = list("regr.rpart", lrn)

# Outer resampling loop
outer = makeResampleDesc("Subsample", iters = 2)
res = benchmark(
  tasks = bh.task, learners = lrns, resampling = outer,
  show.info = FALSE, keep.extract = TRUE)

res

The selected features can be extracted by function getBMRFeatSelResults(). By default, a nested list, with the first level indicating the task and the second level indicating the learner, is returned. If only a single learner or, as in our case, a single task is considered, setting drop = TRUE simplifies the result to a flat list.

getBMRFeatSelResults(res)
getBMRFeatSelResults(res, drop = TRUE)

You can access results for individual learners and tasks and inspect them further.

feats = getBMRFeatSelResults(res, learner.ids = "regr.lm.featsel", drop = TRUE)

# Selected features in the first outer resampling iteration
feats[[1]]$x

# Resampled performance of the selected feature subset on the first outer training set (from inner resampling)
feats[[1]]$y

As for tuning, you can extract the optimization paths. The resulting data.frames contain, among others, binary columns for all features, indicating if they were included in the linear regression model, and the corresponding performances. analyzeFeatSelResult() gives a clearer overview.

opt.paths = lapply(feats, function(x) as.data.frame(x$opt.path))
head(opt.paths[[1]][, -ncol(opt.paths[[1]])])

analyzeFeatSelResult(feats[[1]])

Example 3: One task, two learners, feature filtering with tuning

Here is a minimal example for feature filtering with tuning of the feature subset size.

# Feature filtering with tuning in the inner resampling loop
lrn = makeFilterWrapper(learner = "regr.lm", fw.method = "FSelectorRcpp_information.gain")
ps = makeParamSet(makeDiscreteParam("fw.abs", values = seq_len(getTaskNFeats(bh.task))))
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("CV", iter = 2)
lrn = makeTuneWrapper(lrn,
  resampling = inner, par.set = ps, control = ctrl,
  show.info = FALSE)

# Learners
lrns = list("regr.rpart", lrn)

# Outer resampling loop
outer = makeResampleDesc("Subsample", iter = 3)
res = benchmark(tasks = bh.task, learners = lrns, resampling = outer, show.info = FALSE)

res
# Performances on individual outer test data sets
getBMRPerformances(res, as.df = TRUE)
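
For a quick visual comparison of the two learners, one could additionally plot the distribution of the outer performance values, for example with plotBMRBoxplots(); with only 3 outer iterations per learner the plot is of course not very informative:

plotBMRBoxplots(res, measure = mse)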

