This chapter is a collection of use cases showcasing `r mlr_pkg("mlr3")`.
The first use case illustrates different functions of the package using housing price data from King County.
The following functionality is illustrated:

- exploratory data analysis and preprocessing of the data,
- visualizing a task with `r mlr_pkg("mlr3viz")`,
- fitting and resampling learners (rpart, ranger, xgboost),
- tuning with an `AutoTuner`,
- feature engineering and feature filtering with `r mlr_pkg("mlr3filters")`.

Further use cases will follow soon!
We use the `house_sales_prediction` dataset contained in this book in order to provide a use case for the application of `r mlr_pkg("mlr3")` on real-world data.
```{r}
library(mlr3book)
data("house_sales_prediction", package = "mlr3book")
```
In order to get a quick impression of our data, we perform some initial exploratory data analysis. This might also help us arrive at additional features that are useful for predicting house prices.
We can get a quick overview using R's `summary()` function:
```{r}
summary(house_sales_prediction)
dim(house_sales_prediction)
```
Our dataset has `r nrow(house_sales_prediction)` observations and `r ncol(house_sales_prediction)` columns.
The variable we want to predict is `price`.
In addition to the price column, we have several other columns:
- `id`: A unique identifier for every house.
- `date`: A date column, indicating when the house was sold. This column is currently not encoded as a date and requires some preprocessing.
- `zipcode`: A column indicating the ZIP code. This is a categorical variable with many factor levels.
- `long`, `lat`: The longitude and latitude of the house.
- `...`: several other numeric columns providing information about the house, such as the number of rooms, square feet, etc.
Before we continue with the analysis, we preprocess some features so that they are stored in the correct format.
First, we convert the `date` column to `numeric` to be able to treat it as a numeric feature:
```{r}
library(lubridate)
house_sales_prediction$date = ymd(substr(house_sales_prediction$date, 1, 8))
house_sales_prediction$date = as.numeric(as.Date(house_sales_prediction$date,
  origin = "1900-01-01"))
```
Afterwards, we convert the zip code to a factor:
```{r}
house_sales_prediction$zipcode = as.factor(house_sales_prediction$zipcode)
```
And add a new column, `renovated`, indicating whether a house was renovated at some point.
```{r}
house_sales_prediction$renovated = as.numeric(house_sales_prediction$yr_renovated > 0)

# And drop the id column
house_sales_prediction$id = NULL
```
Additionally, we convert the price from dollars to units of 1000 dollars to improve readability.
```{r}
house_sales_prediction$price = house_sales_prediction$price / 1000
```
We can now plot the density of the price to get a first impression of its distribution.
```{r}
library(ggplot2)
ggplot(house_sales_prediction, aes(x = price)) +
  geom_density()
```
We can see that the prices for most houses lie between 75,000 and 1.5 million dollars. There are a few extreme values of up to 7.7 million dollars.
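To back this impression up with numbers, one can look at a few quantiles of the price (a quick sketch; remember that the price is now given in units of 1000 dollars):

```{r}
# Selected quantiles of the price (in units of 1000 dollars)
quantile(house_sales_prediction$price, probs = c(0.01, 0.25, 0.5, 0.75, 0.99, 1))
```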
Feature engineering often allows us to incorporate additional knowledge about the data and underlying processes.
This can often greatly enhance predictive performance.
A simple example: a house with `yr_renovated == 0` has not been renovated yet.
Additionally, we want to drop features which should not have any influence (the `id` column).
After those initial manipulations, we load all required packages and create a `r ref("Task")` containing our data.
```{r}
library(mlr3)
library(mlr3viz)

tsk = TaskRegr$new("sales", house_sales_prediction, target = "price")
```
We can inspect associations between variables using `r mlr_pkg("mlr3viz")`'s `autoplot()` function in order to get some first impressions of our data.
Note that this in no way prevents us from using other plotting functions of our choice on the original data.
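For example, we could plot the raw data directly with `r cran_pkg("ggplot2")` (a small sketch using the `sqft_living` column):

```{r}
# Scatterplot of living area vs. price on the raw data
ggplot(house_sales_prediction, aes(x = sqft_living, y = price)) +
  geom_point(alpha = 0.2)
```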
The outcome we want to predict is the `price` variable.
The `autoplot()` function provides a good first glimpse of our data. As the resulting object is a `r cran_pkg("ggplot2")` object, we can use faceting and other `r cran_pkg("ggplot2")` functions in order to enhance plots.
```{r}
library(ggplot2)
autoplot(tsk) + facet_wrap(~renovated)
```
We can observe that renovated houses seem to achieve higher sales prices, so `renovated` might be a relevant feature.
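A quick numeric check of this impression (a sketch using base R's `aggregate()`):

```{r}
# Median price (in units of 1000 dollars) by renovation status
aggregate(price ~ renovated, data = house_sales_prediction, FUN = median)
```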
Additionally, we can for example look at the condition of the house. Again, we can clearly see that the price rises with increasing condition.
```{r}
autoplot(tsk) + facet_wrap(~condition)
```
In addition to the association with the target variable, the association between the features can also lead to interesting insights.
We investigate this using variables associated with the quality and size of the house.
Note that we use `$clone()` and `$select()` to clone the task and select only a subset of the features for the `autoplot()` function, as `autoplot()` by default uses all features. The task is cloned before we select features in order to keep the original task intact.
```{r}
# Variables associated with quality
autoplot(tsk$clone()$select(tsk$feature_names[c(3, 17)]), type = "pairs")
```
```{r}
# Variables associated with size
autoplot(tsk$clone()$select(tsk$feature_names[c(9:12)]), type = "pairs")
```
In `r mlr_pkg("mlr3")`, we do not create train and test data sets, but instead keep only a vector of train and test indices.
```{r}
set.seed(4411)
train.idx = sample(seq_len(tsk$nrow), 0.7 * tsk$nrow)
test.idx = setdiff(seq_len(tsk$nrow), train.idx)
```
Decision trees can not only be used as a powerful tool for predictive modeling, but also for exploratory data analysis.
In order to fit a decision tree, we first get the `regr.rpart` learner from the `mlr_learners` dictionary by using the sugar function `r ref("lrn")`.
For now, we leave out the `zipcode` variable, as we also have the latitude and longitude of each house.
```{r}
tsk_nozip = tsk$clone()$select(setdiff(tsk$feature_names, "zipcode"))

# Get the learner
lrn = lrn("regr.rpart")

# And train on the task
lrn$train(tsk_nozip, row_ids = train.idx)
```
```{r}
plot(lrn$model)
text(lrn$model)
```
The learned tree relies on several variables in order to distinguish between cheaper and pricier houses. The features we split on are `grade` and `sqft_living`, but also some features related to the location (longitude and latitude).
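As a complementary check, one can inspect the variable importance stored in the fitted `r cran_pkg("rpart")` model (a sketch; this assumes the fitted model contains a `variable.importance` component, which `r cran_pkg("rpart")` computes when splits are present):

```{r}
# Variable importance of the fitted rpart model, largest first
sort(lrn$model$variable.importance, decreasing = TRUE)
```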
We can visualize the price across different regions in order to get more info:
```{r}
# Load the ggmap package in order to visualize on a map
library(ggmap)

# And create a quick plot for the price
qmplot(long, lat, maptype = "watercolor", color = log(price),
  data = house_sales_prediction[train.idx[1:3000], ]) +
  scale_colour_viridis_c()

# And the zipcode
qmplot(long, lat, maptype = "watercolor", color = zipcode,
  data = house_sales_prediction[train.idx[1:3000], ]) +
  guides(color = FALSE)
```
Comparing the two plots, we can see that the price is clearly associated with the ZIP code. As a result, we might indeed want to use the `zipcode` column in our future endeavours.
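A rough numeric check of this association (a sketch): compute the median price per ZIP code and look at how widely it varies.

```{r}
# Median price (in units of 1000 dollars) per ZIP code
zip_medians = tapply(house_sales_prediction$price, house_sales_prediction$zipcode, median)
range(zip_medians)
```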
After getting an initial idea of our data, we might want to construct a first baseline in order to see what a simple model can already achieve.
We use `r ref("resample")` with 3-fold cross-validation on our training data in order to get a reliable estimate of the algorithm's performance on future data.
Before we start defining and training learners, we create a `r ref("Resampling")` in order to make sure that we always compare on exactly the same data.
```{r}
library(mlr3learners)
cv3 = rsmp("cv", folds = 3)
cv3$instantiate(tsk$clone()$filter(train.idx))
```
For the cross-validation, we only use the training data, by cloning the task and filtering it to the observations in the training set.
```{r}
lrn_rpart = lrn("regr.rpart")
res = resample(task = tsk$clone()$filter(train.idx), lrn_rpart, cv3)
res$score(msr("regr.mse"))
sprintf("RMSE of the simple rpart: %s", round(sqrt(res$aggregate()), 2))
```
We might be able to improve upon the RMSE using more powerful learners.
We first load the `r mlr_pkg("mlr3learners")` package, which contains the `regr.ranger` learner (based on the `r cran_pkg("ranger")` package, an implementation of the random forest algorithm).
```{r}
lrn_ranger = lrn("regr.ranger", num.trees = 15L)
res = resample(task = tsk$clone()$filter(train.idx), lrn_ranger, cv3)
res$score(msr("regr.mse"))
sprintf("RMSE of the simple ranger: %s", round(sqrt(res$aggregate()), 2))
```
Often, tuning random forests does not increase predictive performance substantially. If time permits, it can nonetheless lead to improvements and should thus be performed.
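If one did want to tune the random forest, a possible search space could look like the following sketch, written in the same `ParamSet` style used for `r cran_pkg("xgboost")` below; the parameter names are `r cran_pkg("ranger")` arguments, while the ranges are illustrative assumptions, not recommendations:

```{r}
library(paradox)

# Illustrative search space for regr.ranger; the ranges are assumptions
ps_ranger = ParamSet$new(
  params = list(
    ParamInt$new(id = "mtry", lower = 1L, upper = 10L),
    ParamInt$new(id = "min.node.size", lower = 1L, upper = 20L),
    ParamDbl$new(id = "sample.fraction", lower = 0.5, upper = 1)
  ))
```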
In this case, we instead tune a different kind of model: gradient boosted decision trees from the package `r cran_pkg("xgboost")`.
Tuning can often further improve the performance. In this case, we use an `r ref("AutoTuner")` to tune the `xgboost` learner in order to see whether this improves the results.
For the `AutoTuner`, we have to specify a termination criterion (how long the tuning should run), a Tuner (which tuning method to use), and a ParamSet (which space we want to search through).
For now, we do not use the `zipcode` column, as `r cran_pkg("xgboost")` cannot naturally deal with categorical features.
When resampled, the `AutoTuner` automatically performs nested cross-validation.
```{r}
set.seed(444L)
library(mlr3tuning)
library(paradox)

lrn_xgb = lrn("regr.xgboost")

# Define the ParamSet
ps = ParamSet$new(
  params = list(
    ParamDbl$new(id = "eta", lower = 0.2, upper = 0.4),
    ParamDbl$new(id = "min_child_weight", lower = 1, upper = 20),
    ParamDbl$new(id = "subsample", lower = 0.7, upper = 0.8),
    ParamDbl$new(id = "colsample_bytree", lower = 0.9, upper = 1),
    ParamDbl$new(id = "colsample_bylevel", lower = 0.5, upper = 0.7),
    ParamInt$new(id = "nrounds", lower = 1L, upper = 25)
  ))

# Define the Terminator
terminator = TerminatorEvaluations$new(10)

cv3 = rsmp("cv", folds = 3)
at = AutoTuner$new(lrn_xgb, cv3, measures = msr("regr.mse"), ps, terminator,
  tuner = TunerRandomSearch, tuner_settings = list())
```
```{r}
# And resample the AutoTuner
res = resample(tsk_nozip$clone()$filter(train.idx), at, cv3)

res$score(msr("regr.mse"))
sprintf("RMSE of the tuned xgboost: %s", round(sqrt(res$aggregate()), 2))
```
We can obtain the resulting parameters in the respective splits by accessing the `r ref("ResampleResult")`.
```{r}
sapply(res$learners, function(x) x$param_set$values)
```
NOTE: To keep runtime low, we only tune parts of the hyperparameter space of `r cran_pkg("xgboost")` in this example. Additionally, we only allow for $10$ random search iterations, which is usually too few for real-world applications.
Nonetheless, we are able to obtain an improved performance compared to the `r cran_pkg("ranger")` model.
In order to further improve our results, we have several options, for example:

- engineering additional features (e.g., from the ZIP code), or
- using feature filtering to obtain a sparser model.

Below, we investigate some of those possibilities and check whether they improve performance.
In order to better use the information contained in the ZIP codes, we compute a new feature, `med_price`: the median price in each ZIP code. This might help our model to improve the prediction.
```{r}
library(data.table)  # house_sales_prediction is a data.table

# Create a new feature and append it to the task
zip_price = house_sales_prediction[, .(med_price = median(price)), by = zipcode]

# Join on the original data to match with the original rows, then cbind to the task
tsk$cbind(house_sales_prediction[zip_price, on = "zipcode"][, "med_price"])
```
Again, we run `r ref("resample")` and compute the RMSE.
```{r}
lrn_ranger = lrn("regr.ranger")
res = resample(task = tsk$clone()$filter(train.idx), lrn_ranger, cv3)
res$score(msr("regr.mse"))
sprintf("RMSE of ranger with med_price: %s", round(sqrt(res$aggregate()), 2))
```
In many cases, we might want to have a sparse model.
For this purpose, we can use a `r ref("mlr3filters::Filter")` implemented in `r mlr_pkg("mlr3filters")`.
This can prevent our learner from overfitting and make it easier for humans to interpret the model, as fewer variables influence the resulting prediction.
```{r}
library(mlr3filters)
filter = FilterMRMR$new()$calculate(tsk)
tsk_ftsel = tsk$clone()$select(head(names(filter$scores), 12))
```
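We can check which features were kept by inspecting the filtered task:

```{r}
# Features remaining after filtering
tsk_ftsel$feature_names
```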
The resulting RMSE is slightly higher, and at the same time we only use $12$ features.
```{r}
lrn_ranger = lrn("regr.ranger")
res = resample(task = tsk_ftsel$clone()$filter(train.idx), lrn_ranger, cv3)
res$score(msr("regr.mse"))
sprintf("RMSE of ranger with filtering: %s", round(sqrt(res$aggregate()), 2))
```