knitr::opts_chunk$set(echo = TRUE)
library(Cubist)
library(dplyr)
library(rlang)
library(tidyrules)
options(digits = 3)
Cubist is an R port of the Cubist GPL C code released by RuleQuest at http://rulequest.com/cubist-info.html. See the last section of this document for information on the porting. The other sections describe the functionality of the R package.
Cubist is a rule-based model that is an extension of Quinlan's M5 model tree. A tree is grown where the terminal leaves contain linear regression models. These models are based on the predictors used in previous splits. Also, there are intermediate linear models at each step of the tree. A prediction is made using the linear regression model at the terminal node of the tree, but is "smoothed" by taking into account the prediction from the linear model in the previous node of the tree (which also occurs recursively up the tree). The tree is reduced to a set of rules, which initially are paths from the top of the tree to the bottom. Rules are eliminated via pruning and/or combined for simplification.
This is explained better in Quinlan (1992). Wang and Witten (1997) attempted to recreate this model using a "rational reconstruction" of Quinlan (1992) that is the basis for the M5P model in Weka (and the R package RWeka).
An example of a model tree can be illustrated using the Ames housing data in the modeldata package.
library(Cubist)
data(ames, package = "modeldata")

# model the data on the log10 scale
ames$Sale_Price <- log10(ames$Sale_Price)

set.seed(11)
in_train_set <- sample(1:nrow(ames), floor(.8 * nrow(ames)))

predictors <- c("Lot_Area", "Alley", "Lot_Shape", "Neighborhood", "Bldg_Type",
                "Year_Built", "Total_Bsmt_SF", "Central_Air", "Gr_Liv_Area",
                "Bsmt_Full_Bath", "Bsmt_Half_Bath", "Full_Bath", "Half_Bath",
                "TotRms_AbvGrd", "Year_Sold", "Longitude", "Latitude")

train_pred <- ames[ in_train_set, predictors]
test_pred  <- ames[-in_train_set, predictors]

train_resp <- ames$Sale_Price[ in_train_set]
test_resp  <- ames$Sale_Price[-in_train_set]

model_tree <- cubist(x = train_pred, y = train_resp)
model_tree
summary(model_tree)
There is no formula method for cubist(); the predictors are specified as a matrix or data frame, and the outcome is a numeric vector.
There is a predict method for the model:
model_tree_pred <- predict(model_tree, test_pred)

## Test set RMSE
sqrt(mean((model_tree_pred - test_resp)^2))

## Test set R^2
cor(model_tree_pred, test_resp)^2
The Cubist model can also use a boosting-like scheme called committees where iterative model trees are created in sequence. The first tree follows the procedure described in the last section. Subsequent trees are created using adjusted versions of the training set outcome: if the model over-predicted a value, the response is adjusted downward for the next model (and so on; see this blog post). Unlike traditional boosting, stage weights for each committee are not used to average the predictions from each model tree; the final prediction is a simple average of the predictions from each model tree.
The committees option can be used to control the number of model trees:
set.seed(1)
com_model <- cubist(x = train_pred, y = train_resp, committees = 3)
summary(com_model)
For this model:
com_pred <- predict(com_model, test_pred)

## RMSE
sqrt(mean((com_pred - test_resp)^2))

## R^2
cor(com_pred, test_resp)^2
Another innovation in Cubist is the use of nearest neighbors to adjust the predictions from the rule-based model. First, a model tree (with or without committees) is created. Once a sample is predicted by this model, Cubist can find its nearest neighbors and determine the average of these training set points. See Quinlan (1993a) for the details of the adjustment as well as this blog post.
The development of rules and committees is independent of the choice of using instances. The original C code allowed the user to use instances, not use them, or let the program decide. Our approach is to build a model with the cubist() function that is agnostic to the decision about instances. When samples are predicted, the argument neighbors can be used to adjust the rule-based model predictions (or not).
We can add instances to the previously fit committee model:
inst_pred <- predict(com_model, test_pred, neighbors = 5)

## RMSE
sqrt(mean((inst_pred - test_resp)^2))

## R^2
cor(inst_pred, test_resp)^2
Note that the previous models used the implicit default of neighbors = 0 for their predictions.
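As a quick check (a minimal sketch using the committee model fit above), predictions made with an explicit neighbors = 0 should match the default predictions:

# the default for predict() is neighbors = 0, so these should be identical
all.equal(
  predict(com_model, test_pred),
  predict(com_model, test_pred, neighbors = 0)
)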
It may also be useful to see how the different models fit a single predictor. Here is the test set data for a model with one predictor (Gr_Liv_Area), 100 committees, and various values of neighbors:
knitr::include_graphics("neighbors.gif")
After the initial use of the instance-based correction, there is very little change in the predictions for the mainstream of the data.
R modeling packages such as caret, tidymodels, and mlr3 can be used to tune the model. See the examples here for more details.
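As one possible illustration (a minimal sketch, not taken from the linked examples), caret includes a "cubist" method whose tuning parameters are committees and neighbors; the grid values and resampling scheme below are arbitrary choices:

library(caret)

# candidate values for the two tuning parameters (arbitrary for this sketch)
grid <- expand.grid(committees = c(1, 10, 50), neighbors = c(0, 5, 9))

set.seed(2)
caret_cubist <- train(
  x = train_pred,
  y = train_resp,
  method = "cubist",
  tuneGrid = grid,
  trControl = trainControl(method = "cv", number = 10)
)

# best combination found by cross-validation
caret_cubist$bestTune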
It should be noted that this variable importance measure does not capture the influence of the predictors when using the instance-based correction.
Rules from a Cubist model can be viewed using summary() as follows:
summary(model_tree)
The tidyRules() function in the tidyrules package returns rules in a tibble (an extension of data frames) with one row per rule. The tibble provides the following information about each rule: support, mean, min, max, error, LHS, RHS, and committee. The values in the LHS and RHS columns are strings that can be parsed as R expressions. These can be passed to dplyr::filter() to obtain the rows of the data corresponding to the rule and to evaluate the response variable.
library(tidyrules)

tr <- tidyRules(model_tree)
tr

tr[, c("LHS", "RHS")]

# let's look at the 7th rule
tr$LHS[7]
tr$RHS[7]
These results can be used to look at specific parts of the data. For example, the 7th rule predictions are:
library(dplyr)
library(rlang)

# helper to convert the LHS/RHS string of a rule into an R expression
char_to_expr <- function(x, index = 1, model = TRUE) {
  x <- x %>% dplyr::slice(index)
  if (model) {
    x <- x %>% dplyr::pull(RHS) %>% rlang::parse_expr()
  } else {
    x <- x %>% dplyr::pull(LHS) %>% rlang::parse_expr()
  }
  x
}

rule_expr  <- char_to_expr(tr, 7, model = FALSE)
model_expr <- char_to_expr(tr, 7, model = TRUE)

# filter the data corresponding to rule 7 and evaluate its linear model
ames %>%
  dplyr::filter(eval_tidy(rule_expr, ames)) %>%
  dplyr::mutate(sale_price_pred = eval_tidy(model_expr, .)) %>%
  dplyr::select(Sale_Price, sale_price_pred)
The summary() method for Cubist shows the usage of each variable in either the rule conditions or the (terminal) linear model. In actuality, many more linear models are used in prediction than are shown in the output. Because of this, the variable usage statistics shown at the end of the summary() output will probably be inconsistent with the rules also shown in the output. At each split of the tree, Cubist saves a linear model (after feature selection) that is allowed to have terms for each variable used in the current split or any split above it. Quinlan (1992) discusses a smoothing algorithm where each model prediction is a linear combination of the parent and child model along the tree. As such, the final prediction is a function of all the linear models from the initial node to the terminal node. The percentages shown in the Cubist output reflect all the models involved in prediction (as opposed to the terminal models shown in the output).
The raw usage statistics are contained in a data frame called usage in the cubist object.
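For example, the statistics for the model tree fit earlier can be printed directly from the fitted object:

# per-variable usage in the rule conditions and in the linear models
model_tree$usage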
The caret and vip packages have general variable importance functions caret::varImp() and vip::vi(). When either function is applied to a cubist object, the variable importance is a linear combination of the usage in the rule conditions and in the model.
For example, to compute the scores:
caret::varImp(model_tree)

# or

vip::vi(model_tree)
As previously mentioned, this code is a port of the command-line C code. To run the C code, the training set data must be converted to a specific file format, as detailed on the RuleQuest website. Two files are created: the file.data file is a header-less, comma-delimited version of the data (the file part is a name given by the user), and the file.names file provides information about the columns (e.g., levels for categorical data and so on). After running the C program, another text file called file.models is created; it contains the information needed for prediction.
Once a model has been built with the R cubist package, the exportCubistFiles() function can be used to create the .data, .names, and .model files so that the same model can be run at the command line.
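As a rough sketch (the prefix and output directory below are arbitrary choices; see ?exportCubistFiles for the full set of arguments), the model above could be exported with:

# write ames_cubist.data, ames_cubist.names, and ames_cubist.model
# to a temporary directory
exportCubistFiles(model_tree, prefix = "ames_cubist", path = tempdir())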
There are a few features in the C code that are not yet operational in the R package:

- letting the C code decide on using instances or not (this choice is more explicit in this package)
- the C code supports binning of predictors