Recipes can be different from their base R counterparts, such as model.matrix. This vignette describes the different methods for encoding categorical predictors, with special attention to interaction terms and contrasts.
Let's start, of course, with the iris data. This has four numeric columns and a single factor column with three levels: 'setosa', 'versicolor', and 'virginica'. Our initial recipe will have no outcome:
library(recipes)

# make a copy for use below
iris <- iris %>% mutate(original = Species)

iris_rec <- recipe(~ ., data = iris)
summary(iris_rec)
A contrast function in R is a method for translating a column with categorical values into one or more numeric columns that take the place of the original. This can also be known as an encoding method or a parameterization function.
The default approach is to create dummy variables using the "reference cell" parameterization. This means that, if there are C levels of the factor, there will be C - 1 dummy variables created and all but the first factor level are made into new columns:
ref_cell <- iris_rec %>%
  step_dummy(Species) %>%
  prep(training = iris)
summary(ref_cell)

# Get a row for each factor level
bake(ref_cell, new_data = NULL, original, starts_with("Species")) %>%
  distinct()
Note that the column that was used to make the new columns (Species) is no longer there. See the section below on obtaining the entire set of C columns.
There are different types of contrasts that can be used for different types of factors. The defaults are:
param <- getOption("contrasts")
param
As listed in ?contrast, there are other options. One alternative is the little-known Helmert contrast:
contr.helmert returns Helmert contrasts, which contrast the second level with the first, the third with the average of the first two, and so on.
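As a quick base-R illustration (added here, not part of the original text), the Helmert contrast matrix for a three-level factor can be inspected directly:

```r
# Helmert contrasts for a 3-level factor: each column contrasts one level
# against the mean of the levels that precede it. For 3 levels, the two
# columns are (-1, 1, 0) and (-1, -1, 2).
contr.helmert(3)
```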
To get this encoding, the global option for contrasts can be changed and saved. step_dummy picks up on this and makes the correct calculations:
# change it:
go_helmert <- param
go_helmert["unordered"] <- "contr.helmert"
options(contrasts = go_helmert)

# now make dummy variables with new parameterization
helmert <- iris_rec %>%
  step_dummy(Species) %>%
  prep(training = iris)
summary(helmert)

bake(helmert, new_data = NULL, original, starts_with("Species")) %>%
  distinct()

# Yuk; go back to the original method
options(contrasts = param)
Note that the column names do not reference a specific level of the species variable. This contrast function has columns that can involve multiple levels; level-specific columns wouldn't make sense.
If no columns are selected (perhaps because an earlier step removed them), the bake() function will return the data as-is (e.g., with no dummy variables).
step_dummy() has an option called keep_original_cols that can be used to keep the original columns that are being used to create the dummy variables.
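For example (a sketch assuming the standard iris data; not from the original text), setting keep_original_cols = TRUE retains the factor column alongside its dummy variables:

```r
library(recipes)

# Keep the Species factor in addition to the indicator columns it creates
kept <- recipe(~ ., data = iris) %>%
  step_dummy(Species, keep_original_cols = TRUE) %>%
  prep(training = iris) %>%
  bake(new_data = NULL)

# Species is still present, along with Species_versicolor and Species_virginica
names(kept)
```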
Creating interactions with recipes requires the use of a model formula, such as
iris_int <- iris_rec %>%
  step_interact(~ Sepal.Width:Sepal.Length) %>%
  prep(training = iris)
summary(iris_int)
In R model formulae, using a * between two variables expands to a*b = a + b + a:b, so that the main effects are included. In step_interact, you can use *, but only the interactions are recorded as columns that need to be created.
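The difference between the two operators is easy to see in base R (an illustration added here, not in the original text):

```r
# `*` expands to both main effects plus the interaction column
colnames(model.matrix(~ Sepal.Width * Sepal.Length, data = iris))

# `:` produces only the interaction column (plus the intercept)
colnames(model.matrix(~ Sepal.Width:Sepal.Length, data = iris))
```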
One thing that recipes does differently than base R is to construct the design matrix in sequential iterations. This is relevant when thinking about interactions between continuous and categorical predictors.
For example, if you were to use the standard formula interface, the creation of the dummy variables happens at the same time as the interactions are created:
model.matrix(~ Species * Sepal.Length, data = iris) %>%
  as.data.frame() %>%
  # show a few specific rows
  slice(c(1, 51, 101)) %>%
  as.data.frame()
With recipes, you create them sequentially. This raises an issue: do I have to type out all of the interaction effects by their specific names when using dummy variables?
# Must I do this?
iris_rec %>%
  step_interact(~ Species_versicolor:Sepal.Length + Species_virginica:Sepal.Length)
Not only is this a pain, but it may not be obvious which dummy variables are available (especially when step_other is used).
The solution is to use a selector:
iris_int <- iris_rec %>%
  step_dummy(Species) %>%
  step_interact(~ starts_with("Species"):Sepal.Length) %>%
  prep(training = iris)
summary(iris_int)
What happens here is that starts_with("Species") is executed on the data available after the previous steps have been applied, which means the dummy variable columns are present. The results of this selector are then translated into an additive function of the matching columns; in this case, that means (Species_versicolor + Species_virginica), so the entire interaction formula becomes (Species_versicolor + Species_virginica):Sepal.Length.
For interactions between multiple sets of dummy variables, the formula could include multiple selectors (e.g., a starts_with() selector on each side of the colon).
Would it work if I didn't make the dummy variables first and just used Species in the interaction step?
iris_int <- iris_rec %>%
  step_interact(~ Species:Sepal.Length) %>%
  prep(training = iris)
summary(iris_int)
Species isn't affected and a warning is issued. Basically, you only get half of what model.matrix does, and that could really be problematic in subsequent steps.
As mentioned above, if there are C levels of the factor, there will be C - 1 dummy variables created. You might want to get all of them back.
Historically, C - 1 columns are used so that a linear dependency is avoided in the design matrix; all C dummy variables would add up row-wise to the intercept column, and the inverse matrix needed for linear regression couldn't be computed. The technical term for a design matrix like this is "less than full rank".
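This rank deficiency is easy to demonstrate in base R (an illustration added here, not part of the original text):

```r
# Bind an intercept column to all C = 3 indicator columns for Species.
# The indicators sum row-wise to the intercept, so the 4-column matrix
# only has rank 3: it is "less than full rank"
X <- cbind(intercept = 1, model.matrix(~ Species - 1, data = iris))
ncol(X)
qr(X)$rank
```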
There are models (e.g., glmnet and others) that can avoid this issue, so you might want to get all of the columns. To do this, step_dummy has an option called one_hot that will make sure that all C columns are produced:
iris_rec %>%
  step_dummy(Species, one_hot = TRUE) %>%
  prep(training = iris) %>%
  bake(original, new_data = NULL, starts_with("Species")) %>%
  distinct()
The option is named that way since this is what computer scientists call "one-hot encoding".
With the default contrast function, this gives you the full set of indicator columns. It might do some seemingly weird (but legitimate) things when used with other contrasts:
hot_reference <- iris_rec %>%
  step_dummy(Species, one_hot = TRUE) %>%
  prep(training = iris) %>%
  bake(original, new_data = NULL, starts_with("Species")) %>%
  distinct()
hot_reference

# from above
options(contrasts = go_helmert)

hot_helmert <- iris_rec %>%
  step_dummy(Species, one_hot = TRUE) %>%
  prep(training = iris) %>%
  bake(original, new_data = NULL, starts_with("Species")) %>%
  distinct()
hot_helmert
Since this contrast doesn't make sense using all C columns, it reverts back to the default encoding.
When a recipe is used with new samples, some factors may contain new levels that were not present when prep was run. If step_dummy encounters this situation, a warning is issued ("There are new levels in a factor") and the indicator variables that correspond to the factor are assigned missing values.
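A small sketch of this behavior (the training/test split below is purely illustrative):

```r
library(recipes)

# Train without the 'virginica' level ...
train_dat <- droplevels(subset(iris, Species != "virginica"))
new_dat   <- subset(iris, Species == "virginica")

novel_rec <- recipe(~ Species, data = train_dat) %>%
  step_dummy(Species) %>%
  prep(training = train_dat)

# ... then bake rows containing it: a warning about new levels is raised
# and the indicator column is filled with missing values
baked <- bake(novel_rec, new_data = new_dat)
```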
One way around this is to use step_other. This step can convert infrequently occurring levels to a new category (which defaults to "other"), and it can also be used to convert new factor levels to "other".
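For instance (a sketch with made-up data; the 10% threshold is illustrative):

```r
library(recipes)

# 'b' and 'c' each occur in under 10% of rows
ex_dat <- data.frame(x = factor(c(rep("a", 95), rep("b", 3), rep("c", 2))))

oth_rec <- recipe(~ x, data = ex_dat) %>%
  step_other(x, threshold = 0.10) %>%
  prep(training = ex_dat)

# The rare levels are pooled into a single "other" level
levels(bake(oth_rec, new_data = NULL)$x)
```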
step_integer has functionality similar to LabelEncoder and encodes new values as zero.
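A brief sketch of step_integer on the iris data (added here for illustration):

```r
library(recipes)

int_rec <- recipe(~ Species, data = iris) %>%
  step_integer(Species) %>%
  prep(training = iris)

# Each of the three levels is mapped to an integer in level order
unique(bake(int_rec, new_data = NULL)$Species)
```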
The embed package can also handle novel factor levels within a recipe; for example, step_tfembed assigns a common numeric score to novel levels.
There are a bunch of steps related to going in-between factors and dummy variables:
- step_unknown assigns missing factor values to an "unknown" category.
- step_other can collapse infrequently occurring levels into an "other" category.
- step_regex will create a single dummy variable based on applying a regular expression to a text field. Similarly, step_count does the same but counts the occurrences of the pattern in the string.
- step_holiday creates dummy variables from date fields to capture holidays.
- step_lincomb can be useful if you over-specify interactions and need to remove linear dependencies.
- step_zv can remove dummy variables that never show a 1 in the column (i.e., are zero-variance).
- step_bin2factor takes a binary indicator and makes a factor variable. This can be useful when using naive Bayes models.
- step_lencode_bayes and others in the embed package can use one or more (non-binary) values to encode factor predictors into a numeric form.
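As one concrete example of these helpers, the zero-variance filter can be sketched as follows (toy data, illustrative only):

```r
library(recipes)

# 'never_one' is constant, like a dummy variable that never shows a 1
zv_dat <- data.frame(x = 1:5, never_one = 0)

zv_rec <- recipe(~ ., data = zv_dat) %>%
  step_zv(all_predictors()) %>%
  prep(training = zv_dat)

# The zero-variance column is removed
names(bake(zv_rec, new_data = NULL))
```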
step_dummy also works with ordered factors. As seen above, the default encoding for ordered factors is to create a series of polynomial variables. There are also a few steps specifically for ordered factors:

- step_ordinalscore can translate the levels to a single numeric score.
- step_unorder can convert to an unordered factor.
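The polynomial encoding for ordered factors can be seen directly in base R (an illustration added here, not in the original text):

```r
# An ordered factor with three levels gets linear (.L) and quadratic (.Q)
# orthogonal polynomial contrast columns by default
sz <- ordered(c("low", "medium", "high"), levels = c("low", "medium", "high"))
colnames(model.matrix(~ sz))
```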