View source: R/word_embeddings.R
step_word_embeddings
step_word_embeddings() creates a specification of a recipe step that will convert a token variable into word-embedding dimensions by aggregating the vectors of each token from a pre-trained embedding.
step_word_embeddings(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  columns = NULL,
  embeddings,
  aggregation = c("sum", "mean", "min", "max"),
  aggregation_default = 0,
  prefix = "wordembed",
  keep_original_cols = FALSE,
  skip = FALSE,
  id = rand_id("word_embeddings")
)
recipe
A recipe object. The step will be added to the sequence of operations for this recipe.

...
One or more selector functions to choose which variables are affected by the step. See selections() for more details.

role
For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.

trained
A logical to indicate if the quantities for preprocessing have been estimated.

columns
A character string of variable names that will be populated (eventually) by the terms argument.

embeddings
A tibble of pre-trained word embeddings, such as those returned by the embedding_glove function from the textdata package. The first column should contain tokens, and the remaining columns should contain the embedding vectors.

aggregation
A character giving the name of the aggregation function to use. Must be one of "sum", "mean", "min", or "max". Defaults to "sum".

aggregation_default
A numeric giving the default value used when none of a row's tokens are found in the embedding. Defaults to 0 (see the sketch after this argument list).

prefix
A character string that will be the prefix to the resulting new variables. See notes below.

keep_original_cols
A logical to keep the original variables in the output. Defaults to FALSE.

skip
A logical. Should the step be skipped when the recipe is baked by bake()? Defaults to FALSE.

id
A character string that is unique to this step to identify it.
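As a quick illustration of the aggregation and aggregation_default arguments, here is a minimal sketch. It is not part of the package documentation; the toy embedding and data are made up. A document whose tokens are all missing from the embedding falls back to aggregation_default.

library(recipes)
library(textrecipes)
library(tibble)

# Toy embedding: two dimensions for three known tokens.
toy_embeddings <- tibble(
  tokens = c("the", "cat", "ran"),
  d1 = c(1, 0, 0),
  d2 = c(0, 1, 0)
)

# "Dogs bark." contains no tokens found in the toy embedding, so its
# embedding columns are filled with `aggregation_default`.
toy_data <- tibble(text = c("The cat ran.", "Dogs bark."))

recipe(~ text, data = toy_data) %>%
  step_tokenize(text) %>%
  step_word_embeddings(
    text,
    embeddings = toy_embeddings,
    aggregation = "mean",
    aggregation_default = 0
  ) %>%
  prep() %>%
  bake(new_data = NULL)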
Word embeddings map words (or other tokens) into a high-dimensional feature space. This function maps pre-trained word embeddings onto the tokens in your data.

The argument embeddings provides the pre-trained vectors. Each dimension present in this tibble becomes a new feature column, with each column aggregated across each row of your text using the function supplied in the aggregation argument.
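Conceptually, this is similar to the following manual computation. This is a rough sketch of the idea, not the package's internal code; the object names and toy values are made up.

library(dplyr)
library(tibble)

# A toy embedding plus one row per (document, token) pair.
toy_embeddings <- tibble(
  tokens = c("the", "cat", "ran"),
  d1 = c(1, 0, 0),
  d2 = c(0, 1, 0)
)
doc_tokens <- tibble(
  doc = c(1, 1, 1, 2, 2),
  tokens = c("the", "cat", "ran", "the", "cat")
)

# Look up each token's vector, then aggregate column-wise within a document
# (here with sum, the default aggregation).
doc_tokens %>%
  left_join(toy_embeddings, by = "tokens") %>%
  group_by(doc) %>%
  summarise(across(c(d1, d2), sum))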
The new components will have names that begin with prefix, then the name of the aggregation function, then the name of the variable from the embeddings tibble (usually something like "d7"). For example, using the default "wordembed" prefix and the GloVe embeddings from the textdata package (where the column names are d1, d2, etc.), the new columns would be wordembed_d1, wordembed_d2, etc.
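If you want to see the exact names generated for your data, the most reliable approach is to prep the recipe, bake it, and inspect names(). A minimal sketch follows; the toy objects are made up and prefix = "glove" is just an example value.

library(recipes)
library(textrecipes)
library(tibble)

emb <- tibble(tokens = c("cat", "dog"), d1 = c(0.1, 0.9), d2 = c(0.4, 0.2))
dat <- tibble(text = c("cat dog", "dog"))

recipe(~ text, data = dat) %>%
  step_tokenize(text) %>%
  step_word_embeddings(text, embeddings = emb, prefix = "glove") %>%
  prep() %>%
  bake(new_data = NULL) %>%
  names()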
An updated version of recipe with the new step added to the sequence of existing steps (if any).
When you tidy() this step, a tibble is returned with columns terms (the selectors or variables selected), embedding_rows (the number of rows in the embedding), and aggregation (the aggregation method).
The underlying operation does not allow for case weights.
step_tokenize() to turn characters into tokens.

Other Steps for Numeric Variables From Tokens: step_lda(), step_texthash(), step_tfidf(), step_tf()
library(recipes)
library(textrecipes)
library(tibble)

# A small toy embedding: one row per token, one column per dimension.
embeddings <- tibble(
  tokens = c("the", "cat", "ran"),
  d1 = c(1, 0, 0),
  d2 = c(0, 1, 0),
  d3 = c(0, 0, 1)
)

sample_data <- tibble(
  text = c(
    "The.",
    "The cat.",
    "The cat ran."
  ),
  text_label = c("fragment", "fragment", "sentence")
)

rec <- recipe(text_label ~ ., data = sample_data) %>%
  step_tokenize(text) %>%
  step_word_embeddings(text, embeddings = embeddings)

obj <- rec %>%
  prep()

bake(obj, sample_data)

tidy(rec, number = 2)
tidy(obj, number = 2)
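For real applications you would typically supply a full pre-trained embedding rather than the toy tibble above. A hedged sketch follows; it requires the textdata package, and embedding_glove6b() prompts to download the GloVe files on first use, so the code is left commented out.

# glove6b <- textdata::embedding_glove6b(dimensions = 50)
#
# recipe(text_label ~ ., data = sample_data) %>%
#   step_tokenize(text) %>%
#   step_word_embeddings(text, embeddings = glove6b, aggregation = "mean") %>%
#   prep() %>%
#   bake(new_data = NULL)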