View source: R/preprocessing.R

Description

Transform each text in texts to a sequence of integers. Only the top "num_words" most frequent words are taken into account, and only words known by the tokenizer are used.
Usage

texts_to_sequences(tokenizer, texts)
Arguments

tokenizer    Tokenizer
texts        Vector/list of texts (strings).
See Also

Other text tokenization: fit_text_tokenizer(), save_text_tokenizer(), sequences_to_matrix(), text_tokenizer(), texts_to_matrix(), texts_to_sequences_generator()
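Examples

A minimal usage sketch: fit a tokenizer on a few texts, then convert them to integer sequences. The example texts and the num_words value are illustrative; it assumes the keras package is attached and a working backend is installed.

library(keras)

texts <- c("the cat sat on the mat", "the dog ate my homework")

# Build a tokenizer limited to the 100 most frequent words and fit it on the texts
tokenizer <- text_tokenizer(num_words = 100)
tokenizer %>% fit_text_tokenizer(texts)

# Convert each text to a vector of integer word indices
sequences <- texts_to_sequences(tokenizer, texts)

# Words outside the top "num_words", or unknown to the tokenizer, are dropped
str(sequences)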