prepare_and_tokenize | R Documentation |
This is a very simple tokenizer that splits text on spaces. It can also optionally apply the cleaning processes from prepare_text.
prepare_and_tokenize(text, prepare = TRUE, ...)
text: A character vector to clean.
prepare: Logical; should the text be passed through prepare_text?
...: Arguments passed on to prepare_text.
The text as a list of character vectors. Each element of each vector is roughly equivalent to a word.
prepare_and_tokenize("This is some text.")
prepare_and_tokenize("This is some text.", space_punctuation = FALSE)
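For comparison, when prepare = FALSE the tokenization is roughly equivalent to splitting each string on spaces with base R's strsplit; this is an illustrative sketch, not the package's actual implementation:

```r
# Rough base-R equivalent of the prepare = FALSE path:
# split each element of a character vector on single spaces.
strsplit("This is some text.", " ", fixed = TRUE)
# Returns a list with one character vector: "This" "is" "some" "text."
```

Note that without the cleaning step, trailing punctuation stays attached to the final token ("text."); prepare_text's space_punctuation option is what separates "." into its own token in the first example above.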