tokenize | R Documentation
A simple version of the tokenizer function.
Usage

tokenize(text, match_option = Match$ALL, stopwords = TRUE)

tokenize_tbl(text, match_option = Match$ALL, stopwords = TRUE)

tokenize_tidytext(text, match_option = Match$ALL, stopwords = TRUE)

tokenize_tidy(text, match_option = Match$ALL, stopwords = TRUE)
Arguments

text
    target text.

match_option
    match option for the tokenizer. Default is Match$ALL.

stopwords
    stopwords option. Default is TRUE, which uses the embedded stopwords dictionary. If FALSE, the embedded stopwords dictionary is not used. If a character path to a dictionary txt file is given, that file is used. See the sketch after this section for each form.
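The stopwords argument therefore accepts several forms. A minimal sketch of each follows; the path "my_stopwords.txt" is a hypothetical user-supplied dictionary file, not one shipped with the package.

## Not run: 
tokenize("Test text.", stopwords = TRUE)                 # embedded stopwords dictionary
tokenize("Test text.", stopwords = FALSE)                # no stopwords filtering
tokenize("Test text.", stopwords = "my_stopwords.txt")   # hypothetical user dictionary txt file
## End(Not run)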
Value

A list of tokenization results.
Examples

## Not run: 
tokenize("Test text.")
tokenize("Please use Korean.", Match$ALL_WITH_NORMALIZING)
## End(Not run)
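Judging by their names, tokenize_tidy and tokenize_tidytext appear shaped for use as a token function with the tidytext package. The following is a sketch under that assumption; the data frame and column names (docs, doc, text, word) are illustrative, not part of this page.

## Not run: 
library(dplyr)
library(tidytext)

docs <- tibble(doc = 1:2, text = c("Test text.", "Please use Korean."))

# Pass tokenize_tidy as the tokenizer: unnest_tokens() applies it to the
# text column and emits one token per row in the new word column.
docs %>%
  unnest_tokens(word, text, token = tokenize_tidy)
## End(Not run)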