tokenizer | R Documentation
Description

Tokenize a document or character vector.
Usage

Boost_tokenizer(x)
MC_tokenizer(x)
scan_tokenizer(x)
Arguments

x: A character vector, or an object that can be coerced to character by as.character.
Details

The quality and correctness of a tokenization algorithm depend strongly on the context and application scenario. Relevant factors are the language of the underlying text and the notions of whitespace (which can vary with the encoding used and the language) and punctuation marks. Consequently, for superior results you probably need a custom tokenization function; the Examples section below shows a simple one built on strsplit.
Boost_tokenizer uses the Boost (https://www.boost.org) Tokenizer (via Rcpp).

MC_tokenizer implements the functionality of the tokenizer in the MC toolkit (https://www.cs.utexas.edu/~dml/software/mc/).

scan_tokenizer simulates scan(..., what = "character").
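A quick way to see how the three tokenizers differ is to run them on the same input. The sample sentence below is purely illustrative; the exact tokens produced, especially around punctuation, depend on the tokenizer:

s <- "The U.S. economy, analysts say, is doing fine."

Boost_tokenizer(s)  # Boost Tokenizer default rules
MC_tokenizer(s)     # MC toolkit rules
scan_tokenizer(s)   # behaves like scan(text = s, what = "character")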
Value

A character vector consisting of tokens obtained by tokenization of x.
See Also

getTokenizers to list tokenizers provided by package tm.

Regexp_Tokenizer for tokenizers using regular expressions provided by package NLP.

tokenize for a simple regular-expression-based tokenizer provided by package tau.

tokenizers for a collection of tokenizers provided by package tokenizers.
data("crude")
Boost_tokenizer(crude[[1]])
MC_tokenizer(crude[[1]])
scan_tokenizer(crude[[1]])
strsplit_space_tokenizer <- function(x)
unlist(strsplit(as.character(x), "[[:space:]]+"))
strsplit_space_tokenizer(crude[[1]])
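Building on strsplit_space_tokenizer, a custom tokenizer can fold in further normalization. The following sketch is illustrative only (the name punct_trim_tokenizer and its regular expressions are not part of tm); it lowercases the input and strips punctuation from the edges of each token:

# Illustrative custom tokenizer (not part of tm): split on whitespace,
# lowercase, and trim punctuation from both ends of each token.
punct_trim_tokenizer <- function(x) {
    tokens <- unlist(strsplit(tolower(as.character(x)), "[[:space:]]+"))
    tokens <- gsub("^[[:punct:]]+|[[:punct:]]+$", "", tokens)
    tokens[nzchar(tokens)]  # drop tokens that were pure punctuation
}
punct_trim_tokenizer(crude[[1]])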