vectorize | R Documentation
This function turns texts into feature vectors.
vectorize(
  input,
  tokens,
  remove_punct,
  remove_symbols,
  remove_numbers,
  lowercase,
  n,
  weighting,
  trim,
  threshold
)
input
    A corpus object, for example one created with quanteda::corpus().

tokens
    The type of tokens to extract, either "character" or "word".

remove_punct
    A logical value. TRUE removes punctuation marks; FALSE keeps them.

remove_symbols
    A logical value. TRUE removes symbols; FALSE keeps them.

remove_numbers
    A logical value. TRUE removes numbers; FALSE keeps them.

lowercase
    A logical value. TRUE transforms all tokens to lower case.

n
    The order or size of the n-grams being extracted.

weighting
    The type of weighting to use: "rel" for relative frequencies, "tf-idf", or "boolean".

trim
    A logical value. If TRUE, only the most frequent tokens are kept.

threshold
    A numeric value indicating how many of the most frequent tokens to keep.
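As an illustration of the arguments above, the call below extracts word unigrams with tf-idf weighting instead of character n-grams. This is a hedged sketch: the corpus content and the threshold value are invented for the example.

word_corpus <- quanteda::corpus("The cat sat on the mat and the dog sat on the rug.")
word_dfm <- vectorize(word_corpus, tokens = "word", remove_punct = TRUE,
  remove_symbols = TRUE, remove_numbers = TRUE, lowercase = TRUE,
  n = 1, weighting = "tf-idf", trim = TRUE, threshold = 100)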
All the authorship analysis functions call vectorize() with the standard parameters for the selected algorithm. This function is therefore provided only for users who want to modify these parameters, or as a convenience when the same dfm has to be reused by several algorithms, so as to avoid vectorizing the same data multiple times. Most users who only need to run a standard analysis do not need to use this function.
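For example, to avoid vectorizing the same data twice, the dfm can be computed once and then handed to each analysis. A minimal sketch follows; the toy corpus is invented, and the analysis functions the dfm would be passed to depend on your workflow, so they are only indicated in a comment.

# Toy corpus with two documents (invented texts, for illustration only).
reuse_corpus <- quanteda::corpus(c(
  doc1 = "The cat sat on the mat. The mat was red.",
  doc2 = "A dog slept by the door. The door was open."
))

# Vectorize once with explicit parameters...
shared_dfm <- vectorize(reuse_corpus, tokens = "character", remove_punct = FALSE,
  remove_symbols = TRUE, remove_numbers = TRUE, lowercase = TRUE,
  n = 5, weighting = "rel", trim = TRUE, threshold = 1500)

# ...then pass shared_dfm to each authorship analysis function you run,
# instead of letting each one re-vectorize the raw corpus.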
A dfm (document-feature matrix) containing each text as a feature vector. N-gram tokenisation does not cross sentence boundaries.
mycorpus <- quanteda::corpus("The cat sat on the mat.")
quanteda::docvars(mycorpus, "author") <- "author1"
mydfm <- vectorize(mycorpus, tokens = "character", remove_punct = FALSE,
  remove_symbols = TRUE, remove_numbers = TRUE, lowercase = TRUE,
  n = 5, weighting = "rel", trim = TRUE, threshold = 1500)
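The returned object is a regular quanteda dfm, so the usual quanteda accessors can be used to inspect it. Whether document variables such as "author" are carried over by vectorize() is an assumption to verify, as noted in the comment.

head(quanteda::featnames(mydfm))  # the character 5-grams used as features
quanteda::ndoc(mydfm)             # number of documents in the matrix
quanteda::docvars(mydfm)          # docvars such as "author", if preserved by vectorize()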