process_document: Tokenize text using spaCy


Description

Tokenize text using spaCy. The results of tokenization are stored as a Python object; to obtain the token results in R, use get_tokens(). See http://spacy.io.
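The two-step workflow described above (tokenize first, retrieve later) might look roughly as follows. This is a minimal sketch, not taken from the package examples: it assumes spaCy and the spacyr Python backend are installed and initialized, and that get_tokens() with no arguments retrieves the results of the most recent process_document() call.

```r
library(spacyr)
spacy_initialize()

# tokenize; results are held as a Python object, not returned to R
process_document(c(doc1 = "This is a sentence."))

# retrieve the token results in R (assumed no-argument form)
tokens <- get_tokens()
```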

Usage

process_document(x, multithread, ...)

Arguments

x

input text

multithread

logical; if TRUE, the processing is parallelized

...

arguments passed to specific methods

Value

A result marker object. The tokenization results themselves are stored as a Python object; use get_tokens() to retrieve them in R.

Examples

spacy_initialize()
# spacy_initialize() must report "tag() is ready to run" before running the following
txt <- c(text1 = "This is the first sentence.\nHere is the second sentence.", 
         text2 = "This is the second document.")
results <- spacy_parse(txt)

spacyr documentation built on March 26, 2020, 5:25 p.m.