View source: R/natural-language.R
gl_nlp | R Documentation
Analyse text for entities, sentiment, syntax and classification using the Google Natural Language API.
gl_nlp(
string,
nlp_type = c("annotateText", "analyzeEntities", "analyzeSentiment", "analyzeSyntax",
"analyzeEntitySentiment", "classifyText"),
type = c("PLAIN_TEXT", "HTML"),
language = c("en", "zh", "zh-Hant", "fr", "de", "it", "ja", "ko", "pt", "es"),
encodingType = c("UTF8", "UTF16", "UTF32", "NONE")
)
string: Character vector. Text to analyse, or Google Cloud Storage URI(s) pointing to the source text.

nlp_type: Character. Type of analysis to perform. The default, "annotateText", performs all analyses in one call.

type: Character. Whether the input is plain text ("PLAIN_TEXT", the default) or HTML ("HTML").

language: Character. Language of the source text. Must be supported by the API.

encodingType: Character. Text encoding used to process the output. Default "UTF8".
The encoding type can usually be left at the default, "UTF8". Further details on encoding types and the list of currently supported languages are available in the Google Natural Language API documentation.
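Restricting nlp_type to a single analysis can reduce the number of billed features per call. A minimal sketch of a sentiment-only request on HTML input (the HTML string is an invented example, and an already-authenticated session is assumed):

```r
library(googleLanguageR)

# Sentiment-only analysis of an HTML fragment
html <- "<p>The service was outstanding and the staff were friendly.</p>"
sentiment <- gl_nlp(html,
                    nlp_type = "analyzeSentiment",
                    type = "HTML",
                    language = "en",
                    encodingType = "UTF8")

# Only the sentiment-related components are populated
sentiment$documentSentiment
```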
A list containing the requested components, as specified by nlp_type:
sentences: Sentences in the input document.

tokens: Tokens with syntactic information.

entities: Entities with semantic information.

documentSentiment: Overall sentiment of the document.

classifyText: Document classification.

language: Detected language of the text, or the language specified in the request.

text: Original text passed to the API.

Each component is described in detail in the API reference linked below.
https://cloud.google.com/natural-language/docs/reference/rest/v1/documents
## Not run:
library(googleLanguageR)

text <- "To administer medicine to animals is frequently difficult, yet sometimes necessary."

# The default nlp_type "annotateText" performs all analyses in one call
nlp <- gl_nlp(text)

# Inspect the individual components of the result
nlp$sentences
nlp$tokens
nlp$entities
nlp$documentSentiment

# Vectorised input: each element of the character vector is analysed
texts <- c("The cat sat on the mat.", "Oh no, it did not, you fool!")
nlp_results <- gl_nlp(texts)

## End(Not run)
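The examples above assume the session has already been authenticated against the Google Cloud project. A minimal sketch, assuming a service-account JSON key file (the path below is a placeholder):

```r
library(googleLanguageR)

# Authenticate once per session with a service-account key file
# (placeholder path; use your own key downloaded from the Google
# Cloud console, with the Natural Language API enabled)
gl_auth("path/to/service-account-key.json")

# Alternatively, set the GL_AUTH environment variable (e.g. in
# .Renviron) and the package authenticates automatically on load
```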