View source: R/comprehend_service.R
comprehend | R Documentation
Amazon Comprehend is an Amazon Web Services service for gaining insight into the content of documents. Use these actions to determine the topics your documents discuss, the predominant sentiment expressed in them, the predominant language used, and more.
comprehend(
config = list(),
credentials = list(),
endpoint = NULL,
region = NULL
)
config | Optional configuration of credentials, endpoint, and/or region. |
credentials | Optional credentials shorthand for the config parameter. |
endpoint | Optional shorthand for the complete URL to use for the constructed client. |
region | Optional shorthand for the AWS Region used in instantiating the client. |
A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
svc <- comprehend(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)
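All of the fields above are optional. In practice a client is often created with only a region (or with no arguments at all), letting credentials resolve from the usual sources such as environment variables or a shared credentials profile. A minimal sketch; the region value below is a placeholder:

# Create a client using the region shorthand; credentials are picked up
# from the environment or a shared credentials profile.
svc <- comprehend(region = "us-east-1")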
batch_detect_dominant_language | Determines the dominant language of the input text for a batch of documents |
batch_detect_entities | Inspects the text of a batch of documents for named entities and returns information about them |
batch_detect_key_phrases | Detects the key noun phrases found in a batch of documents |
batch_detect_sentiment | Inspects a batch of documents and returns an inference of the prevailing sentiment, POSITIVE, NEUTRAL, MIXED, or NEGATIVE, in each one |
batch_detect_syntax | Inspects the text of a batch of documents for the syntax and part of speech of the words in the document and returns information about them |
batch_detect_targeted_sentiment | Inspects a batch of documents and returns a sentiment analysis for each entity identified in the documents |
classify_document | Creates a classification request to analyze a single document in real time |
contains_pii_entities | Analyzes input text for the presence of personally identifiable information (PII) and returns the labels of identified PII entity types such as name, address, bank account number, or phone number |
create_dataset | Creates a dataset to upload training or test data for a model associated with a flywheel |
create_document_classifier | Creates a new document classifier that you can use to categorize documents |
create_endpoint | Creates a model-specific endpoint for synchronous inference for a previously trained custom model. For information about endpoints, see Managing endpoints |
create_entity_recognizer | Creates an entity recognizer using submitted files |
create_flywheel | Creates a flywheel, an Amazon Web Services resource that orchestrates the ongoing training of a model for custom classification or custom entity recognition |
delete_document_classifier | Deletes a previously created document classifier |
delete_endpoint | Deletes a model-specific endpoint for a previously-trained custom model |
delete_entity_recognizer | Deletes an entity recognizer |
delete_flywheel | Deletes a flywheel |
delete_resource_policy | Deletes a resource-based policy that is attached to a custom model |
describe_dataset | Returns information about the dataset that you specify |
describe_document_classification_job | Gets the properties associated with a document classification job |
describe_document_classifier | Gets the properties associated with a document classifier |
describe_dominant_language_detection_job | Gets the properties associated with a dominant language detection job |
describe_endpoint | Gets the properties associated with a specific endpoint |
describe_entities_detection_job | Gets the properties associated with an entities detection job |
describe_entity_recognizer | Provides details about an entity recognizer including status, S3 buckets containing training data, recognizer metadata, metrics, and so on |
describe_events_detection_job | Gets the status and details of an events detection job |
describe_flywheel | Provides configuration information about the flywheel |
describe_flywheel_iteration | Retrieve the configuration properties of a flywheel iteration |
describe_key_phrases_detection_job | Gets the properties associated with a key phrases detection job |
describe_pii_entities_detection_job | Gets the properties associated with a PII entities detection job |
describe_resource_policy | Gets the details of a resource-based policy that is attached to a custom model, including the JSON body of the policy |
describe_sentiment_detection_job | Gets the properties associated with a sentiment detection job |
describe_targeted_sentiment_detection_job | Gets the properties associated with a targeted sentiment detection job |
describe_topics_detection_job | Gets the properties associated with a topic detection job |
detect_dominant_language | Determines the dominant language of the input text |
detect_entities | Detects named entities in input text when you use the pre-trained model |
detect_key_phrases | Detects the key noun phrases found in the text |
detect_pii_entities | Inspects the input text for entities that contain personally identifiable information (PII) and returns information about them |
detect_sentiment | Inspects text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE) |
detect_syntax | Inspects text for syntax and the part of speech of words in the document |
detect_targeted_sentiment | Inspects the input text and returns a sentiment analysis for each entity identified in the text |
detect_toxic_content | Performs toxicity analysis on the list of text strings that you provide as input |
import_model | Creates a new custom model that replicates a source custom model that you import |
list_datasets | List the datasets that you have configured in this Region |
list_document_classification_jobs | Gets a list of the document classification jobs that you have submitted |
list_document_classifiers | Gets a list of the document classifiers that you have created |
list_document_classifier_summaries | Gets a list of summaries of the document classifiers that you have created |
list_dominant_language_detection_jobs | Gets a list of the dominant language detection jobs that you have submitted |
list_endpoints | Gets a list of all existing endpoints that you've created |
list_entities_detection_jobs | Gets a list of the entity detection jobs that you have submitted |
list_entity_recognizers | Gets a list of the properties of all entity recognizers that you created, including recognizers currently in training |
list_entity_recognizer_summaries | Gets a list of summaries for the entity recognizers that you have created |
list_events_detection_jobs | Gets a list of the events detection jobs that you have submitted |
list_flywheel_iteration_history | Information about the history of a flywheel iteration |
list_flywheels | Gets a list of the flywheels that you have created |
list_key_phrases_detection_jobs | Get a list of key phrase detection jobs that you have submitted |
list_pii_entities_detection_jobs | Gets a list of the PII entity detection jobs that you have submitted |
list_sentiment_detection_jobs | Gets a list of sentiment detection jobs that you have submitted |
list_tags_for_resource | Lists all tags associated with a given Amazon Comprehend resource |
list_targeted_sentiment_detection_jobs | Gets a list of targeted sentiment detection jobs that you have submitted |
list_topics_detection_jobs | Gets a list of the topic detection jobs that you have submitted |
put_resource_policy | Attaches a resource-based policy to a custom model |
start_document_classification_job | Starts an asynchronous document classification job using a custom classification model |
start_dominant_language_detection_job | Starts an asynchronous dominant language detection job for a collection of documents |
start_entities_detection_job | Starts an asynchronous entity detection job for a collection of documents |
start_events_detection_job | Starts an asynchronous event detection job for a collection of documents |
start_flywheel_iteration | Start the flywheel iteration |
start_key_phrases_detection_job | Starts an asynchronous key phrase detection job for a collection of documents |
start_pii_entities_detection_job | Starts an asynchronous PII entity detection job for a collection of documents |
start_sentiment_detection_job | Starts an asynchronous sentiment detection job for a collection of documents |
start_targeted_sentiment_detection_job | Starts an asynchronous targeted sentiment detection job for a collection of documents |
start_topics_detection_job | Starts an asynchronous topic detection job |
stop_dominant_language_detection_job | Stops a dominant language detection job in progress |
stop_entities_detection_job | Stops an entities detection job in progress |
stop_events_detection_job | Stops an events detection job in progress |
stop_key_phrases_detection_job | Stops a key phrases detection job in progress |
stop_pii_entities_detection_job | Stops a PII entities detection job in progress |
stop_sentiment_detection_job | Stops a sentiment detection job in progress |
stop_targeted_sentiment_detection_job | Stops a targeted sentiment detection job in progress |
stop_training_document_classifier | Stops a document classifier training job while in progress |
stop_training_entity_recognizer | Stops an entity recognizer training job while in progress |
tag_resource | Associates a specific tag with an Amazon Comprehend resource |
untag_resource | Removes a specific tag associated with an Amazon Comprehend resource |
update_endpoint | Updates information about the specified endpoint |
update_flywheel | Update the configuration information for an existing flywheel |
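The synchronous detect_* operations take the text inline and return their results directly in the response. A minimal sketch of one such call; the parameter names follow the Amazon Comprehend API, and the text and language code below are placeholder values:

svc <- comprehend()
# Detect the prevailing sentiment of a single UTF-8 text string.
resp <- svc$detect_sentiment(
  Text = "It is raining today in Seattle.",
  LanguageCode = "en"
)
# The response contains the overall sentiment and per-class scores.
resp$Sentiment
resp$SentimentScore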
## Not run:
svc <- comprehend()
svc$batch_detect_dominant_language(
Foo = 123
)
## End(Not run)
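The start_*/describe_*/stop_* operations run asynchronously over documents stored in Amazon S3 and are tracked by job ID. A hedged sketch of that pattern, assuming the standard Comprehend job parameters; the S3 URIs and IAM role ARN below are placeholders:

## Not run:
svc <- comprehend()

# Start an asynchronous sentiment detection job over documents in S3.
start <- svc$start_sentiment_detection_job(
  InputDataConfig = list(
    S3Uri = "s3://amzn-s3-demo-bucket/input/",   # placeholder bucket
    InputFormat = "ONE_DOC_PER_LINE"
  ),
  OutputDataConfig = list(
    S3Uri = "s3://amzn-s3-demo-bucket/output/"   # placeholder bucket
  ),
  DataAccessRoleArn = "arn:aws:iam::123456789012:role/comprehend-s3-access",  # placeholder role
  LanguageCode = "en"
)

# Check on the job later; poll until JobStatus reaches COMPLETED or FAILED.
status <- svc$describe_sentiment_detection_job(JobId = start$JobId)
status$SentimentDetectionJobProperties$JobStatus

## End(Not run)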