View source: R/comprehend_operations.R

comprehend_detect_toxic_content R Documentation

Performs toxicity analysis on the list of text strings that you provide as input

Description

Performs toxicity analysis on the list of text strings that you provide as input. The API response contains a results list that matches the size of the input list. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.

See https://www.paws-r-sdk.com/docs/comprehend_detect_toxic_content/ for full documentation.

Usage

comprehend_detect_toxic_content(TextSegments, LanguageCode)

Arguments

TextSegments

[required] A list of up to 10 text strings. Each string has a maximum size of 1 KB, and the maximum size of the list is 10 KB.

LanguageCode

[required] The language of the input text. Currently, English is the only supported language.
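The call above can be sketched as follows through a paws Comprehend client. This is a minimal illustrative example, not tested output: the sample text segments are invented, and it assumes AWS credentials are already configured in the environment. Per the paws/Comprehend API shape, each element of TextSegments is a list with a Text field, and the response carries a ResultList with per-segment toxicity scores.

```r
# Sketch: toxicity analysis on two short text segments (requires AWS credentials).
library(paws)

comprehend <- paws::comprehend()

resp <- comprehend$detect_toxic_content(
  TextSegments = list(
    list(Text = "This is a perfectly friendly sentence."),
    list(Text = "Another short segment to analyze.")
  ),
  LanguageCode = "en"  # English is currently the only supported language
)

# The response contains one result per input segment, each with an
# overall Toxicity score and per-category labels.
for (result in resp$ResultList) {
  cat("Overall toxicity:", result$Toxicity, "\n")
}
```

Note that the input is capped at 10 segments of at most 1 KB each (10 KB total), so longer documents must be split before calling the API.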


paws.machine.learning documentation built on Sept. 12, 2024, 6:23 a.m.