View source: R/rekognition_operations.R
rekognition_detect_moderation_labels (R Documentation)
Detects unsafe content in a specified JPEG or PNG format image. Use detect_moderation_labels
to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.
See https://www.paws-r-sdk.com/docs/rekognition_detect_moderation_labels/ for full documentation.
rekognition_detect_moderation_labels(
Image,
MinConfidence = NULL,
HumanLoopConfig = NULL,
ProjectVersion = NULL
)
Image: [required] The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field.
MinConfidence: Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value. If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.
HumanLoopConfig: Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
ProjectVersion: Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
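A minimal sketch of calling this operation with the paws SDK. It assumes AWS credentials are already configured (environment variables, config files, or an instance profile), and the bucket and object key shown are illustrative placeholders, not real resources:

```r
library(paws)

# Create a Rekognition client; credentials and region are resolved
# from the standard AWS configuration sources.
svc <- rekognition()

# Hypothetical bucket and key -- replace with an image you own.
resp <- svc$detect_moderation_labels(
  Image = list(
    S3Object = list(
      Bucket = "my-example-bucket",
      Name = "photos/upload.jpg"
    )
  ),
  MinConfidence = 75  # only return labels at >= 75% confidence
)

# Each entry in ModerationLabels carries Name, ParentName, and Confidence.
for (label in resp$ModerationLabels) {
  cat(label$Name, "-", label$Confidence, "\n")
}
```

To send local image bytes instead of an S3 object, pass `Image = list(Bytes = ...)` with the raw file contents; the SDK handles encoding, so manual base64-encoding is generally unnecessary.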