View source: R/rekognition_operations.R

rekognition_detect_moderation_labels        R Documentation

Detects unsafe content in a specified JPEG or PNG format image

Description

Detects unsafe content in a specified JPEG or PNG format image. Use detect_moderation_labels to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.

See https://www.paws-r-sdk.com/docs/rekognition_detect_moderation_labels/ for full documentation.

Usage

rekognition_detect_moderation_labels(
  Image,
  MinConfidence = NULL,
  HumanLoopConfig = NULL,
  ProjectVersion = NULL
)
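
What follows is a minimal sketch of calling this operation through a paws
client. It assumes the paws package is installed and AWS credentials are
configured; "my-bucket" and "photo.jpg" are hypothetical placeholders for an
existing S3 object.

library(paws)

# Create a Rekognition client using the default credential chain.
svc <- rekognition()

resp <- svc$detect_moderation_labels(
  Image = list(
    S3Object = list(
      Bucket = "my-bucket",
      Name = "photo.jpg"
    )
  ),
  MinConfidence = 60
)

# Each returned moderation label carries a Name, ParentName, and Confidence.
for (label in resp$ModerationLabels) {
  cat(label$Name, "-", label$Confidence, "\n")
}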

Arguments

Image

[required] The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
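
As a hedged illustration of the two accepted Image forms (with "local.png"
and the S3 names as hypothetical placeholders), the paws SDK accepts a raw
vector in the Bytes field and handles the base64 encoding itself:

svc <- paws::rekognition()

# Form 1: raw image bytes read from a local file.
img_bytes <- readBin("local.png", what = "raw", n = file.size("local.png"))
svc$detect_moderation_labels(Image = list(Bytes = img_bytes))

# Form 2: a reference to an image already stored in S3.
svc$detect_moderation_labels(
  Image = list(S3Object = list(Bucket = "my-bucket", Name = "photo.png"))
)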

MinConfidence

Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.

If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.
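
For example, in a sketch reusing the placeholder names above, raising
MinConfidence narrows the result set, while omitting it falls back to the
50 percent default:

svc <- paws::rekognition()

# Only labels detected with at least 90 percent confidence are returned.
resp_strict <- svc$detect_moderation_labels(
  Image = list(S3Object = list(Bucket = "my-bucket", Name = "photo.jpg")),
  MinConfidence = 90
)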

HumanLoopConfig

Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
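
A sketch of routing results to human review through Amazon Augmented AI
(A2I); the flow definition ARN and loop name below are hypothetical
placeholders that must match resources created beforehand:

svc <- paws::rekognition()

svc$detect_moderation_labels(
  Image = list(S3Object = list(Bucket = "my-bucket", Name = "photo.jpg")),
  HumanLoopConfig = list(
    HumanLoopName = "my-moderation-loop",
    FlowDefinitionArn = "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-flow",
    DataAttributes = list(
      ContentClassifiers = list("FreeOfPersonallyIdentifiableInformation")
    )
  )
)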

ProjectVersion

Identifier for the custom adapter, supplied as a ProjectVersionArn. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
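
A sketch of invoking a custom adapter; the project version ARN below is a
hypothetical placeholder for one returned by your own CreateProjectVersion
call:

svc <- paws::rekognition()

svc$detect_moderation_labels(
  Image = list(S3Object = list(Bucket = "my-bucket", Name = "photo.jpg")),
  ProjectVersion = "arn:aws:rekognition:us-east-1:123456789012:project/my-adapter/version/v1/1234567890123"
)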

