R/rekognition_operations.R

Defines functions rekognition_update_stream_processor rekognition_update_dataset_entries rekognition_untag_resource rekognition_tag_resource rekognition_stop_stream_processor rekognition_stop_project_version rekognition_start_text_detection rekognition_start_stream_processor rekognition_start_segment_detection rekognition_start_project_version rekognition_start_person_tracking rekognition_start_label_detection rekognition_start_face_search rekognition_start_face_detection rekognition_start_content_moderation rekognition_start_celebrity_recognition rekognition_search_users_by_image rekognition_search_users rekognition_search_faces_by_image rekognition_search_faces rekognition_recognize_celebrities rekognition_put_project_policy rekognition_list_users rekognition_list_tags_for_resource rekognition_list_stream_processors rekognition_list_project_policies rekognition_list_faces rekognition_list_dataset_labels rekognition_list_dataset_entries rekognition_list_collections rekognition_index_faces rekognition_get_text_detection rekognition_get_segment_detection rekognition_get_person_tracking rekognition_get_label_detection rekognition_get_face_search rekognition_get_face_liveness_session_results rekognition_get_face_detection rekognition_get_content_moderation rekognition_get_celebrity_recognition rekognition_get_celebrity_info rekognition_distribute_dataset_entries rekognition_disassociate_faces rekognition_detect_text rekognition_detect_protective_equipment rekognition_detect_moderation_labels rekognition_detect_labels rekognition_detect_faces rekognition_detect_custom_labels rekognition_describe_stream_processor rekognition_describe_projects rekognition_describe_project_versions rekognition_describe_dataset rekognition_describe_collection rekognition_delete_user rekognition_delete_stream_processor rekognition_delete_project_version rekognition_delete_project_policy rekognition_delete_project rekognition_delete_faces rekognition_delete_dataset rekognition_delete_collection rekognition_create_user rekognition_create_stream_processor rekognition_create_project_version rekognition_create_project rekognition_create_face_liveness_session rekognition_create_dataset rekognition_create_collection rekognition_copy_project_version rekognition_compare_faces rekognition_associate_faces

Documented in rekognition_associate_faces rekognition_compare_faces rekognition_copy_project_version rekognition_create_collection rekognition_create_dataset rekognition_create_face_liveness_session rekognition_create_project rekognition_create_project_version rekognition_create_stream_processor rekognition_create_user rekognition_delete_collection rekognition_delete_dataset rekognition_delete_faces rekognition_delete_project rekognition_delete_project_policy rekognition_delete_project_version rekognition_delete_stream_processor rekognition_delete_user rekognition_describe_collection rekognition_describe_dataset rekognition_describe_projects rekognition_describe_project_versions rekognition_describe_stream_processor rekognition_detect_custom_labels rekognition_detect_faces rekognition_detect_labels rekognition_detect_moderation_labels rekognition_detect_protective_equipment rekognition_detect_text rekognition_disassociate_faces rekognition_distribute_dataset_entries rekognition_get_celebrity_info rekognition_get_celebrity_recognition rekognition_get_content_moderation rekognition_get_face_detection rekognition_get_face_liveness_session_results rekognition_get_face_search rekognition_get_label_detection rekognition_get_person_tracking rekognition_get_segment_detection rekognition_get_text_detection rekognition_index_faces rekognition_list_collections rekognition_list_dataset_entries rekognition_list_dataset_labels rekognition_list_faces rekognition_list_project_policies rekognition_list_stream_processors rekognition_list_tags_for_resource rekognition_list_users rekognition_put_project_policy rekognition_recognize_celebrities rekognition_search_faces rekognition_search_faces_by_image rekognition_search_users rekognition_search_users_by_image rekognition_start_celebrity_recognition rekognition_start_content_moderation rekognition_start_face_detection rekognition_start_face_search rekognition_start_label_detection rekognition_start_person_tracking rekognition_start_project_version rekognition_start_segment_detection rekognition_start_stream_processor rekognition_start_text_detection rekognition_stop_project_version rekognition_stop_stream_processor rekognition_tag_resource rekognition_untag_resource rekognition_update_dataset_entries rekognition_update_stream_processor

# This file is generated by make.paws. Please do not edit here.
#' @importFrom paws.common get_config new_operation new_request send_request
#' @include rekognition_service.R
NULL

#' Associates one or more faces with an existing UserID
#'
#' @description
#' Associates one or more faces with an existing UserID. Takes an array of `FaceIds`. Each `FaceId` that are present in the `FaceIds` list is associated with the provided UserID. The maximum number of total `FaceIds` per UserID is 100.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_associate_faces/](https://www.paws-r-sdk.com/docs/rekognition_associate_faces/) for full documentation.
#'
#' @param CollectionId [required] The ID of an existing collection containing the UserID.
#' @param UserId [required] The ID for the existing UserID.
#' @param FaceIds [required] An array of FaceIDs to associate with the UserID.
#' @param UserMatchThreshold An optional value specifying the minimum confidence in the UserID match
#' to return. The default value is 75.
#' @param ClientRequestToken Idempotent token used to identify the request to
#' [`associate_faces`][rekognition_associate_faces]. If you use the same
#' token with multiple [`associate_faces`][rekognition_associate_faces]
#' requests, the same response is returned. Use ClientRequestToken to
#' prevent the same request from being processed more than once.
#'
#' @keywords internal
#'
#' @rdname rekognition_associate_faces
rekognition_associate_faces <- function(CollectionId, UserId, FaceIds, UserMatchThreshold = NULL, ClientRequestToken = NULL) {
  op <- new_operation(
    name = "AssociateFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$associate_faces_input(CollectionId = CollectionId, UserId = UserId, FaceIds = FaceIds, UserMatchThreshold = UserMatchThreshold, ClientRequestToken = ClientRequestToken)
  output <- .rekognition$associate_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$associate_faces <- rekognition_associate_faces

#' Compares a face in the source input image with each of the 100 largest
#' faces detected in the target input image
#'
#' @description
#' Compares a face in the *source* input image with each of the 100 largest faces detected in the *target* input image.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_compare_faces/](https://www.paws-r-sdk.com/docs/rekognition_compare_faces/) for full documentation.
#'
#' @param SourceImage &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param TargetImage &#91;required&#93; The target image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param SimilarityThreshold The minimum level of confidence in the face matches that a match must
#' meet to be included in the `FaceMatches` array.
#' @param QualityFilter A filter that specifies a quality bar for how much filtering is done to
#' identify faces. Filtered faces aren't compared. If you specify `AUTO`,
#' Amazon Rekognition chooses the quality bar. If you specify `LOW`,
#' `MEDIUM`, or `HIGH`, filtering removes all faces that don’t meet the
#' chosen quality bar. The quality bar is based on a variety of common use
#' cases. Low-quality detections can occur for a number of reasons. Some
#' examples are an object that's misidentified as a face, a face that's too
#' blurry, or a face with a pose that's too extreme to use. If you specify
#' `NONE`, no filtering is performed. The default value is `NONE`.
#' 
#' To use quality filtering, the collection you are using must be
#' associated with version 3 of the face model or higher.
#'
#' @keywords internal
#'
#' @rdname rekognition_compare_faces
rekognition_compare_faces <- function(SourceImage, TargetImage, SimilarityThreshold = NULL, QualityFilter = NULL) {
  op <- new_operation(
    name = "CompareFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$compare_faces_input(SourceImage = SourceImage, TargetImage = TargetImage, SimilarityThreshold = SimilarityThreshold, QualityFilter = QualityFilter)
  output <- .rekognition$compare_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$compare_faces <- rekognition_compare_faces

#' Copies a version of an Amazon Rekognition Custom Labels model from a
#' source project to a destination project
#'
#' @description
#' Copies a version of an Amazon Rekognition Custom Labels model from a source project to a destination project. The source and destination projects can be in different AWS accounts but must be in the same AWS Region. You can't copy a model to another AWS service.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_copy_project_version/](https://www.paws-r-sdk.com/docs/rekognition_copy_project_version/) for full documentation.
#'
#' @param SourceProjectArn &#91;required&#93; The ARN of the source project in the trusting AWS account.
#' @param SourceProjectVersionArn &#91;required&#93; The ARN of the model version in the source project that you want to copy
#' to a destination project.
#' @param DestinationProjectArn &#91;required&#93; The ARN of the project in the trusted AWS account that you want to copy
#' the model version to.
#' @param VersionName &#91;required&#93; A name for the version of the model that's copied to the destination
#' project.
#' @param OutputConfig &#91;required&#93; The S3 bucket and folder location where the training output for the
#' source model version is placed.
#' @param Tags The key-value tags to assign to the model version.
#' @param KmsKeyId The identifier for your AWS Key Management Service key (AWS KMS key).
#' You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of
#' your KMS key, an alias for your KMS key, or an alias ARN. The key is
#' used to encrypt training results and manifest files written to the
#' output Amazon S3 bucket (`OutputConfig`).
#' 
#' If you choose to use your own KMS key, you need the following
#' permissions on the KMS key.
#' 
#' -   kms:CreateGrant
#' 
#' -   kms:DescribeKey
#' 
#' -   kms:GenerateDataKey
#' 
#' -   kms:Decrypt
#' 
#' If you don't specify a value for `KmsKeyId`, images copied into the
#' service are encrypted using a key that AWS owns and manages.
#'
#' @keywords internal
#'
#' @rdname rekognition_copy_project_version
rekognition_copy_project_version <- function(SourceProjectArn, SourceProjectVersionArn, DestinationProjectArn, VersionName, OutputConfig, Tags = NULL, KmsKeyId = NULL) {
  op <- new_operation(
    name = "CopyProjectVersion",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$copy_project_version_input(SourceProjectArn = SourceProjectArn, SourceProjectVersionArn = SourceProjectVersionArn, DestinationProjectArn = DestinationProjectArn, VersionName = VersionName, OutputConfig = OutputConfig, Tags = Tags, KmsKeyId = KmsKeyId)
  output <- .rekognition$copy_project_version_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$copy_project_version <- rekognition_copy_project_version

#' Creates a collection in an AWS Region
#'
#' @description
#' Creates a collection in an AWS Region. You can add faces to the collection using the [`index_faces`][rekognition_index_faces] operation.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_collection/](https://www.paws-r-sdk.com/docs/rekognition_create_collection/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; ID for the collection that you are creating.
#' @param Tags A set of tags (key-value pairs) that you want to attach to the
#' collection.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_collection
rekognition_create_collection <- function(CollectionId, Tags = NULL) {
  op <- new_operation(
    name = "CreateCollection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_collection_input(CollectionId = CollectionId, Tags = Tags)
  output <- .rekognition$create_collection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_collection <- rekognition_create_collection

#' Creates a new Amazon Rekognition Custom Labels dataset
#'
#' @description
#' Creates a new Amazon Rekognition Custom Labels dataset. You can create a dataset by using an Amazon Sagemaker format manifest file or by copying an existing Amazon Rekognition Custom Labels dataset.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_dataset/](https://www.paws-r-sdk.com/docs/rekognition_create_dataset/) for full documentation.
#'
#' @param DatasetSource The source files for the dataset. You can specify the ARN of an existing
#' dataset or specify the Amazon S3 bucket location of an Amazon Sagemaker
#' format manifest file. If you don't specify `datasetSource`, an empty
#' dataset is created. To add labeled images to the dataset, You can use
#' the console or call
#' [`update_dataset_entries`][rekognition_update_dataset_entries].
#' @param DatasetType &#91;required&#93; The type of the dataset. Specify `TRAIN` to create a training dataset.
#' Specify `TEST` to create a test dataset.
#' @param ProjectArn &#91;required&#93; The ARN of the Amazon Rekognition Custom Labels project to which you
#' want to asssign the dataset.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_dataset
rekognition_create_dataset <- function(DatasetSource = NULL, DatasetType, ProjectArn) {
  op <- new_operation(
    name = "CreateDataset",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_dataset_input(DatasetSource = DatasetSource, DatasetType = DatasetType, ProjectArn = ProjectArn)
  output <- .rekognition$create_dataset_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_dataset <- rekognition_create_dataset

#' This API operation initiates a Face Liveness session
#'
#' @description
#' This API operation initiates a Face Liveness session. It returns a `SessionId`, which you can use to start streaming Face Liveness video and get the results for a Face Liveness session.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_face_liveness_session/](https://www.paws-r-sdk.com/docs/rekognition_create_face_liveness_session/) for full documentation.
#'
#' @param KmsKeyId The identifier for your AWS Key Management Service key (AWS KMS key).
#' Used to encrypt audit images and reference images.
#' @param Settings A session settings object. It contains settings for the operation to be
#' performed. For Face Liveness, it accepts `OutputConfig` and
#' `AuditImagesLimit`.
#' @param ClientRequestToken Idempotent token is used to recognize the Face Liveness request. If the
#' same token is used with multiple
#' [`create_face_liveness_session`][rekognition_create_face_liveness_session]
#' requests, the same session is returned. This token is employed to avoid
#' unintentionally creating the same session multiple times.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_face_liveness_session
rekognition_create_face_liveness_session <- function(KmsKeyId = NULL, Settings = NULL, ClientRequestToken = NULL) {
  op <- new_operation(
    name = "CreateFaceLivenessSession",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_face_liveness_session_input(KmsKeyId = KmsKeyId, Settings = Settings, ClientRequestToken = ClientRequestToken)
  output <- .rekognition$create_face_liveness_session_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_face_liveness_session <- rekognition_create_face_liveness_session

#' Creates a new Amazon Rekognition Custom Labels project
#'
#' @description
#' Creates a new Amazon Rekognition Custom Labels project. A project is a group of resources (datasets, model versions) that you use to create and manage Amazon Rekognition Custom Labels models.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_project/](https://www.paws-r-sdk.com/docs/rekognition_create_project/) for full documentation.
#'
#' @param ProjectName &#91;required&#93; The name of the project to create.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_project
rekognition_create_project <- function(ProjectName) {
  op <- new_operation(
    name = "CreateProject",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_project_input(ProjectName = ProjectName)
  output <- .rekognition$create_project_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_project <- rekognition_create_project

#' Creates a new version of a model and begins training
#'
#' @description
#' Creates a new version of a model and begins training. Models are managed as part of an Amazon Rekognition Custom Labels project. The response from [`create_project_version`][rekognition_create_project_version] is an Amazon Resource Name (ARN) for the version of the model.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_project_version/](https://www.paws-r-sdk.com/docs/rekognition_create_project_version/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The ARN of the Amazon Rekognition Custom Labels project that manages the
#' model that you want to train.
#' @param VersionName &#91;required&#93; A name for the version of the model. This value must be unique.
#' @param OutputConfig &#91;required&#93; The Amazon S3 bucket location to store the results of training. The S3
#' bucket can be in any AWS account as long as the caller has
#' `s3:PutObject` permissions on the S3 bucket.
#' @param TrainingData Specifies an external manifest that the services uses to train the
#' model. If you specify `TrainingData` you must also specify
#' `TestingData`. The project must not have any associated datasets.
#' @param TestingData Specifies an external manifest that the service uses to test the model.
#' If you specify `TestingData` you must also specify `TrainingData`. The
#' project must not have any associated datasets.
#' @param Tags A set of tags (key-value pairs) that you want to attach to the model.
#' @param KmsKeyId The identifier for your AWS Key Management Service key (AWS KMS key).
#' You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of
#' your KMS key, an alias for your KMS key, or an alias ARN. The key is
#' used to encrypt training and test images copied into the service for
#' model training. Your source images are unaffected. The key is also used
#' to encrypt training results and manifest files written to the output
#' Amazon S3 bucket (`OutputConfig`).
#' 
#' If you choose to use your own KMS key, you need the following
#' permissions on the KMS key.
#' 
#' -   kms:CreateGrant
#' 
#' -   kms:DescribeKey
#' 
#' -   kms:GenerateDataKey
#' 
#' -   kms:Decrypt
#' 
#' If you don't specify a value for `KmsKeyId`, images copied into the
#' service are encrypted using a key that AWS owns and manages.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_project_version
rekognition_create_project_version <- function(ProjectArn, VersionName, OutputConfig, TrainingData = NULL, TestingData = NULL, Tags = NULL, KmsKeyId = NULL) {
  op <- new_operation(
    name = "CreateProjectVersion",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_project_version_input(ProjectArn = ProjectArn, VersionName = VersionName, OutputConfig = OutputConfig, TrainingData = TrainingData, TestingData = TestingData, Tags = Tags, KmsKeyId = KmsKeyId)
  output <- .rekognition$create_project_version_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_project_version <- rekognition_create_project_version

#' Creates an Amazon Rekognition stream processor that you can use to
#' detect and recognize faces or to detect labels in a streaming video
#'
#' @description
#' Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces or to detect labels in a streaming video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_create_stream_processor/) for full documentation.
#'
#' @param Input &#91;required&#93; Kinesis video stream stream that provides the source streaming video. If
#' you are using the AWS CLI, the parameter name is `StreamProcessorInput`.
#' This is required for both face search and label detection stream
#' processors.
#' @param Output &#91;required&#93; Kinesis data stream stream or Amazon S3 bucket location to which Amazon
#' Rekognition Video puts the analysis results. If you are using the AWS
#' CLI, the parameter name is `StreamProcessorOutput`. This must be a
#' S3Destination of an Amazon S3 bucket that you own for a label detection
#' stream processor or a Kinesis data stream ARN for a face search stream
#' processor.
#' @param Name &#91;required&#93; An identifier you assign to the stream processor. You can use `Name` to
#' manage the stream processor. For example, you can get the current status
#' of the stream processor by calling
#' [`describe_stream_processor`][rekognition_describe_stream_processor].
#' `Name` is idempotent. This is required for both face search and label
#' detection stream processors.
#' @param Settings &#91;required&#93; Input parameters used in a streaming video analyzed by a stream
#' processor. You can use `FaceSearch` to recognize faces in a streaming
#' video, or you can use `ConnectedHome` to detect labels.
#' @param RoleArn &#91;required&#93; The Amazon Resource Number (ARN) of the IAM role that allows access to
#' the stream processor. The IAM role provides Rekognition read permissions
#' for a Kinesis stream. It also provides write permissions to an Amazon S3
#' bucket and Amazon Simple Notification Service topic for a label
#' detection stream processor. This is required for both face search and
#' label detection stream processors.
#' @param Tags A set of tags (key-value pairs) that you want to attach to the stream
#' processor.
#' @param NotificationChannel 
#' @param KmsKeyId The identifier for your AWS Key Management Service key (AWS KMS key).
#' This is an optional parameter for label detection stream processors and
#' should not be used to create a face search stream processor. You can
#' supply the Amazon Resource Name (ARN) of your KMS key, the ID of your
#' KMS key, an alias for your KMS key, or an alias ARN. The key is used to
#' encrypt results and data published to your Amazon S3 bucket, which
#' includes image frames and hero images. Your source images are
#' unaffected.
#' @param RegionsOfInterest Specifies locations in the frames where Amazon Rekognition checks for
#' objects or people. You can specify up to 10 regions of interest, and
#' each region has either a polygon or a bounding box. This is an optional
#' parameter for label detection stream processors and should not be used
#' to create a face search stream processor.
#' @param DataSharingPreference Shows whether you are sharing data with Rekognition to improve model
#' performance. You can choose this option at the account level or on a
#' per-stream basis. Note that if you opt out at the account level this
#' setting is ignored on individual streams.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_stream_processor
rekognition_create_stream_processor <- function(Input, Output, Name, Settings, RoleArn, Tags = NULL, NotificationChannel = NULL, KmsKeyId = NULL, RegionsOfInterest = NULL, DataSharingPreference = NULL) {
  op <- new_operation(
    name = "CreateStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_stream_processor_input(Input = Input, Output = Output, Name = Name, Settings = Settings, RoleArn = RoleArn, Tags = Tags, NotificationChannel = NotificationChannel, KmsKeyId = KmsKeyId, RegionsOfInterest = RegionsOfInterest, DataSharingPreference = DataSharingPreference)
  output <- .rekognition$create_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_stream_processor <- rekognition_create_stream_processor

#' Creates a new User within a collection specified by CollectionId
#'
#' @description
#' Creates a new User within a collection specified by `CollectionId`. Takes `UserId` as a parameter, which is a user provided ID which should be unique within the collection. The provided `UserId` will alias the system generated UUID to make the `UserId` more user friendly.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_create_user/](https://www.paws-r-sdk.com/docs/rekognition_create_user/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection to which the new UserID needs to be
#' created.
#' @param UserId &#91;required&#93; ID for the UserID to be created. This ID needs to be unique within the
#' collection.
#' @param ClientRequestToken Idempotent token used to identify the request to
#' [`create_user`][rekognition_create_user]. If you use the same token with
#' multiple [`create_user`][rekognition_create_user] requests, the same
#' response is returned. Use ClientRequestToken to prevent the same request
#' from being processed more than once.
#'
#' @keywords internal
#'
#' @rdname rekognition_create_user
rekognition_create_user <- function(CollectionId, UserId, ClientRequestToken = NULL) {
  op <- new_operation(
    name = "CreateUser",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$create_user_input(CollectionId = CollectionId, UserId = UserId, ClientRequestToken = ClientRequestToken)
  output <- .rekognition$create_user_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$create_user <- rekognition_create_user

#' Deletes the specified collection
#'
#' @description
#' Deletes the specified collection. Note that this operation removes all faces in the collection. For an example, see [Deleting a collection](https://docs.aws.amazon.com/rekognition/latest/dg/delete-collection-procedure.html).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_collection/](https://www.paws-r-sdk.com/docs/rekognition_delete_collection/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; ID of the collection to delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_collection
rekognition_delete_collection <- function(CollectionId) {
  op <- new_operation(
    name = "DeleteCollection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_collection_input(CollectionId = CollectionId)
  output <- .rekognition$delete_collection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_collection <- rekognition_delete_collection

#' Deletes an existing Amazon Rekognition Custom Labels dataset
#'
#' @description
#' Deletes an existing Amazon Rekognition Custom Labels dataset. Deleting a dataset might take while. Use [`describe_dataset`][rekognition_describe_dataset] to check the current status. The dataset is still deleting if the value of `Status` is `DELETE_IN_PROGRESS`. If you try to access the dataset after it is deleted, you get a `ResourceNotFoundException` exception.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_dataset/](https://www.paws-r-sdk.com/docs/rekognition_delete_dataset/) for full documentation.
#'
#' @param DatasetArn &#91;required&#93; The ARN of the Amazon Rekognition Custom Labels dataset that you want to
#' delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_dataset
rekognition_delete_dataset <- function(DatasetArn) {
  op <- new_operation(
    name = "DeleteDataset",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_dataset_input(DatasetArn = DatasetArn)
  output <- .rekognition$delete_dataset_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_dataset <- rekognition_delete_dataset

#' Deletes faces from a collection
#'
#' @description
#' Deletes faces from a collection. You specify a collection ID and an array of face IDs to remove from the collection.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_faces/](https://www.paws-r-sdk.com/docs/rekognition_delete_faces/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; Collection from which to remove the specific faces.
#' @param FaceIds &#91;required&#93; An array of face IDs to delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_faces
rekognition_delete_faces <- function(CollectionId, FaceIds) {
  op <- new_operation(
    name = "DeleteFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_faces_input(CollectionId = CollectionId, FaceIds = FaceIds)
  output <- .rekognition$delete_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_faces <- rekognition_delete_faces

#' Deletes an Amazon Rekognition Custom Labels project
#'
#' @description
#' Deletes an Amazon Rekognition Custom Labels project. To delete a project you must first delete all models associated with the project. To delete a model, see [`delete_project_version`][rekognition_delete_project_version].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_project/](https://www.paws-r-sdk.com/docs/rekognition_delete_project/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The Amazon Resource Name (ARN) of the project that you want to delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_project
rekognition_delete_project <- function(ProjectArn) {
  op <- new_operation(
    name = "DeleteProject",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_project_input(ProjectArn = ProjectArn)
  output <- .rekognition$delete_project_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_project <- rekognition_delete_project

#' Deletes an existing project policy
#'
#' @description
#' Deletes an existing project policy.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_project_policy/](https://www.paws-r-sdk.com/docs/rekognition_delete_project_policy/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The Amazon Resource Name (ARN) of the project that the project policy
#' you want to delete is attached to.
#' @param PolicyName &#91;required&#93; The name of the policy that you want to delete.
#' @param PolicyRevisionId The ID of the project policy revision that you want to delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_project_policy
rekognition_delete_project_policy <- function(ProjectArn, PolicyName, PolicyRevisionId = NULL) {
  op <- new_operation(
    name = "DeleteProjectPolicy",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_project_policy_input(ProjectArn = ProjectArn, PolicyName = PolicyName, PolicyRevisionId = PolicyRevisionId)
  output <- .rekognition$delete_project_policy_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_project_policy <- rekognition_delete_project_policy

#' Deletes an Amazon Rekognition Custom Labels model
#'
#' @description
#' Deletes an Amazon Rekognition Custom Labels model.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_project_version/](https://www.paws-r-sdk.com/docs/rekognition_delete_project_version/) for full documentation.
#'
#' @param ProjectVersionArn &#91;required&#93; The Amazon Resource Name (ARN) of the model version that you want to
#' delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_project_version
rekognition_delete_project_version <- function(ProjectVersionArn) {
  op <- new_operation(
    name = "DeleteProjectVersion",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_project_version_input(ProjectVersionArn = ProjectVersionArn)
  output <- .rekognition$delete_project_version_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_project_version <- rekognition_delete_project_version

#' Deletes the stream processor identified by Name
#'
#' @description
#' Deletes the stream processor identified by `Name`. You assign the value for `Name` when you create the stream processor with [`create_stream_processor`][rekognition_create_stream_processor]. You might not be able to use the same name for a stream processor for a few seconds after calling [`delete_stream_processor`][rekognition_delete_stream_processor].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_delete_stream_processor/) for full documentation.
#'
#' @param Name &#91;required&#93; The name of the stream processor you want to delete.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_stream_processor
rekognition_delete_stream_processor <- function(Name) {
  op <- new_operation(
    name = "DeleteStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_stream_processor_input(Name = Name)
  output <- .rekognition$delete_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_stream_processor <- rekognition_delete_stream_processor

#' Deletes the specified UserID within the collection
#'
#' @description
#' Deletes the specified UserID within the collection. Faces that are associated with the UserID are disassociated from the UserID before deleting the specified UserID. If the specified `Collection` or `UserID` is already deleted or not found, a `ResourceNotFoundException` will be thrown. If the action is successful with a 200 response, an empty HTTP body is returned.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_delete_user/](https://www.paws-r-sdk.com/docs/rekognition_delete_user/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection from which the UserID needs to be
#' deleted.
#' @param UserId &#91;required&#93; ID for the UserID to be deleted.
#' @param ClientRequestToken Idempotent token used to identify the request to
#' [`delete_user`][rekognition_delete_user]. If you use the same token with
#' multiple [`delete_user`][rekognition_delete_user]requests, the same
#' response is returned. Use ClientRequestToken to prevent the same request
#' from being processed more than once.
#'
#' @keywords internal
#'
#' @rdname rekognition_delete_user
rekognition_delete_user <- function(CollectionId, UserId, ClientRequestToken = NULL) {
  op <- new_operation(
    name = "DeleteUser",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$delete_user_input(CollectionId = CollectionId, UserId = UserId, ClientRequestToken = ClientRequestToken)
  output <- .rekognition$delete_user_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$delete_user <- rekognition_delete_user

#' Describes the specified collection
#'
#' @description
#' Describes the specified collection. You can use [`describe_collection`][rekognition_describe_collection] to get information, such as the number of faces indexed into a collection and the version of the model used by the collection for face detection.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_describe_collection/](https://www.paws-r-sdk.com/docs/rekognition_describe_collection/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of the collection to describe.
#'
#' @keywords internal
#'
#' @rdname rekognition_describe_collection
rekognition_describe_collection <- function(CollectionId) {
  op <- new_operation(
    name = "DescribeCollection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$describe_collection_input(CollectionId = CollectionId)
  output <- .rekognition$describe_collection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$describe_collection <- rekognition_describe_collection

#' Describes an Amazon Rekognition Custom Labels dataset
#'
#' @description
#' Describes an Amazon Rekognition Custom Labels dataset. You can get information such as the current status of a dataset and statistics about the images and labels in a dataset.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_describe_dataset/](https://www.paws-r-sdk.com/docs/rekognition_describe_dataset/) for full documentation.
#'
#' @param DatasetArn &#91;required&#93; The Amazon Resource Name (ARN) of the dataset that you want to describe.
#'
#' @keywords internal
#'
#' @rdname rekognition_describe_dataset
rekognition_describe_dataset <- function(DatasetArn) {
  op <- new_operation(
    name = "DescribeDataset",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$describe_dataset_input(DatasetArn = DatasetArn)
  output <- .rekognition$describe_dataset_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$describe_dataset <- rekognition_describe_dataset

#' Lists and describes the versions of a model in an Amazon Rekognition
#' Custom Labels project
#'
#' @description
#' Lists and describes the versions of a model in an Amazon Rekognition Custom Labels project. You can specify up to 10 model versions in `ProjectVersionArns`. If you don't specify a value, descriptions for all model versions in the project are returned.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_describe_project_versions/](https://www.paws-r-sdk.com/docs/rekognition_describe_project_versions/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The Amazon Resource Name (ARN) of the project that contains the models
#' you want to describe.
#' @param VersionNames A list of model version names that you want to describe. You can add up
#' to 10 model version names to the list. If you don't specify a value, all
#' model descriptions are returned. A version name is part of a model
#' (ProjectVersion) ARN. For example, `my-model.2020-01-21T09.10.15` is the
#' version name in the following ARN.
#' `arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123`.
#' @param NextToken If the previous response was incomplete (because there is more results
#' to retrieve), Amazon Rekognition Custom Labels returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of results.
#' @param MaxResults The maximum number of results to return per paginated call. The largest
#' value you can specify is 100. If you specify a value greater than 100, a
#' ValidationException error occurs. The default value is 100.
#'
#' @keywords internal
#'
#' @rdname rekognition_describe_project_versions
rekognition_describe_project_versions <- function(ProjectArn, VersionNames = NULL, NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "DescribeProjectVersions",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "ProjectVersionDescriptions")
  )
  input <- .rekognition$describe_project_versions_input(ProjectArn = ProjectArn, VersionNames = VersionNames, NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$describe_project_versions_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$describe_project_versions <- rekognition_describe_project_versions

#' Gets information about your Amazon Rekognition Custom Labels projects
#'
#' @description
#' Gets information about your Amazon Rekognition Custom Labels projects.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_describe_projects/](https://www.paws-r-sdk.com/docs/rekognition_describe_projects/) for full documentation.
#'
#' @param NextToken If the previous response was incomplete (because there is more results
#' to retrieve), Amazon Rekognition Custom Labels returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of results.
#' @param MaxResults The maximum number of results to return per paginated call. The largest
#' value you can specify is 100. If you specify a value greater than 100, a
#' ValidationException error occurs. The default value is 100.
#' @param ProjectNames A list of the projects that you want Amazon Rekognition Custom Labels to
#' describe. If you don't specify a value, the response includes
#' descriptions for all the projects in your AWS account.
#'
#' @keywords internal
#'
#' @rdname rekognition_describe_projects
rekognition_describe_projects <- function(NextToken = NULL, MaxResults = NULL, ProjectNames = NULL) {
  op <- new_operation(
    name = "DescribeProjects",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "ProjectDescriptions")
  )
  input <- .rekognition$describe_projects_input(NextToken = NextToken, MaxResults = MaxResults, ProjectNames = ProjectNames)
  output <- .rekognition$describe_projects_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$describe_projects <- rekognition_describe_projects

#' Provides information about a stream processor created by
#' CreateStreamProcessor
#'
#' @description
#' Provides information about a stream processor created by [`create_stream_processor`][rekognition_create_stream_processor]. You can get information about the input and output streams, the input parameters for the face recognition being performed, and the current status of the stream processor.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_describe_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_describe_stream_processor/) for full documentation.
#'
#' @param Name &#91;required&#93; Name of the stream processor for which you want information.
#'
#' @keywords internal
#'
#' @rdname rekognition_describe_stream_processor
rekognition_describe_stream_processor <- function(Name) {
  op <- new_operation(
    name = "DescribeStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$describe_stream_processor_input(Name = Name)
  output <- .rekognition$describe_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$describe_stream_processor <- rekognition_describe_stream_processor

#' Detects custom labels in a supplied image by using an Amazon Rekognition
#' Custom Labels model
#'
#' @description
#' Detects custom labels in a supplied image by using an Amazon Rekognition Custom Labels model.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_custom_labels/](https://www.paws-r-sdk.com/docs/rekognition_detect_custom_labels/) for full documentation.
#'
#' @param ProjectVersionArn &#91;required&#93; The ARN of the model version that you want to use.
#' @param Image &#91;required&#93; 
#' @param MaxResults Maximum number of results you want the service to return in the
#' response. The service returns the specified number of highest confidence
#' labels ranked from highest confidence to lowest.
#' @param MinConfidence Specifies the minimum confidence level for the labels to return.
#' [`detect_custom_labels`][rekognition_detect_custom_labels] doesn't
#' return any labels with a confidence value that's lower than this
#' specified value. If you specify a value of 0,
#' [`detect_custom_labels`][rekognition_detect_custom_labels] returns all
#' labels, regardless of the assumed threshold applied to each label. If
#' you don't specify a value for `MinConfidence`,
#' [`detect_custom_labels`][rekognition_detect_custom_labels] returns
#' labels based on the assumed threshold of each label.
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_custom_labels
rekognition_detect_custom_labels <- function(ProjectVersionArn, Image, MaxResults = NULL, MinConfidence = NULL) {
  op <- new_operation(
    name = "DetectCustomLabels",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_custom_labels_input(ProjectVersionArn = ProjectVersionArn, Image = Image, MaxResults = MaxResults, MinConfidence = MinConfidence)
  output <- .rekognition$detect_custom_labels_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_custom_labels <- rekognition_detect_custom_labels

#' Detects faces within an image that is provided as input
#'
#' @description
#' Detects faces within an image that is provided as input.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_faces/](https://www.paws-r-sdk.com/docs/rekognition_detect_faces/) for full documentation.
#'
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param Attributes An array of facial attributes you want to be returned. A `DEFAULT`
#' subset of facial attributes - `BoundingBox`, `Confidence`, `Pose`,
#' `Quality`, and `Landmarks` - will always be returned. You can request
#' for specific facial attributes (in addition to the default list) - by
#' using \[`"DEFAULT", "FACE_OCCLUDED"`\] or just \[`"FACE_OCCLUDED"`\].
#' You can request for all facial attributes by using \[`"ALL"]`.
#' Requesting more attributes may increase response time.
#' 
#' If you provide both, `["ALL", "DEFAULT"]`, the service uses a logical
#' "AND" operator to determine which attributes to return (in this case,
#' all attributes).
#' 
#' Note that while the FaceOccluded and EyeDirection attributes are
#' supported when using [`detect_faces`][rekognition_detect_faces], they
#' aren't supported when analyzing videos with
#' [`start_face_detection`][rekognition_start_face_detection] and
#' [`get_face_detection`][rekognition_get_face_detection].
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_faces
rekognition_detect_faces <- function(Image, Attributes = NULL) {
  op <- new_operation(
    name = "DetectFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_faces_input(Image = Image, Attributes = Attributes)
  output <- .rekognition$detect_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_faces <- rekognition_detect_faces

#' Detects instances of real-world entities within an image (JPEG or PNG)
#' provided as input
#'
#' @description
#' Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_labels/](https://www.paws-r-sdk.com/docs/rekognition_detect_labels/) for full documentation.
#'
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing image bytes is
#' not supported. Images stored in an S3 Bucket do not need to be
#' base64-encoded.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param MaxLabels Maximum number of labels you want the service to return in the response.
#' The service returns the specified number of highest confidence labels.
#' Only valid when GENERAL_LABELS is specified as a feature type in the
#' Feature input parameter.
#' @param MinConfidence Specifies the minimum confidence level for the labels to return. Amazon
#' Rekognition doesn't return any labels with confidence lower than this
#' specified value.
#' 
#' If `MinConfidence` is not specified, the operation returns labels with a
#' confidence values greater than or equal to 55 percent. Only valid when
#' GENERAL_LABELS is specified as a feature type in the Feature input
#' parameter.
#' @param Features A list of the types of analysis to perform. Specifying GENERAL_LABELS
#' uses the label detection feature, while specifying IMAGE_PROPERTIES
#' returns information regarding image color and quality. If no option is
#' specified GENERAL_LABELS is used by default.
#' @param Settings A list of the filters to be applied to returned detected labels and
#' image properties. Specified filters can be inclusive, exclusive, or a
#' combination of both. Filters can be used for individual labels or label
#' categories. The exact label names or label categories must be supplied.
#' For a full list of labels and label categories, see [Detecting
#' labels](https://docs.aws.amazon.com/rekognition/latest/dg/labels.html).
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_labels
rekognition_detect_labels <- function(Image, MaxLabels = NULL, MinConfidence = NULL, Features = NULL, Settings = NULL) {
  op <- new_operation(
    name = "DetectLabels",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_labels_input(Image = Image, MaxLabels = MaxLabels, MinConfidence = MinConfidence, Features = Features, Settings = Settings)
  output <- .rekognition$detect_labels_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_labels <- rekognition_detect_labels

#' Detects unsafe content in a specified JPEG or PNG format image
#'
#' @description
#' Detects unsafe content in a specified JPEG or PNG format image. Use [`detect_moderation_labels`][rekognition_detect_moderation_labels] to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_moderation_labels/](https://www.paws-r-sdk.com/docs/rekognition_detect_moderation_labels/) for full documentation.
#'
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param MinConfidence Specifies the minimum confidence level for the labels to return. Amazon
#' Rekognition doesn't return any labels with a confidence level lower than
#' this specified value.
#' 
#' If you don't specify `MinConfidence`, the operation returns labels with
#' confidence values greater than or equal to 50 percent.
#' @param HumanLoopConfig Sets up the configuration for human evaluation, including the
#' FlowDefinition the image will be sent to.
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_moderation_labels
rekognition_detect_moderation_labels <- function(Image, MinConfidence = NULL, HumanLoopConfig = NULL) {
  op <- new_operation(
    name = "DetectModerationLabels",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_moderation_labels_input(Image = Image, MinConfidence = MinConfidence, HumanLoopConfig = HumanLoopConfig)
  output <- .rekognition$detect_moderation_labels_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_moderation_labels <- rekognition_detect_moderation_labels

#' Detects Personal Protective Equipment (PPE) worn by people detected in
#' an image
#'
#' @description
#' Detects Personal Protective Equipment (PPE) worn by people detected in an image. Amazon Rekognition can detect the following types of PPE.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_protective_equipment/](https://www.paws-r-sdk.com/docs/rekognition_detect_protective_equipment/) for full documentation.
#'
#' @param Image &#91;required&#93; The image in which you want to detect PPE on detected persons. The image
#' can be passed as image bytes or you can reference an image stored in an
#' Amazon S3 bucket.
#' @param SummarizationAttributes An array of PPE types that you want to summarize.
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_protective_equipment
rekognition_detect_protective_equipment <- function(Image, SummarizationAttributes = NULL) {
  op <- new_operation(
    name = "DetectProtectiveEquipment",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_protective_equipment_input(Image = Image, SummarizationAttributes = SummarizationAttributes)
  output <- .rekognition$detect_protective_equipment_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_protective_equipment <- rekognition_detect_protective_equipment

#' Detects text in the input image and converts it into machine-readable
#' text
#'
#' @description
#' Detects text in the input image and converts it into machine-readable text.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_detect_text/](https://www.paws-r-sdk.com/docs/rekognition_detect_text/) for full documentation.
#'
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an Amazon S3 object. If you
#' use the AWS CLI to call Amazon Rekognition operations, you can't pass
#' image bytes.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param Filters Optional parameters that let you set the criteria that the text must
#' meet to be included in your response.
#'
#' @keywords internal
#'
#' @rdname rekognition_detect_text
rekognition_detect_text <- function(Image, Filters = NULL) {
  op <- new_operation(
    name = "DetectText",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$detect_text_input(Image = Image, Filters = Filters)
  output <- .rekognition$detect_text_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$detect_text <- rekognition_detect_text

#' Removes the association between a Face supplied in an array of FaceIds
#' and the User
#'
#' @description
#' Removes the association between a `Face` supplied in an array of `FaceIds` and the User. If the User is not present already, then a `ResourceNotFound` exception is thrown. If successful, an array of faces that are disassociated from the User is returned. If a given face is already disassociated from the given UserID, it will be ignored and not be returned in the response. If a given face is already associated with a different User or not found in the collection it will be returned as part of `UnsuccessfulDisassociations`. You can remove 1 - 100 face IDs from a user at one time.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_disassociate_faces/](https://www.paws-r-sdk.com/docs/rekognition_disassociate_faces/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection containing the UserID.
#' @param UserId &#91;required&#93; ID for the existing UserID.
#' @param ClientRequestToken Idempotent token used to identify the request to
#' [`disassociate_faces`][rekognition_disassociate_faces]. If you use the
#' same token with multiple
#' [`disassociate_faces`][rekognition_disassociate_faces] requests, the
#' same response is returned. Use ClientRequestToken to prevent the same
#' request from being processed more than once.
#' @param FaceIds &#91;required&#93; An array of face IDs to disassociate from the UserID.
#'
#' @keywords internal
#'
#' @rdname rekognition_disassociate_faces
rekognition_disassociate_faces <- function(CollectionId, UserId, ClientRequestToken = NULL, FaceIds) {
  op <- new_operation(
    name = "DisassociateFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$disassociate_faces_input(CollectionId = CollectionId, UserId = UserId, ClientRequestToken = ClientRequestToken, FaceIds = FaceIds)
  output <- .rekognition$disassociate_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$disassociate_faces <- rekognition_disassociate_faces

#' Distributes the entries (images) in a training dataset across the
#' training dataset and the test dataset for a project
#'
#' @description
#' Distributes the entries (images) in a training dataset across the training dataset and the test dataset for a project. [`distribute_dataset_entries`][rekognition_distribute_dataset_entries] moves 20% of the training dataset images to the test dataset. An entry is a JSON Line that describes an image.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_distribute_dataset_entries/](https://www.paws-r-sdk.com/docs/rekognition_distribute_dataset_entries/) for full documentation.
#'
#' @param Datasets &#91;required&#93; The ARNS for the training dataset and test dataset that you want to use.
#' The datasets must belong to the same project. The test dataset must be
#' empty.
#'
#' @keywords internal
#'
#' @rdname rekognition_distribute_dataset_entries
rekognition_distribute_dataset_entries <- function(Datasets) {
  op <- new_operation(
    name = "DistributeDatasetEntries",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$distribute_dataset_entries_input(Datasets = Datasets)
  output <- .rekognition$distribute_dataset_entries_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$distribute_dataset_entries <- rekognition_distribute_dataset_entries

#' Gets the name and additional information about a celebrity based on
#' their Amazon Rekognition ID
#'
#' @description
#' Gets the name and additional information about a celebrity based on their Amazon Rekognition ID. The additional information is returned as an array of URLs. If there is no additional information about the celebrity, this list is empty.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_celebrity_info/](https://www.paws-r-sdk.com/docs/rekognition_get_celebrity_info/) for full documentation.
#'
#' @param Id &#91;required&#93; The ID for the celebrity. You get the celebrity ID from a call to the
#' [`recognize_celebrities`][rekognition_recognize_celebrities] operation,
#' which recognizes celebrities in an image.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_celebrity_info
rekognition_get_celebrity_info <- function(Id) {
  op <- new_operation(
    name = "GetCelebrityInfo",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$get_celebrity_info_input(Id = Id)
  output <- .rekognition$get_celebrity_info_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_celebrity_info <- rekognition_get_celebrity_info

#' Gets the celebrity recognition results for a Amazon Rekognition Video
#' analysis started by StartCelebrityRecognition
#'
#' @description
#' Gets the celebrity recognition results for a Amazon Rekognition Video analysis started by [`start_celebrity_recognition`][rekognition_start_celebrity_recognition].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_celebrity_recognition/](https://www.paws-r-sdk.com/docs/rekognition_get_celebrity_recognition/) for full documentation.
#'
#' @param JobId &#91;required&#93; Job identifier for the required celebrity recognition analysis. You can
#' get the job identifer from a call to
#' [`start_celebrity_recognition`][rekognition_start_celebrity_recognition].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there is more
#' recognized celebrities to retrieve), Amazon Rekognition Video returns a
#' pagination token in the response. You can use this pagination token to
#' retrieve the next set of celebrities.
#' @param SortBy Sort to use for celebrities returned in `Celebrities` field. Specify
#' `ID` to sort by the celebrity identifier, specify `TIMESTAMP` to sort by
#' the time the celebrity was recognized.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_celebrity_recognition
rekognition_get_celebrity_recognition <- function(JobId, MaxResults = NULL, NextToken = NULL, SortBy = NULL) {
  op <- new_operation(
    name = "GetCelebrityRecognition",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_celebrity_recognition_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken, SortBy = SortBy)
  output <- .rekognition$get_celebrity_recognition_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_celebrity_recognition <- rekognition_get_celebrity_recognition

#' Gets the inappropriate, unwanted, or offensive content analysis results
#' for a Amazon Rekognition Video analysis started by
#' StartContentModeration
#'
#' @description
#' Gets the inappropriate, unwanted, or offensive content analysis results for a Amazon Rekognition Video analysis started by [`start_content_moderation`][rekognition_start_content_moderation]. For a list of moderation labels in Amazon Rekognition, see [Using the image and video moderation APIs](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_content_moderation/](https://www.paws-r-sdk.com/docs/rekognition_get_content_moderation/) for full documentation.
#'
#' @param JobId &#91;required&#93; The identifier for the inappropriate, unwanted, or offensive content
#' moderation job. Use `JobId` to identify the job in a subsequent call to
#' [`get_content_moderation`][rekognition_get_content_moderation].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there is more data to
#' retrieve), Amazon Rekognition returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' content moderation labels.
#' @param SortBy Sort to use for elements in the `ModerationLabelDetections` array. Use
#' `TIMESTAMP` to sort array elements by the time labels are detected. Use
#' `NAME` to alphabetically group elements for a label together. Within
#' each label group, the array element are sorted by detection confidence.
#' The default sort is by `TIMESTAMP`.
#' @param AggregateBy Defines how to aggregate results of the StartContentModeration request.
#' Default aggregation option is TIMESTAMPS. SEGMENTS mode aggregates
#' moderation labels over time.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_content_moderation
rekognition_get_content_moderation <- function(JobId, MaxResults = NULL, NextToken = NULL, SortBy = NULL, AggregateBy = NULL) {
  op <- new_operation(
    name = "GetContentModeration",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_content_moderation_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken, SortBy = SortBy, AggregateBy = AggregateBy)
  output <- .rekognition$get_content_moderation_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_content_moderation <- rekognition_get_content_moderation

#' Gets face detection results for a Amazon Rekognition Video analysis
#' started by StartFaceDetection
#'
#' @description
#' Gets face detection results for a Amazon Rekognition Video analysis started by [`start_face_detection`][rekognition_start_face_detection].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_face_detection/](https://www.paws-r-sdk.com/docs/rekognition_get_face_detection/) for full documentation.
#'
#' @param JobId &#91;required&#93; Unique identifier for the face detection job. The `JobId` is returned
#' from [`start_face_detection`][rekognition_start_face_detection].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there are more faces to
#' retrieve), Amazon Rekognition Video returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' faces.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_face_detection
rekognition_get_face_detection <- function(JobId, MaxResults = NULL, NextToken = NULL) {
  op <- new_operation(
    name = "GetFaceDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_face_detection_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken)
  output <- .rekognition$get_face_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_face_detection <- rekognition_get_face_detection

#' Retrieves the results of a specific Face Liveness session
#'
#' @description
#' Retrieves the results of a specific Face Liveness session. It requires the `sessionId` as input, which was created using [`create_face_liveness_session`][rekognition_create_face_liveness_session]. Returns the corresponding Face Liveness confidence score, a reference image that includes a face bounding box, and audit images that also contain face bounding boxes. The Face Liveness confidence score ranges from 0 to 100.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_face_liveness_session_results/](https://www.paws-r-sdk.com/docs/rekognition_get_face_liveness_session_results/) for full documentation.
#'
#' @param SessionId &#91;required&#93; A unique 128-bit UUID. This is used to uniquely identify the session and
#' also acts as an idempotency token for all operations associated with the
#' session.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_face_liveness_session_results
rekognition_get_face_liveness_session_results <- function(SessionId) {
  op <- new_operation(
    name = "GetFaceLivenessSessionResults",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$get_face_liveness_session_results_input(SessionId = SessionId)
  output <- .rekognition$get_face_liveness_session_results_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_face_liveness_session_results <- rekognition_get_face_liveness_session_results

#' Gets the face search results for Amazon Rekognition Video face search
#' started by StartFaceSearch
#'
#' @description
#' Gets the face search results for Amazon Rekognition Video face search started by [`start_face_search`][rekognition_start_face_search]. The search returns faces in a collection that match the faces of persons detected in a video. It also includes the time(s) that faces are matched in the video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_face_search/](https://www.paws-r-sdk.com/docs/rekognition_get_face_search/) for full documentation.
#'
#' @param JobId &#91;required&#93; The job identifer for the search request. You get the job identifier
#' from an initial call to
#' [`start_face_search`][rekognition_start_face_search].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there is more search
#' results to retrieve), Amazon Rekognition Video returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of search results.
#' @param SortBy Sort to use for grouping faces in the response. Use `TIMESTAMP` to group
#' faces by the time that they are recognized. Use `INDEX` to sort by
#' recognized faces.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_face_search
rekognition_get_face_search <- function(JobId, MaxResults = NULL, NextToken = NULL, SortBy = NULL) {
  op <- new_operation(
    name = "GetFaceSearch",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_face_search_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken, SortBy = SortBy)
  output <- .rekognition$get_face_search_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_face_search <- rekognition_get_face_search

#' Gets the label detection results of a Amazon Rekognition Video analysis
#' started by StartLabelDetection
#'
#' @description
#' Gets the label detection results of a Amazon Rekognition Video analysis started by [`start_label_detection`][rekognition_start_label_detection].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_label_detection/](https://www.paws-r-sdk.com/docs/rekognition_get_label_detection/) for full documentation.
#'
#' @param JobId &#91;required&#93; Job identifier for the label detection operation for which you want
#' results returned. You get the job identifer from an initial call to
#' `StartlabelDetection`.
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there are more labels
#' to retrieve), Amazon Rekognition Video returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' labels.
#' @param SortBy Sort to use for elements in the `Labels` array. Use `TIMESTAMP` to sort
#' array elements by the time labels are detected. Use `NAME` to
#' alphabetically group elements for a label together. Within each label
#' group, the array element are sorted by detection confidence. The default
#' sort is by `TIMESTAMP`.
#' @param AggregateBy Defines how to aggregate the returned results. Results can be aggregated
#' by timestamps or segments.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_label_detection
rekognition_get_label_detection <- function(JobId, MaxResults = NULL, NextToken = NULL, SortBy = NULL, AggregateBy = NULL) {
  op <- new_operation(
    name = "GetLabelDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_label_detection_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken, SortBy = SortBy, AggregateBy = AggregateBy)
  output <- .rekognition$get_label_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_label_detection <- rekognition_get_label_detection

#' Gets the path tracking results of a Amazon Rekognition Video analysis
#' started by StartPersonTracking
#'
#' @description
#' Gets the path tracking results of a Amazon Rekognition Video analysis started by [`start_person_tracking`][rekognition_start_person_tracking].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_person_tracking/](https://www.paws-r-sdk.com/docs/rekognition_get_person_tracking/) for full documentation.
#'
#' @param JobId &#91;required&#93; The identifier for a job that tracks persons in a video. You get the
#' `JobId` from a call to
#' [`start_person_tracking`][rekognition_start_person_tracking].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000. If you specify a value greater than 1000,
#' a maximum of 1000 results is returned. The default value is 1000.
#' @param NextToken If the previous response was incomplete (because there are more persons
#' to retrieve), Amazon Rekognition Video returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' persons.
#' @param SortBy Sort to use for elements in the `Persons` array. Use `TIMESTAMP` to sort
#' array elements by the time persons are detected. Use `INDEX` to sort by
#' the tracked persons. If you sort by `INDEX`, the array elements for each
#' person are sorted by detection confidence. The default sort is by
#' `TIMESTAMP`.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_person_tracking
rekognition_get_person_tracking <- function(JobId, MaxResults = NULL, NextToken = NULL, SortBy = NULL) {
  op <- new_operation(
    name = "GetPersonTracking",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_person_tracking_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken, SortBy = SortBy)
  output <- .rekognition$get_person_tracking_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_person_tracking <- rekognition_get_person_tracking

#' Gets the segment detection results of a Amazon Rekognition Video
#' analysis started by StartSegmentDetection
#'
#' @description
#' Gets the segment detection results of a Amazon Rekognition Video analysis started by [`start_segment_detection`][rekognition_start_segment_detection].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_segment_detection/](https://www.paws-r-sdk.com/docs/rekognition_get_segment_detection/) for full documentation.
#'
#' @param JobId &#91;required&#93; Job identifier for the text detection operation for which you want
#' results returned. You get the job identifer from an initial call to
#' [`start_segment_detection`][rekognition_start_segment_detection].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000.
#' @param NextToken If the response is truncated, Amazon Rekognition Video returns this
#' token that you can use in the subsequent request to retrieve the next
#' set of text.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_segment_detection
rekognition_get_segment_detection <- function(JobId, MaxResults = NULL, NextToken = NULL) {
  op <- new_operation(
    name = "GetSegmentDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_segment_detection_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken)
  output <- .rekognition$get_segment_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_segment_detection <- rekognition_get_segment_detection

#' Gets the text detection results of a Amazon Rekognition Video analysis
#' started by StartTextDetection
#'
#' @description
#' Gets the text detection results of a Amazon Rekognition Video analysis started by [`start_text_detection`][rekognition_start_text_detection].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_get_text_detection/](https://www.paws-r-sdk.com/docs/rekognition_get_text_detection/) for full documentation.
#'
#' @param JobId &#91;required&#93; Job identifier for the text detection operation for which you want
#' results returned. You get the job identifer from an initial call to
#' [`start_text_detection`][rekognition_start_text_detection].
#' @param MaxResults Maximum number of results to return per paginated call. The largest
#' value you can specify is 1000.
#' @param NextToken If the previous response was incomplete (because there are more labels
#' to retrieve), Amazon Rekognition Video returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' text.
#'
#' @keywords internal
#'
#' @rdname rekognition_get_text_detection
rekognition_get_text_detection <- function(JobId, MaxResults = NULL, NextToken = NULL) {
  op <- new_operation(
    name = "GetTextDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$get_text_detection_input(JobId = JobId, MaxResults = MaxResults, NextToken = NextToken)
  output <- .rekognition$get_text_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$get_text_detection <- rekognition_get_text_detection

#' Detects faces in the input image and adds them to the specified
#' collection
#'
#' @description
#' Detects faces in the input image and adds them to the specified collection.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_index_faces/](https://www.paws-r-sdk.com/docs/rekognition_index_faces/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection to which you want to add the faces that
#' are detected in the input images.
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes isn't supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param ExternalImageId The ID you want to assign to all the faces detected in the image.
#' @param DetectionAttributes An array of facial attributes you want to be returned. A `DEFAULT`
#' subset of facial attributes - `BoundingBox`, `Confidence`, `Pose`,
#' `Quality`, and `Landmarks` - will always be returned. You can request
#' for specific facial attributes (in addition to the default list) - by
#' using `["DEFAULT", "FACE_OCCLUDED"]` or just `["FACE_OCCLUDED"]`. You
#' can request for all facial attributes by using `["ALL"]`. Requesting
#' more attributes may increase response time.
#' 
#' If you provide both, `["ALL", "DEFAULT"]`, the service uses a logical
#' AND operator to determine which attributes to return (in this case, all
#' attributes).
#' @param MaxFaces The maximum number of faces to index. The value of `MaxFaces` must be
#' greater than or equal to 1. [`index_faces`][rekognition_index_faces]
#' returns no more than 100 detected faces in an image, even if you specify
#' a larger value for `MaxFaces`.
#' 
#' If [`index_faces`][rekognition_index_faces] detects more faces than the
#' value of `MaxFaces`, the faces with the lowest quality are filtered out
#' first. If there are still more faces than the value of `MaxFaces`, the
#' faces with the smallest bounding boxes are filtered out (up to the
#' number that's needed to satisfy the value of `MaxFaces`). Information
#' about the unindexed faces is available in the `UnindexedFaces` array.
#' 
#' The faces that are returned by [`index_faces`][rekognition_index_faces]
#' are sorted by the largest face bounding box size to the smallest size,
#' in descending order.
#' 
#' `MaxFaces` can be used with a collection associated with any version of
#' the face model.
#' @param QualityFilter A filter that specifies a quality bar for how much filtering is done to
#' identify faces. Filtered faces aren't indexed. If you specify `AUTO`,
#' Amazon Rekognition chooses the quality bar. If you specify `LOW`,
#' `MEDIUM`, or `HIGH`, filtering removes all faces that don’t meet the
#' chosen quality bar. The default value is `AUTO`. The quality bar is
#' based on a variety of common use cases. Low-quality detections can occur
#' for a number of reasons. Some examples are an object that's
#' misidentified as a face, a face that's too blurry, or a face with a pose
#' that's too extreme to use. If you specify `NONE`, no filtering is
#' performed.
#' 
#' To use quality filtering, the collection you are using must be
#' associated with version 3 of the face model or higher.
#'
#' @keywords internal
#'
#' @rdname rekognition_index_faces
rekognition_index_faces <- function(CollectionId, Image, ExternalImageId = NULL, DetectionAttributes = NULL, MaxFaces = NULL, QualityFilter = NULL) {
  op <- new_operation(
    name = "IndexFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$index_faces_input(CollectionId = CollectionId, Image = Image, ExternalImageId = ExternalImageId, DetectionAttributes = DetectionAttributes, MaxFaces = MaxFaces, QualityFilter = QualityFilter)
  output <- .rekognition$index_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$index_faces <- rekognition_index_faces

#' Returns list of collection IDs in your account
#'
#' @description
#' Returns list of collection IDs in your account. If the result is truncated, the response also provides a `NextToken` that you can use in the subsequent request to fetch the next set of collection IDs.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_collections/](https://www.paws-r-sdk.com/docs/rekognition_list_collections/) for full documentation.
#'
#' @param NextToken Pagination token from the previous response.
#' @param MaxResults Maximum number of collection IDs to return.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_collections
rekognition_list_collections <- function(NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "ListCollections",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "CollectionIds")
  )
  input <- .rekognition$list_collections_input(NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$list_collections_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_collections <- rekognition_list_collections

#' Lists the entries (images) within a dataset
#'
#' @description
#' Lists the entries (images) within a dataset. An entry is a JSON Line that contains the information for a single image, including the image location, assigned labels, and object location bounding boxes. For more information, see [Creating a manifest file](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_dataset_entries/](https://www.paws-r-sdk.com/docs/rekognition_list_dataset_entries/) for full documentation.
#'
#' @param DatasetArn &#91;required&#93; The Amazon Resource Name (ARN) for the dataset that you want to use.
#' @param ContainsLabels Specifies a label filter for the response. The response includes an
#' entry only if one or more of the labels in `ContainsLabels` exist in the
#' entry.
#' @param Labeled Specify `true` to get only the JSON Lines where the image is labeled.
#' Specify `false` to get only the JSON Lines where the image isn't
#' labeled. If you don't specify `Labeled`,
#' [`list_dataset_entries`][rekognition_list_dataset_entries] returns JSON
#' Lines for labeled and unlabeled images.
#' @param SourceRefContains If specified, [`list_dataset_entries`][rekognition_list_dataset_entries]
#' only returns JSON Lines where the value of `SourceRefContains` is part
#' of the `source-ref` field. The `source-ref` field contains the Amazon S3
#' location of the image. You can use `SouceRefContains` for tasks such as
#' getting the JSON Line for a single image, or gettting JSON Lines for all
#' images within a specific folder.
#' @param HasErrors Specifies an error filter for the response. Specify `True` to only
#' include entries that have errors.
#' @param NextToken If the previous response was incomplete (because there is more results
#' to retrieve), Amazon Rekognition Custom Labels returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of results.
#' @param MaxResults The maximum number of results to return per paginated call. The largest
#' value you can specify is 100. If you specify a value greater than 100, a
#' ValidationException error occurs. The default value is 100.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_dataset_entries
rekognition_list_dataset_entries <- function(DatasetArn, ContainsLabels = NULL, Labeled = NULL, SourceRefContains = NULL, HasErrors = NULL, NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "ListDatasetEntries",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "DatasetEntries")
  )
  input <- .rekognition$list_dataset_entries_input(DatasetArn = DatasetArn, ContainsLabels = ContainsLabels, Labeled = Labeled, SourceRefContains = SourceRefContains, HasErrors = HasErrors, NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$list_dataset_entries_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_dataset_entries <- rekognition_list_dataset_entries

#' Lists the labels in a dataset
#'
#' @description
#' Lists the labels in a dataset. Amazon Rekognition Custom Labels uses labels to describe images. For more information, see [Labeling images](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/md-labeling-images.html).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_dataset_labels/](https://www.paws-r-sdk.com/docs/rekognition_list_dataset_labels/) for full documentation.
#'
#' @param DatasetArn &#91;required&#93; The Amazon Resource Name (ARN) of the dataset that you want to use.
#' @param NextToken If the previous response was incomplete (because there is more results
#' to retrieve), Amazon Rekognition Custom Labels returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of results.
#' @param MaxResults The maximum number of results to return per paginated call. The largest
#' value you can specify is 100. If you specify a value greater than 100, a
#' ValidationException error occurs. The default value is 100.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_dataset_labels
rekognition_list_dataset_labels <- function(DatasetArn, NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "ListDatasetLabels",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "DatasetLabelDescriptions")
  )
  input <- .rekognition$list_dataset_labels_input(DatasetArn = DatasetArn, NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$list_dataset_labels_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_dataset_labels <- rekognition_list_dataset_labels

#' Returns metadata for faces in the specified collection
#'
#' @description
#' Returns metadata for faces in the specified collection. This metadata includes information such as the bounding box coordinates, the confidence (that the bounding box contains a face), and face ID. For an example, see Listing Faces in a Collection in the Amazon Rekognition Developer Guide.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_faces/](https://www.paws-r-sdk.com/docs/rekognition_list_faces/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; ID of the collection from which to list the faces.
#' @param NextToken If the previous response was incomplete (because there is more data to
#' retrieve), Amazon Rekognition returns a pagination token in the
#' response. You can use this pagination token to retrieve the next set of
#' faces.
#' @param MaxResults Maximum number of faces to return.
#' @param UserId An array of user IDs to filter results with when listing faces in a
#' collection.
#' @param FaceIds An array of face IDs to filter results with when listing faces in a
#' collection.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_faces
rekognition_list_faces <- function(CollectionId, NextToken = NULL, MaxResults = NULL, UserId = NULL, FaceIds = NULL) {
  op <- new_operation(
    name = "ListFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "Faces")
  )
  input <- .rekognition$list_faces_input(CollectionId = CollectionId, NextToken = NextToken, MaxResults = MaxResults, UserId = UserId, FaceIds = FaceIds)
  output <- .rekognition$list_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_faces <- rekognition_list_faces

#' Gets a list of the project policies attached to a project
#'
#' @description
#' Gets a list of the project policies attached to a project.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_project_policies/](https://www.paws-r-sdk.com/docs/rekognition_list_project_policies/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The ARN of the project for which you want to list the project policies.
#' @param NextToken If the previous response was incomplete (because there is more results
#' to retrieve), Amazon Rekognition Custom Labels returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of results.
#' @param MaxResults The maximum number of results to return per paginated call. The largest
#' value you can specify is 5. If you specify a value greater than 5, a
#' ValidationException error occurs. The default value is 5.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_project_policies
rekognition_list_project_policies <- function(ProjectArn, NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "ListProjectPolicies",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "ProjectPolicies")
  )
  input <- .rekognition$list_project_policies_input(ProjectArn = ProjectArn, NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$list_project_policies_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_project_policies <- rekognition_list_project_policies

#' Gets a list of stream processors that you have created with
#' CreateStreamProcessor
#'
#' @description
#' Gets a list of stream processors that you have created with [`create_stream_processor`][rekognition_create_stream_processor].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_stream_processors/](https://www.paws-r-sdk.com/docs/rekognition_list_stream_processors/) for full documentation.
#'
#' @param NextToken If the previous response was incomplete (because there are more stream
#' processors to retrieve), Amazon Rekognition Video returns a pagination
#' token in the response. You can use this pagination token to retrieve the
#' next set of stream processors.
#' @param MaxResults Maximum number of stream processors you want Amazon Rekognition Video to
#' return in the response. The default is 1000.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_stream_processors
rekognition_list_stream_processors <- function(NextToken = NULL, MaxResults = NULL) {
  op <- new_operation(
    name = "ListStreamProcessors",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken")
  )
  input <- .rekognition$list_stream_processors_input(NextToken = NextToken, MaxResults = MaxResults)
  output <- .rekognition$list_stream_processors_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_stream_processors <- rekognition_list_stream_processors

#' Returns a list of tags in an Amazon Rekognition collection, stream
#' processor, or Custom Labels model
#'
#' @description
#' Returns a list of tags in an Amazon Rekognition collection, stream processor, or Custom Labels model.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_tags_for_resource/](https://www.paws-r-sdk.com/docs/rekognition_list_tags_for_resource/) for full documentation.
#'
#' @param ResourceArn &#91;required&#93; Amazon Resource Name (ARN) of the model, collection, or stream processor
#' that contains the tags that you want a list of.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_tags_for_resource
rekognition_list_tags_for_resource <- function(ResourceArn) {
  op <- new_operation(
    name = "ListTagsForResource",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$list_tags_for_resource_input(ResourceArn = ResourceArn)
  output <- .rekognition$list_tags_for_resource_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_tags_for_resource <- rekognition_list_tags_for_resource

#' Returns metadata of the User such as UserID in the specified collection
#'
#' @description
#' Returns metadata of the User such as `UserID` in the specified collection. Anonymous User (to reserve faces without any identity) is not returned as part of this request. The results are sorted by system generated primary key ID. If the response is truncated, `NextToken` is returned in the response that can be used in the subsequent request to retrieve the next set of identities.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_list_users/](https://www.paws-r-sdk.com/docs/rekognition_list_users/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection.
#' @param MaxResults Maximum number of UsersID to return.
#' @param NextToken Pagingation token to receive the next set of UsersID.
#'
#' @keywords internal
#'
#' @rdname rekognition_list_users
rekognition_list_users <- function(CollectionId, MaxResults = NULL, NextToken = NULL) {
  op <- new_operation(
    name = "ListUsers",
    http_method = "POST",
    http_path = "/",
    paginator = list(input_token = "NextToken", limit_key = "MaxResults", output_token = "NextToken", result_key = "Users")
  )
  input <- .rekognition$list_users_input(CollectionId = CollectionId, MaxResults = MaxResults, NextToken = NextToken)
  output <- .rekognition$list_users_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$list_users <- rekognition_list_users

#' Attaches a project policy to a Amazon Rekognition Custom Labels project
#' in a trusting AWS account
#'
#' @description
#' Attaches a project policy to a Amazon Rekognition Custom Labels project in a trusting AWS account. A project policy specifies that a trusted AWS account can copy a model version from a trusting AWS account to a project in the trusted AWS account. To copy a model version you use the [`copy_project_version`][rekognition_copy_project_version] operation.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_put_project_policy/](https://www.paws-r-sdk.com/docs/rekognition_put_project_policy/) for full documentation.
#'
#' @param ProjectArn &#91;required&#93; The Amazon Resource Name (ARN) of the project that the project policy is
#' attached to.
#' @param PolicyName &#91;required&#93; A name for the policy.
#' @param PolicyRevisionId The revision ID for the Project Policy. Each time you modify a policy,
#' Amazon Rekognition Custom Labels generates and assigns a new
#' `PolicyRevisionId` and then deletes the previous version of the policy.
#' @param PolicyDocument &#91;required&#93; A resource policy to add to the model. The policy is a JSON structure
#' that contains one or more statements that define the policy. The policy
#' must follow the IAM syntax. For more information about the contents of a
#' JSON policy document, see [IAM JSON policy
#' reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html).
#'
#' @keywords internal
#'
#' @rdname rekognition_put_project_policy
rekognition_put_project_policy <- function(ProjectArn, PolicyName, PolicyRevisionId = NULL, PolicyDocument) {
  op <- new_operation(
    name = "PutProjectPolicy",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$put_project_policy_input(ProjectArn = ProjectArn, PolicyName = PolicyName, PolicyRevisionId = PolicyRevisionId, PolicyDocument = PolicyDocument)
  output <- .rekognition$put_project_policy_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$put_project_policy <- rekognition_put_project_policy

#' Returns an array of celebrities recognized in the input image
#'
#' @description
#' Returns an array of celebrities recognized in the input image. For more information, see Recognizing celebrities in the Amazon Rekognition Developer Guide.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_recognize_celebrities/](https://www.paws-r-sdk.com/docs/rekognition_recognize_celebrities/) for full documentation.
#'
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#'
#' @keywords internal
#'
#' @rdname rekognition_recognize_celebrities
rekognition_recognize_celebrities <- function(Image) {
  op <- new_operation(
    name = "RecognizeCelebrities",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$recognize_celebrities_input(Image = Image)
  output <- .rekognition$recognize_celebrities_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$recognize_celebrities <- rekognition_recognize_celebrities

#' For a given input face ID, searches for matching faces in the collection
#' the face belongs to
#'
#' @description
#' For a given input face ID, searches for matching faces in the collection the face belongs to. You get a face ID when you add a face to the collection using the [`index_faces`][rekognition_index_faces] operation. The operation compares the features of the input face with faces in the specified collection.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_search_faces/](https://www.paws-r-sdk.com/docs/rekognition_search_faces/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; ID of the collection the face belongs to.
#' @param FaceId &#91;required&#93; ID of a face to find matches for in the collection.
#' @param MaxFaces Maximum number of faces to return. The operation returns the maximum
#' number of faces with the highest confidence in the match.
#' @param FaceMatchThreshold Optional value specifying the minimum confidence in the face match to
#' return. For example, don't return any matches where confidence in
#' matches is less than 70%. The default value is 80%.
#'
#' @keywords internal
#'
#' @rdname rekognition_search_faces
rekognition_search_faces <- function(CollectionId, FaceId, MaxFaces = NULL, FaceMatchThreshold = NULL) {
  op <- new_operation(
    name = "SearchFaces",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$search_faces_input(CollectionId = CollectionId, FaceId = FaceId, MaxFaces = MaxFaces, FaceMatchThreshold = FaceMatchThreshold)
  output <- .rekognition$search_faces_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$search_faces <- rekognition_search_faces

#' For a given input image, first detects the largest face in the image,
#' and then searches the specified collection for matching faces
#'
#' @description
#' For a given input image, first detects the largest face in the image, and then searches the specified collection for matching faces. The operation compares the features of the input face with faces in the specified collection.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_search_faces_by_image/](https://www.paws-r-sdk.com/docs/rekognition_search_faces_by_image/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; ID of the collection to search.
#' @param Image &#91;required&#93; The input image as base64-encoded bytes or an S3 object. If you use the
#' AWS CLI to call Amazon Rekognition operations, passing base64-encoded
#' image bytes is not supported.
#' 
#' If you are using an AWS SDK to call Amazon Rekognition, you might not
#' need to base64-encode image bytes passed using the `Bytes` field. For
#' more information, see Images in the Amazon Rekognition developer guide.
#' @param MaxFaces Maximum number of faces to return. The operation returns the maximum
#' number of faces with the highest confidence in the match.
#' @param FaceMatchThreshold (Optional) Specifies the minimum confidence in the face match to return.
#' For example, don't return any matches where confidence in matches is
#' less than 70%. The default value is 80%.
#' @param QualityFilter A filter that specifies a quality bar for how much filtering is done to
#' identify faces. Filtered faces aren't searched for in the collection. If
#' you specify `AUTO`, Amazon Rekognition chooses the quality bar. If you
#' specify `LOW`, `MEDIUM`, or `HIGH`, filtering removes all faces that
#' don’t meet the chosen quality bar. The quality bar is based on a variety
#' of common use cases. Low-quality detections can occur for a number of
#' reasons. Some examples are an object that's misidentified as a face, a
#' face that's too blurry, or a face with a pose that's too extreme to use.
#' If you specify `NONE`, no filtering is performed. The default value is
#' `NONE`.
#' 
#' To use quality filtering, the collection you are using must be
#' associated with version 3 of the face model or higher.
#'
#' @keywords internal
#'
#' @rdname rekognition_search_faces_by_image
rekognition_search_faces_by_image <- function(CollectionId, Image, MaxFaces = NULL, FaceMatchThreshold = NULL, QualityFilter = NULL) {
  op <- new_operation(
    name = "SearchFacesByImage",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$search_faces_by_image_input(CollectionId = CollectionId, Image = Image, MaxFaces = MaxFaces, FaceMatchThreshold = FaceMatchThreshold, QualityFilter = QualityFilter)
  output <- .rekognition$search_faces_by_image_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$search_faces_by_image <- rekognition_search_faces_by_image

#' Searches for UserIDs within a collection based on a FaceId or UserId
#'
#' @description
#' Searches for UserIDs within a collection based on a `FaceId` or `UserId`. This API can be used to find the closest UserID (with a highest similarity) to associate a face. The request must be provided with either `FaceId` or `UserId`. The operation returns an array of UserID that match the `FaceId` or `UserId`, ordered by similarity score with the highest similarity first.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_search_users/](https://www.paws-r-sdk.com/docs/rekognition_search_users/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection containing the UserID, used with a
#' UserId or FaceId. If a FaceId is provided, UserId isn’t required to be
#' present in the Collection.
#' @param UserId ID for the existing User.
#' @param FaceId ID for the existing face.
#' @param UserMatchThreshold Optional value that specifies the minimum confidence in the matched
#' UserID to return. Default value of 80.
#' @param MaxUsers Maximum number of identities to return.
#'
#' @keywords internal
#'
#' @rdname rekognition_search_users
rekognition_search_users <- function(CollectionId, UserId = NULL, FaceId = NULL, UserMatchThreshold = NULL, MaxUsers = NULL) {
  op <- new_operation(
    name = "SearchUsers",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$search_users_input(CollectionId = CollectionId, UserId = UserId, FaceId = FaceId, UserMatchThreshold = UserMatchThreshold, MaxUsers = MaxUsers)
  output <- .rekognition$search_users_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$search_users <- rekognition_search_users

#' Searches for UserIDs using a supplied image
#'
#' @description
#' Searches for UserIDs using a supplied image. It first detects the largest face in the image, and then searches a specified collection for matching UserIDs.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_search_users_by_image/](https://www.paws-r-sdk.com/docs/rekognition_search_users_by_image/) for full documentation.
#'
#' @param CollectionId &#91;required&#93; The ID of an existing collection containing the UserID.
#' @param Image &#91;required&#93; 
#' @param UserMatchThreshold Specifies the minimum confidence in the UserID match to return. Default
#' value is 80.
#' @param MaxUsers Maximum number of UserIDs to return.
#' @param QualityFilter A filter that specifies a quality bar for how much filtering is done to
#' identify faces. Filtered faces aren't searched for in the collection.
#' The default value is NONE.
#'
#' @keywords internal
#'
#' @rdname rekognition_search_users_by_image
rekognition_search_users_by_image <- function(CollectionId, Image, UserMatchThreshold = NULL, MaxUsers = NULL, QualityFilter = NULL) {
  op <- new_operation(
    name = "SearchUsersByImage",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$search_users_by_image_input(CollectionId = CollectionId, Image = Image, UserMatchThreshold = UserMatchThreshold, MaxUsers = MaxUsers, QualityFilter = QualityFilter)
  output <- .rekognition$search_users_by_image_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$search_users_by_image <- rekognition_search_users_by_image

#' Starts asynchronous recognition of celebrities in a stored video
#'
#' @description
#' Starts asynchronous recognition of celebrities in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_celebrity_recognition/](https://www.paws-r-sdk.com/docs/rekognition_start_celebrity_recognition/) for full documentation.
#'
#' @param Video &#91;required&#93; The video in which you want to recognize celebrities. The video must be
#' stored in an Amazon S3 bucket.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_celebrity_recognition`][rekognition_start_celebrity_recognition]
#' requests, the same `JobId` is returned. Use `ClientRequestToken` to
#' prevent the same job from being accidently started more than once.
#' @param NotificationChannel The Amazon SNS topic ARN that you want Amazon Rekognition Video to
#' publish the completion status of the celebrity recognition analysis to.
#' The Amazon SNS topic must have a topic name that begins with
#' *AmazonRekognition* if you are using the AmazonRekognitionServiceRole
#' permissions policy.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_celebrity_recognition
rekognition_start_celebrity_recognition <- function(Video, ClientRequestToken = NULL, NotificationChannel = NULL, JobTag = NULL) {
  op <- new_operation(
    name = "StartCelebrityRecognition",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_celebrity_recognition_input(Video = Video, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, JobTag = JobTag)
  output <- .rekognition$start_celebrity_recognition_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_celebrity_recognition <- rekognition_start_celebrity_recognition

#' Starts asynchronous detection of inappropriate, unwanted, or offensive
#' content in a stored video
#'
#' @description
#' Starts asynchronous detection of inappropriate, unwanted, or offensive content in a stored video. For a list of moderation labels in Amazon Rekognition, see [Using the image and video moderation APIs](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_content_moderation/](https://www.paws-r-sdk.com/docs/rekognition_start_content_moderation/) for full documentation.
#'
#' @param Video &#91;required&#93; The video in which you want to detect inappropriate, unwanted, or
#' offensive content. The video must be stored in an Amazon S3 bucket.
#' @param MinConfidence Specifies the minimum confidence that Amazon Rekognition must have in
#' order to return a moderated content label. Confidence represents how
#' certain Amazon Rekognition is that the moderated content is correctly
#' identified. 0 is the lowest confidence. 100 is the highest confidence.
#' Amazon Rekognition doesn't return any moderated content labels with a
#' confidence level lower than this specified value. If you don't specify
#' `MinConfidence`,
#' [`get_content_moderation`][rekognition_get_content_moderation] returns
#' labels with confidence values greater than or equal to 50 percent.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_content_moderation`][rekognition_start_content_moderation]
#' requests, the same `JobId` is returned. Use `ClientRequestToken` to
#' prevent the same job from being accidently started more than once.
#' @param NotificationChannel The Amazon SNS topic ARN that you want Amazon Rekognition Video to
#' publish the completion status of the content analysis to. The Amazon SNS
#' topic must have a topic name that begins with *AmazonRekognition* if you
#' are using the AmazonRekognitionServiceRole permissions policy to access
#' the topic.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_content_moderation
rekognition_start_content_moderation <- function(Video, MinConfidence = NULL, ClientRequestToken = NULL, NotificationChannel = NULL, JobTag = NULL) {
  op <- new_operation(
    name = "StartContentModeration",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_content_moderation_input(Video = Video, MinConfidence = MinConfidence, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, JobTag = JobTag)
  output <- .rekognition$start_content_moderation_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_content_moderation <- rekognition_start_content_moderation

#' Starts asynchronous detection of faces in a stored video
#'
#' @description
#' Starts asynchronous detection of faces in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_face_detection/](https://www.paws-r-sdk.com/docs/rekognition_start_face_detection/) for full documentation.
#'
#' @param Video &#91;required&#93; The video in which you want to detect faces. The video must be stored in
#' an Amazon S3 bucket.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_face_detection`][rekognition_start_face_detection] requests, the
#' same `JobId` is returned. Use `ClientRequestToken` to prevent the same
#' job from being accidently started more than once.
#' @param NotificationChannel The ARN of the Amazon SNS topic to which you want Amazon Rekognition
#' Video to publish the completion status of the face detection operation.
#' The Amazon SNS topic must have a topic name that begins with
#' *AmazonRekognition* if you are using the AmazonRekognitionServiceRole
#' permissions policy.
#' @param FaceAttributes The face attributes you want returned.
#' 
#' `DEFAULT` - The following subset of facial attributes are returned:
#' BoundingBox, Confidence, Pose, Quality and Landmarks.
#' 
#' `ALL` - All facial attributes are returned.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_face_detection
rekognition_start_face_detection <- function(Video, ClientRequestToken = NULL, NotificationChannel = NULL, FaceAttributes = NULL, JobTag = NULL) {
  op <- new_operation(
    name = "StartFaceDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_face_detection_input(Video = Video, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, FaceAttributes = FaceAttributes, JobTag = JobTag)
  output <- .rekognition$start_face_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_face_detection <- rekognition_start_face_detection

#' Starts the asynchronous search for faces in a collection that match the
#' faces of persons detected in a stored video
#'
#' @description
#' Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_face_search/](https://www.paws-r-sdk.com/docs/rekognition_start_face_search/) for full documentation.
#'
#' @param Video &#91;required&#93; The video you want to search. The video must be stored in an Amazon S3
#' bucket.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple [`start_face_search`][rekognition_start_face_search]
#' requests, the same `JobId` is returned. Use `ClientRequestToken` to
#' prevent the same job from being accidently started more than once.
#' @param FaceMatchThreshold The minimum confidence in the person match to return. For example, don't
#' return any matches where confidence in matches is less than 70%. The
#' default value is 80%.
#' @param CollectionId &#91;required&#93; ID of the collection that contains the faces you want to search for.
#' @param NotificationChannel The ARN of the Amazon SNS topic to which you want Amazon Rekognition
#' Video to publish the completion status of the search. The Amazon SNS
#' topic must have a topic name that begins with *AmazonRekognition* if you
#' are using the AmazonRekognitionServiceRole permissions policy to access
#' the topic.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_face_search
rekognition_start_face_search <- function(Video, ClientRequestToken = NULL, FaceMatchThreshold = NULL, CollectionId, NotificationChannel = NULL, JobTag = NULL) {
  op <- new_operation(
    name = "StartFaceSearch",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_face_search_input(Video = Video, ClientRequestToken = ClientRequestToken, FaceMatchThreshold = FaceMatchThreshold, CollectionId = CollectionId, NotificationChannel = NotificationChannel, JobTag = JobTag)
  output <- .rekognition$start_face_search_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_face_search <- rekognition_start_face_search

#' Starts asynchronous detection of labels in a stored video
#'
#' @description
#' Starts asynchronous detection of labels in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_label_detection/](https://www.paws-r-sdk.com/docs/rekognition_start_label_detection/) for full documentation.
#'
#' @param Video &#91;required&#93; The video in which you want to detect labels. The video must be stored
#' in an Amazon S3 bucket.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_label_detection`][rekognition_start_label_detection] requests,
#' the same `JobId` is returned. Use `ClientRequestToken` to prevent the
#' same job from being accidently started more than once.
#' @param MinConfidence Specifies the minimum confidence that Amazon Rekognition Video must have
#' in order to return a detected label. Confidence represents how certain
#' Amazon Rekognition is that a label is correctly identified.0 is the
#' lowest confidence. 100 is the highest confidence. Amazon Rekognition
#' Video doesn't return any labels with a confidence level lower than this
#' specified value.
#' 
#' If you don't specify `MinConfidence`, the operation returns labels and
#' bounding boxes (if detected) with confidence values greater than or
#' equal to 50 percent.
#' @param NotificationChannel The Amazon SNS topic ARN you want Amazon Rekognition Video to publish
#' the completion status of the label detection operation to. The Amazon
#' SNS topic must have a topic name that begins with *AmazonRekognition* if
#' you are using the AmazonRekognitionServiceRole permissions policy.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#' @param Features The features to return after video analysis. You can specify that
#' GENERAL_LABELS are returned.
#' @param Settings The settings for a StartLabelDetection request.Contains the specified
#' parameters for the label detection request of an asynchronous label
#' analysis operation. Settings can include filters for GENERAL_LABELS.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_label_detection
rekognition_start_label_detection <- function(Video, ClientRequestToken = NULL, MinConfidence = NULL, NotificationChannel = NULL, JobTag = NULL, Features = NULL, Settings = NULL) {
  op <- new_operation(
    name = "StartLabelDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_label_detection_input(Video = Video, ClientRequestToken = ClientRequestToken, MinConfidence = MinConfidence, NotificationChannel = NotificationChannel, JobTag = JobTag, Features = Features, Settings = Settings)
  output <- .rekognition$start_label_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_label_detection <- rekognition_start_label_detection

#' Starts the asynchronous tracking of a person's path in a stored video
#'
#' @description
#' Starts the asynchronous tracking of a person's path in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_person_tracking/](https://www.paws-r-sdk.com/docs/rekognition_start_person_tracking/) for full documentation.
#'
#' @param Video &#91;required&#93; The video in which you want to detect people. The video must be stored
#' in an Amazon S3 bucket.
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_person_tracking`][rekognition_start_person_tracking] requests,
#' the same `JobId` is returned. Use `ClientRequestToken` to prevent the
#' same job from being accidently started more than once.
#' @param NotificationChannel The Amazon SNS topic ARN you want Amazon Rekognition Video to publish
#' the completion status of the people detection operation to. The Amazon
#' SNS topic must have a topic name that begins with *AmazonRekognition* if
#' you are using the AmazonRekognitionServiceRole permissions policy.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_person_tracking
rekognition_start_person_tracking <- function(Video, ClientRequestToken = NULL, NotificationChannel = NULL, JobTag = NULL) {
  op <- new_operation(
    name = "StartPersonTracking",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_person_tracking_input(Video = Video, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, JobTag = JobTag)
  output <- .rekognition$start_person_tracking_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_person_tracking <- rekognition_start_person_tracking

#' Starts the running of the version of a model
#'
#' @description
#' Starts the running of the version of a model. Starting a model takes a while to complete. To check the current state of the model, use [`describe_project_versions`][rekognition_describe_project_versions].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_project_version/](https://www.paws-r-sdk.com/docs/rekognition_start_project_version/) for full documentation.
#'
#' @param ProjectVersionArn &#91;required&#93; The Amazon Resource Name(ARN) of the model version that you want to
#' start.
#' @param MinInferenceUnits &#91;required&#93; The minimum number of inference units to use. A single inference unit
#' represents 1 hour of processing.
#' 
#' For information about the number of transactions per second (TPS) that
#' an inference unit can support, see *Running a trained Amazon Rekognition
#' Custom Labels model* in the Amazon Rekognition Custom Labels Guide.
#' 
#' Use a higher number to increase the TPS throughput of your model. You
#' are charged for the number of inference units that you use.
#' @param MaxInferenceUnits The maximum number of inference units to use for auto-scaling the model.
#' If you don't specify a value, Amazon Rekognition Custom Labels doesn't
#' auto-scale the model.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_project_version
rekognition_start_project_version <- function(ProjectVersionArn, MinInferenceUnits, MaxInferenceUnits = NULL) {
  op <- new_operation(
    name = "StartProjectVersion",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_project_version_input(ProjectVersionArn = ProjectVersionArn, MinInferenceUnits = MinInferenceUnits, MaxInferenceUnits = MaxInferenceUnits)
  output <- .rekognition$start_project_version_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_project_version <- rekognition_start_project_version

#' Starts asynchronous detection of segment detection in a stored video
#'
#' @description
#' Starts asynchronous detection of segment detection in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_segment_detection/](https://www.paws-r-sdk.com/docs/rekognition_start_segment_detection/) for full documentation.
#'
#' @param Video &#91;required&#93; 
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_segment_detection`][rekognition_start_segment_detection]
#' requests, the same `JobId` is returned. Use `ClientRequestToken` to
#' prevent the same job from being accidently started more than once.
#' @param NotificationChannel The ARN of the Amazon SNS topic to which you want Amazon Rekognition
#' Video to publish the completion status of the segment detection
#' operation. Note that the Amazon SNS topic must have a topic name that
#' begins with *AmazonRekognition* if you are using the
#' AmazonRekognitionServiceRole permissions policy to access the topic.
#' @param JobTag An identifier you specify that's returned in the completion notification
#' that's published to your Amazon Simple Notification Service topic. For
#' example, you can use `JobTag` to group related jobs and identify them in
#' the completion notification.
#' @param Filters Filters for technical cue or shot detection.
#' @param SegmentTypes &#91;required&#93; An array of segment types to detect in the video. Valid values are
#' TECHNICAL_CUE and SHOT.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_segment_detection
rekognition_start_segment_detection <- function(Video, ClientRequestToken = NULL, NotificationChannel = NULL, JobTag = NULL, Filters = NULL, SegmentTypes) {
  op <- new_operation(
    name = "StartSegmentDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_segment_detection_input(Video = Video, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, JobTag = JobTag, Filters = Filters, SegmentTypes = SegmentTypes)
  output <- .rekognition$start_segment_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_segment_detection <- rekognition_start_segment_detection

#' Starts processing a stream processor
#'
#' @description
#' Starts processing a stream processor. You create a stream processor by calling [`create_stream_processor`][rekognition_create_stream_processor]. To tell [`start_stream_processor`][rekognition_start_stream_processor] which stream processor to start, use the value of the `Name` field specified in the call to [`create_stream_processor`][rekognition_create_stream_processor].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_start_stream_processor/) for full documentation.
#'
#' @param Name &#91;required&#93; The name of the stream processor to start processing.
#' @param StartSelector Specifies the starting point in the Kinesis stream to start processing.
#' You can use the producer timestamp or the fragment number. If you use
#' the producer timestamp, you must put the time in milliseconds. For more
#' information about fragment numbers, see
#' [Fragment](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_reader_Fragment.html).
#' 
#' This is a required parameter for label detection stream processors and
#' should not be used to start a face search stream processor.
#' @param StopSelector Specifies when to stop processing the stream. You can specify a maximum
#' amount of time to process the video.
#' 
#' This is a required parameter for label detection stream processors and
#' should not be used to start a face search stream processor.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_stream_processor
rekognition_start_stream_processor <- function(Name, StartSelector = NULL, StopSelector = NULL) {
  op <- new_operation(
    name = "StartStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_stream_processor_input(Name = Name, StartSelector = StartSelector, StopSelector = StopSelector)
  output <- .rekognition$start_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_stream_processor <- rekognition_start_stream_processor

#' Starts asynchronous detection of text in a stored video
#'
#' @description
#' Starts asynchronous detection of text in a stored video.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_start_text_detection/](https://www.paws-r-sdk.com/docs/rekognition_start_text_detection/) for full documentation.
#'
#' @param Video &#91;required&#93; 
#' @param ClientRequestToken Idempotent token used to identify the start request. If you use the same
#' token with multiple
#' [`start_text_detection`][rekognition_start_text_detection] requests, the
#' same `JobId` is returned. Use `ClientRequestToken` to prevent the same
#' job from being accidentaly started more than once.
#' @param NotificationChannel 
#' @param JobTag An identifier returned in the completion status published by your Amazon
#' Simple Notification Service topic. For example, you can use `JobTag` to
#' group related jobs and identify them in the completion notification.
#' @param Filters Optional parameters that let you set criteria the text must meet to be
#' included in your response.
#'
#' @keywords internal
#'
#' @rdname rekognition_start_text_detection
rekognition_start_text_detection <- function(Video, ClientRequestToken = NULL, NotificationChannel = NULL, JobTag = NULL, Filters = NULL) {
  op <- new_operation(
    name = "StartTextDetection",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$start_text_detection_input(Video = Video, ClientRequestToken = ClientRequestToken, NotificationChannel = NotificationChannel, JobTag = JobTag, Filters = Filters)
  output <- .rekognition$start_text_detection_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$start_text_detection <- rekognition_start_text_detection

#' Stops a running model
#'
#' @description
#' Stops a running model. The operation might take a while to complete. To check the current status, call [`describe_project_versions`][rekognition_describe_project_versions].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_stop_project_version/](https://www.paws-r-sdk.com/docs/rekognition_stop_project_version/) for full documentation.
#'
#' @param ProjectVersionArn &#91;required&#93; The Amazon Resource Name (ARN) of the model version that you want to
#' delete.
#' 
#' This operation requires permissions to perform the
#' `rekognition:StopProjectVersion` action.
#'
#' @keywords internal
#'
#' @rdname rekognition_stop_project_version
rekognition_stop_project_version <- function(ProjectVersionArn) {
  op <- new_operation(
    name = "StopProjectVersion",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$stop_project_version_input(ProjectVersionArn = ProjectVersionArn)
  output <- .rekognition$stop_project_version_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$stop_project_version <- rekognition_stop_project_version

#' Stops a running stream processor that was created by
#' CreateStreamProcessor
#'
#' @description
#' Stops a running stream processor that was created by [`create_stream_processor`][rekognition_create_stream_processor].
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_stop_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_stop_stream_processor/) for full documentation.
#'
#' @param Name &#91;required&#93; The name of a stream processor created by
#' [`create_stream_processor`][rekognition_create_stream_processor].
#'
#' @keywords internal
#'
#' @rdname rekognition_stop_stream_processor
rekognition_stop_stream_processor <- function(Name) {
  op <- new_operation(
    name = "StopStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$stop_stream_processor_input(Name = Name)
  output <- .rekognition$stop_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$stop_stream_processor <- rekognition_stop_stream_processor

#' Adds one or more key-value tags to an Amazon Rekognition collection,
#' stream processor, or Custom Labels model
#'
#' @description
#' Adds one or more key-value tags to an Amazon Rekognition collection, stream processor, or Custom Labels model. For more information, see [Tagging AWS Resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html).
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_tag_resource/](https://www.paws-r-sdk.com/docs/rekognition_tag_resource/) for full documentation.
#'
#' @param ResourceArn &#91;required&#93; Amazon Resource Name (ARN) of the model, collection, or stream processor
#' that you want to assign the tags to.
#' @param Tags &#91;required&#93; The key-value tags to assign to the resource.
#'
#' @keywords internal
#'
#' @rdname rekognition_tag_resource
rekognition_tag_resource <- function(ResourceArn, Tags) {
  op <- new_operation(
    name = "TagResource",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$tag_resource_input(ResourceArn = ResourceArn, Tags = Tags)
  output <- .rekognition$tag_resource_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$tag_resource <- rekognition_tag_resource

#' Removes one or more tags from an Amazon Rekognition collection, stream
#' processor, or Custom Labels model
#'
#' @description
#' Removes one or more tags from an Amazon Rekognition collection, stream processor, or Custom Labels model.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_untag_resource/](https://www.paws-r-sdk.com/docs/rekognition_untag_resource/) for full documentation.
#'
#' @param ResourceArn &#91;required&#93; Amazon Resource Name (ARN) of the model, collection, or stream processor
#' that you want to remove the tags from.
#' @param TagKeys &#91;required&#93; A list of the tags that you want to remove.
#'
#' @keywords internal
#'
#' @rdname rekognition_untag_resource
rekognition_untag_resource <- function(ResourceArn, TagKeys) {
  op <- new_operation(
    name = "UntagResource",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$untag_resource_input(ResourceArn = ResourceArn, TagKeys = TagKeys)
  output <- .rekognition$untag_resource_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$untag_resource <- rekognition_untag_resource

#' Adds or updates one or more entries (images) in a dataset
#'
#' @description
#' Adds or updates one or more entries (images) in a dataset. An entry is a JSON Line which contains the information for a single image, including the image location, assigned labels, and object location bounding boxes. For more information, see Image-Level labels in manifest files and Object localization in manifest files in the *Amazon Rekognition Custom Labels Developer Guide*.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_update_dataset_entries/](https://www.paws-r-sdk.com/docs/rekognition_update_dataset_entries/) for full documentation.
#'
#' @param DatasetArn &#91;required&#93; The Amazon Resource Name (ARN) of the dataset that you want to update.
#' @param Changes &#91;required&#93; The changes that you want to make to the dataset.
#'
#' @keywords internal
#'
#' @rdname rekognition_update_dataset_entries
rekognition_update_dataset_entries <- function(DatasetArn, Changes) {
  op <- new_operation(
    name = "UpdateDatasetEntries",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$update_dataset_entries_input(DatasetArn = DatasetArn, Changes = Changes)
  output <- .rekognition$update_dataset_entries_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$update_dataset_entries <- rekognition_update_dataset_entries

#' Allows you to update a stream processor
#'
#' @description
#' Allows you to update a stream processor. You can change some settings and regions of interest and delete certain parameters.
#'
#' See [https://www.paws-r-sdk.com/docs/rekognition_update_stream_processor/](https://www.paws-r-sdk.com/docs/rekognition_update_stream_processor/) for full documentation.
#'
#' @param Name &#91;required&#93; Name of the stream processor that you want to update.
#' @param SettingsForUpdate The stream processor settings that you want to update. Label detection
#' settings can be updated to detect different labels with a different
#' minimum confidence.
#' @param RegionsOfInterestForUpdate Specifies locations in the frames where Amazon Rekognition checks for
#' objects or people. This is an optional parameter for label detection
#' stream processors.
#' @param DataSharingPreferenceForUpdate Shows whether you are sharing data with Rekognition to improve model
#' performance. You can choose this option at the account level or on a
#' per-stream basis. Note that if you opt out at the account level this
#' setting is ignored on individual streams.
#' @param ParametersToDelete A list of parameters you want to delete from the stream processor.
#'
#' @keywords internal
#'
#' @rdname rekognition_update_stream_processor
rekognition_update_stream_processor <- function(Name, SettingsForUpdate = NULL, RegionsOfInterestForUpdate = NULL, DataSharingPreferenceForUpdate = NULL, ParametersToDelete = NULL) {
  op <- new_operation(
    name = "UpdateStreamProcessor",
    http_method = "POST",
    http_path = "/",
    paginator = list()
  )
  input <- .rekognition$update_stream_processor_input(Name = Name, SettingsForUpdate = SettingsForUpdate, RegionsOfInterestForUpdate = RegionsOfInterestForUpdate, DataSharingPreferenceForUpdate = DataSharingPreferenceForUpdate, ParametersToDelete = ParametersToDelete)
  output <- .rekognition$update_stream_processor_output()
  config <- get_config()
  svc <- .rekognition$service(config)
  request <- new_request(svc, op, input, output)
  response <- send_request(request)
  return(response)
}
.rekognition$operations$update_stream_processor <- rekognition_update_stream_processor

Try the paws.machine.learning package in your browser

Any scripts or data that you put into this service are public.

paws.machine.learning documentation built on Sept. 12, 2023, 1:14 a.m.