
View source: R/sagemakerruntime_operations.R

sagemakerruntime_invoke_endpoint        R Documentation

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint

Description

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.

See https://www.paws-r-sdk.com/docs/sagemakerruntime_invoke_endpoint/ for full documentation.

Usage

sagemakerruntime_invoke_endpoint(
  EndpointName,
  Body,
  ContentType = NULL,
  Accept = NULL,
  CustomAttributes = NULL,
  TargetModel = NULL,
  TargetVariant = NULL,
  TargetContainerHostname = NULL,
  InferenceId = NULL,
  EnableExplanations = NULL,
  InferenceComponentName = NULL
)
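A minimal invocation sketch using the paws client. The endpoint name, payload, and content type are hypothetical; a deployed endpoint and configured AWS credentials are assumed:

```r
library(paws)

svc <- sagemakerruntime()

resp <- svc$invoke_endpoint(
  EndpointName = "my-endpoint",                 # hypothetical endpoint name
  ContentType  = "text/csv",                    # MIME type of the payload below
  Accept       = "application/json",            # desired response MIME type
  Body         = charToRaw("5.1,3.5,1.4,0.2")   # request body as raw bytes
)

# The response Body is returned as raw bytes; decode it to text.
prediction <- rawToChar(resp$Body)
```

The Body must match the format announced in ContentType; the model container is responsible for interpreting it.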

Arguments

EndpointName

[required] The name of the endpoint that you specified when you created the endpoint using the CreateEndpoint API.

Body

[required] Provides input data, in the format specified in the ContentType request header. Amazon SageMaker passes all of the data in the body to the model.

For information about the format of the request body, see Common Data Formats-Inference.

ContentType

The MIME type of the input data in the request body.

Accept

The desired MIME type of the inference response from the model container.

CustomAttributes

Provides additional information about a request for an inference submitted to a model hosted at an Amazon SageMaker endpoint. The information is an opaque value that is forwarded verbatim. You can use this value, for example, to provide an ID that you can use to track a request or to provide other metadata that a service endpoint was programmed to process. The value must consist of no more than 1024 visible US-ASCII characters, as specified in Section 3.3.6 (Field Value Components) of the Hypertext Transfer Protocol (HTTP/1.1).

The code in your model is responsible for setting or updating any custom attributes in the response. If your code does not set this value in the response, an empty value is returned. For example, if a custom attribute represents the trace ID, your model can prepend the custom attribute with "Trace ID:" in your post-processing function.

This feature is currently supported in the Amazon Web Services SDKs but not in the Amazon SageMaker Python SDK.
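A sketch of passing an opaque custom attribute and reading back whatever the model container set (the attribute string and endpoint name are hypothetical; configured AWS credentials are assumed):

```r
library(paws)

svc <- sagemakerruntime()

resp <- svc$invoke_endpoint(
  EndpointName     = "my-endpoint",          # hypothetical endpoint name
  ContentType      = "text/csv",
  Body             = charToRaw("5.1,3.5,1.4,0.2"),
  CustomAttributes = "trace-id=abc123"       # hypothetical opaque value, forwarded verbatim
)

# Whatever the model code set in its response, or "" if it set nothing:
resp$CustomAttributes
```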

TargetModel

The model to request for inference when invoking a multi-model endpoint.
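For a multi-model endpoint, TargetModel names the model artifact to load and invoke. A hedged sketch (the endpoint and artifact names are hypothetical; configured AWS credentials are assumed):

```r
library(paws)

svc <- sagemakerruntime()

resp <- svc$invoke_endpoint(
  EndpointName = "my-multi-model-endpoint",     # hypothetical multi-model endpoint
  TargetModel  = "model-a.tar.gz",              # hypothetical artifact name
  ContentType  = "application/json",
  Body         = charToRaw('{"instances": [[1, 2, 3]]}')
)
```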

TargetVariant

Specify the production variant to send the inference request to when invoking an endpoint that is running two or more variants. Note that this parameter overrides the default behavior for the endpoint, which is to distribute the invocation traffic based on the variant weights.

For information about how to use variant targeting to perform A/B testing, see Test models in production.
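Pinning a request to one production variant instead of the weighted default can be sketched as follows (the endpoint and variant names are hypothetical; configured AWS credentials are assumed):

```r
library(paws)

svc <- sagemakerruntime()

resp <- svc$invoke_endpoint(
  EndpointName  = "my-endpoint",     # hypothetical endpoint with >= 2 variants
  TargetVariant = "variant-b",       # hypothetical variant name; bypasses weight-based routing
  ContentType   = "text/csv",
  Body          = charToRaw("5.1,3.5,1.4,0.2")
)
```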

TargetContainerHostname

If the endpoint hosts multiple containers and is configured to use direct invocation, this parameter specifies the host name of the container to invoke.

InferenceId

If you provide a value, it is added to the captured data when you enable data capture on the endpoint. For information about data capture, see Capture Data.

EnableExplanations

An optional JMESPath expression used to override the EnableExplanations parameter of the ClarifyExplainerConfig API. See the EnableExplanations section in the developer guide for more information.

InferenceComponentName

If the endpoint hosts one or more inference components, this parameter specifies the name of the inference component to invoke.


paws.machine.learning documentation built on Sept. 12, 2024, 6:23 a.m.