Invoke Endpoint With Response Stream
sagemakerruntime_invoke_endpoint_with_response_stream | R Documentation |
Invokes a model at the specified endpoint to return the inference response as a stream
Description
Invokes a model at the specified endpoint to return the inference response as a stream. The inference stream provides the response payload incrementally as a series of parts. Before you can get an inference stream, you must have access to a model that's deployed using Amazon SageMaker hosting services, and the container for that model must support inference streaming.
For more information that can help you use this API, see the following sections in the Amazon SageMaker Developer Guide:
- For information about how to add streaming support to a model, see How Containers Serve Requests.
- For information about how to process the streaming response, see Invoke real-time endpoints.
Before you can use this operation, your IAM permissions must allow the sagemaker:InvokeEndpoint action. For more information about Amazon SageMaker actions for IAM policies, see Actions, resources, and condition keys for Amazon SageMaker in the IAM Service Authorization Reference.
Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.
Calls to invoke_endpoint_with_response_stream are authenticated by using Amazon Web Services Signature Version 4. For information, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 API Reference.
Usage
sagemakerruntime_invoke_endpoint_with_response_stream(EndpointName,
Body, ContentType, Accept, CustomAttributes, TargetVariant,
TargetContainerHostname, InferenceId, InferenceComponentName)
Arguments
EndpointName
[required] The name of the endpoint that you specified when you created the endpoint using the CreateEndpoint API.
Body
[required] Provides input data, in the format specified in the ContentType request header. Amazon SageMaker passes all of the data in the body to the model. For information about the format of the request body, see Common Data Formats-Inference.
ContentType
The MIME type of the input data in the request body.
Accept
The desired MIME type of the inference response from the model container.
CustomAttributes
Provides additional information about a request for an inference submitted to a model hosted at an Amazon SageMaker endpoint. The information is an opaque value that is forwarded verbatim. You could use this value, for example, to provide an ID that you can use to track a request or to provide other metadata that a service endpoint was programmed to process. The value must consist of no more than 1024 visible US-ASCII characters as specified in Section 3.3.6. Field Value Components of the Hypertext Transfer Protocol (HTTP/1.1).
The code in your model is responsible for setting or updating any custom attributes in the response. If your code does not set this value in the response, an empty value is returned. For example, if a custom attribute represents the trace ID, your model can prepend the custom attribute with Trace ID: in your post-processing function. This feature is currently supported in the Amazon Web Services SDKs but not in the Amazon SageMaker Python SDK.
TargetVariant
Specify the production variant to send the inference request to when invoking an endpoint that is running two or more variants. Note that this parameter overrides the default behavior for the endpoint, which is to distribute the invocation traffic based on the variant weights.
For information about how to use variant targeting to perform A/B testing, see Test models in production.
TargetContainerHostname
If the endpoint hosts multiple containers and is configured to use direct invocation, this parameter specifies the host name of the container to invoke.
InferenceId
An identifier that you assign to your request.
InferenceComponentName
If the endpoint hosts one or more inference components, this parameter specifies the name of the inference component to invoke for a streaming response.
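For illustration, a call might look like the following sketch using the paws SDK. The endpoint name and JSON payload here are hypothetical, not real resources; the endpoint's container must support streaming, and your credentials must allow the sagemaker:InvokeEndpoint action.

```r
# Sketch: invoke a hypothetical streaming endpoint with paws.
# "my-streaming-endpoint" and the request payload are illustrative only.
library(paws)

svc <- sagemakerruntime()

resp <- svc$invoke_endpoint_with_response_stream(
  EndpointName = "my-streaming-endpoint",          # hypothetical endpoint name
  Body = charToRaw('{"inputs": "Tell me a story."}'),  # raw bytes, per ContentType
  ContentType = "application/json",
  Accept = "application/json"
)
```

Note that Body must be raw bytes (here via charToRaw), since Amazon SageMaker passes the body to the model verbatim in the format declared by ContentType.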
Value
A list with the following syntax:
list(
Body = list(
PayloadPart = list(
Bytes = raw
),
ModelStreamError = list(
Message = "string",
ErrorCode = "string"
),
InternalStreamFailure = list(
Message = "string"
)
),
ContentType = "string",
InvokedProductionVariant = "string",
CustomAttributes = "string"
)
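As a sketch of consuming this value: the payload arrives incrementally in PayloadPart entries, while ModelStreamError and InternalStreamFailure report stream-level errors. The helper below is a hypothetical name, and it assumes resp$Body is a list of event elements shaped as in the syntax above; it concatenates the payload bytes and raises the two error event types.

```r
# Sketch: concatenate PayloadPart bytes from a streamed inference response.
# Assumes `resp` is the list returned by invoke_endpoint_with_response_stream()
# and that resp$Body holds one element per stream event.
collect_stream <- function(resp) {
  chunks <- character(0)
  for (event in resp$Body) {
    if (!is.null(event$PayloadPart)) {
      # A chunk of the inference response payload.
      chunks <- c(chunks, rawToChar(event$PayloadPart$Bytes))
    } else if (!is.null(event$ModelStreamError)) {
      stop(sprintf("Model stream error %s: %s",
                   event$ModelStreamError$ErrorCode,
                   event$ModelStreamError$Message))
    } else if (!is.null(event$InternalStreamFailure)) {
      stop(event$InternalStreamFailure$Message)
    }
  }
  paste0(chunks, collapse = "")
}
```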