bedrockruntime_converse_stream: Sends messages to the specified Amazon Bedrock model and returns the response in a stream

View source: R/bedrockruntime_operations.R


Sends messages to the specified Amazon Bedrock model and returns the response in a stream

Description

Sends messages to the specified Amazon Bedrock model and returns the response in a stream. converse_stream provides a consistent API that works with all Amazon Bedrock models that support messages. This allows you to write code once and use it with different models. Should a model have unique inference parameters, you can also pass those unique parameters to the model.

See https://www.paws-r-sdk.com/docs/bedrockruntime_converse_stream/ for full documentation.

Usage

bedrockruntime_converse_stream(
  modelId,
  messages = NULL,
  system = NULL,
  inferenceConfig = NULL,
  toolConfig = NULL,
  guardrailConfig = NULL,
  additionalModelRequestFields = NULL,
  promptVariables = NULL,
  additionalModelResponseFieldPaths = NULL,
  requestMetadata = NULL,
  performanceConfig = NULL
)
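The call below is a minimal sketch of how this operation is typically invoked through a paws client object rather than by calling the function above directly; the client constructor paws::bedrockruntime(), the model ID, and the message text are placeholders or assumptions, and the nested-list shapes mirror the underlying Converse API request.

# Sketch: create a Bedrock Runtime client and stream a conversational response.
# Assumes AWS credentials and a region are already configured for paws.
svc <- paws::bedrockruntime()

msgs <- list(
  list(
    role = "user",
    content = list(
      list(text = "Summarise what Amazon Bedrock does in one sentence.")
    )
  )
)

resp <- svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder model ID
  system = list(
    list(text = "You are a concise assistant.")         # system prompt content block
  ),
  messages = msgs
)
# The response is returned as a stream of events; see the paws documentation
# linked above for how to consume the event stream.

Later argument sketches in this page reuse the svc client and the msgs message list defined here.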

Arguments

modelId

[required] Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use; see the full documentation linked above for the supported values.

The Converse API doesn't support imported models.

messages

The messages that you want to send to the model.

system

A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.

inferenceConfig

Inference parameters to pass to the model. converse and converse_stream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.
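As a hedged illustration, the base inference parameters can be supplied as a named list; the field names below (maxTokens, temperature, topP, stopSequences) follow the Converse API's InferenceConfiguration, the values are arbitrary, and svc and msgs come from the sketch in the Usage section.

# Sketch: base inference parameters passed via inferenceConfig.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  inferenceConfig = list(
    maxTokens = 512,                 # cap on generated tokens
    temperature = 0.2,               # lower values give more deterministic output
    topP = 0.9,
    stopSequences = list("\n\nHuman:")
  )
)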

toolConfig

Configuration information for the tools that the model can use when generating a response.

For information about models that support streaming tool use, see Supported models and model features.
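The sketch below shows one plausible toolConfig shape, mirroring the Converse API's ToolConfiguration (a list of tools, each with a toolSpec and a JSON input schema); the tool name, description, and schema are made up for illustration, and svc and msgs come from the Usage sketch.

# Sketch: declare a single hypothetical tool the model may request to call.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  toolConfig = list(
    tools = list(
      list(
        toolSpec = list(
          name = "get_weather",                          # hypothetical tool name
          description = "Look up the current weather for a city.",
          inputSchema = list(
            json = list(                                 # JSON schema for the tool input
              type = "object",
              properties = list(
                city = list(type = "string")
              ),
              required = list("city")
            )
          )
        )
      )
    )
  )
)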

guardrailConfig

Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.
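A hedged sketch of attaching an existing guardrail; the identifier and version are placeholders, and the field names follow the Converse API's guardrail stream configuration.

# Sketch: apply an existing guardrail to the request.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  guardrailConfig = list(
    guardrailIdentifier = "abc123xyz",   # placeholder guardrail ID
    guardrailVersion = "1",              # guardrail version, or "DRAFT"
    trace = "enabled"                    # optional: include the guardrail trace in the response
  )
)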

additionalModelRequestFields

Additional inference parameters that the model supports, beyond the base set of inference parameters that converse and converse_stream support in the inferenceConfig field. For more information, see Model parameters.
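For example, a provider-specific sampling parameter such as Anthropic's top_k is not part of the base set; the sketch below passes it through additionalModelRequestFields. The parameter name and value depend on the model provider and are shown here only as an illustration.

# Sketch: pass a model-specific parameter outside the base inference set.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  inferenceConfig = list(maxTokens = 512),
  additionalModelRequestFields = list(
    top_k = 200                                          # Anthropic-specific sampling parameter
  )
)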

promptVariables

Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the modelId field.
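A hedged sketch, assuming the modelId refers to a prompt resource from Prompt management that defines a variable named topic; the prompt ARN is a placeholder, and each variable maps to an object holding the text to fill in.

# Sketch: fill in variables defined in a managed prompt resource.
svc$converse_stream(
  modelId = "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345",  # placeholder prompt ARN
  promptVariables = list(
    topic = list(text = "renewable energy")   # hypothetical variable defined in the prompt
  )
)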

additionalModelResponseFieldPaths

Additional model parameters field paths to return in the response. converse and converse_stream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths.

⁠[ "/stop_sequence" ]⁠

For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation.

converse and converse_stream reject an empty JSON Pointer or an incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by converse.
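Following the example path above, the field paths are supplied as a list of JSON Pointer strings; a minimal sketch, reusing svc and msgs from the Usage section:

# Sketch: request an extra model-specific response field by JSON Pointer.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  additionalModelResponseFieldPaths = list("/stop_sequence")
)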

requestMetadata

Key-value pairs that you can use to filter invocation logs.
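A short sketch; the keys and values below are arbitrary tags used only for filtering invocation logs, and svc and msgs come from the Usage sketch.

# Sketch: tag the invocation so it can be filtered in invocation logs.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  requestMetadata = list(
    project = "docs-demo",        # arbitrary key-value tags
    environment = "test"
  )
)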

performanceConfig

Model performance settings for the request.
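A hedged sketch; the latency field follows the Converse API's performance configuration, where "optimized" requests latency-optimized inference for models that support it and "standard" is the default.

# Sketch: request latency-optimized inference for supported models.
svc$converse_stream(
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder
  messages = msgs,
  performanceConfig = list(
    latency = "optimized"         # "standard" (default) or "optimized"
  )
)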

