lexruntimev2_recognize_utterance: Sends user input to Amazon Lex V2

View source: R/lexruntimev2_operations.R

lexruntimev2_recognize_utterance    R Documentation

Sends user input to Amazon Lex V2

Description

Sends user input to Amazon Lex V2. You can send text or speech. Clients use this API to send text and audio requests to Amazon Lex V2 at runtime. Amazon Lex V2 interprets the user input using the machine learning model built for the bot.

See https://www.paws-r-sdk.com/docs/lexruntimev2_recognize_utterance/ for full documentation.

Usage

lexruntimev2_recognize_utterance(
  botId,
  botAliasId,
  localeId,
  sessionId,
  sessionState = NULL,
  requestAttributes = NULL,
  requestContentType,
  responseContentType = NULL,
  inputStream = NULL
)
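
What follows is a minimal sketch of a text request, assuming the usual paws client interface (svc <- lexruntimev2(); svc$recognize_utterance(...)) and placeholder bot, alias, and session identifiers; if a character string is not accepted for the inputStream blob, a raw vector (e.g. charToRaw()) can be passed instead.

library(paws)

# Create a Lex V2 runtime client; credentials and region come from the
# usual AWS configuration chain.
svc <- lexruntimev2()

# Send a plain-text utterance; the text itself goes in inputStream.
resp <- svc$recognize_utterance(
  botId = "ABCDEFGHIJ",         # placeholder bot identifier
  botAliasId = "TSTALIASID",    # placeholder alias identifier
  localeId = "en_US",
  sessionId = "my-session-001",
  requestContentType = "text/plain; charset=utf-8",
  responseContentType = "text/plain; charset=utf-8",
  inputStream = "I would like to book a hotel"
)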

Arguments

botId

[required] The identifier of the bot that should receive the request.

botAliasId

[required] The alias identifier in use for the bot that should receive the request.

localeId

[required] The locale where the session is in use.

sessionId

[required] The identifier of the session in use.

sessionState

Sets the state of the session with the user. You can use this to set the current intent, attributes, context, and dialog action. Use the dialog action to determine the next step that Amazon Lex V2 should use in the conversation with the user.

The sessionState field must be compressed using gzip and then base64 encoded before sending to Amazon Lex V2 (an encoding sketch follows the argument descriptions).

requestAttributes

Request-specific information passed between the client application and Amazon Lex V2.

The namespace ⁠x-amz-lex:⁠ is reserved for special attributes. Don't create any request attributes with the prefix ⁠x-amz-lex:⁠.

The requestAttributes field must be compressed using gzip and then base64 encoded before sending to Amazon Lex V2.

requestContentType

[required] Indicates the format for audio input or that the content is text. The header must start with one of the following prefixes:

  • PCM format, audio data must be in little-endian byte order.

    • audio/l16; rate=16000; channels=1

    • audio/x-l16; sample-rate=16000; channel-count=1

    • audio/lpcm; sample-rate=8000; sample-size-bits=16; channel-count=1; is-big-endian=false

  • Opus format

    • audio/x-cbr-opus-with-preamble;preamble-size=0;bit-rate=256000;frame-size-milliseconds=4

  • Text format

    • text/plain; charset=utf-8

responseContentType

The message that Amazon Lex V2 returns in the response can be either text or speech based on the responseContentType value.

  • If the value is ⁠text/plain;charset=utf-8⁠, Amazon Lex V2 returns text in the response.

  • If the value begins with ⁠audio/⁠, Amazon Lex V2 returns speech in the response. Amazon Lex V2 uses Amazon Polly to generate the speech using the configuration that you specified in the responseContentType parameter. For example, if you specify audio/mpeg as the value, Amazon Lex V2 returns speech in the MPEG format.

  • If the value is audio/pcm, the speech returned is audio/pcm at 16 KHz in 16-bit, little-endian format.

  • The following are the accepted values:

    • audio/mpeg

    • audio/ogg

    • audio/pcm (16 KHz)

    • audio/* (defaults to mpeg)

    • text/plain; charset=utf-8

inputStream

User input in PCM or Opus audio format, or in text format, as described in the requestContentType parameter (see the sketches below).
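
Both sessionState and requestAttributes travel as gzip-compressed, base64-encoded JSON. Below is a minimal encoding sketch, assuming jsonlite is available; encode_lex_field is a hypothetical helper name, and the session state and attribute values are illustrative only.

library(jsonlite)

# Hypothetical helper: serialize a list to JSON, gzip-compress the bytes,
# then base64-encode the result.
encode_lex_field <- function(x) {
  json <- as.character(toJSON(x, auto_unbox = TRUE))
  gz   <- memCompress(charToRaw(json), type = "gzip")
  enc  <- base64_enc(gz)
  gsub("[\r\n]", "", enc)  # defensively strip any line breaks
}

# Illustrative session state: ask the bot to elicit an intent next.
encoded_state <- encode_lex_field(list(
  dialogAction = list(type = "ElicitIntent")
))

# Illustrative request attribute (must not use the x-amz-lex: prefix).
encoded_attrs <- encode_lex_field(list(myAppScreen = "checkout"))

The resulting strings can then be passed as the sessionState and requestAttributes arguments of the call shown under Usage.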


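An end-to-end audio request, sketched under the assumption that a 16 kHz, 16-bit, mono, little-endian PCM recording is available on disk (utterance.pcm is a hypothetical path) and that the synthesized reply comes back as raw bytes in the response's audioStream element; the bot, alias, and session identifiers are placeholders.

library(paws)

svc <- lexruntimev2()

# Read the raw PCM audio to send as the utterance.
pcm <- readBin("utterance.pcm", what = "raw", n = file.size("utterance.pcm"))

resp <- svc$recognize_utterance(
  botId = "ABCDEFGHIJ",
  botAliasId = "TSTALIASID",
  localeId = "en_US",
  sessionId = "my-session-001",
  requestContentType = "audio/l16; rate=16000; channels=1",
  responseContentType = "audio/mpeg",   # ask for an MP3 reply
  inputStream = pcm
)

# Assuming the synthesized speech is returned as raw bytes in audioStream.
if (length(resp$audioStream) > 0) {
  writeBin(resp$audioStream, "reply.mp3")
}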