chat                                                        R Documentation

Description
The chat() function sends a message to a language model via a specified
provider and returns the response. It routes the provided LLMMessage object
to the appropriate provider-specific chat function, while letting you set
common arguments that apply across different providers in one place.
Usage

chat(
  .llm,
  .provider = getOption("tidyllm_chat_default"),
  .dry_run = NULL,
  .stream = NULL,
  .temperature = NULL,
  .timeout = NULL,
  .top_p = NULL,
  .max_tries = NULL,
  .model = NULL,
  .verbose = NULL,
  .json_schema = NULL,
  .tools = NULL,
  .seed = NULL,
  .stop = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL
)
Arguments

.llm
    An LLMMessage object containing the conversation to send to the model.

.provider
    A function or function call specifying the language model provider and
    any additional provider-specific parameters. This should be a call to a
    provider function such as openai(), claude(), mistral(), or ollama().

.dry_run
    Logical; if TRUE, the request is constructed and returned without being
    sent to the provider.

.stream
    Logical; if TRUE, the response is streamed from the model as it is
    generated.

.temperature
    Numeric; controls the randomness of the model's output
    (0 = deterministic).

.timeout
    Numeric; the maximum time (in seconds) to wait for a response.

.top_p
    Numeric; nucleus sampling parameter, which limits sampling to the
    smallest set of tokens whose cumulative probability reaches the given
    threshold.

.max_tries
    Integer; the maximum number of retries for failed requests.

.model
    Character; the model identifier to use (e.g., "mixtral").

.verbose
    Logical; if TRUE, additional information about the request is printed.

.json_schema
    List; a JSON schema, given as an R list, that enforces the structure of
    the model's output (see the sketch after this list).

.tools
    Either a single TOOL object or a list of TOOL objects representing the
    functions available for tool calls.

.seed
    Integer; sets a random seed for reproducibility.

.stop
    Character vector; sequences at which the model should stop generating
    further tokens.

.frequency_penalty
    Numeric; adjusts the likelihood of repeating tokens (positive values
    decrease repetition).

.presence_penalty
    Numeric; adjusts the likelihood of introducing new tokens (positive
    values encourage novelty).
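A minimal sketch of a .json_schema argument passed as an R list. The schema
shape shown here (plain JSON Schema keywords) and the openai() provider call
are illustrative assumptions; the exact format a given provider accepts may
differ.

# Hypothetical schema constraining the reply to two fields
person_schema <- list(
  type = "object",
  properties = list(
    name = list(type = "string"),
    age  = list(type = "integer")
  ),
  required = list("name", "age")
)

llm_message("Extract the person mentioned in the text above.") |>
  chat(openai(), .json_schema = person_schema)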
Details

The chat() function provides a unified interface for interacting with
different language model providers. Common arguments such as .temperature,
.model, and .stream are supported by most providers and can be passed
directly to chat(). If a provider does not support a particular argument, an
error is raised. Advanced provider-specific configurations can be accessed
via the provider functions.
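A short sketch of the two configuration levels, reusing the ollama()
provider and "mixtral" model from the examples below:

# Provider-specific settings go into the provider call; common arguments
# such as .model, .temperature, and .timeout go directly to chat()
llm_message("Summarise this paragraph: ...") |>
  chat(
    ollama(.ollama_server = "https://my-ollama-server.de"),
    .model = "mixtral",
    .temperature = 0,   # deterministic output
    .timeout = 120      # wait up to two minutes for a response
  )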
Value

An updated LLMMessage object containing the response from the language
model.
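The returned object holds the whole conversation, so the latest answer has
to be extracted from it. A brief sketch, assuming a get_reply() accessor
that returns the last assistant message as a character string:

# Run a chat and pull the model's reply out of the updated LLMMessage
conversation <- llm_message("Hello World") |>
  chat(claude())
reply_text <- get_reply(conversation)  # assumed accessor
print(reply_text)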
Examples

## Not run:
# Basic usage with the Ollama provider on a custom server
llm_message("Hello World") |>
  chat(ollama(.ollama_server = "https://my-ollama-server.de"), .model = "mixtral")

# Providers can also be given as a bare function
llm_message("Hello World") |>
  chat(mistral, .model = "mixtral")

# Use streaming with the Claude provider
llm_message("Tell me a story") |>
  chat(claude(), .stream = TRUE)

## End(Not run)