| chat | R Documentation |
Description

Send a message to a Large Language Model and get a response.
Usage

chat(prompt, model = NULL, system = NULL, history = NULL,
     temperature = NULL, max_tokens = NULL,
     provider = c("auto", "openai", "anthropic", "moonshot", "ollama"),
     stream = FALSE, ...)
Arguments

prompt        Character. The user message to send.

model         Character or NULL. Model name (e.g., "gpt-4o",
              "claude-3-5-sonnet-latest", "llama3.2"). If NULL, a
              default model is chosen.

system        Character or NULL. System prompt used to set context.

history       List or NULL. Previous conversation turns, as returned in
              the history component of an earlier call.

temperature   Numeric or NULL. Sampling temperature (0-2).

max_tokens    Integer or NULL. Maximum number of tokens in the response.

provider      Character. One of "auto", "openai", "anthropic",
              "moonshot", or "ollama".

stream        Logical. If TRUE, stream the response, printing it as it
              arrives.

...           Additional parameters passed to the provider API.
Value

A list with components:

content       The assistant's response text.

model         The model used.

usage         Token usage (if available).

history       Updated conversation history, suitable for passing to a
              follow-up call.
Examples

## Not run: 
# Simple chat
chat("What is 2+2?")

# With a system prompt
chat("Explain R", system = "You are a helpful programming tutor.")

# Continue a conversation
result <- chat("Hello")
chat("Tell me more", history = result$history)

## End(Not run)
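The history component returned by each call allows conversations of arbitrary length. The following is a minimal sketch of a two-turn session, assuming API credentials for the chosen provider are already configured in the environment; the specific model defaults and provider behaviour depend on your setup:

```r
## Not run: 
# First turn: ask a question with an explicit provider and a low
# temperature for a focused answer.
result <- chat(
  "Summarise the difference between a list and a vector in R.",
  system      = "You are a concise R tutor.",
  provider    = "openai",      # or "anthropic", "moonshot", "ollama"
  temperature = 0.2,
  max_tokens  = 300
)
cat(result$content)

# Second turn: pass the returned history so the model sees the
# earlier exchange as context.
followup <- chat("Give a one-line code example of each.",
                 history = result$history)
cat(followup$content)

## End(Not run)
```

Threading result$history into the next call is what makes the conversation stateful; each call is otherwise independent.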