llm_chat_session                                        R Documentation

Description:
Create and interact with a stateful chat session object that retains
message history. This documentation page covers the constructor function
chat_session() as well as all S3 methods for the llm_chat_session class.
chat_session(config, system = NULL, ...)
## S3 method for class 'llm_chat_session'
as.data.frame(x, ...)
## S3 method for class 'llm_chat_session'
summary(object, ...)
## S3 method for class 'llm_chat_session'
head(x, n = 6L, width = getOption("width") - 15, ...)
## S3 method for class 'llm_chat_session'
tail(x, n = 6L, width = getOption("width") - 15, ...)
## S3 method for class 'llm_chat_session'
print(x, width = getOption("width") - 15, ...)
Arguments:

config      An llm_config for a generative model.

system      Optional system prompt inserted once at the beginning of
            the conversation.

...         Default arguments forwarded to every call_llm_robust()
            call made by the session.

x, object   An llm_chat_session object.

n           Number of turns to display.

width       Character width for truncating long messages.
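For instance, anything supplied through ... at construction time
becomes a default for every later $send(). The forwarded parameter
below (verbose) is an assumption for illustration; see
call_llm_robust() for what it actually accepts.

cfg  <- llm_config("openai", "gpt-4o-mini")
## 'verbose = FALSE' is assumed to be a pass-through default here:
chat <- chat_session(cfg, system = "Be concise.", verbose = FALSE)
chat$send("Hello")  # issued with the stored defaults applied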
Details:

The chat_session object provides a simple way to hold a
conversation with a generative model. It wraps call_llm_robust()
to benefit from retry logic, caching, and error logging.
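Each turn therefore reduces to one robust call with the accumulated
message list. A minimal sketch of that underlying call, assuming the
message format shown by $history(); the tuning arguments are omitted
since they vary (see the call_llm_robust() help page):

cfg  <- llm_config("openai", "gpt-4o-mini")
msgs <- list(
  list(role = "system", content = "Be concise."),
  list(role = "user",   content = "Hello")
)
## Retries transient failures; caching and error logging are
## handled internally by call_llm_robust().
reply <- call_llm_robust(cfg, msgs)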
Value:

For chat_session(), an object of class llm_chat_session. Each S3
method returns the type conventional for its generic (for example,
as.data.frame() returns a data frame).
Methods:

A private environment stores the running list of list(role,
content) messages. At each $send(), the history is sent in full to
the model, and provider-agnostic token counts are extracted from
the JSON response.
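To make the mechanics concrete, here is a minimal sketch of the
same pattern as a plain closure. This is illustrative only, not the
package implementation; ask() stands in for any function that takes
the message list and returns the assistant's reply.

## Illustrative only: a closure over a private environment that
## accumulates list(role, content) messages and replays the full
## history on every turn.
make_session <- function(ask, system = NULL) {
  env <- new.env(parent = emptyenv())
  env$messages <- if (is.null(system)) {
    list()
  } else {
    list(list(role = "system", content = system))
  }
  list(
    send = function(text) {
      env$messages <- c(env$messages,
                        list(list(role = "user", content = text)))
      reply <- ask(env$messages)  # full history goes out each time
      env$messages <- c(env$messages,
                        list(list(role = "assistant", content = reply)))
      invisible(reply)
    },
    history = function() env$messages,
    reset = function() {
      ## keep only the optional system message, as $reset() does
      env$messages <- Filter(function(m) m$role == "system",
                             env$messages)
      invisible(NULL)
    }
  )
}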
$send(text, ..., role = "user")
    Append a message (default role "user"), query the model, print
    the assistant's reply, and invisibly return it.

$send_structured(text, schema, ..., role = "user", .fields = NULL,
                 .validate_local = TRUE)
    Send a message with structured output enabled using schema,
    append the assistant's reply, parse the JSON (optionally
    validating it locally when .validate_local = TRUE), and
    invisibly return the parsed result. See the sketch after this
    list.

$history()
    Raw list of messages.

$history_df()
    Two-column data frame (role, content).

$tokens_sent() / $tokens_received()
    Running token totals.

$reset()
    Clear the history, retaining the optional system message.
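A hedged example of $send_structured(), assuming schema is a JSON
Schema written as a nested R list; the exact schema format the
package accepts may differ, so treat the structure below as an
assumption.

if (interactive()) {
  cfg  <- llm_config("openai", "gpt-4o-mini")
  chat <- chat_session(cfg)
  ## Assumed format: JSON Schema expressed as nested R lists.
  schema <- list(
    type = "object",
    properties = list(
      answer     = list(type = "string"),
      confidence = list(type = "number")
    ),
    required = list("answer", "confidence")
  )
  res <- chat$send_structured("Is R lazily evaluated?", schema)
  str(res)  # parsed list, e.g. res$answer and res$confidence
}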
See Also:

llm_config(), call_llm(), call_llm_robust(), llm_fn(), llm_mutate()

Examples:
if (interactive()) {
cfg <- llm_config("openai", "gpt-4o-mini")
chat <- chat_session(cfg, system = "Be concise.")
chat$send("Who invented the moon?")
chat$send("Explain why in one short sentence.")
chat # print() shows a summary and first 10 turns
summary(chat) # stats
tail(chat, 2)
as.data.frame(chat)
}