LLMConversation: R Documentation
An R6 class for managing a conversation among multiple Agent objects.

Includes optional conversation-level summarization if 'summarizer_config' is provided. 'summarizer_config' is a list that can contain:

llm_config: The llm_config used for the summarizer call (defaults to a basic OpenAI configuration).

prompt: A custom summarizer prompt (a default is provided).

threshold: Word-count threshold that triggers summarization (default 3000 words).

summary_length: Target length of the summary, in words (default 400).

Once the total conversation word count exceeds 'threshold', summarization is triggered: the conversation history is replaced with a single condensed message that keeps track of who said what.
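A minimal sketch of a 'summarizer_config' list. Only the four list fields are documented on this page; the llm_config() call and its arguments (provider, model, api_key) are assumptions for illustration, so substitute the configuration you actually use:

```r
# Hypothetical summarizer configuration; the llm_config() arguments
# below are illustrative, not guaranteed by this page.
summarizer_config <- list(
  llm_config     = llm_config(provider = "openai",
                              model    = "gpt-4o-mini",
                              api_key  = Sys.getenv("OPENAI_API_KEY")),
  prompt         = "Condense the conversation, keeping track of who said what.",
  threshold      = 2000,  # summarize once the history exceeds 2000 words
  summary_length = 300    # aim for roughly 300 words
)
```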
agents: A named list of Agent objects.

conversation_history: A list of speaker/text pairs for the entire conversation.

conversation_history_full: A complete copy of the speaker/text pairs that is never modified (in particular, never replaced by summarization) and is not used directly.

topic: A short string describing the conversation's theme.

prompts: An optional list of prompt templates (may be ignored).

shared_memory: A global store that is also fed into each agent's memory.

last_response: The last response received.

total_tokens_sent: Total tokens sent over the conversation.

total_tokens_received: Total tokens received over the conversation.

summarizer_config: Config list controlling optional conversation-level summarization.
new()
Create a new conversation.
LLMConversation$new(topic, prompts = NULL, summarizer_config = NULL)

topic: Character. The conversation topic.

prompts: Optional named list of prompt templates.

summarizer_config: Optional list controlling conversation-level summarization.
add_agent()
Add an Agent to this conversation. The agent is stored under its agent$id.
LLMConversation$add_agent(agent)

agent: An Agent object.
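The constructor and add_agent() combine as sketched below. The Agent$new() arguments shown (id, llm_config) are assumptions, so check the Agent class documentation for its actual signature:

```r
# Create a conversation and register one agent (Agent$new() arguments
# are illustrative, not guaranteed by this page).
conv    <- LLMConversation$new(topic = "Trip planning")
planner <- Agent$new(id = "Planner", llm_config = my_llm_config)
conv$add_agent(planner)  # now available as conv$agents[["Planner"]]
```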
add_message()
Add a message to the global conversation log; it is also appended to shared memory. If summarization is configured, it may then be triggered.
LLMConversation$add_message(speaker, text)

speaker: Character. The name of the speaker.

text: Character. What they said.
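For example, a moderator line can be logged directly. If a 'summarizer_config' was supplied and the word-count threshold is crossed, this call may also condense the history:

```r
conv$add_message(speaker = "Moderator",
                 text    = "Welcome! Today's topic is trip planning.")
conv$print_history()  # the new line appears in the global log
```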
converse()
Have a specific agent produce a response. The entire global conversation plus shared memory is temporarily loaded into that agent, the new message is recorded in the conversation, and the agent's memory is then reset except for its own new line.
LLMConversation$converse( agent_id, prompt_template, replacements = list(), verbose = FALSE )

agent_id: Character. The ID of the agent that should respond.

prompt_template: Character. The prompt template for the agent.

replacements: A named list of placeholder values to fill into the prompt.

verbose: Logical. If TRUE, prints extra information.
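A sketch of a single turn; the agent ID "Planner" and the {destination} placeholder are hypothetical:

```r
conv$converse(
  agent_id        = "Planner",
  prompt_template = "Suggest an itinerary for {destination}.",
  replacements    = list(destination = "Lisbon"),
  verbose         = TRUE
)
```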
run()
Run a multi-step conversation among a sequence of agents.
LLMConversation$run( agent_sequence, prompt_template, replacements = list(), verbose = FALSE )

agent_sequence: Character vector of agent IDs in the order they speak.

prompt_template: A single string, or a named list of strings keyed by agent ID.

replacements: A single list, or a list of lists with per-agent placeholders.

verbose: Logical. If TRUE, prints extra information.
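A sketch of a two-agent round using per-agent templates; the agent IDs "Planner" and "Critic" and the {task} placeholder are hypothetical:

```r
conv$run(
  agent_sequence  = c("Planner", "Critic"),
  prompt_template = list(
    Planner = "Propose a plan for {task}.",
    Critic  = "Critique the previous proposal."
  ),
  replacements    = list(task = "a three-day itinerary"),
  verbose         = FALSE
)
```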
print_history()
Print the conversation so far to the console.
LLMConversation$print_history()
reset_conversation()
Clear the global conversation and reset all agents' memories.
LLMConversation$reset_conversation()
|>()
Pipe-like operator to chain conversation steps, e.g. conv |> "Solver"(...).
LLMConversation$|>(agent_id)

agent_id: Character. The ID of the agent to call next.

Returns a function that expects (prompt_template, replacements, verbose).
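Chaining might then look as sketched below; beyond the conv |> "Solver"(...) pattern shown above, the exact calling form is an assumption, and the agent IDs are hypothetical:

```r
# Hypothetical chain: each step hands the conversation to the named agent.
conv |>
  "Planner"(prompt_template = "Propose a plan for {task}.",
            replacements    = list(task = "a three-day itinerary")) |>
  "Critic"(prompt_template  = "Critique the previous proposal.")
```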
maybe_summarize_conversation()
Summarize the conversation if summarizer_config is non-NULL and the word count of conversation_history exceeds summarizer_config$threshold.
LLMConversation$maybe_summarize_conversation()
summarize_conversation()
Summarize the conversation so far into one condensed message. The new conversation history becomes a single message with speaker = "summary".
LLMConversation$summarize_conversation()
clone()
The objects of this class are cloneable with this method.
LLMConversation$clone(deep = FALSE)
deep: Whether to make a deep clone.