Agent | R Documentation
An R6 class representing an agent that interacts with language models.
*At the agent level, summarization is not automated.* The 'maybe_summarize_memory()' method can be called manually to compress the agent's memory.
id
Unique ID for this Agent.
context_length
Maximum number of conversation turns stored in memory.
model_config
The llm_config object specifying which LLM to call.
memory
A list of speaker/text pairs that the agent has memorized.
persona
Named list for additional agent-specific details (e.g., role, style).
enable_summarization
Logical. If TRUE, the user may call 'maybe_summarize_memory()'.
token_threshold
Numeric. Token count against which total_tokens is compared when summarization is triggered manually.
total_tokens
Numeric. Estimated total tokens in memory.
summarization_density
Character. "low", "medium", or "high".
summarization_prompt
Character. Optional custom prompt for summarization.
summarizer_config
Optional llm_config used for summarizing the agent's memory.
auto_inject_conversation
Logical. If TRUE, automatically prepend conversation memory if missing.
new()
Create a new Agent instance.
Agent$new(
  id,
  context_length = 5,
  persona = NULL,
  model_config,
  enable_summarization = TRUE,
  token_threshold = 1000,
  summarization_density = "medium",
  summarization_prompt = NULL,
  summarizer_config = NULL,
  auto_inject_conversation = TRUE
)
id
Character. The agent's unique identifier.
context_length
Numeric. The maximum number of messages stored (default = 5).
persona
A named list of persona details.
model_config
An llm_config object specifying LLM settings.
enable_summarization
Logical. If TRUE, you can manually call summarization.
token_threshold
Numeric. Token threshold consulted when summarization is invoked manually.
summarization_density
Character. "low", "medium", "high" for summary detail.
summarization_prompt
Character. Optional custom prompt for summarization.
summarizer_config
Optional llm_config for summarization calls.
auto_inject_conversation
Logical. If TRUE, automatically prepend conversation memory to the prompt if missing.
A new Agent object.
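A minimal construction sketch. The llm_config arguments shown (provider, model, api_key) are illustrative assumptions, not verified against the actual llm_config signature:

```r
# Configure the backing model (argument names are assumed)
cfg <- llm_config(
  provider = "openai",
  model    = "gpt-4o-mini",
  api_key  = Sys.getenv("OPENAI_API_KEY")
)

# Create an agent with a persona and a small memory window
agent <- Agent$new(
  id             = "analyst-1",
  context_length = 5,
  persona        = list(role = "data analyst", style = "concise"),
  model_config   = cfg
)
```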
add_memory()
Add a new message to the agent's memory. We do NOT automatically call summarization here.
Agent$add_memory(speaker, text)
speaker
Character. The speaker name or ID.
text
Character. The message content.
maybe_summarize_memory()
Manually compress the agent's memory if desired. Summarizes all memory into a single "summary" message.
Agent$maybe_summarize_memory()
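A sketch of the memory lifecycle, assuming the agent object 'agent' was created as above. Note that add_memory() never triggers summarization on its own:

```r
# Record a few conversation turns; nothing is summarized automatically
agent$add_memory(speaker = "user",      text = "Summarize Q3 sales.")
agent$add_memory(speaker = "analyst-1", text = "Q3 revenue rose 12%.")

# Optionally compress the whole memory into a single "summary" message
agent$maybe_summarize_memory()

# Start fresh when the conversation is over
agent$reset_memory()
```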
generate_prompt()
Internal helper to prepare final prompt by substituting placeholders.
Agent$generate_prompt(template, replacements = list())
template
Character. The prompt template.
replacements
A named list of placeholder values.
Character. The prompt with placeholders replaced.
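A placeholder-substitution sketch, assuming the {{name}} template syntax used elsewhere on this page:

```r
# Substitute named replacements into a template; returns a character string
prompt <- agent$generate_prompt(
  template     = "You are {{role}}. Answer briefly: {{question}}",
  replacements = list(role = "a data analyst", question = "What drove Q3 growth?")
)
```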
call_llm_agent()
Low-level call to the LLM (via call_llm_robust) with a final prompt. If a persona is defined, a system message is prepended to set the role.
Agent$call_llm_agent(prompt, verbose = FALSE)
prompt
Character. The final prompt text.
verbose
Logical. If TRUE, prints debug info. Default FALSE.
A list with:
* text
* tokens_sent
* tokens_received
* full_response (raw list)
generate()
Generate a response from the LLM using a prompt template and optional replacements. Substitutes placeholders, calls the LLM, saves output to memory, returns the response.
Agent$generate(prompt_template, replacements = list(), verbose = FALSE)
prompt_template
Character. The prompt template.
replacements
A named list of placeholder values.
verbose
Logical. If TRUE, prints extra info. Default FALSE.
A list with fields text, tokens_sent, tokens_received, and full_response.
think()
The agent "thinks" about a topic, possibly using the entire memory in the prompt. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, we prepend the memory.
Agent$think(topic, prompt_template, replacements = list(), verbose = FALSE)
topic
Character. Label for the thought.
prompt_template
Character. The prompt template.
replacements
Named list for additional placeholders.
verbose
Logical. If TRUE, prints info.
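A think() sketch. Because the template below omits {{conversation}} and auto_inject_conversation is TRUE by default, the agent's memory would be prepended automatically:

```r
agent$think(
  topic           = "pricing",
  prompt_template = "Reflect privately on the risks in our pricing strategy."
)
```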
respond()
The agent produces a public "response" about a topic. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, we prepend the memory.
Agent$respond(topic, prompt_template, replacements = list(), verbose = FALSE)
topic
Character. A short label for the question/issue.
prompt_template
Character. The prompt template.
replacements
Named list of placeholder substitutions.
verbose
Logical. If TRUE, prints extra info.
A list with text, tokens_sent, tokens_received, and full_response.
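A respond() sketch mirroring think(), but producing a public answer; here the template includes {{conversation}} explicitly, so no auto-injection occurs:

```r
out <- agent$respond(
  topic           = "pricing",
  prompt_template = "Given this conversation: {{conversation}}, answer the customer's pricing question."
)
out$text  # the public response, also saved to memory
```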
reset_memory()
Reset the agent's memory.
Agent$reset_memory()
clone()
The objects of this class are cloneable with this method.
Agent$clone(deep = FALSE)
deep
Whether to make a deep clone.