Agent: Agent Class for LLM Interactions

Description

An R6 class representing an agent that interacts with language models.

*At the agent level, summarization is not automated.* Call 'maybe_summarize_memory()' manually if you wish to compress the agent's memory.

Public fields

id

Unique ID for this Agent.

context_length

Maximum number of conversation turns stored in memory.

model_config

The llm_config specifying which LLM to call.

memory

A list of speaker/text pairs that the agent has memorized.

persona

Named list for additional agent-specific details (e.g., role, style).

enable_summarization

Logical. If TRUE, the user may call 'maybe_summarize_memory()'.

token_threshold

Numeric. The token count that 'total_tokens' is compared against when summarization is triggered manually.

total_tokens

Numeric. Estimated total tokens in memory.

summarization_density

Character. "low", "medium", or "high".

summarization_prompt

Character. Optional custom prompt for summarization.

summarizer_config

Optional llm_config for summarizing the agent's memory.

auto_inject_conversation

Logical. If TRUE, automatically prepend conversation memory if missing.

Methods

Public methods


Method new()

Create a new Agent instance.

Usage
Agent$new(
  id,
  context_length = 5,
  persona = NULL,
  model_config,
  enable_summarization = TRUE,
  token_threshold = 1000,
  summarization_density = "medium",
  summarization_prompt = NULL,
  summarizer_config = NULL,
  auto_inject_conversation = TRUE
)
Arguments
id

Character. The agent's unique identifier.

context_length

Numeric. The maximum number of messages stored (default = 5).

persona

A named list of persona details.

model_config

An llm_config object specifying LLM settings.

enable_summarization

Logical. If TRUE, you can manually call summarization.

token_threshold

Numeric. The token threshold to compare against when invoking summarization manually (default = 1000).

summarization_density

Character. "low", "medium", "high" for summary detail.

summarization_prompt

Character. Optional custom prompt for summarization.

summarizer_config

Optional llm_config for summarization calls.

auto_inject_conversation

Logical. If TRUE, conversation memory is automatically prepended to the prompt when the template lacks it.

Returns

A new Agent object.
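As a sketch of typical construction (the 'llm_config()' call comes from the wider LLMR package; the provider, model, and environment-variable names here are placeholders, not prescribed values):

```r
library(LLMR)

# Assumed llm_config() usage; provider/model/api_key are illustrative.
cfg <- llm_config(
  provider = "openai",
  model    = "gpt-4o-mini",
  api_key  = Sys.getenv("OPENAI_API_KEY")
)

agent <- Agent$new(
  id           = "analyst_1",
  persona      = list(role = "data analyst", style = "concise"),
  model_config = cfg
)
```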


Method add_memory()

Add a new message to the agent's memory. We do NOT automatically call summarization here.

Usage
Agent$add_memory(speaker, text)
Arguments
speaker

Character. The speaker name or ID.

text

Character. The message content.
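For example, recording both sides of an exchange (no summarization is triggered by these calls):

```r
agent$add_memory(speaker = "user",  text = "What is the capital of France?")
agent$add_memory(speaker = "agent", text = "The capital of France is Paris.")
```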


Method maybe_summarize_memory()

Manually compress the agent's memory if desired. Summarizes all memory into a single "summary" message.

Usage
Agent$maybe_summarize_memory()
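A sketch of a manual trigger, using the public fields described above to decide when to compress:

```r
# Compress memory once the estimated token count exceeds the threshold.
if (agent$enable_summarization && agent$total_tokens > agent$token_threshold) {
  agent$maybe_summarize_memory()
}
```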

Method generate_prompt()

Internal helper to prepare final prompt by substituting placeholders.

Usage
Agent$generate_prompt(template, replacements = list())
Arguments
template

Character. The prompt template.

replacements

A named list of placeholder values.

Returns

Character. The prompt with placeholders replaced.
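A sketch of placeholder substitution; the '{{topic}}' and '{{n}}' placeholder names are illustrative (the documented placeholder style follows '{{conversation}}' above):

```r
prompt <- agent$generate_prompt(
  template     = "Summarize the debate on {{topic}} in {{n}} sentences.",
  replacements = list(topic = "carbon taxes", n = 2)
)
```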


Method call_llm_agent()

Low-level call to the LLM (via 'call_llm_robust()') with a final prompt. If a persona is defined, a system message is prepended to set the role.

Usage
Agent$call_llm_agent(prompt, verbose = FALSE)
Arguments
prompt

Character. The final prompt text.

verbose

Logical. If TRUE, prints debug info. Default FALSE.

Returns

A list with elements text, tokens_sent, tokens_received, and full_response (the raw response list).
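A sketch of a low-level call (requires a valid API key in the agent's model_config; the prompt text is illustrative):

```r
res <- agent$call_llm_agent(
  prompt  = "State one advantage of randomized experiments.",
  verbose = TRUE
)
res$text         # the model's reply
res$tokens_sent  # estimated tokens sent
```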


Method generate()

Generate a response from the LLM using a prompt template and optional replacements. Substitutes placeholders, calls the LLM, saves output to memory, returns the response.

Usage
Agent$generate(prompt_template, replacements = list(), verbose = FALSE)
Arguments
prompt_template

Character. The prompt template.

replacements

A named list of placeholder values.

verbose

Logical. If TRUE, prints extra info. Default FALSE.

Returns

A list with fields text, tokens_sent, tokens_received, full_response.
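A sketch of generating with a template; the '{{role}}' and '{{question}}' placeholders are illustrative. The response is also saved to the agent's memory:

```r
res <- agent$generate(
  prompt_template = "As a {{role}}, answer briefly: {{question}}",
  replacements    = list(role     = "statistician",
                         question = "What does p < 0.05 mean?")
)
res$text
```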


Method think()

The agent "thinks" about a topic, possibly using the entire memory in the prompt. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, we prepend the memory.

Usage
Agent$think(topic, prompt_template, replacements = list(), verbose = FALSE)
Arguments
topic

Character. Label for the thought.

prompt_template

Character. The prompt template.

replacements

Named list for additional placeholders.

verbose

Logical. If TRUE, prints info.
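As a sketch: the template below lacks '{{conversation}}', so with auto_inject_conversation = TRUE the agent's memory is prepended automatically. The '{{topic}}' placeholder is filled from the topic argument:

```r
agent$think(
  topic           = "pricing",
  prompt_template = "Privately reflect on {{topic}} given what you know."
)
```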


Method respond()

The agent produces a public "response" about a topic. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, we prepend the memory.

Usage
Agent$respond(topic, prompt_template, replacements = list(), verbose = FALSE)
Arguments
topic

Character. A short label for the question/issue.

prompt_template

Character. The prompt template.

replacements

Named list of placeholder substitutions.

verbose

Logical. If TRUE, prints extra info.

Returns

A list with text, tokens_sent, tokens_received, full_response.
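A sketch of producing a public response on the same topic; as with think(), memory is prepended when the template lacks '{{conversation}}':

```r
res <- agent$respond(
  topic           = "pricing",
  prompt_template = "Give your public position on {{topic}}."
)
res$text
```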


Method reset_memory()

Reset the agent's memory.

Usage
Agent$reset_memory()
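For example, clearing accumulated memory between independent tasks:

```r
agent$reset_memory()
agent$memory  # memory is now empty
```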

Method clone()

The objects of this class are cloneable with this method.

Usage
Agent$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.


LLMR documentation built on April 4, 2025, 1:11 a.m.
