cache_llm_call: Cache LLM API Calls

cache_llm_call {LLMR}    R Documentation

Cache LLM API Calls

Description

A memoised version of call_llm to avoid repeated identical requests.

Arguments

config

An llm_config object from llm_config.

messages

A list of message objects, or a character vector of texts when requesting embeddings.

verbose

Logical. If TRUE, prints the full API response (passed to call_llm).

json

Logical. If TRUE, returns raw JSON (passed to call_llm).

Details

- Requires the memoise package. Add memoise to your package's DESCRIPTION.
- Clearing the cache can be done via memoise::forget(cache_llm_call) or by restarting your R session (see the sketch below).
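
A minimal sketch of resetting the cache between runs, using only the memoise::forget() call mentioned above:

  # Drop every cached response so the next identical call hits the API again
  memoise::forget(cache_llm_call)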

Value

The (memoised) response object from call_llm.

Examples

## Not run: 
  # Using cache_llm_call:
  response1 <- cache_llm_call(my_config, list(list(role="user", content="Hello!")))
  # Subsequent identical calls won't hit the API unless we clear the cache.
  response2 <- cache_llm_call(my_config, list(list(role="user", content="Hello!")))

## End(Not run)
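
A slightly fuller sketch that also builds the configuration object. The provider, model, and api_key argument names passed to llm_config() are assumptions for illustration; check ?llm_config for the exact signature.

## Not run: 
  # Hypothetical configuration; argument names and values are illustrative only
  my_config <- llm_config(
    provider = "openai",
    model    = "gpt-4o-mini",
    api_key  = Sys.getenv("OPENAI_API_KEY")
  )

  msgs <- list(list(role = "user", content = "Hello!"))
  # First call reaches the API; the identical second call is served from the cache
  out1 <- cache_llm_call(my_config, msgs, verbose = FALSE, json = FALSE)
  out2 <- cache_llm_call(my_config, msgs, verbose = FALSE, json = FALSE)

## End(Not run)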
