call_llm_par: Mode 4: Parallel Processing - List of Config-Message Pairs

View source: R/LLM_parallel_utils.R

call_llm_par    R Documentation

Mode 4: Parallel Processing - List of Config-Message Pairs

Description

Processes a list where each element contains both a config and a messages list, giving maximum flexibility for complex workflows that mix different configurations and prompts. This function requires setting up the parallel environment with 'setup_llm_parallel'.

Usage

call_llm_par(
  config_message_pairs,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)

Arguments

config_message_pairs

A list where each element is a list with 'config' and 'messages' elements.

tries

Integer. Number of retries for each call. Default is 10.

wait_seconds

Numeric. Initial wait time (seconds) before retry. Default is 2.

backoff_factor

Numeric. Multiplier for wait time after each failure. Default is 2.

verbose

Logical. If TRUE, prints progress and debug information.

json

Logical. If TRUE, returns raw JSON responses.

memoize

Logical. If TRUE, enables caching for identical requests.

max_workers

Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress

Logical. If TRUE, shows progress bar.

Value

A tibble with columns: pair_index, provider, model, response_text, success, error_message, plus all model parameters as additional columns.
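For downstream processing, a minimal sketch (assuming 'results' is the tibble described above and 'success' is a logical column) might separate failed calls from successful responses using base subsetting:

  # Illustrative post-processing only; column names taken from the Value section
  ok      <- results[results$success, ]
  failed  <- results[!results$success,
                     c("pair_index", "provider", "model", "error_message")]
  answers <- ok$response_text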

Examples

## Not run: 
  # Full flexibility with different configs and messages
  config1 <- llm_config(provider = "openai", model = "gpt-4o-mini",
                        api_key = Sys.getenv("OPENAI_API_KEY"))
  config2 <- llm_config(provider = "openai", model = "gpt-3.5-turbo",
                        api_key = Sys.getenv("OPENAI_API_KEY"))

  pairs <- list(
    list(config = config1, messages = list(list(role = "user", content = "What is AI?"))),
    list(config = config2, messages = list(list(role = "user", content = "Explain ML")))
  )

  # Set up the parallel environment
  setup_llm_parallel(workers = 4, verbose = TRUE)

  results <- call_llm_par(pairs)

  # Reset to sequential
  reset_llm_parallel(verbose = TRUE)

## End(Not run)
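
The pairs list can also be built programmatically. The sketch below is illustrative rather than part of the package examples: it reuses config1 and config2 from above and assumes each element only needs 'config' and 'messages' entries, as documented for the config_message_pairs argument.

## Not run: 
  prompts <- c("What is AI?", "Explain ML")
  configs <- list(config1, config2)

  # Build one config-message pair per prompt with Map()
  pairs2 <- Map(function(cfg, prompt) {
    list(config   = cfg,
         messages = list(list(role = "user", content = prompt)))
  }, configs, prompts)

  setup_llm_parallel(workers = 2, verbose = TRUE)
  results2 <- call_llm_par(pairs2, progress = TRUE)
  reset_llm_parallel(verbose = TRUE)

## End(Not run)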
