call_llm_compare: Mode 3: Model Comparison - Multiple Configs, Fixed Message

View source: R/LLM_parallel_utils.R

call_llm_compare {LLMR}	R Documentation

Mode 3: Model Comparison - Multiple Configs, Fixed Message

Description

Compares different configurations (models, providers, settings) using the same message, which is useful for benchmarking across models or providers. This function requires that the parallel environment be set up first via 'setup_llm_parallel'.

Usage

call_llm_compare(
  configs_list,
  messages,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)

Arguments

configs_list

A list of llm_config objects to compare.

messages

A list of message objects (each a list with 'role' and 'content'), sent unchanged to every config.

tries

Integer. Number of retries for each call. Default is 10.

wait_seconds

Numeric. Initial wait time (seconds) before retry. Default is 2.

backoff_factor

Numeric. Multiplier for wait time after each failure. Default is 2.

verbose

Logical. If TRUE, prints processing information.

json

Logical. If TRUE, returns raw JSON responses.

memoize

Logical. If TRUE, enables caching for identical requests.

max_workers

Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress

Logical. If TRUE, shows progress tracking.

Value

A tibble with columns: config_index, provider, model, response_text, success, error_message, plus all model parameters as additional columns.

Examples

## Not run: 
  # Compare different models
  config1 <- llm_config(provider = "openai", model = "gpt-4o-mini",
                        api_key = Sys.getenv("OPENAI_API_KEY"))
  config2 <- llm_config(provider = "openai", model = "gpt-3.5-turbo",
                        api_key = Sys.getenv("OPENAI_API_KEY"))

  configs_list <- list(config1, config2)
  messages <- list(list(role = "user", content = "Explain quantum computing"))

  # Set up the parallel environment
  setup_llm_parallel(workers = 4, verbose = TRUE)

  results <- call_llm_compare(configs_list, messages)
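
  # Inspect the comparison; these columns are documented in the Value
  # section above (a sketch -- actual extra parameter columns depend on
  # the configs supplied)
  results[, c("provider", "model", "success", "response_text")]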

  # Reset to sequential
  reset_llm_parallel(verbose = TRUE)

## End(Not run)

LLMR documentation built on June 8, 2025, 10:45 a.m.