call_llm_sweep: Mode 1: Parameter Sweep - Vary One Parameter, Fixed Message

View source: R/LLM_parallel_utils.R

call_llm_sweep {LLMR} | R Documentation

Mode 1: Parameter Sweep - Vary One Parameter, Fixed Message

Description

Sweeps through different values of a single parameter while keeping the message constant. Useful for hyperparameter tuning, temperature experiments, and similar comparisons. This function requires the parallel environment to be set up first via 'setup_llm_parallel'.

Usage

call_llm_sweep(
  base_config,
  param_name,
  param_values,
  messages,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)

Arguments

base_config

Base llm_config object to modify.

param_name

Character. Name of the parameter to vary (e.g., "temperature", "max_tokens").

param_values

Vector. Values to test for the parameter.

messages

List of message objects (same for all calls).

tries

Integer. Number of retries for each call. Default is 10.

wait_seconds

Numeric. Initial wait time (seconds) before retry. Default is 2.

backoff_factor

Numeric. Multiplier applied to the wait time after each failure (see the sketch after this argument list). Default is 2.

verbose

Logical. If TRUE, prints progress and debug information.

json

Logical. If TRUE, requests raw JSON responses from the API. Note that the final tibble's 'response_text' column still contains the extracted text.

memoize

Logical. If TRUE, enables caching for identical requests via 'call_llm_robust'.

max_workers

Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress

Logical. If TRUE, shows progress bar.
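
The retry arguments above interact multiplicatively. A minimal sketch, reusing the config and messages objects defined in the Examples section below:

  # With tries = 5, wait_seconds = 1, and backoff_factor = 2, a failing
  # call is retried with waits that grow as 1, 2, 4, ... seconds before
  # its row is marked success = FALSE.
  results <- call_llm_sweep(
    base_config    = config,
    param_name     = "temperature",
    param_values   = c(0, 0.5, 1),
    messages       = messages,
    tries          = 5,
    wait_seconds   = 1,
    backoff_factor = 2
  )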

Value

A tibble with columns: param_name, param_value, provider, model, response_text, success, error_message, plus all model parameters as additional columns.
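
A minimal sketch of inspecting this tibble, assuming a results object returned by call_llm_sweep and using dplyr (not a dependency stated here):

  library(dplyr)

  # Keep successful calls and compare responses across parameter values
  results %>%
    filter(success) %>%
    select(param_value, response_text)

  # Inspect any failures
  results %>%
    filter(!success) %>%
    select(param_value, error_message)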

Examples

## Not run: 
  # Temperature sweep
  config <- llm_config(provider = "openai", model = "gpt-4o-mini",
                       api_key = Sys.getenv("OPENAI_API_KEY"))

  messages <- list(list(role = "user", content = "What is 15 * 23?"))
  temperatures <- c(0, 0.3, 0.7, 1.0, 1.5)

  # Set up the parallel environment
  setup_llm_parallel(workers = 4, verbose = TRUE)

  results <- call_llm_sweep(config, "temperature", temperatures, messages)

  # Reset to sequential
  reset_llm_parallel(verbose = TRUE)

## End(Not run)
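
The same pattern applies to any numeric parameter the provider accepts. A further illustrative sketch (not part of the package's shipped examples), sweeping max_tokens with caching and a progress bar enabled:

  # Assumes config and messages as defined in the example above
  setup_llm_parallel(workers = 4)
  token_results <- call_llm_sweep(
    config, "max_tokens", c(100, 250, 500), messages,
    memoize = TRUE, progress = TRUE
  )
  reset_llm_parallel()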
