call_llm_broadcast: Mode 2: Message Broadcast - Fixed Config, Multiple Messages

View source: R/LLM_parallel_utils.R

call_llm_broadcast    R Documentation

Mode 2: Message Broadcast - Fixed Config, Multiple Messages

Description

Broadcasts different messages using the same configuration in parallel, which is useful for batch processing many prompts with consistent settings. The parallel environment must be set up first with 'setup_llm_parallel'.

Usage

call_llm_broadcast(
  config,
  messages_list,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)

Arguments

config

Single llm_config object to use for all calls.

messages_list

A list of message lists, each for one API call.

tries

Integer. Number of retries for each call. Default is 10.

wait_seconds

Numeric. Initial wait time (seconds) before retry. Default is 2.

backoff_factor

Numeric. Multiplier applied to the wait time after each failure. Default is 2. (A sketch of the resulting backoff schedule follows this list.)

verbose

Logical. If TRUE, prints progress and debug information.

json

Logical. If TRUE, requests raw JSON responses from the API.

memoize

Logical. If TRUE, enables caching for identical requests.

max_workers

Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress

Logical. If TRUE, shows progress bar.
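
Taken together, tries, wait_seconds, and backoff_factor give exponential backoff: after the k-th failure a call waits roughly wait_seconds * backoff_factor^(k - 1) seconds before retrying. A minimal local illustration of that schedule under the defaults (plain arithmetic, not part of the LLMR API):

  # Wait time before each retry under the default settings
  wait_seconds   <- 2
  backoff_factor <- 2
  k <- 1:5                               # first five failures
  wait_seconds * backoff_factor^(k - 1)
  #> [1]  2  4  8 16 32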

Value

A tibble with columns: message_index, provider, model, response_text, success, error_message, plus all model parameters as additional columns.
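
Because the result is a tibble, failed calls are easy to isolate after a run. A small sketch, assuming dplyr is available, that 'results' is the tibble returned above, and that the success column is logical:

  # Pull out the calls that failed, with their error messages
  library(dplyr)
  failed <- results |>
    filter(!success) |>
    select(message_index, error_message)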

Examples

## Not run: 
  # Broadcast different questions
  config <- llm_config(provider = "openai", model = "gpt-4o-mini",
                       api_key = Sys.getenv("OPENAI_API_KEY"))

  messages_list <- list(
    list(list(role = "user", content = "What is 2+2?")),
    list(list(role = "user", content = "What is 3*5?")),
    list(list(role = "user", content = "What is 10/2?"))
  )

  # Set up the parallel environment
  setup_llm_parallel(workers = 4, verbose = TRUE)

  results <- call_llm_broadcast(config, messages_list)

  # Reset to sequential
  reset_llm_parallel(verbose = TRUE)

## End(Not run)
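
For larger batches, the memoize and progress arguments documented above can cache repeated requests and report progress. A sketch reusing the config and messages_list from the example (like the example, it needs an API key and is not meant to run as-is):

  setup_llm_parallel(workers = 2)
  results <- call_llm_broadcast(
    config, messages_list,
    memoize  = TRUE,   # cache identical requests
    progress = TRUE    # show a progress bar
  )
  reset_llm_parallel()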
