View source: R/LLM_parallel_utils.R
call_llm_broadcast | R Documentation
Description

Broadcasts different messages using the same configuration in parallel. Perfect for batch processing different prompts with consistent settings. This function requires that the parallel environment be set up first via setup_llm_parallel().
Usage

call_llm_broadcast(
  config,
  messages_list,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)
Arguments

config
    Single llm_config object to use for all calls.

messages_list
    A list of message lists, one per API call.

tries
    Integer. Number of retries for each call. Default is 10.

wait_seconds
    Numeric. Initial wait time (seconds) before retry. Default is 2.

backoff_factor
    Numeric. Multiplier for wait time after each failure. Default is 2.

verbose
    Logical. If TRUE, prints progress and debug information.

json
    Logical. If TRUE, requests raw JSON responses from the API.

memoize
    Logical. If TRUE, enables caching for identical requests.

max_workers
    Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress
    Logical. If TRUE, shows a progress bar.
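As a rough illustration of the retry schedule implied by tries, wait_seconds, and backoff_factor, the wait before each successive retry grows geometrically. This is a sketch of the documented defaults, not the package's internal code:

# Wait time before retry k, under the documented defaults
wait_seconds   <- 2
backoff_factor <- 2
retries        <- 1:5
wait_seconds * backoff_factor^(retries - 1)
#> [1]  2  4  8 16 32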
Value

A tibble with columns: message_index, provider, model, response_text, success, error_message, plus all model parameters as additional columns.
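Because success and error_message are returned per call, results can be split without extra bookkeeping. A minimal sketch, assuming `results` comes from a prior call_llm_broadcast() call:

# Separate successful answers from failed calls using the
# documented `success` and `error_message` columns
answers  <- results$response_text[results$success]
failures <- results[!results$success, c("message_index", "error_message")]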
Examples

## Not run:
# Broadcast different questions
config <- llm_config(provider = "openai", model = "gpt-4o-mini",
api_key = Sys.getenv("OPENAI_API_KEY"))
messages_list <- list(
list(list(role = "user", content = "What is 2+2?")),
list(list(role = "user", content = "What is 3*5?")),
list(list(role = "user", content = "What is 10/2?"))
)
# Set up the parallel environment
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_broadcast(config, messages_list)
# Reset to sequential
reset_llm_parallel(verbose = TRUE)
## End(Not run)
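A common follow-up, shown here as a sketch that relies only on the documented message_index and success columns, is to re-broadcast just the prompts that failed after all retries:

## Not run:
# Retry only the prompts that failed, reusing the same config
failed_idx <- results$message_index[!results$success]
if (length(failed_idx) > 0) {
  retry_results <- call_llm_broadcast(config, messages_list[failed_idx])
}
## End(Not run)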