View source: R/LLM_parallel_utils.R
call_llm_broadcast (R Documentation)
Description

Broadcasts different messages using the same configuration in parallel.
Perfect for batch processing different prompts with consistent settings.
This function requires setting up the parallel environment first using
setup_llm_parallel().
Usage

call_llm_broadcast(config, messages, ...)
Arguments

config    A single llm_config object used for all calls.

messages  A character vector (each element is a prompt) OR a list where
          each element is a pre-formatted message list.

...       Additional arguments passed to the underlying LLM calls.
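For illustration, a sketch of the two accepted forms of messages; the
prompts and the multi-role structure below are illustrative assumptions,
not taken from the package documentation:

# Form 1: character vector; each element is treated as one user prompt
messages <- c("What is 2+2?", "What is 3*5?")

# Form 2: list of pre-formatted message lists; each inner list is one
# conversation, which allows multi-turn or system messages (assumed here)
messages <- list(
  list(
    list(role = "system", content = "Answer with a number only."),
    list(role = "user",   content = "What is 2+2?")
  ),
  list(list(role = "user", content = "What is 3*5?"))
)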
Value

A tibble with columns: message_index (metadata), provider, model, all
model parameters, response_text, raw_response_json, success, and
error_message.
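Because the tibble carries success and error_message columns, results can
be split into answers and failures after the run. A minimal sketch, assuming
dplyr is available and results comes from a call like the one in Examples:

library(dplyr)

# Successful answers, keyed by the original prompt order
answers <- results %>%
  filter(success) %>%
  select(message_index, response_text)

# Failed calls, with their error messages, for inspection or retry
failures <- results %>%
  filter(!success) %>%
  select(message_index, error_message)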
Details

All parallel functions require the future backend to be configured. The
recommended workflow is as follows (a sketch follows the list):

1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
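A compact sketch of these three steps, using the character-vector form of
messages; the provider, model, prompts, and worker count are placeholders:

# 1. Configure the future backend once per script
setup_llm_parallel(workers = 2, verbose = TRUE)

config  <- llm_config(provider = "openai", model = "gpt-4.1-nano")
prompts <- c("Define recursion.", "Define memoization.")

# 2. Run the parallel batch
results <- call_llm_broadcast(config, prompts)

# 3. Restore sequential processing
reset_llm_parallel(verbose = TRUE)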
See Also

setup_llm_parallel, reset_llm_parallel
Examples

## Not run:
# Broadcast different questions
config <- llm_config(provider = "openai", model = "gpt-4.1-nano")
messages <- list(
  list(list(role = "user", content = "What is 2+2?")),
  list(list(role = "user", content = "What is 3*5?")),
  list(list(role = "user", content = "What is 10/2?"))
)
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_broadcast(config, messages)
reset_llm_parallel(verbose = TRUE)
## End(Not run)