View source: R/LLM_parallel_utils.R
call_llm_compare (R Documentation)
Compares different configurations (models, providers, settings) using the same message.
Perfect for benchmarking across different models or providers.
This function requires setting up the parallel environment using setup_llm_parallel().
call_llm_compare(configs_list, messages, ...)
configs_list: A list of llm_config objects to compare.
messages: A character vector or a list of message objects (same for all configs).
...: Additional arguments passed to the underlying parallel execution function.
A tibble with columns: config_index (metadata), provider, model, all varying model parameters, response_text, raw_response_json, success, error_message.
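For illustration only, a minimal sketch of inspecting this tibble (assuming results holds the return value; only the columns listed above are used):

# Keep successful calls and pair each provider/model with its response text
ok <- results[results$success, c("provider", "model", "response_text")]
print(ok)

# Inspect failures, if any, via the error_message column
failed <- results[!results$success, c("provider", "model", "error_message")]
print(failed)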
All parallel functions require the future backend to be configured. The recommended workflow is:

1. Call setup_llm_parallel() once at the start of your script.
2. Run one or more parallel experiments (e.g., call_llm_broadcast()).
3. Call reset_llm_parallel() at the end to restore sequential processing.
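A minimal skeleton of that workflow (mirroring the full example below; future::plan(), from the future package, is shown only as an optional way to inspect which backend got configured):

setup_llm_parallel(workers = 4, verbose = TRUE)   # configures a future backend
future::plan()                                    # optional: inspect the active future plan
# ... run call_llm_compare(), call_llm_broadcast(), and similar experiments ...
reset_llm_parallel(verbose = TRUE)                # restores sequential processing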
See also: setup_llm_parallel, reset_llm_parallel
## Not run:
# Compare different models
config1 <- llm_config(provider = "openai", model = "gpt-4o-mini",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
config2 <- llm_config(provider = "openai", model = "gpt-4.1-nano",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
configs_list <- list(config1, config2)
messages <- "Explain quantum computing"
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_compare(configs_list, messages)
reset_llm_parallel(verbose = TRUE)
## End(Not run)
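Configurations can also differ only in settings rather than models. A hypothetical sketch, assuming llm_config() accepts extra model parameters such as temperature; any varying parameter then appears as a column in the result tibble, as noted above:

# Hypothetical: compare the same model at two temperatures
key <- Sys.getenv("OPENAI_API_KEY")
cfg_cold <- llm_config(provider = "openai", model = "gpt-4o-mini",
                       api_key = key, temperature = 0)
cfg_warm <- llm_config(provider = "openai", model = "gpt-4o-mini",
                       api_key = key, temperature = 1)

setup_llm_parallel(workers = 2, verbose = TRUE)
results <- call_llm_compare(list(cfg_cold, cfg_warm), "Explain quantum computing")
reset_llm_parallel(verbose = TRUE)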