View source: R/LLM_parallel_utils.R
call_llm_compare

Description

Compares different configurations (models, providers, settings) using the same message. Useful for benchmarking across different models or providers. This function requires setting up the parallel environment using setup_llm_parallel.
Usage

call_llm_compare(
  configs_list,
  messages,
  tries = 10,
  wait_seconds = 2,
  backoff_factor = 2,
  verbose = FALSE,
  json = FALSE,
  memoize = FALSE,
  max_workers = NULL,
  progress = FALSE
)
Arguments

configs_list
    A list of llm_config objects to compare.

messages
    A list of message objects (the same messages are sent to all configs).

tries
    Integer. Number of retries for each call. Default is 10.

wait_seconds
    Numeric. Initial wait time in seconds before the first retry. Default is 2.

backoff_factor
    Numeric. Multiplier applied to the wait time after each failure (see the note after this list). Default is 2.

verbose
    Logical. If TRUE, prints processing information.

json
    Logical. If TRUE, returns raw JSON responses.

memoize
    Logical. If TRUE, enables caching for identical requests.

max_workers
    Integer. Maximum number of parallel workers. If NULL, auto-detects.

progress
    Logical. If TRUE, shows progress tracking.
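With the defaults, the pause before retry k works out to wait_seconds * backoff_factor^(k - 1) seconds, i.e. 2, 4, 8, ... This is a reading of the documented defaults assuming the usual exponential-backoff scheme; see R/LLM_parallel_utils.R for the exact formula.

# Implied wait schedule (seconds) before retries 1..5 under the defaults,
# assuming standard exponential backoff
wait_seconds   <- 2
backoff_factor <- 2
wait_seconds * backoff_factor^(0:4)
#> [1]  2  4  8 16 32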
Value

A tibble with columns: config_index, provider, model, response_text, success, error_message, plus all model parameters as additional columns.
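For example, failed and successful calls can be split apart using the documented columns (a minimal sketch in base R; column names as listed above):

# Which configurations failed, and why?
results[!results$success, c("provider", "model", "error_message")]
# Side-by-side responses from the successful configurations
results[results$success, c("model", "response_text")]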
Examples

## Not run:
# Compare different models
config1 <- llm_config(provider = "openai", model = "gpt-4o-mini",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
config2 <- llm_config(provider = "openai", model = "gpt-3.5-turbo",
                      api_key = Sys.getenv("OPENAI_API_KEY"))
configs_list <- list(config1, config2)
messages <- list(list(role = "user", content = "Explain quantum computing"))
# Set up the parallel environment
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_compare(configs_list, messages)
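# Optional variant (illustrative; uses the documented arguments):
# show a progress bar and cache identical requests
results_cached <- call_llm_compare(configs_list, messages,
                                   progress = TRUE, memoize = TRUE)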
# Reset to sequential
reset_llm_parallel(verbose = TRUE)
## End(Not run)