View source: R/LLM_parallel_utils.R
call_llm_sweep | R Documentation
Sweeps through different values of a single parameter while keeping the message constant. Useful for hyperparameter tuning, temperature experiments, and similar comparisons. This function requires setting up the parallel environment with 'setup_llm_parallel'.
call_llm_sweep(
base_config,
param_name,
param_values,
messages,
tries = 10,
wait_seconds = 2,
backoff_factor = 2,
verbose = FALSE,
json = FALSE,
memoize = FALSE,
max_workers = NULL,
progress = FALSE
)
base_config: Base llm_config object to modify.
param_name: Character. Name of the parameter to vary (e.g., "temperature", "max_tokens").
param_values: Vector. Values to test for the parameter.
messages: List of message objects (same for all calls).
tries: Integer. Number of retries for each call. Default is 10.
wait_seconds: Numeric. Initial wait time (seconds) before retry. Default is 2.
backoff_factor: Numeric. Multiplier for wait time after each failure. Default is 2.
verbose: Logical. If TRUE, prints progress and debug information.
json: Logical. If TRUE, requests raw JSON responses from the API (note: the final tibble's 'response_text' will still be the extracted text).
memoize: Logical. If TRUE, enables caching for identical requests via 'call_llm_robust'.
max_workers: Integer. Maximum number of parallel workers. If NULL, auto-detects.
progress: Logical. If TRUE, shows a progress bar.
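The tries, wait_seconds, and backoff_factor arguments describe an exponential-backoff retry schedule. A minimal sketch of the wait times this implies (assuming the wait before retry k is wait_seconds * backoff_factor^(k - 1), which matches the argument descriptions but is an illustration, not the package's internal code):

```r
# Retry schedule implied by the defaults: tries = 10, wait_seconds = 2,
# backoff_factor = 2. The first attempt happens immediately; up to
# tries - 1 retries follow, each waiting twice as long as the last.
tries <- 10
wait_seconds <- 2
backoff_factor <- 2

waits <- wait_seconds * backoff_factor^(seq_len(tries - 1) - 1)
waits       # 2 4 8 16 32 64 128 256 512
sum(waits)  # worst-case total wait across all retries: 1022 seconds
```

Lowering tries or backoff_factor shortens this worst case considerably, which matters when sweeping many parameter values in parallel.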
Value: A tibble with columns param_name, param_value, provider, model, response_text, success, and error_message, plus all model parameters as additional columns.
## Not run:
# Temperature sweep
config <- llm_config(provider = "openai", model = "gpt-4o-mini",
api_key = Sys.getenv("OPENAI_API_KEY"))
messages <- list(list(role = "user", content = "What is 15 * 23?"))
temperatures <- c(0, 0.3, 0.7, 1.0, 1.5)
# Set up the parallel environment
setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_sweep(config, "temperature", temperatures, messages)
# Reset to sequential
reset_llm_parallel(verbose = TRUE)
## End(Not run)
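Because the function returns a plain tibble, results can be filtered and inspected with base R or dplyr. A sketch of typical post-processing, using a hand-made data frame that mimics the documented columns (param_value, response_text, success are real columns per the Value section above; the rows themselves are fabricated for illustration):

```r
# Mock of a sweep result with the documented columns; in practice this
# would come from call_llm_sweep().
results <- data.frame(
  param_value   = c(0, 0.3, 0.7, 1.0, 1.5),
  response_text = c("345", "345", "345", "It is 345.", NA),
  success       = c(TRUE, TRUE, TRUE, TRUE, FALSE),
  stringsAsFactors = FALSE
)

# Keep only successful calls and see how responses vary with the parameter
ok <- results[results$success, c("param_value", "response_text")]
ok

# Flag any failed values for a follow-up run
failed_values <- results$param_value[!results$success]
failed_values  # 1.5
```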