call_llm_sweep (R Documentation)

View source: R/LLM_parallel_utils.R

Description

Sweeps through different values of a single parameter while keeping the message constant. Useful for hyperparameter tuning, temperature experiments, and similar comparisons.

This function requires the parallel environment to be set up first via setup_llm_parallel().
Usage

call_llm_sweep(base_config, param_name, param_values, messages, ...)
Arguments

base_config    Base llm_config object to modify.
param_name     Character. Name of the parameter to vary (e.g., "temperature", "max_tokens").
param_values   Vector. Values to test for the parameter.
messages       A character vector or a list of message objects (same for all calls).
...            Additional arguments passed to the underlying LLM call.
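For illustration, sweeping "max_tokens" follows the same pattern as the temperature example further below. This is a minimal sketch, assuming llm_config() accepts an api_key argument and that setup_llm_parallel() has already been called:

# Sketch: vary max_tokens instead of temperature.
cfg <- llm_config(
  provider = "openai",
  model = "gpt-4.1-nano",
  api_key = Sys.getenv("OPENAI_API_KEY")  # assumed argument
)
res <- call_llm_sweep(cfg, "max_tokens", c(32, 64, 128, 256),
                      "Explain map-reduce briefly.")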
Value

A tibble with columns: swept_param_name, the varied parameter column, provider, model, all other model parameters, response_text, raw_response_json, success, and error_message.
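Because each row records whether its call succeeded, the result splits cleanly with ordinary dplyr verbs. A minimal sketch, assuming results came from a temperature sweep like the example below:

library(dplyr)

# Successful calls: the swept value next to each response.
results |>
  filter(success) |>
  select(temperature, response_text)

# Failed calls keep their error message for debugging.
results |>
  filter(!success) |>
  select(temperature, error_message)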
Details

All parallel functions require the future backend to be configured. The recommended workflow, sketched after this list, is:

1. Call setup_llm_parallel() once at the start of your script.

2. Run one or more parallel experiments (e.g., call_llm_broadcast()).

3. Call reset_llm_parallel() at the end to restore sequential processing.
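A minimal skeleton of that workflow; the call_llm_broadcast() signature shown here (a shared config plus a list of message sets) and the api_key argument are assumptions for illustration, not a verified API:

# 1. Configure the future backend once per script.
setup_llm_parallel(workers = 4, verbose = TRUE)

# 2. Run one or more parallel experiments.
config <- llm_config(
  provider = "openai",
  model = "gpt-4.1-nano",
  api_key = Sys.getenv("OPENAI_API_KEY")  # assumed argument
)
results <- call_llm_broadcast(config,
                              list("Summarize A.", "Summarize B."))

# 3. Restore sequential processing.
reset_llm_parallel(verbose = TRUE)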
See Also

setup_llm_parallel, reset_llm_parallel
Examples

## Not run: 
# Temperature sweep
config <- llm_config(
  provider = "openai",
  model = "gpt-4.1-nano",
  api_key = Sys.getenv("OPENAI_API_KEY")  # assumed: a key is typically required
)
messages <- "What is 15 * 23?"
temperatures <- c(0, 0.3, 0.7, 1.0, 1.5)

setup_llm_parallel(workers = 4, verbose = TRUE)
results <- call_llm_sweep(config, "temperature", temperatures, messages)
results |> dplyr::select(temperature, response_text)
reset_llm_parallel(verbose = TRUE)

## End(Not run)