llm_config                                                    R Documentation
Description:

     llm_config() builds a provider-agnostic configuration object that
     call_llm() (and friends) understand. You can pass provider-specific
     parameters via ...; LLMR forwards them as-is, with a few safe
     conveniences.
Usage:

     llm_config(
       provider,
       model,
       api_key = NULL,
       troubleshooting = FALSE,
       base_url = NULL,
       embedding = NULL,
       no_change = FALSE,
       ...
     )
Arguments:

provider: Character scalar. Provider name, e.g. "openai", "anthropic",
     "gemini", or "voyage".

   model: Character scalar. Model name understood by the chosen
     provider (e.g., "gpt-4o-mini").

 api_key: Character scalar. Provider API key.

troubleshooting: Logical. If TRUE, prints verbose request details to
     help diagnose failing calls.

base_url: Optional character. Back-compat alias; if supplied it is
     stored as api_url in model_params.

embedding: Logical or NULL. If NULL (the default), embedding mode is
     inferred from the model name; set TRUE to force embeddings (see
     Examples).

no_change: Logical. If TRUE, parameters are forwarded exactly as
     supplied, without LLMR's automatic adjustments.

     ...: Additional provider-specific parameters (e.g., temperature,
     max_tokens, api_url).
Value:

     An object of class c("llm_config", provider). Fields: provider,
     model, api_key, troubleshooting, embedding, no_change, and
     model_params (a named list of extras).
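As a sketch of the returned structure (assuming the LLMR package is installed; the key below is a placeholder, and the model_params contents follow from the "named list of extras" description above):

```r
library(LLMR)

# Extra arguments such as temperature end up in model_params
cfg <- llm_config("openai", "gpt-4o-mini",
                  api_key     = "sk-placeholder",
                  temperature = 0.7)

class(cfg)        # c("llm_config", "openai"), per the Value section
cfg$provider      # "openai"
cfg$model_params  # named list of extras, e.g. temperature
```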
Details:

     Anthropic temperatures must be in [0, 1]; others in [0, 2].
     Out-of-range values are clamped with a warning.
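A minimal sketch of the clamping behaviour described above (assuming LLMR is installed; the model name is a hypothetical example, and exactly when the clamp and warning fire is not specified here):

```r
library(LLMR)

# Anthropic temperature above 1 is out of range per the rule above:
# expect it to be clamped to 1 with a warning
cfg <- llm_config("anthropic", "claude-sonnet-example",
                  temperature = 1.5)
```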
     You can pass api_url (or the base_url alias) in ... to point to
     gateways or compatible proxies.
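As a sketch of routing through an OpenAI-compatible gateway via api_url (assuming LLMR is installed; the URL and environment variable below are hypothetical placeholders):

```r
library(LLMR)

# Same provider/model, but requests go to a gateway instead of the
# provider's default endpoint
gw_cfg <- llm_config("openai", "gpt-4o-mini",
                     api_key = Sys.getenv("GATEWAY_API_KEY"),
                     api_url = "https://gateway.example.com/v1/chat/completions")
```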
See Also:

     call_llm, call_llm_robust, llm_chat_session, call_llm_par,
     get_batched_embeddings
Examples:

     ## Not run:
     # Basic OpenAI config
     cfg <- llm_config("openai", "gpt-4o-mini",
                       temperature = 0.7, max_tokens = 300)

     # Generative call returns an llmr_response object
     r <- call_llm(cfg, "Say hello in Greek.")
     print(r)
     as.character(r)

     # Embeddings (inferred from the model name)
     e_cfg <- llm_config("gemini", "text-embedding-004")

     # Force embeddings even if model name does not contain "embedding"
     e_cfg2 <- llm_config("voyage", "voyage-large-2", embedding = TRUE)
     ## End(Not run)