llm_config    R Documentation

Create LLM Configuration
Usage

llm_config(
  provider,
  model,
  api_key,
  troubleshooting = FALSE,
  base_url = NULL,
  embedding = NULL,
  ...
)
Arguments

provider
    Provider name (one of "openai", "anthropic", "groq", "together", "voyage", "gemini", "deepseek").

model
    Model name to use.

api_key
    API key for authentication.

troubleshooting
    If TRUE, prints every API call in full. USE WITH EXTREME CAUTION, as the printed output includes your API key.

base_url
    Optional base URL override.

embedding
    Logical indicating embedding mode: NULL (the default) keeps the prior provider-specific default for backward compatibility, TRUE forces embedding mode, and FALSE forces generative mode. See the sketch after this list.

...
    Additional provider-specific parameters, e.g. temperature and max_tokens.
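A minimal sketch of the three embedding settings, assuming OpenAI-style models (the embedding model name below is an illustrative placeholder, not a recommendation):

# embedding = NULL (default) keeps the provider's historical behavior.
cfg_embed <- llm_config(
  provider  = "openai",
  model     = "text-embedding-3-small",   # placeholder embedding model
  api_key   = Sys.getenv("OPENAI_KEY"),
  embedding = TRUE                        # force embedding mode
)
cfg_gen <- llm_config(
  provider  = "openai",
  model     = "gpt-4.1-mini",
  api_key   = Sys.getenv("OPENAI_KEY"),
  embedding = FALSE                       # force generative mode
)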
Value

A configuration object for use with call_llm().
Examples

## Not run:
### Generative example
openai_config <- llm_config(
  provider = "openai",
  model = "gpt-4.1-mini",
  api_key = Sys.getenv("OPENAI_KEY"),
  temperature = 0.7,
  max_tokens = 500
)
the_message <- list(
  list(role = "system", content = "You are an expert data scientist."),
  list(role = "user", content = "When will you ever use the OLS?")
)

# Call the LLM API
response <- call_llm(
  config = openai_config,
  messages = the_message
)
cat("Response:", response, "\n")
### Embedding example
# Voyage AI Example:
voyage_config <- llm_config(
  provider = "voyage",
  model = "voyage-large-2",
  api_key = Sys.getenv("VOYAGE_API_KEY"),
  embedding = TRUE
)
# Define some texts to embed (placeholder inputs for illustration)
text_input <- c(
  "Political science studies power and institutions.",
  "Sociology studies social behavior and society."
)
embedding_response <- call_llm(voyage_config, text_input)
embeddings <- parse_embeddings(embedding_response)
# Additional processing:
embeddings |> cor() |> print()
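
# A minimal sketch of the base_url override, assuming a locally hosted,
# OpenAI-compatible server; the URL and model name below are hypothetical
# placeholders, not tested endpoints.
local_config <- llm_config(
  provider = "openai",
  model = "local-model",                 # placeholder model name
  api_key = "not-needed-for-local",      # many local servers ignore the key
  base_url = "http://localhost:8000/v1"  # hypothetical endpoint
)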
## End(Not run)