llm_config: Create LLM Configuration

View source: R/LLMR.R

llm_config {LLMR}    R Documentation

Create LLM Configuration

Description

Creates a configuration object specifying the provider, model, API key, and additional parameters for use with call_llm().

Usage

llm_config(
  provider,
  model,
  api_key,
  troubleshooting = FALSE,
  base_url = NULL,
  embedding = NULL,
  ...
)

Arguments

provider

Provider name (openai, anthropic, groq, together, voyage, gemini, deepseek)

model

Model name to use

api_key

API key for authentication

troubleshooting

Logical; if TRUE, prints all API calls. USE WITH EXTREME CAUTION, as this also prints your API key.

base_url

Optional base URL override

embedding

Logical indicating embedding mode: NULL (default; keeps prior behavior for backward compatibility), TRUE (force embeddings), FALSE (force generative)

...

Additional provider-specific parameters (for example, temperature or max_tokens), passed through to the API call
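
A minimal sketch of how extra arguments pass through the dots. The model name and environment-variable name here are illustrative, not prescribed by the package:

```r
library(LLMR)

# temperature and max_tokens are not named arguments of llm_config();
# they travel via ... and are stored on the configuration object
cfg <- llm_config(
  provider = "anthropic",
  model = "claude-3-5-haiku",          # illustrative model name
  api_key = Sys.getenv("ANTHROPIC_API_KEY"),
  temperature = 0.2,
  max_tokens = 256
)
```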

Value

Configuration object for use with call_llm()

Examples

## Not run: 
### Generative example
openai_config <- llm_config(
  provider = "openai",
  model = "gpt-4.1-mini",
  api_key = Sys.getenv("OPENAI_KEY"),
  temperature = 0.7,
  max_tokens = 500
)

the_message <- list(
  list(role = "system", content = "You are an expert data scientist."),
  list(role = "user", content = "When will you ever use the OLS?")
)

# Call the LLM API
response <- call_llm(
  config = openai_config,
  messages = the_message
)
cat("Response:", response, "\n")

### Embedding example
# Voyage AI example:
voyage_config <- llm_config(
  provider = "voyage",
  model = "voyage-large-2",
  api_key = Sys.getenv("VOYAGE_API_KEY"),
  embedding = TRUE
)

text_input <- c("Embeddings measure semantic similarity.",
                "OLS is a workhorse of applied statistics.")

embedding_response <- call_llm(voyage_config, text_input)
embeddings <- parse_embeddings(embedding_response)
# Additional processing:
embeddings |> cor() |> print()

## End(Not run)

LLMR documentation built on June 8, 2025, 10:45 a.m.