| llm_provider-class | R Documentation |
This class provides a structure for creating llm_provider
objects with different implementations of $complete_chat().
Using this class, you can create an llm_provider object that interacts
with different LLM providers, such as Ollama, OpenAI, or other custom providers.
parameters: A named list of parameters to configure the llm_provider. Parameters may be appended to the request body when interacting with the LLM provider API.
verbose: A logical indicating whether interaction with the LLM provider should be printed to the console.
url: The URL to the LLM provider API endpoint for chat completion.
api_key: The API key to use for authentication with the LLM provider API.
api_type: The type of API to use (e.g., "openai", "ollama", "ellmer").
This is used to determine certain API-specific behaviors,
for instance in the answer_as_json() function.
json_type: The type of JSON mode to use (e.g., 'auto', 'openai', 'ollama', 'ellmer', or 'text-based').
When this field is 'auto' or unset, the api_type field is used to
determine the JSON mode during the answer_as_json() function; when this field
is set, it overrides the api_type field for JSON mode determination.
(Note: this determination only happens when the 'type' argument in
answer_as_json() is also set to 'auto'.)
tool_type: The type of tool use mode to use (e.g., 'auto', 'openai', 'ollama', 'ellmer', or 'text-based').
When this field is 'auto' or unset, the api_type field is used to
determine the tool use mode during the answer_using_tools() function; when this field
is set, it overrides the api_type field for tool use mode determination.
(Note: this determination only happens when the 'type' argument in
answer_using_tools() is also set to 'auto'.) A sketch of overriding these
fields follows this field list.
handler_fns: A list of functions that will be called after the completion of a chat.
See $add_handler_fn().
pre_prompt_wraps: A list of prompt wraps that will be applied to any prompt evaluated
by this llm_provider object, before any prompt-specific
prompt wraps are applied. See $add_prompt_wrap().
This can be used to set default behavior for all prompts
evaluated by this llm_provider object.
post_prompt_wraps: A list of prompt wraps that will be applied to any prompt evaluated
by this llm_provider object, after any prompt-specific
prompt wraps are applied. See $add_prompt_wrap().
This can be used to set default behavior for all prompts
evaluated by this llm_provider object.
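As documented above, the json_type and tool_type fields can be set on a provider
object to override the mode derived from api_type. A minimal sketch (assumes a
provider created with llm_provider_ollama(), and that these public fields are
assignable, as is usual for R6 public fields):

provider <- llm_provider_ollama()
# Force text-based JSON mode, regardless of the provider's api_type
provider$json_type <- "text-based"
# Likewise, force text-based tool use mode
provider$tool_type <- "text-based"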
new(): Create a new llm_provider object.
llm_provider-class$new(complete_chat_function, parameters = list(), verbose = TRUE, url = NULL, api_key = NULL, api_type = "unspecified")
complete_chat_function: Function that will be called by the llm_provider to complete a chat. This function should take a list containing at least '$chat_history' (a data frame with 'role' and 'content' columns) and return a response object, which contains:
'completed': A data frame with 'role' and 'content' columns, containing the completed chat history
'http': A list containing a list 'requests' and a list 'responses', containing the HTTP requests and responses made during the chat completion
parameters: A named list of parameters to configure the llm_provider.
These parameters may be appended to the request body when interacting with
the LLM provider.
For example, the 'model' parameter is often required.
The 'stream' parameter may be used to indicate that the API should stream its response.
Parameters should not include the chat history, 'api_key', or 'url', which
are handled separately by the llm_provider and '$complete_chat()'.
Parameters should also not be set when they are handled by prompt wraps.
verbose: A logical indicating whether interaction with the LLM provider should be printed to the console.
url: The URL to the LLM provider API endpoint for chat completion (typically required, but may be left NULL in some cases, for instance when creating a fake LLM provider; see the sketch below).
api_key: The API key to use for authentication with the LLM provider API (optional; not required for some providers, for instance Ollama).
api_type: The type of API to use (e.g., "openai", "ollama").
This is used to determine certain API-specific behaviors
(see, for example, the answer_as_json() function).
Returns: A new llm_provider R6 object.
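As noted for the 'url' argument, a provider does not have to contact a real API;
this allows creating a fake LLM provider, for instance for testing. A minimal
sketch (hypothetical; mirroring the example at the end of this page, the
completion function receives the chat history data frame and here simply appends
a canned reply, returning the 'completed' and 'http' elements described above):

fake_provider <- `llm_provider-class`$new(
  complete_chat_function = function(chat_history) {
    list(
      completed = rbind(
        chat_history,
        data.frame(role = "assistant", content = "(canned reply)")
      ),
      http = list(requests = list(), responses = list())
    )
  },
  verbose = FALSE
)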
set_parameters(): Helper function to set the parameters of the llm_provider object. This function appends new parameters to the existing parameters list.
llm_provider-class$set_parameters(new_parameters)
new_parameters: A named list of new parameters to append to the existing parameters list.
Returns: The modified llm_provider object.
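For example (a sketch; llm_provider_openai() is defined in the example at the
end of this page, and 'temperature' is a hypothetical parameter assumed to be
accepted in the API's request body):

provider <- llm_provider_openai()
provider$set_parameters(list(temperature = 0.2))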
complete_chat(): Sends a chat history (see chat_history()
for details) to the LLM provider using the configured $complete_chat().
This function is typically called by send_prompt() to interact with the LLM
provider, but it can also be called directly.
llm_provider-class$complete_chat(input)
input: A string, a data frame which is a valid chat history
(see chat_history()), or a list containing a valid chat history under key
'$chat_history'.
Returns: The response from the LLM provider.
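Besides a single string, a full chat history can be passed as a data frame
with 'role' and 'content' columns. A sketch, reusing the provider object from
the sketch above:

chat <- data.frame(
  role = c("user", "assistant", "user"),
  content = c("Hi!", "Hello! How can I help you?", "What is R?")
)
response <- provider$complete_chat(chat)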
add_handler_fn(): Helper function to add a handler function to the llm_provider object. Handler functions are called after the completion of a chat and can be used to modify the response before it is returned by the llm_provider. Each handler function should take the response object as its first argument, as well as 'self' (the llm_provider object), and return a modified response object. The functions are called in the order they were added to the list.
llm_provider-class$add_handler_fn(handler_fn)
handler_fn: A function that takes the response object plus 'self' (the llm_provider object) as input and returns a modified response object.
If a handler function returns a list with a 'break' field set to TRUE,
the chat completion will be interrupted and the response will be returned
at that point.
If a handler function returns a list with a 'done' field set to FALSE,
the handler functions will continue to be called in a loop until the 'done'
field is no longer set to FALSE.
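A sketch of a simple handler function (hypothetical; it prints the size of the
completed chat history and returns the response unchanged):

log_handler <- function(response, self) {
  if (self$verbose)
    message("Chat history now has ", nrow(response$completed), " messages")
  response
}
provider$add_handler_fn(log_handler)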
set_handler_fns(): Helper function to set the handler functions of the
llm_provider object.
This function replaces the existing
handler functions list with a new list of handler functions.
See $add_handler_fn() for more information.
llm_provider-class$set_handler_fns(handler_fns)
handler_fns: A list of handler functions to set.
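For example, to replace any existing handlers with just the log_handler
sketched above:

provider$set_handler_fns(list(log_handler))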
add_prompt_wrap(): Add a provider-level prompt wrap template to be applied to all prompts.
llm_provider-class$add_prompt_wrap(prompt_wrap, position = c("pre", "post"))
prompt_wrap: A list created by provider_prompt_wrap().
position: One of "pre" or "post" (applied before/after prompt-specific wraps).
apply_prompt_wraps(): Apply all provider-level wraps to a prompt (character or tidyprompt)
and return a tidyprompt with wraps attached.
This is typically called inside send_prompt() before evaluation of
the prompt.
llm_provider-class$apply_prompt_wraps(prompt)
prompt: A string, a chat history, a list containing a chat history under key '$chat_history', or a tidyprompt object.
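A sketch of both methods together (assuming provider_prompt_wrap() accepts a
modify_fn, analogous to prompt_wrap(); its exact signature is not documented
on this page):

wrap <- provider_prompt_wrap(
  modify_fn = function(prompt_text) paste(prompt_text, "Answer concisely.")
)
provider$add_prompt_wrap(wrap, position = "pre")
wrapped_prompt <- provider$apply_prompt_wraps("What is R?")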
clone(): The objects of this class are cloneable with this method.
llm_provider-class$clone(deep = FALSE)
deep: Whether to make a deep clone.
Other llm_provider:
llm_provider_ellmer(),
llm_provider_google_gemini(),
llm_provider_groq(),
llm_provider_mistral(),
llm_provider_ollama(),
llm_provider_openai(),
llm_provider_openrouter(),
llm_provider_xai()
# Example creation of a llm_provider-class object:
llm_provider_openai <- function(
  parameters = list(
    model = "gpt-4o-mini",
    stream = getOption("tidyprompt.stream", TRUE)
  ),
  verbose = getOption("tidyprompt.verbose", TRUE),
  url = "https://api.openai.com/v1/chat/completions",
  api_key = Sys.getenv("OPENAI_API_KEY")
) {
  complete_chat <- function(chat_history) {
    # Build request headers, including the API key for authentication
    headers <- c(
      "Content-Type" = "application/json",
      "Authorization" = paste("Bearer", self$api_key)
    )

    # Convert the chat history data frame into the list-of-messages
    # format expected by the OpenAI API
    body <- list(
      messages = lapply(seq_len(nrow(chat_history)), function(i) {
        list(role = chat_history$role[i], content = chat_history$content[i])
      })
    )

    # Append the configured parameters (e.g., model, stream) to the body
    for (name in names(self$parameters))
      body[[name]] <- self$parameters[[name]]

    request <- httr2::request(self$url) |>
      httr2::req_body_json(body) |>
      httr2::req_headers(!!!headers)

    # request_llm_provider() performs the request and returns the
    # response object expected from $complete_chat()
    request_llm_provider(
      chat_history,
      request,
      stream = self$parameters$stream,
      verbose = self$verbose,
      api_type = self$api_type
    )
  }

  return(`llm_provider-class`$new(
    complete_chat_function = complete_chat,
    parameters = parameters,
    verbose = verbose,
    url = url,
    api_key = api_key,
    api_type = "openai"
  ))
}

llm_provider <- llm_provider_openai()
## Not run:
llm_provider$complete_chat("Hi!")
# --- Sending request to LLM provider (gpt-4o-mini): ---
# Hi!
# --- Receiving response from LLM provider: ---
# Hello! How can I assist you today?
## End(Not run)