View source: R/llm_providers.R
llm_provider_ellmer | R Documentation

Create an llm_provider from an ellmer::chat() object

Description:

This function creates a llm_provider from an ellmer::chat() object.
This allows the user to work with the various LLM providers supported by the 'ellmer' R package, including their respective configuration options and features.
Please note that this function is experimental. This provider type may behave differently from other LLM providers and may not function optimally.
Usage:

llm_provider_ellmer(chat, verbose = getOption("tidyprompt.verbose", TRUE))
Arguments:

chat: An ellmer::chat() object.

verbose: A logical indicating whether the interaction with the llm_provider should be printed to the console. Default is TRUE.
Details:

Unlike other LLM provider classes, LLM provider settings need to be managed in the ellmer::chat() object (and not in the $parameters list). $get_chat() and $set_chat() may be used to manipulate the chat object.
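For instance, the chat object could be retrieved and swapped as in the following minimal sketch (assuming the 'ellmer' package is installed and an OpenAI API key is configured; the model name is illustrative only):

provider <- llm_provider_ellmer(ellmer::chat_openai())

# Retrieve the underlying ellmer chat object to inspect or reconfigure it:
chat <- provider$get_chat()

# Replace it with a differently configured chat object:
provider$set_chat(ellmer::chat_openai(model = "gpt-4o-mini"))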
A special parameter, $.ellmer_structured_type, may be set in the $parameters list; this parameter is used to specify a structured output format. It should be an 'ellmer' structured type (e.g., ellmer::type_object(); see https://ellmer.tidyverse.org/articles/structured-data.html). answer_as_json() sets this parameter to obtain structured output (it is not recommended to set this parameter manually, but it is possible).
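As a sketch, manually setting this parameter might look as follows (normally answer_as_json() handles this; the type definition below is purely illustrative):

provider <- llm_provider_ellmer(ellmer::chat_openai())
provider$parameters$.ellmer_structured_type <- ellmer::type_object(
  name = ellmer::type_string(),
  age = ellmer::type_integer()
)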
Value:

An llm_provider with api_type = "ellmer".
See Also:

Other llm_provider: llm_provider-class, llm_provider_google_gemini(), llm_provider_groq(), llm_provider_mistral(), llm_provider_ollama(), llm_provider_openai(), llm_provider_openrouter(), llm_provider_xai()
Examples:

# Various providers:
ollama <- llm_provider_ollama()
openai <- llm_provider_openai()
openrouter <- llm_provider_openrouter()
mistral <- llm_provider_mistral()
groq <- llm_provider_groq()
xai <- llm_provider_xai()
gemini <- llm_provider_google_gemini()
# From an `ellmer::chat()` (e.g., `ellmer::chat_openai()`, ...):
## Not run:
ellmer <- llm_provider_ellmer(ellmer::chat_openai())
## End(Not run)
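## Not run:
# A sketch of combining the ellmer provider with send_prompt() and
# answer_as_json(), which sets `$.ellmer_structured_type` internally
# (assumes an OpenAI API key is configured):
"Describe a cat, with fields 'name' and 'age', in JSON." |>
  answer_as_json() |>
  send_prompt(ellmer)
## End(Not run)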
# Initialize with settings:
ollama <- llm_provider_ollama(
parameters = list(
model = "llama3.2:3b",
stream = TRUE
),
verbose = TRUE,
url = "http://localhost:11434/api/chat"
)
# Change settings:
ollama$verbose <- FALSE
ollama$parameters$stream <- FALSE
ollama$parameters$model <- "llama3.1:8b"
## Not run:
# Try a simple chat message with '$complete_chat()':
response <- ollama$complete_chat("Hi!")
response
# $role
# [1] "assistant"
#
# $content
# [1] "How's it going? Is there something I can help you with or would you like
# to chat?"
#
# $http
# Response [http://localhost:11434/api/chat]
# Date: 2024-11-18 14:21
# Status: 200
# Content-Type: application/json; charset=utf-8
# Size: 375 B
# Use with send_prompt():
"Hi" |>
send_prompt(ollama)
# [1] "How's your day going so far? Is there something I can help you with or
# would you like to chat?"
## End(Not run)