chat_ollama — R Documentation
View source: R/provider-ollama.R
To use chat_ollama(), first download and install Ollama. Then install some models, either from the command line (e.g. with ollama pull llama3.1) or within R using ollamar (e.g. ollamar::pull("llama3.1")).
This function is a lightweight wrapper around chat_openai(), with the defaults tweaked for Ollama.
Known limitations:

Tool calling is not supported with streaming (i.e. when echo is "text" or "all").

Models can only use 2048 input tokens by default, and the only way to raise this limit is to create a custom model with a different default.

Tool calling generally seems quite weak, at least with the models I have tried it with.
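For example, the 2048-token default can be raised by building a custom model with a larger num_ctx parameter (a sketch; the base model name, new model name, and context size here are illustrative):

```
# Modelfile: derive a new model from llama3.1 with a larger context window
FROM llama3.1
PARAMETER num_ctx 8192
```

Build it with ollama create llama3.1-8k -f Modelfile, then pass model = "llama3.1-8k" to chat_ollama().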
chat_ollama(
  system_prompt = NULL,
  turns = NULL,
  base_url = "http://localhost:11434",
  model,
  seed = NULL,
  api_args = list(),
  echo = NULL
)
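A minimal sketch of how these arguments fit together (this assumes the function comes from the ellmer package, that an Ollama server is running locally, and that the llama3.1 model has already been pulled):

```r
library(ellmer)

# Connect to a locally running Ollama server. A fixed seed makes output
# more reproducible, and api_args passes extra body fields straight
# through to the chat API (here, a sampling temperature).
chat <- chat_ollama(
  system_prompt = "You are a terse assistant.",
  model = "llama3.1",
  seed = 42,
  api_args = list(temperature = 0.2)
)
chat$chat("What is the capital of France?")
```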
system_prompt |
A system prompt to set the behavior of the assistant. |
turns |
A list of Turns to start the chat with (i.e., continuing a previous conversation). If not provided, the conversation begins from scratch. |
base_url |
The base URL to the endpoint; the default uses a locally running Ollama server. |
model |
The model to use for the chat. Unlike chat_openai(), there is no default; you must specify a model that you have already installed. |
seed |
Optional integer seed that the model uses to try and make output more reproducible. |
api_args |
Named list of arbitrary extra arguments appended to the body of every chat API call. |
echo |
One of the following options: "none" (don't emit any output), "text" (echo the text of the response as it streams in), or "all" (echo all input and output). Note this only affects the chat() method. |
A Chat object.
Other chatbots: chat_bedrock(), chat_claude(), chat_cortex_analyst(), chat_databricks(), chat_deepseek(), chat_gemini(), chat_github(), chat_groq(), chat_openai(), chat_openrouter(), chat_perplexity()
## Not run:
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)