| chat_ollama | R Documentation |
Description

Convenience wrapper for chatting with local 'Ollama' models.
Usage

chat_ollama(prompt, model = "llama3.2", ...)
Arguments

prompt	Character. The user message to send.
model	Character. Name of a locally installed Ollama model (e.g., "llama3.2", "mistral").
...	Additional parameters passed to the Ollama API.
Value

The assistant's response as a character string, or a list when
conversation history is in use. See chat for details.
Examples

## Not run:
chat_ollama("What is machine learning?")
chat_ollama("Explain Docker", model = "mistral")
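## Extra parameters supplied via ... are forwarded to the API;
## the 'temperature' name below is illustrative and assumes the
## backend accepts a sampling-temperature option:
chat_ollama("Summarise quicksort in one sentence", temperature = 0.2)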
## End(Not run)