View source: R/api_functions.R
ollama — R Documentation

Send an LLMMessage to the Ollama API
Usage

ollama(
  .llm,
  .model = "llama3",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = NULL,
  .num_ctx = 2048,
  .ollama_server = "http://localhost:11434",
  .timeout = 120
)
Arguments

.llm
An existing LLMMessage object or an initial text prompt.

.model
The model identifier (default: "llama3").

.stream
Should the response be streamed to the console as it arrives (default: FALSE)?

.seed
Seed for random number generation, allowing reproducible outputs (optional).

.json
Should the output be structured as JSON (default: FALSE)?

.temperature
Controls randomness in response generation; higher values give more varied output (optional).

.num_ctx
The size of the context window in tokens (default: 2048).

.ollama_server
The URL of the Ollama server to use (default: "http://localhost:11434").

.timeout
Request timeout in seconds (default: 120).
Value

Returns an updated LLMMessage object.
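A minimal usage sketch. Note that `llm_message()` is assumed here as the constructor for an LLMMessage object; check your package's documentation for the actual constructor name. Running this requires a local Ollama server listening on the default port.

```r
# Assumption: llm_message() builds an LLMMessage from a text prompt.
msg <- llm_message("Why is the sky blue?")

# Send the message to a local Ollama server; a fixed .seed makes
# the sampled response reproducible across runs.
reply <- ollama(
  msg,
  .model = "llama3",
  .temperature = 0.7,
  .seed = 42
)

# reply is the updated LLMMessage object, now including the
# assistant's answer.
```

Because `.llm` also accepts an initial text prompt, a bare string such as `ollama("Why is the sky blue?")` should work as a shorthand for the two-step version above.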