llm_use — R Documentation
Specifies the back-end provider and model to use during the current R session.
llm_use(
  backend = NULL,
  model = NULL,
  ...,
  .silent = FALSE,
  .cache = NULL,
  .force = FALSE
)
backend: The name of a supported back-end provider. Currently only 'ollama' is supported.

model: The name of a model supported by the back-end provider.

...: Additional arguments that this function will pass down to the integrating function. In the case of Ollama, it will pass those arguments to ollama::chat().

.silent: Avoids console output.

.cache: The path to save model results to, so they can be re-used if the same operation is run again. To turn off, set this argument to an empty character: "".

.force: Flag that tells the function to reset all of the settings in the R session.
A mall_session object.
library(mall)
llm_use("ollama", "llama3.2")
# Additional arguments will be passed 'as-is' to the
# downstream R function; in this example, to ollama::chat()
llm_use("ollama", "llama3.2", seed = 100, temperature = 0.1)
# During the R session, you can change any argument
# individually, and all previously used arguments
# will be retained
llm_use(temperature = 0.3)
# Use .cache to modify the target folder for caching
llm_use(.cache = "_my_cache")
# Leave .cache empty to turn off this functionality
llm_use(.cache = "")
# Use .silent to avoid the print out
llm_use(.silent = TRUE)
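# The .force argument and the returned mall_session object, described
# above, are not exercised in the examples; a minimal sketch, assuming
# the same local 'llama3.2' Ollama model used throughout:
# Use .force to reset all of the settings in the R session
llm_use("ollama", "llama3.2", .force = TRUE)
# The result is a mall_session object, which can be captured for inspection
session <- llm_use("ollama", "llama3.2", .silent = TRUE)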