llm_use: Specify the model to use

View source: R/llm-use.R


Specify the model to use

Description

Allows us to specify the back-end provider and the model to use during the current R session.

Usage

llm_use(
  backend = NULL,
  model = NULL,
  ...,
  .silent = FALSE,
  .cache = NULL,
  .force = FALSE
)

Arguments

backend

The name of a supported back-end provider. Currently only 'ollama' is supported.

model

The name of a model supported by the back-end provider.

...

Additional arguments that this function will pass down to the integrating function. In the case of Ollama, it will pass those arguments to ollamar::chat().

.silent

If TRUE, suppresses console output.

.cache

The path in which to save the model results, so they can be re-used if the same operation is run again. To turn off caching, set this argument to an empty character: "". It defaults to a temporary folder. If this argument is left NULL when calling this function, no changes to the path will be made.

.force

Flag that tells the function to reset all of the settings in the R session.

Value

A mall_session object
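
For illustration, the returned object can be stored and inspected. A minimal sketch, assuming a local Ollama installation with the 'llama3.2' model available:

session <- llm_use("ollama", "llama3.2", .silent = TRUE)
class(session)
# the class should include "mall_session"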

Examples


library(mall)

llm_use("ollama", "llama3.2")

# Additional arguments will be passed 'as-is' to the
# downstream R function; in this example, to ollamar::chat()
llm_use("ollama", "llama3.2", seed = 100, temperature = 0.1)

# During the R session, you can change any argument
# individually, and it will retain all of the previously
# used arguments
llm_use(temperature = 0.3)

# Use .cache to modify the target folder for caching
llm_use(.cache = "_my_cache")

# Set .cache to an empty string to turn off caching
llm_use(.cache = "")

# Use .silent to avoid the print out
llm_use(.silent = TRUE)
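
# Use .force to reset all of the settings stored in the session
# (assumes the same local 'llama3.2' model as above)
llm_use("ollama", "llama3.2", .force = TRUE)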

