provider_prompt_wrap | R Documentation |
Build a provider-specific prompt wrap, to store on an llm_provider object
(with $add_prompt_wrap()). These prompt wraps can be applied before or
after any prompt-specific prompt wraps. In this way, you can ensure that
certain prompt wraps are always applied when using a specific LLM provider.
provider_prompt_wrap(
modify_fn = NULL,
extraction_fn = NULL,
validation_fn = NULL,
handler_fn = NULL,
parameter_fn = NULL,
type = c("unspecified", "mode", "tool", "break", "check"),
name = NULL
)
modify_fn
A function that takes the previous prompt text (as first argument)
and returns the new prompt text.
extraction_fn
A function that takes the LLM response (as first argument)
and attempts to extract a value from it.
Upon successful extraction, the function should return the extracted value.
If the extraction fails, the function should return a llm_feedback() message,
which will be sent back to the LLM.
validation_fn
A function that takes the (extracted) LLM response
(as first argument) and attempts to validate it.
Upon successful validation, the function should return TRUE. If the validation
fails, the function should return a llm_feedback() message, which will be
sent back to the LLM.
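An extraction and validation function pair is just a pair of ordinary R functions. A minimal sketch (the llm_feedback() call is tidyprompt's feedback mechanism; the function names here are hypothetical):

```r
# Hypothetical extraction_fn: parse a single number from the response text
extract_number <- function(response) {
  n <- suppressWarnings(as.numeric(trimws(response)))
  if (is.na(n)) {
    # Extraction failed: send feedback back to the LLM
    return(llm_feedback("Please reply with a single number only."))
  }
  n  # Extraction succeeded: return the extracted value
}

# Hypothetical validation_fn: accept only non-negative numbers
validate_non_negative <- function(n) {
  if (n >= 0) return(TRUE)  # Validation passed
  llm_feedback("The number must be non-negative; please try again.")
}
```

These would be passed as the extraction_fn and validation_fn arguments of provider_prompt_wrap().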
handler_fn
A function that takes a 'completion' object (a result
of a request to an LLM, as returned by the $complete_chat() method of
an llm_provider object).
parameter_fn
A function that takes the llm_provider object
which is being used with send_prompt() and returns a named list of
parameters to be set on it.
type
The type of prompt wrap. Must be one of 'unspecified', 'mode', 'tool',
'break', or 'check'.
Types are used to determine the order in which prompt wraps are applied.
When constructing the prompt text, prompt wraps are applied to the base
prompt in the following order: 'check', 'unspecified', 'break', 'mode',
'tool'. When evaluating the LLM response and applying extraction and
validation functions, prompt wraps are applied in the reverse order:
'tool', 'mode', 'break', 'unspecified', 'check'. Among wraps of the same
type, order is preserved in the order they were added to the prompt.
name
An optional name for the prompt wrap. This can be used to identify
the prompt wrap in the tidyprompt object.
A provider_prompt_wrap object, to be stored on an llm_provider object.
ollama <- llm_provider_ollama()
# Add a "short answer" mode (provider-level post prompt wrap)
ollama$add_prompt_wrap(
provider_prompt_wrap(
modify_fn = \(txt) paste0(
txt,
"\n\nPlease answer concisely (< 2 sentences)."
)
),
position = "post"
)
# Use as usual: wraps are applied automatically
## Not run:
"What's a vignette in R?" |> send_prompt(ollama)
## End(Not run)
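A provider-level wrap can also carry extraction logic. A sketch (the llm_feedback() call is tidyprompt's feedback mechanism; the wrap name and prompt are illustrative):

```r
# Always ask for, and extract, a single integer answer
ollama$add_prompt_wrap(
  provider_prompt_wrap(
    modify_fn = \(txt) paste0(txt, "\n\nReply with a single integer."),
    extraction_fn = \(response) {
      n <- suppressWarnings(as.integer(trimws(response)))
      if (is.na(n))
        return(llm_feedback("Reply with a single integer only."))
      n
    },
    name = "integer_answer"
  ),
  position = "post"
)
## Not run:
"How many base R colour names are there?" |> send_prompt(ollama)
## End(Not run)
```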