make_query    R Documentation

make_query

Description

Generates structured input for a language model, including the system
prompt, user messages, and optional examples (assistant answers).
Usage

make_query(
  text,
  prompt,
  template = "{prefix}{text}\n{prompt}\n{suffix}",
  system = NULL,
  prefix = NULL,
  suffix = NULL,
  examples = NULL
)
Arguments

text      A character vector of texts to be annotated.

prompt    A string defining the main task or question to be passed to
          the language model.

template  A string template for formatting user queries, containing
          placeholders such as {text}, {prompt}, {prefix}, and
          {suffix}.

system    An optional string to specify a system prompt.

prefix    A prefix string to prepend to each user query.

suffix    A suffix string to append to each user query.

examples  A tibble with columns text and answer, providing example
          queries and their expected assistant responses.
Details

The function supports the inclusion of examples, which are dynamically added to the structured input. Each example follows the same format as the primary user query.
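Conceptually, each example pair is expanded into a user/assistant message turn that precedes the final user query. The following is a minimal sketch of that expansion, not the package implementation; build_messages is a hypothetical helper introduced only for illustration:

```r
# Hypothetical sketch: expand few-shot examples into chat turns.
# Each example row becomes a user message (formatted like a real query)
# followed by the expected assistant answer; the actual query comes last.
build_messages <- function(text, prompt, examples = NULL) {
  msgs <- list()
  for (i in seq_len(NROW(examples))) {
    msgs[[length(msgs) + 1]] <- list(
      role = "user",
      content = paste(examples$text[i], prompt, sep = "\n")
    )
    msgs[[length(msgs) + 1]] <- list(
      role = "assistant",
      content = examples$answer[i]
    )
  }
  # the real query, formatted the same way as the examples
  msgs[[length(msgs) + 1]] <- list(
    role = "user",
    content = paste(text, prompt, sep = "\n")
  )
  msgs
}

ex <- data.frame(text = "Great film!", answer = "positive")
msgs <- build_messages("Predictable plot.", "Classify the sentiment.", ex)
```

With one example row, this yields three messages: the example query, the example answer, and the final user query.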
Value

A list of tibbles, one for each input text, containing structured rows for system messages, user messages, and assistant responses.
Examples

template <- "{prefix}{text}\n\n{prompt}{suffix}"

examples <- tibble::tribble(
  ~text, ~answer,
  "This movie was amazing, with great acting and story.", "positive",
  "The film was okay, but not particularly memorable.", "neutral",
  "I found this movie boring and poorly made.", "negative"
)

queries <- make_query(
  text = c("A stunning visual spectacle.", "Predictable but well-acted."),
  prompt = "Classify sentiment as positive, neutral, or negative.",
  template = template,
  system = "Provide a sentiment classification.",
  prefix = "Review: ",
  suffix = " Please classify.",
  examples = examples
)
print(queries)

if (ping_ollama()) { # only run this example when Ollama is running
  query(queries, screen = TRUE, output = "text")
}