View source: R/answer_as_boolean.R
answer_as_boolean    R Documentation
Make LLM answer as a boolean (TRUE or FALSE)
Usage

answer_as_boolean(
  prompt,
  true_definition = NULL,
  false_definition = NULL,
  add_instruction_to_prompt = TRUE
)
Arguments

prompt
    A single string or a tidyprompt() object

true_definition
    (optional) Definition of what would constitute TRUE. This will be
    included in the instruction to the LLM. Should be a single string

false_definition
    (optional) Definition of what would constitute FALSE. This will be
    included in the instruction to the LLM. Should be a single string
    (an illustrative use of both definitions is sketched below this list)

add_instruction_to_prompt
    (optional) Add the instruction for replying as a boolean to the prompt
    text. Set to FALSE for debugging, to check whether extractions and
    validations work as expected (without the instruction, the answer
    should fail the validation function, initiating a retry)
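A minimal sketch (not run) of how the definition arguments might be used; the
prompt text and definition strings are purely illustrative, and send_prompt()
assumes an LLM provider has been configured:

"Does this review recommend the product? 'It broke after one day.'" |>
  answer_as_boolean(
    true_definition = "The review recommends or endorses the product",
    false_definition = "The review discourages buying the product or is neutral"
  ) |>
  send_prompt()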
Value

A tidyprompt() with an added prompt_wrap() which will ensure that the LLM
response is a boolean
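Because the return value is a tidyprompt(), other prompt wraps from this
family can be piped on before sending. A sketch (not run), assuming a
configured LLM provider and that add_text() appends the given text to the
prompt; the statement text is illustrative:

"Is the following statement correct?" |>
  add_text("2 + 2 equals 4.") |>
  answer_as_boolean() |>
  send_prompt()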
See Also

Other pre_built_prompt_wraps:
add_text(),
answer_as_category(),
answer_as_integer(),
answer_as_json(),
answer_as_list(),
answer_as_multi_category(),
answer_as_named_list(),
answer_as_regex_match(),
answer_as_text(),
answer_by_chain_of_thought(),
answer_by_react(),
answer_using_r(),
answer_using_sql(),
answer_using_tools(),
prompt_wrap(),
quit_if(),
set_system_prompt()
Other answer_as_prompt_wraps:
answer_as_category(),
answer_as_integer(),
answer_as_json(),
answer_as_list(),
answer_as_multi_category(),
answer_as_named_list(),
answer_as_regex_match(),
answer_as_text()
Examples

## Not run:
"Are you a large language model?" |>
answer_as_boolean() |>
send_prompt()
# --- Sending request to LLM provider (llama3.1:8b): ---
# Are you a large language model?
#
# You must answer with only TRUE or FALSE (use no other characters).
# --- Receiving response from LLM provider: ---
# TRUE
# [1] TRUE
## End(Not run)
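A further sketch (not run), assuming a configured LLM provider: with
add_instruction_to_prompt = FALSE the boolean instruction is omitted, so a
free-form answer should fail the validation function and trigger a retry,
which can be useful for checking that extraction and validation behave as
expected.

"Are you a large language model?" |>
  answer_as_boolean(add_instruction_to_prompt = FALSE) |>
  send_prompt()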