llmr_response                                          R Documentation

Description:

     A lightweight S3 container for generative model calls. It
     standardizes finish reasons and token usage across providers and
     keeps the raw response for advanced users.

     finish_reason() returns the standardized finish reason for an
     llmr_response. tokens() returns a list with token counts for an
     llmr_response. is_truncated() is a convenience check for
     truncation due to token limits.
Usage:

     finish_reason(x)

     tokens(x)

     is_truncated(x)

     ## S3 method for class 'llmr_response'
     as.character(x, ...)

     ## S3 method for class 'llmr_response'
     print(x, ...)
Arguments:

     x    An llmr_response object.

     ...  Ignored.
Details:

     An llmr_response object has the following fields:

     text           Character scalar. Assistant reply.

     provider       Character. Provider id (e.g., "openai", "gemini").

     model          Character. Model id.

     finish_reason  One of "stop", "length", "filter", "tool", "other".

     usage          List with integers sent, rec, total, and reasoning
                    (if available).

     response_id    Provider's response identifier, if present.

     duration_s     Numeric; seconds from request to parse.

     raw            Parsed provider JSON (list).

     raw_json       Raw JSON string.
     print() shows the text, then a compact status line with model,
     finish reason, token counts, and a terse hint if truncated or
     filtered. as.character() extracts text, so the object remains
     drop-in for code that expects a character return.
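As a sketch of this drop-in behavior, the snippet below defines a simplified stand-in for the as.character() method (the real method is registered by the LLMR package; it is redefined here only so the example runs on its own, with a fabricated response object and no network call):

```r
# Simplified stand-in for the as.character() method LLMR registers;
# the package's own method is used when LLMR is attached.
as.character.llmr_response <- function(x, ...) x$text

# Fabricated response object (no network call):
r <- structure(
  list(text = "Hello!", finish_reason = "stop"),
  class = "llmr_response"
)

# Code written for a plain character return keeps working:
nchar(as.character(r))                 # 6
paste("Model said:", as.character(r))  # "Model said: Hello!"
```

as.character() is an internal generic in base R, so the S3 method dispatches on the "llmr_response" class without any extra setup.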
Value:

     finish_reason(): a length-1 character vector, or NA_character_.

     tokens(): a list, list(sent, rec, total, reasoning). Missing
     values are NA.

     is_truncated(): TRUE if truncated, otherwise FALSE.
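One plausible reading of these semantics, consistent with the standardized finish reasons above, is that truncation corresponds to finish_reason "length" (token limit hit). The helper is_truncated_sketch() below is an assumption for illustration, not the package's actual implementation:

```r
# Hypothetical sketch: treat finish_reason "length" as truncation.
is_truncated_sketch <- function(x) identical(x$finish_reason, "length")

r_full <- structure(list(text = "Done.", finish_reason = "stop"),
                    class = "llmr_response")
r_cut  <- structure(list(text = "Partial...", finish_reason = "length"),
                    class = "llmr_response")

is_truncated_sketch(r_full)  # FALSE
is_truncated_sketch(r_cut)   # TRUE
```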
See Also:

     call_llm(), call_llm_robust(), llm_chat_session(), llm_config(),
     llm_mutate(), llm_fn()
Examples:

     # Minimal fabricated example (no network):
     r <- structure(
       list(
         text = "Hello!",
         provider = "openai",
         model = "demo",
         finish_reason = "stop",
         usage = list(sent = 12L, rec = 5L, total = 17L,
                      reasoning = NA_integer_),
         response_id = "resp_123",
         duration_s = 0.012,
         raw = list(choices = list(list(message = list(content = "Hello!")))),
         raw_json = "{}"
       ),
       class = "llmr_response"
     )

     as.character(r)
     finish_reason(r)
     tokens(r)
     print(r)
     ## Not run:
     fr <- finish_reason(r)

     u <- tokens(r)
     u$total

     if (is_truncated(r)) message("Increase max_tokens")
     ## End(Not run)