View source: R/api_functions.R
claude | R Documentation
Call the Anthropic API to interact with Claude models
claude(
  .llm,
  .model = "claude-3-5-sonnet-20240620",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_k = NULL,
  .top_p = NULL,
  .metadata = NULL,
  .stop_sequences = NULL,
  .tools = NULL,
  .api_url = "https://api.anthropic.com/",
  .verbose = FALSE,
  .wait = TRUE,
  .min_tokens_reset = 0L,
  .timeout = 60,
  .stream = FALSE
)
.llm
An existing LLMMessage object or an initial text prompt.

.model
The model identifier (default: "claude-3-5-sonnet-20240620").

.max_tokens
The maximum number of tokens to generate (default: 1024).

.temperature
Controls randomness in response generation (optional).

.top_k
Top-k sampling parameter (optional).

.top_p
Nucleus sampling parameter (optional).

.metadata
Additional metadata for the request (optional).

.stop_sequences
Sequences that stop generation (optional).

.tools
Additional tools available to the model (optional).

.api_url
Base URL for the API (default: "https://api.anthropic.com/").

.verbose
Should additional information be shown after the API call? (default: FALSE).

.wait
Should the function wait when rate limits are hit? (default: TRUE).

.min_tokens_reset
Minimum number of tokens that must remain in the rate limit; below this threshold, the function waits for the token limit to reset (default: 0L).

.timeout
Request timeout in seconds (default: 60).

.stream
Stream the response back piece by piece (default: FALSE).
Returns an updated LLMMessage object.
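A minimal usage sketch follows. It assumes an llm_message() constructor for building the initial LLMMessage object and an ANTHROPIC_API_KEY environment variable for authentication; both are assumptions about the surrounding package, not part of this help page.

```r
# Sketch only: llm_message() and the package name are assumed here,
# and ANTHROPIC_API_KEY must be set in the environment.
Sys.setenv(ANTHROPIC_API_KEY = "sk-ant-...")  # placeholder key

msg <- llm_message("Summarise the plot of Hamlet in two sentences.") |>
  claude(
    .model = "claude-3-5-sonnet-20240620",
    .max_tokens = 512,
    .temperature = 0.3
  )

# claude() returns the updated LLMMessage object, so calls can be
# chained to continue the same conversation:
msg <- msg |> claude(.max_tokens = 256)
```

Because the function both accepts and returns an LLMMessage, it composes naturally with the pipe for multi-turn exchanges.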