View source: R/api_functions.R
chatgpt | R Documentation
Call the OpenAI API to interact with ChatGPT or o-reasoning models
chatgpt(
  .llm,
  .model = "gpt-4o",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .top_k = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .api_url = "https://api.openai.com/",
  .timeout = 60,
  .verbose = FALSE,
  .wait = TRUE,
  .min_tokens_reset = 0L,
  .stream = FALSE
)
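A minimal usage sketch, based only on the signature and argument descriptions below: .llm is passed as an initial text prompt and the call returns an updated LLMMessage object. It assumes the package that exports chatgpt() is already attached and that a valid OpenAI API key is configured.

# Minimal sketch: .llm given as an initial text prompt, per the argument
# description below; the result is an updated LLMMessage object.
conversation <- chatgpt(
  .llm = "Explain the difference between a list and a vector in R.",
  .model = "gpt-4o",
  .max_tokens = 512
)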
.llm                An existing LLMMessage object or an initial text prompt.
.model              The model identifier (default: "gpt-4o").
.max_tokens         The maximum number of tokens to generate (default: 1024).
.temperature        Controls randomness in response generation (optional).
.top_p              Nucleus sampling parameter (optional).
.top_k              Top-k sampling parameter (optional).
.frequency_penalty  Controls how strongly frequently repeated tokens are penalized (optional).
.presence_penalty   Controls how strongly already-mentioned content is penalized (optional).
.api_url            Base URL for the API (default: "https://api.openai.com/").
.timeout            Request timeout in seconds (default: 60).
.verbose            Should additional information be shown after the API call? (default: FALSE)
.wait               Should the function wait when a rate limit is hit? (default: TRUE)
.min_tokens_reset   Number of remaining tokens in the rate limit below which the function waits for the token allowance to reset (default: 0).
.stream             Stream the response back piece by piece (default: FALSE).
Returns an updated LLMMessage object.
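The optional sampling and streaming arguments can be combined in a single call, as in the sketch below. The helper last_reply(), used here to pull the newest assistant reply out of the returned LLMMessage object, is an assumption and is not documented on this page.

# Sketch: tune sampling, stream tokens as they arrive, then extract the reply.
# last_reply() is an assumed helper, not part of the documentation above.
conversation <- "Summarize the plot of Hamlet in three sentences." |>
  chatgpt(
    .model = "gpt-4o",
    .temperature = 0.2,   # lower values give more deterministic output
    .top_p = 0.9,         # nucleus sampling cutoff
    .stream = TRUE        # print the response piece by piece as it arrives
  )

reply <- last_reply(conversation)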