View source: R/complete_chat.R
complete_chat (R Documentation)
Submits a prompt to OpenAI's "Chat" API endpoint and formats the response into a string or tidy dataframe.
complete_chat(
  prompt,
  model = "gpt-3.5-turbo",
  openai_api_key = Sys.getenv("OPENAI_API_KEY"),
  max_tokens = 1,
  temperature = 0,
  seed = NULL,
  parallel = FALSE
)
prompt
    The prompt to complete.

model
    Which OpenAI model to use. Defaults to "gpt-3.5-turbo".

openai_api_key
    Your API key. By default, looks for a system environment variable called "OPENAI_API_KEY" (the recommended option). Otherwise, supply your API key directly through this argument.

max_tokens
    How many tokens (roughly 4 characters of text each) the model should return. Defaults to a single token (next-word prediction).

temperature
    A number between 0 and 2. When set to 0, the model always returns the most probable next token. For values greater than 0, the model selects the next token probabilistically, with higher values producing more varied output.

seed
    An integer. If specified, the OpenAI API will "make a best effort to sample deterministically", so repeated requests with the same seed tend to return the same result.

parallel
    TRUE to submit API requests in parallel. Setting to FALSE can reduce rate-limit errors at the expense of longer runtime.
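Since complete_chat() reads the key from the "OPENAI_API_KEY" environment variable by default, a typical session sets it once before issuing requests. A minimal sketch (the key shown is a placeholder, not a real credential):

```r
# Set the key for the current R session; for persistence, put the
# equivalent line in ~/.Renviron instead.
Sys.setenv(OPENAI_API_KEY = "sk-your-key-here")  # placeholder value

# With max_tokens = 1 (the default), the call returns a dataframe of
# the most likely next tokens and their probabilities.
format_chat("Are frogs sentient? Yes or No.") |> complete_chat()
```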
If max_tokens = 1, returns a dataframe with the five most likely next-token responses and their probabilities. If max_tokens > 1, returns a single string of text generated by the model.
## Not run:
format_chat('Are frogs sentient? Yes or No.') |> complete_chat()
format_chat('Write a haiku about frogs.') |> complete_chat(max_tokens = 100)
## End(Not run)
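Because temperature = 0 makes token selection greedy and seed asks the API to sample deterministically, combining the two is a reasonable way to aim for reproducible output. This is a best effort per the API's own wording, not a guarantee:

```r
# Longer completion with (best-effort) reproducible sampling.
# seed = 42 is an arbitrary illustrative value.
format_chat("Write a haiku about frogs.") |>
  complete_chat(max_tokens = 100, temperature = 0, seed = 42)
```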