View source: R/complete_prompt.R
complete_prompt (R Documentation)
Submits a text prompt to OpenAI's "Completion" API endpoint and formats the response into a string or tidy dataframe. (Note that, as of 2024, this endpoint is considered "Legacy" by OpenAI and is likely to be deprecated.)
Usage:

complete_prompt(
prompt,
model = "gpt-3.5-turbo-instruct",
openai_api_key = Sys.getenv("OPENAI_API_KEY"),
max_tokens = 1,
temperature = 0,
seed = NULL,
parallel = FALSE
)
Arguments:

prompt: The text prompt to be completed.

model: Which OpenAI model to use. Defaults to 'gpt-3.5-turbo-instruct'.

openai_api_key: Your API key. By default, looks for a system environment variable called "OPENAI_API_KEY" (the recommended option). Otherwise, it will prompt you to enter the API key as an argument.

max_tokens: How many tokens (roughly 4 characters of text) should the model return? Defaults to a single token (next-word prediction).

temperature: A numeric between 0 and 2. When set to zero, the model always returns the most probable next token. For values greater than zero, the model selects the next word probabilistically.

seed: An integer. If specified, the OpenAI API will "make a best effort to sample deterministically".

parallel: TRUE to submit API requests in parallel. Setting to FALSE can reduce rate limit errors at the expense of a longer runtime.
Value:

If max_tokens = 1, returns a dataframe with the 5 most likely next words and their probabilities. If max_tokens > 1, returns a single string of text generated by the model.
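As a sketch of the two return types described above (not run, since it requires a valid OPENAI_API_KEY; the prompts and token counts are illustrative):

## Not run:
# Default max_tokens = 1: returns a dataframe of the 5 most likely
# next words and their probabilities
probs <- complete_prompt('I feel like a')

# max_tokens > 1: returns a single string of generated text
text <- complete_prompt('I feel like a', max_tokens = 20)
## End(Not run)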
Examples:

## Not run:
complete_prompt('I feel like a')
complete_prompt('Here is my haiku about frogs:',
                max_tokens = 100)
## End(Not run)
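The sampling-related arguments documented above can be combined in a single call; a sketch (not run, and the specific values chosen are illustrative assumptions, not recommendations):

## Not run:
complete_prompt('Here is my haiku about frogs:',
                max_tokens = 100,
                temperature = 0.7,  # sample next tokens probabilistically
                seed = 123)         # best-effort deterministic sampling
## End(Not run)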