| rate_limits_per_minute | R Documentation |
Description

rate_limits_per_minute() reports the rate limits for a given API model. The function returns the available requests per minute (RPM) and tokens per minute (TPM). General information is available at
https://developers.openai.com/api/docs/models/model-endpoint-compatibility.
Usage

rate_limits_per_minute(
  model = "gpt-4o-mini",
  AI_tool = "OpenAI",
  api_key = NULL
)
Arguments

model
    Character string with the name of the completion model. Default is
    "gpt-4o-mini".

AI_tool
    Character string specifying the AI tool from which the API is issued.
    Currently supports "OpenAI" and "Groq".

api_key
    Character string with the API key. For OpenAI, use get_api_key(); for
    Groq, use get_api_key_groq().
Value

A tibble including variables with information about the model used and the number of requests and tokens per minute.
Examples
## Not run:
set_api_key()
rate_limits_per_minute(
model = "gpt-4o-mini",
AI_tool = "OpenAI",
api_key = get_api_key()
)
# Groq example
rate_limits_per_minute(
model = "llama3-70b-8192",
AI_tool = "Groq",
api_key = get_api_key_groq()
)
## End(Not run)
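The returned tibble can be used to pace batched requests so they stay under the reported limit. A minimal sketch, assuming the tibble exposes a requests-per-minute column (the column name `requests_per_minute` and the `prompts` vector below are assumptions for illustration; inspect `names()` of the actual result):

```r
## Not run:
limits <- rate_limits_per_minute(
  model = "gpt-4o-mini",
  AI_tool = "OpenAI",
  api_key = get_api_key()
)

# Assumed column name; check names(limits) for the real one
rpm <- limits$requests_per_minute

# Seconds to wait between consecutive requests to respect the RPM limit
delay <- 60 / rpm

for (prompt in prompts) {  # prompts: hypothetical character vector of inputs
  # ... send one API request here ...
  Sys.sleep(delay)
}
## End(Not run)
```

Pausing `60 / rpm` seconds between calls spreads requests evenly across each minute, which is a simple alternative to reacting to rate-limit errors after the fact.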