groq: Call the Groq API to interact with fast open-source models on Groq

View source: R/api_functions.R

groq {tidyllm}    R Documentation

Call the Groq API to interact with fast open-source models hosted on Groq

Description

Call the Groq API to interact with fast open-source models hosted on Groq.

Usage

groq(
  .llm,
  .model = "llama-3.2-90b-text-preview",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .api_url = "https://api.groq.com/",
  .timeout = 60,
  .verbose = FALSE,
  .wait = TRUE,
  .min_tokens_reset = 0L
)

Arguments

.llm

An existing LLMMessage object or an initial text prompt.

.model

The model identifier (default: "llama-3.2-90b-text-preview").

.max_tokens

The maximum number of tokens to generate (default: 1024).

.temperature

Controls randomness in response generation (optional).

.top_p

Nucleus sampling parameter (optional).

.frequency_penalty

Penalizes tokens according to how frequently they have already appeared (optional).

.presence_penalty

Penalizes tokens that have already appeared at all, discouraging repeated content (optional).

.api_url

Base URL for the API (default: "https://api.groq.com/").

.timeout

Request timeout in seconds (default: 60).

.verbose

Should additional information be shown after the API call? (default: FALSE)

.wait

Should the function wait when rate limits are hit? (default: TRUE)

.min_tokens_reset

The number of remaining tokens below which the function waits for the rate-limit token allowance to reset (default: 0L).

Value

Returns an updated LLMMessage object.
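
Examples

A minimal sketch of typical usage, assuming tidyllm's llm_message()
constructor and a GROQ_API_KEY environment variable for authentication:

## Not run: 
library(tidyllm)

# Build a prompt and send it to Groq with the default model
conversation <- llm_message("Explain the difference between a list and a vector in R.") |>
  groq()

# Lower temperature and a smaller token budget for terser, more deterministic output
conversation <- llm_message("Summarize R's apply family in one paragraph.") |>
  groq(.temperature = 0.2, .max_tokens = 512)

## End(Not run)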
