View source: R/replicateAPI4R.R
replicatellmAPI4R    R Documentation

Description

This function interacts with the Replicate API (v1) to run large language models (LLMs) such as Llama. It sends a POST request with the supplied input and handles both streaming and non-streaming responses.

Usage
replicatellmAPI4R(
  input,
  model_url,
  simple = TRUE,
  fetch_stream = FALSE,
  api_key = Sys.getenv("Replicate_API_KEY")
)
Arguments

input
    A list containing the API request body, with parameters including prompt, max_tokens, top_k, top_p, min_tokens, temperature, system_prompt, presence_penalty, and frequency_penalty.

model_url
    A character string specifying the model endpoint URL (e.g., "/models/meta/meta-llama-3.1-405b-instruct/predictions").

simple
    A logical value indicating whether to return a simplified output (only the model output) if TRUE, or the full API response if FALSE. Default is TRUE.

fetch_stream
    A logical value indicating whether to fetch a streaming response. Default is FALSE.

api_key
    A character string containing the Replicate API key. Defaults to the environment variable "Replicate_API_KEY".
Value

If fetch_stream is FALSE, either a simplified output (if simple is TRUE) or the full API response. If fetch_stream is TRUE, the response stream is written directly to the console.
Author(s)

Satoshi Kume
Examples

## Not run:
# Set your Replicate API key (replace with your actual key)
Sys.setenv(Replicate_API_KEY = "Your API key")

# Replicate expects the request parameters nested under "input"
input <- list(
  input = list(
    prompt = "What is the capital of France?",
    max_tokens = 1024,
    top_k = 50,
    top_p = 0.9,
    min_tokens = 0,
    temperature = 0.6,
    system_prompt = "You are a helpful assistant.",
    presence_penalty = 0,
    frequency_penalty = 0
  )
)

# Model endpoint for Meta's Llama 3.1 405B Instruct
model_url <- "/models/meta/meta-llama-3.1-405b-instruct/predictions"

# Send the request and print the model's answer
response <- replicatellmAPI4R(input, model_url)
print(response)
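
# Streaming mode (illustrative): with fetch_stream = TRUE the response is
# streamed to the console as it arrives rather than returned. Whether
# streaming is supported depends on the model endpoint.
replicatellmAPI4R(input, model_url, fetch_stream = TRUE)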
## End(Not run)
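
For orientation, the non-streaming request flow can be sketched directly with httr and jsonlite. This is a minimal illustration of the public Replicate API v1 prediction protocol (POST the input, then poll the returned prediction URL until the run finishes), not the package's actual implementation; the base URL and field names (status, urls$get, output) follow the Replicate API documentation.

library(httr)
library(jsonlite)

# Reuses the `input` and `model_url` objects from the example above.
api_key <- Sys.getenv("Replicate_API_KEY")

# 1. Create the prediction.
res <- POST(
  url = paste0("https://api.replicate.com/v1", model_url),
  add_headers(Authorization = paste("Bearer", api_key)),
  content_type_json(),
  body = toJSON(input, auto_unbox = TRUE)
)
prediction <- content(res, as = "parsed")

# 2. Poll the prediction URL until the run finishes.
while (prediction$status %in% c("starting", "processing")) {
  Sys.sleep(1)
  prediction <- content(
    GET(prediction$urls$get,
        add_headers(Authorization = paste("Bearer", api_key))),
    as = "parsed"
  )
}

# 3. Inspect the output (typically a list of text chunks for LLM endpoints).
prediction$output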