chat4R — R Documentation
This function uses the OpenAI API to interact with a chat model (default: gpt-4o-mini) and generates a response to the user's input. Currently, "gpt-4o-mini" (default), "gpt-4o", "gpt-4", and "gpt-4-turbo" can be selected as the OpenAI model.
Usage

chat4R(
  content,
  Model = "gpt-4o-mini",
  temperature = 1,
  simple = TRUE,
  fromJSON_parsed = FALSE,
  check = FALSE,
  api_key = Sys.getenv("OPENAI_API_KEY")
)
Arguments

content: A string containing the user's input message.

Model: A string specifying the GPT model to use (default: "gpt-4o-mini").

temperature: A numeric value controlling the randomness of the model's output (default: 1).

simple: Logical; if TRUE, only the content of the model's message is returned.

fromJSON_parsed: Logical; if TRUE, the response content is parsed from JSON.

check: Logical; if TRUE, prints detailed error information (message, type, param, code) when the API response includes an error. If there is no error, "No error" is printed.

api_key: A string containing the user's OpenAI API key. Defaults to the value of the environment variable "OPENAI_API_KEY".
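As a sketch of how these arguments interact (assuming the chatAI4R package is installed, network access is available, and a valid API key is set; the exact structure of the full response object is not specified above, so `str()` is used to inspect it):

```r
# Sketch only: requires the chatAI4R package, network access, and a valid
# OPENAI_API_KEY. The "sk-..." key below is a placeholder.
library(chatAI4R)

Sys.setenv(OPENAI_API_KEY = "sk-...")

# simple = TRUE (default): returns only the model's message content
ans <- chat4R(content = "Say hello in one word.")

# simple = FALSE: returns the richer response object (data frame or list)
full <- chat4R(content = "Say hello in one word.", simple = FALSE)
str(full)
```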
Value

A data frame or list containing the response from the GPT model, depending on the arguments.

Author(s)

Satoshi Kume
Examples

## Not run:
Sys.setenv(OPENAI_API_KEY = "Your API key")
chat4R(content = "What is the capital of France?", check = TRUE)
## End(Not run)
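The following additional calls, a sketch assuming the same setup as the example above, illustrate the Model and temperature arguments:

```r
## Not run:
# Select a different model and lower the sampling temperature
# for a more deterministic answer
chat4R(content = "Summarize the plot of Hamlet in one sentence.",
       Model = "gpt-4o",
       temperature = 0.2)
## End(Not run)
```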