
View source: R/chat_gpt.R

chat_gpt    R Documentation

Generate a response to a natural language prompt using the OpenAI GPT-3.5 Turbo language model

Description

This function sends a natural language prompt to the OpenAI API and returns a response generated by the specified language model. The response can be used for a variety of tasks, such as generating text, code snippets, or even conversation.

Usage

chat_gpt(prompt, model = "gpt-3.5-turbo", temperature = 0.5,
max_tokens = 2048, n = 1, openai_api_key = NULL, verbose = TRUE)

Arguments

prompt

A character string containing the natural language prompt to send to the OpenAI API.

model

A character string specifying the ID of the OpenAI language model to use for generating the response. Default is "gpt-3.5-turbo", which is a high-performance model optimized for generating text.

temperature

A numeric value specifying the "creativity" of the response generated by the language model. Higher values lead to more diverse and unpredictable responses, while lower values lead to more predictable and "safe" responses. Default is 0.5.

max_tokens

An integer value specifying the maximum number of tokens to include in the response generated by the language model. Increasing this value can lead to longer and more detailed responses, but may also increase the risk of generating nonsensical or irrelevant content. Default is 2048.

n

An integer value controlling the number of responses to generate for the prompt. Default is 1.

openai_api_key

An optional character string containing the OpenAI API key to use for authentication. If not specified, the function will attempt to retrieve the API key from the system environment variable OPENAI_API_KEY.

verbose

A logical value. If TRUE (the default), the generated response is printed to the console; if FALSE, it is returned as a character string without printing (see the sketch below).
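For instance, a minimal sketch combining an explicitly supplied key with verbose = FALSE so the result can be stored in an object (the key and the prompt below are placeholders, not values from the package):

res <- chat_gpt("Write a haiku about R.",
                openai_api_key = "your_openai_api_key",  # placeholder, not a real key
                verbose = FALSE)                          # store the response instead of printing it
res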

Details

The chat_gpt function uses the OpenAI API to generate a response to a natural language prompt. The response is generated by a specified language model, which can be customized using the model, temperature, and max_tokens arguments. By default, the function uses the "gpt-3.5-turbo" model, which is a high-performance model optimized for generating text. The temperature argument controls the "creativity" of the generated response, with higher values leading to more diverse and unpredictable responses. The max_tokens argument controls the maximum length of the generated response, with higher values leading to longer and more detailed responses. The n argument can be used to generate multiple responses to the same prompt.
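For example, a single call can adjust all of these settings at once; the prompt and values below are only illustrative:

chat_gpt("Explain what a data.frame is in one short paragraph.",
         model = "gpt-3.5-turbo",  # default model
         temperature = 0.2,        # lower value: more predictable output
         max_tokens = 256,         # cap the length of the response
         n = 2)                    # ask for two alternative responses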

Value

A character string containing the response generated by the OpenAI language model in response to the specified prompt.

Author(s)

Dan Ouchi

Examples


## Assign the API key for the current session only:
api_key <- "your_openai_api_key"
Sys.setenv(OPENAI_API_KEY = api_key)

## Persistent assignment (across sessions) via .Renviron:
usethis::edit_r_environ()
# This opens the file; add a line with your API key:
# OPENAI_API_KEY=XX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
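
## Optional sanity check: confirm the key is visible to the current session
## (Sys.getenv() returns "" if the variable is not set):
Sys.getenv("OPENAI_API_KEY")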

## How to ask chatGPT for R code:
# The best way to ask for R code is to provide a clear and specific description of the task you want to accomplish,
# along with any relevant data or variables that are required.
# It is also helpful to specify any parameters or settings you would like to include in the code,
# such as data transformations, visualizations, or statistical analyses.
# Additionally, providing a desired output or format for the results can help ensure that the code meets
# your needs.

## An example of making a request and getting a good answer from chatGPT:
main.prompt <- 'I want you to act as an R programmer. I will ask you to write scripts based on my requests and you will reply with the answer. I want you to only reply with the code, and nothing else. Do not write explanations. My request is'

chat_gpt(prompt = paste0(main.prompt, " 'Can you provide R code to create a scatterplot of two variables, x and y, from a dataset called 'data.csv'? Please label the x-axis 'X-axis' and the y-axis 'Y-axis', and include a title for the plot. Thank you.'"))

# chatGPT answer:
# library(ggplot2)
# data <- read.csv("data.csv")
# ggplot(data, aes(x = x, y = y)) +
#   geom_point() +
#   labs(x = "X-axis", y = "Y-axis", title = "Scatterplot of X and Y")


## Specifying non-default values for temperature and max_tokens
chat_gpt("What is the R code to create a color palette with more colors than the maximum available in the Spectral palette?", temperature = 0.7, max_tokens = 1024)

## Generating multiple responses
chat_gpt("What is the R code to create a color palette with more colors than the maximum available in the Spectral palette? Provide only the code, no explanations nor any other information", n = 5)

## Save the answer and use it to modify, extend, or improve the result by asking chatGPT again:
answer <- chat_gpt(prompt = paste0(main.prompt, " 'Can you provide R code to scrape the webpage idiapjgol.org?'"), verbose = FALSE, n = 1)
chat_gpt(prompt = paste0(main.prompt, " 'Can you add code to download images from the scraping code you wrote in the previous answer?' ", answer))

## Other examples:
chat_gpt(prompt = paste0(main.prompt, " Can you write R Markdown code that creates a dummy dataset, selects all numerical columns, and analyses the association between the variables using dendrogram and clustering techniques? The values must be normalized before clustering. I want to show the results in the R Markdown document using plotly for an interactive plot."))
chat_gpt(prompt = paste0(main.prompt, " 'What is the R code to separate one column into two using ',' as the separator?'"), n = 1)

