| gemini_chat | R Documentation | 
Generate text from text with Gemini
gemini_chat(
  prompt,
  history = list(),
  model = "2.0-flash",
  temperature = 1,
  maxOutputTokens = 8192,
  topK = 40,
  topP = 0.95,
  seed = 1234
)
| prompt | The prompt to generate text from. | 
| history | A history object used to keep track of the conversation. | 
| model | The model to use. Options are "2.0-flash", "2.0-flash-lite", and "2.5-pro-exp-03-25". Default is "2.0-flash". See https://ai.google.dev/gemini-api/docs/models/gemini | 
| temperature | The sampling temperature to use. Default is 1; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters | 
| maxOutputTokens | The maximum number of tokens to generate. Default is 8192; 100 tokens correspond to roughly 60-80 words. | 
| topK | The top-k value to use. Default is 40; the value should be between 0 and 100. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters | 
| topP | The top-p value to use. Default is 0.95; the value should be between 0 and 1. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters | 
| seed | The random seed to use. Default is 1234; the value should be an integer. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters | 
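The generation parameters above can be overridden on each call. A minimal sketch of tuning them for more deterministic output, assuming setAPI() has already been called with a valid key; the prompt and parameter values are illustrative only:

library(gemini.R)
setAPI("YOUR_API_KEY")

# Lower temperature, top-k, and top-p make sampling less random;
# a fixed seed makes repeated calls more reproducible.
chats <- gemini_chat(
  "Summarise the plot of Hamlet in two sentences.",
  model = "2.0-flash",
  temperature = 0.2,
  topK = 20,
  topP = 0.9,
  seed = 42
)
print(chats$outputs)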
A list containing the generated text (outputs) and the updated conversation history (history).
https://ai.google.dev/docs/gemini_api_overview#chat
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
chats <- gemini_chat("Pretend you're a snowman and stay in character for each")
print(chats$outputs)
chats <- gemini_chat("What's your favorite season of the year?", chats$history)
print(chats$outputs)
chats <- gemini_chat("How do you think about summer?", chats$history)
print(chats$outputs)
## End(Not run)