chat: Chat with an LLM

View source: R/chat.R

chat {llm.api}	R Documentation

Chat with an LLM

Description

Send a message to a Large Language Model and get a response.

Usage

chat(prompt, model = NULL, system = NULL, history = NULL, temperature = NULL,
     max_tokens = NULL,
     provider = c("auto", "openai", "anthropic", "moonshot", "ollama"),
     stream = FALSE, ...)

Arguments

prompt

Character. The user message to send.

model

Character. Model name (e.g., "gpt-4o", "claude-3-5-sonnet-latest", "llama3.2").

system

Character or NULL. System prompt to set context.

history

List or NULL. Previous conversation turns, typically the history element of a previous chat() result.

temperature

Numeric or NULL. Sampling temperature (0-2); higher values give more varied output.

max_tokens

Integer or NULL. Maximum number of tokens in the response.

provider

Character. Which API provider to use: "auto", "openai", "anthropic", "moonshot", or "ollama".

stream

Logical. If TRUE, print the response incrementally as it arrives.

...

Additional parameters passed to the API.

Value

A list with:

content

The assistant's response text

model

Model used

usage

Token usage (if available)

history

Updated conversation history

Examples

## Not run: 
# Simple chat
chat("What is 2+2?")

# With system prompt
chat("Explain R", system = "You are a helpful programming tutor.")
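
# Tune sampling and cap reply length (a sketch combining the
# model, provider, temperature and max_tokens arguments above;
# the model name is illustrative)
chat("Summarise base R's apply family",
     model = "gpt-4o", provider = "openai",
     temperature = 0.2, max_tokens = 200)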

# Continue conversation
result <- chat("Hello")
chat("Tell me more", history = result$history)
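
# Inspect the components of the returned list (see Value)
result$content  # the assistant's response text
result$model    # model used
result$usage    # token usage, if the provider reports it

# Stream a reply as it arrives; assumes a local Ollama server
# with the llama3.2 model pulled
chat("Write a haiku about R", provider = "ollama",
     model = "llama3.2", stream = TRUE)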

## End(Not run)

llm.api documentation built on April 16, 2026, 5:08 p.m.