
# llm.api

"Llamar" is Spanish for "to call"

Minimal-dependency LLM chat interface. Part of cornyverse.

## Exports

| Function | Purpose |
|----------|---------|
| `chat(prompt, model)` | Chat with any LLM |
| `chat_openai(prompt)` | OpenAI GPT models |
| `chat_claude(prompt)` | Anthropic Claude models |
| `chat_ollama(prompt)` | Local Ollama server |
| `list_ollama_models()` | List Ollama models |
| `llm_base(url)` | Set API endpoint |
| `llm_key(key)` | Set API key |
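For instance, `llm_base()` and `llm_key()` could point the client at any OpenAI-compatible endpoint. A sketch only — the URL, key, and model name below are placeholders, not values the package ships with:

```r
# Hypothetical: target a local OpenAI-compatible server instead of a hosted API
llm_base("http://localhost:8080/v1")
llm_key("not-needed-locally")

chat("Hello", model = "local-model")
```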

## Providers

OpenAI, Anthropic (Claude), local Ollama, and Moonshot/Kimi. `chat()` auto-detects the provider from the model name.

## Usage

```r
# Auto-detect provider from model
chat("Hello", model = "gpt-4o")
chat("Hello", model = "claude-3-5-sonnet-latest")
chat("Hello", model = "kimi-k2")

# Use convenience wrappers
chat_ollama("What is R?")
chat_claude("Explain machine learning")

# Explicit Moonshot/Kimi provider
chat("Write a fast parser in R", provider = "moonshot", model = "kimi-k2")

# Conversation history
result <- chat("Hi, I'm Troy")
chat("What's my name?", history = result$history)

# Streaming
chat("Write a story", stream = TRUE)
```

Set MOONSHOT_API_KEY to use Moonshot/Kimi without overriding your OpenAI credentials.
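The key can be set for the current session only, for example (a sketch; the key value is a placeholder):

```r
# Set the Moonshot key for this R session; OPENAI_API_KEY stays untouched
Sys.setenv(MOONSHOT_API_KEY = "sk-your-moonshot-key")

chat("Hello", model = "kimi-k2")  # routed to Moonshot, not OpenAI
```

For persistence across sessions, the same variable can go in `~/.Renviron`.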

## Dependencies

Only curl and jsonlite. No tidyverse, no compiled code.
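To illustrate why those two packages suffice: a chat-completions call is just a JSON POST. The sketch below is hypothetical — it is not the package's actual internals, and the endpoint path assumes an OpenAI-style API:

```r
library(curl)
library(jsonlite)

# Illustrative sketch only: one chat request with nothing but curl + jsonlite
sketch_chat <- function(prompt, model,
                        base = "https://api.openai.com/v1",
                        key  = Sys.getenv("OPENAI_API_KEY")) {
  h <- new_handle()
  handle_setheaders(h,
    "Content-Type"  = "application/json",
    "Authorization" = paste("Bearer", key)
  )
  # auto_unbox = TRUE keeps scalars as JSON scalars, not length-1 arrays
  body <- toJSON(list(
    model    = model,
    messages = list(list(role = "user", content = prompt))
  ), auto_unbox = TRUE)
  handle_setopt(h, postfields = body)

  res <- curl_fetch_memory(paste0(base, "/chat/completions"), handle = h)
  fromJSON(rawToChar(res$content))$choices$message$content
}
```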




llm.api documentation built on April 16, 2026, 5:08 p.m.