View source: R/provider-vllm.R
chat_vllm | R Documentation
vLLM is an open-source library that provides an efficient and convenient model server for LLMs. You can use
chat_vllm()
to connect to endpoints powered by vLLM.
chat_vllm(
  base_url,
  system_prompt = NULL,
  turns = NULL,
  model,
  seed = NULL,
  api_args = list(),
  api_key = vllm_key(),
  echo = NULL
)
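As a quick illustration, here is a sketch of a typical call using the arguments above. The endpoint URL and model name are placeholders (a live vLLM server is needed to actually run it), and it assumes the package providing chat_vllm() is attached.

chat <- chat_vllm(
  base_url = "http://my-vllm.com",               # placeholder endpoint
  model = "my-model",                            # hypothetical model id served by your vLLM instance
  system_prompt = "You are a concise assistant."
)
chat$chat("Summarise vLLM in one sentence.")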
base_url |
The base URL of the vLLM endpoint. |
system_prompt |
A system prompt to set the behavior of the assistant. |
turns |
A list of Turns to start the chat with (i.e., continuing a previous conversation); see the continuation sketch after the examples. If not provided, the conversation begins from scratch. |
model |
The model to use for the chat. There is no default, so it must be supplied. |
seed |
Optional integer seed that the model uses to try and make output more reproducible. |
api_args |
Named list of arbitrary extra arguments appended to the body of every chat API call. |
api_key |
The API key to use for authentication. You generally should
not supply this directly, but instead set the VLLM_API_KEY environment variable (see the sketch after this list). |
echo |
Controls how much of the response is echoed as it streams in.
Note this only affects the chat() method. |
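The following sketch shows one way to supply the key and extra request arguments. It assumes authentication goes through the VLLM_API_KEY environment variable (implied by the vllm_key() default) and that the server accepts OpenAI-style body fields such as temperature; adapt the names to your deployment.

# Assumption: vllm_key() reads the VLLM_API_KEY environment variable.
# Setting it in .Renviron is preferable to Sys.setenv() in scripts.
Sys.setenv(VLLM_API_KEY = "sk-placeholder")

chat <- chat_vllm(
  base_url = "http://my-vllm.com",
  model = "my-model",                  # hypothetical model id
  seed = 42,                           # ask for more reproducible output
  api_args = list(temperature = 0.2)   # extra body field; assumes the server accepts it
)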
chat_vllm() returns a Chat object.
## Not run:
chat <- chat_vllm("http://my-vllm.com")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
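To continue an earlier conversation via the turns argument, one option (a sketch, assuming the Chat object exposes get_turns(), as in current releases) is:

## Not run:
# Take the turns from an existing chat ...
previous_turns <- chat$get_turns()

# ... and seed a new session with them (model id is a placeholder)
chat2 <- chat_vllm("http://my-vllm.com", model = "my-model", turns = previous_turns)
chat2$chat("Now explain the second joke in more detail")
## End(Not run)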