convenience-models.Rd
These functions provide convenient access to common local models.
llama(server = ollama_server(), temperature = 0, ...)
llamafile_llama(temperature = 0, ...)
llama_vision(server = ollama_server(), temperature = 0, ...)
nomic(server = ollama_server(), temperature = 0, ...)
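For example, a chat model can be constructed with the defaults, or with a higher temperature for more varied output. A minimal sketch, assuming Ollama is installed and the corresponding model weights are available locally:

# Llama chat model with the default temperature of 0
chat <- llama()

# Override the temperature for less deterministic output
chat_creative <- llama(temperature = 0.7)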
The Ollama-based models require an Ollama server to be running. By default, these functions call ollama_server to start the server if it is not already running. See the cautionary note in ollama_agent for the potential pitfalls of this convenience.
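If finer control over the server lifecycle is preferred, the handle returned by ollama_server can be obtained once and passed explicitly through the server argument. A sketch under that assumption:

# Start (or connect to) the local Ollama server once
srv <- ollama_server()

# Reuse the same server for multiple models
chat <- llama(server = srv)
vision <- llama_vision(server = srv)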
temperature: The temperature of the model, defaulting to zero for stable output during testing and demos.
...: Additional model parameters; see LanguageModelParams.
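The ... argument forwards any additional parameters to the model. In the sketch below, top_p is purely a hypothetical example of such a parameter; consult LanguageModelParams for the fields actually supported:

# top_p is a hypothetical example of a LanguageModelParams field
chat <- llama(temperature = 0.2, top_p = 0.9)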
llama, llama_vision, and nomic are all pulled from Ollama, so Ollama must be installed.
llamafile_llama is meant as a quick start. It will download and cache a self-contained, cross-platform llamafile binary using the same weights as llama. No Ollama required.
See llama_cpp_agent for resolving OS-specific issues with llamafile_llama.
Each of these functions returns an Agent.
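As a quick illustration of the no-Ollama path, a sketch; the first call below downloads and caches the llamafile binary, which may take a while:

# Self-contained llamafile backend; no Ollama installation needed
chat <- llamafile_llama()

# The same temperature argument applies as for the Ollama-backed variant
chat_warm <- llamafile_llama(temperature = 0.5)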