These functions provide convenient access to common local models.

llama(server = ollama_server(), temperature = 0, ...)
llamafile_llama(temperature = 0, ...)
llama_vision(server = ollama_server(), temperature = 0, ...)
nomic(server = ollama_server(), temperature = 0, ...)

Arguments

server

The Ollama-based models require a running Ollama server. By default, this argument calls ollama_server, which starts the server if it is not already running. See the cautionary note in ollama_agent for the potential pitfalls of this convenience.

temperature

The temperature of the model, defaulting to zero so that output is stable during testing and demos.

...

Additional model parameters; see LanguageModelParams.

Details

The llama, llama_vision and nomic models are all pulled from Ollama, so Ollama must be installed.
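
For example, assuming Ollama is installed (the server is started automatically via ollama_server), an Ollama-backed agent can be driven with the same instruct/predict chain used in the Examples below; the temperature value and prompt here are purely illustrative:

    # Sketch only: starts the Ollama server if needed, then queries llama
    llama(temperature = 0.2) |>
        instruct("Answer questions about the mtcars dataset:", mtcars) |>
        predict("Which cars combine high hp with good fuel efficiency?")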

llamafile_llama is meant as a quick start: it downloads and caches a self-contained, cross-platform llamafile binary that uses the same weights as llama, so Ollama is not required.

Issues with llamafile

See llama_cpp_agent for resolving OS-specific issues with llamafile_llama.

Value

An Agent

Author

Michael Lawrence

Examples

if (FALSE) { # \dontrun{
    llamafile_llama() |>
        instruct("Answer questions about the mtcars dataset:", mtcars) |> 
        predict("Relationship between hp and fuel efficiency")
} # }
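
The same workflow can be run against an Ollama-backed model; passing the server explicitly is shown only to make the Ollama dependency visible, since ollama_server() is already the default. A sketch, not run:

if (FALSE) { # \dontrun{
    # Equivalent to llama(): ollama_server() is the default server argument
    llama(server = ollama_server()) |>
        instruct("Answer questions about the mtcars dataset:", mtcars) |>
        predict("Relationship between hp and fuel efficiency")
} # }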