LanguageModelParams.Rd
Constructs an object containing parameters for generating inferences from a model. The included parameters are those held in common across most LLM implementations. Leaving a parameter at the default (NULL) inherits the default of the model (backend).
LanguageModelParams(temperature = NULL, top_p = NULL, top_k = NULL, max_tokens = NULL, presence_penalty = NULL, frequency_penalty = NULL, stop = NULL)
temperature: Sampling temperature, with higher values being more random. Set to 0 for stable outputs.
top_p: Limit token selection to those contributing to the top_p fraction of probability mass.
top_k: Limit token selection to the top_k most probable tokens.
max_tokens: Maximum number of tokens to generate.
presence_penalty: Positive numbers penalize tokens already appearing in the context, while negative numbers do the opposite. Typical range is between -2 and 2.
frequency_penalty: Positive numbers penalize tokens that appear frequently in the context, while negative numbers do the opposite. Typical range is between -2 and 2.
stop: A character vector of strings that, when generated, cause the model to stop generating.
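As a minimal sketch of constructing a parameter object (the specific values are illustrative, not recommendations):

params <- LanguageModelParams(
  temperature = 0,   # deterministic output
  max_tokens = 256,  # cap the response length
  stop = c("END")    # halt generation at this string
)
# Parameters left unspecified (NULL) inherit the model/backend defaults.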
Typically the above arguments are passed to chat and predict. Constructing an instance of LanguageModelParams directly is more for advanced use.
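A hedged sketch of both styles; the model object m is a placeholder, and chat()'s full signature is not documented on this page:

# Typical use: pass generation parameters directly to chat().
chat(m, "Summarize the report.", temperature = 0.2, top_p = 0.9)

# Advanced use: construct the parameter object explicitly, e.g. for reuse
# across calls (how it is supplied to chat/predict depends on their APIs).
params <- LanguageModelParams(temperature = 0.2, top_p = 0.9)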
A LanguageModelParams object