The model configs are used to define the model and its parameters. All parameters can be
set either in the model-args string or in a model YAML file (see the example
here).
generation_parameters (GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc. Defaults to empty GenerationParameters.
system_prompt (str | None) —
Optional system prompt to be used with chat models. This prompt sets the
behavior and context for the model during evaluation.
cache_dir (str) —
Directory to cache the model. Defaults to “~/.cache/huggingface/lighteval”.
Base configuration class for all model types in Lighteval.
This is the foundation class that all specific model configurations inherit from.
It provides common functionality for parsing configuration from files and command-line arguments,
as well as shared attributes that are used by all models like generation parameters and system prompts.
Methods:
from_path(path: str):
Load configuration from a YAML file.
from_args(args: str):
Parse configuration from a command-line argument string.
_parse_args(args: str):
Static method to parse argument strings into configuration dictionaries.
Example:
# Load from YAML file
config = ModelConfig.from_path("model_config.yaml")
# Load from command line arguments
config = ModelConfig.from_args("model_name=meta-llama/Llama-3.1-8B-Instruct,system_prompt='You are a helpful assistant.',generation_parameters={temperature=0.7}")
# Direct instantiation
config = ModelConfig(
model_name="meta-llama/Llama-3.1-8B-Instruct",
generation_parameters=GenerationParameters(temperature=0.7),
system_prompt="You are a helpful assistant."
)
model_name (str) —
HuggingFace Hub model ID or path to a pre-trained model. This corresponds to the
pretrained_model_name_or_path argument in HuggingFace’s from_pretrained method.
tokenizer (str | None) —
Optional HuggingFace Hub tokenizer ID. If not specified, uses the same ID as model_name.
Useful when the tokenizer is different from the model (e.g., for multilingual models).
subfolder (str | None) —
Subfolder within the model repository. Used when models are stored in subdirectories.
revision (str) —
Git revision of the model to load. Defaults to “main”.
batch_size (PositiveInt | None) —
Batch size for model inference. If None, will be automatically determined.
max_length (PositiveInt | None) —
Maximum sequence length for the model. If None, uses model’s default.
model_loading_kwargs (dict) —
Additional keyword arguments passed to from_pretrained. Defaults to empty dict.
add_special_tokens (bool) —
Whether to add special tokens during tokenization. Defaults to True.
skip_special_tokens (bool) —
Whether the tokenizer should skip special tokens when decoding generated text. Reasoning models may need this set to False so that their special tokens are kept in the output. Defaults to True.
model_parallel (bool | None) —
Whether to use model parallelism across multiple GPUs. If None, automatically
determined based on available GPUs and model size.
dtype (str | None) —
Data type for model weights. Can be “float16”, “bfloat16”, “float32”, “auto”, “4bit”, “8bit”.
If “auto”, uses the model’s default dtype.
device (Union[int, str]) —
Device to load the model on. Can be “cuda”, “cpu”, or GPU index. Defaults to “cuda”.
trust_remote_code (bool) —
Whether to trust remote code when loading models. Defaults to False.
compile (bool) —
Whether to compile the model using torch.compile for optimization. Defaults to False.
multichoice_continuations_start_space (bool | None) —
Whether to add a space before multiple choice continuations. If None, uses model default.
True forces adding space, False removes leading space if present.
pairwise_tokenization (bool) —
Whether to tokenize context and continuation separately or together. Defaults to False.
continuous_batching (bool) —
Whether to use continuous batching for generation. Defaults to False.
override_chat_template (bool) —
If True, we force the model to use a chat template. If False, we prevent the model from using
a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for HuggingFace Transformers models.
This configuration is used to load and configure models from the HuggingFace Transformers library.
Note:
This configuration supports quantization (4-bit and 8-bit) through the dtype parameter.
When using quantization, ensure you have the required dependencies installed
(bitsandbytes for 4-bit/8-bit quantization).
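Example (a minimal sketch assuming the class is exposed as TransformersModelConfig and GenerationParameters is imported from Lighteval; the model name and values are illustrative):
config = TransformersModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    dtype="4bit",                # 4-bit quantization, requires bitsandbytes
    batch_size=8,
    max_length=4096,
    generation_parameters=GenerationParameters(temperature=0.0),
)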
base_model (str) —
HuggingFace Hub model ID or path to the base model. This is the original
pre-trained model that the delta was computed from.
Configuration class for delta models (weight difference models).
This configuration is used to load models that represent the difference between a
fine-tuned model and its base model. The delta weights are added to the base model
during loading to reconstruct the full fine-tuned model.
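Example (a minimal sketch assuming the class is exposed as DeltaModelConfig and also accepts the model_name field documented above, pointing at the delta weights; both repository names are hypothetical):
config = DeltaModelConfig(
    model_name="my-org/llama-3.1-8b-delta",   # repository holding the delta weights (assumed field)
    base_model="meta-llama/Llama-3.1-8B",     # base model the delta was computed from
)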
model_name (str) —
HuggingFace Hub model ID or path to the model to load.
tokenizer (str | None) —
HuggingFace Hub model ID or path to the tokenizer to load.
revision (str) —
Git revision of the model. Defaults to “main”.
dtype (str) —
Data type for model weights. Defaults to “bfloat16”. Options: “float16”, “bfloat16”, “float32”.
tensor_parallel_size (PositiveInt) —
Number of GPUs to use for tensor parallelism. Defaults to 1.
data_parallel_size (PositiveInt) —
Number of GPUs to use for data parallelism. Defaults to 1.
pipeline_parallel_size (PositiveInt) —
Number of GPUs to use for pipeline parallelism. Defaults to 1.
gpu_memory_utilization (NonNegativeFloat) —
Fraction of GPU memory to use. Lower this if running out of memory. Defaults to 0.9.
enable_prefix_caching (bool) —
Whether to enable prefix caching to speed up generation. May use more memory. Should be disabled for LFM2. Defaults to True.
max_model_length (PositiveInt | None) —
Maximum sequence length for the model. If None, automatically inferred.
Reduce this if encountering OOM issues (4096 is usually sufficient).
quantization (str | None) —
Quantization method.
load_format (str | None) —
The format of the model weights to load. Choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer.
swap_space (PositiveInt) —
CPU swap space size in GiB per GPU. Defaults to 4.
seed (NonNegativeInt) —
Random seed for reproducibility. Defaults to 1234.
trust_remote_code (bool) —
Whether to trust remote code when loading models. Defaults to False.
add_special_tokens (bool) —
Whether to add special tokens during tokenization. Defaults to True.
multichoice_continuations_start_space (bool) —
Whether to add a space before multiple choice continuations. Defaults to True.
pairwise_tokenization (bool) —
Whether to tokenize context and continuation separately for loglikelihood evals. Defaults to False.
max_num_seqs (PositiveInt) —
Maximum number of sequences per iteration. Controls batch size at prefill stage. Defaults to 128.
max_num_batched_tokens (PositiveInt) —
Maximum number of tokens per batch. Defaults to 2048.
subfolder (str | None) —
Subfolder within the model repository. Defaults to None.
is_async (bool) —
Whether to use the async version of VLLM. Defaults to False.
override_chat_template (bool) —
If True, we force the model to use a chat template. If False, we prevent the model from using
a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for VLLM inference engine.
This configuration is used to load and configure models using the VLLM inference engine,
which provides high-performance inference for large language models with features like
PagedAttention, continuous batching, and efficient memory management.
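Example (a minimal sketch assuming the class is exposed as VLLMModelConfig; the model name and GPU counts are illustrative):
config = VLLMModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    dtype="bfloat16",
    tensor_parallel_size=2,        # shard the weights across 2 GPUs
    gpu_memory_utilization=0.9,    # lower this if running out of memory
    max_model_length=4096,         # reduce if encountering OOM issues
    generation_parameters=GenerationParameters(temperature=0.0),
)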
model_name (str) —
HuggingFace Hub model ID or path to the model to load.
load_format (str) —
The format of the model weights to load. Choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer.
dtype (str) —
Data type for model weights. Defaults to “auto”. Options: “auto”, “float16”, “bfloat16”, “float32”.
tp_size (PositiveInt) —
Number of GPUs to use for tensor parallelism. Defaults to 1.
dp_size (PositiveInt) —
Number of GPUs to use for data parallelism. Defaults to 1.
context_length (PositiveInt | None) —
Maximum context length for the model.
random_seed (PositiveInt | None) —
Random seed for reproducibility. Defaults to 1234.
trust_remote_code (bool) —
Whether to trust remote code when loading models. Defaults to False.
device (str) —
Device to load the model on. Defaults to “cuda”.
skip_tokenizer_init (bool) —
Whether to skip tokenizer initialization. Defaults to False.
kv_cache_dtype (str) —
Data type for key-value cache. Defaults to “auto”.
add_special_tokens (bool) —
Whether to add special tokens during tokenization. Defaults to True.
pairwise_tokenization (bool) —
Whether to tokenize context and continuation separately for loglikelihood evals. Defaults to False.
sampling_backend (str | None) —
Sampling backend to use. If None, uses default.
attention_backend (str | None) —
Attention backend to use. If None, uses default.
mem_fraction_static (PositiveFloat) —
Fraction of GPU memory to use for static allocation. Defaults to 0.8.
chunked_prefill_size (PositiveInt) —
Size of chunks for prefill operations. Defaults to 4096.
override_chat_template (bool) —
If True, we force the model to use a chat template. If False, we prevent the model from using
a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for SGLang inference engine.
This configuration is used to load and configure models using the SGLang inference engine,
which provides high-performance inference.
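Example (a minimal sketch assuming the class is exposed as SGLangModelConfig; the model name and GPU counts are illustrative):
config = SGLangModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    tp_size=2,                     # tensor parallelism across 2 GPUs
    dp_size=1,
    mem_fraction_static=0.8,       # fraction of GPU memory for static allocation
    context_length=4096,
)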
model_name (str) —
Name of your choice. Defaults to “dummy”.
seed (int) —
Random seed for reproducible dummy responses. Defaults to 42.
This seed controls the randomness of the generated responses and log probabilities.
Configuration class for dummy models used for testing and baselines.
This configuration is used to create dummy models that generate random responses
or baselines for evaluation purposes. Useful for testing evaluation pipelines
without requiring actual model inference.
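Example (a minimal sketch assuming the class is exposed as DummyModelConfig):
config = DummyModelConfig(
    model_name="dummy",
    seed=42,    # fixes the random responses and log probabilities
)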
model_name (str) —
Name or identifier of the model to use.
provider (str) —
Name of the inference provider. Examples: “together”, “anyscale”, “runpod”, etc.
timeout (int | None) —
Request timeout in seconds. If None, uses provider default.
proxies (Any | None) —
Proxy configuration for requests. Can be a dict or proxy URL string.
org_to_bill (str | None) —
Organization to bill for API usage. If None, bills the user’s account.
parallel_calls_count (NonNegativeInt) —
Number of parallel API calls to make. Defaults to 10.
Higher values increase throughput but may hit rate limits.
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for HuggingFace’s inference providers (like Together AI, Anyscale, etc.).
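Example (a minimal sketch assuming the class is exposed as InferenceProvidersModelConfig; the provider and model name are illustrative):
config = InferenceProvidersModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    provider="together",            # inference provider to route requests through
    timeout=120,
    parallel_calls_count=10,        # raise for throughput, watch the rate limits
)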
env_vars (dict | None) —
Additional environment variables for the endpoint.
batch_size (int) —
Batch size for requests. Defaults to 1.
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for HuggingFace Inference Endpoints (dedicated infrastructure).
This configuration is used to create and manage dedicated inference endpoints
on HuggingFace’s infrastructure. These endpoints provide dedicated compute
resources and can handle larger batch sizes and higher throughput.
Methods:
model_post_init():
Validates configuration and ensures proper parameter combinations.
get_dtype_args():
Returns environment variables for dtype configuration.
get_custom_env_vars():
Returns custom environment variables for the endpoint.
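Example (a minimal sketch assuming the class is exposed as InferenceEndpointModelConfig and accepts the shared model_name field; only fields documented above are shown, endpoint provisioning options are omitted, and the env var is hypothetical):
config = InferenceEndpointModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",   # model deployed on the endpoint (assumed field)
    batch_size=4,
    env_vars={"MAX_INPUT_LENGTH": "4096"},           # hypothetical endpoint env var
)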
model_name (str) —
HuggingFace Hub model ID to use with the Inference API.
Example: “meta-llama/Llama-3.1-8B-Instruct”
add_special_tokens (bool) —
Whether to add special tokens during tokenization. Defaults to True.
batch_size (int) —
Batch size for requests. Defaults to 1 (serverless API limitation).
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for the HuggingFace Inference API (serverless endpoints).
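Example (a minimal sketch assuming this class is exposed as ServerlessEndpointModelConfig; the model name is illustrative):
config = ServerlessEndpointModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    add_special_tokens=True,
    batch_size=1,    # serverless API limitation
)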
inference_server_address (str | None) —
Address of the TGI server. Format: “http://host:port” or “https://host:port”.
Example: “http://localhost:8080”
inference_server_auth (str | None) —
Authentication token for the TGI server. If None, no authentication is used.
model_name (str | None) —
Optional model name override. If None, uses the model name from server info.
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for Text Generation Inference (TGI) backend.
This configuration is used to connect to TGI servers that serve HuggingFace models
using the text-generation-inference library. TGI provides high-performance inference
with features like continuous batching and efficient memory management.
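Example (a minimal sketch assuming the class is exposed as TGIModelConfig; the server address is illustrative):
config = TGIModelConfig(
    inference_server_address="http://localhost:8080",   # running TGI server
    inference_server_auth=None,       # set a token here if the server requires auth
    model_name=None,                  # use the model name reported by the server
)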
model_name (str) —
Model identifier. Can include provider prefix (e.g., “gpt-4”, “claude-3-sonnet”)
or use provider/model format (e.g., “openai/gpt-4”, “anthropic/claude-3-sonnet”).
provider (str | None) —
Optional provider name override. If None, inferred from model_name.
Examples: “openai”, “anthropic”, “google”, “cohere”, etc.
base_url (str | None) —
Custom base URL for the API. If None, uses provider’s default URL.
Useful for using custom endpoints or local deployments.
api_key (str | None) —
API key for authentication. If None, reads from environment variables.
Environment variable names are provider-specific (e.g., OPENAI_API_KEY).
concurrent_requests (int) —
Maximum number of concurrent API requests to execute in parallel.
Higher values can improve throughput for batch processing but may hit rate limits
or exhaust API quotas faster. Default is 10.
verbose (bool) —
Whether to enable verbose logging. Default is False.
max_model_length (int | None) —
Maximum context length for the model. If None, infers the model’s default max length.
api_max_retry (int) —
Maximum number of retries for API requests. Default is 8.
api_retry_sleep (float) —
Initial sleep time (in seconds) between retries. Default is 1.0.
api_retry_multiplier (float) —
Multiplier for increasing sleep time between retries. Default is 2.0.
timeout (float | None) —
Request timeout in seconds. Default is None (no timeout).
generation_parameters (GenerationParameters, optional, defaults to empty GenerationParameters) —
Configuration parameters that control text generation behavior, including
temperature, top_p, max_new_tokens, etc.
system_prompt (str | None, optional, defaults to None) — Optional system prompt to be used with chat models.
This prompt sets the behavior and context for the model during evaluation.
cache_dir (str, optional, defaults to “~/.cache/huggingface/lighteval”) — Directory to cache the model.
Configuration class for LiteLLM unified API client.
This configuration is used to connect to various LLM providers through the LiteLLM
unified API. LiteLLM provides a consistent interface to multiple providers including
OpenAI, Anthropic, Google, and many others.
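Example (a minimal sketch assuming the class is exposed as LiteLLMModelConfig; the model identifier is illustrative and the API key is expected in the provider's environment variable, e.g. OPENAI_API_KEY):
config = LiteLLMModelConfig(
    model_name="openai/gpt-4o",       # provider/model format
    base_url=None,                    # use the provider's default endpoint
    concurrent_requests=10,
    generation_parameters=GenerationParameters(temperature=0.0),
)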
model (str) —
An identifier for the model. This can be used to track which model was evaluated
in the results and logs.
model_definition_file_path (str) —
Path to a Python file containing the custom model implementation. This file must
define exactly one class that inherits from LightevalModel. The class should
implement all required methods from the LightevalModel interface.
Configuration class for loading custom model implementations in Lighteval.
This config allows users to define and load their own model implementations by specifying
a Python file containing a custom model class that inherits from LightevalModel.
The custom model file should contain exactly one class that inherits from LightevalModel.
This class will be automatically detected and instantiated when loading the model.
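Example (a minimal sketch assuming the class is exposed as CustomModelConfig; the file path and model identifier are hypothetical, and the referenced file must define exactly one subclass of LightevalModel):
config = CustomModelConfig(
    model="my-custom-model",                             # identifier used in results and logs
    model_definition_file_path="path/to/my_model.py",    # file defining one LightevalModel subclass
)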