# Lighteval

## Docs

- [Using Inference Providers as Backend](https://huggingface.co/docs/lighteval/main/use-inference-providers-as-backend.md)
- [Lighteval](https://huggingface.co/docs/lighteval/main/index.md)
- [Caching System](https://huggingface.co/docs/lighteval/main/caching.md)
- [Using Hugging Face Inference Endpoints or TGI as Backend](https://huggingface.co/docs/lighteval/main/use-huggingface-inference-endpoints-or-tgi-as-backend.md)
- [Adding a New Metric](https://huggingface.co/docs/lighteval/main/adding-a-new-metric.md)
- [Quick Tour](https://huggingface.co/docs/lighteval/main/quicktour.md)
- [Using SGLang as Backend](https://huggingface.co/docs/lighteval/main/use-sglang-as-backend.md)
- [Using VLLM as Backend](https://huggingface.co/docs/lighteval/main/use-vllm-as-backend.md)
- [Using LiteLLM as Backend](https://huggingface.co/docs/lighteval/main/use-litellm-as-backend.md)
- [Metric List](https://huggingface.co/docs/lighteval/main/metric-list.md)
- [Adding a Custom Task](https://huggingface.co/docs/lighteval/main/adding-a-custom-task.md)
- [Installation](https://huggingface.co/docs/lighteval/main/installation.md)
- [Evaluate your model with Inspect-AI](https://huggingface.co/docs/lighteval/main/inspect-ai.md)
- [Contributing to Multilingual Evaluations](https://huggingface.co/docs/lighteval/main/contributing-to-multilingual-evaluations.md)
- [Saving and Reading Results](https://huggingface.co/docs/lighteval/main/saving-and-reading-results.md)
- [Using the Python API](https://huggingface.co/docs/lighteval/main/using-the-python-api.md)
- [Evaluating Custom Models](https://huggingface.co/docs/lighteval/main/evaluating-a-custom-model.md)
- [Available tasks](https://huggingface.co/docs/lighteval/main/available-tasks.md)
- [Logging](https://huggingface.co/docs/lighteval/main/package_reference/logging.md)
- [EvaluationTracker[[lighteval.logging.evaluation_tracker.EvaluationTracker]]](https://huggingface.co/docs/lighteval/main/package_reference/evaluation_tracker.md)
- [Model's Output[[lighteval.models.model_output.ModelResponse]]](https://huggingface.co/docs/lighteval/main/package_reference/models_outputs.md)
- [Doc[[lighteval.tasks.requests.Doc]]](https://huggingface.co/docs/lighteval/main/package_reference/doc.md)
- [Tasks](https://huggingface.co/docs/lighteval/main/package_reference/tasks.md)
- [Model Configs](https://huggingface.co/docs/lighteval/main/package_reference/models.md)
- [Metrics](https://huggingface.co/docs/lighteval/main/package_reference/metrics.md)
- [Pipeline](https://huggingface.co/docs/lighteval/main/package_reference/pipeline.md)

### Using Inference Providers as Backend
https://huggingface.co/docs/lighteval/main/use-inference-providers-as-backend.md

# Using Inference Providers as Backend

Lighteval allows you to use Hugging Face's Inference Providers to evaluate LLMs on supported providers such as Black Forest Labs, Cerebras, Fireworks AI, Nebius, Together AI, and many more.

> [!WARNING]
> Do not forget to set your Hugging Face API key.
> You can set it using the `HF_TOKEN` environment variable or by logging in with the `huggingface-cli login` command.

## Basic Usage

```bash
lighteval endpoint inference-providers \
    "model_name=deepseek-ai/DeepSeek-R1,provider=hf-inference" \
    "lighteval|gsm8k|0"
```

## Using a Configuration File

You can use configuration files to define the model and the provider to use.

```bash
lighteval endpoint inference-providers \
    examples/model_configs/inference_providers.yaml \
    "lighteval|gsm8k|0"
```

With the following configuration file:

```yaml
model_parameters:
  model_name: "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
  provider: "novita"
  timeout: null
  proxies: null
  parallel_calls_count: 10
  generation_parameters:
    temperature: 0.8
    top_k: 10
    max_new_tokens: 10000
```
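A quick sanity check of such a file before launching can save a failed run. The sketch below is illustrative only (it is not part of lighteval's API; the field names simply mirror the example above) and validates the `model_parameters` mapping after YAML parsing:

```python
# Hypothetical helper: check a parsed config dict for the fields the
# inference-providers backend needs. Not lighteval API, just a sketch.
REQUIRED = {"model_name", "provider"}

def check_model_parameters(config: dict) -> dict:
    params = config.get("model_parameters", {})
    missing = REQUIRED - params.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return params
```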

By default, inference requests are billed to your personal account.
Optionally, you can charge them to an organization by setting `org_to_bill="<your_org_name>"` (requires being a member of that organization).

## Supported Providers

Hugging Face Inference Providers supports a wide range of LLM providers; see the [Inference Providers documentation](https://huggingface.co/docs/inference-providers/en/index) for the complete list.


## Billing and Costs

### Personal Account Billing
By default, all inference requests are billed to your personal Hugging Face account. You can monitor your usage in the [Hugging Face billing dashboard](https://huggingface.co/settings/billing).

### Organization Billing
To bill requests to an organization:

1. Ensure you are a member of the organization
2. Add `org_to_bill="<organization_name>"` to your configuration
3. The organization must have sufficient credits

```yaml
model_parameters:
  model_name: "meta-llama/Llama-2-7b-chat-hf"
  provider: "together"
  org_to_bill: "my-organization"
```

For more detailed error handling and provider-specific information, refer to the [Hugging Face Inference Providers documentation](https://huggingface.co/docs/inference-providers/en/index).


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/use-inference-providers-as-backend.mdx" />

### Lighteval
https://huggingface.co/docs/lighteval/main/index.md

# Lighteval

🤗 Lighteval is your all-in-one toolkit for evaluating Large Language Models
(LLMs) across multiple backends with ease. Dive deep into your model's
performance by saving and exploring detailed, sample-by-sample results to debug
and see how your models stack up.

## Key Features

### 🚀 **Multi-Backend Support**
Evaluate your models using the most popular and efficient inference backends:
- `eval`: Use [inspect-ai](https://inspect.aisi.org.uk/) as backend to evaluate and inspect your models! (preferred way)
- `transformers`: Evaluate models on CPU or one or more GPUs using [🤗
  Accelerate](https://github.com/huggingface/accelerate)
- `nanotron`: Evaluate models in distributed settings using [⚡️
  Nanotron](https://github.com/huggingface/nanotron)
- `vllm`: Evaluate models on one or more GPUs using [🚀
  VLLM](https://github.com/vllm-project/vllm)
- `custom`: Evaluate custom models (can be anything)
- `sglang`: Evaluate models using [SGLang](https://github.com/sgl-project/sglang) as backend
- `inference-endpoint`: Evaluate models using Hugging Face's [Inference Endpoints API](https://huggingface.co/inference-endpoints/dedicated)
- `tgi`: Evaluate models using [🔗 Text Generation Inference](https://huggingface.co/docs/text-generation-inference/en/index) running locally
- `litellm`: Evaluate models on any compatible API using [LiteLLM](https://www.litellm.ai/)
- `inference-providers`: Evaluate models using [HuggingFace's inference providers](https://huggingface.co/docs/inference-providers/en/index) as backend

### 📊 **Comprehensive Evaluation**
- **Extensive Task Library**: Thousands of pre-built evaluation tasks
- **Custom Task Creation**: Build your own evaluation tasks
- **Flexible Metrics**: Support for custom metrics and scoring
- **Detailed Analysis**: Sample-by-sample results for deep insights

### 🔧 **Easy Customization**
Customization at your fingertips: create [new tasks](adding-a-custom-task),
[metrics](adding-a-new-metric), or [models](evaluating-a-custom-model) tailored to your needs, or browse all our existing tasks and metrics.

### ☁️ **Seamless Integration**
Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.

## Quick Start

### Installation

```bash
pip install lighteval
```

### Basic Usage

#### Find a task

<iframe
	src="https://openevals-open-benchmark-index.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

#### Run your benchmark and push details to the hub

```bash
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" \
  "lighteval|gpqa:diamond|0" \
    --bundle-dir gpt-oss-bundle \
    --repo-id OpenEvals/evals
```

Resulting Space:

<iframe
    src="https://openevals-evals.static.hf.space"
    frameborder="0"
    width="850"
    height="450"
></iframe>


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/index.mdx" />

### Caching System
https://huggingface.co/docs/lighteval/main/caching.md

# Caching System

Lighteval includes a caching system that can significantly speed up evaluations by storing and reusing model predictions.
This is especially useful when running the same evaluation multiple times, or comparing different evaluation metrics on the same model outputs.

## How It Works

The caching system currently caches model predictions (caching of tokenized inputs may be added later).
It stores model response objects (generations, logits, probabilities) for each evaluation sample.

### Cache Structure

Cached data is stored on disk using HuggingFace datasets in the following structure:

```
.cache/
└── huggingface/
    └── lighteval/
        └── predictions/
            └── {model_name}/
                └── {model_hash}/
                    └── {task_name}.parquet
```

Where:
- `model_name`: The model name (path on the hub or local path)
- `model_hash`: Hash of the model configuration to ensure cache invalidation when parameters change
- `task_name`: Name of the evaluation task
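As an illustration, the layout above maps to a path like this (the exact hashing scheme is lighteval's internal detail; this sketch only mirrors the directory structure shown):

```python
from pathlib import Path

def prediction_cache_path(model_name: str, model_hash: str, task_name: str) -> Path:
    # Mirrors the cache layout documented above:
    # .cache/huggingface/lighteval/predictions/{model_name}/{model_hash}/{task_name}.parquet
    root = Path.home() / ".cache" / "huggingface" / "lighteval" / "predictions"
    return root / model_name / model_hash / f"{task_name}.parquet"
```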

### Cache Recreation

A new cache is automatically created when:
- Model configuration changes (different parameters, quantization, etc.)
- Model weights change (different revision, checkpoint, etc.)
- Generation parameters change (temperature, max_tokens, etc.)

This ensures that cached results are always consistent with your current model setup.
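The invalidation idea can be sketched as hashing the full configuration, so that any parameter change produces a different hash and therefore a fresh cache directory (the hash lighteval actually computes may differ from this illustration):

```python
import hashlib
import json

def config_hash(config: dict) -> str:
    # Serialize the config deterministically, then hash it: any change to any
    # parameter (dtype, revision, temperature, ...) yields a new hash.
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```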

## Using Caching

### Automatic Caching

All built-in model classes in Lighteval support caching automatically; no additional configuration is needed.
For custom models, you need to add a cache to the model class and apply the caching decorators to its prediction methods.

## Cache Management

### Clearing Cache

To clear cache for a specific model, delete the corresponding directory:

```bash
rm -rf ~/.cache/huggingface/lighteval/predictions/{model_name}/{model_hash}/
```

To clear all caches:

```bash
rm -rf ~/.cache/huggingface/lighteval/predictions
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/caching.mdx" />

### Using Hugging Face Inference Endpoints or TGI as Backend
https://huggingface.co/docs/lighteval/main/use-huggingface-inference-endpoints-or-tgi-as-backend.md

# Using Hugging Face Inference Endpoints or TGI as Backend

An alternative to launching the evaluation locally is to serve the model on a
TGI-compatible server/container and then run the evaluation by sending requests
to the server. The command is the same as before, except you specify a path to
a YAML configuration file (detailed below):

```bash
lighteval endpoint {tgi,inference-endpoint} \
    "/path/to/config/file" \
    <task_parameters>
```

There are two types of configuration files that can be provided for running on
the server:

## Hugging Face Inference Endpoints

To launch a model using Hugging Face's Inference Endpoints, you need to provide
the following file: `endpoint_model.yaml`. Lighteval will automatically deploy
the endpoint, run the evaluation, and finally delete the endpoint (unless you
specify an endpoint that was already launched, in which case the endpoint won't
be deleted afterwards).

### Configuration File Example

```yaml
model_parameters:
    reuse_existing: false # If true, ignore all params in instance, and don't delete the endpoint after evaluation
    # endpoint_name: "llama-2-7B-lighteval" # Needs to be lowercase without special characters
    model_name: "meta-llama/Llama-2-7b-hf"
    revision: "main"  # Defaults to "main"
    dtype: "float16" # Can be any of "awq", "eetq", "gptq", "4bit" or "8bit" (will use bitsandbytes), "bfloat16" or "float16"
    accelerator: "gpu"
    region: "eu-west-1"
    vendor: "aws"
    instance_type: "nvidia-a10g"
    instance_size: "x1"
    framework: "pytorch"
    endpoint_type: "protected"
    namespace: null # The namespace under which to launch the endpoint. Defaults to the current user's namespace
    image_url: null # Optionally specify the docker image to use when launching the endpoint model. E.g., launching models with later releases of the TGI container with support for newer models.
    env_vars: null # Optional environment variables to include when launching the endpoint. e.g., `MAX_INPUT_LENGTH: 2048`
```

## Text Generation Inference (TGI)

Use this configuration to evaluate a model already deployed on a TGI server, for
example one running locally or on Hugging Face's serverless inference.

### Configuration File Example

```yaml
model_parameters:
    inference_server_address: ""
    inference_server_auth: null
    model_id: null # Optional, only required if the TGI container was launched with model_id pointing to a local directory
```

## Key Parameters

### Hugging Face Inference Endpoints

#### Model Configuration
- `model_name`: The Hugging Face model ID to deploy
- `revision`: Model revision (defaults to "main")
- `dtype`: Data type for model weights ("float16", "bfloat16", "4bit", "8bit", etc.)
- `framework`: Framework to use ("pytorch", "tensorflow")

#### Infrastructure Settings
- `accelerator`: Hardware accelerator ("gpu", "cpu")
- `region`: AWS region for deployment
- `vendor`: Cloud vendor ("aws", "azure", "gcp")
- `instance_type`: Instance type (e.g., "nvidia-a10g", "nvidia-t4")
- `instance_size`: Instance size ("x1", "x2", etc.)

#### Endpoint Configuration
- `endpoint_type`: Endpoint access level ("public", "protected", "private")
- `namespace`: Organization namespace for deployment
- `reuse_existing`: Whether to reuse an existing endpoint
- `endpoint_name`: Custom endpoint name (lowercase, no special characters)

#### Advanced Settings
- `image_url`: Custom Docker image URL
- `env_vars`: Environment variables for the endpoint

### Text Generation Inference (TGI)

#### Server Configuration
- `inference_server_address`: URL of the TGI server
- `inference_server_auth`: Authentication credentials
- `model_id`: Model identifier (if using local model directory)
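As a rough sanity check of a running server, you can build the kind of JSON payload TGI's `/generate` route expects (per the TGI HTTP API); this sketch is illustrative and is not how lighteval talks to the server internally:

```python
import json

def build_generate_payload(prompt: str, max_new_tokens: int = 256) -> str:
    # Shape of a request body for TGI's /generate route; parameter values
    # here are placeholders for illustration.
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "do_sample": False},
    }
    return json.dumps(payload)
```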

## Usage Examples

### Deploying a New Inference Endpoint

```bash
lighteval endpoint inference-endpoint \
    "configs/endpoint_model.yaml" \
    "lighteval|gsm8k|0"
```

### Using an Existing TGI Server

```bash
lighteval endpoint tgi \
    "configs/tgi_server.yaml" \
    "lighteval|gsm8k|0"
```

### Reusing an Existing Endpoint

```yaml
model_parameters:
    reuse_existing: true
    endpoint_name: "my-existing-endpoint"
    # Other parameters will be ignored when reuse_existing is true
```

## Cost Management

### Inference Endpoints
- Endpoints are automatically deleted after evaluation (unless `reuse_existing: true`)
- Costs are based on instance type and runtime
- Monitor usage in the [Hugging Face billing dashboard](https://huggingface.co/settings/billing)

### TGI Servers
- No additional costs beyond your existing server infrastructure
- Useful for cost-effective evaluation of already-deployed models

## Troubleshooting

### Common Issues

1. **Endpoint Deployment Failures**: Check instance availability in your region
2. **Authentication Errors**: Ensure proper Hugging Face token permissions
3. **Model Loading Errors**: Verify model name and revision are correct
4. **Resource Constraints**: Choose appropriate instance type for your model size

### Performance Tips

- Use appropriate instance types for your model size
- Consider using quantized models (4bit, 8bit) for cost savings
- Reuse existing endpoints for multiple evaluations
- Use serverless TGI for cost-effective evaluation

### Error Handling

Common error messages and solutions:
- **"Instance not available"**: Try a different region or instance type
- **"Model not found"**: Check the model name and revision
- **"Insufficient permissions"**: Verify your Hugging Face token has endpoint deployment permissions
- **"Endpoint already exists"**: Use `reuse_existing: true` or choose a different endpoint name

For more detailed information about Hugging Face Inference Endpoints, see the [official documentation](https://huggingface.co/docs/inference-endpoints/).


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/use-huggingface-inference-endpoints-or-tgi-as-backend.mdx" />

### Adding a New Metric
https://huggingface.co/docs/lighteval/main/adding-a-new-metric.md

# Adding a New Metric

## Before You Start

### Two different types of metrics

There are two types of metrics in Lighteval:

#### Sample-Level Metrics
- **Purpose**: Evaluate individual samples/predictions
- **Input**: Takes a `Doc` and `ModelResponse` (model's prediction)
- **Output**: Returns a float or boolean value for that specific sample
- **Example**: Checking if a model's answer matches the correct answer for one sample

#### Corpus-Level Metrics
- **Purpose**: Compute final scores across the entire dataset/corpus
- **Input**: Takes the results from all sample-level evaluations
- **Output**: Returns a single score representing overall performance
- **Examples**:
  - Simple aggregation: Calculating average accuracy across all test samples
  - Complex metrics: BLEU score where sample-level metric prepares data (tokenization, etc.) and corpus-level metric computes the actual BLEU score
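The simple-aggregation case above can be illustrated with a toy pair of functions (hypothetical helper names, not lighteval API): a sample-level exact match and a corpus-level mean.

```python
def exact_match(prediction: str, gold: str) -> float:
    # Sample-level: 1.0 if the model's answer matches the gold answer.
    return float(prediction.strip() == gold.strip())

def mean(scores: list[float]) -> float:
    # Corpus-level: aggregate the per-sample scores into one number.
    return sum(scores) / len(scores)
```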

### Check Existing Metrics

First, check if you can use one of the parameterized functions in
[Corpus Metrics](package_reference/metrics#corpus-metrics) or
[Sample Metrics](package_reference/metrics#sample-metrics).

If not, you can use the `custom_task` system to register your new metric.

> [!TIP]
> To see an example of a custom metric added along with a custom task, look at the [IFEval custom task](https://github.com/huggingface/lighteval/tree/main/examples/custom_tasks/ifeval).

> [!WARNING]
> To contribute your custom metric to the Lighteval repository, you would first need
> to install the required dev dependencies by running `pip install -e .[dev]`
> and then run `pre-commit install` to install the pre-commit hooks.

## Creating a Custom Metric

### Step 1: Create the Metric File

Create a new Python file which should contain the full logic of your metric.
The file needs to start with these imports, along with the types used in the examples below:

```python
import numpy as np
from aenum import extend_enum

from lighteval.metrics import Metrics
from lighteval.metrics.utils.metric_utils import SampleLevelMetric, SampleLevelMetricGrouping
from lighteval.models.model_output import ModelResponse
from lighteval.tasks.requests import Doc
```

### Step 2: Define the Sample-Level Metric

You need to define a sample-level metric. All sample-level metrics share the same signature, taking a
[`Doc`](package_reference/doc) and a [`ModelResponse`](package_reference/models_outputs) (the model's prediction). The metric should return a float or a
boolean.

#### Single Metric Example

```python
def custom_metric(doc: Doc, model_response: ModelResponse) -> bool:
    response = model_response.final_text[0]
    return response == doc.choices[doc.gold_index]
```

#### Multiple Metrics Example

If you want to return multiple metrics per sample, you need to return a dictionary with the metrics as keys and the values as values:

```python
def custom_metric(doc: Doc, model_response: ModelResponse) -> dict:
    response = model_response.final_text[0]
    return {"accuracy": response == doc.choices[doc.gold_index], "other_metric": 0.5}
```

### Step 3: Define Aggregation Function (Optional)

You can define an aggregation function if needed. A common aggregation function is `np.mean`:

```python
def agg_function(items):
    flat_items = [item for sublist in items for item in sublist]
    score = sum(flat_items) / len(flat_items)
    return score
```

### Step 4: Create the Metric Object

#### Single Metric

If it's a sample-level metric, you can use the following code
with [SampleLevelMetric](/docs/lighteval/main/en/package_reference/metrics#lighteval.metrics.utils.metric_utils.SampleLevelMetric):

```python
my_custom_metric = SampleLevelMetric(
    metric_name="custom_accuracy",
    higher_is_better=True,
    category=SamplingMethod.GENERATIVE,
    sample_level_fn=custom_metric,
    corpus_level_fn=agg_function,
)
```

#### Multiple Metrics

If your metric defines multiple metrics per sample, you can use the following code
with [SampleLevelMetricGrouping](/docs/lighteval/main/en/package_reference/metrics#lighteval.metrics.utils.metric_utils.SampleLevelMetricGrouping):

```python
my_custom_metric_group = SampleLevelMetricGrouping(
    metric_name=["accuracy", "response_length", "confidence"],
    higher_is_better={
        "accuracy": True,
        "response_length": False,  # Shorter responses might be better
        "confidence": True
    },
    category=SamplingMethod.GENERATIVE,
    sample_level_fn=custom_metric,
    corpus_level_fn={
        "accuracy": np.mean,
        "response_length": np.mean,
        "confidence": np.mean,
    },
)
```

### Step 5: Register the Metric

To finish, add the following code so that your metric is added to our metrics list
when your file is loaded as a module:

```python
# Adds the metric to the metric list!
extend_enum(Metrics, "CUSTOM_ACCURACY", my_custom_metric)

if __name__ == "__main__":
    print("Imported metric")
```

## Using Your Custom Metric

### With Custom Tasks

You can then make your custom metric available to Lighteval by passing `--custom-tasks
path_to_your_file` when launching the evaluation, after adding the metric to your task config.

```bash
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "leaderboard|truthfulqa:mc|0" \
    --custom-tasks path_to_your_metric_file.py
```

```python
from lighteval.tasks.lighteval_task import LightevalTaskConfig

task = LightevalTaskConfig(
    name="my_custom_task",
    suite=["community"],
    metric=[my_custom_metric],  # Use your custom metric here
    prompt_function=my_prompt_function,
    hf_repo="my_dataset",
    evaluation_splits=["test"]
)
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/adding-a-new-metric.mdx" />

### Quick Tour
https://huggingface.co/docs/lighteval/main/quicktour.md

# Quick Tour

> [!TIP]
> We recommend using the `--help` flag to get more information about the
> available options for each command.
> `lighteval --help`

Lighteval can be used with several different commands, each optimized for different evaluation scenarios.


## Find your benchmark

<iframe
	src="https://openevals-open-benchmark-index.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

## Available Commands

### Evaluation Backends

- `lighteval accelerate`: Evaluate models on CPU or one or more GPUs using [🤗
  Accelerate](https://github.com/huggingface/accelerate)
- `lighteval nanotron`: Evaluate models in distributed settings using [⚡️
  Nanotron](https://github.com/huggingface/nanotron)
- `lighteval vllm`: Evaluate models on one or more GPUs using [🚀
  VLLM](https://github.com/vllm-project/vllm)
- `lighteval custom`: Evaluate custom models (can be anything)
- `lighteval sglang`: Evaluate models using [SGLang](https://github.com/sgl-project/sglang) as backend
- `lighteval endpoint`: Evaluate models using various endpoints as backend
  - `lighteval endpoint inference-endpoint`: Evaluate models using Hugging Face's [Inference Endpoints API](https://huggingface.co/inference-endpoints/dedicated)
  - `lighteval endpoint tgi`: Evaluate models using [🔗 Text Generation Inference](https://huggingface.co/docs/text-generation-inference/en/index) running locally
  - `lighteval endpoint litellm`: Evaluate models on any compatible API using [LiteLLM](https://www.litellm.ai/)
  - `lighteval endpoint inference-providers`: Evaluate models using [HuggingFace's inference providers](https://huggingface.co/docs/inference-providers/en/index) as backend

### Evaluation Utils

- `lighteval baseline`: Compute baselines for given tasks

### Utils

- `lighteval tasks`: List or inspect tasks
  - `lighteval tasks list`: List all available tasks
  - `lighteval tasks inspect`: Inspect a specific task to see its configuration and samples
  - `lighteval tasks create`: Create a new task from a template

## Basic Usage

To evaluate `GPT-2` on the Truthful QA benchmark with [🤗
  Accelerate](https://github.com/huggingface/accelerate), run:

```bash
lighteval accelerate \
     "model_name=openai-community/gpt2" \
     "leaderboard|truthfulqa:mc|0"
```

Here, we first choose a backend (e.g. `accelerate`, `nanotron`, `vllm`, `sglang`, or `endpoint`), and then specify the model and task(s) to run.

### Task Specification

The syntax for the task specification might be a bit hard to grasp at first. The format is as follows:

```txt
{suite}|{task}|{num_few_shot}
```

Tasks have a function applied at the sample level and one at the corpus level. For example:
- an exact match can be applied per sample, then averaged over the corpus to give the final score
- samples can be left untouched before applying Corpus BLEU at the corpus level

If the task you are looking at has a sample-level function (`sample_level_fn`) which can be parametrized, you can pass parameters in the CLI. For example:
```txt
{suite}|{task}@{parameter_name1}={value1}@{parameter_name2}={value2},...|0
```
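A sketch of how such a spec string decomposes (lighteval's real parser lives in the library; this function is only an illustration of the format):

```python
def parse_task_spec(spec: str):
    # "{suite}|{task}@{param}={value}...|{num_few_shot}"
    suite, task_part, num_few_shot = spec.split("|")
    task, _, param_str = task_part.partition("@")
    params = dict(p.split("=", 1) for p in param_str.split("@")) if param_str else {}
    return suite, task, params, int(num_few_shot)
```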

All officially supported tasks can be found at the [tasks_list](available-tasks) and in the
[extended folder](https://github.com/huggingface/lighteval/tree/main/src/lighteval/tasks/extended).
Moreover, community-provided tasks can be found in the
[community](https://github.com/huggingface/lighteval/tree/main/community_tasks) folder.

For more details on the implementation of the tasks, such as how prompts are constructed or which metrics are used, you can examine the
[implementation file](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/default_tasks.py).

### Running Multiple Tasks

Running multiple tasks is supported, either with a comma-separated list or by specifying a file path.
The file should be structured like [examples/tasks/recommended_set.txt](https://github.com/huggingface/lighteval/blob/main/examples/tasks/recommended_set.txt).
When specifying a path to a file, it should start with `./`.

```bash
lighteval accelerate \
     "model_name=openai-community/gpt2" \
     ./path/to/lighteval/examples/tasks/recommended_set.txt
# or, e.g., "leaderboard|truthfulqa:mc|0,leaderboard|gsm8k|3"
```
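The two forms above can be sketched as a small resolver (illustrative only; in particular, skipping blank lines is an assumption about the file format, not documented lighteval behavior):

```python
from pathlib import Path

def resolve_tasks(arg: str) -> list[str]:
    # A path starting with "./" points at a file with one task spec per line;
    # anything else is treated as a comma-separated list of task specs.
    if arg.startswith("./"):
        lines = Path(arg).read_text().splitlines()
        return [line.strip() for line in lines if line.strip()]
    return [t.strip() for t in arg.split(",")]
```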

## Backend Configuration

### General Information

The `model-args` argument takes a string representing a comma-separated list of
`key=value` model arguments. The arguments allowed vary depending on the backend
you use and correspond to the fields of the model configurations.

The model configurations can be found [here](./package_reference/models).

All models allow you to post-process your reasoning model predictions
to remove the thinking tokens from the trace used to compute the metrics,
using `--remove-reasoning-tags` and `--reasoning-tags` to specify which
reasoning tags to remove (defaults to `<think>` and `</think>`).
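Conceptually, the removal works like the sketch below (the real implementation is lighteval's; the default tag pair here mirrors the CLI defaults):

```python
import re

def remove_reasoning(text: str, tags=(("<think>", "</think>"),)) -> str:
    # Strip everything between each reasoning tag pair before metrics are
    # computed on the remaining trace.
    for open_tag, close_tag in tags:
        pattern = re.escape(open_tag) + r".*?" + re.escape(close_tag)
        text = re.sub(pattern, "", text, flags=re.DOTALL)
    return text.strip()
```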

Here's an example with `mistralai/Magistral-Small-2507` which outputs custom
thinking tokens:

```bash
lighteval vllm \
    "model_name=mistralai/Magistral-Small-2507,dtype=float16,data_parallel_size=4" \
    "lighteval|aime24|0" \
    --remove-reasoning-tags \
    --reasoning-tags="[('[THINK]','[/THINK]')]"
```

### Nanotron

To evaluate a model trained with Nanotron on a single GPU:

> [!WARNING]
> Nanotron models cannot be evaluated without torchrun.

```bash
torchrun --standalone --nnodes=1 --nproc-per-node=1 \
    src/lighteval/__main__.py nanotron \
    --checkpoint-config-path ../nanotron/checkpoints/10/config.yaml \
    --lighteval-config-path examples/nanotron/lighteval_config_override_template.yaml
```

The `nproc-per-node` argument should match the data, tensor, and pipeline
parallelism configured in the Lighteval config file (here,
`lighteval_config_override_template.yaml`).
That is: `nproc-per-node = data_parallelism * tensor_parallelism *
pipeline_parallelism`.
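The relation can be written as a one-line sanity check for launch scripts (illustrative):

```python
def nproc_per_node(data_parallelism: int, tensor_parallelism: int, pipeline_parallelism: int) -> int:
    # Total processes torchrun must launch per node for this parallelism layout.
    return data_parallelism * tensor_parallelism * pipeline_parallelism
```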


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/quicktour.mdx" />

### Using SGLang as Backend
https://huggingface.co/docs/lighteval/main/use-sglang-as-backend.md

# Using SGLang as Backend

Lighteval allows you to use SGLang as a backend, providing significant speedups for model evaluation.
To use SGLang, simply change the `model_args` to reflect the arguments you want to pass to SGLang.

## Basic Usage

```bash
lighteval sglang \
    "model_name=HuggingFaceH4/zephyr-7b-beta,dtype=float16" \
    "leaderboard|truthfulqa:mc|0"
```

## Parallelism Options

SGLang can distribute the model across multiple GPUs using data parallelism and tensor parallelism.
You can choose the parallelism method by setting the appropriate parameters in the `model_args`.

### Tensor Parallelism

For example, if you have 4 GPUs, you can split the model across them using tensor parallelism with `tp_size`:

```bash
lighteval sglang \
    "model_name=HuggingFaceH4/zephyr-7b-beta,dtype=float16,tp_size=4" \
    "leaderboard|truthfulqa:mc|0"
```

### Data Parallelism

If your model fits on a single GPU, you can use data parallelism with `dp_size` to speed up the evaluation:

```bash
lighteval sglang \
    "model_name=HuggingFaceH4/zephyr-7b-beta,dtype=float16,dp_size=4" \
    "leaderboard|truthfulqa:mc|0"
```

## Using a Configuration File

For more advanced configurations, you can use a YAML configuration file for the model.
An example configuration file is shown below and can be found at `examples/model_configs/sglang_model_config.yaml`.

```bash
lighteval sglang \
    "examples/model_configs/sglang_model_config.yaml" \
    "leaderboard|truthfulqa:mc|0"
```

> [!TIP]
> Documentation for SGLang server arguments can be found [here](https://docs.sglang.ai/backend/server_arguments.html)

```yaml
model_parameters:
    model_name: "HuggingFaceTB/SmolLM-1.7B-Instruct"
    dtype: "auto"
    tp_size: 1
    dp_size: 1
    context_length: null
    random_seed: 1
    trust_remote_code: False
    device: "cuda"
    skip_tokenizer_init: False
    kv_cache_dtype: "auto"
    add_special_tokens: True
    pairwise_tokenization: False
    sampling_backend: null
    attention_backend: null
    mem_fraction_static: 0.8
    chunked_prefill_size: 4096
    generation_parameters:
      max_new_tokens: 1024
      min_new_tokens: 0
      temperature: 1.0
      top_k: 50
      min_p: 0.0
      top_p: 1.0
      presence_penalty: 0.0
      repetition_penalty: 1.0
      frequency_penalty: 0.0
```

> [!WARNING]
> In case of out-of-memory (OOM) issues, you might need to reduce the context size of the
> model as well as reduce the `mem_fraction_static` and `chunked_prefill_size` parameters.

## Key SGLang Parameters

### Memory Management
- `mem_fraction_static`: Fraction of GPU memory to allocate for static tensors (default: 0.8)
- `chunked_prefill_size`: Size of chunks for prefill operations (default: 4096)
- `context_length`: Maximum context length for the model
- `kv_cache_dtype`: Data type for key-value cache

### Parallelism Settings
- `tp_size`: Number of GPUs for tensor parallelism
- `dp_size`: Number of GPUs for data parallelism

### Model Configuration
- `dtype`: Data type for model weights ("auto", "float16", "bfloat16", etc.)
- `device`: Device to run the model on ("cuda", "cpu")
- `trust_remote_code`: Whether to trust remote code from the model
- `skip_tokenizer_init`: Skip tokenizer initialization for faster startup

### Generation Parameters
- `temperature`: Controls randomness in generation (0.0 = deterministic, 1.0 = random)
- `top_p`: Nucleus sampling parameter
- `top_k`: Top-k sampling parameter
- `max_new_tokens`: Maximum number of tokens to generate
- `repetition_penalty`: Penalty for repeating tokens
- `presence_penalty`: Penalty for token presence
- `frequency_penalty`: Penalty for token frequency
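As a toy illustration of how `temperature` and `top_k` interact (not SGLang's implementation): temperature rescales the logits before normalization, and top-k keeps only the k most likely tokens before sampling.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0, top_k: int = 50) -> str:
    # Keep only the k highest-logit tokens (top-k filtering).
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Temperature rescales the logits: low values sharpen the distribution,
    # high values flatten it. Assumes temperature > 0.
    weights = [math.exp(v / temperature) for _, v in top]
    total = sum(weights)
    r = random.random() * total
    for (token, _), w in zip(top, weights):
        r -= w
        if r <= 0:
            return token
    return top[-1][0]
```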


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/use-sglang-as-backend.mdx" />

### Using VLLM as Backend
https://huggingface.co/docs/lighteval/main/use-vllm-as-backend.md

# Using VLLM as Backend

Lighteval allows you to use VLLM as a backend, providing significant speedups for model evaluation.
To use VLLM, simply change the `model_args` to reflect the arguments you want to pass to VLLM.

> [!TIP]
> Documentation for VLLM engine arguments can be found [here](https://docs.vllm.ai/en/latest/serving/engine_args.html)

## Basic Usage

```bash
lighteval vllm \
    "model_name=HuggingFaceH4/zephyr-7b-beta" \
    "extended|ifeval|0"
```

## Parallelism Options

VLLM can distribute the model across multiple GPUs using data parallelism, pipeline parallelism, or tensor parallelism.
You can choose the parallelism method by setting the appropriate parameters in the `model_args`.

### Tensor Parallelism

For example, if you have 4 GPUs, you can split the model across them using tensor parallelism:

```bash
export VLLM_WORKER_MULTIPROC_METHOD=spawn && lighteval vllm \
    "model_name=HuggingFaceH4/zephyr-7b-beta,tensor_parallel_size=4" \
    "extended|ifeval|0"
```

### Data Parallelism

If your model fits on a single GPU, you can use data parallelism to speed up the evaluation:

```bash
export VLLM_WORKER_MULTIPROC_METHOD=spawn && lighteval vllm \
    "model_name=HuggingFaceH4/zephyr-7b-beta,data_parallel_size=4" \
    "extended|ifeval|0"
```

## Using a Configuration File

For more advanced configurations, you can use a YAML configuration file for the model.
An example configuration file is shown below and can be found at `examples/model_configs/vllm_model_config.yaml`.

```bash
lighteval vllm \
    "examples/model_configs/vllm_model_config.yaml" \
    "extended|ifeval|0"
```

```yaml
model_parameters:
    model_name: "HuggingFaceTB/SmolLM-1.7B-Instruct"
    revision: "main"
    dtype: "bfloat16"
    tensor_parallel_size: 1
    data_parallel_size: 1
    pipeline_parallel_size: 1
    gpu_memory_utilization: 0.9
    max_model_length: 2048
    swap_space: 4
    seed: 1
    trust_remote_code: True
    add_special_tokens: True
    multichoice_continuations_start_space: True
    pairwise_tokenization: True
    subfolder: null
    generation_parameters:
      presence_penalty: 0.0
      repetition_penalty: 1.0
      frequency_penalty: 0.0
      temperature: 1.0
      top_k: 50
      min_p: 0.0
      top_p: 1.0
      seed: 42
      stop_tokens: null
      max_new_tokens: 1024
      min_new_tokens: 0
```

> [!WARNING]
> In case of out-of-memory (OOM) issues, you might need to reduce the context size of the
> model as well as reduce the `gpu_memory_utilization` parameter.


## Key VLLM Parameters

### Memory Management
- `gpu_memory_utilization`: Controls how much GPU memory VLLM can use (default: 0.9)
- `max_model_length`: Maximum sequence length for the model
- `swap_space`: Amount of CPU memory to use for swapping (in GB)

### Parallelism Settings
- `tensor_parallel_size`: Number of GPUs for tensor parallelism
- `data_parallel_size`: Number of GPUs for data parallelism
- `pipeline_parallel_size`: Number of GPUs for pipeline parallelism

### Generation Parameters
- `temperature`: Controls randomness in generation (0.0 = deterministic, 1.0 = random)
- `top_p`: Nucleus sampling parameter
- `top_k`: Top-k sampling parameter
- `max_new_tokens`: Maximum number of tokens to generate
- `repetition_penalty`: Penalty for repeating tokens

## Troubleshooting

### Common Issues

1. **Out of Memory Errors**: Reduce `gpu_memory_utilization` or `max_model_length`
2. **Worker Process Issues**: Ensure `VLLM_WORKER_MULTIPROC_METHOD=spawn` is set for multi-GPU setups
3. **Model Loading Errors**: Check that the model name and revision are correct


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/use-vllm-as-backend.mdx" />

### Using LiteLLM as Backend
https://huggingface.co/docs/lighteval/main/use-litellm-as-backend.md

# Using LiteLLM as Backend

Lighteval allows you to use LiteLLM as a backend, enabling you to call all LLM APIs
using the OpenAI format. LiteLLM supports various providers including Bedrock, Hugging Face, Vertex AI, Together AI, Azure,
OpenAI, Groq, and many others.

> [!TIP]
> Documentation for available APIs and compatible endpoints can be found [here](https://docs.litellm.ai/docs/).

## Basic Usage

```bash
lighteval endpoint litellm \
    "provider=openai,model_name=gpt-3.5-turbo" \
    "lighteval|gsm8k|0"
```

## Using a Configuration File

LiteLLM allows generation with any OpenAI-compatible endpoint. For example, you
can evaluate a model running on a local VLLM server.

To do so, you will need to use a configuration file like this:

```yaml
model_parameters:
    model_name: "openai/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
    base_url: "URL_OF_THE_ENDPOINT_YOU_WANT_TO_USE"
    api_key: "" # Remove or keep empty as needed
    generation_parameters:
      temperature: 0.5
      max_new_tokens: 256
      stop_tokens: [""]
      top_p: 0.9
      seed: 0
      repetition_penalty: 1.0
      frequency_penalty: 0.0
```

## Supported Providers

LiteLLM supports a wide range of LLM providers:

### Cloud Providers

All supported cloud providers are listed in the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).

### Local/On-Premise
- **VLLM**: Local VLLM servers
- **Hugging Face**: Local Hugging Face models
- **Custom endpoints**: Any OpenAI-compatible API

## Using with Local Models

### VLLM Server
To use with a local VLLM server:

1. Start your VLLM server:
```bash
vllm serve HuggingFaceH4/zephyr-7b-beta --host 0.0.0.0 --port 8000
```

2. Configure LiteLLM to use the local server:
```yaml
model_parameters:
    provider: "openai"
    model_name: "HuggingFaceH4/zephyr-7b-beta"
    base_url: "http://localhost:8000/v1"
    api_key: ""
```

For more detailed error handling and debugging, refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/).


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/use-litellm-as-backend.mdx" />

### Metric List
https://huggingface.co/docs/lighteval/main/metric-list.md

# Metric List

## Automatic metrics for multiple-choice tasks

These metrics use the log-likelihood of the different possible targets.
- `loglikelihood_acc`: Fraction of instances where the choice with the best logprob was correct - we recommend normalizing by length
- `loglikelihood_f1`: Corpus level F1 score of the multichoice selection
- `mcc`: Matthews correlation coefficient (a measure of agreement between statistical distributions).
- `recall_at_k`: Fraction of instances where the choice with the k-th best logprob or better was correct
- `mrr`: Mean reciprocal rank, a measure of the quality of a ranking of choices ordered by correctness/relevance
- `target_perplexity`: Perplexity of the different choices available.
- `acc_golds_likelihood`: A bit different from the others: checks whether the average logprob of a single target is above 0.5.
- `multi_f1_numeric`: Loglikelihood F1 score for multiple gold targets.
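
To make the length-normalization recommendation concrete, here is a minimal sketch (not Lighteval's actual implementation) of loglikelihood accuracy with optional per-character normalization:

```python
# Illustrative sketch: loglikelihood accuracy picks the choice with the best
# (optionally length-normalized) logprob and checks it against the gold index.

def loglikelihood_acc(choice_logprobs, choice_texts, gold_index, normalize_length=True):
    """Return 1.0 if the best-scoring choice is the gold one, else 0.0."""
    if normalize_length:
        # Normalizing by length avoids systematically penalizing longer choices,
        # which accumulate more (negative) token logprobs.
        scores = [lp / len(text) for lp, text in zip(choice_logprobs, choice_texts)]
    else:
        scores = list(choice_logprobs)
    best = max(range(len(scores)), key=scores.__getitem__)
    return 1.0 if best == gold_index else 0.0

# Unnormalized scoring favors the short choice here...
print(loglikelihood_acc([-2.0, -6.0], ["no", "yes, absolutely"], 1, normalize_length=False))  # 0.0
# ...while per-character normalization selects the longer gold choice.
print(loglikelihood_acc([-2.0, -6.0], ["no", "yes, absolutely"], 1, normalize_length=True))   # 1.0
```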

## Automatic metrics for perplexity and language modeling
These metrics use the log-likelihood of the prompt.
- `word_perplexity`: Perplexity (log probability of the input) weighted by the number of words of the sequence.
- `byte_perplexity`: Perplexity (log probability of the input) weighted by the number of bytes of the sequence.
- `bits_per_byte`: Average number of bits per byte according to model probabilities.
- `log_prob`: Predicted output's average log probability (input's log prob for language modeling).
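
The word- and byte-weighted variants differ only in the unit count used to normalize the sequence's total log probability. A minimal sketch (not Lighteval's implementation):

```python
import math

# Illustrative sketch: perplexity from a sequence's total log probability,
# weighted by the number of words or bytes, as in word_perplexity / byte_perplexity.

def perplexity(logprob_sum, text, unit="word"):
    if unit == "word":
        n = len(text.split())          # weight by whitespace-separated words
    elif unit == "byte":
        n = len(text.encode("utf-8"))  # weight by UTF-8 bytes
    else:
        raise ValueError(unit)
    return math.exp(-logprob_sum / n)

text = "the cat sat"  # 3 words, 11 bytes
print(perplexity(-6.0, text, unit="word"))  # exp(6/3)  = about 7.39
print(perplexity(-6.0, text, unit="byte"))  # exp(6/11) = about 1.73
```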

## Automatic metrics for generative tasks
These metrics need the model to generate an output. They are therefore slower.
- Base:
    - `exact_match`: Fraction of instances where the prediction matches the gold. Several variations can be made through parametrization:
        - normalization of the strings before comparison (whitespace, articles, capitalization, ...),
        - comparing the full string, or only subsets (prefix, suffix, ...).
    - `maj_at_k`: Model majority vote. Samples k generations from the model and assumes the most frequent is the actual prediction.
    - `f1_score`: Average F1 score in terms of word overlap between the model output and gold (normalization optional).
    - `f1_score_macro`: Corpus level macro F1 score.
    - `f1_score_micro`: Corpus level micro F1 score.
- Summarization:
    - `rouge`: Average ROUGE score [(Lin, 2004)](https://aclanthology.org/W04-1013/).
    - `rouge1`: Average ROUGE score [(Lin, 2004)](https://aclanthology.org/W04-1013/) based on 1-gram overlap.
    - `rouge2`: Average ROUGE score [(Lin, 2004)](https://aclanthology.org/W04-1013/) based on 2-gram overlap.
    - `rougeL`: Average ROUGE score [(Lin, 2004)](https://aclanthology.org/W04-1013/) based on longest common subsequence overlap.
    - `rougeLsum`: Average ROUGE score [(Lin, 2004)](https://aclanthology.org/W04-1013/) based on longest common subsequence overlap computed over newline-split text (summary level).
    - `rouge_t5` (BigBench): Corpus level ROUGE score for all available ROUGE metrics.
    - `faithfulness`: Faithfulness scores based on the SummaC method of [Laban et al. (2022)](https://aclanthology.org/2022.tacl-1.10/).
    - `extractiveness`: Reports, based on [(Grusky et al., 2018)](https://aclanthology.org/N18-1065/):
        - `summarization_coverage`: Extent to which the words of the model-generated summary are drawn from extractive fragments of the source document,
        - `summarization_density`: Average length of the extractive fragments from which the model-generated summary is composed,
        - `summarization_compression`: Word-count ratio between the source document and the model-generated summary.
    - `bert_score`: Reports the average BERTScore precision, recall, and f1 score [(Zhang et al., 2020)](https://openreview.net/pdf?id=SkeHuCVFDr) between model generation and gold summary.
- Translation:
    - `bleu`: Corpus level BLEU score [(Papineni et al., 2002)](https://aclanthology.org/P02-1040/) - uses the sacrebleu implementation.
    - `bleu_1`: Average sample BLEU score [(Papineni et al., 2002)](https://aclanthology.org/P02-1040/) based on 1-gram overlap - uses the nltk implementation.
    - `bleu_4`: Average sample BLEU score [(Papineni et al., 2002)](https://aclanthology.org/P02-1040/) based on 4-gram overlap - uses the nltk implementation.
    - `chrf`: Character n-gram matches f-score.
    - `ter`: Translation edit/error rate.
- Copyright:
    - `copyright`: Reports:
        - `longest_common_prefix_length`: Average length of longest common prefix between model generation and reference,
        - `edit_distance`: Average Levenshtein edit distance between model generation and reference,
        - `edit_similarity`: Average Levenshtein edit similarity (normalized by the length of longer sequence) between model generation and reference.
- Math:
    - Both `exact_match` and `maj_at_k` can be used to evaluate mathematics tasks, with math-specific normalization to strip and filter LaTeX.
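
The normalization options mentioned for `exact_match` can be sketched in plain Python (the exact normalization rules used by Lighteval may differ; this version lowercases, strips punctuation, drops English articles, and collapses whitespace):

```python
import re
import string

# Illustrative normalized exact match: lowercase, remove punctuation,
# drop English articles, and collapse whitespace before comparing.

def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def exact_match(prediction, gold):
    return 1.0 if normalize(prediction) == normalize(gold) else 0.0

print(exact_match("The Eiffel Tower!", "eiffel tower"))  # 1.0
print(exact_match("Paris", "London"))                    # 0.0
```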

## LLM-as-Judge
- `llm_judge_gpt3p5`: Can be used for any generative task; the model output is scored by a GPT-3.5 model through the OpenAI API.
- `llm_judge_llama_3_405b`: Can be used for any generative task; the model output is scored by a Llama 3 405B model through the Hugging Face API.
- `llm_judge_multi_turn_gpt3p5`: Same scoring by a GPT-3.5 model through the OpenAI API, used for multi-turn tasks like MT-Bench.
- `llm_judge_multi_turn_llama_3_405b`: Same scoring by a Llama 3 405B model through the Hugging Face API, used for multi-turn tasks like MT-Bench.


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/metric-list.mdx" />

### Adding a Custom Task
https://huggingface.co/docs/lighteval/main/adding-a-custom-task.md

# Adding a Custom Task

Lighteval provides a flexible framework for creating custom evaluation tasks. This guide explains how to create and integrate new tasks into the evaluation system.

## Step-by-Step Creation of a Task

> [!WARNING]
> To contribute your task to the Lighteval repository, you would first need
> to install the required dev dependencies by running `pip install -e .[dev]`
> and then run `pre-commit install` to install the pre-commit hooks.

### Step 1: Create the Task File

First, create a Python file or directory under the `src/lighteval/tasks/tasks` directory.
A directory is helpful if you need to split your task into multiple files; just make sure one of the files is named `main.py`.

### Step 2: Define the Prompt Function

You need to define a prompt function that will convert a line from your
dataset to a document to be used for evaluation.

```python
from lighteval.tasks.requests import Doc

# Define as many as you need for your different tasks
def prompt_fn(line: dict, task_name: str):
    """Defines how to go from a dataset line to a doc object.
    Follow examples in src/lighteval/tasks/default_prompts.py, or get more info
    about what this function should do in the README.
    """
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=[f" {c}" for c in line["choices"]],
        gold_index=line["gold"],
    )
```

### Step 3: Choose or Create Metrics

You can either use an existing metric (defined in `lighteval.metrics.metrics.Metrics`) or [create a custom one](adding-a-new-metric).

#### Using Existing Metrics

```python
from lighteval.metrics import Metrics

# Use an existing metric
metric = Metrics.ACCURACY
```

#### Creating Custom Metrics

```python
from lighteval.metrics.utils.metric_utils import SampleLevelMetric
import numpy as np

custom_metric = SampleLevelMetric(
    metric_name="my_custom_metric_name",
    higher_is_better=True,
    category="accuracy",
    sample_level_fn=lambda x: x,  # How to compute score for one sample
    corpus_level_fn=np.mean,  # How to aggregate the sample metrics
)
```
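
To make the two hooks above concrete, here is a plain-Python illustration (independent of Lighteval's internal signatures, which depend on the metric category): a sample-level function returns one score per example, and the corpus-level function aggregates those scores, here with a simple mean.

```python
from statistics import fmean

# Hypothetical sample-level scorer: case-insensitive exact match per example.
def sample_score(prediction: str, gold: str) -> float:
    return float(prediction.strip().lower() == gold.strip().lower())

# Corpus-level aggregation of the per-sample scores (the role np.mean plays above).
scores = [sample_score(p, g) for p, g in [("Paris", "paris"), ("Rome", "Berlin")]]
print(fmean(scores))  # 0.5
```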

### Step 4: Define Your Task

You can define a task with or without subsets using [LightevalTaskConfig](/docs/lighteval/main/en/package_reference/tasks#lighteval.tasks.lighteval_task.LightevalTaskConfig).

#### Simple Task (No Subsets)

```python
from lighteval.tasks.lighteval_task import LightevalTaskConfig

# This is how you create a simple task (like HellaSwag) which has one single subset
# attached to it, and one evaluation possible.
task = LightevalTaskConfig(
    name="myothertask",
    prompt_function=prompt_fn,  # Must be defined in the file or imported
    suite=["community"],
    hf_repo="your_dataset_repo_on_hf",
    hf_subset="default",
    hf_avail_splits=["train", "test"],
    evaluation_splits=["test"],
    few_shots_split="train",
    few_shots_select="random_sampling_from_train",
    metrics=[metric],  # Select your metric in Metrics
    generation_size=256,
    stop_sequence=["\n", "Question:"],
)
```

#### Task with Multiple Subsets

If you want to create a task with multiple subsets, add them to the
`SAMPLE_SUBSETS` list and create a task for each subset.

```python
SAMPLE_SUBSETS = ["subset1", "subset2", "subset3"]  # List of all the subsets to use for this eval

class CustomSubsetTask(LightevalTaskConfig):
    def __init__(
        self,
        name,
        hf_subset,
    ):
        super().__init__(
            name=name,
            hf_subset=hf_subset,
            prompt_function=prompt_fn,  # Must be defined in the file or imported
            hf_repo="your_dataset_name",
            metrics=[custom_metric],  # Select your metric in Metrics or use your custom_metric
            hf_avail_splits=["train", "test"],
            evaluation_splits=["test"],
            few_shots_split="train",
            few_shots_select="random_sampling_from_train",
            suite=["lighteval"],
            generation_size=256,
            stop_sequence=["\n", "Question:"],
        )

SUBSET_TASKS = [CustomSubsetTask(name=f"task:{subset}", hf_subset=subset) for subset in SAMPLE_SUBSETS]
```

### Step 5: Add Tasks to the Table

Then you need to add your task to the `TASKS_TABLE` list.

```python
# STORE YOUR EVALS

# Tasks with subsets:
TASKS_TABLE = SUBSET_TASKS

# Tasks without subsets:
# TASKS_TABLE = [task]
```

### Step 6: Create a Requirements File

If your task has extra dependencies, create a `requirements.txt` file listing
only those dependencies so that anyone can run your task.

## Running Your Custom Task

Once your file is created, you can run the evaluation with the following command:

```bash
lighteval accelerate \
    "model_name=HuggingFaceH4/zephyr-7b-beta" \
    "lighteval|{task}|{fewshots}" \
    --custom-tasks {path_to_your_custom_task_file}
```

### Example Usage

```bash
# Run a custom task with zero-shot evaluation
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "lighteval|myothertask|0" \
    --custom-tasks community_tasks/my_custom_task.py

# Run a custom task with few-shot evaluation
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "lighteval|myothertask|3" \
    --custom-tasks community_tasks/my_custom_task.py
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/adding-a-custom-task.mdx" />

### Installation
https://huggingface.co/docs/lighteval/main/installation.md

# Installation

Lighteval can be installed from PyPI or from source. This guide covers all installation options and dependencies.

## System Requirements

- **Python**: 3.10 or higher
- **PyTorch**: 2.0 or higher (but less than 3.0)
- **CUDA**: Optional, for GPU acceleration

## From PyPI

The simplest way to install Lighteval is from PyPI:

```bash
pip install lighteval
```

This installs the core package with all essential dependencies for basic evaluation tasks.

## From Source

Source installation is recommended for developers who want to contribute to Lighteval or need the latest features:

```bash
git clone https://github.com/huggingface/lighteval.git
cd lighteval
pip install -e .
```

## Optional Dependencies (Extras)

Lighteval provides several optional dependency groups that you can install based on your needs. Use the format `pip install lighteval[<group>]` or `pip install -e .[<group>]` for source installation.

### Backend Extras

| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `vllm` | Use VLLM as backend for high-performance inference | vllm>=0.10.0, ray, more_itertools |
| `tgi` | Use Text Generation Inference API | text-generation>=0.6.0 |
| `litellm` | Use LiteLLM for unified API access | litellm, diskcache |
| `optimum` | Use Optimum for optimized models | optimum==1.12.0 |
| `quantization` | Evaluate quantized models | bitsandbytes>=0.41.0, auto-gptq>=0.4.2 |
| `adapters` | Evaluate adapter models (PEFT, Delta) | peft==0.3.0 |
| `nanotron` | Evaluate Nanotron models | nanotron, tensorboardX |

### Task and Feature Extras

| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `extended_tasks` | Extended evaluation tasks | langdetect, openai>1.87, tiktoken |
| `multilingual` | Multilingual evaluation support | stanza, spacy[ja,ko,th], jieba, pyvi |
| `math` | Mathematical reasoning tasks | latex2sympy2_extended==1.0.6 |

### Storage and Logging Extras

| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `s3` | Upload results to S3 | s3fs |
| `tensorboardX` | Upload results to TensorBoard | tensorboardX |
| `wandb` | Log results to Weights & Biases | wandb |
| `trackio` | Log results to Trackio | trackio |

### Development Extras

| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `quality` | Code quality tools | ruff>=v0.11.0, pre-commit |
| `tests` | Testing dependencies | pytest>=7.4.0, deepdiff |
| `docs` | Documentation building | hf-doc-builder, watchdog |
| `dev` | All development dependencies | Includes accelerate, quality, tests, multilingual, math, extended_tasks, vllm |


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/installation.mdx" />

### Evaluate your model with Inspect-AI
https://huggingface.co/docs/lighteval/main/inspect-ai.md

# Evaluate your model with Inspect-AI

Pick the right benchmarks with our benchmark finder:
Search by language, task type, dataset name, or keywords.

> [!WARNING]
> Not all tasks are compatible with inspect-ai's API yet; we are working on converting all of them!


<iframe
	src="https://openevals-open-benchmark-index.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

Once you've chosen a benchmark, run it with `lighteval eval`. Below are examples for common setups.

### Examples

1. Evaluate a model via Hugging Face Inference Providers.

```bash
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" "lighteval|gpqa:diamond|0"
```

2. Run multiple evals at the same time.

```bash
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" "lighteval|gpqa:diamond|0,lighteval|aime25|0"
```

3. Compare providers for the same model.

```bash
lighteval eval \
    hf-inference-providers/openai/gpt-oss-20b:fireworks-ai \
    hf-inference-providers/openai/gpt-oss-20b:together \
    hf-inference-providers/openai/gpt-oss-20b:nebius \
    "lighteval|gpqa:diamond|0"
```

4. Evaluate a vLLM or SGLang model.

```bash
lighteval eval vllm/HuggingFaceTB/SmolLM-135M-Instruct "lighteval|gpqa:diamond|0"
```

5. See the impact of few-shot on your model.

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|gsm8k|0,lighteval|gsm8k|5"
```

6. Optimize custom server connections.

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|gsm8k|0" \
    --max-connections 50 \
    --timeout 30 \
    --retry-on-error 1 \
    --max-retries 1 \
    --max-samples 10
```

7. Use multiple epochs for more reliable results.

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|aime25|0" --epochs 16 --epochs-reducer "pass_at_4"
```

8. Push to the Hub to share results.

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|hle|0" \
    --bundle-dir gpt-oss-bundle \
    --repo-id OpenEvals/evals \
    --max-samples 100
```

Resulting Space:

<iframe
	src="https://openevals-evals.static.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

9. Change model behaviour.

You can use any argument defined in inspect-ai's API.

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|aime25|0" --temperature 0.1
```

10. Use `--model-args` to pass any inference-provider-specific argument.

```bash
lighteval eval google/gemini-2.5-pro "lighteval|aime25|0" --model-args location=us-east5
```

```bash
lighteval eval openai/gpt-4o "lighteval|gpqa:diamond|0" --model-args service_tier=flex,client_timeout=1200
```


Lighteval prints a per-model results table:

```
Completed all tasks in 'lighteval-logs' successfully

|                 Model                 |gpqa|gpqa:diamond|
|---------------------------------------|---:|-----------:|
|vllm/HuggingFaceTB/SmolLM-135M-Instruct|0.01|        0.01|

results saved to lighteval-logs
run "inspect view --log-dir lighteval-logs" to view the results
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/inspect-ai.mdx" />

### Contributing to Multilingual Evaluations
https://huggingface.co/docs/lighteval/main/contributing-to-multilingual-evaluations.md

# Contributing to Multilingual Evaluations

Lighteval supports multilingual evaluations through a comprehensive system of translation literals and language-adapted templates.

## Contributing Translation Literals

### What Are Translation Literals?

We define 19 `literals`, basic keywords or punctuation signs used when creating evaluation prompts in an automatic manner, such as `yes`, `no`, `because`, etc.

These literals are essential for:
- **Consistent prompt formatting** across languages
- **Automatic prompt generation** for multilingual tasks
- **Proper localization** of evaluation templates

### How to Contribute Translations

We welcome translations in your language! To contribute:

1. **Open the translation literals file**: [translation_literals.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/templates/utils/translation_literals.py)

2. **Edit the file** to add or expand the literal for your language of interest

3. **Open a PR** with your modifications

### Translation Literals Structure

```python
Language.ENGLISH: TranslationLiterals(
    language=Language.ENGLISH,
    question_word="question",  # Usage: "Question: How are you?"
    answer="answer",  # Usage: "Answer: I am fine"
    confirmation_word="right",  # Usage: "He is smart, right?"
    yes="yes",  # Usage: "Yes, he is"
    no="no",  # Usage: "No, he is not"
    also="also",  # Usage: "Also, she is smart."
    cause_word="because",  # Usage: "She is smart, because she is tall"
    effect_word="therefore",  # Usage: "He is tall therefore he is smart"
    or_word="or",  # Usage: "He is tall or small"
    true="true",  # Usage: "He is smart, true, false or neither?"
    false="false",  # Usage: "He is smart, true, false or neither?"
    neither="neither",  # Usage: "He is smart, true, false or neither?"
    # Punctuation and spacing: only adjust if your language uses something different from English
    full_stop=".",
    comma=",",
    question_mark="?",
    exclamation_mark="!",
    word_space=" ",
    sentence_space=" ",
    colon=":",
    # The first characters of your alphabet used in enumerations, if different from English
    indices=["A", "B", "C", ...]
)
```
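
To see how these literals are consumed, here is a self-contained sketch (not Lighteval's template code) that composes a Question/Answer prompt from a few literal values mirroring the English entry above:

```python
# Illustrative only: compose a prompt from translation-literal values.
literals = {
    "question_word": "question",
    "answer": "answer",
    "colon": ":",
    "word_space": " ",
}

def build_qa_prompt(lit, question):
    # Produces "Question: <question>\nAnswer:" using the language's own literals.
    q = lit["question_word"].capitalize() + lit["colon"] + lit["word_space"]
    a = lit["answer"].capitalize() + lit["colon"]
    return f"{q}{question}\n{a}"

print(build_qa_prompt(literals, "How are you?"))
# Question: How are you?
# Answer:
```

Swapping in another language's literals (different question word, colon, or spacing) changes the rendered prompt without touching the task logic, which is why complete literals matter for multilingual evaluation.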

## Contributing New Multilingual Tasks

### Prerequisites

Before creating a new multilingual task, you should:

1. **Read the custom task guide**: [Adding a Custom Task](adding-a-custom-task)
2. **Understand multilingual task structure**: Review the [multilingual tasks](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/multilingual/tasks.py) file
3. **Browse available templates**: Check the [templates directory](https://github.com/huggingface/lighteval/tree/main/src/lighteval/tasks/templates)

### Key Concepts

#### Language-Adapted Templates
For multilingual evaluations, the `prompt_function` should be implemented using language-adapted templates. These templates handle:
- **Correct formatting** for each language
- **Consistent usage** of language-adjusted prompt anchors (e.g., Question/Answer)
- **Proper punctuation** and spacing conventions

#### Template Types
Available template types include:
- **XNLI**: Natural language inference tasks - [`get_nli_prompt_function`](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/templates/nli.py#L162)
- **COPA**: Causal reasoning tasks - [`get_copa_prompt_function`](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/templates/copa.py#L76)
- **Multiple Choice**: Standard multiple choice questions - [`get_mcq_prompt_function`](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/templates/multichoice.py#L81)
- **Question Answering**: Open-ended question answering - [`get_qa_prompt_function`](https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/templates/qa.py#L46)
- **Custom**: Specialized task templates

#### Formulation Types

##### Multiple Choice Formulation (MCF)
Used for standard multiple choice questions where the model selects from lettered options:
```python
MCFFormulation()
```

**Example output:**
```
Question: What is the capital of France?
A. London
B. Paris
C. Berlin
D. Rome
Answer: | A/B/C/D
```

##### Classification Formulation (CF)
Used for classification tasks where the model generates the answer directly:
```python
CFFormulation()
```

**Example output:**
```
Question: What is the capital of France?
Answer: | Paris
```

##### Hybrid Formulation
Used for tasks that present choices but expect the full answer text:
```python
HybridFormulation()
```

**Example output:**
```
Question: What is the capital of France?
A. London
B. Paris
C. Berlin
D. Rome
Answer: | Paris
```
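
The three formulations can be contrasted in a short sketch. This is not Lighteval's template code; the letter indices and layout are assumptions matching the example outputs above:

```python
# Illustrative rendering of one MCQ under the three formulations.
def render(question, choices, gold_idx, formulation):
    letters = ["A", "B", "C", "D"]
    listing = "\n".join(f"{l}. {c}" for l, c in zip(letters, choices))
    if formulation == "mcf":
        # Choices shown; target is the gold letter.
        return f"Question: {question}\n{listing}\nAnswer:", letters[gold_idx]
    if formulation == "cf":
        # No choices shown; target is the gold answer text.
        return f"Question: {question}\nAnswer:", choices[gold_idx]
    if formulation == "hybrid":
        # Choices shown, but the target is still the full answer text.
        return f"Question: {question}\n{listing}\nAnswer:", choices[gold_idx]
    raise ValueError(formulation)

choices = ["London", "Paris", "Berlin", "Rome"]
prompt, target = render("What is the capital of France?", choices, 1, "hybrid")
print(target)  # Paris
```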


### Creating Your Multilingual Task

#### Step 1: Create the Task File
Create a Python file following the custom task guide structure.

#### Step 2: Import Required Components
```python
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.multilingual.language import Language
from lighteval.tasks.multilingual.formulations import MCFFormulation, CFFormulation, HybridFormulation
from lighteval.tasks.multilingual.templates import get_template_prompt_function
from lighteval.tasks.multilingual.metrics import get_metrics_for_formulation, loglikelihood_acc_metric
from lighteval.tasks.multilingual.normalization import LogProbTokenNorm, LogProbCharNorm
```

#### Step 3: Define Your Tasks
```python
your_tasks = [
    LightevalTaskConfig(
        # Name of your evaluation
        name=f"evalname_{language.value}_{formulation.name.lower()}",
        # The evaluation is community contributed
        suite=["community"],
        # This will automatically get the correct metrics for your chosen formulation
        metric=get_metrics_for_formulation(
            formulation,
            [
                LogLikelihoodAccMetric(normalization=None),
                LogLikelihoodAccMetric(normalization=LogProbTokenNorm()),
                LogLikelihoodAccMetric(normalization=LogProbCharNorm()),
            ],
        ),
        # In this function, you choose which template to follow and for which language and formulation
        prompt_function=get_template_prompt_function(
            language=language,
            # Use the adapter to define the mapping between the
            # keys of the template (left), and the keys of your dataset
            # (right)
            # To know which template keys are required and available,
            # consult the appropriate adapter type and doc-string.
            adapter=lambda line: {
                "key": line["relevant_key"],
                # Add more mappings as needed
            },
            formulation=formulation,
        ),
        # You can also add specific filters to remove irrelevant samples
        hf_filter=lambda line: line["label"] in <condition>,
        # You then select your huggingface dataset as well as
        # the splits available for evaluation
        hf_repo=<dataset>,
        hf_subset=<subset>,
        evaluation_splits=["train"],
        hf_avail_splits=["train"],
    )
    for language in [
        Language.YOUR_LANGUAGE,  # Add your target languages
        # Language.SPANISH,
        # Language.FRENCH,
        # etc.
    ]
    for formulation in [MCFFormulation(), CFFormulation(), HybridFormulation()]
]
```

#### Step 4: Test Your Implementation
Follow the custom task guide to test if your task is correctly implemented.

> [!TIP]
> All [LightevalTaskConfig](/docs/lighteval/main/en/package_reference/tasks#lighteval.tasks.lighteval_task.LightevalTaskConfig) parameters are strongly typed, including the inputs to the template function. Make sure to take advantage of your IDE's functionality to make it easier to correctly fill these parameters.

### Validation Checklist
- [ ] Translation literals are accurate and complete
- [ ] Task works correctly across all target languages
- [ ] Metrics are appropriate for the task type
- [ ] Documentation is clear and comprehensive
- [ ] Code follows project conventions

### Getting Help

- **GitHub Issues**: Report bugs or ask questions
- **Discussions**: Join community discussions
- **Documentation**: Review existing guides and examples


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/contributing-to-multilingual-evaluations.mdx" />

### Saving and Reading Results
https://huggingface.co/docs/lighteval/main/saving-and-reading-results.md

# Saving and Reading Results

Lighteval provides comprehensive logging and result management through the `EvaluationTracker` class. This system allows you to save results locally and optionally push them to various platforms for collaboration and analysis.

## Saving Results Locally

Lighteval automatically saves results and evaluation details in the
directory specified with the `--output-dir` option. The results are saved in
`{output_dir}/results/{model_name}/results_{timestamp}.json`. [Here is an
example of a result file](#example-of-a-result-file). The output path can be
any [fsspec](https://filesystem-spec.readthedocs.io/en/latest/index.html)
compliant path (local, S3, Hugging Face Hub, Google Drive, FTP, etc.).

To save detailed evaluation information, you can use the `--save-details`
option. The details are saved in Parquet files at
`{output_dir}/details/{model_name}/{timestamp}/details_{task}_{timestamp}.parquet`.

If you want results to be saved in a custom path structure, you can set the `--results-path-template` option.
This lets you specify a string template for the path. The template must contain the following
variables: `output_dir`, `org`, `model`. For example:
`{output_dir}/{org}_{model}`. The template is used to build the path of the results file.
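
To illustrate, the substitution behaves like Python's standard `str.format`; the sketch below shows how the example template above would expand (the concrete values are made up, and the exact formatting code inside lighteval may differ):

```python
# Sketch of how a results path template expands.
template = "{output_dir}/{org}_{model}"

path = template.format(
    output_dir="./results",
    org="HuggingFaceH4",
    model="zephyr-7b-beta",
)
print(path)  # ./results/HuggingFaceH4_zephyr-7b-beta
```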

## Pushing Results to the Hugging Face Hub

You can push results and evaluation details to the Hugging Face Hub. To do
so, you need to set the `--push-to-hub` option as well as the `--results-org`
option. The results are saved in a dataset with the name
`{results_org}/{model_org}/{model_name}`. To push the details, you need to set
the `--save-details` option.

The dataset created will be private by default. You can make it public by
setting the `--public-run` option.

## Pushing Results to TensorBoard

You can push results to TensorBoard by setting `--push-to-tensorboard`.
This creates a TensorBoard dashboard in a Hugging Face organization specified with the `--results-org`
option.

## Pushing Results to Weights & Biases or Trackio

You can push results to Weights & Biases by setting `--wandb`. This initializes a W&B
run and logs the results.

W&B arguments need to be set in your environment variables:

```bash
export WANDB_PROJECT="lighteval"
```

You can find a complete list of variables in the [W&B documentation](https://docs.wandb.ai/guides/track/environment-variables/).

If Trackio is available in your environment (`pip install lighteval[trackio]`), it will be used to log and push results to a
Hugging Face dataset. Choose the dataset name and organization with:

```bash
export WANDB_SPACE_ID="org/name"
```

## How to Load and Investigate Details

### Loading from Local Detail Files

```python
from datasets import load_dataset
import os
import glob

output_dir = "evals_doc"
model_name = "HuggingFaceH4/zephyr-7b-beta"
timestamp = "latest"
task = "lighteval|gsm8k|0"

if timestamp == "latest":
    path = f"{output_dir}/details/{model_name}/*/"
    timestamps = glob.glob(path)
    timestamp = sorted(timestamps)[-1].split("/")[-2]
    print(f"Latest timestamp: {timestamp}")

details_path = f"{output_dir}/details/{model_name}/{timestamp}/details_{task}_{timestamp}.parquet"

# Load the details
details = load_dataset("parquet", data_files=details_path, split="train")

for detail in details:
    print(detail)
```

### Loading from the Hugging Face Hub

```python
from datasets import load_dataset

results_org = "SaylorTwift"
model_name = "HuggingFaceH4/zephyr-7b-beta"
sanitized_model_name = model_name.replace("/", "__")
task = "lighteval|gsm8k|0"
public_run = False

dataset_path = f"{results_org}/details_{sanitized_model_name}{'_private' if not public_run else ''}"
details = load_dataset(dataset_path, task.replace("|", "_"), split="latest")

for detail in details:
    print(detail)
```

## Detail File Structure

The detail file contains the following columns:

- **`__doc__`**: The document used for evaluation, containing the gold reference, few-shot examples, and other hyperparameters used for the task.
- **`__model_response__`**: Contains model generations, log probabilities, and the input that was sent to the model.
- **`__metric__`**: The value of the metrics for this sample.
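
For example, once loaded, these columns make it easy to pull out failing samples for inspection. The sketch below uses plain dictionaries shaped like the columns above; the inner fields are illustrative assumptions, not the exact schema:

```python
# Hypothetical detail records shaped like the columns above;
# the inner fields are illustrative, not the exact schema.
details = [
    {"__doc__": {"query": "2+2?", "gold": "4"},
     "__model_response__": {"text": "4"},
     "__metric__": {"em": 1.0}},
    {"__doc__": {"query": "3+5?", "gold": "8"},
     "__model_response__": {"text": "7"},
     "__metric__": {"em": 0.0}},
]

# Collect the samples where exact match failed, to inspect by hand.
failures = [d for d in details if d["__metric__"]["em"] == 0.0]
for f in failures:
    print(f["__doc__"]["query"], "->", f["__model_response__"]["text"])
```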

## EvaluationTracker Configuration

The `EvaluationTracker` class provides several configuration options for customizing how results are saved and pushed:

### Basic Configuration

```python
from lighteval.logging.evaluation_tracker import EvaluationTracker

tracker = EvaluationTracker(
    output_dir="./results",
    save_details=True,
    push_to_hub=True,
    hub_results_org="your_username",
    public=False
)
```

### Advanced Configuration

```python
tracker = EvaluationTracker(
    output_dir="./results",
    results_path_template="{output_dir}/custom/{org}_{model}",
    save_details=True,
    push_to_hub=True,
    push_to_tensorboard=True,
    hub_results_org="my-org",
    tensorboard_metric_prefix="eval",
    public=True,
    use_wandb=True
)
```

### Key Parameters

- **`output_dir`**: Local directory to save evaluation results and logs
- **`results_path_template`**: Template for results directory structure
- **`save_details`**: Whether to save detailed evaluation records (default: True)
- **`push_to_hub`**: Whether to push results to Hugging Face Hub (default: False)
- **`push_to_tensorboard`**: Whether to push metrics to TensorBoard (default: False)
- **`hub_results_org`**: Hugging Face Hub organization to push results to
- **`tensorboard_metric_prefix`**: Prefix for TensorBoard metrics (default: "eval")
- **`public`**: Whether to make Hub datasets public (default: False)
- **`use_wandb`**: Whether to log to Weights & Biases or Trackio (default: False)

## Result File Structure

The main results file contains several sections:

### General Configuration
- **`config_general`**: Overall evaluation configuration including model information, timing, and system details
- **`summary_general`**: General statistics about the evaluation run

### Task-Specific Information
- **`config_tasks`**: Configuration details for each evaluated task
- **`summary_tasks`**: Task-specific statistics and metadata
- **`versions`**: Version information for tasks and datasets

### Results
- **`results`**: Actual evaluation metrics and scores for each task
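
Because the results file is plain JSON, these sections can be read back with the standard library. A minimal sketch, using an inlined, trimmed-down file instead of reading from disk:

```python
import json

# A trimmed-down results file mirroring the sections above.
raw = """
{
  "config_general": {"model_name": "gpt2"},
  "results": {
    "lighteval|gsm8k|0": {"em": 0.0, "em_stderr": 0.0},
    "all": {"em": 0.0, "em_stderr": 0.0}
  },
  "versions": {"lighteval|gsm8k|0": 0}
}
"""

data = json.loads(raw)
print(data["config_general"]["model_name"])  # gpt2
for task, metrics in data["results"].items():
    print(task, metrics["em"])
```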

## Example of a Result File

```json
{
  "config_general": {
    "lighteval_sha": "203045a8431bc9b77245c9998e05fc54509ea07f",
    "num_fewshot_seeds": 1,
    "max_samples": 1,
    "job_id": "",
    "start_time": 620979.879320166,
    "end_time": 621004.632108041,
    "total_evaluation_time_secondes": "24.752787875011563",
    "model_name": "gpt2",
    "model_sha": "607a30d783dfa663caf39e06633721c8d4cfcd7e",
    "model_dtype": null,
    "model_size": "476.2 MB"
  },
  "results": {
    "lighteval|gsm8k|0": {
      "em": 0.0,
      "em_stderr": 0.0,
      "maj@8": 0.0,
      "maj@8_stderr": 0.0
    },
    "all": {
      "em": 0.0,
      "em_stderr": 0.0,
      "maj@8": 0.0,
      "maj@8_stderr": 0.0
    }
  },
  "versions": {
    "lighteval|gsm8k|0": 0
  },
  "config_tasks": {
    "lighteval|gsm8k": {
      "name": "gsm8k",
      "prompt_function": "gsm8k",
      "hf_repo": "gsm8k",
      "hf_subset": "main",
      "metric": [
        {
          "metric_name": "em",
          "higher_is_better": true,
          "category": "3",
          "use_case": "5",
          "sample_level_fn": "compute",
          "corpus_level_fn": "mean"
        },
        {
          "metric_name": "maj@8",
          "higher_is_better": true,
          "category": "5",
          "use_case": "5",
          "sample_level_fn": "compute",
          "corpus_level_fn": "mean"
        }
      ],
      "hf_avail_splits": [
        "train",
        "test"
      ],
      "evaluation_splits": [
        "test"
      ],
      "few_shots_split": null,
      "few_shots_select": "random_sampling_from_train",
      "generation_size": 256,
      "generation_grammar": null,
      "stop_sequence": [
        "Question="
      ],
      "num_samples": null,
      "suite": [
        "lighteval"
      ],
      "original_num_docs": 1319,
      "effective_num_docs": 1,
      "must_remove_duplicate_docs": null,
      "version": 0
    }
  },
  "summary_tasks": {
    "lighteval|gsm8k|0": {
      "hashes": {
        "hash_examples": "8517d5bf7e880086",
        "hash_full_prompts": "8517d5bf7e880086",
        "hash_input_tokens": "29916e7afe5cb51d",
        "hash_cont_tokens": "37f91ce23ef6d435"
      },
      "padded": 0,
      "non_padded": 2,
      "effective_few_shots": 0.0
    }
  },
  "summary_general": {
    "hashes": {
      "hash_examples": "5f383c395f01096e",
      "hash_full_prompts": "5f383c395f01096e",
      "hash_input_tokens": "ac933feb14f96d7b",
      "hash_cont_tokens": "9d03fb26f8da7277"
    },
    "padded": 0,
    "non_padded": 2
  }
}
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/saving-and-reading-results.mdx" />

### Using the Python API
https://huggingface.co/docs/lighteval/main/using-the-python-api.md

# Using the Python API

Lighteval can be used from a custom Python script. To evaluate a model, you will need to set up an
[EvaluationTracker](/docs/lighteval/main/en/package_reference/evaluation_tracker#lighteval.logging.evaluation_tracker.EvaluationTracker), [PipelineParameters](/docs/lighteval/main/en/package_reference/pipeline#lighteval.pipeline.PipelineParameters),
a [`model`](package_reference/models) or a [`model_config`](package_reference/model_config),
and a [Pipeline](/docs/lighteval/main/en/package_reference/pipeline#lighteval.pipeline.Pipeline).

After that, simply run the pipeline and save the results.

```python
import lighteval
from lighteval.logging.evaluation_tracker import EvaluationTracker
from lighteval.models.vllm.vllm_model import VLLMModelConfig
from lighteval.pipeline import ParallelismManager, Pipeline, PipelineParameters
from lighteval.utils.imports import is_package_available

if is_package_available("accelerate"):
    from datetime import timedelta
    from accelerate import Accelerator, InitProcessGroupKwargs
    accelerator = Accelerator(kwargs_handlers=[InitProcessGroupKwargs(timeout=timedelta(seconds=3000))])
else:
    accelerator = None

def main():
    evaluation_tracker = EvaluationTracker(
        output_dir="./results",
        save_details=True,
        push_to_hub=True,
        hub_results_org="your_username",  # Replace with your actual username
    )

    pipeline_params = PipelineParameters(
        launcher_type=ParallelismManager.ACCELERATE,
        custom_tasks_directory=None,  # Set to path if using custom tasks
        # Remove the parameter below once your configuration is tested
        max_samples=10
    )

    model_config = VLLMModelConfig(
        model_name="HuggingFaceH4/zephyr-7b-beta",
        dtype="float16",
    )

    task = "lighteval|gsm8k|5"

    pipeline = Pipeline(
        tasks=task,
        pipeline_parameters=pipeline_params,
        evaluation_tracker=evaluation_tracker,
        model_config=model_config,
    )

    pipeline.evaluate()
    pipeline.save_and_push_results()
    pipeline.show_results()

if __name__ == "__main__":
    main()
```

## Key Components

### EvaluationTracker
The `EvaluationTracker` handles logging and saving evaluation results. It can save results locally and optionally push them to the Hugging Face Hub.

### PipelineParameters
`PipelineParameters` configures how the evaluation pipeline runs, including parallelism settings and task configuration.

### Model Configuration
Model configurations define the model to be evaluated, including the model name, data type, and other model-specific parameters. Different backends (VLLM, Transformers, etc.) have their own configuration classes.

### Pipeline
The `Pipeline` orchestrates the entire evaluation process, taking the tasks, model configuration, and parameters to run the evaluation.

## Running Multiple Tasks

You can evaluate on multiple tasks by providing a comma-separated list or a file path:

```python
# Multiple tasks as comma-separated string
tasks = "lighteval|aime24|0,lighteval|aime25|0"

# Or load from a file
tasks = "./path/to/tasks.txt"

pipeline = Pipeline(
    tasks=tasks,
    # ... other parameters
)
```

## Custom Tasks

To use custom tasks, set the `custom_tasks_directory` parameter to the path containing your custom task definitions:

```python
pipeline_params = PipelineParameters(
    custom_tasks_directory="./path/to/custom/tasks",
    # ... other parameters
)
```

For more information on creating custom tasks, see the [Adding a Custom Task](adding-a-custom-task) guide.


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/using-the-python-api.mdx" />

### Evaluating Custom Models
https://huggingface.co/docs/lighteval/main/evaluating-a-custom-model.md

# Evaluating Custom Models

Lighteval allows you to evaluate custom model implementations by creating a custom model class that inherits from `LightevalModel`.
This is useful when you want to evaluate models that aren't directly supported by the standard backends and providers (Transformers, VLLM, etc.), or
if you want to add your own pre/post-processing logic.

## Creating a Custom Model

### Step 1: Create Your Model Implementation

Create a Python file containing your custom model implementation. The model must inherit from `LightevalModel` and implement all required methods.

Here's a basic example:

```python
from lighteval.models.abstract_model import LightevalModel
from lighteval.models.model_output import ModelResponse
from lighteval.tasks.requests import Doc, SamplingMethod
from lighteval.utils.cache_management import SampleCache, cached

class MyCustomModel(LightevalModel):
    def __init__(self, config):
        super().__init__(config)
        # Initialize your model here...

        # Enable caching (recommended)
        self._cache = SampleCache(config)

    @cached(SamplingMethod.GENERATIVE)
    def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
        # Implement generation logic
        pass

    @cached(SamplingMethod.LOGPROBS)
    def loglikelihood(self, docs: list[Doc]) -> list[ModelResponse]:
        # Implement loglikelihood computation
        pass

    @cached(SamplingMethod.PERPLEXITY)
    def loglikelihood_rolling(self, docs: list[Doc]) -> list[ModelResponse]:
        # Implement rolling loglikelihood computation
        pass
```

### Step 2: Model File Requirements

The custom model file should contain exactly one class that inherits from `LightevalModel`. This class will be automatically detected and instantiated when loading the model.

> [!TIP]
> You can find a complete example of a custom model implementation in `examples/custom_models/google_translate_model.py`.

## Running the Evaluation

You can evaluate your custom model using either the command-line interface or the Python API.

### Using the Command Line

```bash
lighteval custom \
    "google-translate" \
    "examples/custom_models/google_translate_model.py" \
    "lighteval|wmt20:fr-de|0" \
    --max-samples 10
```

The command takes three required arguments:
- **Model name**: Used for tracking in results/logs
- **Model implementation file path**: Path to your Python file containing the custom model
- **Tasks**: Tasks to evaluate on (same format as other backends)

### Using the Python API

```python
from lighteval.logging.evaluation_tracker import EvaluationTracker
from lighteval.models.custom.custom_model import CustomModelConfig
from lighteval.pipeline import Pipeline, PipelineParameters, ParallelismManager

# Set up evaluation tracking
evaluation_tracker = EvaluationTracker(
    output_dir="results",
    save_details=True
)

# Configure the pipeline
pipeline_params = PipelineParameters(
    launcher_type=ParallelismManager.CUSTOM,
)

# Configure your custom model
model_config = CustomModelConfig(
    model_name="my-custom-model",
    model_definition_file_path="path/to/my_model.py"
)

# Create and run the pipeline
pipeline = Pipeline(
    tasks="leaderboard|truthfulqa:mc|0",
    pipeline_parameters=pipeline_params,
    evaluation_tracker=evaluation_tracker,
    model_config=model_config
)

pipeline.evaluate()
pipeline.save_and_push_results()
```

## Required Methods

Your custom model must implement these core methods:

### `greedy_until`
For generating text until a stop sequence or max tokens is reached. This is used for generative evaluations.

```python
def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
    """
    Generate text until stop sequence or max tokens.

    Args:
        docs: list of documents containing prompts and generation parameters

    Returns:
        list of model responses with generated text
    """
    pass
```

### `loglikelihood`
For computing log probabilities of specific continuations. This is used for multiple choice logprob evaluations.

```python
def loglikelihood(self, docs: list[Doc]) -> list[ModelResponse]:
    """
    Compute log probabilities of continuations.

    Args:
        docs: list of documents containing context and continuation pairs

    Returns:
        list of model responses with log probabilities
    """
    pass
```

### `loglikelihood_rolling`
For computing rolling log probabilities of sequences. This is used for perplexity metrics.

```python
def loglikelihood_rolling(self, docs: list[Doc]) -> list[ModelResponse]:
    """
    Compute rolling log probabilities of sequences.

    Args:
        docs: list of documents containing text sequences

    Returns:
        list of model responses with rolling log probabilities
    """
    pass
```

See the `LightevalModel` base class documentation for detailed method signatures and requirements.
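
To make the expected control flow concrete, here is a self-contained toy sketch of a `greedy_until`-style method. `Doc` and `ModelResponse` below are simplified stand-ins, not the real lighteval classes; the "model" just echoes each prompt and truncates at the first stop sequence:

```python
from dataclasses import dataclass, field

# Simplified stand-ins for lighteval's Doc and ModelResponse.
@dataclass
class Doc:
    query: str
    stop_sequences: list[str] = field(default_factory=list)

@dataclass
class ModelResponse:
    text: str

class EchoModel:
    """Toy model: 'generates' by echoing the query, honoring stop sequences."""

    def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
        responses = []
        for doc in docs:
            generation = doc.query  # a real model would call its backend here
            # Cut the generation at the first stop sequence that appears.
            for stop in doc.stop_sequences:
                idx = generation.find(stop)
                if idx != -1:
                    generation = generation[:idx]
            responses.append(ModelResponse(text=generation))
        return responses

model = EchoModel()
out = model.greedy_until(
    [Doc(query="Answer: 42\nQuestion=next", stop_sequences=["Question="])]
)
# out[0].text == "Answer: 42\n"
```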

## Enabling Caching (Recommended)

Lighteval includes a caching system that can significantly speed up evaluations by storing and reusing model predictions.
To enable caching in your custom model:

### Step 1: Import Caching Components
```python
from lighteval.utils.cache_management import SampleCache, cached
```

### Step 2: Initialize Cache in Constructor
```python
def __init__(self, config):
    super().__init__(config)
    # Your initialization code...
    self._cache = SampleCache(config)
```

### Step 3: Add Cache Decorators
Add cache decorators to your prediction methods:

```python
@cached(SamplingMethod.GENERATIVE)
def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
    # Your implementation...
```

For detailed information about the caching system, see the [Caching Documentation](caching).
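
The idea behind the `cached` decorator can be sketched in plain Python. The decorator below is a simplified stand-in (the real one persists predictions per sample and handles much more); it memoizes responses in memory, keyed by each doc's query, so repeated docs skip the underlying method:

```python
import functools

def cached_by_query(method):
    """Toy stand-in for lighteval's @cached decorator: memoize
    per-sample responses in memory, keyed by each doc's query."""
    @functools.wraps(method)
    def wrapper(self, docs):
        cache = getattr(self, "_toy_cache", None)
        if cache is None:
            cache = self._toy_cache = {}
        missing = [d for d in docs if d["query"] not in cache]
        if missing:
            for doc, resp in zip(missing, method(self, missing)):
                cache[doc["query"]] = resp
        # Every doc is now answered from the cache (fresh or reused).
        return [cache[d["query"]] for d in docs]
    return wrapper

class ToyModel:
    calls = 0  # counts how often the wrapped method actually runs

    @cached_by_query
    def greedy_until(self, docs):
        ToyModel.calls += 1
        return [f"echo: {d['query']}" for d in docs]

model = ToyModel()
model.greedy_until([{"query": "a"}, {"query": "b"}])
model.greedy_until([{"query": "a"}])  # served entirely from the cache
print(ToyModel.calls)  # 1
```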

## Troubleshooting

### Common Issues

1. **Import Errors**: Ensure all required dependencies are installed
2. **Method Signature Errors**: Verify your methods match the expected signatures
3. **Caching Issues**: Check that cache decorators are applied correctly
4. **Performance Issues**: Consider implementing batching and caching

### Debugging Tips

- Use the `--max-samples` flag to test with a small dataset
- Enable detailed logging to see what's happening
- Test individual methods in isolation
- Check the example implementations for reference

For more detailed information about custom model implementation, see the [Model Reference](package_reference/models).


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/evaluating-a-custom-model.mdx" />

### Available tasks
https://huggingface.co/docs/lighteval/main/available-tasks.md

# Available tasks

Browse and inspect tasks available in LightEval.
<iframe
	src="https://openevals-benchmark-finder.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>



### List all tasks

List all tasks:

```bash
lighteval tasks list
```

### Inspect specific tasks

Inspect a task to view its config, metrics, and requirements:

```bash
lighteval tasks inspect <task_name>
```

Example:
```bash
lighteval tasks inspect "lighteval|truthfulqa:mc|0"
```


<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/available-tasks.mdx" />

### Logging
https://huggingface.co/docs/lighteval/main/package_reference/logging.md

# Logging

## EvaluationTracker[[lighteval.logging.evaluation_tracker.EvaluationTracker]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.evaluation_tracker.EvaluationTracker</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L95</source><parameters>[{"name": "output_dir", "val": ": str"}, {"name": "results_path_template", "val": ": str | None = None"}, {"name": "save_details", "val": ": bool = True"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "push_to_tensorboard", "val": ": bool = False"}, {"name": "hub_results_org", "val": ": str | None = ''"}, {"name": "tensorboard_metric_prefix", "val": ": str = 'eval'"}, {"name": "public", "val": ": bool = False"}, {"name": "nanotron_run_info", "val": ": GeneralArgs = None"}, {"name": "use_wandb", "val": ": bool = False"}]</parameters><paramsdesc>- **output_dir** (str) -- Local directory to save evaluation results and logs
- **results_path_template** (str, optional) -- Template for results directory structure.
  Example: "{output_dir}/results/{org}_{model}"
- **save_details** (bool, defaults to True) -- Whether to save detailed evaluation records
- **push_to_hub** (bool, defaults to False) -- Whether to push results to HF Hub
- **push_to_tensorboard** (bool, defaults to False) -- Whether to push metrics to TensorBoard
- **hub_results_org** (str, optional) -- HF Hub organization to push results to
- **tensorboard_metric_prefix** (str, defaults to "eval") -- Prefix for TensorBoard metrics
- **public** (bool, defaults to False) -- Whether to make Hub datasets public
- **nanotron_run_info** (GeneralArgs, optional) -- Nanotron model run information
- **use_wandb** (bool, defaults to False) -- Whether to log to Weights & Biases or Trackio if available</paramsdesc><paramgroups>0</paramgroups></docstring>
Tracks and manages evaluation results, metrics, and logging for model evaluations.

The EvaluationTracker coordinates multiple specialized loggers to track different aspects of model evaluation:

- Details Logger (DetailsLogger): Records per-sample evaluation details and predictions
- Metrics Logger (MetricsLogger): Tracks aggregate evaluation metrics and scores
- Versions Logger (VersionsLogger): Records task and dataset versions
- General Config Logger (GeneralConfigLogger): Stores overall evaluation configuration
- Task Config Logger (TaskConfigLogger): Maintains per-task configuration details

The tracker can save results locally and optionally push them to:
- Hugging Face Hub as datasets
- TensorBoard for visualization
- Trackio or Weights & Biases for experiment tracking



<ExampleCodeBlock anchor="lighteval.logging.evaluation_tracker.EvaluationTracker.example">

Example:
```python
tracker = EvaluationTracker(
    output_dir="./eval_results",
    push_to_hub=True,
    hub_results_org="my-org",
    save_details=True
)

# Log evaluation results
tracker.metrics_logger.add_metric("accuracy", 0.85)
tracker.details_logger.add_detail(task_name="qa", prediction="Paris")

# Save all results
tracker.save()
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_final_dict</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.generate_final_dict</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L363</source><parameters>[]</parameters><rettype>dict</rettype><retdesc>Dictionary containing all experiment information including config, results, versions, and summaries</retdesc></docstring>
Aggregates and returns all the logger's experiment information in a dictionary.

This function should be used to gather and display said information at the end of an evaluation run.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.push_to_hub</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L387</source><parameters>[{"name": "date_id", "val": ": str"}, {"name": "details", "val": ": dict"}, {"name": "results_dict", "val": ": dict"}]</parameters></docstring>
Pushes the experiment details (all the model predictions for every step) to the hub.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>recreate_metadata_card</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.recreate_metadata_card</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L454</source><parameters>[{"name": "repo_id", "val": ": str"}]</parameters><paramsdesc>- **repo_id** (str) -- Details dataset repository path on the hub (`org/dataset`)</paramsdesc><paramgroups>0</paramgroups></docstring>
Fully updates the details repository metadata card for the currently evaluated model.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.save</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L247</source><parameters>[]</parameters></docstring>
Saves the experiment information and results to files, and to the hub if requested.

</div></div>

## GeneralConfigLogger[[lighteval.logging.info_loggers.GeneralConfigLogger]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.info_loggers.GeneralConfigLogger</name><anchor>lighteval.logging.info_loggers.GeneralConfigLogger</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L48</source><parameters>[]</parameters><paramsdesc>- **lighteval_sha** (str) -- Git commit SHA of lighteval used for evaluation, enabling exact version reproducibility.
  Set to "?" if not in a git repository.

- **num_fewshot_seeds** (int) -- Number of random seeds used for few-shot example sampling.
  - If <= 1: Single evaluation with seed=0
  - If > 1: Multiple evaluations with different few-shot samplings (HELM-style)

- **max_samples** (int, optional) -- Maximum number of samples to evaluate per task.
  Only used for debugging - truncates each task's dataset.

- **job_id** (int, optional) -- Slurm job ID if running on a cluster.
  Used to cross-reference with scheduler logs.

- **start_time** (float) -- Unix timestamp when evaluation started.
  Automatically set during logger initialization.

- **end_time** (float) -- Unix timestamp when evaluation completed.
  Set by calling log_end_time().

- **total_evaluation_time_secondes** (str) -- Total runtime in seconds.
  Calculated as end_time - start_time.

- **model_config** (ModelConfig) -- Complete model configuration settings.
  Contains model architecture, tokenizer, and generation parameters.

- **model_name** (str) -- Name identifier for the evaluated model.
  Extracted from model_config.</paramsdesc><paramgroups>0</paramgroups></docstring>
Tracks general configuration and runtime information for model evaluations.

This logger captures key configuration parameters, model details, and timing information
to ensure reproducibility and provide insights into the evaluation process.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>log_args_info</name><anchor>lighteval.logging.info_loggers.GeneralConfigLogger.log_args_info</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L106</source><parameters>[{"name": "num_fewshot_seeds", "val": ": int"}, {"name": "max_samples", "val": ": int | None"}, {"name": "job_id", "val": ": str"}]</parameters><paramsdesc>- **num_fewshot_seeds** (int) -- number of few-shot seeds.
- **max_samples** (int | None) -- maximum number of samples, if None, use all the samples available.
- **job_id** (str) -- job ID, used to retrieve logs.</paramsdesc><paramgroups>0</paramgroups></docstring>
Logs the information about the arguments passed to the method.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>log_model_info</name><anchor>lighteval.logging.info_loggers.GeneralConfigLogger.log_model_info</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L123</source><parameters>[{"name": "model_config", "val": ": ModelConfig"}]</parameters><paramsdesc>- **model_config** -- the model config used to initialize the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Logs the model information.




</div></div>

## DetailsLogger[[lighteval.logging.info_loggers.DetailsLogger]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.info_loggers.DetailsLogger</name><anchor>lighteval.logging.info_loggers.DetailsLogger</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L138</source><parameters>[{"name": "hashes", "val": ": dict = <factory>"}, {"name": "compiled_hashes", "val": ": dict = <factory>"}, {"name": "details", "val": ": dict = <factory>"}, {"name": "compiled_details", "val": ": dict = <factory>"}, {"name": "compiled_details_over_all_tasks", "val": ": DetailsLogger.CompiledDetailOverAllTasks = <factory>"}]</parameters><paramsdesc>- **hashes** (dict[str, list[`Hash`]]) -- Maps each task name to the list of all its samples' `Hash`.
- **compiled_hashes** (dict[str, `CompiledHash`]) -- Maps each task name to its `CompiledHash`, an aggregation of all the individual sample hashes.
- **details** (dict[str, list[`Detail`]]) -- Maps each task name to the list of its samples' details.
  Example: winogrande: [sample1_details, sample2_details, ...]
- **compiled_details** (dict[str, `CompiledDetail`]) -- Maps each task name to its samples' compiled details.
- **compiled_details_over_all_tasks** (CompiledDetailOverAllTasks) -- Aggregated details over all the tasks.</paramsdesc><paramgroups>0</paramgroups></docstring>
Logger for the experiment details.

Stores and logs experiment information both at the task and at the sample level.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>aggregate</name><anchor>lighteval.logging.info_loggers.DetailsLogger.aggregate</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L277</source><parameters>[]</parameters></docstring>
Hashes the details for each task and then for all tasks.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>log</name><anchor>lighteval.logging.info_loggers.DetailsLogger.log</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L253</source><parameters>[{"name": "task_name", "val": ": str"}, {"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "metrics", "val": ": dict"}]</parameters><paramsdesc>- **task_name** (str) -- Name of the current task of interest.
- **doc** (Doc) -- Current sample that we want to store.
- **model_response** (ModelResponse) -- Model outputs for the current sample
- **metrics** (dict) -- Model scores for said sample on the current task's metrics.</paramsdesc><paramgroups>0</paramgroups></docstring>
Stores the relevant information for one sample of one task to the total list of samples stored in the DetailsLogger.




</div></div>

## MetricsLogger[[lighteval.logging.info_loggers.MetricsLogger]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.info_loggers.MetricsLogger</name><anchor>lighteval.logging.info_loggers.MetricsLogger</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L309</source><parameters>[{"name": "metrics_values", "val": ": dict = <factory>"}, {"name": "metric_aggregated", "val": ": dict = <factory>"}]</parameters><paramsdesc>- **metrics_values** (dict[str, dict[str, list[float]]]) -- Maps each task to its dictionary of metrics to scores for all the examples of the task.
  Example: {"winogrande|winogrande_xl": {"accuracy": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]}}
- **metric_aggregated** (dict[str, dict[str, float]]) -- Maps each task to its dictionary of metrics to aggregated scores over all the examples of the task.
  Example: {"winogrande|winogrande_xl": {"accuracy": 0.5}}</paramsdesc><paramgroups>0</paramgroups></docstring>
Logs the actual scores for each metric of each task.
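As a sketch of how the first structure collapses into the second (the real `aggregate` method consults each task's configured aggregation functions and can additionally run a statistical bootstrap), a simple mean aggregation over the shapes shown above looks like:

```python
# Hypothetical per-sample scores, in the metrics_values shape shown above
metrics_values = {
    "winogrande|winogrande_xl": {"accuracy": [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]}
}

# Collapse each list of per-sample scores into one aggregated value per metric
metric_aggregated = {
    task: {metric: sum(scores) / len(scores) for metric, scores in metrics.items()}
    for task, metrics in metrics_values.items()
}

print(metric_aggregated)
# {'winogrande|winogrande_xl': {'accuracy': 0.5}}
```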





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>aggregate</name><anchor>lighteval.logging.info_loggers.MetricsLogger.aggregate</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L330</source><parameters>[{"name": "task_dict", "val": ": dict"}, {"name": "bootstrap_iters", "val": ": int = 1000"}]</parameters><paramsdesc>- **task_dict** (dict[str, LightevalTask]) -- used to determine what aggregation function to use for each metric
- **bootstrap_iters** (int, optional) -- Number of runs used to run the statistical bootstrap. Defaults to 1000.</paramsdesc><paramgroups>0</paramgroups></docstring>
Aggregate the metrics for each task and then for all tasks.




</div></div>

## VersionsLogger[[lighteval.logging.info_loggers.VersionsLogger]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.info_loggers.VersionsLogger</name><anchor>lighteval.logging.info_loggers.VersionsLogger</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L406</source><parameters>[{"name": "versions", "val": ": dict = <factory>"}]</parameters><paramsdesc>- **versions** (dict[str, int]) -- Maps task names to their task versions.</paramsdesc><paramgroups>0</paramgroups></docstring>
Logger of the task versions.

Tasks can have a version number or date, which indicates the precise metric definition and dataset version used for an evaluation.




</div>

## TaskConfigLogger[[lighteval.logging.info_loggers.TaskConfigLogger]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.info_loggers.TaskConfigLogger</name><anchor>lighteval.logging.info_loggers.TaskConfigLogger</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/info_loggers.py#L425</source><parameters>[{"name": "tasks_configs", "val": ": dict = <factory>"}]</parameters><paramsdesc>- **tasks_configs** (dict[str, LightevalTaskConfig]) -- Maps each task to its associated `LightevalTaskConfig`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Logs the different parameters of the current `LightevalTask` of interest.




</div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/logging.mdx" />

### EvaluationTracker[[lighteval.logging.evaluation_tracker.EvaluationTracker]]
https://huggingface.co/docs/lighteval/main/package_reference/evaluation_tracker.md

# EvaluationTracker[[lighteval.logging.evaluation_tracker.EvaluationTracker]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.logging.evaluation_tracker.EvaluationTracker</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L95</source><parameters>[{"name": "output_dir", "val": ": str"}, {"name": "results_path_template", "val": ": str | None = None"}, {"name": "save_details", "val": ": bool = True"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "push_to_tensorboard", "val": ": bool = False"}, {"name": "hub_results_org", "val": ": str | None = ''"}, {"name": "tensorboard_metric_prefix", "val": ": str = 'eval'"}, {"name": "public", "val": ": bool = False"}, {"name": "nanotron_run_info", "val": ": GeneralArgs = None"}, {"name": "use_wandb", "val": ": bool = False"}]</parameters><paramsdesc>- **output_dir** (str) -- Local directory to save evaluation results and logs
- **results_path_template** (str, optional) -- Template for results directory structure.
  Example: "{output_dir}/results/{org}_{model}"
- **save_details** (bool, defaults to True) -- Whether to save detailed evaluation records
- **push_to_hub** (bool, defaults to False) -- Whether to push results to HF Hub
- **push_to_tensorboard** (bool, defaults to False) -- Whether to push metrics to TensorBoard
- **hub_results_org** (str, optional) -- HF Hub organization to push results to
- **tensorboard_metric_prefix** (str, defaults to "eval") -- Prefix for TensorBoard metrics
- **public** (bool, defaults to False) -- Whether to make Hub datasets public
- **nanotron_run_info** (GeneralArgs, optional) -- Nanotron model run information
- **use_wandb** (bool, defaults to False) -- Whether to log to Weights & Biases or Trackio if available</paramsdesc><paramgroups>0</paramgroups></docstring>
Tracks and manages evaluation results, metrics, and logging for model evaluations.

The EvaluationTracker coordinates multiple specialized loggers to track different aspects of model evaluation:

- Details Logger (DetailsLogger): Records per-sample evaluation details and predictions
- Metrics Logger (MetricsLogger): Tracks aggregate evaluation metrics and scores
- Versions Logger (VersionsLogger): Records task and dataset versions
- General Config Logger (GeneralConfigLogger): Stores overall evaluation configuration
- Task Config Logger (TaskConfigLogger): Maintains per-task configuration details

The tracker can save results locally and optionally push them to:
- Hugging Face Hub as datasets
- TensorBoard for visualization
- Trackio or Weights & Biases for experiment tracking



<ExampleCodeBlock anchor="lighteval.logging.evaluation_tracker.EvaluationTracker.example">

Example:
```python
tracker = EvaluationTracker(
    output_dir="./eval_results",
    push_to_hub=True,
    hub_results_org="my-org",
    save_details=True
)

# Log evaluation results
tracker.metrics_logger.add_metric("accuracy", 0.85)
tracker.details_logger.add_detail(task_name="qa", prediction="Paris")

# Save all results
tracker.save()
```

</ExampleCodeBlock>
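`results_path_template` is a plain Python format string; here is a sketch of how it resolves, reusing the field names from the parameter description above (the model name is made up for illustration):

```python
# Template as shown in the parameter description; the tracker fills in the fields
results_path_template = "{output_dir}/results/{org}_{model}"

path = results_path_template.format(
    output_dir="./eval_results",
    org="my-org",
    model="llama-3-8b",  # hypothetical model name
)

print(path)  # ./eval_results/results/my-org_llama-3-8b
```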



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_final_dict</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.generate_final_dict</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L363</source><parameters>[]</parameters><rettype>dict</rettype><retdesc>Dictionary containing all experiment information including config, results, versions, and summaries</retdesc></docstring>
Aggregates and returns all the logger's experiment information in a dictionary.

This function should be used to gather and display said information at the end of an evaluation run.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.push_to_hub</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L387</source><parameters>[{"name": "date_id", "val": ": str"}, {"name": "details", "val": ": dict"}, {"name": "results_dict", "val": ": dict"}]</parameters></docstring>
Pushes the experiment details (all the model predictions for every step) to the hub.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>recreate_metadata_card</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.recreate_metadata_card</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L454</source><parameters>[{"name": "repo_id", "val": ": str"}]</parameters><paramsdesc>- **repo_id** (str) -- Details dataset repository path on the hub (`org/dataset`)</paramsdesc><paramgroups>0</paramgroups></docstring>
Fully updates the details repository metadata card for the currently evaluated model.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>lighteval.logging.evaluation_tracker.EvaluationTracker.save</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/logging/evaluation_tracker.py#L247</source><parameters>[]</parameters></docstring>
Saves the experiment information and results to files, and to the hub if requested.

</div></div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/evaluation_tracker.mdx" />

### Model's Output[[lighteval.models.model_output.ModelResponse]]
https://huggingface.co/docs/lighteval/main/package_reference/models_outputs.md

# Model's Output[[lighteval.models.model_output.ModelResponse]]

All models generate one output per `Doc` supplied to the `generation` or `loglikelihood` functions.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.model_output.ModelResponse</name><anchor>lighteval.models.model_output.ModelResponse</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/model_output.py#L29</source><parameters>[{"name": "input", "val": ": str | list | None = None"}, {"name": "input_tokens", "val": ": list = <factory>"}, {"name": "text", "val": ": list = <factory>"}, {"name": "output_tokens", "val": ": list = <factory>"}, {"name": "text_post_processed", "val": ": list[str] | None = None"}, {"name": "reasonings", "val": ": list = <factory>"}, {"name": "logprobs", "val": ": list = <factory>"}, {"name": "argmax_logits_eq_gold", "val": ": list = <factory>"}, {"name": "logits", "val": ": list[list[float]] | None = None"}, {"name": "unconditioned_logprobs", "val": ": list[float] | None = None"}, {"name": "truncated_tokens_count", "val": ": int = 0"}, {"name": "padded_tokens_count", "val": ": int = 0"}]</parameters><paramsdesc>- **input** (str | list | None) --
  The original input prompt or context that was fed to the model.
  Used for debugging and analysis purposes.

- **input_tokens** (list[int]) --
  The tokenized representation of the input prompt.
  Useful for understanding how the model processes the input.

- **text** (list[str]) --
  The generated text responses from the model. Each element represents
  one generation (useful when num_samples > 1).
  **Required for**: Generative metrics, exact match, llm as a judge, etc.

- **text_post_processed** (Optional[list[str]]) --
  The generated text responses from the model, after post-processing.
  At the moment, post-processing removes thinking/reasoning steps.

  Careful! This is not computed by default, but in a separate step by calling
  `post_process` on the ModelResponse object.
  **Required for**: Generative metrics that require direct answers.

- **logprobs** (list[float]) --
  Log probabilities of the generated tokens or sequences.
  **Required for**: loglikelihood and perplexity metrics.

- **argmax_logits_eq_gold** (list[bool]) --
  Whether the argmax logits match the gold/expected text.
  Used for accuracy calculations in multiple choice and classification tasks.
  **Required for**: certain loglikelihood metrics.


- **unconditioned_logprobs** (Optional[list[float]]) --
  Log probabilities from an unconditioned model (e.g., without context).
  Used for PMI (Pointwise Mutual Information) normalization.
  **Required for**: PMI metrics.</paramsdesc><paramgroups>0</paramgroups></docstring>
A class to represent the response from a model during evaluation.

This dataclass contains all the information returned by a model during inference,
including generated text, log probabilities, token information, and metadata.
Different attributes are required for different types of evaluation metrics.



Usage Examples:

**For generative tasks (text completion, summarization):**
<ExampleCodeBlock anchor="lighteval.models.model_output.ModelResponse.example">

```python
response = ModelResponse(
    text=["The capital of France is Paris."],
    input_tokens=[1, 2, 3, 4],
    output_tokens=[[5, 6, 7, 8]]
)
```

</ExampleCodeBlock>

**For multiple choice tasks:**
<ExampleCodeBlock anchor="lighteval.models.model_output.ModelResponse.example-2">

```python
response = ModelResponse(
    logprobs=[-0.5, -1.2, -2.1, -1.8],  # Logprobs for each choice
    argmax_logits_eq_gold=[False, False, False, False],  # Whether correct choice was selected
    input_tokens=[1, 2, 3, 4],
    output_tokens=[[5], [6], [7], [8]]
)
```

</ExampleCodeBlock>

**For perplexity calculation:**
<ExampleCodeBlock anchor="lighteval.models.model_output.ModelResponse.example-3">

```python
response = ModelResponse(
    text=["The model generated this text."],
    logprobs=[-1.2, -0.8, -1.5, -0.9, -1.1],  # Logprobs for each token
    input_tokens=[1, 2, 3, 4, 5],
    output_tokens=[[6], [7], [8], [9], [10]]
)
```

</ExampleCodeBlock>

**For PMI analysis:**
<ExampleCodeBlock anchor="lighteval.models.model_output.ModelResponse.example-4">

```python
response = ModelResponse(
    text=["The answer is 42."],
    logprobs=[-1.1, -0.9, -1.3, -0.7],  # Conditioned logprobs
    unconditioned_logprobs=[-2.1, -1.8, -2.3, -1.5],  # Unconditioned logprobs
    input_tokens=[1, 2, 3, 4],
    output_tokens=[[5], [6], [7], [8]]
)
```

</ExampleCodeBlock>

Notes:
- For most evaluation tasks, only a subset of attributes is required
- The `text` attribute is the most commonly used for generative tasks
- `logprobs` are essential for probability-based metrics like perplexity
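As an illustration of how `logprobs` and `unconditioned_logprobs` combine, a PMI-style score subtracts the unconditioned from the conditioned log probabilities (a sketch only; the actual metric implementations live in `lighteval.metrics`):

```python
# Values reused from the PMI example above
logprobs = [-1.1, -0.9, -1.3, -0.7]                # log P(continuation | query)
unconditioned_logprobs = [-2.1, -1.8, -2.3, -1.5]  # log P(continuation | unconditioned query)

# Pointwise mutual information between the continuation and the query context
pmi = sum(logprobs) - sum(unconditioned_logprobs)

print(round(pmi, 1))  # 3.7
```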


</div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/models_outputs.mdx" />

### Doc[[lighteval.tasks.requests.Doc]]
https://huggingface.co/docs/lighteval/main/package_reference/doc.md

# Doc[[lighteval.tasks.requests.Doc]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.requests.Doc</name><anchor>lighteval.tasks.requests.Doc</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/requests.py#L44</source><parameters>[{"name": "query", "val": ": str"}, {"name": "choices", "val": ": list"}, {"name": "gold_index", "val": ": typing.Union[int, list[int]]"}, {"name": "instruction", "val": ": str | None = None"}, {"name": "images", "val": ": list['Image'] | None = None"}, {"name": "specific", "val": ": dict | None = None"}, {"name": "unconditioned_query", "val": ": str | None = None"}, {"name": "original_query", "val": ": str | None = None"}, {"name": "id", "val": ": str = ''"}, {"name": "task_name", "val": ": str = ''"}, {"name": "fewshot_samples", "val": ": list = <factory>"}, {"name": "sampling_methods", "val": ": list = <factory>"}, {"name": "fewshot_sorting_class", "val": ": str | None = None"}, {"name": "generation_size", "val": ": int | None = None"}, {"name": "stop_sequences", "val": ": list[str] | None = None"}, {"name": "use_logits", "val": ": bool = False"}, {"name": "num_samples", "val": ": int = 1"}, {"name": "generation_grammar", "val": ": None = None"}]</parameters><paramsdesc>- **query** (str) --
  The main query, prompt, or question to be sent to the model.

- **choices** (list[str]) --
  List of possible answer choices for the query.
  For multiple choice tasks, this contains all options (A, B, C, D, etc.).
  For generative tasks, this may be empty or contain reference answers.

- **gold_index** (Union[int, list[int]]) --
  Index or indices of the correct answer(s) in the choices list.
  For a single correct answer, use an int (e.g., 0 for the first choice).
  For multiple correct answers, use a list (e.g., [0, 2] for first and third).

- **instruction** (str | None) --
  System prompt or task-specific instructions to guide the model.
  This is typically prepended to the query to set context or behavior.

- **images** (list["Image"] | None) --
  List of PIL Image objects for multimodal tasks.

- **specific** (dict | None) --
  Task-specific information or metadata.
  Can contain any additional data needed for evaluation.

- **unconditioned_query** (Optional[str]) --
  Query without task-specific context for PMI normalization.
  Used to calculate: log P(choice | Query) - log P(choice | Unconditioned Query).

- **original_query** (str | None) --
  The query before any preprocessing or modification.

- **#** Set by task parameters --
- **id** (str) --
  Unique identifier for this evaluation instance.
  Set by the task and not the user.

- **task_name** (str) --
  Name of the task or benchmark this Doc belongs to.

- **##** Few-shot Learning Parameters --
- **fewshot_samples** (list) --
  List of Doc objects representing few-shot examples.
  These examples are prepended to the main query to provide context.

- **sampling_methods** (list[SamplingMethod]) --
  List of sampling methods to use for this instance.
  Options: GENERATIVE, LOGPROBS, PERPLEXITY.

- **fewshot_sorting_class** (Optional[str]) --
  Class label for balanced few-shot example selection.
  Used to ensure diverse representation in few-shot examples.

- **##** Generation Control Parameters --
- **generation_size** (int | None) --
  Maximum number of tokens to generate for this instance.

- **stop_sequences** (list[str] | None) --
  List of strings that should stop generation when encountered.
  **Used for**: Controlled generation, preventing unwanted continuations.

- **use_logits** (bool) --
  Whether to return logits (raw model outputs) in addition to text.
  **Used for**: Probability analysis, confidence scoring, detailed evaluation.

- **num_samples** (int) --
  Number of different samples to generate for this instance.
  **Used for**: Diversity analysis, uncertainty estimation, ensemble methods.

- **generation_grammar** (None) --
  Grammar constraints for generation (currently not implemented).
  **Reserved for**: Future structured generation features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Dataclass representing a single evaluation sample for a benchmark.

This class encapsulates all the information needed to evaluate a model on a single
task instance. It contains the input query, expected outputs, metadata, and
configuration parameters for different types of evaluation tasks.

**Required Fields:**
- `query`: The input prompt or question
- `choices`: Available answer choices (for multiple choice tasks)
- `gold_index`: Index(es) of the correct answer(s)

**Optional Fields:**
- `instruction`: Task-specific system prompt, appended to the model-specific system prompt.
- `images`: Visual inputs for multimodal tasks.



Methods:
- `get_golds()`: Returns the correct answer(s) as strings based on `gold_index`. Handles both single and multiple correct answers.

Usage Examples:

**Multiple Choice Question:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example">

```python
doc = Doc(
    query="What is the capital of France?",
    choices=["London", "Paris", "Berlin", "Madrid"],
    gold_index=1,  # Paris is the correct answer
    instruction="Answer the following geography question:",
)
```

</ExampleCodeBlock>

**Generative Task:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-2">

```python
doc = Doc(
    query="Write a short story about a robot.",
    choices=[],  # No predefined choices for generative tasks
    gold_index=0,  # Not used for generative tasks
    generation_size=100,
    stop_sequences=["\n\nEnd"],
)
```

</ExampleCodeBlock>

**Few-shot Learning:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-3">

```python
doc = Doc(
    query="Translate 'Hello world' to Spanish.",
    choices=["Hola mundo", "Bonjour monde", "Ciao mondo"],
    gold_index=0,
    fewshot_samples=[
        Doc(query="Translate 'Good morning' to Spanish.",
            choices=["Buenos días", "Bonjour", "Buongiorno"],
            gold_index=0),
        Doc(query="Translate 'Thank you' to Spanish.",
            choices=["Gracias", "Merci", "Grazie"],
            gold_index=0)
    ],
)
```

</ExampleCodeBlock>

**Multimodal Task:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-4">

```python
doc = Doc(
    query="What is shown in this image?",
    choices=["A cat"],
    gold_index=0,
    images=[pil_image],  # PIL Image object
)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_golds</name><anchor>lighteval.tasks.requests.Doc.get_golds</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/requests.py#L217</source><parameters>[]</parameters></docstring>
Returns the gold target(s) extracted from the document's choices using `gold_index`.
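A simplified reimplementation of the behavior, assuming the `gold_index` semantics documented above:

```python
def get_golds(choices, gold_index):
    """Return the gold choice(s) for a single int or a list of gold indices."""
    indices = gold_index if isinstance(gold_index, list) else [gold_index]
    return [choices[i] for i in indices]

print(get_golds(["London", "Paris", "Berlin", "Madrid"], 1))       # ['Paris']
print(get_golds(["London", "Paris", "Berlin", "Madrid"], [0, 2]))  # ['London', 'Berlin']
```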

</div></div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/doc.mdx" />

### Tasks
https://huggingface.co/docs/lighteval/main/package_reference/tasks.md

# Tasks

## LightevalTask
### LightevalTaskConfig[[lighteval.tasks.lighteval_task.LightevalTaskConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.lighteval_task.LightevalTaskConfig</name><anchor>lighteval.tasks.lighteval_task.LightevalTaskConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L48</source><parameters>[{"name": "name", "val": ": str"}, {"name": "prompt_function", "val": ": typing.Callable[[dict, str], lighteval.tasks.requests.Doc]"}, {"name": "hf_repo", "val": ": str"}, {"name": "hf_subset", "val": ": str"}, {"name": "metrics", "val": ": list[lighteval.metrics.utils.metric_utils.Metric] | tuple[lighteval.metrics.utils.metric_utils.Metric, ...]"}, {"name": "solver", "val": ": None = None"}, {"name": "scorer", "val": ": None = None"}, {"name": "sample_fields", "val": ": typing.Optional[typing.Callable[[dict], inspect_ai.dataset._dataset.Sample]] = None"}, {"name": "sample_to_fewshot", "val": ": typing.Optional[typing.Callable[[inspect_ai.dataset._dataset.Sample], str]] = None"}, {"name": "filter", "val": ": typing.Optional[typing.Callable[[dict], bool]] = None"}, {"name": "hf_revision", "val": ": str | None = None"}, {"name": "hf_filter", "val": ": typing.Optional[typing.Callable[[dict], bool]] = None"}, {"name": "hf_avail_splits", "val": ": list[str] | tuple[str, ...] = <factory>"}, {"name": "evaluation_splits", "val": ": list[str] | tuple[str, ...] = <factory>"}, {"name": "few_shots_split", "val": ": str | None = None"}, {"name": "few_shots_select", "val": ": str | None = None"}, {"name": "generation_size", "val": ": int | None = None"}, {"name": "generation_grammar", "val": ": huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType | None = None"}, {"name": "stop_sequence", "val": ": list[str] | tuple[str, ...] | None = None"}, {"name": "num_samples", "val": ": list[int] | None = None"}, {"name": "suite", "val": ": list[str] | tuple[str, ...] = <factory>"}, {"name": "original_num_docs", "val": ": int = -1"}, {"name": "effective_num_docs", "val": ": int = -1"}, {"name": "must_remove_duplicate_docs", "val": ": bool = False"}, {"name": "num_fewshots", "val": ": int = 0"}, {"name": "version", "val": ": int = 0"}]</parameters><paramsdesc>- **name** (str) -- Short name of the evaluation task.
- **prompt_function** (Callable[[dict, str], Doc]) -- Function that converts dataset
  row to Doc objects for evaluation. Takes a dataset row dict and task
  name as input.
- **hf_repo** (str) -- HuggingFace Hub repository path containing the evaluation dataset.
- **hf_subset** (str) -- Dataset subset/configuration name to use for this task.
- **metrics** (ListLike[Metric]) -- List of metrics to compute for this task.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration dataclass for a LightevalTask.

This class stores all the configuration parameters needed to define and run
an evaluation task, including dataset information, prompt formatting,
evaluation metrics, and generation parameters.



Dataset Configuration:
- **hf_revision** (str | None, optional) -- Specific dataset revision to use. Defaults to None (latest).
- **hf_filter** (Callable[[dict], bool] | None, optional) -- Filter function to apply to dataset items. Defaults to None.
- **hf_avail_splits** (ListLike[str], optional) -- Available dataset splits. Defaults to ["train", "validation", "test"].

Evaluation Splits:
- **evaluation_splits** (ListLike[str], optional) -- Dataset splits to use for evaluation. Defaults to ["validation"].
- **few_shots_split** (str | None, optional) -- Split to sample few-shot examples from. Defaults to None.
- **few_shots_select** (str | None, optional) -- Method for selecting few-shot examples. Defaults to None.

Generation Parameters:
- **generation_size** (int | None, optional) -- Maximum token length for generated text. Defaults to None.
- **generation_grammar** (TextGenerationInputGrammarType | None, optional) -- Grammar for structured text generation. Only available for TGI and Inference Endpoint models. Defaults to None.
- **stop_sequence** (ListLike[str] | None, optional) -- Sequences that stop text generation. Defaults to None.
- **num_samples** (list[int] | None, optional) -- Number of samples to generate per input. Defaults to None.

Task Configuration:
- **suite** (ListLike[str], optional) -- Evaluation suites this task belongs to. Defaults to ["custom"].
- **version** (int, optional) -- Task version number. Increment when the dataset or prompt changes. Defaults to 0.
- **num_fewshots** (int, optional) -- Number of few-shot examples to include. Defaults to 0.
- **truncate_fewshots** (bool, optional) -- Whether to truncate few-shot examples. Defaults to False.
- **must_remove_duplicate_docs** (bool, optional) -- Whether to remove duplicate documents. Defaults to False.

Document Tracking:
- **original_num_docs** (int, optional) -- Total number of documents in the task. Defaults to -1.
- **effective_num_docs** (int, optional) -- Number of documents actually used in evaluation. Defaults to -1.
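A `prompt_function` turns one dataset row into a `Doc`; below is a minimal sketch using a simplified stand-in for `Doc` and hypothetical column names (`question`, `options`, `label`):

```python
from dataclasses import dataclass


@dataclass
class SimpleDoc:
    """Simplified stand-in for lighteval.tasks.requests.Doc (illustration only)."""
    query: str
    choices: list
    gold_index: int
    task_name: str = ""


def prompt_fn(row: dict, task_name: str) -> SimpleDoc:
    # Map (hypothetical) dataset columns onto Doc fields
    return SimpleDoc(
        query=f"Question: {row['question']}\nAnswer:",
        choices=row["options"],
        gold_index=row["label"],
        task_name=task_name,
    )


doc = prompt_fn({"question": "2+2?", "options": ["3", "4"], "label": 1}, "custom|math")
print(doc.gold_index)  # 1
```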


</div>

### LightevalTask[[lighteval.tasks.lighteval_task.LightevalTask]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.lighteval_task.LightevalTask</name><anchor>lighteval.tasks.lighteval_task.LightevalTask</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L201</source><parameters>[{"name": "config", "val": ": LightevalTaskConfig"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>aggregation</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.aggregation</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L405</source><parameters>[]</parameters></docstring>
Returns a dict mapping each metric name to its aggregation function, for all metrics of the task.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>download_dataset_worker</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.download_dataset_worker</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L437</source><parameters>[{"name": "task", "val": ": LightevalTask"}]</parameters><paramsdesc>- **task** (LightevalTask) -- The task object containing dataset configuration.</paramsdesc><paramgroups>0</paramgroups><rettype>DatasetDict</rettype><retdesc>The loaded dataset dictionary containing all splits.</retdesc></docstring>
Worker function to download a dataset from the HuggingFace Hub.

Downloads the dataset specified in the task configuration, optionally
applies a filter if configured, and returns the dataset dictionary.
This method is designed to be used for parallel dataset loading.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>eval_docs</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.eval_docs</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L348</source><parameters>[]</parameters><rettype>list[Doc]</rettype><retdesc>Evaluation documents.</retdesc></docstring>
Returns the evaluation documents.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fewshot_docs</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.fewshot_docs</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L329</source><parameters>[]</parameters><rettype>list[Doc]</rettype><retdesc>Documents that will be used for few shot examples. One
document = one few shot example.</retdesc></docstring>
Returns the few-shot documents. If they are not already available, they are taken from the few-shot split or, failing that, the evaluation split.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_docs</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.get_docs</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L360</source><parameters>[{"name": "max_samples", "val": ": int | None = None"}]</parameters><paramsdesc>- **max_samples** (int | None, optional) -- Maximum number of documents to return.
  If None, returns all available documents. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>list[Doc]</rettype><retdesc>List of documents ready for evaluation with few-shot examples
and generation parameters configured.</retdesc><raises>- ``ValueError`` -- If no documents are available for evaluation.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Get evaluation documents with few-shot examples and generation parameters configured.

Retrieves evaluation documents, optionally limits the number of samples,
shuffles them for reproducibility, and configures each document with
few-shot examples and generation parameters for evaluation.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_first_possible_fewshot_splits</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.get_first_possible_fewshot_splits</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L255</source><parameters>[{"name": "available_splits", "val": ": list[str] | tuple[str, ...]"}]</parameters><rettype>str</rettype><retdesc>the first available fewshot splits or None if nothing is available</retdesc></docstring>
Checks the candidate few-shot split keys in order (train first, then validation) against the available splits and returns the first match, or None if no candidate is available.
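The selection logic can be sketched as follows (a simplification, assuming the train-then-validation preference described above):

```python
def first_fewshot_split(available_splits):
    """Return the first available split matching a preferred candidate, else None."""
    # Prefer train-like splits, then validation-like ones
    for candidate in ("train", "validation"):
        for split in available_splits:
            if candidate in split:
                return split
    return None


print(first_fewshot_split(["validation", "test"]))  # validation
print(first_fewshot_split(["test"]))                # None
```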






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_datasets</name><anchor>lighteval.tasks.lighteval_task.LightevalTask.load_datasets</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/lighteval_task.py#L414</source><parameters>[{"name": "tasks", "val": ": dict"}, {"name": "dataset_loading_processes", "val": ": int = 1"}]</parameters><paramsdesc>- **tasks** (dict[str, LightevalTask]) -- Dictionary mapping task names to task objects.
- **dataset_loading_processes** (int, optional) -- Number of processes to use for
  parallel dataset loading. Defaults to 1 (sequential loading).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load datasets from the HuggingFace Hub for the given tasks.




</div></div>

## PromptManager[[lighteval.tasks.prompt_manager.PromptManager]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.prompt_manager.PromptManager</name><anchor>lighteval.tasks.prompt_manager.PromptManager</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/prompt_manager.py#L42</source><parameters>[{"name": "use_chat_template", "val": ": bool = False"}, {"name": "tokenizer", "val": " = None"}, {"name": "system_prompt", "val": ": str | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_prompt</name><anchor>lighteval.tasks.prompt_manager.PromptManager.prepare_prompt</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/prompt_manager.py#L48</source><parameters>[{"name": "doc", "val": ": Doc"}]</parameters><rettype>str</rettype><retdesc>The formatted prompt string</retdesc></docstring>
Prepare a prompt from a document, either using chat template or plain text format.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_prompt_api</name><anchor>lighteval.tasks.prompt_manager.PromptManager.prepare_prompt_api</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/prompt_manager.py#L88</source><parameters>[{"name": "doc", "val": ": Doc"}]</parameters><rettype>list[dict[str, str]]</rettype><retdesc>List of message dictionaries for API calls</retdesc></docstring>
Prepare a prompt for API calls, using a chat-like format.
Will not tokenize the message because APIs will usually handle this.
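The message list such a method returns typically looks like the sketch below: system prompt first, alternating few-shot user/assistant turns, then the actual query. This is a hypothetical illustration of the chat format, not the lighteval internals:

```python
def build_api_messages(query, fewshot_pairs=(), system_prompt=None):
    # Assemble a chat-style message list as an API backend expects it.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for question, answer in fewshot_pairs:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": query})
    return messages
```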






</div></div>

## Registry[[lighteval.tasks.registry.Registry]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.registry.Registry</name><anchor>lighteval.tasks.registry.Registry</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/registry.py#L110</source><parameters>[{"name": "tasks", "val": ": str | pathlib.Path | None = None"}, {"name": "load_multilingual", "val": ": bool = False"}, {"name": "custom_tasks", "val": ": str | pathlib.Path | module | None = None"}]</parameters></docstring>
The Registry class is used to manage the task registry and get task classes.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_custom_tasks_module</name><anchor>lighteval.tasks.registry.Registry.create_custom_tasks_module</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/registry.py#L281</source><parameters>[{"name": "custom_tasks", "val": ": str | pathlib.Path | module"}]</parameters><paramsdesc>- **custom_tasks** (Optional[Union[str, ModuleType]]) -- Path to the custom tasks file or name of a module to import containing custom tasks or the module itself</paramsdesc><paramgroups>0</paramgroups><rettype>ModuleType</rettype><retdesc>The newly imported/created custom tasks modules</retdesc></docstring>
Creates a custom task module to load tasks defined by the user in their own file.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_all_task_configs</name><anchor>lighteval.tasks.registry.Registry.load_all_task_configs</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/registry.py#L332</source><parameters>[{"name": "custom_tasks", "val": ": str | pathlib.Path | None = None"}, {"name": "load_multilingual", "val": ": bool = False"}]</parameters></docstring>
Load all LightevalTaskConfig objects from all Python files in the tasks/ directory.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>print_all_tasks</name><anchor>lighteval.tasks.registry.Registry.print_all_tasks</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/registry.py#L370</source><parameters>[{"name": "suites", "val": ": str | None = None"}]</parameters><paramsdesc>- **suites** -- Comma-separated list of suites to display. If None, shows core suites only.
  Use 'all' to show all available suites (core + optional).
  Special handling for 'multilingual' suite with dependency checking.</paramsdesc><paramgroups>0</paramgroups></docstring>
Print all the tasks in the task registry.




</div></div>

## Doc[[lighteval.tasks.requests.Doc]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.tasks.requests.Doc</name><anchor>lighteval.tasks.requests.Doc</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/requests.py#L44</source><parameters>[{"name": "query", "val": ": str"}, {"name": "choices", "val": ": list"}, {"name": "gold_index", "val": ": typing.Union[int, list[int]]"}, {"name": "instruction", "val": ": str | None = None"}, {"name": "images", "val": ": list['Image'] | None = None"}, {"name": "specific", "val": ": dict | None = None"}, {"name": "unconditioned_query", "val": ": str | None = None"}, {"name": "original_query", "val": ": str | None = None"}, {"name": "id", "val": ": str = ''"}, {"name": "task_name", "val": ": str = ''"}, {"name": "fewshot_samples", "val": ": list = <factory>"}, {"name": "sampling_methods", "val": ": list = <factory>"}, {"name": "fewshot_sorting_class", "val": ": str | None = None"}, {"name": "generation_size", "val": ": int | None = None"}, {"name": "stop_sequences", "val": ": list[str] | None = None"}, {"name": "use_logits", "val": ": bool = False"}, {"name": "num_samples", "val": ": int = 1"}, {"name": "generation_grammar", "val": ": None = None"}]</parameters><paramsdesc>- **query** (str) --
  The main query, prompt, or question to be sent to the model.

- **choices** (list[str]) --
  List of possible answer choices for the query.
  For multiple choice tasks, this contains all options (A, B, C, D, etc.).
  For generative tasks, this may be empty or contain reference answers.

- **gold_index** (Union[int, list[int]]) --
  Index or indices of the correct answer(s) in the choices list.
  For single correct answers, use an int (e.g., 0 for the first choice).
  For multiple correct answers, use a list (e.g., [0, 2] for first and third).

- **instruction** (str | None) --
  System prompt or task-specific instructions to guide the model.
  This is typically prepended to the query to set context or behavior.

- **images** (list["Image"] | None) --
  List of PIL Image objects for multimodal tasks.

- **specific** (dict | None) --
  Task-specific information or metadata.
  Can contain any additional data needed for evaluation.

- **unconditioned_query** (Optional[str]) --
  Query without task-specific context for PMI normalization.
  Used to calculate: log P(choice | Query) - log P(choice | Unconditioned Query).

- **original_query** (str | None) --
  The query before any preprocessing or modification.

- **#** Set by task parameters --
- **id** (str) --
  Unique identifier for this evaluation instance.
  Set by the task and not the user.

- **task_name** (str) --
  Name of the task or benchmark this Doc belongs to.

- **##** Few-shot Learning Parameters --
- **fewshot_samples** (list) --
  List of Doc objects representing few-shot examples.
  These examples are prepended to the main query to provide context.

- **sampling_methods** (list[SamplingMethod]) --
  List of sampling methods to use for this instance.
  Options: GENERATIVE, LOGPROBS, PERPLEXITY.

- **fewshot_sorting_class** (Optional[str]) --
  Class label for balanced few-shot example selection.
  Used to ensure diverse representation in few-shot examples.

- **##** Generation Control Parameters --
- **generation_size** (int | None) --
  Maximum number of tokens to generate for this instance.

- **stop_sequences** (list[str] | None) --
  List of strings that should stop generation when encountered.
  **Used for**: Controlled generation, preventing unwanted continuations.

- **use_logits** (bool) --
  Whether to return logits (raw model outputs) in addition to text.
  **Used for**: Probability analysis, confidence scoring, detailed evaluation.

- **num_samples** (int) --
  Number of different samples to generate for this instance.
  **Used for**: Diversity analysis, uncertainty estimation, ensemble methods.

- **generation_grammar** (None) --
  Grammar constraints for generation (currently not implemented).
  **Reserved for**: Future structured generation features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Dataclass representing a single evaluation sample for a benchmark.

This class encapsulates all the information needed to evaluate a model on a single
task instance. It contains the input query, expected outputs, metadata, and
configuration parameters for different types of evaluation tasks.

**Required Fields:**
- `query`: The input prompt or question
- `choices`: Available answer choices (for multiple choice tasks)
- `gold_index`: Index(es) of the correct answer(s)

**Optional Fields:**
- `instruction`: Task-specific system prompt, appended to the model-level system prompt.
- `images`: Visual inputs for multimodal tasks.



Methods:
get_golds():
Returns the correct answer(s) as strings based on gold_index.
Handles both single and multiple correct answers.

Usage Examples:

**Multiple Choice Question:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example">

```python
doc = Doc(
    query="What is the capital of France?",
    choices=["London", "Paris", "Berlin", "Madrid"],
    gold_index=1,  # Paris is the correct answer
    instruction="Answer the following geography question:",
)
```

</ExampleCodeBlock>

**Generative Task:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-2">

```python
doc = Doc(
    query="Write a short story about a robot.",
    choices=[],  # No predefined choices for generative tasks
    gold_index=0,  # Not used for generative tasks
    generation_size=100,
    stop_sequences=["\n\nEnd"],
)
```

</ExampleCodeBlock>

**Few-shot Learning:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-3">

```python
doc = Doc(
    query="Translate 'Hello world' to Spanish.",
    choices=["Hola mundo", "Bonjour monde", "Ciao mondo"],
    gold_index=0,
    fewshot_samples=[
        Doc(query="Translate 'Good morning' to Spanish.",
            choices=["Buenos días", "Bonjour", "Buongiorno"],
            gold_index=0),
        Doc(query="Translate 'Thank you' to Spanish.",
            choices=["Gracias", "Merci", "Grazie"],
            gold_index=0)
    ],
)
```

</ExampleCodeBlock>

**Multimodal Task:**
<ExampleCodeBlock anchor="lighteval.tasks.requests.Doc.example-4">

```python
doc = Doc(
    query="What is shown in this image?",
    choices=["A cat"],
    gold_index=0,
    images=[pil_image],  # PIL Image object
)
```

</ExampleCodeBlock>
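**PMI-Normalized Scoring:** the `unconditioned_query` field supports the scoring rule log P(choice | query) - log P(choice | unconditioned query). A minimal sketch of that rule as a hypothetical helper, not a lighteval API:

```python
def pmi_pick(cond_logprobs, uncond_logprobs):
    # Score each choice by its conditional log-prob minus its
    # unconditioned log-prob, and return the best-scoring index.
    scores = [c - u for c, u in zip(cond_logprobs, uncond_logprobs)]
    return max(range(len(scores)), key=scores.__getitem__)
```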



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_golds</name><anchor>lighteval.tasks.requests.Doc.get_golds</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/requests.py#L217</source><parameters>[]</parameters></docstring>
Return the gold target strings selected from `choices` by `gold_index`.

</div></div>

## Datasets[[lighteval.data.DynamicBatchDataset]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.data.DynamicBatchDataset</name><anchor>lighteval.data.DynamicBatchDataset</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L44</source><parameters>[{"name": "requests", "val": ": list"}, {"name": "num_dataset_splits", "val": ": int"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_original_order</name><anchor>lighteval.data.DynamicBatchDataset.get_original_order</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L88</source><parameters>[{"name": "new_arr", "val": ": list"}]</parameters><paramsdesc>- **new_arr** (list) -- Array containing any kind of data that needs to be
  reset in the original order.</paramsdesc><paramgroups>0</paramgroups><rettype>list</rettype><retdesc>new_arr in the original order.</retdesc></docstring>
Get the original order of the data.
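The bookkeeping behind this can be sketched as follows; `original_positions` records where each sorted item lived before sorting. An illustrative sketch, not the actual `DynamicBatchDataset` code:

```python
def restore_original_order(sorted_results, original_positions):
    # Undo a length-sort: original_positions[i] is the index that
    # sorted_results[i] occupied in the original request list.
    restored = [None] * len(sorted_results)
    for position, item in zip(original_positions, sorted_results):
        restored[position] = item
    return restored
```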








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>splits_iterator</name><anchor>lighteval.data.DynamicBatchDataset.splits_iterator</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L110</source><parameters>[]</parameters><yieldtype>Subset</yieldtype><yielddesc>A subset of the dataset.</yielddesc></docstring>
Iterator that yields the dataset splits based on the split limits.
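The split-limit iteration can be sketched like this; the limits are assumed to be cumulative end indices, which may differ from the internal representation:

```python
def splits_iterator(items, split_limits):
    # Yield consecutive subsets bounded by the split limits, e.g.
    # limits [2, 5] over five items yields items[0:2] then items[2:5].
    start = 0
    for end in split_limits:
        yield items[start:end]
        start = end
```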






</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.data.LoglikelihoodDataset</name><anchor>lighteval.data.LoglikelihoodDataset</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L161</source><parameters>[{"name": "requests", "val": ": list"}, {"name": "num_dataset_splits", "val": ": int"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.data.GenerativeTaskDataset</name><anchor>lighteval.data.GenerativeTaskDataset</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L186</source><parameters>[{"name": "requests", "val": ": list"}, {"name": "num_dataset_splits", "val": ": int"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>init_split_limits</name><anchor>lighteval.data.GenerativeTaskDataset.init_split_limits</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L187</source><parameters>[{"name": "num_dataset_splits", "val": ""}]</parameters><paramsdesc>- **num_dataset_splits** (_type_) -- _description_</paramsdesc><paramgroups>0</paramgroups><rettype>_type_</rettype><retdesc>_description_</retdesc></docstring>
Initialises the split limits based on generation parameters.
The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.

For generative tasks, self._sorting_criteria outputs:
- a boolean (whether the generation task uses logits)
- a list (the stop sequences)
- the item length (the actual size sorting factor).

In this function, we create evaluation groups keyed by generation parameters (use of logits and stop sequences), so that samples with similar properties are batched together afterwards.
The samples will then be further organised by length in each split.
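The grouping described above can be sketched as follows, with samples reduced to `(use_logits, stop_sequences, length)` tuples. A simplified illustration, not the actual split-limit computation:

```python
from itertools import groupby

def group_by_generation_params(samples):
    # Group samples sharing (use_logits, stop_sequences), then order each
    # group by item length, longest first.
    key = lambda s: (s[0], s[1])
    ordered = sorted(samples, key=key)
    return [sorted(group, key=lambda s: -s[2])
            for _, group in groupby(ordered, key=key)]
```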








</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.data.GenerativeTaskDatasetNanotron</name><anchor>lighteval.data.GenerativeTaskDatasetNanotron</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L254</source><parameters>[{"name": "requests", "val": ": list"}, {"name": "num_dataset_splits", "val": ": int"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.data.GenDistributedSampler</name><anchor>lighteval.data.GenDistributedSampler</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/data.py#L270</source><parameters>[{"name": "dataset", "val": ": Dataset"}, {"name": "num_replicas", "val": ": typing.Optional[int] = None"}, {"name": "rank", "val": ": typing.Optional[int] = None"}, {"name": "shuffle", "val": ": bool = True"}, {"name": "seed", "val": ": int = 0"}, {"name": "drop_last", "val": ": bool = False"}]</parameters></docstring>
A distributed sampler that copies the last element only when drop_last is False, so that padding in the batches
stays small since our samples are sorted by length.
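The padding idea can be sketched as follows; this is an illustration of the principle, not the sampler's actual index arithmetic:

```python
def pad_for_even_split(indices, num_replicas):
    # Repeat the last index so every replica receives the same count;
    # with length-sorted samples, duplicating the final element keeps
    # the extra work per batch small.
    remainder = len(indices) % num_replicas
    if remainder:
        indices = indices + [indices[-1]] * (num_replicas - remainder)
    return indices
```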


</div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/tasks.mdx" />

### Model Configs
https://huggingface.co/docs/lighteval/main/package_reference/models.md

# Model Configs

The model configs are used to define the model and its parameters. All the parameters can be
set in the `model-args` or in the model yaml file (see example
[here](https://github.com/huggingface/lighteval/blob/main/examples/model_configs/vllm_model_config.yaml)).

### Base model config[[lighteval.models.abstract_model.ModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.abstract_model.ModelConfig</name><anchor>lighteval.models.abstract_model.ModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/abstract_model.py#L41</source><parameters>[{"name": "model_name", "val": ": str = None"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}]</parameters><paramsdesc>- **model_name** (str) --
  The model name or unique id
- **generation_parameters** (GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc. Defaults to empty GenerationParameters.
- **system_prompt** (str | None) --
  Optional system prompt to be used with chat models. This prompt sets the
  behavior and context for the model during evaluation.
- **cache_dir** (str) --
  Directory to cache the model. Defaults to "~/.cache/huggingface/lighteval".</paramsdesc><paramgroups>0</paramgroups></docstring>
Base configuration class for all model types in Lighteval.

This is the foundation class that all specific model configurations inherit from.
It provides common functionality for parsing configuration from files and command-line arguments,
as well as shared attributes that are used by all models like generation parameters and system prompts.



Methods:
from_path(path: str):
Load configuration from a YAML file.
from_args(args: str):
Parse configuration from a command-line argument string.
_parse_args(args: str):
Static method to parse argument strings into configuration dictionaries.

<ExampleCodeBlock anchor="lighteval.models.abstract_model.ModelConfig.example">

Example:
```python
# Load from YAML file
config = ModelConfig.from_path("model_config.yaml")

# Load from command line arguments
config = ModelConfig.from_args("model_name=meta-llama/Llama-3.1-8B-Instruct,system_prompt='You are a helpful assistant.',generation_parameters={temperature=0.7}")

# Direct instantiation
config = ModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    generation_parameters=GenerationParameters(temperature=0.7),
    system_prompt="You are a helpful assistant."
)
```

</ExampleCodeBlock>


</div>

## Local Models

### Transformers Model[[lighteval.models.transformers.transformers_model.TransformersModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.transformers.transformers_model.TransformersModelConfig</name><anchor>lighteval.models.transformers.transformers_model.TransformersModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/transformers/transformers_model.py#L70</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "tokenizer", "val": ": str | None = None"}, {"name": "subfolder", "val": ": str | None = None"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "batch_size", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "max_length", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "model_loading_kwargs", "val": ": dict = <factory>"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "skip_special_tokens", "val": ": bool = True"}, {"name": "model_parallel", "val": ": bool | None = None"}, {"name": "dtype", "val": ": str | None = None"}, {"name": "device", "val": ": typing.Union[int, str] = 'cuda'"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "compile", "val": ": bool = False"}, {"name": "multichoice_continuations_start_space", "val": ": bool | None = None"}, {"name": "pairwise_tokenization", "val": ": bool = False"}, {"name": "continuous_batching", "val": ": bool = False"}, {"name": "override_chat_template", "val": ": bool = 
None"}]</parameters><paramsdesc>- **model_name** (str) --
  HuggingFace Hub model ID or path to a pre-trained model. This corresponds to the
  `pretrained_model_name_or_path` argument in HuggingFace's `from_pretrained` method.
- **tokenizer** (str | None) --
  Optional HuggingFace Hub tokenizer ID. If not specified, uses the same ID as model_name.
  Useful when the tokenizer is different from the model (e.g., for multilingual models).
- **subfolder** (str | None) --
  Subfolder within the model repository. Used when models are stored in subdirectories.
- **revision** (str) --
  Git revision of the model to load. Defaults to "main".
- **batch_size** (PositiveInt | None) --
  Batch size for model inference. If None, will be automatically determined.
- **max_length** (PositiveInt | None) --
  Maximum sequence length for the model. If None, uses model's default.
- **model_loading_kwargs** (dict) --
  Additional keyword arguments passed to `from_pretrained`. Defaults to empty dict.
- **add_special_tokens** (bool) --
  Whether to add special tokens during tokenization. Defaults to True.
- **skip_special_tokens** (bool) --
  Whether the tokenizer should skip special tokens when decoding the generated output. Relevant for reasoning models. Defaults to True.
- **model_parallel** (bool | None) --
  Whether to use model parallelism across multiple GPUs. If None, automatically
  determined based on available GPUs and model size.
- **dtype** (str | None) --
  Data type for model weights. Can be "float16", "bfloat16", "float32", "auto", "4bit", "8bit".
  If "auto", uses the model's default dtype.
- **device** (Union[int, str]) --
  Device to load the model on. Can be "cuda", "cpu", or GPU index. Defaults to "cuda".
- **trust_remote_code** (bool) --
  Whether to trust remote code when loading models. Defaults to False.
- **compile** (bool) --
  Whether to compile the model using torch.compile for optimization. Defaults to False.
- **multichoice_continuations_start_space** (bool | None) --
  Whether to add a space before multiple choice continuations. If None, uses model default.
  True forces adding space, False removes leading space if present.
- **pairwise_tokenization** (bool) --
  Whether to tokenize context and continuation separately or together. Defaults to False.
- **continuous_batching** (bool) --
  Whether to use continuous batching for generation. Defaults to False.
- **override_chat_template** (bool) --
  If True, we force the model to use a chat template. If False, we prevent the model from using
  a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for HuggingFace Transformers models.

This configuration is used to load and configure models from the HuggingFace Transformers library.



<ExampleCodeBlock anchor="lighteval.models.transformers.transformers_model.TransformersModelConfig.example">

Example:
```python
config = TransformersModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    batch_size=4,
    dtype="float16",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>

Note:
This configuration supports quantization (4-bit and 8-bit) through the dtype parameter.
When using quantization, ensure you have the required dependencies installed
(bitsandbytes for 4-bit/8-bit quantization).


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.transformers.adapter_model.requires.<locals>.inner_fn.<locals>.Placeholder</name><anchor>lighteval.models.transformers.adapter_model.requires.<locals>.inner_fn.<locals>.Placeholder</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/transformers/adapter_model.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.transformers.delta_model.DeltaModelConfig</name><anchor>lighteval.models.transformers.delta_model.DeltaModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/transformers/delta_model.py#L38</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "tokenizer", "val": ": str | None = None"}, {"name": "subfolder", "val": ": str | None = None"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "batch_size", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "max_length", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "model_loading_kwargs", "val": ": dict = <factory>"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "skip_special_tokens", "val": ": bool = True"}, {"name": "model_parallel", "val": ": bool | None = None"}, {"name": "dtype", "val": ": str | None = None"}, {"name": "device", "val": ": typing.Union[int, str] = 'cuda'"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "compile", "val": ": bool = False"}, {"name": "multichoice_continuations_start_space", "val": ": bool | None = None"}, {"name": "pairwise_tokenization", "val": ": bool = False"}, {"name": "continuous_batching", "val": ": bool = False"}, {"name": "override_chat_template", "val": ": bool = None"}, {"name": "base_model", "val": ": 
str"}]</parameters><paramsdesc>- **base_model** (str) --
  HuggingFace Hub model ID or path to the base model. This is the original
  pre-trained model that the delta was computed from.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for delta models (weight difference models).

This configuration is used to load models that represent the difference between a
fine-tuned model and its base model. The delta weights are added to the base model
during loading to reconstruct the full fine-tuned model.




</div>

### VLLM Model[[lighteval.models.vllm.vllm_model.VLLMModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.vllm.vllm_model.VLLMModelConfig</name><anchor>lighteval.models.vllm.vllm_model.VLLMModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/vllm/vllm_model.py#L76</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "tokenizer", "val": ": str | None = None"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "dtype", "val": ": str = 'bfloat16'"}, {"name": "tensor_parallel_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 1"}, {"name": "data_parallel_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 1"}, {"name": "pipeline_parallel_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 1"}, {"name": "gpu_memory_utilization", "val": ": typing.Annotated[float, Ge(ge=0)] = 0.9"}, {"name": "enable_prefix_caching", "val": ": bool = None"}, {"name": "max_model_length", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "quantization", "val": ": str | None = None"}, {"name": "load_format", "val": ": str | None = None"}, {"name": "swap_space", "val": ": typing.Annotated[int, Gt(gt=0)] = 4"}, {"name": "seed", "val": ": typing.Annotated[int, Ge(ge=0)] = 1234"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "multichoice_continuations_start_space", "val": ": bool = True"}, {"name": "pairwise_tokenization", "val": ": bool = 
False"}, {"name": "max_num_seqs", "val": ": typing.Annotated[int, Gt(gt=0)] = 128"}, {"name": "max_num_batched_tokens", "val": ": typing.Annotated[int, Gt(gt=0)] = 2048"}, {"name": "subfolder", "val": ": str | None = None"}, {"name": "is_async", "val": ": bool = False"}, {"name": "override_chat_template", "val": ": bool = None"}]</parameters><paramsdesc>- **model_name** (str) --
  HuggingFace Hub model ID or path to the model to load.
- **tokenizer** (str | None) --
  HuggingFace Hub model ID or path to the tokenizer to load.
- **revision** (str) --
  Git revision of the model. Defaults to "main".
- **dtype** (str) --
  Data type for model weights. Defaults to "bfloat16". Options: "float16", "bfloat16", "float32".
- **tensor_parallel_size** (PositiveInt) --
  Number of GPUs to use for tensor parallelism. Defaults to 1.
- **data_parallel_size** (PositiveInt) --
  Number of GPUs to use for data parallelism. Defaults to 1.
- **pipeline_parallel_size** (PositiveInt) --
  Number of GPUs to use for pipeline parallelism. Defaults to 1.
- **gpu_memory_utilization** (NonNegativeFloat) --
  Fraction of GPU memory to use. Lower this if running out of memory. Defaults to 0.9.
- **enable_prefix_caching** (bool) --
  Whether to enable prefix caching to speed up generation. May use more memory. Should be disabled for LFM2. Defaults to True.
- **max_model_length** (PositiveInt | None) --
  Maximum sequence length for the model. If None, automatically inferred.
  Reduce this if encountering OOM issues (4096 is usually sufficient).
- **quantization** (str | None) --
  Quantization method.
- **load_format** (str | None) --
  The format of the model weights to load. Options: "auto", "pt", "safetensors", "npcache", "dummy", "tensorizer", "sharded_state", "gguf", "bitsandbytes", "mistral", "runai_streamer".
- **swap_space** (PositiveInt) --
  CPU swap space size in GiB per GPU. Defaults to 4.
- **seed** (NonNegativeInt) --
  Random seed for reproducibility. Defaults to 1234.
- **trust_remote_code** (bool) --
  Whether to trust remote code when loading models. Defaults to False.
- **add_special_tokens** (bool) --
  Whether to add special tokens during tokenization. Defaults to True.
- **multichoice_continuations_start_space** (bool) --
  Whether to add a space before multiple choice continuations. Defaults to True.
- **pairwise_tokenization** (bool) --
  Whether to tokenize context and continuation separately for loglikelihood evals. Defaults to False.
- **max_num_seqs** (PositiveInt) --
  Maximum number of sequences per iteration. Controls batch size at prefill stage. Defaults to 128.
- **max_num_batched_tokens** (PositiveInt) --
  Maximum number of tokens per batch. Defaults to 2048.
- **subfolder** (str | None) --
  Subfolder within the model repository. Defaults to None.
- **is_async** (bool) --
  Whether to use the async version of VLLM. Defaults to False.
- **override_chat_template** (bool) --
  If True, we force the model to use a chat template. If False, we prevent the model from using
  a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for VLLM inference engine.

This configuration is used to load and configure models using the VLLM inference engine,
which provides high-performance inference for large language models with features like
PagedAttention, continuous batching, and efficient memory management.

vllm doc: https://docs.vllm.ai/en/v0.7.1/serving/engine_args.html



<ExampleCodeBlock anchor="lighteval.models.vllm.vllm_model.VLLMModelConfig.example">

Example:
```python
config = VLLMModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    tensor_parallel_size=2,
    gpu_memory_utilization=0.8,
    max_model_length=4096,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>
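
The `max_num_seqs` and `max_num_batched_tokens` settings jointly cap how much work the engine schedules per prefill step. A minimal sketch of that scheduling rule in plain Python (illustrative only, not vLLM's actual scheduler):

```python
def schedule_batch(prompt_lengths, max_num_seqs=128, max_num_batched_tokens=2048):
    """Greedily pick prompts for one prefill step under both caps."""
    batch, token_budget = [], max_num_batched_tokens
    for i, n_tokens in enumerate(prompt_lengths):
        if len(batch) == max_num_seqs or n_tokens > token_budget:
            break  # a cap was reached: remaining prompts wait for the next step
        batch.append(i)
        token_budget -= n_tokens
    return batch

# With the defaults, two 800-token prompts fit, but a third would exceed 2048:
print(schedule_batch([800, 800, 800, 800]))  # → [0, 1]
```

Raising `max_num_batched_tokens` lets more (or longer) prompts into each step at the cost of peak memory, which is why it interacts with `gpu_memory_utilization`.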


</div>

### SGLang Model[[lighteval.models.sglang.sglang_model.SGLangModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.sglang.sglang_model.SGLangModelConfig</name><anchor>lighteval.models.sglang.sglang_model.SGLangModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/sglang/sglang_model.py#L54</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "load_format", "val": ": str = 'auto'"}, {"name": "dtype", "val": ": str = 'auto'"}, {"name": "tp_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 1"}, {"name": "dp_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 1"}, {"name": "context_length", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None"}, {"name": "random_seed", "val": ": typing.Optional[typing.Annotated[int, Gt(gt=0)]] = 1234"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "device", "val": ": str = 'cuda'"}, {"name": "skip_tokenizer_init", "val": ": bool = False"}, {"name": "kv_cache_dtype", "val": ": str = 'auto'"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "pairwise_tokenization", "val": ": bool = False"}, {"name": "sampling_backend", "val": ": str | None = None"}, {"name": "attention_backend", "val": ": str | None = None"}, {"name": "mem_fraction_static", "val": ": typing.Annotated[float, Gt(gt=0)] = 0.8"}, {"name": "chunked_prefill_size", "val": ": typing.Annotated[int, Gt(gt=0)] = 4096"}, {"name": "override_chat_template", "val": ": bool = 
None"}]</parameters><paramsdesc>- **model_name** (str) --
  HuggingFace Hub model ID or path to the model to load.
- **load_format** (str) --
  The format of the model weights to load. Options: "auto", "pt", "safetensors", "npcache", "dummy", "tensorizer", "sharded_state", "gguf", "bitsandbytes", "mistral", "runai_streamer".
- **dtype** (str) --
  Data type for model weights. Defaults to "auto". Options: "auto", "float16", "bfloat16", "float32".
- **tp_size** (PositiveInt) --
  Number of GPUs to use for tensor parallelism. Defaults to 1.
- **dp_size** (PositiveInt) --
  Number of GPUs to use for data parallelism. Defaults to 1.
- **context_length** (PositiveInt | None) --
  Maximum context length for the model.
- **random_seed** (PositiveInt | None) --
  Random seed for reproducibility. Defaults to 1234.
- **trust_remote_code** (bool) --
  Whether to trust remote code when loading models. Defaults to False.
- **device** (str) --
  Device to load the model on. Defaults to "cuda".
- **skip_tokenizer_init** (bool) --
  Whether to skip tokenizer initialization. Defaults to False.
- **kv_cache_dtype** (str) --
  Data type for key-value cache. Defaults to "auto".
- **add_special_tokens** (bool) --
  Whether to add special tokens during tokenization. Defaults to True.
- **pairwise_tokenization** (bool) --
  Whether to tokenize context and continuation separately for loglikelihood evals. Defaults to False.
- **sampling_backend** (str | None) --
  Sampling backend to use. If None, uses default.
- **attention_backend** (str | None) --
  Attention backend to use. If None, uses default.
- **mem_fraction_static** (PositiveFloat) --
  Fraction of GPU memory to use for static allocation. Defaults to 0.8.
- **chunked_prefill_size** (PositiveInt) --
  Size of chunks for prefill operations. Defaults to 4096.
- **override_chat_template** (bool) --
  If True, we force the model to use a chat template. If False, we prevent the model from using
  a chat template. If None, we use the default (True if a chat template is present in the tokenizer, False otherwise).
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for SGLang inference engine.

This configuration is used to load and configure models using the SGLang inference engine,
which provides high-performance inference.

sglang doc: https://docs.sglang.ai/index.html#



<ExampleCodeBlock anchor="lighteval.models.sglang.sglang_model.SGLangModelConfig.example">

Example:
```python
config = SGLangModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    tp_size=2,
    context_length=8192,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>


</div>

### Dummy Model[[lighteval.models.dummy.dummy_model.DummyModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.dummy.dummy_model.DummyModelConfig</name><anchor>lighteval.models.dummy.dummy_model.DummyModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/dummy/dummy_model.py#L35</source><parameters>[{"name": "model_name", "val": ": str = 'dummy'"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "seed", "val": ": int = 42"}]</parameters><paramsdesc>- **model_name** (str) --
  Name of your choice. Defaults to "dummy".
- **seed** (int) --
  Random seed for reproducible dummy responses. Defaults to 42.
  This seed controls the randomness of the generated responses and log probabilities.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for dummy models used for testing and baselines.

This configuration is used to create dummy models that generate random responses
or baselines for evaluation purposes. Useful for testing evaluation pipelines
without requiring actual model inference.



<ExampleCodeBlock anchor="lighteval.models.dummy.dummy_model.DummyModelConfig.example">

Example:
```python
config = DummyModelConfig(
    model_name="my_dummy",
    seed=123,
)
```

</ExampleCodeBlock>
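
The `seed` plays the usual role: the same seed yields the same pseudo-random responses across runs, so dummy baselines are comparable between evaluations. A tiny illustration of that property using Python's stdlib RNG (not lighteval's actual implementation):

```python
import random

def dummy_logprobs(seed: int, n: int) -> list[float]:
    """Generate n reproducible pseudo-random 'log probabilities'."""
    rng = random.Random(seed)  # dedicated RNG, unaffected by global random state
    return [rng.uniform(-10.0, 0.0) for _ in range(n)]

# Same seed, same values; a different seed gives (almost surely) different ones.
assert dummy_logprobs(123, 5) == dummy_logprobs(123, 5)
assert dummy_logprobs(123, 5) != dummy_logprobs(42, 5)
```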


</div>

## Endpoints-based Models

### Inference Providers Model[[lighteval.models.endpoints.inference_providers_model.InferenceProvidersModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.endpoints.inference_providers_model.InferenceProvidersModelConfig</name><anchor>lighteval.models.endpoints.inference_providers_model.InferenceProvidersModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/endpoints/inference_providers_model.py#L45</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "provider", "val": ": str"}, {"name": "timeout", "val": ": int | None = None"}, {"name": "proxies", "val": ": typing.Optional[typing.Any] = None"}, {"name": "org_to_bill", "val": ": str | None = None"}, {"name": "parallel_calls_count", "val": ": typing.Annotated[int, Ge(ge=0)] = 10"}]</parameters><paramsdesc>- **model_name** (str) --
  Name or identifier of the model to use.
- **provider** (str) --
  Name of the inference provider. Examples: "together", "anyscale", "runpod", etc.
- **timeout** (int | None) --
  Request timeout in seconds. If None, uses provider default.
- **proxies** (Any | None) --
  Proxy configuration for requests. Can be a dict or proxy URL string.
- **org_to_bill** (str | None) --
  Organization to bill for API usage. If None, bills the user's account.
- **parallel_calls_count** (NonNegativeInt) --
  Number of parallel API calls to make. Defaults to 10.
  Higher values increase throughput but may hit rate limits.
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for HuggingFace's inference providers (like Together AI, Anyscale, etc.).

inference providers doc: https://huggingface.co/docs/inference-providers/en/index



<ExampleCodeBlock anchor="lighteval.models.endpoints.inference_providers_model.InferenceProvidersModelConfig.example">

Example:
```python
config = InferenceProvidersModelConfig(
    model_name="deepseek-ai/DeepSeek-R1-0528",
    provider="together",
    parallel_calls_count=5,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>

Note:
- Requires an HF API key to be set as an environment variable
- Different providers have different rate limits and pricing
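
The trade-off behind `parallel_calls_count` is a standard concurrency cap: more in-flight requests raise throughput until the provider's rate limit pushes back. A minimal sketch of the pattern with an asyncio semaphore (illustrative, not lighteval's client code; `call_provider` is a hypothetical stand-in for a real API call):

```python
import asyncio

async def call_provider(i: int) -> str:
    await asyncio.sleep(0.01)  # stand-in for a real HTTP request
    return f"response-{i}"

async def run_all(n_requests: int, parallel_calls_count: int) -> list[str]:
    sem = asyncio.Semaphore(parallel_calls_count)  # cap concurrent requests

    async def bounded(i: int) -> str:
        async with sem:  # at most parallel_calls_count calls run at once
            return await call_provider(i)

    return await asyncio.gather(*(bounded(i) for i in range(n_requests)))

results = asyncio.run(run_all(20, parallel_calls_count=5))
```

Lowering the cap trades wall-clock time for fewer 429 errors; the right value depends on the provider's per-account limits.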


</div>

### InferenceEndpointModel[[lighteval.models.endpoints.endpoint_model.InferenceEndpointModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.endpoints.endpoint_model.InferenceEndpointModelConfig</name><anchor>lighteval.models.endpoints.endpoint_model.InferenceEndpointModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/endpoints/endpoint_model.py#L108</source><parameters>[{"name": "model_name", "val": ": str | None = None"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "endpoint_name", "val": ": str | None = None"}, {"name": "reuse_existing", "val": ": bool = False"}, {"name": "accelerator", "val": ": str = 'gpu'"}, {"name": "dtype", "val": ": str | None = None"}, {"name": "vendor", "val": ": str = 'aws'"}, {"name": "region", "val": ": str = 'us-east-1'"}, {"name": "instance_size", "val": ": str | None = None"}, {"name": "instance_type", "val": ": str | None = None"}, {"name": "framework", "val": ": str = 'pytorch'"}, {"name": "endpoint_type", "val": ": str = 'protected'"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "namespace", "val": ": str | None = None"}, {"name": "image_url", "val": ": str | None = None"}, {"name": "env_vars", "val": ": dict | None = None"}, {"name": "batch_size", "val": ": int = 1"}]</parameters><paramsdesc>- **endpoint_name** (str | None) --
  Name for the inference endpoint. If None, auto-generated from model_name.
- **model_name** (str | None) --
  HuggingFace Hub model ID to deploy. Required if endpoint_name is None.
- **reuse_existing** (bool) --
  Whether to reuse an existing endpoint with the same name. Defaults to False.
- **accelerator** (str) --
  Type of accelerator to use. Defaults to "gpu". Options: "gpu", "cpu".
- **dtype** (str | None) --
  Model data type. If None, uses model default. Options: "float16", "bfloat16", "awq", "gptq", "8bit", "4bit".
- **vendor** (str) --
  Cloud vendor for the endpoint. Defaults to "aws". Options: "aws", "azure", "gcp".
- **region** (str) --
  Cloud region for the endpoint. Defaults to "us-east-1".
- **instance_size** (str | None) --
  Instance size for the endpoint. If None, auto-scaled.
- **instance_type** (str | None) --
  Instance type for the endpoint. If None, auto-scaled.
- **framework** (str) --
  ML framework to use. Defaults to "pytorch".
- **endpoint_type** (str) --
  Type of endpoint. Defaults to "protected". Options: "protected", "public".
- **add_special_tokens** (bool) --
  Whether to add special tokens during tokenization. Defaults to True.
- **revision** (str) --
  Git revision of the model. Defaults to "main".
- **namespace** (str | None) --
  Namespace for the endpoint. If None, uses current user's namespace.
- **image_url** (str | None) --
  Custom Docker image URL. If None, uses default TGI image.
- **env_vars** (dict | None) --
  Additional environment variables for the endpoint.
- **batch_size** (int) --
  Batch size for requests. Defaults to 1.
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for HuggingFace Inference Endpoints (dedicated infrastructure).

This configuration is used to create and manage dedicated inference endpoints
on HuggingFace's infrastructure. These endpoints provide dedicated compute
resources and can handle larger batch sizes and higher throughput.



Methods:
- `model_post_init()`: Validates configuration and ensures proper parameter combinations.
- `get_dtype_args()`: Returns environment variables for dtype configuration.
- `get_custom_env_vars()`: Returns custom environment variables for the endpoint.

<ExampleCodeBlock anchor="lighteval.models.endpoints.endpoint_model.InferenceEndpointModelConfig.example">

Example:
```python
config = InferenceEndpointModelConfig(
    model_name="microsoft/DialoGPT-medium",
    instance_type="nvidia-a100",
    instance_size="x1",
    vendor="aws",
    region="us-east-1",
    dtype="float16",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>

Note:
- Creates dedicated infrastructure for model inference
- Supports various quantization methods and hardware configurations
- Auto-scaling available for optimal resource utilization
- Requires HuggingFace Pro subscription for most features
- Endpoints can take several minutes to start up
- Billed based on compute usage and duration


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.endpoints.endpoint_model.ServerlessEndpointModelConfig</name><anchor>lighteval.models.endpoints.endpoint_model.ServerlessEndpointModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/endpoints/endpoint_model.py#L71</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "add_special_tokens", "val": ": bool = True"}, {"name": "batch_size", "val": ": int = 1"}]</parameters><paramsdesc>- **model_name** (str) --
  HuggingFace Hub model ID to use with the Inference API.
  Example: "meta-llama/Llama-3.1-8B-Instruct"
- **add_special_tokens** (bool) --
  Whether to add special tokens during tokenization. Defaults to True.
- **batch_size** (int) --
  Batch size for requests. Defaults to 1 (serverless API limitation).
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for HuggingFace's serverless Inference API.

https://huggingface.co/inference-endpoints/dedicated



<ExampleCodeBlock anchor="lighteval.models.endpoints.endpoint_model.ServerlessEndpointModelConfig.example">

Example:
```python
config = ServerlessEndpointModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>


</div>

### TGI ModelClient[[lighteval.models.endpoints.tgi_model.TGIModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.endpoints.tgi_model.TGIModelConfig</name><anchor>lighteval.models.endpoints.tgi_model.TGIModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/endpoints/tgi_model.py#L55</source><parameters>[{"name": "model_name", "val": ": str | None"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "inference_server_address", "val": ": str | None = None"}, {"name": "inference_server_auth", "val": ": str | None = None"}, {"name": "model_info", "val": ": dict | None = None"}, {"name": "batch_size", "val": ": int = 1"}]</parameters><paramsdesc>- **inference_server_address** (str | None) --
  Address of the TGI server. Format: "http://host:port" or "https://host:port".
  Example: "http://localhost:8080"
- **inference_server_auth** (str | None) --
  Authentication token for the TGI server. If None, no authentication is used.
- **model_name** (str | None) --
  Optional model name override. If None, uses the model name from server info.
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for Text Generation Inference (TGI) backend.

doc: https://huggingface.co/docs/text-generation-inference/en/index

This configuration is used to connect to TGI servers that serve HuggingFace models
using the text-generation-inference library. TGI provides high-performance inference
with features like continuous batching and efficient memory management.



<ExampleCodeBlock anchor="lighteval.models.endpoints.tgi_model.TGIModelConfig.example">

Example:
```python
config = TGIModelConfig(
    inference_server_address="http://localhost:8080",
    inference_server_auth="your-auth-token",
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>


</div>

### Litellm Model[[lighteval.models.endpoints.litellm_model.LiteLLMModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.endpoints.litellm_model.LiteLLMModelConfig</name><anchor>lighteval.models.endpoints.litellm_model.LiteLLMModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/endpoints/litellm_model.py#L61</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "provider", "val": ": str | None = None"}, {"name": "base_url", "val": ": str | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "concurrent_requests", "val": ": int = 10"}, {"name": "verbose", "val": ": bool = False"}, {"name": "max_model_length", "val": ": int | None = None"}, {"name": "api_max_retry", "val": ": int = 8"}, {"name": "api_retry_sleep", "val": ": float = 1.0"}, {"name": "api_retry_multiplier", "val": ": float = 2.0"}, {"name": "timeout", "val": ": float | None = None"}]</parameters><paramsdesc>- **model_name** (str) --
  Model identifier. Can include provider prefix (e.g., "gpt-4", "claude-3-sonnet")
  or use provider/model format (e.g., "openai/gpt-4", "anthropic/claude-3-sonnet").
- **provider** (str | None) --
  Optional provider name override. If None, inferred from model_name.
  Examples: "openai", "anthropic", "google", "cohere", etc.
- **base_url** (str | None) --
  Custom base URL for the API. If None, uses provider's default URL.
  Useful for using custom endpoints or local deployments.
- **api_key** (str | None) --
  API key for authentication. If None, reads from environment variables.
  Environment variable names are provider-specific (e.g., OPENAI_API_KEY).
- **concurrent_requests** (int) --
  Maximum number of concurrent API requests to execute in parallel.
  Higher values can improve throughput for batch processing but may hit rate limits
  or exhaust API quotas faster. Default is 10.
- **verbose** (bool) --
  Whether to enable verbose logging. Default is False.
- **max_model_length** (int | None) --
  Maximum context length for the model. If None, infers the model's default max length.
- **api_max_retry** (int) --
  Maximum number of retries for API requests. Default is 8.
- **api_retry_sleep** (float) --
  Initial sleep time (in seconds) between retries. Default is 1.0.
- **api_retry_multiplier** (float) --
  Multiplier for increasing sleep time between retries. Default is 2.0.
- **timeout** (float | None) --
  Request timeout in seconds. Defaults to None (no timeout).
- **generation_parameters** (GenerationParameters, optional, defaults to empty GenerationParameters) --
  Configuration parameters that control text generation behavior, including
  temperature, top_p, max_new_tokens, etc.
- **system_prompt** (str | None, optional, defaults to None) -- Optional system prompt to be used with chat models.
  This prompt sets the behavior and context for the model during evaluation.
- **cache_dir** (str, optional, defaults to "~/.cache/huggingface/lighteval") -- Directory to cache the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for LiteLLM unified API client.

This configuration is used to connect to various LLM providers through the LiteLLM
unified API. LiteLLM provides a consistent interface to multiple providers including
OpenAI, Anthropic, Google, and many others.

litellm doc: https://docs.litellm.ai/docs/



<ExampleCodeBlock anchor="lighteval.models.endpoints.litellm_model.LiteLLMModelConfig.example">

Example:
```python
config = LiteLLMModelConfig(
    model_name="gpt-4",
    provider="openai",
    base_url="https://api.openai.com/v1",
    concurrent_requests=5,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
```

</ExampleCodeBlock>
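
The `api_max_retry`, `api_retry_sleep`, and `api_retry_multiplier` parameters describe a standard exponential-backoff policy. A sketch of the wait schedule those defaults imply (illustrative only, not lighteval's retry code):

```python
def retry_schedule(api_max_retry=8, api_retry_sleep=1.0, api_retry_multiplier=2.0):
    """Sleep durations (seconds) before each successive retry attempt."""
    waits, sleep = [], api_retry_sleep
    for _ in range(api_max_retry):
        waits.append(sleep)
        sleep *= api_retry_multiplier  # grow the wait after each failure
    return waits

print(retry_schedule())  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
```

With the defaults, a request that keeps failing is abandoned after roughly four minutes of cumulative waiting, which is worth keeping in mind when tuning `concurrent_requests` against strict rate limits.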


</div>

## Custom Model[[lighteval.models.custom.custom_model.CustomModelConfig]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.models.custom.custom_model.CustomModelConfig</name><anchor>lighteval.models.custom.custom_model.CustomModelConfig</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/custom/custom_model.py#L26</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "generation_parameters", "val": ": GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None)"}, {"name": "system_prompt", "val": ": str | None = None"}, {"name": "cache_dir", "val": ": str = '~/.cache/huggingface/lighteval'"}, {"name": "model_definition_file_path", "val": ": str"}]</parameters><paramsdesc>- **model_name** (str) --
  An identifier for the model. This can be used to track which model was evaluated
  in the results and logs.
- **model_definition_file_path** (str) --
  Path to a Python file containing the custom model implementation. This file must
  define exactly one class that inherits from LightevalModel. The class should
  implement all required methods from the LightevalModel interface.</paramsdesc><paramgroups>0</paramgroups></docstring>
Configuration class for loading custom model implementations in Lighteval.

This config allows users to define and load their own model implementations by specifying
a Python file containing a custom model class that inherits from LightevalModel.

The custom model file should contain exactly one class that inherits from LightevalModel.
This class will be automatically detected and instantiated when loading the model.



<ExampleCodeBlock anchor="lighteval.models.custom.custom_model.CustomModelConfig.example">

Example usage:
```python
# Define config
config = CustomModelConfig(
    model_name="my-custom-model",
    model_definition_file_path="path/to/my_model.py"
)

# Example custom model file (my_model.py):
from lighteval.models.abstract_model import LightevalModel
from lighteval.models.model_output import ModelResponse
from lighteval.tasks.requests import Doc

class MyCustomModel(LightevalModel):
    def __init__(self, config, env_config):
        super().__init__(config, env_config)
        # Custom initialization...

    def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
        # Custom generation logic...
        pass

    def loglikelihood(self, docs: list[Doc]) -> list[ModelResponse]:
        pass
```

</ExampleCodeBlock>

An example of a custom model can be found in `examples/custom_models/google_translate_model.py`.


</div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/models.mdx" />

### Metrics
https://huggingface.co/docs/lighteval/main/package_reference/metrics.md

# Metrics

## Metrics
[//]: # (TODO: aenum.Enum raises error when generating docs: not supported by inspect.signature. See: https://github.com/ethanfurman/aenum/issues/44)
[//]: # (### Metrics)
[//]: # ([[autodoc]] metrics.metrics.Metrics)
### Metric[[lighteval.metrics.Metric]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.Metric</name><anchor>lighteval.metrics.Metric</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L33</source><parameters>[{"name": "metric_name", "val": ": str"}, {"name": "higher_is_better", "val": ": bool"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": typing.Union[lighteval.metrics.metrics_corpus.CorpusLevelComputation, typing.Callable]"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>


</div>

### CorpusLevelMetric[[lighteval.metrics.utils.metric_utils.CorpusLevelMetric]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.metric_utils.CorpusLevelMetric</name><anchor>lighteval.metrics.utils.metric_utils.CorpusLevelMetric</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L117</source><parameters>[{"name": "metric_name", "val": ": str"}, {"name": "higher_is_better", "val": ": bool"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": typing.Union[lighteval.metrics.metrics_corpus.CorpusLevelComputation, typing.Callable]"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>
Metric computed over the whole corpus, with computation happening at the aggregation phase

</div>

### SampleLevelMetric[[lighteval.metrics.utils.metric_utils.SampleLevelMetric]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.metric_utils.SampleLevelMetric</name><anchor>lighteval.metrics.utils.metric_utils.SampleLevelMetric</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L124</source><parameters>[{"name": "metric_name", "val": ": str"}, {"name": "higher_is_better", "val": ": bool"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": typing.Union[lighteval.metrics.metrics_corpus.CorpusLevelComputation, typing.Callable]"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>
Metric computed per sample, then aggregated over the corpus

</div>

### MetricGrouping[[lighteval.metrics.utils.metric_utils.MetricGrouping]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.metric_utils.MetricGrouping</name><anchor>lighteval.metrics.utils.metric_utils.MetricGrouping</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L106</source><parameters>[{"name": "metric_name", "val": ": list"}, {"name": "higher_is_better", "val": ": dict"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": dict"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>
Some metrics are cheaper to compute together.
For example, when a costly preprocessing step is shared by several metrics, it makes sense to run it only once for all of them.


</div>

### CorpusLevelMetricGrouping[[lighteval.metrics.utils.metric_utils.CorpusLevelMetricGrouping]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.metric_utils.CorpusLevelMetricGrouping</name><anchor>lighteval.metrics.utils.metric_utils.CorpusLevelMetricGrouping</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L131</source><parameters>[{"name": "metric_name", "val": ": list"}, {"name": "higher_is_better", "val": ": dict"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": dict"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>
MetricGrouping computed over the whole corpus, with computation happening at the aggregation phase

</div>

### SampleLevelMetricGrouping[[lighteval.metrics.utils.metric_utils.SampleLevelMetricGrouping]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.metric_utils.SampleLevelMetricGrouping</name><anchor>lighteval.metrics.utils.metric_utils.SampleLevelMetricGrouping</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/metric_utils.py#L138</source><parameters>[{"name": "metric_name", "val": ": list"}, {"name": "higher_is_better", "val": ": dict"}, {"name": "category", "val": ": SamplingMethod"}, {"name": "sample_level_fn", "val": ": lighteval.metrics.metrics_sample.SampleLevelComputation | lighteval.metrics.sample_preparator.Preparator"}, {"name": "corpus_level_fn", "val": ": dict"}, {"name": "batched_compute", "val": ": bool = False"}]</parameters></docstring>
MetricGrouping computed per sample, then aggregated over the corpus

</div>

## Corpus Metrics
### CorpusLevelF1Score[[lighteval.metrics.metrics_corpus.CorpusLevelF1Score]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_corpus.CorpusLevelF1Score</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelF1Score</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L81</source><parameters>[{"name": "average", "val": ": str"}, {"name": "num_classes", "val": ": int = 2"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_corpus</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelF1Score.compute_corpus</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L96</source><parameters>[{"name": "items", "val": ": list"}]</parameters></docstring>
Computes the metric score over all the generated items of the corpus, using the scikit-learn implementation.

</div></div>

### CorpusLevelPerplexityMetric[[lighteval.metrics.metrics_corpus.CorpusLevelPerplexityMetric]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_corpus.CorpusLevelPerplexityMetric</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelPerplexityMetric</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L164</source><parameters>[{"name": "metric_type", "val": ": str"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_corpus</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelPerplexityMetric.compute_corpus</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L182</source><parameters>[{"name": "items", "val": ": list"}]</parameters></docstring>
Computes the metric score over all the generated items of the corpus.

</div></div>
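
The aggregation this metric performs can be sketched in plain Python. This is an illustrative sketch, not lighteval's implementation; the real class's `metric_type` determines which unit the weights count (e.g. tokens, words, or bytes), and the function name here is hypothetical:

```python
import math

def corpus_perplexity(logprobs: list[float], weights: list[int]) -> float:
    """Weighted corpus perplexity: exp(-sum(logprobs) / sum(weights)).

    `weights` holds the unit count (tokens, words, ...) of each item,
    so longer items contribute proportionally to the aggregate.
    """
    return math.exp(-sum(logprobs) / sum(weights))

# Two items with summed log-prob -6.0 over 3 total units
print(corpus_perplexity([-2.0, -4.0], [1, 2]))  # exp(2.0) ≈ 7.389
```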

### CorpusLevelTranslationMetric[[lighteval.metrics.metrics_corpus.CorpusLevelTranslationMetric]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_corpus.CorpusLevelTranslationMetric</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelTranslationMetric</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L116</source><parameters>[{"name": "metric_type", "val": ": str"}, {"name": "lang", "val": ": typing.Literal['zh', 'ja', 'ko', ''] = ''"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_corpus</name><anchor>lighteval.metrics.metrics_corpus.CorpusLevelTranslationMetric.compute_corpus</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L142</source><parameters>[{"name": "items", "val": ": list"}]</parameters></docstring>
Computes the metric score over all the generated items of the corpus, using the sacrebleu implementation.

</div></div>

### MatthewsCorrCoef[[lighteval.metrics.metrics_corpus.MatthewsCorrCoef]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_corpus.MatthewsCorrCoef</name><anchor>lighteval.metrics.metrics_corpus.MatthewsCorrCoef</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L66</source><parameters>[]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_corpus</name><anchor>lighteval.metrics.metrics_corpus.MatthewsCorrCoef.compute_corpus</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_corpus.py#L67</source><parameters>[{"name": "items", "val": ": list"}]</parameters><paramsdesc>- **items** (list[dict]) -- List of GenerativeCorpusMetricInput</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Score</retdesc></docstring>
Computes the Matthews Correlation Coefficient, using scikit-learn ([doc](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)).








</div></div>
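
For intuition, the binary-case coefficient that scikit-learn computes here can be written out from the confusion counts (an illustrative sketch under the binary assumption, not lighteval's code):

```python
import math

def matthews_corrcoef(golds: list[int], preds: list[int]) -> float:
    """Binary Matthews correlation coefficient from confusion counts.

    Returns 1.0 for perfect agreement, -1.0 for perfect disagreement,
    and 0.0 when any marginal count is zero (degenerate case).
    """
    tp = sum(1 for g, p in zip(golds, preds) if g == 1 and p == 1)
    tn = sum(1 for g, p in zip(golds, preds) if g == 0 and p == 0)
    fp = sum(1 for g, p in zip(golds, preds) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(golds, preds) if g == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```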

## Sample Metrics
### ExactMatches[[lighteval.metrics.metrics_sample.ExactMatches]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.ExactMatches</name><anchor>lighteval.metrics.metrics_sample.ExactMatches</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L81</source><parameters>[{"name": "aggregation_function", "val": ": typing.Callable[[list[float]], float] = <built-in function max>"}, {"name": "normalize_gold", "val": ": typing.Optional[typing.Callable[[str], str]] = None"}, {"name": "normalize_pred", "val": ": typing.Optional[typing.Callable[[str], str]] = None"}, {"name": "strip_strings", "val": ": bool = False"}, {"name": "type_exact_match", "val": ": str = 'full'"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.ExactMatches.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L118</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Aggregated score over the current sample's items.</retdesc></docstring>
Computes the metric over a list of golds and predictions for a single sample.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_one_item</name><anchor>lighteval.metrics.metrics_sample.ExactMatches.compute_one_item</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L137</source><parameters>[{"name": "gold", "val": ": str"}, {"name": "pred", "val": ": str"}]</parameters><paramsdesc>- **gold** (str) -- One of the possible references
- **pred** (str) -- One of the possible predictions</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>The exact match score. Will be 1 for a match, 0 otherwise.</retdesc></docstring>
Compares two strings only.








</div></div>
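
The core of the computation — normalize both sides, score every (gold, prediction) pair, then aggregate — can be sketched as follows (a simplified illustration; the real class also supports prefix/suffix matching via `type_exact_match`, and the function name is hypothetical):

```python
def exact_match(
    golds: list[str],
    preds: list[str],
    normalize=str.strip,      # stands in for strip_strings / normalizers
    aggregation=max,          # default aggregation_function
) -> float:
    """Aggregated exact-match score over all (gold, pred) pairs."""
    scores = [
        float(normalize(g) == normalize(p))
        for g in golds
        for p in preds
    ]
    return aggregation(scores)

print(exact_match(["Paris", "paris "], ["paris"]))  # 1.0 (second gold matches)
```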

### F1_score[[lighteval.metrics.metrics_sample.F1_score]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.F1_score</name><anchor>lighteval.metrics.metrics_sample.F1_score</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L170</source><parameters>[{"name": "aggregation_function", "val": ": typing.Callable[[list[float]], float] = <built-in function max>"}, {"name": "normalize_gold", "val": ": typing.Optional[typing.Callable[[str], str]] = None"}, {"name": "normalize_pred", "val": ": typing.Optional[typing.Callable[[str], str]] = None"}, {"name": "strip_strings", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.F1_score.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L197</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Aggregated score over the current sample's items.</retdesc></docstring>
Computes the metric over a list of golds and predictions for a single sample.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_one_item</name><anchor>lighteval.metrics.metrics_sample.F1_score.compute_one_item</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L217</source><parameters>[{"name": "gold", "val": ": str"}, {"name": "pred", "val": ": str"}]</parameters><paramsdesc>- **gold** (str) -- One of the possible references
- **pred** (str) -- One of the possible predictions</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>The f1 score over the bag of words, computed using nltk.</retdesc></docstring>
Compares two strings only.








</div></div>
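
The per-pair score is a bag-of-words F1. A minimal sketch (whitespace tokenization stands in for the nltk tokenizer the real class uses; the function name is hypothetical):

```python
from collections import Counter

def f1_bag_of_words(gold: str, pred: str) -> float:
    """Token-overlap F1 between one gold and one prediction."""
    gold_toks, pred_toks = Counter(gold.split()), Counter(pred.split())
    overlap = sum((gold_toks & pred_toks).values())  # shared token count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_toks.values())
    recall = overlap / sum(gold_toks.values())
    return 2 * precision * recall / (precision + recall)

print(f1_bag_of_words("the cat sat", "the cat ran"))  # ≈ 0.667
```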

### LoglikelihoodAcc[[lighteval.metrics.metrics_sample.LoglikelihoodAcc]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.LoglikelihoodAcc</name><anchor>lighteval.metrics.metrics_sample.LoglikelihoodAcc</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L243</source><parameters>[{"name": "logprob_normalization", "val": ": lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.LoglikelihoodAcc.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L254</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing choices and gold indices.
- **model_response** (ModelResponse) -- The model's response containing logprobs.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>int</rettype><retdesc>The eval score: 1 if the best log-prob choice is in gold, 0 otherwise.</retdesc></docstring>
Computes the log likelihood accuracy: is the choice with the highest logprob in `choices_logprob` present
in the `gold_ixs`?








</div></div>

### NormalizedMultiChoiceProbability[[lighteval.metrics.metrics_sample.NormalizedMultiChoiceProbability]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.NormalizedMultiChoiceProbability</name><anchor>lighteval.metrics.metrics_sample.NormalizedMultiChoiceProbability</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L297</source><parameters>[{"name": "log_prob_normalization", "val": ": lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None"}, {"name": "aggregation_function", "val": ": typing.Callable[[numpy.ndarray], float] = <function max at 0x7f1a60940f30>"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.NormalizedMultiChoiceProbability.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L313</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing choices and gold indices.
- **model_response** (ModelResponse) -- The model's response containing logprobs.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>The probability of the best log-prob choice being a gold choice.</retdesc></docstring>
Computes the log-likelihood probability: the chance that the best log-prob choice is a gold choice.








</div></div>
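
The reported probability can be sketched as a softmax over the choice log-probabilities, with the aggregation function applied over the gold choices (an illustrative sketch with a hypothetical name; length normalization is omitted here):

```python
import math

def gold_choice_probability(
    choices_logprob: list[float],
    gold_ixs: list[int],
    aggregation=max,
) -> float:
    """Softmax probability mass on gold choices, aggregated over golds."""
    m = max(choices_logprob)                      # shift for numerical stability
    exps = [math.exp(lp - m) for lp in choices_logprob]
    total = sum(exps)
    return aggregation(exps[i] / total for i in gold_ixs)

print(gold_choice_probability([0.0, 0.0], gold_ixs=[0]))  # 0.5
```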

### Probability[[lighteval.metrics.metrics_sample.Probability]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.Probability</name><anchor>lighteval.metrics.metrics_sample.Probability</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L357</source><parameters>[{"name": "normalization", "val": ": lighteval.metrics.normalizations.LogProbTokenNorm | None = None"}, {"name": "aggregation_function", "val": ": typing.Callable[[numpy.ndarray], float] = <function max at 0x7f1a60940f30>"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.Probability.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L373</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing choices and gold indices.
- **model_response** (ModelResponse) -- The model's response containing logprobs.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>The probability of the best log-prob choice being a gold choice.</retdesc></docstring>
Computes the log-likelihood probability: the chance that the best log-prob choice is a gold choice.








</div></div>

### Recall[[lighteval.metrics.metrics_sample.Recall]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.Recall</name><anchor>lighteval.metrics.metrics_sample.Recall</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L408</source><parameters>[{"name": "k", "val": ": int"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.Recall.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L418</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing choices and gold indices.
- **model_response** (ModelResponse) -- The model's response containing logprobs.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>int</rettype><retdesc>Score: 1 if one of the top level predicted choices was correct, 0 otherwise.</retdesc></docstring>
Computes the recall at the requested depth: looks at the `k` best predicted choices (those with the
highest log probabilities) and checks whether a gold choice is among them.








</div></div>
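
The check amounts to ranking choices by log-probability and testing whether any gold index lands in the top `k` (an illustrative sketch, function name hypothetical):

```python
def recall_at_k(choices_logprob: list[float], gold_ixs: list[int], k: int) -> int:
    """1 if any gold choice is among the k highest-logprob choices."""
    ranked = sorted(
        range(len(choices_logprob)),
        key=choices_logprob.__getitem__,
        reverse=True,                      # best choice first
    )
    return int(any(ix in gold_ixs for ix in ranked[:k]))

print(recall_at_k([-0.1, -0.5, -0.9], gold_ixs=[1], k=2))  # 1
```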

### MRR[[lighteval.metrics.metrics_sample.MRR]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.MRR</name><anchor>lighteval.metrics.metrics_sample.MRR</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L438</source><parameters>[{"name": "length_normalization", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.MRR.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L447</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_response** (ModelResponse) -- The model's response containing logprobs.
- **doc** (Doc) -- The document containing choices and gold indices.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>MRR score.</retdesc></docstring>
Mean reciprocal rank. Measures the quality of a ranking of choices (ordered by correctness).








</div></div>
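
The score is the reciprocal of the rank (1-based) of the best-ranked gold choice in the log-probability ordering. A minimal sketch, not lighteval's implementation:

```python
def mrr(choices_logprob: list[float], gold_ixs: list[int]) -> float:
    """Reciprocal rank of the best-ranked gold choice."""
    ranked = sorted(
        range(len(choices_logprob)),
        key=choices_logprob.__getitem__,
        reverse=True,                      # best choice first
    )
    best_gold_rank = min(ranked.index(g) for g in gold_ixs)
    return 1.0 / (best_gold_rank + 1)

print(mrr([-0.9, -0.1, -0.5], gold_ixs=[0]))  # gold ranked 3rd → ≈ 0.333
```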

### ROUGE[[lighteval.metrics.metrics_sample.ROUGE]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.ROUGE</name><anchor>lighteval.metrics.metrics_sample.ROUGE</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L486</source><parameters>[{"name": "methods", "val": ": str | list[str]"}, {"name": "multiple_golds", "val": ": bool = False"}, {"name": "bootstrap", "val": ": bool = False"}, {"name": "normalize_gold", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "normalize_pred", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "aggregation_function", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "tokenizer", "val": ": object = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.ROUGE.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L533</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float or dict</rettype><retdesc>Aggregated score over the current sample's items.
If several rouge functions have been selected, returns a dict which maps name and scores.</retdesc></docstring>
Computes the metric(s) over a list of golds and predictions for a single sample.








</div></div>

### BertScore[[lighteval.metrics.metrics_sample.BertScore]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.BertScore</name><anchor>lighteval.metrics.metrics_sample.BertScore</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L598</source><parameters>[{"name": "normalize_gold", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "normalize_pred", "val": ": typing.Optional[typing.Callable] = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.BertScore.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L628</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>dict</rettype><retdesc>Scores over the current sample's items.</retdesc></docstring>
Computes the precision, recall and f1 score using the bert scorer.








</div></div>

### Extractiveness[[lighteval.metrics.metrics_sample.Extractiveness]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.Extractiveness</name><anchor>lighteval.metrics.metrics_sample.Extractiveness</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L661</source><parameters>[{"name": "normalize_input", "val": ": <built-in function callable> = <function remove_braces at 0x7f191b263f40>"}, {"name": "normalize_pred", "val": ": <built-in function callable> = <function remove_braces_and_strip at 0x7f191b284040>"}, {"name": "input_column", "val": ": str = 'text'"}, {"name": "language", "val": ": typing.Literal['en', 'de', 'fr', 'it'] = 'en'"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.Extractiveness.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L685</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing input text.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>dict[str, float]</rettype><retdesc>The extractiveness scores.</retdesc></docstring>
Compute the extractiveness of the predictions.

This method calculates coverage, density, and compression scores for a single
prediction against the input text.








</div></div>

### Faithfulness[[lighteval.metrics.metrics_sample.Faithfulness]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.Faithfulness</name><anchor>lighteval.metrics.metrics_sample.Faithfulness</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L717</source><parameters>[{"name": "normalize_input", "val": ": typing.Callable = <function remove_braces at 0x7f191b263f40>"}, {"name": "normalize_pred", "val": ": typing.Callable = <function remove_braces_and_strip at 0x7f191b284040>"}, {"name": "input_column", "val": ": str = 'text'"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.Faithfulness.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L738</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing input text.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>dict[str, float]</rettype><retdesc>The faithfulness scores.</retdesc></docstring>
Compute the faithfulness of the predictions.

The SummaCZS (Summary Content Zero-Shot) model is used with configurable granularity and model variation.








</div></div>

### BLEURT[[lighteval.metrics.metrics_sample.BLEURT]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.BLEURT</name><anchor>lighteval.metrics.metrics_sample.BLEURT</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L765</source><parameters>[]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.BLEURT.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L786</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Score over the current sample's items.</retdesc></docstring>
Uses the stored BLEURT scorer to compute the score on the current sample.








</div></div>

### BLEU[[lighteval.metrics.metrics_sample.BLEU]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.BLEU</name><anchor>lighteval.metrics.metrics_sample.BLEU</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L805</source><parameters>[{"name": "n_gram", "val": ": int"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.BLEU.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L815</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Score over the current sample's items.</retdesc></docstring>
Computes the sentence-level BLEU score between the golds and each prediction, then takes the average over predictions.








</div></div>
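As a reference for what the metric computes, here is a minimal, self-contained sketch of sentence-level BLEU (clipped n-gram precisions combined with a brevity penalty), averaged over predictions. It is an illustration of the formula only, not lighteval's implementation, which delegates to a dedicated BLEU scorer; the function names and signatures below are hypothetical.

```python
from collections import Counter
from math import exp, log


def sentence_bleu(references: list[str], hypothesis: str, max_n: int = 4) -> float:
    """Geometric mean of clipped n-gram precisions (1..max_n) times a brevity penalty."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i : i + n]) for i in range(len(hyp) - n + 1))
        if not hyp_ngrams:
            return 0.0
        # Clip each hypothesis n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in refs:
            ref_ngrams = Counter(tuple(ref[i : i + n]) for i in range(len(ref) - n + 1))
            for gram, cnt in ref_ngrams.items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in hyp_ngrams.items())
        if clipped == 0:
            return 0.0
        precisions.append(clipped / sum(hyp_ngrams.values()))
    # Brevity penalty against the reference length closest to the hypothesis length.
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(hyp)), rl))
    bp = 1.0 if len(hyp) >= ref_len else exp(1 - ref_len / len(hyp))
    return bp * exp(sum(log(p) for p in precisions) / max_n)


def avg_sentence_bleu(golds: list[str], predictions: list[str]) -> float:
    """Average the per-prediction sentence BLEU, as the metric description states."""
    return sum(sentence_bleu(golds, p) for p in predictions) / len(predictions)
```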

### StringDistance[[lighteval.metrics.metrics_sample.StringDistance]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.StringDistance</name><anchor>lighteval.metrics.metrics_sample.StringDistance</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L847</source><parameters>[{"name": "metric_types", "val": ": list[str] | str"}, {"name": "strip_prediction", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.StringDistance.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L869</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>dict</rettype><retdesc>The different scores computed</retdesc></docstring>
Computes all the requested metrics on the golds and prediction.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>edit_similarity</name><anchor>lighteval.metrics.metrics_sample.StringDistance.edit_similarity</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L927</source><parameters>[{"name": "s1", "val": ""}, {"name": "s2", "val": ""}]</parameters><rettype>float</rettype><retdesc>Edit similarity score between 0 and 1</retdesc></docstring>
Compute the edit similarity between two strings or token sequences.

Edit similarity is also used in Lee, Katherine, et al., "Deduplicating training data makes language models better," arXiv preprint arXiv:2107.06499 (2021).






</div>
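The metric can be sketched as one minus the Levenshtein distance normalized by the longer input's length. The pure-stdlib version below is illustrative (it assumes string inputs, though the method also accepts token sequences):

```python
def levenshtein(s1: str, s2: str) -> int:
    """Classic dynamic-programming edit distance using a rolling row."""
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (c1 != c2)))   # substitution
        prev = curr
    return prev[-1]


def edit_similarity(s1: str, s2: str) -> float:
    """1 - Levenshtein(s1, s2) / max(len(s1), len(s2)); a score in [0, 1]."""
    denom = max(len(s1), len(s2))
    return (1.0 - levenshtein(s1, s2) / denom) if denom else 1.0
```

For example, `"kitten"` and `"sitting"` are 3 edits apart, giving a similarity of 1 - 3/7.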
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>longest_common_prefix_length</name><anchor>lighteval.metrics.metrics_sample.StringDistance.longest_common_prefix_length</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L920</source><parameters>[{"name": "s1", "val": ": ndarray"}, {"name": "s2", "val": ": ndarray"}]</parameters></docstring>
Compute the length of the longest common prefix.

</div></div>
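For intuition, here is a pure-Python equivalent of the prefix computation over token sequences (the actual method operates on NumPy arrays):

```python
def longest_common_prefix_length(s1, s2) -> int:
    """Count how many leading elements the two sequences share."""
    n = 0
    for a, b in zip(s1, s2):
        if a != b:
            break
        n += 1
    return n
```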

### Metrics allowing sampling
#### PassAtK[[lighteval.metrics.metrics_sample.PassAtK]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.PassAtK</name><anchor>lighteval.metrics.metrics_sample.PassAtK</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1264</source><parameters>[{"name": "k", "val": ": int | None = None"}, {"name": "n", "val": ": int | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.PassAtK.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1278</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Aggregated score over the current sample's items.</retdesc></docstring>
Computes the metric over a list of golds and predictions for a single item with possibly many samples.
It applies normalisation (if needed) to the model predictions and golds, computes a per-prediction score,
then aggregates the scores over the samples using pass@k.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pass_at_k</name><anchor>lighteval.metrics.metrics_sample.PassAtK.pass_at_k</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1319</source><parameters>[{"name": "all_scores", "val": ": list"}]</parameters></docstring>
Unbiased pass@k estimator from https://arxiv.org/pdf/2107.03374

</div></div>
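The estimator from the paper linked above can be written directly with binomial coefficients: given n generations of which c are correct, pass@k is the probability that a random size-k subset contains at least one correct sample. This sketch is illustrative (lighteval aggregates per-sample score lists internally, so the signature here is hypothetical):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=2 generations, c=1 correct, pass@1 is 0.5.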

#### MajAtN[[lighteval.metrics.metrics_sample.MajAtN]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.MajAtN</name><anchor>lighteval.metrics.metrics_sample.MajAtN</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1212</source><parameters>[{"name": "n", "val": ": int | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.MajAtN.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1225</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **doc** (Doc) -- The document containing gold references.
- **model_response** (ModelResponse) -- The model's response containing predictions.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Aggregated score over the current sample's items.</retdesc></docstring>
Computes the metric over a list of golds and predictions for a single sample.
It applies normalisation (if needed) to the model predictions and gold, takes the most frequent answer
among the available samples, then compares it to the gold.








</div></div>
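The majority-voting step described above can be sketched with a `Counter`; the normalization function and signature below are illustrative, not lighteval's actual API:

```python
from collections import Counter


def maj_at_n(gold: str, predictions: list[str], normalize=str.strip) -> int:
    """Score 1 if the most frequent (normalized) prediction equals the normalized gold."""
    votes = Counter(normalize(p) for p in predictions)
    majority_answer, _ = votes.most_common(1)[0]
    return int(majority_answer == normalize(gold))
```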

#### AvgAtN[[lighteval.metrics.metrics_sample.AvgAtN]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.AvgAtN</name><anchor>lighteval.metrics.metrics_sample.AvgAtN</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1176</source><parameters>[{"name": "n", "val": ": int | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.AvgAtN.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1188</source><parameters>[{"name": "doc", "val": ": Doc"}, {"name": "model_response", "val": ": ModelResponse"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_response** (ModelResponse) -- The model's response containing predictions.
- **doc** (Doc) -- The document containing gold references.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>float</rettype><retdesc>Aggregated score over the current sample's items.</retdesc></docstring>
Computes the metric over a list of golds and predictions for a single sample.
It applies normalisation (if needed) to the model predictions and gold, computes a score for each
of the available samples, then averages the scores.








</div></div>

## LLM-as-a-Judge
### JudgeLM[[lighteval.metrics.utils.llm_as_judge.JudgeLM]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.utils.llm_as_judge.JudgeLM</name><anchor>lighteval.metrics.utils.llm_as_judge.JudgeLM</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/llm_as_judge.py#L67</source><parameters>[{"name": "model", "val": ": str"}, {"name": "templates", "val": ": typing.Callable"}, {"name": "process_judge_response", "val": ": typing.Callable"}, {"name": "judge_backend", "val": ": typing.Literal['litellm', 'openai', 'transformers', 'tgi', 'vllm', 'inference-providers']"}, {"name": "url", "val": ": str | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "max_tokens", "val": ": int | None = None"}, {"name": "response_format", "val": ": BaseModel = None"}, {"name": "hf_provider", "val": ": typing.Optional[typing.Literal['black-forest-labs', 'cerebras', 'cohere', 'fal-ai', 'fireworks-ai', 'inference-providers', 'hyperbolic', 'nebius', 'novita', 'openai', 'replicate', 'sambanova', 'together']] = None"}, {"name": "backend_options", "val": ": dict | None = None"}]</parameters><paramsdesc>- **model** (str) -- The name of the model.
- **templates** (Callable) -- A function taking into account the question, options, answer, and gold and returning the judge prompt.
- **process_judge_response** (Callable) -- A function for processing the judge's response.
- **judge_backend** (Literal["litellm", "openai", "transformers", "tgi", "vllm", "inference-providers"]) -- The backend for the judge.
- **url** (str | None) -- The URL for the OpenAI API.
- **api_key** (str | None) -- The API key for the OpenAI API (either OpenAI or HF key).
- **max_tokens** (int | None) -- The maximum number of tokens to generate. Defaults to 512.
- **response_format** (BaseModel | None) -- The format of the response from the API, used for the OpenAI and TGI backend.
- **hf_provider** (Literal["black-forest-labs", "cerebras", "cohere", "fal-ai", "fireworks-ai", "inference-providers", "hyperbolic", "nebius", "novita", "openai", "replicate", "sambanova", "together"] | None) -- The Hugging Face provider to use with the inference-providers backend.
- **backend_options** (dict | None) -- Options for the backend. Currently only supported for litellm.</paramsdesc><paramgroups>0</paramgroups></docstring>
A class representing a judge that evaluates answers using the chosen backend.



Methods:
evaluate_answer: Evaluates an answer using the OpenAI API or Transformers library.
__lazy_load_client: Lazy loads the OpenAI client or Transformers pipeline.
__call_api: Calls the API to get the judge's response.
__call_transformers: Calls the Transformers pipeline to get the judge's response.
__call_vllm: Calls the VLLM pipeline to get the judge's response.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dict_of_lists_to_list_of_dicts</name><anchor>lighteval.metrics.utils.llm_as_judge.JudgeLM.dict_of_lists_to_list_of_dicts</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/llm_as_judge.py#L204</source><parameters>[{"name": "dict_of_lists", "val": ""}]</parameters><paramsdesc>- **dict_of_lists** -- A dictionary where each value is a list.
  All lists are expected to have the same length.</paramsdesc><paramgroups>0</paramgroups><retdesc>A list of dictionaries.</retdesc></docstring>
Transform a dictionary of lists into a list of dictionaries.

Each dictionary in the output list will contain one element from each list in the input dictionary,
with the same keys as the input dictionary.





Example:
>>> dict_of_lists_to_list_of_dicts({'k': [1, 2, 3], 'k2': ['a', 'b', 'c']})
[{'k': 1, 'k2': 'a'}, {'k': 2, 'k2': 'b'}, {'k': 3, 'k2': 'c'}]


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>evaluate_answer</name><anchor>lighteval.metrics.utils.llm_as_judge.JudgeLM.evaluate_answer</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/utils/llm_as_judge.py#L272</source><parameters>[{"name": "question", "val": ": str"}, {"name": "answer", "val": ": str"}, {"name": "options", "val": ": list[str] | None = None"}, {"name": "gold", "val": ": str | None = None"}]</parameters><paramsdesc>- **question** (str) -- The prompt asked to the evaluated model.
- **answer** (str) -- Answer given by the evaluated model.
- **options** (list[str] | None) -- Optional list of answer options.
- **gold** (str | None) -- Optional reference answer.</paramsdesc><paramgroups>0</paramgroups><retdesc>A tuple containing the score, prompts, and judgment.</retdesc></docstring>
Evaluates an answer using either the Transformers library or the OpenAI API.






</div></div>

### JudgeLLM[[lighteval.metrics.metrics_sample.JudgeLLM]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.JudgeLLM</name><anchor>lighteval.metrics.metrics_sample.JudgeLLM</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L942</source><parameters>[{"name": "judge_model_name", "val": ": str"}, {"name": "template", "val": ": typing.Callable"}, {"name": "process_judge_response", "val": ": typing.Callable"}, {"name": "judge_backend", "val": ": typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi', 'inference-providers']"}, {"name": "short_judge_name", "val": ": str | None = None"}, {"name": "response_format", "val": ": pydantic.main.BaseModel | None = None"}, {"name": "url", "val": ": str | None = None"}, {"name": "hf_provider", "val": ": str | None = None"}, {"name": "max_tokens", "val": ": int | None = None"}, {"name": "backend_options", "val": ": dict | None = None"}]</parameters></docstring>


</div>

### JudgeLLMMTBench[[lighteval.metrics.metrics_sample.JudgeLLMMTBench]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.JudgeLLMMTBench</name><anchor>lighteval.metrics.metrics_sample.JudgeLLMMTBench</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1047</source><parameters>[{"name": "judge_model_name", "val": ": str"}, {"name": "template", "val": ": typing.Callable"}, {"name": "process_judge_response", "val": ": typing.Callable"}, {"name": "judge_backend", "val": ": typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi', 'inference-providers']"}, {"name": "short_judge_name", "val": ": str | None = None"}, {"name": "response_format", "val": ": pydantic.main.BaseModel | None = None"}, {"name": "url", "val": ": str | None = None"}, {"name": "hf_provider", "val": ": str | None = None"}, {"name": "max_tokens", "val": ": int | None = None"}, {"name": "backend_options", "val": ": dict | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.JudgeLLMMTBench.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1048</source><parameters>[{"name": "model_response", "val": ": list"}, {"name": "doc", "val": ": list"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Computes the score of a generative task using an LLM as a judge.
The generative task can be multi-turn (at most two turns); in that case, scores are
returned for both turn 1 and turn 2. Also returns the user prompt and judgement,
which are later ignored by the aggregator.


</div></div>

### JudgeLLMMixEval[[lighteval.metrics.metrics_sample.JudgeLLMMixEval]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.metrics.metrics_sample.JudgeLLMMixEval</name><anchor>lighteval.metrics.metrics_sample.JudgeLLMMixEval</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1082</source><parameters>[{"name": "judge_model_name", "val": ": str"}, {"name": "template", "val": ": typing.Callable"}, {"name": "process_judge_response", "val": ": typing.Callable"}, {"name": "judge_backend", "val": ": typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi', 'inference-providers']"}, {"name": "short_judge_name", "val": ": str | None = None"}, {"name": "response_format", "val": ": pydantic.main.BaseModel | None = None"}, {"name": "url", "val": ": str | None = None"}, {"name": "hf_provider", "val": ": str | None = None"}, {"name": "max_tokens", "val": ": int | None = None"}, {"name": "backend_options", "val": ": dict | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute</name><anchor>lighteval.metrics.metrics_sample.JudgeLLMMixEval.compute</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/metrics/metrics_sample.py#L1083</source><parameters>[{"name": "responses", "val": ": list"}, {"name": "docs", "val": ": list"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Computes the score of a generative task using an LLM as a judge.
The generative task can be multi-turn (at most two turns); in that case, scores are
returned for both turn 1 and turn 2. Also returns the user prompt and judgement,
which are later ignored by the aggregator.


</div></div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/metrics.mdx" />

### Pipeline
https://huggingface.co/docs/lighteval/main/package_reference/pipeline.md

# Pipeline

## Pipeline[[lighteval.pipeline.Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.pipeline.Pipeline</name><anchor>lighteval.pipeline.Pipeline</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/pipeline.py#L117</source><parameters>[{"name": "tasks", "val": ": str"}, {"name": "pipeline_parameters", "val": ": PipelineParameters"}, {"name": "evaluation_tracker", "val": ": EvaluationTracker"}, {"name": "model_config", "val": ": lighteval.models.abstract_model.ModelConfig | None = None"}, {"name": "model", "val": " = None"}, {"name": "metric_options", "val": " = None"}]</parameters></docstring>


</div>

## PipelineParameters[[lighteval.pipeline.PipelineParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.pipeline.PipelineParameters</name><anchor>lighteval.pipeline.PipelineParameters</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/pipeline.py#L82</source><parameters>[{"name": "launcher_type", "val": ": ParallelismManager"}, {"name": "job_id", "val": ": int = 0"}, {"name": "dataset_loading_processes", "val": ": int = 1"}, {"name": "nanotron_checkpoint_path", "val": ": str | None = None"}, {"name": "custom_tasks_directory", "val": ": str | None = None"}, {"name": "num_fewshot_seeds", "val": ": int = 1"}, {"name": "max_samples", "val": ": int | None = None"}, {"name": "cot_prompt", "val": ": str | None = None"}, {"name": "remove_reasoning_tags", "val": ": bool = True"}, {"name": "reasoning_tags", "val": ": str | list[tuple[str, str]] = \"[('<think>', '</think>')]\""}, {"name": "load_responses_from_details_date_id", "val": ": str | None = None"}, {"name": "bootstrap_iters", "val": ": int = 1000"}, {"name": "load_tasks_multilingual", "val": ": bool = False"}]</parameters></docstring>


</div>

## ParallelismManager[[lighteval.pipeline.ParallelismManager]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class lighteval.pipeline.ParallelismManager</name><anchor>lighteval.pipeline.ParallelismManager</anchor><source>https://github.com/huggingface/lighteval/blob/main/src/lighteval/pipeline.py#L70</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>
An enumeration of the available parallelism/launcher backends.

</div>

<EditOnGithub source="https://github.com/huggingface/lighteval/blob/main/docs/source/package_reference/pipeline.mdx" />
