Lighteval

🤗 Lighteval is your all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends with ease. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results, so you can debug your models and see how they stack up.

Key Features

🚀 Multi-Backend Support

Evaluate your models with the most popular and efficient inference backends, from local Transformers and vLLM runs to hosted inference providers (see the sketch below).
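For example, each backend is exposed as its own subcommand, so the same task can run against different serving stacks. The commands below are a rough sketch: the model_name=... argument string and the exact flags follow recent releases and may differ in yours.

# Same task, two backends: local Transformers (via Accelerate) or vLLM.
lighteval accelerate "model_name=openai/gpt-oss-20b" "lighteval|gpqa:diamond|0"
lighteval vllm "model_name=openai/gpt-oss-20b" "lighteval|gpqa:diamond|0"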

📊 Comprehensive Evaluation

🔧 Easy Customization

Customization at your fingertips: create new tasks, metrics, or model configurations tailored to your needs, or browse all our existing tasks and metrics (see the example below).
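For example, task definitions kept in a local Python module can be loaded at evaluation time. This is a minimal sketch: the --custom-tasks flag, the community suite, and my_tasks.py / my_task are illustrative placeholders that may differ in your lighteval version.

# my_tasks.py is a hypothetical module exposing your custom task configurations.
lighteval accelerate "model_name=openai/gpt-oss-20b" "community|my_task|0" \
    --custom-tasks ./my_tasks.py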

โ˜๏ธ Seamless Integration

Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.
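As an illustration, a single run can write detailed results to a local directory (or an s3:// path) and push them to the Hub. The flags and the my-org organization below are assumptions based on recent releases; check lighteval --help for your version.

# Hypothetical flags: --output-dir also accepts s3:// paths, and
# --push-to-hub uploads the results under the --results-org organization.
lighteval accelerate "model_name=openai/gpt-oss-20b" "lighteval|gpqa:diamond|0" \
    --output-dir ./results \
    --save-details \
    --push-to-hub \
    --results-org my-org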

Quick Start

Installation

pip install lighteval
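Backend-specific dependencies usually ship as optional extras; the extra names below are assumptions, so check the package metadata if they do not resolve.

pip install "lighteval[accelerate]"   # local Transformers / Accelerate backend
pip install "lighteval[vllm]"         # vLLM backend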

Basic Usage

Find a task
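You can typically browse and inspect the available task specifiers from the command line; the tasks subcommands below are assumptions based on recent releases, so check lighteval --help if they differ in yours.

lighteval tasks list
lighteval tasks inspect "lighteval|gpqa:diamond|0"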

Run your benchmark and push the details to the Hub

lighteval eval "hf-inference-providers/openai/gpt-oss-20b" \
    "lighteval|gpqa:diamond|0" \
    --bundle-dir gpt-oss-bundle \
    --repo-id OpenEvals/evals
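The task specifier "lighteval|gpqa:diamond|0" follows the suite|task|num_fewshot convention (assuming the current task-string format): the gpqa:diamond task from the lighteval suite, evaluated with zero few-shot examples.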

The resulting details are pushed to a Space on the Hugging Face Hub, where you can explore them sample by sample.
