🤗 Lighteval is your all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends with ease. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results to debug and see how your models stack up.
Evaluate your models using the most popular and efficient inference backends:
- eval: Use inspect-ai as the backend to evaluate and inspect your models! (preferred way)
- transformers: Evaluate models on CPU or one or more GPUs using 🤗 Accelerate
- nanotron: Evaluate models in distributed settings using ⚡️ Nanotron
- vllm: Evaluate models on one or more GPUs using 🚀 VLLM
- custom: Evaluate custom models (can be anything)
- sglang: Evaluate models using SGLang as backend
- inference-endpoint: Evaluate models using Hugging Face's Inference Endpoints API
- tgi: Evaluate models using 🔗 Text Generation Inference running locally
- litellm: Evaluate models on any compatible API using LiteLLM
- inference-providers: Evaluate models using Hugging Face's inference providers as backend

Customization at your fingertips: create new tasks, metrics or models tailored to your needs, or browse all our existing tasks and metrics.
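Each backend listed above is reached through its own lighteval subcommand. As a minimal sketch (the model id is just an example, and the model_name= argument format is an assumption based on the current docs, so check lighteval --help for the options available in your version), switching backends mostly means switching the subcommand:

```bash
# Minimal sketch: subcommand and argument format may differ between lighteval versions.
# Same task (GPQA-Diamond, zero-shot), two different backends:
lighteval accelerate "model_name=Qwen/Qwen2.5-7B-Instruct" "lighteval|gpqa:diamond|0"  # 🤗 Transformers + Accelerate
lighteval vllm "model_name=Qwen/Qwen2.5-7B-Instruct" "lighteval|gpqa:diamond|0"        # vLLM
```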
Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.
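For example, with the classic backends a run can write its scores and per-sample details locally and push them to the Hub. The sketch below assumes the --output-dir, --save-details, --push-to-hub and --results-org options behave as described in the current docs (flag names may vary by version), and your-org-name is a placeholder for your Hub organization:

```bash
# Hypothetical flag names; confirm them with `lighteval vllm --help` before relying on this.
lighteval vllm \
    "model_name=Qwen/Qwen2.5-7B-Instruct" \
    "lighteval|gpqa:diamond|0" \
    --output-dir ./results \
    --save-details \
    --push-to-hub \
    --results-org your-org-name
```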
pip install lighteval
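Backend-specific dependencies ship as optional extras. The extra names below are assumptions drawn from the backend list above; check the installation guide for the exact ones:

```bash
# Assumed extras; verify the exact names in the installation docs.
pip install "lighteval[vllm]"      # vLLM backend
pip install "lighteval[nanotron]"  # Nanotron backend
```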
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" \
"lighteval|gpqa:diamond|0" \
--bundle-dir gpt-oss-bundle \
--repo-id OpenEvals/evals

Resulting Space:
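In this quickstart, the model argument selects the inference-providers backend together with the openai/gpt-oss-20b model, and the task string follows the suite|task|few-shot pattern (the GPQA-Diamond task from the lighteval suite, evaluated zero-shot). The --bundle-dir option keeps a browsable bundle of the run locally, while --repo-id pushes it to the Hub repository that backs the resulting Space referenced above.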