🤗 Lighteval is your all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends with ease. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results to debug and see how your models stack up.
Evaluate your models using the most popular and efficient inference backends:
- `transformers`: Evaluate models on CPU or one or more GPUs using 🤗 Accelerate
- `nanotron`: Evaluate models in distributed settings using ⚡️ Nanotron
- `vllm`: Evaluate models on one or more GPUs using 🚀 vLLM
- `custom`: Evaluate custom models (can be anything)
- `sglang`: Evaluate models using SGLang as backend
- `inference-endpoint`: Evaluate models using Hugging Face's Inference Endpoints API
- `tgi`: Evaluate models using Text Generation Inference running locally
- `litellm`: Evaluate models on any compatible API using LiteLLM
- `inference-providers`: Evaluate models using Hugging Face's inference providers as backend

Customization at your fingertips: create new tasks, metrics, or models tailored to your needs, or browse all our existing tasks and metrics.
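Tasks in the CLI examples below are referenced with a pipe-separated specification string of the form `suite|task|num_fewshot`, e.g. `leaderboard|truthfulqa:mc|0`. As a rough sketch of how to read such a string (the function and field names here are illustrative, not Lighteval's internal API):

```python
# Illustrative parser for a Lighteval-style task specification string.
# The function and field names are hypothetical, not Lighteval's API.

def parse_task_spec(spec: str) -> dict:
    """Split 'suite|task|num_fewshot' into its components."""
    suite, task, num_fewshot = spec.split("|")
    return {
        "suite": suite,                   # task suite, e.g. "leaderboard"
        "task": task,                     # task name, e.g. "truthfulqa:mc"
        "num_fewshot": int(num_fewshot),  # number of few-shot examples
    }

print(parse_task_spec("leaderboard|truthfulqa:mc|0"))
```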
Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.
```bash
pip install lighteval
```
```bash
# Evaluate a model using the Transformers backend
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "leaderboard|truthfulqa:mc|0"

# Save results locally
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "leaderboard|truthfulqa:mc|0" \
    --output-dir ./results

# Push results to the Hugging Face Hub
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "leaderboard|truthfulqa:mc|0" \
    --push-to-hub \
    --results-org your-username
```
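Results saved with `--output-dir` can then be explored programmatically. A minimal sketch of loading a results file as JSON; the file name and field names below are hypothetical placeholders, not Lighteval's exact output schema:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical results payload: the keys below are illustrative,
# not Lighteval's exact output schema.
results = {
    "results": {"leaderboard|truthfulqa:mc|0": {"acc": 0.41}},
    "config_general": {"model_name": "openai-community/gpt2"},
}

# Round-trip through a JSON file, as you would with a saved results file.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "results.json"
    path.write_text(json.dumps(results, indent=2))
    loaded = json.loads(path.read_text())

# Iterate over per-task metrics.
for task, metrics in loaded["results"].items():
    print(task, metrics)
```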