Pick the right benchmarks with our benchmark finder: search by language, task type, dataset name, or keywords.
Not all tasks are compatible with inspect-ai’s API yet; we are working on converting all of them.
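If you prefer to stay in the terminal, you can also browse the task catalogue from the lighteval CLI. The exact subcommand can vary between releases, so treat this as a sketch rather than a guaranteed interface:

```bash
# List all registered tasks and filter for the one you want
# (assumes the `lighteval tasks list` subcommand shipped with recent releases)
lighteval tasks list | grep -i gpqa
```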
Once you’ve chosen a benchmark, run it with `lighteval eval`. Below are examples for common setups.
lighteval eval "hf-inference-providers/openai/gpt-oss-20b" "lighteval|gpqa:diamond|0"lighteval eval "hf-inference-providers/openai/gpt-oss-20b" "lighteval|gpqa:diamond|0,lighteval|aime25|0"lighteval eval \
hf-inference-providers/openai/gpt-oss-20b:fireworks-ai \
hf-inference-providers/openai/gpt-oss-20b:together \
hf-inference-providers/openai/gpt-oss-20b:nebius \
"lighteval|gpqa:diamond|0"lighteval eval vllm/HuggingFaceTB/SmolLM-135M-Instruct "lighteval|gpqa:diamond|0"lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|gsm8k|0,lighteval|gsm8k|5"lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|gsm8k|0" \
--max-connections 50 \
--timeout 30 \
--retry-on-error 1 \
--max-retries 1 \
--max-samples 10lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|aime25|0" --epochs 16 --epochs-reducer "pass_at_4"lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|hle|0" \
--bundle-dir gpt-oss-bundle \
--repo-id OpenEvals/evals \
--max-samples 100Resulting Space:
You can use any argument defined in inspect-ai’s API. For example, override the sampling temperature:

```bash
lighteval eval hf-inference-providers/openai/gpt-oss-20b "lighteval|aime25|0" --temperature 0.1
```

or pass provider-specific options through `--model-args`:

```bash
lighteval eval google/gemini-2.5-pro "lighteval|aime25|0" --model-args location=us-east5
lighteval eval openai/gpt-4o "lighteval|gpqa:diamond|0" --model-args service_tier=flex,client_timeout=1200
```

LightEval prints a per-model results table:
```
Completed all tasks in 'lighteval-logs' successfully
|                 Model                 |gpqa|gpqa:diamond|
|---------------------------------------|---:|-----------:|
|vllm/HuggingFaceTB/SmolLM-135M-Instruct|0.01|        0.01|
results saved to lighteval-logs
run "inspect view --log-dir lighteval-logs" to view the results
```