Tracks general configuration and runtime information for model evaluations.
This logger captures key configuration parameters, model details, and timing information
to ensure reproducibility and provide insights into the evaluation process.

lighteval_sha (str) — Git commit SHA of lighteval used for the evaluation, enabling exact version reproducibility.
Set to "?" if the evaluation is not run from inside a git repository (see the sketch below).
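A minimal sketch of how such a value can be captured, assuming a plain git call with a "?" fallback (illustrative only, not lighteval's exact implementation):

```python
import subprocess


def get_lighteval_sha() -> str:
    """Illustrative helper: current commit SHA, or "?" outside a git repository."""
    try:
        return (
            subprocess.check_output(
                ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
            )
            .decode("ascii")
            .strip()
        )
    except (subprocess.CalledProcessError, OSError):
        # Not a git repository, or git is unavailable
        return "?"
```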
num_fewshot_seeds (int) — Number of random seeds used for few-shot example sampling.
If <= 1: a single evaluation is run with seed=0.
If > 1: multiple evaluations are run with different few-shot samplings (HELM-style), as sketched below.
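A minimal sketch of the seed-selection rule described above (the helper name is hypothetical):

```python
def fewshot_seeds(num_fewshot_seeds: int) -> list[int]:
    """Illustrative: <= 1 yields one run with seed 0; > 1 yields one run per seed."""
    if num_fewshot_seeds <= 1:
        return [0]
    return list(range(num_fewshot_seeds))


assert fewshot_seeds(1) == [0]
assert fewshot_seeds(3) == [0, 1, 2]
```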
max_samples (int, optional) — Maximum number of samples to evaluate per task.
Only used for debugging: truncates each task’s dataset.
job_id (int, optional) — Slurm job ID if running on a cluster.
Used to cross-reference with scheduler logs.
start_time (float) — Unix timestamp when evaluation started.
Automatically set during logger initialization.
end_time (float) — Unix timestamp when evaluation completed.
Set by calling log_end_time().
total_evaluation_time_secondes (str) — Total runtime in seconds, stored as a string.
Calculated as end_time - start_time (see the timing sketch below).
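A minimal sketch of how the three timing fields relate. The attribute and method names come from this section; the class scaffolding itself is an illustrative assumption:

```python
import time


class GeneralConfigLogger:  # class name assumed for illustration
    def __init__(self) -> None:
        # start_time is set automatically during logger initialization
        self.start_time: float = time.time()
        self.end_time: float | None = None
        self.total_evaluation_time_secondes: str | None = None

    def log_end_time(self) -> None:
        # end_time is a Unix timestamp; the total is end_time - start_time
        self.end_time = time.time()
        self.total_evaluation_time_secondes = str(self.end_time - self.start_time)
```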
model_config (ModelConfig) — Complete model configuration settings.
Contains model architecture, tokenizer, and generation parameters.
model_name (str) — Name identifier for the evaluated model.
Extracted from model_config.
Logs the actual scores for each metric of each task.

metrics_value (dict[str, dict[str, list[float]]]) — Maps each task to a dictionary of metric names to the per-example scores for all the examples of the task.
Example: {"winogrande|winogrande_xl": {"accuracy": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]}}
metric_aggregated (dict[str, dict[str, float]]) — Maps each task to a dictionary of metric names to scores aggregated over all the examples of the task (see the aggregation sketch below).
Example: {"winogrande|winogrande_xl": {"accuracy": 0.5}}
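A minimal sketch of how metric_aggregated relates to metrics_value, using a mean as the aggregation. The actual aggregation depends on the metric; the mean here is an illustrative assumption:

```python
from statistics import mean

# Per-example scores: task -> metric -> list of scores (metrics_value)
metrics_value = {
    "winogrande|winogrande_xl": {"accuracy": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]},
}

# Aggregate each metric over all examples of the task (metric_aggregated)
metric_aggregated = {
    task: {metric: mean(scores) for metric, scores in metrics.items()}
    for task, metrics in metrics_value.items()
}

print(metric_aggregated)  # {'winogrande|winogrande_xl': {'accuracy': 0.5}}
```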