Lighteval supports multilingual evaluations through a comprehensive system of translation literals and language-adapted templates.
We define 19 literals: basic keywords and punctuation signs used when building evaluation prompts automatically, such as "yes", "no", "because", etc.
These literals ensure that automatically generated prompts read naturally in each target language instead of falling back to English keywords.
We welcome translations in your language! To contribute:

1. Open the translation literals file: `translation_literals.py`
2. Edit the file to add or expand the literals for your language of interest
3. Open a PR with your modifications
For example, here is the English entry:

```python
Language.ENGLISH: TranslationLiterals(
    language=Language.ENGLISH,
    question_word="question",  # Usage: "Question: How are you?"
    answer="answer",  # Usage: "Answer: I am fine"
    confirmation_word="right",  # Usage: "He is smart, right?"
    yes="yes",  # Usage: "Yes, he is"
    no="no",  # Usage: "No, he is not"
    also="also",  # Usage: "Also, she is smart."
    cause_word="because",  # Usage: "She is smart, because she is tall"
    effect_word="therefore",  # Usage: "He is tall therefore he is smart"
    or_word="or",  # Usage: "He is tall or small"
    true="true",  # Usage: "He is smart, true, false or neither?"
    false="false",  # Usage: "He is smart, true, false or neither?"
    neither="neither",  # Usage: "He is smart, true, false or neither?"
    # Punctuation and spacing: only adjust if your language uses something different from English
    full_stop=".",
    comma=",",
    question_mark="?",
    exclamation_mark="!",
    word_space=" ",
    sentence_space=" ",
    colon=":",
    # The first characters of your alphabet used in enumerations, if different from English
    indices=["A", "B", "C", ...]
)
```
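As an illustration, a contribution for another language fills in an analogous entry. The French values below are shown for demonstration only, so verify any translation with a fluent speaker before opening a PR; per the comments above, punctuation and spacing fields only need to be set where they differ from English:

```python
Language.FRENCH: TranslationLiterals(
    language=Language.FRENCH,
    question_word="question",  # Usage: "Question : Comment ça va ?"
    answer="réponse",  # Usage: "Réponse : Je vais bien"
    confirmation_word="n'est-ce pas",
    yes="oui",
    no="non",
    also="aussi",
    cause_word="parce que",
    effect_word="donc",
    or_word="ou",
    true="vrai",
    false="faux",
    neither="aucun des deux",
    # French typography places a space before two-part punctuation marks
    question_mark=" ?",
    exclamation_mark=" !",
    colon=" :",
)
```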
Before creating a new multilingual task, you should first check whether it already exists among the current multilingual tasks, and read the custom task guide to understand the overall mechanism.

For multilingual evaluations, the `prompt_function` should be implemented using language-adapted templates. These templates draw on the translation literals above for language-appropriate keywords and punctuation, and keep prompt formatting consistent across languages and formulations.
Available template types include:

- `get_nli_prompt_function`
- `get_copa_prompt_function`
- `get_mcq_prompt_function`
- `get_qa_prompt_function`
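For instance, here is a minimal sketch of building a prompt function from the MCQ template. The import paths and the adapter keys (`question`, `choices`, `gold_idx`) reflect the current API as we understand it, so consult the template's docstring for your lighteval version; the dataset column names are hypothetical. The `formulation` argument is covered next:

```python
from lighteval.tasks.templates.multichoice import get_mcq_prompt_function
from lighteval.tasks.templates.utils.formulation import MCFFormulation
from lighteval.utils.language import Language

# Build a prompt function that renders English multiple-choice prompts.
# The adapter maps template keys (left) to dataset columns (right).
prompt_fn = get_mcq_prompt_function(
    Language.ENGLISH,
    lambda line: {
        "question": line["question"],
        "choices": line["choices"],        # list of answer strings
        "gold_idx": line["answer_index"],  # index of the correct choice
    },
    formulation=MCFFormulation(),
)
```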
`MCFFormulation()` is used for standard multiple choice questions where the model selects from lettered options.

Example output:

```
Question: What is the capital of France?
A. London
B. Paris
C. Berlin
D. Rome
Answer: | A/B/C/D
```

(The text before `|` is the prompt; the model is scored on the continuations after it.)

`CFFormulation()` is used for classification tasks where the model generates the answer directly.
Example output:
```
Question: What is the capital of France?
Answer: | Paris
```

`HybridFormulation()` is used for tasks that present choices but expect the full answer text.
Example output:
```
Question: What is the capital of France?
A. London
B. Paris
C. Berlin
D. Rome
Answer: | Paris
```

Create a Python file following the structure described in the custom task guide.
```python
from lighteval.metrics.dynamic_metrics import LogLikelihoodAccMetric
from lighteval.metrics.normalizations import LogProbCharNorm, LogProbTokenNorm
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.multilingual.utils.task_utils import get_metrics_for_formulation
from lighteval.tasks.templates.multichoice import get_mcq_prompt_function
from lighteval.tasks.templates.utils.formulation import (
    CFFormulation,
    HybridFormulation,
    MCFFormulation,
)
from lighteval.utils.language import Language

your_tasks = [
    LightevalTaskConfig(
        # Name of your evaluation
        name=f"evalname_{language.value}_{formulation.name.lower()}",
        # The evaluation is community contributed
        suite=["community"],
        # This will automatically get the correct metrics for your chosen formulation
        metric=get_metrics_for_formulation(
            formulation,
            [
                LogLikelihoodAccMetric(normalization=None),
                LogLikelihoodAccMetric(normalization=LogProbTokenNorm()),
                LogLikelihoodAccMetric(normalization=LogProbCharNorm()),
            ],
        ),
        # In this function, you choose which template to follow and for which
        # language and formulation
        prompt_function=get_mcq_prompt_function(
            language=language,
            # Use the adapter to define the mapping between the
            # keys of the template (left) and the keys of your dataset (right).
            # To know which template keys are required and available,
            # consult the appropriate adapter type and its docstring.
            adapter=lambda line: {
                "key": line["relevant_key"],
                # Add more mappings as needed
            },
            formulation=formulation,
        ),
        # You can also add specific filters to remove irrelevant samples
        hf_filter=lambda line: line["label"] in <condition>,
        # You then select your Hugging Face dataset as well as
        # the splits available for evaluation
        hf_repo=<dataset>,
        hf_subset=<subset>,
        evaluation_splits=["train"],
        hf_avail_splits=["train"],
    )
    for language in [
        Language.YOUR_LANGUAGE,  # Add your target languages
        # Language.SPANISH,
        # Language.FRENCH,
        # etc.
    ]
    for formulation in [MCFFormulation(), CFFormulation(), HybridFormulation()]
]
```

Follow the custom task guide to test if your task is correctly implemented.
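Once the placeholders are filled in, it also helps to eyeball a rendered prompt before opening a PR. Here is a minimal sketch, assuming your file is importable as `your_task_file` (a hypothetical module name) and using a made-up dataset row:

```python
from your_task_file import your_tasks  # hypothetical module name

# A fake dataset row with the columns your adapter and filter expect
sample = {"relevant_key": "...", "label": "..."}

# Template-generated prompt functions take a dataset line and a task name
task = your_tasks[0]
doc = task.prompt_function(sample, task.name)
print(doc.query)    # the rendered prompt
print(doc.choices)  # the rendered choices
```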
All `LightevalTaskConfig` parameters are strongly typed, including the inputs to the template function. Take advantage of your IDE's autocompletion and type hints to fill them in correctly.