```
Doc(
    query: str
    choices: list
    gold_index: typing.Union[int, list[int]]
    instruction: str | None = None
    images: list['Image'] | None = None
    specific: dict | None = None
    unconditioned_query: str | None = None
    original_query: str | None = None
    id: str = ''
    task_name: str = ''
    fewshot_samples: list = <factory>
    sampling_methods: list = <factory>
    fewshot_sorting_class: str | None = None
    generation_size: int | None = None
    stop_sequences: list[str] | None = None
    use_logits: bool = False
    num_samples: int = 1
    generation_grammar: None = None
)
```
Dataclass representing a single evaluation sample for a benchmark.
This class encapsulates all the information needed to evaluate a model on a single task instance. It contains the input query, expected outputs, metadata, and configuration parameters for different types of evaluation tasks.
Required Fields:
- query: The input prompt or question.
- choices: Available answer choices (for multiple choice tasks).
- gold_index: Index(es) of the correct answer(s).

Optional Fields:
- instruction: Task-specific system prompt. Will be appended to the model-specific system prompt.
- images: Visual inputs for multimodal tasks.

Methods:
- get_golds(): Returns the correct answer(s) as strings based on gold_index. Handles both single and multiple correct answers.
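Because gold_index accepts either a single index or a list of indices, a sample with more than one acceptable answer can be expressed directly. A minimal sketch (the values below are illustrative, not taken from any real benchmark):

```python
# Hypothetical sample: passing gold_index as a list marks
# several entries of `choices` as correct.
doc = Doc(
    query="Which of the following are prime numbers?",
    choices=["4", "5", "7", "9"],
    gold_index=[1, 2],  # "5" and "7" are both correct
)
```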
Usage Examples:
Multiple Choice Question:
```python
doc = Doc(
    query="What is the capital of France?",
    choices=["London", "Paris", "Berlin", "Madrid"],
    gold_index=1,  # Paris is the correct answer
    instruction="Answer the following geography question:",
)
```

Generative Task:
```python
doc = Doc(
    query="Write a short story about a robot.",
    choices=[],  # No predefined choices for generative tasks
    gold_index=0,  # Not used for generative tasks
    generation_size=100,
    stop_sequences=["\nEnd"],
)
```

Few-shot Learning:
```python
doc = Doc(
    query="Translate 'Hello world' to Spanish.",
    choices=["Hola mundo", "Bonjour monde", "Ciao mondo"],
    gold_index=0,
    fewshot_samples=[
        Doc(query="Translate 'Good morning' to Spanish.",
            choices=["Buenos días", "Bonjour", "Buongiorno"],
            gold_index=0),
        Doc(query="Translate 'Thank you' to Spanish.",
            choices=["Gracias", "Merci", "Grazie"],
            gold_index=0),
    ],
)
```

Multimodal Task:
```python
doc = Doc(
    query="What is shown in this image?",
    choices=["A cat"],
    gold_index=0,
    images=[pil_image],  # PIL Image object
)
```

get_golds():
Return gold targets extracted from the target dict.
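As a sketch of how get_golds() behaves (the return value shown is inferred from the description above, not verified against the implementation), the gold index is resolved against choices and the matching answer string(s) are returned:

```python
doc = Doc(
    query="What is the capital of France?",
    choices=["London", "Paris", "Berlin", "Madrid"],
    gold_index=1,
)

# Per the description above, get_golds() maps gold_index back into
# `choices`; a list gold_index would yield several strings.
golds = doc.get_golds()
print(golds)  # expected: ["Paris"]
```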