Update dataset card: Add task categories, language, and populate paper/GitHub links

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -1,4 +1,8 @@
 ---
+task_categories:
+- image-text-to-text
+language:
+- en
 dataset_info:
 - config_name: attribute_grounding_and_alignment
   features:
@@ -90,7 +94,7 @@ configs:
 ---
 
 ## πŸ“Š ChartAlignBench
-[**πŸ“– Paper**]() | [**πŸ’» GitHub**]()
+[**πŸ“– Paper**](https://huggingface.co/papers/2510.26781) | [**πŸ’» GitHub**](https://github.com/tianyi-lab/ChartAlignBench)
 
 ChartAlignBench is a multi-modal benchmark designed to evaluate vision-language models (VLMs) on dense-level chart grounding and multi-chart alignment to comprehensively assess fine-grained chart understanding in VLMs.
 
@@ -112,7 +116,7 @@ ChartAlignBench contains 9K+ instances, divided into three evaluation subsets:
 
 For each subset, the test*.parquet files contain the annotations and image pairs pre-loaded for processing with HF Datasets.
 
-```
+```python
 from datasets import load_dataset
 
 data_grounding_and_alignment_subset = load_dataset("umd-zhou-lab/ChartAlignBench", "data_grounding_and_alignment")
@@ -154,5 +158,4 @@ robustness_subset = load_dataset("umd-zhou-lab/ChartAlignBench", "robustness")
 | set_idx | Set index which groups the 5 chart pairs in a robustness set |
 | set_pair_idx | Pair index within a set (1-5) |
 |num_cell_difference | Number of data points which differ between the chart pair |
-|attribute_varied | Attribute which varies across the 5 chart pairs |
-
+|attribute_varied | Attribute which varies across the 5 chart pairs |
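For reference, the `load_dataset` calls shown in the card can be wrapped in a small helper. This is a minimal sketch; it assumes the three subset config names visible on the card (`attribute_grounding_and_alignment`, `data_grounding_and_alignment`, `robustness`) are the complete set:

```python
# Subset config names as listed on the dataset card (assumed complete).
CHART_ALIGN_CONFIGS = (
    "attribute_grounding_and_alignment",
    "data_grounding_and_alignment",
    "robustness",
)

def load_subset(config: str):
    """Load one ChartAlignBench subset; downloading requires network access."""
    if config not in CHART_ALIGN_CONFIGS:
        raise ValueError(
            f"unknown config {config!r}; expected one of {CHART_ALIGN_CONFIGS}"
        )
    # Deferred import so the name check works even without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("umd-zhou-lab/ChartAlignBench", config)
```

Validating the config name up front gives a clearer error than the remote lookup failure `datasets` would raise for a typo.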