yunzhong-scale committed on
Commit 501c328 · verified · 1 Parent(s): c1837e4

Update README.md

Files changed (1)
  1. README.md  +5 -3
README.md CHANGED
@@ -12,9 +12,11 @@ dataset_info:
   - name: prompt
     dtype: string
   - name: images_by_turn
-    dtype: string
+    sequence:
+      sequence:
+        dtype: image
   - name: rubrics
-    dtype: string
+    sequence: string
   splits:
   - name: train
     num_bytes: 3506927656
@@ -28,7 +30,7 @@ dataset_info:
     path: data/*.parquet
 ---
 
-VisuAlToolBench is a challenging benchmark to assess tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation.
+VisuAlToolBench is a challenging benchmark to assess tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.
 
 Paper: [BEYOND SEEING: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning](https://static.scale.com/uploads/654197dc94d34f66c0f5184e/vtb_paper.pdf)
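As a usage note, below is a minimal sketch of loading the dataset with the 🤗 `datasets` library under the schema introduced in this commit: `prompt` is a string, `images_by_turn` is a list of per-turn image lists, and `rubrics` is a list of strings. The repository id in the sketch is a hypothetical placeholder, since the commit page does not state it.

```python
# Minimal loading sketch, assuming the updated dataset_info schema above.
# NOTE: "yunzhong-scale/VisuAlToolBench" is a hypothetical repo id; replace it
# with the dataset's actual Hub id.
from datasets import load_dataset

ds = load_dataset("yunzhong-scale/VisuAlToolBench", split="train")

example = ds[0]
print(example["prompt"])    # task prompt (string)
print(example["rubrics"])   # grading rubrics (list of strings)

# images_by_turn holds one list of decoded images per conversation turn
for turn_idx, turn_images in enumerate(example["images_by_turn"]):
    print(f"turn {turn_idx}: {len(turn_images)} image(s)")
```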