Commit bbe64c1 (verified) by nielsr (HF Staff) · 1 parent: 2323e42

Improve model card: Add pipeline tag, library name, and project page link


This PR improves the model card by adding:
- `pipeline_tag: image-text-to-text` to better categorize the model on the Hub.
- `library_name: transformers` to indicate compatibility with the Hugging Face Transformers library.
- A link to the official project page: https://omniverifier.github.io/.

Please review and merge if these additions are accurate.

Files changed (1):
  1. README.md (+5 −3)
README.md CHANGED:

````diff
@@ -1,9 +1,12 @@
 ---
-license: apache-2.0
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
+license: apache-2.0
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
-[Paper](https://arxiv.org/abs/2510.13804) | [Code](https://github.com/Cominclip/OmniVerifier)
+
+[Paper](https://arxiv.org/abs/2510.13804) | [Code](https://github.com/Cominclip/OmniVerifier) | [Project Page](https://omniverifier.github.io/)
 
 We introduce **Generative Universal Verifier**, a novel concept and plugin designed for next-generation multimodal reasoning in vision-language models and unified multimodal models, providing the fundamental capability of reflection and refinement on visual outcomes during the reasoning and generation process.
 
@@ -13,7 +16,6 @@ We introduce **Generative Universal Verifier**, a novel concept and plugin desig
 
 OmniVerifier advances both reliable reflection during generation and scalable test-time refinement, marking a step toward more trustworthy and controllable next-generation reasoning systems.
 
-
 ```
 @article{zhang2025generative,
 author = {Zhang, Xinchen and Zhang, Xiaoying and Wu, Youbin and Cao, Ruihang and Zhang, Renrui and Chu, Ruihang and Yang, Ling and Yang, Yujiu},
````
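The keys this PR adds (`license`, `pipeline_tag`, `library_name`) live in the README's YAML front matter, i.e. the block between the two `---` markers at the top of the file. As a minimal, stdlib-only sketch of how such front matter can be read back, assuming only the simple `key: value` lines used in this card (the `parse_front_matter` helper is hypothetical, not the Hub's actual parser):

```python
def parse_front_matter(readme_text):
    """Return a dict of top-level scalar keys from a ----delimited front-matter block."""
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front matter present
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the front-matter block
        # keep only top-level "key: value" lines; skip list items like "- Qwen/..."
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

# Front matter as it reads after this PR's change:
readme = """---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

[Paper](https://arxiv.org/abs/2510.13804)
"""

meta = parse_front_matter(readme)
print(meta["pipeline_tag"])   # image-text-to-text
print(meta["library_name"])   # transformers
```

A real model card can hold nested YAML (lists, mappings), which a parser this simple ignores; it is only meant to show where the three added keys sit in the file.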