---
library_name: transformers
base_model:
- Qwen/Qwen3-VL-32B-Thinking
pipeline_tag: text-generation
---


<center><img style="height:100px" src="https://cdn-uploads.huggingface.co/production/uploads/66d78facde54fea8a009927e/n2pVs6-SPT0XsAyZPLKOY.png"></center>

# Qwen3-VLTO-32B-Thinking

Qwen3-VL-32B-Thinking with the vision components removed (**V**ision **L**anguage **T**ext **O**nly). It functions exactly like a text-only Qwen3 model.

To create it, I loaded the weights from the VL model into the text model via PyTorch's `load_state_dict`. The two text architectures are essentially identical, so the language-model weights transfer directly.
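A minimal sketch of the idea, using tiny stand-in modules instead of the real 32B checkpoints (the actual conversion loads the `transformers` models; `TinyVLModel`/`TinyTextModel` here are hypothetical): filter the VL state dict down to the keys the text-only model knows about, then load it.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: the real models are Qwen3-VL-32B-Thinking
# (vision tower + language model) and a text-only Qwen3 model.
class TinyVLModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual = nn.Linear(4, 4)    # vision tower -> dropped
        self.language = nn.Linear(4, 4)  # shared text weights

class TinyTextModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.language = nn.Linear(4, 4)  # same text architecture

vl = TinyVLModel()
text = TinyTextModel()

# Keep only the keys present in the text-only model, then load them.
vl_state = vl.state_dict()
text_keys = set(text.state_dict().keys())
filtered = {k: v for k, v in vl_state.items() if k in text_keys}
text.load_state_dict(filtered, strict=True)

# The text model now carries the VL model's language weights.
assert torch.equal(text.language.weight, vl.language.weight)
```

Alternatively, `load_state_dict(vl_state, strict=False)` skips the unmatched vision keys without pre-filtering, at the cost of silently ignoring any genuine key mismatches.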

![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_2b_32b_text_thinking.jpg)