fix typo in README.md · #33 opened 5 months ago by jsilverman26
Update README.md · 1 comment · #31 opened 6 months ago by Disya
How to use Qwen2.5-VL for computer use? · 1 comment · #30 opened 6 months ago by luffycodes
I want to use this model with javascript for video understanding · 1 comment · #29 opened 6 months ago by the-research-100
updated one spelling mistake · #28 opened 6 months ago by kirpalsingh2252002
Update README.md · #27 opened 7 months ago by megladagon
Inference problems for all Qwen2.5 VL models in transformers above 4.49.0 · 2 comments · #26 opened 7 months ago by mirekphd
Ask about the M-RoPE · 1 comment · #25 opened 7 months ago by JavenChen
Upload IMG-20250318-WA0007.jpg · #23 opened 7 months ago by Aceyung1
'Qwen2_5_VLProcessor' object has no attribute 'eos_token' · 1 comment · #22 opened 7 months ago by itztheking
When deploying the 72B model locally, the model output is empty; how can this be resolved? · 1 comment · #21 opened 8 months ago by Cranegu
Qwen/Qwen2.5-VL-72B-Instruct · #20 opened 8 months ago by chnsmth
Could you please share the detailed parameter settings for the online demo? · #18 opened 8 months ago by harryzwh
vLLM inference with 32k-128k inputs · #17 opened 8 months ago by luckyZhangHu
official finetune example? · 5 comments · #16 opened 8 months ago by erichartford
Anyone please let me know what hardware can run 72B? · 2 comments · #15 opened 8 months ago by haoyiharrison
Fix model tree (remove loop) · #14 opened 8 months ago by hekmon
batch inference error · 👍 1 · 1 comment · #13 opened 8 months ago by 404dreamer
Error in preprocessing prompt inputs · #12 opened 8 months ago by darvec
cannot import name 'Qwen2_5_VLImageProcessor' (on vLLM) · 4 comments · #11 opened 9 months ago by cbrug
Update preprocessor_config.json · #10 opened 9 months ago by Isotr0py
Hardware Requirements · 👀 4 · #9 opened 9 months ago by shreyas0985
Vision tokens missing from chat template · #8 opened 9 months ago by depasquale
ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported · 2 comments · #7 opened 9 months ago by li-gz
docs(readme): fix typo in README.md · #6 opened 9 months ago by BjornMelin
Out of Memory on two H100 (80GB) each and load_in_8_bit = True · #4 opened 9 months ago by Maverick17
Model Memory Requirements · 2 comments · #3 opened 9 months ago by nvip1204
Video Inference - TypeError: process_vision_info() got an unexpected keyword argument 'return_video_kwargs' · 2 comments · #2 opened 9 months ago by hmanju
Qwen/Qwen2.5-VL-72B-Instruct-AWQ and Qwen/Qwen2.5-VL-40<B-Instruct-AWQ please · ➕ ❤️ 18 · 6 comments · #1 opened 9 months ago by devops724

