vllm implementation
#9 opened 5 months ago by TahirC
AWQ quantization script for Qwen-VL-2.5 7B model
#8 opened 6 months ago by Berkesule
Necessary transformers version
#7 opened 6 months ago by chefexperte
lm head in the trained model is not in AWQ format
#6 opened 7 months ago by pooya-mohammadi
Is AWQ quantization applied only to the language model of this model?
#4 opened 7 months ago by sinchir0
License clarification
#3 opened 7 months ago by ValerianGuillot
Empty output when using Qwen2.5-VL-7B-Instruct-AWQ example code from README
#2 opened 7 months ago by WpythonW
Add link to paper and code
#1 opened 9 months ago by nielsr