<\s> token in the chat template instead of the </s> EOS token
#41 opened 10 months ago by fabric · 4
Fine-tuning results in ValueError: Number of image tokens in input_ids different from num_images.
#40 opened 11 months ago by deleted
All the outputs from the llava-hf/llava-v1.6-mistral-7b-hf model are showing as <unk> when prompted with an image. Kindly clarify whether there is an issue or I'm doing something incorrectly.
#39 opened 11 months ago by Abdrabu · 1
Potential ways to accelerate image-to-text tasks?
#38 opened 11 months ago by triscuiter · 9
Processor config change leads to errors
#37 opened 12 months ago by 7AtAri · 9
Inference without images
#36 opened 12 months ago by psologub · 4
Evaluation
#35 opened about 1 year ago by russwang · 1
Expanding inputs for image tokens in LLaVa-NeXT should be done in processing.
#34 opened about 1 year ago by miniTsl · 5
Index out of range errors
#32 opened about 1 year ago by manas03 · 1
Can the model be used for commercial purposes?
#28 opened over 1 year ago by ssh6lq · 1
When I run the template, I get the error "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained."
#26 opened over 1 year ago by wenyilll · 5
How to set the temperature
#25 opened over 1 year ago by aakash1307 · 1
Unable to test inference on SageMaker
#23 opened over 1 year ago by AleksandarC · 1
Deploy on SageMaker
#20 opened over 1 year ago by bk2000 · 1
Support for multiple images
#19 opened over 1 year ago by wamozart · 8
New LLaVA-NeXT (2024-05 Release)
#18 opened over 1 year ago by justinphan3110 · 2
Does this support PEFT API for fine-tuning?
#16 opened over 1 year ago by larry5 · 2
How to run on an M3 Max MacBook?
#13 opened over 1 year ago by Arthurvaz
Works quite well actually using the Tag-Gui, but...
#12 opened over 1 year ago by U-ID
Embedding Function
#11 opened over 1 year ago by dandre0102 · 1
Few Shot Example
#9 opened over 1 year ago by chaydaroglu · 2
Several issues loading and using the model with transformers==4.39.2
#7 opened over 1 year ago by csegalin · 2
Does LLaVA support multi-GPU inference?
#6 opened over 1 year ago by ZealLin · 3
Will you make a VIP version of the llava 1.6 models?
#4 opened over 1 year ago by barleyspectacular · 1
Wrong padding token
#2 opened over 1 year ago by aliencaocao · 2