Liquid AI

Try LFM β€’ Documentation β€’ LEAP β€’ Blog

LFM2.5-VL-1.6B

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B

πŸƒ How to run LFM2.5-VL-1.6B

Example usage with llama.cpp:

# 4-bit quantized weights (smaller download, lower memory use)
llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0

# full-precision F16 weights
llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:F16
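
The GGUF weights can also be served over llama.cpp's OpenAI-compatible HTTP API via llama-server; recent builds fetch the multimodal projector automatically when downloading with -hf. A minimal sketch, assuming a local setup: the port, prompt, and base64 image placeholder below are illustrative, not part of the original card.

# Start an OpenAI-compatible server with the 4-bit weights
llama-server -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0 --port 8080

# Query it with a text + image prompt (replace <BASE64_PNG> with a real
# base64-encoded image; the data-URI form is what llama-server accepts)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<BASE64_PNG>"}}
      ]
    }]
  }'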
Format: GGUF
Model size: 1B params
Architecture: lfm2