LFM2.5-VL
Try LFM • Documentation • LEAP • Blog
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B
Example usage with llama.cpp:
```
# 4-bit quantization
llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0

# 16-bit
llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:F16
```
The repository provides 4-bit, 8-bit, and 16-bit variants.
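
For serving rather than interactive use, a minimal sketch below runs the model behind llama.cpp's llama-server, assuming it accepts the same `-hf` shorthand as `llama-cli` and using the 4-bit file from the commands above; llama-server exposes an OpenAI-compatible API on port 8080 by default.

```
# Serve the 4-bit model over an OpenAI-compatible HTTP API (default port 8080).
llama-server -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0

# Example text-only query once the server is up.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Describe what LFM2.5-VL can do."}]}'
```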