LiquidAI/LFM2-8B-A1B just dropped!
8.3B params with only 1.5B active/token
> Quality ≈ 3-4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF