# Gemma3-1B-LOMO-q4f16_1-MLC
MLC-LLM formatted weights for on-device inference.
- **Conv template:** `gemma3_instruction` (runtime prompt formatting should match training)
- **Files:**
  - `mlc-chat-config.json`
  - `params_shard_*.bin`
  - `tensor-cache.json`
  - tokenizer files (`tokenizer.json` + `tokenizer.model`, or `vocab.json` + `merges.txt`)
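For orientation, `mlc-chat-config.json` ties the quantization and conversation template together with default sampling settings. The fragment below is an illustrative sketch only; the field values shown are assumptions, and the `mlc-chat-config.json` shipped in this repo is authoritative.

```json
{
  "model_type": "gemma3",
  "quantization": "q4f16_1",
  "conv_template": "gemma3_instruction",
  "temperature": 0.7,
  "top_p": 0.9,
  "repetition_penalty": 1.08
}
```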
## Quick test (CLI)

```shell
mlc_llm chat HF://raining-codes/Gemma3-1B-LOMO-q4f16_1-MLC --temperature 0.7 --top-p 0.9 --repeat-penalty 1.08 --max-gen-len 512
```
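If you bypass the chat CLI and feed raw strings to a lower-level generation API, the prompt must carry the same turn markers the `gemma3_instruction` template applies during training and chat. A minimal sketch of that wrapping, with a hypothetical helper name (MLC applies this template for you in normal use):

```python
# Hypothetical helper: wrap a user message in Gemma-style turn markers.
# In normal chat usage the gemma3_instruction conv template does this
# automatically; this sketch only shows the shape of the formatted prompt.
def format_gemma_prompt(user_message: str) -> str:
    """Return a single-turn prompt ending at the model's turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(format_gemma_prompt("What is MLC-LLM?"))
```

Mismatched formatting here (missing turn markers, stray whitespace) is a common cause of degraded output quality on instruction-tuned checkpoints.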