Gemma3-1B-LOMO-q4f16_1-MLC

MLC-LLM formatted weights for on-device inference.

  • Conv template: gemma3_instruction (runtime prompt formatting must match the format used during training)
  • Files
    • mlc-chat-config.json
    • params_shard_*.bin
    • ndarray-cache.json (index of the weight shards)
    • tokenizer files (tokenizer.json + tokenizer.model or vocab.json + merges.txt)
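The gemma3_instruction template wraps each message in Gemma-style turn markers before the model sees it. A minimal sketch of that formatting (illustrative only; MLC-LLM applies the actual template defined in mlc-chat-config.json at runtime):

```python
def format_gemma_prompt(messages):
    """Render chat messages in Gemma-style turn markup.

    Illustrative sketch of what a gemma3-style conversation template
    produces; the exact template shipped in mlc-chat-config.json is
    authoritative.
    """
    parts = ["<bos>"]
    for m in messages:
        # Gemma uses "user" and "model" roles in its turn markers.
        role = "model" if m["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    # Open the model turn so generation continues from here.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)


prompt = format_gemma_prompt([{"role": "user", "content": "Hello"}])
```

This is why the conv template must match training: a model fine-tuned on these turn markers will degrade if prompted with a different wrapping.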

Quick test (CLI)

```
mlc_llm chat HF://raining-codes/Gemma3-1B-LOMO-q4f16_1-MLC \
  --temperature 0.7 --top-p 0.9 --repeat-penalty 1.08 --max-gen-len 512
```