๐Ÿง  Gemma3 Smart Q4 โ€” Bilingual Offline Assistant for Raspberry Pi

Gemma3 Smart Q4 is a quantized bilingual (Italianโ€“English) variant of Google's Gemma 3 1B model, specifically optimized for edge devices like the Raspberry Pi 4 & 5. It runs completely offline with Ollama or llama.cpp, ensuring privacy and speed without external dependencies.

Version: v0.1.0 Author: Antonio (chill123) Base Model: Google Gemma 3 1B IT


๐Ÿ’ป Optimized for Raspberry Pi

โœ… Tested on Raspberry Pi 4 (4GB) โ€” 3.32 t/s sustained (100% reliable over 60 minutes) โœ… Fully offline โ€” no external APIs, no internet required โœ… Lightweight โ€” under 800 MB in Q4 quantization โœ… Bilingual โ€” seamlessly switches between Italian and English โœ… Production-ready โ€” 24/7 deployment tested with zero failures


๐Ÿ” Key Features

  • ๐Ÿ—ฃ๏ธ Bilingual AI โ€” Automatically detects and responds in Italian or English
  • โšก Edge-optimized โ€” Fine-tuned parameters for low-power ARM devices
  • ๐Ÿ”’ Privacy-first โ€” All inference happens locally on your device
  • ๐Ÿงฉ Two quantizations available:
    • Q4_0 (โ‰ˆ687 MB) โ†’ Default choice, 3% faster
    • Q4_K_M (โ‰ˆ769 MB) โ†’ Better quality for long conversations

๐Ÿ“Š Benchmark Results (Updated Oct 21, 2025)

Complete 60-minute soak test on Raspberry Pi 4 (4GB RAM) with Ollama.

Production Metrics

Metric Value Status
Sustained throughput 3.32 t/s (256 tokens) โœ… Production-ready
Reliability 100% (455/455 requests) โœ… Perfect
Avg response time 7.92s โœ… Consistent
Thermal stability 70.2ยฐC avg (max 73.5ยฐC) โœ… No throttling
Memory usage 42% (1.6 GB) โœ… No leaks
Uptime tested 60+ minutes continuous โœ… 24/7 ready

Performance by Token Count

Tokens Run 1 Run 2 Run 3 Average*
128 0.24 3.44 3.45 3.45 t/s
256 3.43 3.09 3.43 3.32 t/s
512 2.76 2.77 2.11 2.55 t/s

*Average excludes cold-start (first run)

Model Comparison

Model Size Sustained Speed Recommended For
Q4_K_M โญ 769 MB 3.32 t/s Production (tested 60min, 100% reliable)
Q4_0 687 MB 3.45 t/s Development (faster but less stable)

๐Ÿ“Š View Complete Benchmark Report โ€” Full performance, reliability, and stability analysis

Benchmark Methodology

  • Duration: 60.1 minutes (3,603 seconds)
  • Total requests: 455 (2-second interval)
  • Platform: Raspberry Pi 4 (4GB RAM, ARM Cortex-A72 @ 1.5GHz)
  • Runtime: Ollama 0.3.x + llama.cpp backend
  • Monitoring: CPU temp, RAM usage, load average (sampled every 5s)
  • Tasks: Performance (128/256/512 tokens), Quality (HellaSwag, ARC, TruthfulQA), Robustness (soak test)

Recommendation: Use Q4_K_M for production deployments (proven 100% reliability over 60 minutes). Use Q4_0 for development/testing if you need slightly faster inference.


๐Ÿ› ๏ธ Quick Start with Ollama

IMPORTANT: To enable bilingual behavior, you must create a Modelfile with the bilingual SYSTEM prompt (shown in all options below).

Option 1: Use Published Ollama Model (Easiest)

# Pull the published model
ollama pull antconsales/antonio-gemma3-smart-q4

# Create Modelfile with bilingual configuration
cat > Modelfile <<'MODELFILE'
FROM antconsales/antonio-gemma3-smart-q4

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 1024
PARAMETER num_thread 4
PARAMETER num_batch 32
PARAMETER repeat_penalty 1.05
PARAMETER stop "<end_of_turn>"
PARAMETER stop "</s>"

SYSTEM """You are an offline AI assistant running on a Raspberry Pi. You MUST detect the user's language and respond in the SAME language:

- If the user writes in Italian, respond ONLY in Italian
- If the user writes in English, respond ONLY in English

Sei un assistente AI offline su Raspberry Pi. DEVI rilevare la lingua dell'utente e rispondere nella STESSA lingua:

- Se l'utente scrive in italiano, rispondi SOLO in italiano
- Se l'utente scrive in inglese, rispondi SOLO in inglese

Always match the user's language choice."""
MODELFILE

# Create configured model
ollama create gemma3-bilingual -f Modelfile

# Run it!
ollama run gemma3-bilingual "Ciao! Come stai?"

Option 2: Pull from Hugging Face

Create a Modelfile:

cat > Modelfile <<'MODELFILE'
FROM hf://chill123/antonio-gemma3-smart-q4/gemma3-1b-q4_0.gguf

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 1024
PARAMETER num_thread 4
PARAMETER num_batch 32
PARAMETER repeat_penalty 1.05
PARAMETER stop "<end_of_turn>"
PARAMETER stop "</s>"

SYSTEM """
You are an offline AI assistant running on a Raspberry Pi. Automatically detect the user's language (Italian or English) and respond in the same language. Be concise, practical, and helpful. If a task requires internet access or external services, clearly state this and suggest local alternatives when possible.

Sei un assistente AI offline che opera su Raspberry Pi. Rileva automaticamente la lingua dell'utente (italiano o inglese) e rispondi nella stessa lingua. Sii conciso, pratico e utile. Se un compito richiede accesso a internet o servizi esterni, indicalo chiaramente e suggerisci alternative locali quando possibile.
"""
MODELFILE

Then run:

ollama create antonio-gemma3-smart-q4 -f Modelfile
ollama run antonio-gemma3-smart-q4 "Ciao! Chi sei?"

Option 3: Download and Use Locally

# Download the model
wget https://huggingface.co/chill123/antonio-gemma3-smart-q4/resolve/main/gemma3-1b-q4_0.gguf

# Create Modelfile pointing to local file
cat > Modelfile <<'MODELFILE'
FROM ./gemma3-1b-q4_0.gguf

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 1024
PARAMETER num_thread 4
PARAMETER num_batch 32
PARAMETER repeat_penalty 1.05
PARAMETER stop "<end_of_turn>"
PARAMETER stop "</s>"

SYSTEM """
You are an offline AI assistant running on a Raspberry Pi. Automatically detect the user's language (Italian or English) and respond in the same language. Be concise, practical, and helpful.

Sei un assistente AI offline su Raspberry Pi. Rileva automaticamente la lingua dell'utente (italiano o inglese) e rispondi nella stessa lingua. Sii conciso, pratico e utile.
"""
MODELFILE

# Create and run
ollama create antonio-gemma3-smart-q4 -f Modelfile
ollama run antonio-gemma3-smart-q4 "Hello! Introduce yourself."

โš™๏ธ Recommended Settings (Raspberry Pi 4)

For optimal performance on Raspberry Pi 4/5, use these parameters:

Parameter Value Description
num_ctx 512 - 1024 Context length (512 for faster response, 1024 for longer conversations)
num_thread 4 Utilize all 4 cores on Raspberry Pi 4
num_batch 32 Optimized for throughput on Pi
temperature 0.7 - 0.8 Balanced creativity vs consistency
top_p 0.9 Nucleus sampling for diverse responses
repeat_penalty 1.05 Reduces repetitive outputs

For voice assistants or real-time chat, reduce num_ctx to 512 for faster responses.


๐Ÿ’ฌ Try These Prompts

Test the bilingual capabilities with these examples:

๐Ÿ‡ฎ๐Ÿ‡น Italian

ollama run antonio-gemma3-smart-q4 "Spiegami la differenza tra sensore IR e ultrasuoni in due frasi."
ollama run antonio-gemma3-smart-q4 "Come posso controllare un LED con GPIO su Raspberry Pi?"

๐Ÿ‡ฌ๐Ÿ‡ง English

ollama run antonio-gemma3-smart-q4 "Outline a 5-step plan to control a servo with GPIO on Raspberry Pi."
ollama run antonio-gemma3-smart-q4 "What are the best uses for a Raspberry Pi in home automation?"

๐ŸŒ Code-switching

ollama run antonio-gemma3-smart-q4 "Explain in English how to install Ollama, poi spiega in italiano come testare il modello."

๐Ÿ“ฆ Files Included

File SHA256 Checksum Size Description
gemma3-1b-q4_0.gguf d1d037446a2836db7666aa6ced3ce460b0f7f2ba61c816494a098bb816f2ad55 687 MB Q4_0 quantization (recommended)
gemma3-1b-q4_k_m.gguf c02d2e6f68fd34e9e66dff6a31d3f95fccb6db51f2be0b51f26136a85f7ec1f0 769 MB Q4_K_M quantization (better quality)

๐Ÿš€ Use Cases

  • Privacy-focused personal assistant โ€” All data stays on your device
  • Offline home automation โ€” Control IoT devices without cloud dependencies
  • Educational projects โ€” Learn AI/ML without expensive hardware
  • Voice assistants โ€” Fast enough for real-time speech interaction (3.67 t/s)
  • Embedded systems โ€” Industrial applications requiring offline inference
  • Bilingual chatbots โ€” Italian/English customer support, offline documentation

๐Ÿ”– License

This model is a derivative work of Google's Gemma 3 1B.

License: Gemma License Please review and comply with the Gemma License Terms before using this model.

Quantization, optimization, and bilingual configuration by Antonio (chill123).

For licensing questions regarding the base model, refer to Google's official Gemma documentation.


๐Ÿ”— Links


๐Ÿ› ๏ธ Technical Details

Base Model: Google Gemma 3 1B (instruction-tuned) Quantization: Q4_0 and Q4_K_M (llama.cpp) Context Length: 1024 tokens (configurable) Vocabulary Size: 262,144 tokens Architecture: Gemma3ForCausalLM Supported Platforms: Raspberry Pi 4/5, Mac M1/M2, Linux ARM64


๐Ÿ“ Version History

v0.1.0 (2025-10-21)

  • Initial release
  • Two quantizations: Q4_0 (687 MB) and Q4_K_M (769 MB)
  • Bilingual IT/EN support with automatic language detection
  • Optimized for Raspberry Pi 4 (3.56-3.67 tokens/s)
  • Tested on Raspberry Pi OS (Debian Bookworm) with Ollama

Built with โค๏ธ by Antonio ๐Ÿ‡ฎ๐Ÿ‡น Empowering privacy and edge computing, one model at a time.

Downloads last month
155
GGUF
Model size
1.0B params
Architecture
gemma3
Hardware compatibility
Log In to view the estimation

4-bit

Inference Providers NEW
This model isn't deployed by any Inference Provider. ๐Ÿ™‹ Ask for provider support