Text-to-Cypher Gemma 3 4B Instruct (MLX FP16)
This repository provides an MLX-converted version of the Neo4j fine-tuned Gemma 3 4B Instruct model for text-to-Cypher generation. It runs natively on Apple Silicon via the MLX framework.
- Original model: https://huggingface.co/neo4j/text-to-cypher-Gemma-3-4B-Instruct-2025.04.0
- Converted by: Robert Fusco
- Converted on: 8 November 2025
- Format: FP16
- Framework: MLX
- Hardware target: Apple Silicon
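For reference, a conversion like this is typically produced with the mlx-lm converter. The snippet below is a sketch assuming the standard `mlx_lm.convert` API (`hf_path`, `mlx_path`, `dtype`); the output path is illustrative and may not match the exact command used for this repository.

```python
# Sketch of an FP16 MLX conversion with mlx-lm (pip install mlx-lm).
# Paths and options are illustrative, not necessarily the exact command used here.
from mlx_lm import convert

convert(
    hf_path="neo4j/text-to-cypher-Gemma-3-4B-Instruct-2025.04.0",
    mlx_path="text-to-cypher-Gemma-3-4B-Instruct-MLX-FP16",
    dtype="float16",  # keep full-precision FP16 weights, no quantization
)
```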
Notes
- Designed for local Cypher query generation, graph database automation, and retrieval-augmented generation (RAG) applications; see the usage sketch below.
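A minimal local-inference sketch using mlx-lm (`pip install mlx-lm`) is shown below. The graph schema, question, and prompt wording are illustrative assumptions; check the upstream Neo4j model card for the exact prompt format the fine-tune expects.

```python
# Minimal sketch: load the MLX FP16 model and generate a Cypher statement.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/text-to-cypher-Gemma-3-4B-Instruct-MLX-FP16")

# Hypothetical schema and question for illustration only.
schema = "(:Person {name: STRING})-[:ACTED_IN]->(:Movie {title: STRING, released: INTEGER})"
question = "Which people acted in movies released after 2000?"

messages = [
    {
        "role": "user",
        "content": (
            "Generate a Cypher statement for the following question.\n"
            f"Schema:\n{schema}\n\nQuestion: {question}"
        ),
    }
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

cypher = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(cypher)
```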
License
This conversion inherits the license of the upstream Gemma 3 model and the Neo4j fine-tuned derivative. Refer to the original Hugging Face model card for usage and redistribution conditions.
Model tree
- This conversion: mlx-community/text-to-cypher-Gemma-3-4B-Instruct-MLX-FP16
- Base model: google/gemma-3-4b-pt
- Fine-tuned from: google/gemma-3-4b-it