Spark-270M
Spark-270M is a highly compact, utility-focused language model with 270 million parameters. It is a fine-tune of Google's Gemma 3 270M, designed to punch significantly above its weight class by leveraging high-quality synthetic data distillation.
The model functions as a "dense information engine": it specializes in generating concise title summaries, search-engine queries, and logical follow-up questions, while retaining the creative conversational flair inherited from its teacher model's lineage.
⚡ Model Details
- Model Name: Spark-270M
- Base Architecture: Google Gemma 3 270M
- Parameters: 270M
- Context Window: 32k tokens
- Teacher Model: Lightning-1.7B (a custom model fine-tuned on the Hermes 3 dataset)
- Training Type: Synthetic "Textbook" Distillation (SFT)
📚 Training Methodology: "Textbooks Are All You Need"
Spark-270M was trained using a distinct data pipeline inspired by the Textbooks Are All You Need (Microsoft Phi) research paper.
Instead of training on raw web scrapes, Spark-270M was fine-tuned exclusively on a series of synthetic textbooks generated by a larger parent model, Lightning-1.7B.
The Teacher: Lightning-1.7B
The data generator, Lightning-1.7B, was itself fine-tuned on the Hermes 3 dataset. This lineage allows Spark-270M to inherit specific behavioral traits from Hermes 3, namely creativity, steerability, and a refusal to be "boring", despite being distilled into a rigid textbook format.
The synthetic data focused on three areas (a sketch of the generation loop follows this list):
- High-density reasoning chains: Explaining complex topics in compressed formats.
- Utility Tasks: Converting conversational fluff into actionable queries.
- Socratic Dialogue: Modeling inquisitive follow-up questioning.
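The exact generation pipeline behind these textbooks is not published here, but the pattern is straightforward to sketch. The snippet below is an illustration only: the `TitleOS/Lightning-1.7B` repo id, the topic list, and the prompt template are all assumptions, not the actual pipeline.

```python
import json
from transformers import pipeline

# Hypothetical teacher; the repo id is an assumption for illustration.
teacher = pipeline("text-generation", model="TitleOS/Lightning-1.7B", device_map="auto")

TOPICS = ["household plumbing basics", "binary search", "formulating search queries"]
PROMPT = (
    "Write a short, information-dense textbook section about {topic}. "
    "End with three Socratic follow-up questions."
)

# Each teacher completion becomes one SFT sample for the 270M student.
with open("synthetic_textbook.jsonl", "w") as f:
    for topic in TOPICS:
        out = teacher(PROMPT.format(topic=topic), max_new_tokens=512, do_sample=True, temperature=0.8)
        f.write(json.dumps({"topic": topic, "text": out[0]["generated_text"]}) + "\n")
```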
🛠️ Intended Use & Capabilities
Spark-270M is designed to be a lightweight Utility Model. It is ideal for edge devices, rapid prototyping, or functioning as a specific "node" in a larger agentic system (e.g., the summarizer node or the query-generator node).
Primary Capabilities
- Dense Title Summarization: Converting long conversation threads into short, information-dense titles or abstracts.
- Search Query Generation: Formulating precise, keyword-rich search queries based on vague user input.
- Proactive Questioning: Generating relevant follow-up questions to clarify user intent or deepen a topic.
💻 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TitleOS/Spark-270M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example: generating search queries from a vague user problem
input_text = """
User: I need to fix my sink, it's leaking from the bottom pipe where the U-shape thing is.
Task: Generate 3 search engine queries for this problem.
Response:
"""

# Send the inputs to whatever device the model was placed on by device_map
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
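The same loaded model can drive the other utility tasks. A sketch for dense title summarization follows; the Task/Response prompt shape simply mirrors the example above and is an assumption, since the card does not document an exact template.

```python
# Reuses `tokenizer` and `model` from the example above.
thread = (
    "User: My sink leaks at the P-trap.\n"
    "Assistant: Tighten the slip nuts and check the washer for wear.\n"
    "User: That fixed it, thanks!"
)
input_text = f"{thread}\nTask: Summarize this conversation as a short, information-dense title.\nResponse:\n"

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=24)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```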
Quants:
- Q4_K_M: https://huggingface.co/TitleOS/Spark-270M-FP16-Q4_K_M-GGUF
- Q8_0: https://huggingface.co/TitleOS/Spark-270M-FP16-Q8_0-GGUF
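To run the GGUF quants locally, llama-cpp-python is one option. A minimal sketch, assuming the .gguf file has already been downloaded from the Q4_K_M repo above; the exact file name is an assumption inferred from the repo name, so check the repo's file list first.

```python
from llama_cpp import Llama

# Load the quantized model; the file name is assumed from the repo name above.
llm = Llama(
    model_path="Spark-270M-FP16-Q4_K_M.gguf",
    n_ctx=4096,  # comfortably below the model's 32k window
)

out = llm(
    "User: My laptop fan is loud and the case gets hot.\n"
    "Task: Generate 3 search engine queries for this problem.\n"
    "Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```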