Spark-270M

Spark-270M is a highly compact, utility-focused language model with 270 million parameters. It is a fine-tune of Google's Gemma 3 270M, designed to punch significantly above its weight class by leveraging high-quality synthetic data distillation.

The model functions as a "dense information engine": it specializes in generating concise title summaries, search engine queries, and logical follow-up questions, while retaining the creative conversational flair inherited from its teacher model's lineage.

⚡ Model Details

  • Model Name: Spark-270M
  • Base Architecture: Google Gemma 3 270M
  • Parameters: 270M
  • Context Window: 32k tokens
  • Teacher Model: Lightning-1.7B (Custom model fine-tuned on Hermes 3)
  • Training Type: Synthetic "Textbook" Distillation (SFT)

📚 Training Methodology: "Textbooks Are All You Need"

Spark-270M was trained using a distinct data pipeline inspired by the Textbooks Are All You Need (Microsoft Phi) research paper.

Instead of training on raw web scrapes, Spark-270M was fine-tuned exclusively on a series of synthetic textbooks generated by a larger parent model, Lightning-1.7B.

The Teacher: Lightning-1.7B

The data generator, Lightning-1.7B, was itself fine-tuned on the Hermes 3 dataset. This lineage allows Spark-270M to inherit specific behavioral traits from Hermes 3 (namely creativity, steerability, and a refusal to be "boring") despite being distilled into a rigid textbook format.

The synthetic data focused on:

  1. High-density reasoning chains: Explaining complex topics in compressed formats.
  2. Utility Tasks: Converting conversational fluff into actionable queries.
  3. Socratic Dialogue: Modeling inquisitive follow-up questioning.
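
To make the pipeline concrete, here is a minimal sketch of how a teacher model can generate this kind of synthetic textbook data for SFT. The teacher repo id, topic list, prompt wording, and JSONL output format are all illustrative assumptions, not the actual training recipe:

```python
# Illustrative sketch of synthetic "textbook" data generation with a teacher model.
# The repo id, topics, prompts, and JSONL schema below are assumptions.
import json
from transformers import AutoTokenizer, AutoModelForCausalLM

teacher_id = "TitleOS/Lightning-1.7B"  # hypothetical repo id for the teacher
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(teacher_id, device_map="auto")

topics = ["hash tables", "Bayes' theorem", "TCP handshakes"]  # illustrative only

with open("synthetic_textbook.jsonl", "w") as f:
    for topic in topics:
        prompt = (
            f"Write a dense, textbook-style explanation of {topic}, "
            "followed by one Socratic follow-up question."
        )
        inputs = tokenizer(prompt, return_tensors="pt").to(teacher.device)
        outputs = teacher.generate(**inputs, max_new_tokens=512)
        completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
        # Each line becomes one SFT example: prompt -> teacher completion.
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```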

🛠️ Intended Use & Capabilities

Spark-270M is designed to be a lightweight Utility Model. It is ideal for edge devices, rapid prototyping, or functioning as a specific "node" in a larger agentic system (e.g., the summarizer node or the query-generator node).
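
As a rough illustration of the node pattern, the sketch below wraps the model (loaded as in the Example Usage section further down) behind a single-purpose function. The plain-function interface and the prompt format are assumptions, not an API the model ships with:

```python
# Minimal sketch of wrapping Spark-270M as a single-purpose "node" in a
# larger agentic pipeline. The interface and prompt format are assumptions.
def make_node(model, tokenizer, task_instruction):
    def node(text: str) -> str:
        prompt = f"{text}\nTask: {task_instruction}\nResponse:\n"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64)
        # Strip the prompt tokens so only the completion is returned.
        completion = outputs[0][inputs["input_ids"].shape[1]:]
        return tokenizer.decode(completion, skip_special_tokens=True)
    return node

# e.g. summarizer = make_node(model, tokenizer, "Generate a short title.")
# e.g. query_gen  = make_node(model, tokenizer, "Generate 3 search queries.")
```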

Primary Capabilities

  • Dense Title Summarization: Converting long conversation threads into information-dense, short titles or abstracts.
  • Search Query Generation: Formulating precise, keyword-rich search queries based on vague user input.
  • Proactive Questioning: Generating relevant follow-up questions to clarify user intent or deepen a topic.

💻 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TitleOS/Spark-270M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example: Generating search queries from a vague user problem
input_text = """
User: I need to fix my sink, it's leaking from the bottom pipe where the U-shape thing is.
Task: Generate 3 search engine queries for this problem.
Response:
"""

# Move the encoded inputs to wherever device_map placed the model
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
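
The same loaded model can serve the title-summarization capability. The snippet below reuses `model` and `tokenizer` from above; the exact prompt wording is an assumption, not a documented format:

```python
# Reuses `model` and `tokenizer` from the snippet above.
# The User/Task/Response prompt pattern mirrors the query example.
thread = """
User: My sourdough starter smells like acetone, is it dead?
Assistant: No, that usually means it is hungry. Feed it twice daily...
User: Great, the smell is gone after two feedings.
"""
prompt = f"{thread}\nTask: Generate a short, information-dense title for this thread.\nResponse:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```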

Quants & Adapter

  • Q4_K_M (GGUF): https://huggingface.co/TitleOS/Spark-270M-FP16-Q4_K_M-GGUF
  • Q8_0 (GGUF): https://huggingface.co/TitleOS/Spark-270M-FP16-Q8_0-GGUF
  • FP16: https://huggingface.co/TitleOS/Spark-270M-FP16
  • LoRA Adapter: https://huggingface.co/TitleOS/Spark-270M-LoRA
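
For the LoRA adapter, the sketch below applies it to the Gemma 3 270M base with peft. The base repo id `google/gemma-3-270m` is an assumption; check the adapter card for the exact base model:

```python
# Minimal sketch: apply the LoRA adapter to the Gemma 3 270M base with peft.
# The base repo id "google/gemma-3-270m" is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m", device_map="auto")
model = PeftModel.from_pretrained(base, "TitleOS/Spark-270M-LoRA")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m")
```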
