📰 News Summarizer (DistilBART-Based)
A lightweight, fast news summarization model built on the distilled BART architecture.
It converts long news articles into clear, concise, human-like summaries, making it well suited for:
- News aggregation platforms
- Research workflows
- Content automation
- Browser extensions
- Educational tools
- AI agents & chatbots
🚀 Features
✅ High-quality abstractive summaries
Not extractive: the model generates a new summary in natural language.
✅ Fast & lightweight
Based on the 12-6 distilled BART variant, giving strong performance at a fraction of the size.
✅ Trained on real news sources
Understands journalistic writing, factual structure, headlines, and key-point extraction.
✅ Ideal for production & APIs
Minimal latency, optimized for cloud/server use.
📦 How to Use
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Sachin21112004/news-summarizer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = """
Your long news article text here...
"""

# Tokenize the article; inputs longer than 1024 tokens are truncated.
inputs = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)

# Beam search with a length penalty favors complete, well-formed summaries.
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=150,          # cap on summary length (in tokens)
    min_length=40,           # avoid overly short summaries
    no_repeat_ngram_size=3,  # block repeated 3-grams
    length_penalty=2.0,      # encourage longer outputs under beam search
    num_beams=4,
    early_stopping=True,
)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
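Because inputs are truncated at 1024 tokens, very long articles lose their later paragraphs before summarization. One common workaround is to split the article into overlapping chunks, summarize each chunk, and join the partial summaries. The helper below is a minimal sketch of that idea (it is not part of the model repo, and it splits on words as a rough proxy for tokens):

```python
# Hypothetical helper: split a long article into overlapping word chunks so
# each piece stays comfortably under the model's 1024-token input limit.
# Each chunk can then be summarized separately and the summaries concatenated.
def chunk_text(text, max_words=700, overlap=50):
    """Return a list of chunks of at most max_words words, overlapping by overlap words."""
    words = text.split()
    if len(words) <= max_words:
        return [text]
    chunks = []
    step = max_words - overlap  # advance less than a full chunk to keep context
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # the final chunk already reaches the end of the text
    return chunks
```

The word budget of 700 is a conservative assumption (subword tokenizers typically emit more than one token per word); tune it for your data, then run `model.generate` on each chunk exactly as in the example above.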
Model tree for Sachin21112004/distilbart-news-summarizer
Base model: sshleifer/distilbart-cnn-12-6