---
language: en
license: mit
tags:
- spiritual-ai
- brahma-kumaris
- murli
- distilgpt2
- ultra-lite
- peft
- lora
library_name: peft
base_model: distilgpt2
---
# Murli Assistant - DistilGPT-2 Ultra-Lite
An **ultra-lightweight** spiritual AI assistant trained on Brahma Kumaris murli content. Perfect for free Colab and low-resource environments!
## Why This Model?
- **82M parameters** (30x smaller than Phi-2)
- **RAM: ~1-2 GB** (fits easily in free Colab)
- **Fast inference**: 0.5-1 second per response
- **No quantization needed**: Runs in full precision
- **Perfect for free tier**: No crashes, no OOM errors
## Model Details
- **Base Model**: DistilGPT-2 (82M parameters)
- **Fine-tuning**: LoRA (Low-Rank Adaptation)
- **Training Data**: 150 authentic murlis
- **Training Examples**: 153+
- **Max Length**: 256 tokens
- **LoRA Rank**: 4 (see the config sketch below)
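For reference, a `peft` `LoraConfig` consistent with these settings might look like the sketch below. Only the rank comes from this card; the alpha, dropout, and target modules are assumptions, not the exact training recipe:

```python
from peft import LoraConfig

# Rank 4 matches the card above; everything else is an assumed default.
lora_config = LoraConfig(
    r=4,                        # LoRA rank (from the card)
    lora_alpha=16,              # assumed scaling factor
    lora_dropout=0.05,          # assumed dropout
    target_modules=["c_attn"],  # GPT-2-family fused attention projection
    task_type="CAUSAL_LM",
)
```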
## Usage
### Quick Start (Colab)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
base_model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "eswarankrishnamurthy/murli-assistant-distilgpt2-lite")
model.eval()

# Simple Q&A-style chat helper
def chat(message):
    prompt = f"Q: {message}\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Try it
response = chat("Om Shanti")
print(response)
```
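Greedy decoding (the `generate` default used above) can loop or repeat on an 82M-parameter model. If that happens, a sampled call is a common drop-in replacement for the `generate` line inside `chat`; the values here are illustrative, not tuned settings from this repo:

```python
# Drop-in replacement for the generate() call inside chat() above.
# All values are assumptions -- tune to taste.
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,          # sample instead of greedy argmax
    temperature=0.7,         # soften the next-token distribution
    top_p=0.9,               # nucleus sampling
    repetition_penalty=1.2,  # discourage loops
    pad_token_id=tokenizer.eos_token_id,
)
```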
### Use in Production
See the full Colab notebook: `murli-distilgpt2-colab.ipynb`
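If you would rather serve without the `peft` dependency, one option (a sketch, not part of this repo's notebook) is to merge the adapter into the base weights and ship a plain `transformers` checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Merge the LoRA deltas into the base weights for peft-free serving
base = AutoModelForCausalLM.from_pretrained("distilgpt2")
model = PeftModel.from_pretrained(base, "eswarankrishnamurthy/murli-assistant-distilgpt2-lite")
merged = model.merge_and_unload()       # folds adapter weights into the base model
merged.save_pretrained("murli-merged")  # "murli-merged" is a placeholder path
AutoTokenizer.from_pretrained("distilgpt2").save_pretrained("murli-merged")
```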
## Comparison with Other Models
| Model | Parameters | RAM | Inference | Colab Free |
|-------|------------|-----|-----------|------------|
| **DistilGPT-2 (This)** | 82M | ~1-2 GB | 0.5-1s | ✅ Perfect |
| Phi-2 | 2.7B | ~10 GB | 1-3s | ❌ Crashes |
| Phi-2 (4-bit) | 2.7B | ~3-4 GB | 1-3s | ⚠️ Tight fit |
## Advantages
- ✅ **Ultra-Lightweight**: 30x smaller than Phi-2
- ✅ **Low RAM**: Only 1-2 GB needed
- ✅ **Fast Training**: 5-10 minutes
- ✅ **Fast Inference**: Sub-second responses
- ✅ **Free Colab**: Perfect fit, no crashes
- ✅ **Easy Deployment**: Simple integration
- ✅ **Good Quality**: Excellent for basic Q&A
## Training Details
- 30x smaller than Phi-2
- Fits in free Colab RAM easily
- Fast training (5-10 min)
- Fast inference
- Good for basic Q&A
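The card does not include a training script. For orientation only, here is a minimal sketch of how a LoRA fine-tune with these settings could be launched; the dataset file, prompt format, epochs, batch size, and learning rate are all assumptions, with only rank 4 and max length 256 taken from this card:

```python
# Hypothetical training sketch -- not the author's exact recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("distilgpt2"),
    LoraConfig(r=4, lora_alpha=16, target_modules=["c_attn"],
               lora_dropout=0.05, task_type="CAUSAL_LM"),
)

# "murlis.jsonl" is a placeholder: one {"text": "Q: ...\nA: ..."} per line
dataset = load_dataset("json", data_files="murlis.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="murli-lora", num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("murli-lora")  # writes only the small adapter files
```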
## Example Responses
**Q:** Om Shanti
**A:** Om Shanti, sweet child! 🙏 I'm your Murli Helper. How can I guide you today?
**Q:** What is soul consciousness?
**A:** Soul consciousness is experiencing yourself as an eternal, pure soul with peace, love, and purity. Om Shanti 🙏
**Q:** Who is Baba?
**A:** Baba is the Supreme Soul, the Ocean of Knowledge who teaches Raja Yoga through Brahma. Om Shanti 🙏
## Limitations
- Shorter context (256 tokens vs Phi-2's 512)
- Simpler responses compared to larger models
- Best for focused Q&A, not long essays
- Limited reasoning compared to billion-parameter models
## License
MIT License - Free to use and modify
## Citation
```bibtex
@misc{murli-distilgpt2-lite,
  author = {eswarankrishnamurthy},
  title = {Murli Assistant - DistilGPT-2 Ultra-Lite},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/eswarankrishnamurthy/murli-assistant-distilgpt2-lite}
}
```
## Acknowledgments
- Brahma Kumaris World Spiritual University for murli teachings
- HuggingFace for model hosting
- DistilGPT-2 team for the base model
---
**Om Shanti! 🙏**