---
language:
- en
license: llama3.2
tags:
- philosophy
- marcus-aurelius
- stoicism
- mlx
- synthetic-data-kit
base_model: mlx-community/Llama-3.2-3B-Instruct-4bit
---

# Marcus Aurelius 3B - Synthetic Data Kit + MLX

A model fine-tuned to embody the wisdom of Marcus Aurelius, built with Synthetic Data Kit + MLX.

## Features

- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Method**: Synthetic Data Kit + LoRA fine-tuning with MLX (a sketch of the training step follows this list)
- **Hardware**: Apple M4 Pro (48 GB RAM)
- **Dataset**: Complete *Meditations* + synthetic data
- **Optimized for**: Apple Silicon
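
As a rough illustration of the method, the sketch below launches an MLX LoRA run through the `mlx_lm.lora` entry point. The data directory and hyperparameters are illustrative assumptions, not the exact settings used for this model, and flag names can vary between `mlx-lm` versions.

```python
import subprocess

# Illustrative LoRA fine-tuning launch via mlx_lm (not the exact command used here).
# Assumes mlx-lm is installed and ./data contains train.jsonl / valid.jsonl.
subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mlx-community/Llama-3.2-3B-Instruct-4bit",
        "--train",
        "--data", "./data",              # directory with train.jsonl / valid.jsonl
        "--iters", "600",                # illustrative value
        "--batch-size", "4",             # illustrative value
        "--adapter-path", "./adapters",  # where the LoRA weights are written
    ],
    check=True,
)
```

After training, the LoRA adapters are typically fused back into the base model (for example with `mlx_lm.fuse`) before uploading.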

## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("federicomoreno/marcus-aurelius-3b-sdk")
response = generate(model, tokenizer, prompt="What is virtue?", max_tokens=100)
print(response)
```
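
For chat-style prompts, you can apply the chat template first. This is a minimal sketch assuming the tokenizer ships the template inherited from the base Llama-3.2-Instruct model:

```python
from mlx_lm import load, generate

model, tokenizer = load("federicomoreno/marcus-aurelius-3b-sdk")

# Wrap the user message in the chat template before generating.
messages = [{"role": "user", "content": "How should I face adversity?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```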

## Training Data

1. **Meditations**: 264 passages extracted from Project Gutenberg
2. **Templates**: Curated philosophical questions
3. **Synthetic data**: Generated with Ollama/LLM (if available)
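
As an illustration of how these sources can be combined into fine-tuning examples, the sketch below pairs passages with template questions and writes them in the `{"messages": [...]}` JSONL chat format accepted by the mlx-lm LoRA trainer. The file names, sample texts, and pairing logic are assumptions for illustration, not a description of the exact pipeline used here.

```python
import json

# Hypothetical inputs: passages from Meditations and curated template questions.
passages = ["Waste no more time arguing about what a good man should be. Be one."]
questions = ["What is your philosophy?"]

# Pair each question with a passage to form chat-style training examples.
examples = [
    {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": passage},
        ]
    }
    for question, passage in zip(questions, passages)
]

# mlx-lm's LoRA trainer reads one JSON object per line (train.jsonl / valid.jsonl).
with open("data/train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```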

## Examples

**Prompt**: What is your philosophy?

**Marcus**: I follow the Stoic path - virtue is the only good, external things are indifferent...

Generated with [Synthetic Data Kit](https://github.com/gretelai/synthetic-data-kit) on an Apple M4 Pro.