Update README.md

tags:
  - math
---
# **Mintaka-Qwen3-1.6B-V3.1**
> Mintaka-Qwen3-1.6B-V3.1 is a high-efficiency, science-focused reasoning model **based on Qwen3-1.6B** and trained on **10,000 synthetic reasoning traces from DeepSeek v3.1**. It is optimized for random-event simulation, logical problem analysis, and structured scientific reasoning. The model balances symbolic precision with lightweight deployment, making it suitable for researchers, educators, and developers who need efficient reasoning under constrained compute.
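The card's usage section (elided from this diff, but its `print(response)` line is visible in the hunk header) presumably runs the model through Hugging Face Transformers. The following is an illustrative sketch only, not the card's exact code: the `model_id` value is a placeholder for the real Hub path, and `build_messages` is a hypothetical helper introduced here for clarity.

```python
# Sketch of single-turn inference with Hugging Face Transformers.
# NOTE: "Mintaka-Qwen3-1.6B-V3.1" is a placeholder repo id -- substitute
# the model's actual Hub path.

def build_messages(question: str) -> list[dict]:
    """Wrap a question in the single-turn chat format expected by
    tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": question}]

def generate(question: str, model_id: str = "Mintaka-Qwen3-1.6B-V3.1",
             max_new_tokens: int = 512) -> str:
    # Imported lazily so build_messages() works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

# Example usage (requires downloading the model weights):
# response = generate("Estimate the expected number of heads in 10 fair coin flips.")
# print(response)
```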
* Not tuned for long-form creative writing or conversational small talk
* Context window limitations may hinder multi-document or full codebase analysis
* Optimized specifically for simulation and logical-analysis tasks; general chat may underperform
* Prioritizes structured logic and reproducibility over emotional tone