Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ base_model:
**Pleias-RAG-350m** is a 350 million parameter Small Reasoning Model, trained for retrieval-augmented generation (RAG), search and source summarization. Along with Pleias-RAG-1B it belongs to the first generation of Pleias specialized reasoning models.
- Pleias-RAG-350m outperforms most SLMs (4 billion parameters and below) on standardized benchmarks for retrieval-augmented generation (HotPotQA, 2wiki) and is
+ Pleias-RAG-350m outperforms most SLMs (4 billion parameters and below) on standardized benchmarks for retrieval-augmented generation (HotPotQA, 2wiki) and is a highly cost-effective alternative to popular larger models, including Qwen-2.5-7B, Llama-3.1-8B and Gemma-3-4B. It is the only SLM to date to maintain consistent RAG performance across leading European languages and to ensure systematic reference grounding for statements.
<p align="center">
<img width="80%" src="figures/pleias_benchmark.png">