---
license: apache-2.0
base_model:
- swiss-ai/Apertus-70B-2509
pipeline_tag: text-generation
library_name: transformers
tags:
  - multilingual
  - compliant
  - swiss-ai
  - apertus
extra_gated_prompt: "### Apertus LLM Acceptable Use Policy  \n(1.0 | September 1, 2025)\n\"Agreement\" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. \n\nBy using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of Apertus LLM. \n\nThe training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual (Personal Data). You process Personal Data as independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model.  "
extra_gated_fields:
  Your Name: text
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of use: checkbox
extra_gated_button_content: Submit
---
# Apertus

##  Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [Legal Aspects](#legal-aspects)
## Model Summary
Apertus is a family of 70B- and 8B-parameter language models designed to push the boundaries of fully open, multilingual, and transparent models.
The models support over 1000 languages and long context, use only fully compliant and open training data, and achieve performance comparable to models trained behind closed doors.

The model is a decoder-only transformer, pretrained on 15T tokens with a staged curriculum of web, code and math data. The model uses a new xIELU activation function and is trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.
### Key features
- **Fully open model**: open weights + open data + full training details including all data and training recipes
- **Massively Multilingual**: 1811 natively supported languages
- **Compliant**: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively), and avoiding memorization of training data

For more details, refer to our [technical report](https://arxiv.org/abs/2509.14233).
## How to use
The modeling code for Apertus is available in transformers `v4.56.0` and later, so make sure to upgrade your transformers version. You can also load the model with the latest `vLLM`, which uses transformers as a backend.
```bash
pip install -U transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "swiss-ai/Apertus-70B-Instruct-2509"
device = "cuda"  # for GPU usage or "cpu" for CPU usage
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
).to(device)
# prepare the model input
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
# Get and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
>[!TIP]
> We recommend setting `temperature=0.8` and `top_p=0.9` in the sampling parameters.
### Long context processing
Apertus by default supports a context length up to 65,536 tokens.
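A quick way to check the configured window is to read the model config (a sketch assuming the standard `max_position_embeddings` field is populated in the config):
```python
from transformers import AutoConfig

# Inspect the configured context window without downloading the weights
# (assumes the standard `max_position_embeddings` field is present).
config = AutoConfig.from_pretrained("swiss-ai/Apertus-70B-Instruct-2509")
print(config.max_position_embeddings)  # expected to report 65536
```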
### Agentic Usage
Apertus supports tool use.
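The exact tool-calling format is defined by the model's chat template; as a minimal sketch, `transformers` lets you pass tools as Python functions (with type hints and a docstring) to `apply_chat_template`, and the template renders their schemas into the prompt. The `get_weather` function below is a hypothetical placeholder.
```python
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return f"Sunny in {city}"  # placeholder implementation

tokenizer = AutoTokenizer.from_pretrained("swiss-ai/Apertus-70B-Instruct-2509")
messages = [{"role": "user", "content": "What is the weather in Bern?"}]

# Render a prompt that advertises the tool to the model; how tool definitions
# and tool calls are formatted is determined by the chat template.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```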
### Deployment
Deployment of the models is directly supported by the newest versions of [Transformers](https://github.com/huggingface/transformers), [vLLM](https://github.com/vllm-project/vllm), and [SGLang](https://github.com/sgl-project/sglang), as well as on-device inference with [MLX](https://github.com/ml-explore/mlx-lm).
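For example, with vLLM's offline Python API (a minimal sketch; `tensor_parallel_size` is an assumption and should match the GPUs available for the 70B weights, and the sampling values follow the recommendation above):
```python
from vllm import LLM, SamplingParams

# Offline batch inference with vLLM; adjust tensor_parallel_size to your setup.
llm = LLM(model="swiss-ai/Apertus-70B-Instruct-2509", tensor_parallel_size=4)
sampling = SamplingParams(temperature=0.8, top_p=0.9, max_tokens=512)

conversation = [
    {"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}
]
outputs = llm.chat(conversation, sampling_params=sampling)
print(outputs[0].outputs[0].text)
```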
## Evaluation
**Pretraining Evaluation:** Performance (%) of Apertus models on *general language understanding* tasks (higher is better) compared to other pretrained models.
| **Model** | **Avg** | **ARC** | **HellaSwag** | **WinoGrande** | **XNLI** | **XCOPA** | **PIQA** |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Fully Open Models** | | | | | | | |
| **Apertus-8B** | 65.8 | 72.7 | 59.8 | 70.6 | 45.2 | 66.5 | 79.8 |
| **Apertus-70B** | 67.5 | 70.6 | 64.0 | 73.3 | 45.3 | 69.8 | 81.9 |
| OLMo2-7B | 64.0 | 72.9 | 60.4 | 74.5 | 40.4 | 55.2 | 80.9 |
| OLMo2-32B | 67.7 | 76.2 | 66.7 | 78.6 | 42.9 | 60.1 | 82.1 |
| EuroLLM-1.7B | 54.8 | 57.2 | 44.9 | 58.1 | 40.7 | 55.7 | 72.4 |
| EuroLLM-9B | 62.8 | 67.9 | 57.9 | 68.8 | 41.5 | 61.1 | 79.6 |
| SmolLM2-1.7B | 58.5 | 66.1 | 52.4 | 65.6 | 37.6 | 52.3 | 77.0 |
| SmolLM3-3B | 61.6 | 68.6 | 56.4 | 68.1 | 40.5 | 58.2 | 77.7 |
| Poro-34B | 61.7 | 65.7 | 57.9 | 70.6 | 41.6 | 56.0 | 78.5 |
| **Open-Weight Models** | | | | | | | |
| Llama3.1-8B | 65.4 | 71.6 | 60.0 | 73.4 | 45.3 | 61.8 | 80.1 |
| Llama3.1-70B | 67.3 | 74.4 | 56.5 | 79.4 | 44.3 | 66.7 | 82.3 |
| Qwen2.5-7B | 64.4 | 69.6 | 60.1 | 72.8 | 43.3 | 61.7 | 78.7 |
| Qwen2.5-72B | 69.8 | 76.2 | 67.5 | 78.0 | 46.9 | 68.2 | 82.0 |
| Qwen3-32B | 67.8 | 75.6 | 64.0 | 73.8 | 44.4 | 67.9 | 80.9 |
| Llama4-Scout-16x17B | 67.9 | 74.7 | 66.8 | 73.2 | 43.5 | 67.7 | 81.2 |
| GPT-OSS-20B | 58.1 | 67.0 | 41.5 | 66.5 | 37.4 | 60.4 | 75.6 |
Many additional benchmark evaluations for the pretraining and post-training phases, multilingual evaluations in around one hundred languages, and long-context evaluations are provided in Section 5 of the [Apertus_Tech_Report.pdf](https://github.com/swiss-ai/apertus-tech-report/blob/main/Apertus_Tech_Report.pdf).
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 15T
- **Precision:** bfloat16
### Software & hardware
- **GPUs:** 4096 GH200
- **Training Framework:** [Megatron-LM](https://github.com/swiss-ai/Megatron-LM)
- ...
### Open resources
All elements used in the training process are made openly available:
- **Training data reconstruction scripts:** [github.com/swiss-ai/pretrain-data](https://github.com/swiss-ai/pretrain-data)
- **Intermediate checkpoints:** available on the branches of this repository (loading a specific revision is sketched below)
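A specific intermediate checkpoint can be loaded by passing its branch name as the `revision` argument to `from_pretrained` (a sketch; the revision string below is a placeholder, check the repository's branch list for actual checkpoint names):
```python
from transformers import AutoModelForCausalLM

# Load an intermediate pretraining checkpoint from a branch of this repository.
# The revision name is a placeholder; see the repository's branches for real names.
model = AutoModelForCausalLM.from_pretrained(
    "swiss-ai/Apertus-70B-2509",
    revision="<checkpoint-branch-name>",
)
```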
## Limitations
Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Legal Aspects
#### EU AI Act Transparency Documentation and Code of Practice
- [Apertus_EU_Public_Summary.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Public_Summary.pdf)
- [Apertus_EU_Code_of_Practice.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Code_of_Practice.pdf)
#### Data Protection and Copyright Requests
For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or contact us directly:
- [email protected]
- [email protected]
  
#### Output Filter for PII
- Currently no output filter is provided.
- Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests which have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.
## Contact
To contact us, please send an email to
[email protected]
## Citation
```bibtex
@misc{swissai2025apertus,
  title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
  author={Alejandro Hernández-Cano and Alexander Hägele and Allen Hao Huang and Angelika Romanou and Antoni-Joan Solergibert and Barna Pasztor and Bettina Messmer and Dhia Garbaya and Eduard Frank Ďurech and Ido Hakimi and Juan García Giraldo and Mete Ismayilzada and Negar Foroutan and Skander Moalla and Tiancheng Chen and Vinko Sabolčec and Yixuan Xu and Michael Aerni and Badr AlKhamissi and Ines Altemir Marinas and Mohammad Hossein Amani and Matin Ansaripour and Ilia Badanin and Harold Benoit and Emanuela Boros and Nicholas Browning and Fabian Bösch and Maximilian Böther and Niklas Canova and Camille Challier and Clement Charmillot and Jonathan Coles and Jan Deriu and Arnout Devos and Lukas Drescher and Daniil Dzenhaliou and Maud Ehrmann and Dongyang Fan and Simin Fan and Silin Gao and Miguel Gila and María Grandury and Diba Hashemi and Alexander Hoyle and Jiaming Jiang and Mark Klein and Andrei Kucharavy and Anastasiia Kucherenko and Frederike Lübeck and Roman Machacek and Theofilos Manitaras and Andreas Marfurt and Kyle Matoba and Simon Matrenok and Henrique Mendonça and Fawzi Roberto Mohamed and Syrielle Montariol and Luca Mouchel and Sven Najem-Meyer and Jingwei Ni and Gennaro Oliva and Matteo Pagliardini and Elia Palme and Andrei Panferov and Léo Paoletti and Marco Passerini and Ivan Pavlov and Auguste Poiroux and Kaustubh Ponkshe and Nathan Ranchin and Javi Rando and Mathieu Sauser and Jakhongir Saydaliev and Muhammad Ali Sayfiddinov and Marian Schneider and Stefano Schuppli and Marco Scialanga and Andrei Semenov and Kumar Shridhar and Raghav Singhal and Anna Sotnikova and Alexander Sternfeld and Ayush Kumar Tarun and Paul Teiletche and Jannis Vamvas and Xiaozhe Yao and Hao Zhao and Alexander Ilic and Ana Klimovic and Andreas Krause and Caglar Gulcehre and David Rosenthal and Elliott Ash and Florian Tramèr and Joost VandeVondele and Livio Veraldi and Martin Rajman and Thomas Schulthess and Torsten Hoefler and Antoine Bosselut and Martin Jaggi and Imanol Schlag},
  year={2025},
  howpublished={\url{https://arxiv.org/abs/2509.14233}}
}
```
