---
license: mit
datasets:
- HuggingFaceFW/fineweb-edu
---

# SparseModernBERT α=2.0 Model Card

## Model Overview

SparseModernBERT-alpha2.0 is a masked language model based on [ModernBERT](https://github.com/AnswerDotAI/ModernBERT) that replaces the standard softmax attention with AdaSplash, an adaptive sparse attention mechanism implemented with Triton kernels.

The sparsity parameter α = 2.0 yields highly sparse attention patterns, improving efficiency while maintaining performance.

**Key features:**

* **Sparsity (α)**: 2.0
* **Tokenization**: same as ModernBERT
* **Pretraining**: masked language modeling on a large web corpus
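
To make the sparsity parameter concrete: with α = 2.0, the α-entmax attention transform reduces to sparsemax, which can assign exactly zero weight to low-scoring keys, whereas softmax always gives every key some mass. The snippet below is a plain-PyTorch reference sketch of sparsemax for illustration only; the model itself uses the AdaSplash Triton kernels from the repository linked under Usage, and the `sparsemax` helper here is not part of this model's API.

```python
import torch

def sparsemax(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Reference sparsemax (alpha-entmax with alpha = 2): Euclidean projection of
    the scores onto the probability simplex, which zeroes out low-scoring entries."""
    z, _ = torch.sort(scores, dim=dim, descending=True)
    cumsum = z.cumsum(dim)
    k = torch.arange(1, scores.size(dim) + 1, device=scores.device, dtype=scores.dtype)
    shape = [1] * scores.dim()
    shape[dim] = -1
    k = k.view(shape)
    support = (1 + k * z) > cumsum                 # entries that stay in the support
    k_z = support.sum(dim=dim, keepdim=True)       # support size
    tau = (cumsum.gather(dim, k_z - 1) - 1) / k_z.to(scores.dtype)
    return torch.clamp(scores - tau, min=0.0)

scores = torch.tensor([1.2, 0.9, 0.1, -0.5])
print(torch.softmax(scores, dim=-1))  # dense: every key gets nonzero weight
print(sparsemax(scores))              # sparse: tensor([0.6500, 0.3500, 0.0000, 0.0000])
```

For α between 1 and 2, α-entmax interpolates between softmax (α = 1) and sparsemax (α = 2); AdaSplash computes these transforms efficiently inside fused attention kernels.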

## Usage

Use the codebase from https://github.com/deep-spin/SparseModernBERT, which provides the `CustomModernBertModel` class:

```python
from transformers import AutoTokenizer
from sparse_modern_bert import CustomModernBertModel

model_id = "sardinelab/SparseModernBERT-alpha2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = CustomModernBertModel.from_pretrained(model_id, trust_remote_code=True)
```
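
Once loaded, the model can be used like a standard Hugging Face encoder. A minimal sketch, assuming `CustomModernBertModel` follows the usual encoder interface and returns `last_hidden_state` (the tokenizer call is ordinary `transformers` usage):

```python
import torch

# Encode a sentence and extract token-level representations.
inputs = tokenizer("Sparse attention keeps only the relevant tokens.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
cls_embedding = hidden_states[:, 0]        # [CLS] vector, e.g. for sentence-level tasks
print(hidden_states.shape)
```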

## Citation

If you use this model in your work, please cite:

```bibtex
@article{goncalves2025adasplash,
  title={AdaSplash: Adaptive Sparse Flash Attention},
  author={Gon\c{c}alves, Nuno and Treviso, Marcos and Martins, Andr\'e F. T.},
  journal={arXiv preprint arXiv:2502.12082},
  year={2025}
}
```