SmerkyG committed on
Commit 6dfe3c2 · verified · 1 Parent(s): 2b56b5e

Update README.md

Files changed (1):
  README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ pipeline_tag: text-generation
 - Try out the model on [![Featherless](https://img.shields.io/badge/featherless--ai%2FQRWKV--72B-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/featherless-ai/QRWKV-72B)
 - Model details from our blog post here! [![Substack](https://img.shields.io/badge/Substack-Dummy?style=flat&color=facc15)](https://substack.recursal.ai/p/qwerky-72b-and-32b-training-large)
 - This model was presented in [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
-- Code: [https://github.com/recursal/RADLADS](https://github.com/recursal/RADLADS)
+- Code: [https://github.com/recursal/RADLADS-paper](https://github.com/recursal/RADLADS-paper)
 
 Benchmarks is as follows for both QRWKV-QwQ-32B and QRWKV-72B models:
 