Jyhan003 committed
Commit f054c72 · Parent: f74b69a

Updated README to clarify: code under MIT, weights under CC BY-NC 4.0

Files changed (1): README.md (+19, -3)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: mit
+license: cc-by-nc-4.0
 library_name: transformers
 pipeline_tag: voice-activity-detection
 tags:
@@ -14,7 +14,7 @@ tags:
 ---
 
 ## Overview
-This hub features the pre-trained model by [DiariZen](https://github.com/BUTSpeechFIT/DiariZen) as described in [BUT System for the MLC-SLM Challenge](https://huggingface.co/papers/2506.13414). The EEND component is built upon WavLM-Large and Conformer layers. The model was pre-trained on far-field, single-channel audio from a diverse set of public datasets, including AMI, AISHELL-4, AliMeeting, NOTSOFAR-1, MSDWild, DIHARD3, RAMC, and VoxConverse. Then structured pruning at 80% sparsity is applied. Finally, the pruned model is fine-tuned with [MLC-SLM](https://www.nexdata.ai/competition/mlc-slm) data.
+This hub features the pre-trained model by [DiariZen](https://github.com/BUTSpeechFIT/DiariZen) as described in [BUT System for the MLC-SLM Challenge](https://huggingface.co/papers/2506.13414). The EEND component is built upon WavLM-Large and Conformer layers. The model was pre-trained on far-field, single-channel audio from a diverse set of public datasets, including AMI, AISHELL-4, AliMeeting, NOTSOFAR-1, MSDWild, DIHARD3, RAMC, and VoxConverse. Structured pruning at 80% sparsity was then applied, and the pruned model was fine-tuned on [MLC-SLM](https://www.nexdata.ai/competition/mlc-slm) data. When using this model, please ensure **non-commercial** usage in accordance with the CC BY-NC 4.0 license.
 
 
 ## Usage
@@ -59,4 +59,20 @@ DER evaluation of [Pyannote baseline](https://github.com/mubingshen/MLC-SLM-Base
 | Spanish | 12.92 | 10.82 |
 | Thai | 10.90 | 10.62 |
 | Vietnamese | 14.64 | 12.69 |
-| **Average** | **16.44**| **12.71**|
+| **Average** | **16.44** | **12.71** |
+
+## Citation
+If you find this work helpful, please consider citing:
+```
+@article{polok2025but,
+  title={BUT System for the MLC-SLM Challenge},
+  author={Polok, Alexander and Han, Jiangyu and Klement, Dominik and Cornell, Samuele and {\v{C}}ernock{\`y}, Jan and Burget, Luk{\'a}{\v{s}}},
+  journal={arXiv preprint arXiv:2506.13414},
+  year={2025}
+}
+```
+
+## License
+- **Source code**: MIT (see the [project’s GitHub repository](https://github.com/BUTSpeechFIT/DiariZen)).
+- **Model weights**: CC BY-NC 4.0 (non-commercial).
+- Rationale: some training datasets are research-only or non-commercial, so the released weights cannot be used commercially.
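The first hunk above changes the `license:` field inside the README's YAML front matter (the block between the opening `---` delimiters), which is where the Hub reads model-card metadata from. As a minimal stdlib-only sketch of how that field can be extracted programmatically — the function name and sample text here are hypothetical, not part of this repository — one might write:

```python
def read_front_matter_license(readme_text: str):
    """Return the value of the `license` key from a Markdown file's
    YAML front matter, or None if absent.

    Minimal sketch: assumes the front matter starts on the first line,
    is delimited by `---` lines, and that `license` is a simple scalar.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return None  # no front matter block at the top of the file
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter: stop scanning
            break
        key, _, value = line.partition(":")
        if key.strip() == "license":
            return value.strip()
    return None


# Hypothetical model card mirroring the post-commit front matter:
card = """---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: voice-activity-detection
---
## Overview
"""
print(read_front_matter_license(card))  # cc-by-nc-4.0
```

A full implementation would use a YAML parser instead of line splitting, but for a single scalar key this keeps the example dependency-free.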