---
license: apache-2.0
language:
- eu
pipeline_tag: translation
tags:
- Translation
- Capitalization-and-punctuation
- Transformer
---

# HiTZ's Capitalization & Punctuation model for Basque

## Model description
This model was trained from scratch using Marian NMT. The training dataset contains 9,784,905 Basque sentences. The model was evaluated on the Basque dev and devtest subsets of FLORES-101 (2,009 sentences) and on 2,000 randomly picked sentences from the Common Voice Basque (CommonVoice-Eu) dataset.  
* **Developed by**: HiTZ Research Center (University of the Basque Country EHU)
* **Model type**: Capitalization and Punctuation
* **Language**: Basque  

## Intended uses and limitations  
You can use this model to restore punctuation and capitalization in Basque text.  
## Usage
Required packages:
- torch
- transformers
- sentencepiece
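
These can be installed with pip, for example:

```bash
pip install torch transformers sentencepiece
```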

### Capitalizing using python:
Clone the repository to download the model:

```bash
git clone https://huggingface.co/HiTZ/cap-punct-eu
```

In the snippet below, `MODEL_PATH` is the path to the downloaded `cap-punct-eu` folder.

```python
from transformers import pipeline

MODEL_PATH = "cap-punct-eu"  # path to the cloned model folder
device = 0  # 0 -> first GPU, -1 -> CPU

# Normalized, lowercase sentences without punctuation
segment_list = [
    "kaixo egun on guztioi",
    "faktoria e i te beko irratian entzuten da",
    "gutxi gora behera ehuneko berrogeita bikoa",
    "lau zortzi hamabost hamasei hogeita hiru berrogeita bi",
    "nire jaio urtea mila bederatziehun eta laurogeita hamasei da",
    "informazio gehiago hitz puntu e hatxe u puntu eus web horrian",
]

# The model is exposed as a translation pipeline: normalized text in,
# capitalized and punctuated text out
translator = pipeline(task="translation", model=MODEL_PATH, tokenizer=MODEL_PATH, device=device)
result_list = translator(segment_list)
cp_segment_list = [result["translation_text"] for result in result_list]

for text, cp_text in zip(segment_list, cp_segment_list):
    print(f"Normalized: {text}\n  With C&P: {cp_text}\n")
```

### Expected output:
```bash
Normalized: kaixo egun on guztioi
  With C&P: Kaixo, egun on guztioi.

Normalized: faktoria e i te beko irratian entzuten da
  With C&P: Faktoria EiTBko irratian entzuten da.

Normalized: gutxi gora behera ehuneko berrogeita bikoa
  With C&P: Gutxi gora behera %42koa.

Normalized: lau zortzi hamabost hamasei hogeita hiru berrogeita bi
  With C&P: Lau, zortzi, hamabost, hamasei, hogeita hiru, berrogeita bi.

Normalized: nire jaio urtea mila bederatziehun eta laurogeita hamasei da
  With C&P: Nire jaio urtea 1996 da.

Normalized: informazio gehiago hitz puntu e hatxe u puntu eus web horrian
  With C&P: Informazio gehiago hitz.ehu.eus web horrian.
```
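
The model can also be loaded without the pipeline helper. The following is a minimal sketch assuming the checkpoint loads as a standard Marian sequence-to-sequence model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_PATH = "cap-punct-eu"  # path to the cloned model folder
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)

# Tokenize, generate and decode a single normalized sentence
inputs = tokenizer("kaixo egun on guztioi", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # Kaixo, egun on guztioi.
```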

## Training

### Data preparation
The training data was compiled by our research group from multiple heterogeneous sources and consists of 9,784,905 sentences. This dataset is a subset of the data used for the machine translation model [mt-hitz-eu-es](https://huggingface.co/HiTZ/mt-hitz-eu-es).

Prior to training, the data underwent preprocessing steps including cleaning, punctuation standardization, filtering, and the creation of aligned input–output sentence pairs for the capitalization and punctuation restoration task.

To generate the input–output pairs, the target sentences were lowercased, punctuation was removed, and text normalization was applied using an in-house normalization tool.
  
Example:  
```bash 
Output (cleaned, filtered and standardized): EHUko Estatutuetako 0. artikuluak ondorengoetarako eskumena aitortzen du: egitura bereziak sortzea unibertsitate irakaskuntza, ikasketa eta ikerketa, eta unibertsitatearen hedapenerako oinarri gisa.
Input (lowercased, punctuation removed and normalized): euskal herriko unibertsitateako estatutuetako zerogarren artikuluak ondorengoetarako eskumena aitortzen du egitura bereziak sortzea unibertsitate irakaskuntza ikasketa eta ikerketa eta unibertsitatearen hedapenerako oinarri gisa
```
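
For illustration, the sketch below derives an input sentence from a clean output sentence by lowercasing and stripping punctuation. The in-house normalization tool additionally verbalizes numbers and abbreviations (e.g. "0." becomes "zerogarren"), which this simplified sketch does not reproduce; the function name is hypothetical.

```python
import string

def make_input(output_sentence: str) -> str:
    """Lowercase and strip punctuation; a simplified stand-in for the full
    preprocessing, which also verbalizes numbers and abbreviations."""
    text = output_sentence.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

print(make_input("Kaixo, egun on guztioi."))  # kaixo egun on guztioi
```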
 
### Training procedure
The model was trained using the official [MarianNMT](https://marian-nmt.github.io/quickstart/) implementation.  
Training was performed on a single NVIDIA TITAN RTX GPU.
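
For reference, a Marian training invocation has roughly the following shape; the exact configuration used for this model is not published, so the file names and options below are placeholders:

```bash
# Hypothetical invocation; corpus/vocab paths and hyperparameters are illustrative only
marian --type transformer \
  --train-sets corpus.input corpus.output \
  --vocabs vocab.spm vocab.spm \
  --model model/model.npz \
  --devices 0
```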

## Performance
The following table shows the model's performance, measured with the Word Error Rate (WER) metric. WER-WITHOUT is the Word Error Rate computed on the evaluation dataset before applying capitalization and punctuation restoration; WER is computed on the output after processing with the model.
The evaluation dataset underwent the same preprocessing as the training dataset.  

| Metric      | FLORES-101 | Common Voice |
|-------------|-----------:|-------------:|
| WER-WITHOUT | 19.55%     | 22.42%       |
| WER         | 5.99%      | 5.75%        |
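
As an illustration of how these scores are obtained, WER can be computed by comparing each hypothesis against the clean reference text, for example with the third-party `jiwer` package (an assumption; it is not a dependency of this repository):

```python
import jiwer  # pip install jiwer

reference = "Kaixo, egun on guztioi."
without_cp = "kaixo egun on guztioi"   # evaluation input before restoration
with_cp = "Kaixo, egun on guztioi."    # model output

print(f"WER-WITHOUT: {jiwer.wer(reference, without_cp):.4f}")
print(f"WER:         {jiwer.wer(reference, with_cp):.4f}")
```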


# Additional Information
## Author
HiTZ Basque Center for Language Technology - Aholab Signal Processing Laboratory, University of the Basque Country UPV/EHU.

## Copyright
Copyright (c) 2025 HiTZ Basque Center for Language Technology - Aholab Signal Processing Laboratory, University of the Basque Country UPV/EHU.

## Licensing Information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).  

## Funding

The development of these models has been partially funded by the Ministerio de Transformación Digital and by the Plan de Recuperación, Transformación y Resiliencia – Funded by the European Union – NextGenerationEU, through the ILENIA project (reference 2022/TL22/00215335); by the IkerGaitu project, funded by the Basque Government; and by the HiTZketan project (COLAB22/13), funded by the University of the Basque Country EHU.

## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may exhibit bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (HiTZ Basque Center for Language Technology - Aholab Signal Processing Laboratory, University of the Basque Country UPV/EHU) be liable for any results arising from the use made by third parties of these models.
</details>