Commit ca8130e (parent: becf91f): Add README.md

README.md

datasets:
- wikipedia
---

# MultiBERTs Seed 4 (uncased)

Seed 4 MultiBERTs (pretrained BERT) model for the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English.
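
To make the "uncased" point concrete, here is a minimal sketch showing that the tokenizer lowercases its input before WordPiece tokenization. The checkpoint id is an assumption based on this card's title, not something the card confirms; adjust it to the repository's actual model id.

```python
from transformers import BertTokenizer

# Checkpoint id is an assumption from this card's title.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4')

# Both spellings map to the same lowercased WordPiece token.
print(tokenizer.tokenize("english English"))  # ['english', 'english']
```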

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives: masked language modeling (MLM) and next sentence prediction (NSP).
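
As a hedged sketch of the NSP objective, the standard transformers API can score whether one sentence plausibly follows another; the checkpoint id is again an assumption:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Checkpoint id is an assumption; adjust to the repository's actual model id.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-4')

# Score whether sentence B plausibly follows sentence A.
encoding = tokenizer("The cat sat on the mat.", "It purred quietly.", return_tensors='pt')
logits = model(**encoding).logits
# Index 0 = "B follows A", index 1 = "B is a random sentence".
print(torch.softmax(logits, dim=-1))
```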
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
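
For instance, here is a minimal sketch of that feature-based approach; the checkpoint id and the four-example dataset are illustrative assumptions, not part of the original card:

```python
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

# Checkpoint id is an assumption; adjust to the repository's actual model id.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4')
model = BertModel.from_pretrained('multiberts-seed-4').eval()

# Tiny illustrative dataset of labeled sentences (1 = positive, 0 = negative).
texts = ["a delightful film", "a tedious mess", "simply wonderful", "utterly boring"]
labels = [1, 0, 1, 0]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, return_tensors='pt')
    # Use the final hidden state of the [CLS] token as a sentence-level feature vector.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

classifier = LogisticRegression(max_iter=1000).fit(features, labels)
print(classifier.predict(features))
```
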
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT-2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

# Checkpoint id assumed from this card's title; adjust to the repository's actual model id.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4')
model = BertModel.from_pretrained('multiberts-seed-4')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
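
The `output` above is a standard transformers model output; the token-level features live in `output.last_hidden_state`. Since the card also says the raw model can be used for masked language modeling, here is a hedged fill-mask sketch, again assuming the checkpoint id:

```python
from transformers import pipeline

# Checkpoint id is an assumption; adjust to the repository's actual model id.
unmasker = pipeline('fill-mask', model='multiberts-seed-4')
print(unmasker("Hello, I'm a [MASK] model."))
```
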
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, you can probe it with the fill-mask example shown above.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
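
The card's metadata lists the wikipedia dataset; if you want comparable public corpora to experiment with, a sketch with the datasets library follows. The exact snapshots used for MultiBERTs pretraining are not specified here, so these configurations are assumptions:

```python
from datasets import load_dataset

# Public stand-ins for the corpora named above; the exact MultiBERTs snapshots
# are not specified in this card, so these configurations are assumptions.
wiki = load_dataset('wikipedia', '20220301.en', split='train')
books = load_dataset('bookcorpus', split='train')
print(wiki[0]['text'][:200])
print(books[0]['text'])
```
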
## Training procedure
### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form: `[CLS] Sentence A [SEP] Sentence B [SEP]`.

The details of the masking procedure for each sentence are the following (see the sketch after this list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
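
Here is a toy re-implementation of that 80/10/10 rule; it is a sketch of the stated procedure, not the original MultiBERTs data pipeline:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the masking procedure described above to a list of tokens."""
    masked, labels = list(tokens), [None] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:          # 15% of the tokens are masked
            labels[i] = token                    # the model must predict the original
            roll = random.random()
            if roll < 0.8:                       # 80%: replace with [MASK]
                masked[i] = '[MASK]'
            elif roll < 0.9:                     # 10%: replace with a different random token
                masked[i] = random.choice([v for v in vocab if v != token])
            # remaining 10%: leave the token unchanged
    return masked, labels

vocab = ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'lazy', 'dog']
print(mask_tokens(['the', 'quick', 'brown', 'fox'], vocab))
```
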
### Pretraining

The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate after.
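
For clarity, the stated schedule can be written as a small function. Decaying to zero at the final step is an assumption; the card only says the decay is linear after warmup. In transformers, `get_linear_schedule_with_warmup` implements a comparable schedule.

```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to peak_lr over 10k steps, then linear decay afterwards."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Assumption: the decay reaches zero at the final training step.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(learning_rate(5_000))      # mid-warmup: 5e-05
print(learning_rate(10_000))     # peak: 0.0001
print(learning_rate(1_005_000))  # halfway through decay: 5e-05
```
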
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author  = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and
             Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and
             Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title   = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal = {CoRR},
  volume  = {abs/2106.16163},
  year    = {2021},
  url     = {https://arxiv.org/abs/2106.16163}
}
```