Update metadata (language, task categories) and add model usage examples

#1 by nielsr HF Staff - opened

Files changed (1)
  1. README.md +46 -2
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
 language:
-- en
+- mul
 license: mit
 task_categories:
-- fill-mask
+- feature-extraction
 tags:
 - pretraining
 - language-modeling
@@ -69,6 +69,50 @@ for sample in dataset:
 # Process your data...
 ```
 
+## 💡 Sample Usage of mmBERT Models
+
+Models trained on this pre-training data can be used for various multilingual tasks. The following examples demonstrate how to get started with `mmBERT` models.
+
+### Installation
+```bash
+pip install "torch>=1.9.0"
+pip install "transformers>=4.48.0"
+```
+
+### Small Model for Fast Inference (Feature Extraction)
+```python
+from transformers import AutoTokenizer, AutoModel
+
+tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
+model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")
+
+# Example: Get multilingual embeddings
+inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
+outputs = model(**inputs)
+embeddings = outputs.last_hidden_state.mean(dim=1)
+```
+
+### Base Model for Masked Language Modeling
+```python
+from transformers import AutoTokenizer, AutoModelForMaskedLM
+import torch
+
+tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
+model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")
+
+# Example: Multilingual masked language modeling
+text = "The capital of [MASK] is Paris."
+inputs = tokenizer(text, return_tensors="pt")
+with torch.no_grad():
+    outputs = model(**inputs)
+
+# Get predictions for the [MASK] token
+mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
+predictions = outputs.logits[mask_indices]
+top_tokens = torch.topk(predictions, 5, dim=-1)
+predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
+print(f"Predictions: {predicted_words}")
+```
 
 ## 🔗 Related Resources
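One note on the feature-extraction example above: `last_hidden_state.mean(dim=1)` averages over every position, including padding, which skews embeddings when inputs of different lengths are batched together. A minimal sketch of attention-mask-aware mean pooling, using plain `torch` with dummy tensors standing in for real model outputs (the helper name `masked_mean_pool` is illustrative, not part of any library):

```python
import torch

def masked_mean_pool(last_hidden_state: torch.Tensor,
                     attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions.

    last_hidden_state: (batch, seq_len, hidden)
    attention_mask:    (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)  # (B, S, 1)
    summed = (last_hidden_state * mask).sum(dim=1)                   # (B, H)
    counts = mask.sum(dim=1).clamp(min=1e-9)                         # (B, 1)
    return summed / counts

# Dummy "model output": batch of 2, seq_len 4, hidden 3;
# the first sequence has two padding positions.
hidden = torch.tensor([[[1., 1., 1.], [3., 3., 3.], [0., 0., 0.], [0., 0., 0.]],
                       [[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]]])
mask = torch.tensor([[1, 1, 0, 0],
                     [1, 1, 1, 1]])

pooled = masked_mean_pool(hidden, mask)
print(pooled)  # first row averages only the two real tokens -> [2., 2., 2.]
```

With real model outputs the same call would be `masked_mean_pool(outputs.last_hidden_state, inputs["attention_mask"])`; for single unpadded sentences it gives the same result as the plain `.mean(dim=1)` in the diff.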