nielsr (HF Staff) committed
Commit f8866f4 · verified · 1 Parent(s): 0f6d4d1

Improve dataset card: Update task category and add sample usage


This pull request improves the dataset card for the mmBERT Training Data by:
- Updating the `task_categories` metadata from `fill-mask` to `feature-extraction`. This better reflects the primary utility of models trained with this dataset, which are designed for downstream tasks such as classification and retrieval that rely on extracted features/embeddings (see the short sketch after this list).
- Adding a "Sample Usage" section, including installation steps and code snippets for fast inference (multilingual embeddings) and masked language modeling, taken directly from the associated GitHub repository. This helps users quickly understand how to leverage models trained with this dataset.
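
To make the `feature-extraction` tag concrete, here is a minimal sketch using the generic `transformers` feature-extraction pipeline. The `jhu-clsp/mmbert-small` checkpoint name comes from the card's examples; that it loads cleanly through this pipeline is an assumption, not something the commit itself documents.

```python
# Minimal sketch of the "feature-extraction" task the card now advertises.
# Assumption: mmBERT checkpoints load via the generic transformers pipeline.
from transformers import pipeline

extractor = pipeline("feature-extraction", model="jhu-clsp/mmbert-small")

# Returns nested lists shaped [1, seq_len, hidden_dim] for a single string.
features = extractor("mmBERT produces multilingual token embeddings.")
print(len(features[0]), len(features[0][0]))  # seq_len, hidden_dim
```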

Files changed (1)
  1. README.md +48 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: mit
 task_categories:
- - fill-mask
+ - feature-extraction
 tags:
 - pretraining
 - encoder
@@ -23,6 +23,53 @@ This dataset is part of the complete, pre-shuffled training data used to train t

 This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.

+ ## Sample Usage
+
+ The mmBERT models trained with this dataset can be easily loaded and used for various tasks, including getting multilingual embeddings and masked language modeling.
+
+ ### Installation
+ ```bash
+ pip install "torch>=1.9.0"
+ pip install "transformers>=4.48.0"
+ ```
+
+ ### 30-Second Examples
+
+ **Small Model for Fast Inference:**
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
+ model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")
+
+ # Example: Get multilingual embeddings
+ inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
+ outputs = model(**inputs)
+ embeddings = outputs.last_hidden_state.mean(dim=1)
+ ```
+
+ **Base Model for Masked Language Modeling:**
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
+ model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")
+
+ # Example: Multilingual masked language modeling
+ text = "The capital of [MASK] is Paris."
+ inputs = tokenizer(text, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Get predictions for [MASK] tokens
+ mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
+ predictions = outputs.logits[mask_indices]
+ top_tokens = torch.topk(predictions, 5, dim=-1)
+ predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
+ print(f"Predictions: {predicted_words}")
+ ```
+
 ## Related Resources

 - **Models**: [mmBERT Model Suite](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
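
One caveat about the embedding snippet added above: `last_hidden_state.mean(dim=1)` averages over every position, padding included, which skews embeddings when batching inputs of different lengths. A minimal mask-aware variant is sketched below; it reuses the `jhu-clsp/mmbert-small` checkpoint from the card, while the pooling itself is illustrative rather than part of this commit.

```python
# Sketch: mean pooling that ignores padding, for batched multilingual inputs.
# Assumes the same jhu-clsp/mmbert-small checkpoint used in the card's example.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")

texts = ["Hello world!", "Bonjour le monde, comment allez-vous ?"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state         # [batch, seq_len, dim]

mask = batch["attention_mask"].unsqueeze(-1).float()  # [batch, seq_len, 1]
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean
print(embeddings.shape)                               # [2, hidden_dim]
```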