arxiv:2510.20475

Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs

Published on Oct 23

Abstract

AI-generated summary: An improved Masked Language Modeling strategy adapts token masking probabilities and incorporates sub-token embeddings, enhancing performance on (Super)GLUE tasks and morphological generalization.

We describe our strategy for the 2025 edition of the BabyLM Challenge. Our main contribution is an improved form of Masked Language Modeling (MLM), which adapts the masking probabilities of tokens according to the model's ability to predict them. The results show a substantial increase in performance on (Super)GLUE tasks over standard MLM. We also incorporate sub-token embeddings, finding that this increases the model's morphological generalization capabilities. Our submission beats the baseline in the strict-small track.
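For readers curious how such a scheme might look in practice, the sketch below shows one plausible way to adapt per-token masking probabilities from the model's own prediction difficulty. The page does not give the authors' exact weighting, so the direction (harder tokens masked more often), the `adaptive_mask` function name, and the `base_rate`/`temperature` parameters are illustrative assumptions, not the released implementation.

```python
import torch


def adaptive_mask(per_token_loss: torch.Tensor,
                  base_rate: float = 0.15,
                  temperature: float = 1.0) -> torch.Tensor:
    """Sample a boolean MLM mask whose per-token probabilities depend on how
    hard each token currently is for the model (here: higher loss -> masked
    more often), while keeping the expected masking rate near `base_rate`.

    per_token_loss: (batch, seq_len) detached per-token cross-entropy from a
    previous forward pass.
    """
    # Turn losses into relative weights per sequence; temperature controls how
    # strongly masking concentrates on hard tokens.
    weights = torch.softmax(per_token_loss / temperature, dim=-1)
    # Rescale so the expected number of masked tokens per sequence is roughly
    # base_rate * seq_len, then clamp to valid probabilities.
    probs = (weights * base_rate * per_token_loss.size(-1)).clamp(max=1.0)
    return torch.bernoulli(probs).bool()


if __name__ == "__main__":
    # Toy usage with random "losses" standing in for a real measurement pass.
    losses = torch.rand(2, 128) * 5.0
    mask = adaptive_mask(losses)
    print(f"masking rate: {mask.float().mean().item():.3f}")  # roughly 0.15
```

The design choice here is only one option: one could equally renormalize over the batch or anneal the temperature during training; the abstract only states that masking probabilities follow the model's prediction ability.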

Community

Hey @leukas, really interesting approach, thanks for releasing the training code! I would like to know which GPU setup you used for training the models - many thanks!

Paper author

Hi @stefan-it, thanks for your interest! We used an H100 for the experiments. This is not necessary, though: the memory footprint is not very big (anymore) and the model trains in ~20 minutes.
