BabyLM 2025 Collection
This is a model I accidentally trained with too small a batch size, which caused the training loss to spike and the run to essentially fail. I found it amusing that it nevertheless does very well on EWoK, Entity Tracking, Adjective Nominalization, COMPS, and AoA. Maybe that says something about us, how so many in society fail upwards... food for thought.
Thanks to the work of my student Serdar Gülbahar, this model's surprisingly high scores have been traced to a few bugs in the BabyLM evaluation pipeline. A fix is in progress here: https://github.com/babylm/evaluation-pipeline-2025/issues/34
Once the fix lands, this model should perform quite poorly, as expected.