Model Card for leukas/amlm_hd_fail

This is a model I accidentally trained with too low a batch size, causing the training loss to spike and the run to essentially fail. I found it amusing that the model nevertheless does very well on EWoK, Entity Tracking, Adjective Nominalization, COMPS, and AoA. Maybe this says something about ourselves, how so many in society fail upwards... food for thought.
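As an aside, the loss spikes from a too-small batch size are what you would expect statistically: the minibatch gradient is an average of per-example gradients, so its variance scales as 1/B with batch size B, and tiny batches give very noisy update directions. This is a minimal NumPy sketch of that effect (synthetic per-example gradients, not the actual training code; the numbers are purely illustrative):

```python
# Sketch: variance of the minibatch gradient estimate vs. batch size.
# Per-example gradients are simulated as noisy draws around a true value;
# averaging over a batch of size B shrinks the variance by a factor of B.
import numpy as np

rng = np.random.default_rng(0)
# 100k simulated per-example gradients with mean 1.0 and std 5.0
per_example_grads = rng.normal(loc=1.0, scale=5.0, size=100_000)

def batch_grad_variance(grads, batch_size):
    """Variance of the mean gradient across batches of the given size."""
    n_batches = len(grads) // batch_size
    batch_means = (
        grads[: n_batches * batch_size]
        .reshape(n_batches, batch_size)
        .mean(axis=1)
    )
    return batch_means.var()

var_b1 = batch_grad_variance(per_example_grads, 1)    # batch size 1
var_b64 = batch_grad_variance(per_example_grads, 64)  # batch size 64
print(var_b1, var_b64)  # batch-size-1 variance is roughly 64x larger
```

With batch size 1 every update follows a single noisy gradient, which is enough to destabilize training and produce exactly the kind of loss spike described above.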

UPDATE

Thanks to the work of my student Serdar Gülbahar, the model's unexpectedly high scores have been traced to a few bugs in the BabyLM evaluation pipeline. The issue is currently being fixed here: https://github.com/babylm/evaluation-pipeline-2025/issues/34

Once the fix lands, this model should perform quite poorly, as expected.


Downloads last month: 1,424
Model size: 34.7M parameters (Safetensors, tensor type F32)
