Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
Abstract
A conditional scaling law is introduced to optimize architectural choices for large language models, balancing accuracy and inference efficiency.
Scaling the number of parameters and the size of the training data has proven to be an effective strategy for improving large language model (LLM) performance. Yet, as these models grow increasingly powerful and widely deployed, the cost of inference has become a pressing concern. Despite its importance, the trade-off between model accuracy and inference efficiency remains underexplored. In this work, we examine how three key architectural factors influence both inference cost and accuracy: hidden size, the allocation of parameters between MLP and attention (the MLP-to-attention ratio), and grouped-query attention (GQA). We introduce a conditional scaling law that augments the Chinchilla framework with architectural information, along with a search framework for identifying architectures that are simultaneously inference-efficient and accurate. To validate our approach, we train more than 200 models spanning 80M to 3B parameters and 8B to 100B training tokens, and fit the proposed conditional scaling law. Our results show that the conditional scaling law reliably predicts optimal architectural choices and that the resulting models outperform existing open-source baselines. Under the same training budget, optimized architectures achieve up to 2.1% higher accuracy and 42% greater inference throughput than LLaMA-3.2.
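The abstract does not spell out the functional form of the conditional scaling law, so the sketch below only illustrates the general idea: start from a Chinchilla-style loss L(N, D) = E + A/N^alpha + B/D^beta and condition it on architectural features (hidden size, MLP-to-attention ratio, GQA group count). The multiplicative log-linear adjustment, the synthetic data, and all coefficient names are illustrative assumptions, not the paper's actual parameterization.

```python
# Minimal sketch (assumed form, not the paper's): fit a Chinchilla-style loss
# surface conditioned on architecture features using scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def conditional_loss(X, E, A, alpha, B, beta, w_hidden, w_ratio, w_gqa):
    """Chinchilla term E + A/N^alpha + B/D^beta times an assumed
    log-linear architecture adjustment (hidden size, MLP/attn ratio, GQA groups)."""
    N, D, hidden, ratio, gqa = X
    base = E + A / N**alpha + B / D**beta
    # Illustrative conditioning term; the paper's parameterization may differ.
    adjust = 1.0 + w_hidden * np.log(hidden) + w_ratio * np.log(ratio) + w_gqa * np.log(gqa)
    return base * adjust

# Synthetic observations standing in for the ~200 trained models
# (parameter count N, training tokens D, architecture features, validation loss).
rng = np.random.default_rng(0)
n_obs = 40
N      = rng.uniform(8e7, 3e9, n_obs)       # 80M to 3B parameters
D      = rng.uniform(8e9, 1e11, n_obs)      # 8B to 100B tokens
hidden = rng.choice([512, 1024, 2048, 3072], n_obs)
ratio  = rng.uniform(2.0, 5.0, n_obs)       # MLP-to-attention parameter ratio
gqa    = rng.choice([1, 2, 4, 8], n_obs)    # query groups per KV head
true_loss = conditional_loss((N, D, hidden, ratio, gqa),
                             1.7, 400.0, 0.34, 410.0, 0.28, 0.01, -0.02, -0.005)
loss = true_loss + rng.normal(0.0, 0.01, n_obs)  # hypothetical measured losses

popt, _ = curve_fit(
    conditional_loss, (N, D, hidden, ratio, gqa), loss,
    p0=[1.5, 300.0, 0.3, 300.0, 0.3, 0.0, 0.0, 0.0], maxfev=50000,
)
print(dict(zip(["E", "A", "alpha", "B", "beta", "w_hidden", "w_ratio", "w_gqa"], popt)))
```

Once such a surface is fit, the architecture search described in the abstract amounts to minimizing the predicted loss over candidate (hidden size, MLP-to-attention ratio, GQA) configurations at a fixed training budget, subject to an inference-throughput constraint.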
Community
We extend scaling laws to account for architectural efficiency, enabling >40% faster Llama3-style 3B models with no drop in training accuracy.
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- xLSTM Scaling Laws: Competitive Performance with Linear Time-Complexity (2025)
- ShishuLM: Lightweight Language Model with Hybrid Decoder-MLP Architecture and Paired Weight Sharing (2025)
- Towards a Comprehensive Scaling Law of Mixture-of-Experts (2025)
- Scaling Laws for Code: A More Data-Hungry Regime (2025)
- LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference (2025)
- Predicting Task Performance with Context-aware Scaling Laws (2025)
- Where to Begin: Efficient Pretraining via Subnetwork Selection and Distillation (2025)