- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 625
- When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method
  Paper • 2402.17193 • Published • 26
- Training-Free Long-Context Scaling of Large Language Models
  Paper • 2402.17463 • Published • 24
- The Power of Scale for Parameter-Efficient Prompt Tuning
  Paper • 2104.08691 • Published • 10
Hao-Yuan Chen (MarkChenX)
AI & ML interests
Deep Learning, Foundational Models, Domain Adaptation, Quantum AI, LLM Reasoning, Agentic Research
Recent Activity
- published a model 5 days ago: MarkChenX/gemma-12b-tairis-4bq
- updated a model 5 days ago: MarkChenX/gemma-12b-tairis-4bq
- updated a dataset about 1 month ago: ntuai/multi-tw