- Massive Activations in Large Language Models (Paper • 2402.17762 • Published • 1)
- What Matters in Transformers? Not All Attention is Needed (Paper • 2406.15786 • Published • 31)
- The Super Weight in Large Language Models (Paper • 2411.07191 • Published • 6)
- Top-nσ: Not All Logits Are You Need (Paper • 2411.07641 • Published • 23)
Collections including paper arxiv:2406.15786
- RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval (Paper • 2409.10516 • Published • 43)
- Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse (Paper • 2409.11242 • Published • 7)
- Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models (Paper • 2409.11136 • Published • 24)
- On the Diagram of Thought (Paper • 2409.10038 • Published • 14)

- RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval (Paper • 2401.18059 • Published • 46)
- Personalized Visual Instruction Tuning (Paper • 2410.07113 • Published • 70)
- Differential Transformer (Paper • 2410.05258 • Published • 179)
- What Matters in Transformers? Not All Attention is Needed (Paper • 2406.15786 • Published • 31)

- Self-Rewarding Language Models (Paper • 2401.10020 • Published • 151)
- Orion-14B: Open-source Multilingual Large Language Models (Paper • 2401.12246 • Published • 14)
- MambaByte: Token-free Selective State Space Model (Paper • 2401.13660 • Published • 60)
- MM-LLMs: Recent Advances in MultiModal Large Language Models (Paper • 2401.13601 • Published • 48)

- What Matters in Transformers? Not All Attention is Needed (Paper • 2406.15786 • Published • 31)
- Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss (Paper • 2410.17243 • Published • 93)
- Forgetting Transformer: Softmax Attention with a Forget Gate (Paper • 2503.02130 • Published • 32)
- Transformers without Normalization (Paper • 2503.10622 • Published • 169)

- MotionLLM: Understanding Human Behaviors from Human Motions and Videos (Paper • 2405.20340 • Published • 20)
- Spectrally Pruned Gaussian Fields with Neural Compensation (Paper • 2405.00676 • Published • 10)
- Paint by Inpaint: Learning to Add Image Objects by Removing Them First (Paper • 2404.18212 • Published • 29)
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report (Paper • 2405.00732 • Published • 121)

- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context (Paper • 2403.05530 • Published • 66)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (Paper • 2410.11779 • Published • 26)
- What Matters in Transformers? Not All Attention is Needed (Paper • 2406.15786 • Published • 31)
- Cavia: Camera-controllable Multi-view Video Diffusion with View-Integrated Attention (Paper • 2410.10774 • Published • 26)