Christos Charisis
chris-char

AI & ML interests
None yet

Organizations
None yet
RAG

- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 13
- RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
  Paper • 2401.08406 • Published • 37
- BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
  Paper • 2104.08663 • Published • 3
- Precise Zero-Shot Dense Retrieval without Relevance Labels
  Paper • 2212.10496 • Published • 4
LLMs Theory

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 42
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 23
- Attention Is All You Need
  Paper • 1706.03762 • Published • 94
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 246
Datasets (0)
None public yet