Llama Trained on Finetune-RAG
This repository contains model checkpoints from the Finetune-RAG project, which tackles hallucination in retrieval-augmented LLMs. The checkpoints were saved at steps 2, 4, 6, 8, and 10 of baseline-format fine-tuning of Llama-3.1-8B-Instruct on Finetune-RAG.
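These checkpoints are standard Transformers causal-LM weights, so they can be loaded with AutoModelForCausalLM. The sketch below is a minimal example, not the project's official usage: the repo id and the "checkpoint-10" subfolder are assumptions, so adjust both to this repository's actual layout.

```python
# Minimal sketch: load one of these checkpoints and run a RAG-style query.
# NOTE: the repo id and "checkpoint-10" subfolder are assumptions --
# adjust them to match the actual checkpoint naming in this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pints-ai/Llama-3.1-8B-Instruct-Finetune-RAG"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder="checkpoint-10")
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    subfolder="checkpoint-10",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Supply a retrieved passage as context, as in a typical RAG pipeline.
context = "Finetune-RAG fine-tunes LLMs to resist hallucination in RAG."
question = "What problem does Finetune-RAG address?"
messages = [
    {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```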
If you use these checkpoints, please cite the Finetune-RAG paper:

@misc{lee2025finetuneragfinetuninglanguagemodels,
      title={Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation},
      author={Zhan Peng Lee and Andre Lin and Calvin Tan},
      year={2025},
      eprint={2505.10792},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.10792},
}
Base model: meta-llama/Llama-3.1-8B