ANAH: Analytical Annotation of Hallucinations in Large Language Models
This page holds the InternLM2-20B model trained with the ANAH dataset. It is fine-tuned to annotate hallucinations in LLM responses.

For more information, please refer to our project page.

You need to follow the prompt in our paper to annotate hallucinations.
The models follow the conversation format of InternLM2-chat, with the following template protocol:
```python
dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
```
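Below is a minimal usage sketch showing how the template above can be applied when querying the model with Hugging Face Transformers. The model id and the placeholder annotation prompt are assumptions for illustration; substitute this repository's id and copy the exact annotation prompt from the paper.

```python
# Minimal sketch (not from the paper): wrap a query in the InternLM2-chat
# template shown above and generate an annotation with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "your-org/anah-internlm2-20b"  # hypothetical id; use this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

# The annotation prompt itself must be taken from the paper;
# this placeholder is illustrative only.
annotation_prompt = "<annotation prompt from the paper goes here>"

# Assemble the conversation string from the template protocol above.
text = (
    f"<|im_start|>user\n{annotation_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (the model's annotation).
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```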
If you find this project useful in your research, please consider citing:
```bibtex
@article{ji2024anah,
  title={ANAH: Analytical Annotation of Hallucinations in Large Language Models},
  author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2405.20315},
  year={2024}
}
```