
Question about tokenizer_config.json

#2
by insomnia-ye - opened

Hi, thanks for your valuable work!
When I compared the tokenizer_config.json in this repo with the original Llama 3.1's, I found two differences:

"eos_token" is changed from <|eot_id|> to <|end_of_text|>, why was that?
"chat_template" is missing. (I encountered error when using VeRL to do RL training upon this model
What's the motivation of these? Really looking forward for the reply. Thanks!
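
For context, here is a minimal sketch of how one might inspect the two fields and temporarily restore the original Llama 3.1 settings when a framework such as VeRL expects a chat template. The repo ids below are assumptions (the original meta-llama/Llama-3.1-8B-Instruct tokenizer and a placeholder path for this repo), not something confirmed by this model card:

```python
from transformers import AutoTokenizer

# Assumed repo ids -- substitute the actual paths you are using.
base = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
this = AutoTokenizer.from_pretrained("path/to/this-repo")

# Compare the two fields in question.
print("base eos_token:", base.eos_token)   # <|eot_id|> in the original config
print("this eos_token:", this.eos_token)   # <|end_of_text|> in this repo
print("this chat_template set:", this.chat_template is not None)

# Possible workaround: copy the chat template and eos_token back from the
# original tokenizer before handing it to an RL training framework.
this.chat_template = base.chat_template
this.eos_token = base.eos_token
```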
