runtime error

Exit code: 1. Reason:

tokenizer.json: 100%|██████████| 2.83M/2.83M [00:00<00:00, 4.30MB/s]
config.json: 100%|██████████| 1.00k/1.00k [00:00<00:00, 8.27MB/s]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'GPT2Tokenizer'.
The class this function is called from is 'PreTrainedTokenizerFast'.
pytorch_model.bin: 100%|█████████▉| 513M/513M [00:01<00:00, 292MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 6, in <module>
    model = GPT2LMHeadModel.from_pretrained(model_name)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4333, in from_pretrained
    model_init_context = cls.get_init_context(is_quantized, _is_ds_init_called)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3736, in get_init_context
    init_contexts = [no_init_weights(), init_empty_weights()]
NameError: name 'init_empty_weights' is not defined
model.safetensors: 100%|█████████▉| 513M/513M [00:02<00:00, 253MB/s]
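For context, here is a minimal sketch of what app.py presumably does around line 6 of the traceback. Only the model = GPT2LMHeadModel.from_pretrained(model_name) call is confirmed by the log; the model name, tokenizer choice, prompt, and generation settings below are assumptions. The NameError on init_empty_weights typically points at a missing or incompatible accelerate package, since transformers imports that helper from accelerate inside from_pretrained. Loading the tokenizer as GPT2Tokenizer (or via AutoTokenizer) also avoids the class-mismatch warning shown above.

```python
# Hypothetical reconstruction of app.py -- not taken from the Space's source.
# Assumptions: the model name, prompt, and generate() settings; only the
# GPT2LMHeadModel.from_pretrained(model_name) call appears in the traceback.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"  # assumed checkpoint; the log only shows a ~513 MB download

# Using GPT2Tokenizer (the class stored in the checkpoint) instead of
# PreTrainedTokenizerFast avoids the tokenizer-class mismatch warning.
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

# This is the call that fails with NameError: 'init_empty_weights' when the
# accelerate package is missing or incompatible with the installed transformers.
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "Hello, world"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the missing dependency is indeed the cause, the Space's requirements.txt would need an accelerate entry alongside transformers and torch (for example, accelerate>=0.26; the version is illustrative, not taken from the log), after which the Space should be restarted so the container rebuilds.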
