[2025-03-16 13:15:47,890][__main__][INFO] - cache_dir: /media/data/tmp
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-16/08-36-36
experiments:
  model:
    name: meta-llama/Llama-3.1-8B
    type: llama31_classification_lora
    num_labels: 6
    output_dir: ./results/llama31_8b-balanced/C1
    logging_dir: ./logs/llama31_8b-balanced/C1
    best_model_dir: ./results/llama31_8b-balanced/C1/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
  dataset:
    grade_index: 0
  training_id: llama31_8b-balanced-C1
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 1
    eval_batch_size: 2
    gradient_accumulation_steps: 16
    gradient_checkpointing: false
[2025-03-16 13:15:47,891][__main__][INFO] - Starting the Fine Tuning training process.
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /media/data/tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file tokenizer.model from cache at None
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /media/data/tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /media/data/tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[2025-03-16 13:15:51,975][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-03-16 13:15:52,167][transformers.tokenization_utils_base][INFO] - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
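Note: the Hydra configuration above describes a LoRA fine-tune of meta-llama/Llama-3.1-8B as a six-class sequence classifier (grade_index 0 apparently selects the first ENEM competence, C1). A minimal sketch of how such a setup could be assembled with transformers and peft follows; it is illustrative rather than the project's actual training code, and the pad-token handling is an assumption (the Llama tokenizer ships without a padding token).

    # Sketch only: build a LoRA-wrapped Llama classifier from the config values above.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from peft import LoraConfig, TaskType, get_peft_model

    model_name = "meta-llama/Llama-3.1-8B"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS as pad token

    model = AutoModelForSequenceClassification.from_pretrained(
        model_name,
        num_labels=6,                 # six ENEM score levels per competence
        torch_dtype=torch.bfloat16,   # matches bf16: true
    )
    model.config.pad_token_id = tokenizer.pad_token_id

    lora_config = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,                          # lora_r
        lora_alpha=16,                # lora_alpha
        lora_dropout=0.05,            # lora_dropout
        target_modules="all-linear",  # lora_target_modules
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # the log below reports ~21M trainable parameters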
[2025-03-16 13:15:52,172][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: False
[2025-03-16 13:15:52,783][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /media/data/tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-03-16 13:15:52,786][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "_name_or_path": "meta-llama/Llama-3.1-8B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "id2label": {
    "0": 0,
    "1": 40,
    "2": 80,
    "3": 120,
    "4": 160,
    "5": 200
  },
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "label2id": {
    "0": 0,
    "40": 1,
    "80": 2,
    "120": 3,
    "160": 4,
    "200": 5
  },
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-03-16 13:15:52,800][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /media/data/tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/model.safetensors.index.json
[2025-03-16 13:15:52,801][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-03-16 13:15:52,801][transformers.modeling_utils][INFO] - Instantiating LlamaForSequenceClassification model under default dtype torch.bfloat16.
[2025-03-16 13:15:56,600][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at meta-llama/Llama-3.1-8B were not used when initializing LlamaForSequenceClassification: {'lm_head.weight'}
- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-03-16 13:15:56,601][transformers.modeling_utils][WARNING] - Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.1-8B and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-03-16 13:15:56,832][__main__][INFO] - None
[2025-03-16 13:15:56,833][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-03-16 13:15:56,871][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
[2025-03-16 13:15:56,880][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
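Note: the id2label/label2id entries in the config above map the six classifier indices onto the ENEM score scale (0, 40, 80, 120, 160, 200 points). A small illustrative helper for converting between the two, useful when reporting score-scale metrics such as RMSE, could look like this (the helper names are hypothetical):

    # Sketch: class index <-> ENEM competence score, in steps of 40 points.
    ID2SCORE = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}
    SCORE2ID = {score: idx for idx, score in ID2SCORE.items()}

    def ids_to_scores(class_ids):
        """Convert predicted class indices back to the 0-200 score scale."""
        return [ID2SCORE[int(i)] for i in class_ids]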
[2025-03-16 13:15:56,893][transformers.trainer][INFO] - Using auto half precision backend
[2025-03-16 13:15:56,893][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-03-16 13:15:56,912][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-16 13:15:56,925][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-16 13:15:56,925][transformers.trainer][INFO] - Num examples = 132
[2025-03-16 13:15:56,925][transformers.trainer][INFO] - Batch size = 2
[2025-03-16 13:16:58,143][transformers][INFO] - {'accuracy': 0.29545454545454547, 'RMSE': 46.056618647183825, 'QWK': 0.03250125649187485, 'HDIV': 0.015151515151515138, 'Macro F1': 0.07692307692307693}
[2025-03-16 13:16:58,145][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead.
[2025-03-16 13:16:58,289][transformers.trainer][INFO] - The following columns in the training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - ***** Running training *****
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Num examples = 500
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Num Epochs = 20
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Instantaneous batch size per device = 1
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Gradient Accumulation steps = 16
[2025-03-16 13:16:58,325][transformers.trainer][INFO] - Total optimization steps = 620
[2025-03-16 13:16:58,328][transformers.trainer][INFO] - Number of trainable parameters = 20,996,096
[2025-03-16 13:30:53,496][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
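Note: the evaluation dictionaries logged throughout this run report accuracy, RMSE, QWK, HDIV and Macro F1. A sketch of a compute_metrics function that would produce fields of this shape is given below. QWK is quadratic weighted kappa over the six classes; RMSE and HDIV are computed on the 0-200 score scale. The HDIV definition used here (share of essays whose predicted score diverges from the reference by more than 80 points) is an assumption, not taken from the log.

    # Sketch of a Trainer-style compute_metrics producing the logged fields.
    import numpy as np
    from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

    ID2SCORE = np.array([0, 40, 80, 120, 160, 200])

    def compute_metrics(eval_pred):
        logits, labels = eval_pred
        preds = np.argmax(logits, axis=-1)
        pred_scores, true_scores = ID2SCORE[preds], ID2SCORE[labels]
        return {
            "accuracy": accuracy_score(labels, preds),
            "RMSE": float(np.sqrt(np.mean((pred_scores - true_scores) ** 2))),
            "QWK": cohen_kappa_score(labels, preds, weights="quadratic"),
            # assumption: HDIV = fraction of predictions off by more than 80 points
            "HDIV": float(np.mean(np.abs(pred_scores - true_scores) > 80)),
            "Macro F1": f1_score(labels, preds, average="macro"),
        }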
[2025-03-16 13:30:53,497][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 13:30:53,497][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 13:30:53,498][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 13:31:57,120][transformers][INFO] - {'accuracy': 0.3939393939393939, 'RMSE': 41.046905910780715, 'QWK': 0.07965489566613149, 'HDIV': 0.007575757575757569, 'Macro F1': 0.1734968771164121} [2025-03-16 13:31:57,120][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. [2025-03-16 13:31:57,122][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-32 [2025-03-16 13:31:57,604][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 13:31:57,606][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 13:45:58,066][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 13:45:58,067][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 13:45:58,067][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 13:45:58,068][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 13:47:01,924][transformers][INFO] - {'accuracy': 0.4696969696969697, 'RMSE': 33.574882386580704, 'QWK': 0.25437317784256563, 'HDIV': 0.007575757575757569, 'Macro F1': 0.19694989106753813} [2025-03-16 13:47:01,924][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 13:47:01,926][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-64 [2025-03-16 13:47:02,387][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 13:47:02,390][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 13:47:02,559][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-32] due to args.save_total_limit [2025-03-16 14:01:02,989][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 14:01:02,991][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 14:01:02,991][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 14:01:02,991][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 14:02:06,865][transformers][INFO] - {'accuracy': 0.4772727272727273, 'RMSE': 32.84490643597388, 'QWK': 0.3022095509622237, 'HDIV': 0.007575757575757569, 'Macro F1': 0.2089608257095942} [2025-03-16 14:02:06,866][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
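Note: the recurring evaluate/save/"Deleting older checkpoint ... due to args.save_total_limit" cycle above, together with best-model selection on QWK, is consistent with TrainingArguments along the following lines. Hyperparameter values come from the config at the top of the log; the strategy names and the save_total_limit value are assumptions inferred from the observed behaviour, not stated in the log.

    # Sketch: plausible TrainingArguments for this run (strategies/save_total_limit assumed).
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="./results/llama31_8b-balanced/C1",
        logging_dir="./logs/llama31_8b-balanced/C1",
        num_train_epochs=20,
        per_device_train_batch_size=1,
        per_device_eval_batch_size=2,
        gradient_accumulation_steps=16,   # effective batch size 16
        learning_rate=5e-5,
        weight_decay=0.01,
        warmup_ratio=0.1,                 # 62 of 620 optimization steps
        bf16=True,
        eval_strategy="epoch",            # assumption
        save_strategy="epoch",            # assumption
        save_total_limit=1,               # assumption: matches the checkpoint rotation
        load_best_model_at_end=True,
        metric_for_best_model="QWK",
        greater_is_better=True,
        logging_steps=100,
        seed=42,
    )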
[2025-03-16 14:02:06,867][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-96 [2025-03-16 14:02:07,358][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 14:02:07,360][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 14:02:07,530][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-64] due to args.save_total_limit [2025-03-16 14:16:07,909][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 14:16:07,910][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 14:16:07,910][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 14:16:07,910][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 14:17:11,741][transformers][INFO] - {'accuracy': 0.45454545454545453, 'RMSE': 38.92494720807615, 'QWK': 0.2276727204643325, 'HDIV': 0.007575757575757569, 'Macro F1': 0.30417862838915466} [2025-03-16 14:17:11,741][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 14:17:11,742][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-128 [2025-03-16 14:17:12,265][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 14:17:12,267][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 14:31:12,254][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 14:31:12,256][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 14:31:12,256][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 14:31:12,256][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 14:32:16,023][transformers][INFO] - {'accuracy': 0.5, 'RMSE': 33.574882386580704, 'QWK': 0.16648560564910375, 'HDIV': 0.007575757575757569, 'Macro F1': 0.183216929010737} [2025-03-16 14:32:16,023][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 14:32:16,025][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-160 [2025-03-16 14:32:16,566][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 14:32:16,568][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 14:32:16,736][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-128] due to args.save_total_limit [2025-03-16 14:46:17,000][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 14:46:17,001][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 14:46:17,001][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 14:46:17,001][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 14:47:20,843][transformers][INFO] - {'accuracy': 0.5984848484848485, 'RMSE': 28.91995221924885, 'QWK': 0.5161495962600935, 'HDIV': 0.015151515151515138, 'Macro F1': 0.2924253285543608} [2025-03-16 14:47:20,843][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 14:47:20,844][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-192 [2025-03-16 14:47:21,310][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 14:47:21,313][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 14:47:21,478][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-96] due to args.save_total_limit [2025-03-16 14:47:21,490][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-160] due to args.save_total_limit [2025-03-16 15:01:21,920][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 15:01:21,921][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 15:01:21,921][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 15:01:21,921][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 15:02:25,838][transformers][INFO] - {'accuracy': 0.5757575757575758, 'RMSE': 32.84490643597388, 'QWK': 0.276957163958641, 'HDIV': 0.007575757575757569, 'Macro F1': 0.2738722188201674} [2025-03-16 15:02:25,838][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 15:02:25,839][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-224 [2025-03-16 15:02:26,368][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 15:02:26,370][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 15:16:27,446][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 15:16:27,447][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 15:16:27,447][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 15:16:27,447][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 15:17:31,369][transformers][INFO] - {'accuracy': 0.5909090909090909, 'RMSE': 29.336088024923512, 'QWK': 0.5172058520502782, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3985035687797945} [2025-03-16 15:17:31,369][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 15:17:31,371][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-256 [2025-03-16 15:17:31,878][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 15:17:31,881][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 15:17:32,051][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-192] due to args.save_total_limit [2025-03-16 15:17:32,063][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-224] due to args.save_total_limit [2025-03-16 15:31:34,746][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 15:31:34,747][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 15:31:34,747][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 15:31:34,747][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 15:32:39,461][transformers][INFO] - {'accuracy': 0.5833333333333334, 'RMSE': 28.91995221924885, 'QWK': 0.4639830508474576, 'HDIV': 0.007575757575757569, 'Macro F1': 0.28377402226355086} [2025-03-16 15:32:39,462][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 15:32:39,463][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-288 [2025-03-16 15:32:39,927][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 15:32:39,929][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 15:46:43,172][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 15:46:43,174][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 15:46:43,174][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 15:46:43,174][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 15:47:47,303][transformers][INFO] - {'accuracy': 0.5833333333333334, 'RMSE': 27.633971188310298, 'QWK': 0.510593220338983, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3399487967229903} [2025-03-16 15:47:47,303][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 15:47:47,304][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-320 [2025-03-16 15:47:47,769][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 15:47:47,771][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 15:47:47,938][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-288] due to args.save_total_limit [2025-03-16 16:01:51,129][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 16:01:51,130][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 16:01:51,130][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 16:01:51,130][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 16:02:55,323][transformers][INFO] - {'accuracy': 0.5984848484848485, 'RMSE': 27.19179912021158, 'QWK': 0.5295629820051413, 'HDIV': 0.007575757575757569, 'Macro F1': 0.34896996773392897} [2025-03-16 16:02:55,323][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 16:02:55,324][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-352 [2025-03-16 16:02:55,847][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 16:02:55,849][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 16:02:56,019][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-256] due to args.save_total_limit [2025-03-16 16:02:56,031][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-320] due to args.save_total_limit [2025-03-16 16:17:00,650][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 16:17:00,651][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 16:17:00,651][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 16:17:00,651][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 16:18:05,272][transformers][INFO] - {'accuracy': 0.553030303030303, 'RMSE': 30.944720996896347, 'QWK': 0.4280386134269418, 'HDIV': 0.007575757575757569, 'Macro F1': 0.27696997064800183} [2025-03-16 16:18:05,273][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 16:18:05,274][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-384 [2025-03-16 16:18:05,734][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 16:18:05,736][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 16:32:09,622][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 16:32:09,623][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 16:32:09,623][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 16:32:09,623][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 16:33:13,955][transformers][INFO] - {'accuracy': 0.5833333333333334, 'RMSE': 28.91995221924885, 'QWK': 0.49003359462486007, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3452991452991453} [2025-03-16 16:33:13,955][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 16:33:13,956][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-416 [2025-03-16 16:33:14,443][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 16:33:14,445][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 16:33:14,611][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-384] due to args.save_total_limit [2025-03-16 16:47:19,993][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 16:47:19,994][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 16:47:19,994][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 16:47:19,994][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 16:48:24,194][transformers][INFO] - {'accuracy': 0.5227272727272727, 'RMSE': 32.84490643597388, 'QWK': 0.4285992217898833, 'HDIV': 0.007575757575757569, 'Macro F1': 0.36892246283550634} [2025-03-16 16:48:24,194][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 16:48:24,196][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-448 [2025-03-16 16:48:24,730][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 16:48:24,732][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 16:48:24,895][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-416] due to args.save_total_limit [2025-03-16 17:02:29,963][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 17:02:29,964][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 17:02:29,964][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 17:02:29,964][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 17:03:34,106][transformers][INFO] - {'accuracy': 0.6136363636363636, 'RMSE': 26.74231693686086, 'QWK': 0.5616839261593878, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3411934470758} [2025-03-16 17:03:34,106][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 17:03:34,107][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-480 [2025-03-16 17:03:34,596][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 17:03:34,598][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 17:03:34,762][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-352] due to args.save_total_limit [2025-03-16 17:03:34,774][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-448] due to args.save_total_limit [2025-03-16 17:17:38,703][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 17:17:38,704][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 17:17:38,704][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 17:17:38,704][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 17:18:43,041][transformers][INFO] - {'accuracy': 0.5606060606060606, 'RMSE': 30.15113445777636, 'QWK': 0.49084550504011537, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3855266713094572} [2025-03-16 17:18:43,042][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 17:18:43,043][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-512 [2025-03-16 17:18:43,576][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 17:18:43,578][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 17:32:49,532][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 17:32:49,533][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 17:32:49,533][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 17:32:49,533][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 17:33:53,775][transformers][INFO] - {'accuracy': 0.5909090909090909, 'RMSE': 29.336088024923512, 'QWK': 0.5077731092436977, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3968880268150341} [2025-03-16 17:33:53,776][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 17:33:53,777][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-544 [2025-03-16 17:33:54,264][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 17:33:54,266][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 17:33:54,431][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-512] due to args.save_total_limit [2025-03-16 17:47:58,410][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 17:47:58,412][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 17:47:58,412][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 17:47:58,412][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 17:49:02,705][transformers][INFO] - {'accuracy': 0.6287878787878788, 'RMSE': 26.285149626910837, 'QWK': 0.579476861167002, 'HDIV': 0.007575757575757569, 'Macro F1': 0.394676583276989} [2025-03-16 17:49:02,706][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 17:49:02,707][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-576 [2025-03-16 17:49:03,279][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 17:49:03,281][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 17:49:03,447][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-480] due to args.save_total_limit [2025-03-16 17:49:03,459][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-544] due to args.save_total_limit [2025-03-16 18:03:09,551][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. [2025-03-16 18:03:09,553][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-03-16 18:03:09,553][transformers.trainer][INFO] - Num examples = 132 [2025-03-16 18:03:09,553][transformers.trainer][INFO] - Batch size = 2 [2025-03-16 18:04:13,897][transformers][INFO] - {'accuracy': 0.5909090909090909, 'RMSE': 28.06917861068948, 'QWK': 0.5302233902759528, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3776732460185698} [2025-03-16 18:04:13,897][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead. 
[2025-03-16 18:04:13,899][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-608 [2025-03-16 18:04:14,388][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 18:04:14,390][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 18:09:39,235][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-620 [2025-03-16 18:09:39,701][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json [2025-03-16 18:09:39,703][transformers.configuration_utils][INFO] - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 8.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.49.0", "use_cache": true, "vocab_size": 128256 } [2025-03-16 18:09:39,859][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-608] due to args.save_total_limit [2025-03-16 18:09:39,871][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message. 
[2025-03-16 18:09:39,873][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-16 18:09:39,873][transformers.trainer][INFO] - Num examples = 132
[2025-03-16 18:09:39,873][transformers.trainer][INFO] - Batch size = 2
[2025-03-16 18:10:44,058][transformers][INFO] - {'accuracy': 0.5909090909090909, 'RMSE': 28.06917861068948, 'QWK': 0.5302233902759528, 'HDIV': 0.007575757575757569, 'Macro F1': 0.3776732460185698}
[2025-03-16 18:10:44,059][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead.
[2025-03-16 18:10:44,060][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-620
[2025-03-16 18:10:44,398][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-03-16 18:10:44,400][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-03-16 18:10:44,605][transformers.trainer][INFO] - Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-03-16 18:10:44,605][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-576 (score: 0.579476861167002).
[2025-03-16 18:10:44,655][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-16/13-15-47/results/llama31_8b-balanced/C1/checkpoint-620] due to args.save_total_limit
[2025-03-16 18:10:44,668][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-16 18:10:44,669][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-16 18:10:44,669][transformers.trainer][INFO] - Num examples = 132
[2025-03-16 18:10:44,669][transformers.trainer][INFO] - Batch size = 2
[2025-03-16 18:11:49,145][transformers][INFO] - {'accuracy': 0.6287878787878788, 'RMSE': 26.285149626910837, 'QWK': 0.579476861167002, 'HDIV': 0.007575757575757569, 'Macro F1': 0.394676583276989}
[2025-03-16 18:11:49,147][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead.
[2025-03-16 18:11:49,148][__main__][INFO] - Training completed successfully.
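Note: because metric_for_best_model is QWK, the trainer reloads checkpoint-576 (QWK 0.5795) at the end of training before the final dev evaluation above and the test run below; that model is then saved to best_model_dir. A sketch of how that saved adapter could later be reloaded for inference follows; whether the directory holds a PEFT adapter or a merged checkpoint depends on how the script saved it, so treat this as an assumption.

    # Sketch: reload the best LoRA adapter for inference (adapter-style checkpoint assumed).
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from peft import PeftModel

    BASE = "meta-llama/Llama-3.1-8B"
    ADAPTER = "./results/llama31_8b-balanced/C1/best_model"  # best_model_dir from the config

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    tokenizer.pad_token = tokenizer.eos_token  # assumption: same pad handling as in training

    base = AutoModelForSequenceClassification.from_pretrained(
        BASE, num_labels=6, torch_dtype=torch.bfloat16
    )
    base.config.pad_token_id = tokenizer.pad_token_id

    model = PeftModel.from_pretrained(base, ADAPTER)
    model.eval()

    inputs = tokenizer(["Texto da redação..."], return_tensors="pt", padding=True)
    with torch.no_grad():
        class_id = model(**inputs).logits.argmax(-1).item()  # 0-5, i.e. 0-200 points in steps of 40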
[2025-03-16 18:11:49,148][__main__][INFO] - Running on Test
[2025-03-16 18:11:49,148][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text. If essay_year, grades, prompt, id_prompt, reference, essay_text, id, supporting_text are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-16 18:11:49,149][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-16 18:11:49,149][transformers.trainer][INFO] - Num examples = 138
[2025-03-16 18:11:49,149][transformers.trainer][INFO] - Batch size = 2
[2025-03-16 18:12:58,833][transformers][INFO] - {'accuracy': 0.6594202898550725, 'RMSE': 25.931906372573962, 'QWK': 0.6043890865954924, 'HDIV': 0.007246376811594235, 'Macro F1': 0.43574433494234377}
[2025-03-16 18:12:58,833][tensorboardX.summary][INFO] - Summary name eval/Macro F1 is illegal; using eval/Macro_F1 instead.
[2025-03-16 18:12:58,834][transformers.trainer][INFO] - Saving model checkpoint to ./results/llama31_8b-balanced/C1/best_model
[2025-03-16 18:12:59,177][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-03-16 18:12:59,179][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-03-16 18:12:59,231][__main__][INFO] - Fine Tuning Finished.