Built with Axolotl

Axolotl config (axolotl version 0.4.1):

adapter: lora
base_model: bigscience/bloomz-560m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 41cfaa113d837568_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/41cfaa113d837568_train_data.json
  type:
    field_instruction: premise
    field_output: hypothesis
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
device_map:
  '': 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/c9fe4bf0-c690-4c16-939a-e900e96b8da2
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- query_key_value
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 8832
micro_batch_size: 4
mlflow_experiment_name: /tmp/41cfaa113d837568_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.029732703000029732
wandb_entity: null
wandb_mode: online
wandb_name: 791c5a53-d2c7-4e74-a1e0-6065f33c468d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 791c5a53-d2c7-4e74-a1e0-6065f33c468d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
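
The following is a minimal inference sketch, not part of the original card. It assumes the adapter weights in this repository are a standard PEFT LoRA checkpoint for the base model above; the prompt string is illustrative only. Per the config, the `premise` field is used as the instruction and `hypothesis` as the target, so the prompt is simply the premise text.

```python
# Minimal inference sketch (assumption: the repo holds PEFT-compatible LoRA weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloomz-560m"
adapter_id = "Alphatao/c9fe4bf0-c690-4c16-939a-e900e96b8da2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Illustrative prompt: the config maps `premise` -> instruction, `hypothesis` -> output.
prompt = "A man inspects the uniform of a figure in some East Asian country."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```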

c9fe4bf0-c690-4c16-939a-e900e96b8da2

This model is a LoRA fine-tuned version of bigscience/bloomz-560m, trained on the JSON dataset referenced in the Axolotl config above (41cfaa113d837568_train_data.json). It achieves the following results on the evaluation set:

  • Loss: 2.0059
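
For context (not reported in the original card): if this loss is the mean per-token cross-entropy, it corresponds to a perplexity of roughly exp(2.0059) ≈ 7.4.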

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments (see the sketch after this list)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 8832
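
The sketch below is not part of the original card; it assumes that `adamw_bnb_8bit` maps to bitsandbytes' `AdamW8bit` and that `model` is the already-constructed PEFT-wrapped model. It also shows how the effective batch size follows from the config values above.

```python
# Sketch only: reconstructs the optimizer and effective batch size from the
# hyperparameters listed above. `model` is assumed to be defined elsewhere.
import bitsandbytes as bnb

micro_batch_size = 4               # per-device train/eval batch size
gradient_accumulation_steps = 8
total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 4 * 8 = 32

optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=2e-4,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
)
```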

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 24.65   | 0.0002 | 1    | 3.3734 |
| 17.8944 | 0.0196 | 100  | 2.3096 |
| 17.1278 | 0.0392 | 200  | 2.2631 |
| 20.0682 | 0.0588 | 300  | 2.2257 |
| 17.2783 | 0.0784 | 400  | 2.2106 |
| 20.953  | 0.0981 | 500  | 2.1982 |
| 18.0772 | 0.1177 | 600  | 2.1857 |
| 18.651  | 0.1373 | 700  | 2.1725 |
| 18.3775 | 0.1569 | 800  | 2.1652 |
| 18.2149 | 0.1765 | 900  | 2.1505 |
| 16.9199 | 0.1961 | 1000 | 2.1489 |
| 15.127  | 0.2157 | 1100 | 2.1384 |
| 18.3053 | 0.2353 | 1200 | 2.1365 |
| 16.8984 | 0.2550 | 1300 | 2.1276 |
| 16.4964 | 0.2746 | 1400 | 2.1240 |
| 15.2356 | 0.2942 | 1500 | 2.1224 |
| 17.4402 | 0.3138 | 1600 | 2.1186 |
| 16.7051 | 0.3334 | 1700 | 2.1171 |
| 15.4807 | 0.3530 | 1800 | 2.1089 |
| 16.4821 | 0.3726 | 1900 | 2.1048 |
| 17.9296 | 0.3922 | 2000 | 2.0956 |
| 20.2905 | 0.4118 | 2100 | 2.0972 |
| 18.5316 | 0.4315 | 2200 | 2.0932 |
| 17.8267 | 0.4511 | 2300 | 2.0877 |
| 16.5729 | 0.4707 | 2400 | 2.0820 |
| 18.0547 | 0.4903 | 2500 | 2.0832 |
| 16.7011 | 0.5099 | 2600 | 2.0749 |
| 16.4063 | 0.5295 | 2700 | 2.0728 |
| 15.8053 | 0.5491 | 2800 | 2.0709 |
| 18.0942 | 0.5687 | 2900 | 2.0662 |
| 16.7752 | 0.5884 | 3000 | 2.0613 |
| 16.2293 | 0.6080 | 3100 | 2.0576 |
| 18.3454 | 0.6276 | 3200 | 2.0525 |
| 14.8829 | 0.6472 | 3300 | 2.0511 |
| 15.7294 | 0.6668 | 3400 | 2.0501 |
| 17.0917 | 0.6864 | 3500 | 2.0480 |
| 17.7716 | 0.7060 | 3600 | 2.0442 |
| 15.2095 | 0.7256 | 3700 | 2.0400 |
| 17.79   | 0.7452 | 3800 | 2.0327 |
| 17.0274 | 0.7649 | 3900 | 2.0350 |
| 16.1242 | 0.7845 | 4000 | 2.0327 |
| 16.3873 | 0.8041 | 4100 | 2.0258 |
| 16.1772 | 0.8237 | 4200 | 2.0250 |
| 16.4274 | 0.8433 | 4300 | 2.0250 |
| 16.6272 | 0.8629 | 4400 | 2.0175 |
| 16.4568 | 0.8825 | 4500 | 2.0164 |
| 15.8856 | 0.9021 | 4600 | 2.0141 |
| 16.8868 | 0.9217 | 4700 | 2.0138 |
| 16.4577 | 0.9414 | 4800 | 2.0112 |
| 19.6963 | 0.9610 | 4900 | 2.0113 |
| 17.4742 | 0.9806 | 5000 | 2.0068 |
| 12.8209 | 1.0002 | 5100 | 2.0016 |
| 15.4832 | 1.0198 | 5200 | 2.0092 |
| 15.5734 | 1.0394 | 5300 | 2.0059 |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1