Upload folder using huggingface_hub
- .DS_Store +0 -0
- README.md +240 -0
- adapter_config.json +29 -0
- adapter_model.safetensors +3 -0
- additional_config.json +1 -0
- args.json +460 -0
- latest +1 -0
- rng_state_0.pth +3 -0
- rng_state_1.pth +3 -0
- rng_state_2.pth +3 -0
- rng_state_3.pth +3 -0
- scheduler.pt +3 -0
- trainer_state.json +0 -0
- training_args.bin +3 -0
- zero_to_fp32.py +760 -0
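The folder was pushed with `huggingface_hub`; a minimal sketch of pulling the relevant files back down with the same library (assuming the repo id `xypkent/visjudge-qwen2.5-vl-7b-lora` used in the README below):

```python
# Minimal sketch: download the adapter files from this repo with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xypkent/visjudge-qwen2.5-vl-7b-lora",            # repo id assumed from the README
    allow_patterns=["*.json", "*.safetensors", "README.md"],  # skip optimizer/RNG checkpoint state
)
print(local_dir)
```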
.DS_Store
ADDED
Binary file (6.15 kB).
README.md
ADDED
@@ -0,0 +1,240 @@
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: peft
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-text-to-text
tags:
- visualization
- quality-assessment
- lora
- qwen2.5-vl
- visjudge
- aesthetics
- grpo
datasets:
- xypkent/VisJudgeBench
---

# VisJudge: Qwen2.5-VL-7B LoRA for Visualization Quality Assessment

[arXiv:2510.22373](https://arxiv.org/abs/2510.22373)
[Dataset: VisJudgeBench](https://huggingface.co/datasets/xypkent/VisJudgeBench)

**VisJudge** is a specialized model fine-tuned on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for visualization quality and aesthetics assessment. It significantly outperforms state-of-the-art multimodal large language models (MLLMs) including GPT-5, GPT-4o, and Claude-4-Sonnet on visualization evaluation tasks.

📄 **Paper**: [VisJudge-Bench: Aesthetics and Quality Assessment of Visualizations](https://arxiv.org/abs/2510.22373)

## 🎯 Model Overview

VisJudge addresses the significant gaps between general MLLMs and human expert judgment in visualization quality assessment. Trained using **GRPO (Group Relative Policy Optimization)** on the **VisJudgeBench** dataset containing 3,090 expert-annotated samples, VisJudge evaluates visualizations across the **Fidelity-Expressiveness-Aesthetics** framework.

### Key Features

- **🏆 State-of-the-Art Performance**: 19.8% MAE improvement over GPT-5
- **📊 Six-Dimensional Evaluation**: Data Fidelity, Semantic Readability, Insight Discovery, Design Style, Visual Composition, Color Harmony
- **🎨 Comprehensive Coverage**: Supports 32 visualization types including single charts, multi-panel views, and dashboards
- **🔬 Expert-Level Assessment**: Achieves 0.681 correlation with human experts (vs. 0.429 for GPT-5)

## 🏆 Performance Benchmarks

### Overall Performance Comparison

| Model | MAE ↓ | MSE ↓ | Correlation ↑ |
| ------------------ | --------- | --------- | --------- |
| **VisJudge** | **0.442** | **0.306** | **0.681** |
| GPT-5 | 0.551 | 0.484 | 0.429 |
| GPT-4o | 0.609 | 0.575 | 0.482 |
| Claude-4-Sonnet | 0.618 | 0.596 | 0.470 |
| Gemini-2.0-Flash | 0.680 | 0.716 | 0.395 |
| Gemini-2.5-Pro | 0.661 | 0.674 | 0.266 |
| Claude-3.5-Sonnet | 0.823 | 1.006 | 0.395 |
| Qwen2.5-VL-7B | 1.048 | 1.502 | 0.322 |

**Key Achievements:**
- 🎯 **19.8% MAE improvement** over GPT-5 (0.551 → 0.442)
- 📈 **58.7% higher correlation** with human experts vs GPT-5 (0.429 → 0.681)
- 🏅 **Outperforms all commercial MLLMs** across all metrics

### Performance by Evaluation Dimensions (MAE ↓)

| Model | Overall | Data Fidelity | Semantic Readability | Insight Discovery | Design Style | Visual Composition | Color Harmony |
| ------------ | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| **VisJudge** | **0.442** | **0.662** | **0.649** | **0.679** | **0.581** | **0.546** | **0.604** |
| GPT-5 | 0.551 | 0.861 | 0.780 | 0.776 | 0.648 | 0.698 | 0.682 |
| GPT-4o | 0.609 | 0.986 | 0.804 | 0.742 | 0.608 | 0.694 | 0.657 |
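
For reference, a minimal sketch of how the headline metrics in the tables above can be computed from paired model and expert scores (assuming Pearson correlation; the exact evaluation protocol is the one defined in the paper):

```python
# Illustrative only: MAE, MSE, and (assumed) Pearson correlation between
# model-predicted overall scores and human expert scores on the 1-5 scale.
import numpy as np

def score_metrics(pred, expert):
    pred, expert = np.asarray(pred, dtype=float), np.asarray(expert, dtype=float)
    err = pred - expert
    return {
        "MAE": float(np.mean(np.abs(err))),
        "MSE": float(np.mean(err ** 2)),
        "Correlation": float(np.corrcoef(pred, expert)[0, 1]),
    }

# Toy example with made-up scores.
print(score_metrics([3.5, 4.0, 2.5], [3.0, 4.5, 2.0]))
```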

## 🔍 Evaluation Framework

VisJudge evaluates visualizations across three fundamental dimensions with six measurable metrics:

### 1. Fidelity - Data Accuracy and Truthfulness
- **Data Fidelity**: Ensures visual encodings accurately reflect original data without misleading interpretations

### 2. Expressiveness - Information Clarity and Understandability
- **Semantic Readability**: Assesses clarity of information encoding and unambiguous decoding
- **Insight Discovery**: Evaluates effectiveness in revealing data patterns, trends, and outliers

### 3. Aesthetics - Visual Aesthetics and Refinement
- **Design Style**: Measures innovation and uniqueness of design elements
- **Visual Composition**: Focuses on spatial layout, balance, and element positioning
- **Color Harmony**: Assesses color coordination and functional effectiveness

## 🚀 Usage

### Installation

```bash
pip install transformers peft torch pillow
```

### Quick Start

```python
from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from PIL import Image
import torch

# Load base model (Qwen2.5-VL uses the Qwen2_5_VL* model class; requires transformers >= 4.49)
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load VisJudge LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    "xypkent/visjudge-qwen2.5-vl-7b-lora"
)

# Load processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Prepare your visualization
image = Image.open("path/to/your/visualization.png")

# Evaluation prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": """Please evaluate this visualization across six dimensions on a scale of 1-5:

1. Data Fidelity: Does the visual encoding accurately reflect the data?
2. Semantic Readability: Is the information clearly encoded and easy to decode?
3. Insight Discovery: Does it effectively reveal patterns and insights?
4. Design Style: Is the design innovative and distinctive?
5. Visual Composition: Is the layout balanced and well-organized?
6. Color Harmony: Are colors coordinated and effective?

Provide scores and explanations for each dimension."""}
        ]
    }
]

# Process and generate
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (skip the echoed prompt)
generated = outputs[:, inputs["input_ids"].shape[1]:]
response = processor.batch_decode(generated, skip_special_tokens=True)[0]
print(response)
```
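
For deployment you can optionally fold the adapter into the base weights so inference no longer needs PEFT at runtime. A minimal sketch, continuing from the `model` and `processor` objects above (the output directory name is illustrative):

```python
# Optional: merge the LoRA adapter into the base model for standalone inference.
merged = model.merge_and_unload()          # plain Qwen2.5-VL model with the LoRA weights folded in
merged.save_pretrained("visjudge-qwen2.5-vl-7b-merged")
processor.save_pretrained("visjudge-qwen2.5-vl-7b-merged")
```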

### Example Output

```
Data Fidelity: 4.0 - The visual encoding accurately represents the data with appropriate scales.
Semantic Readability: 4.5 - Clear labels and legend make the information easy to understand.
Insight Discovery: 3.5 - The chart reveals basic trends but could better highlight key patterns.
Design Style: 3.0 - Uses standard design elements without much innovation.
Visual Composition: 4.0 - Well-balanced layout with good spacing between elements.
Color Harmony: 3.5 - Color palette is coordinated but could be more distinctive.
Overall Score: 3.75
```
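
To work with the scores programmatically, a minimal sketch that extracts the per-dimension scores from a free-text response in the format shown above (the regex is illustrative and assumes the `Dimension: score - explanation` layout):

```python
# Illustrative parser for the example output format above.
import re

def parse_scores(response: str) -> dict:
    scores = {}
    for line in response.splitlines():
        m = re.match(r"\s*([A-Za-z ]+):\s*([0-9.]+)", line)
        if m and m.group(1).strip() != "Overall Score":
            scores[m.group(1).strip()] = float(m.group(2))
    return scores

scores = parse_scores(response)  # `response` from the Quick Start above
print(scores)
print("Mean:", round(sum(scores.values()) / len(scores), 2))
```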

## 📊 Training Details

### Dataset

- **Name**: [VisJudgeBench](https://huggingface.co/datasets/xypkent/VisJudgeBench)
- **Size**: 3,090 expert-annotated visualization samples
- **Types**: Single visualizations, multi-panel views, dashboards
- **Coverage**: 32 chart types including bar charts, line charts, heatmaps, sankey diagrams, treemaps, dashboards, and more

### Training Method

- **Base Model**: Qwen2.5-VL-7B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation) + GRPO (Group Relative Policy Optimization)
- **LoRA Configuration**:
  - Rank: 128
  - Alpha: 256
  - Target Modules: All attention and MLP layers
- **Training Framework**: PEFT 0.14.0

### Key Improvements

✅ **Human-like Scoring**: Mean score μ=3.11 (vs. human μ=3.13), eliminating the score inflation bias seen in other models
✅ **Balanced Assessment**: Avoids both overly conservative (Gemini-2.5-Pro μ=3.02) and overly generous (Qwen2.5-VL-7B μ=3.89) biases
✅ **Complexity Handling**: Maintains performance across single visualizations (0.577), multi-panel views (0.565), and complex dashboards (0.375)

## 📈 Supported Visualization Types

### Single Visualizations (22 types)
Bar Chart, Pie Chart, Line Chart, Area Chart, Treemap, Sankey Diagram, Heatmap, Scatter Plot, Histogram, Donut Chart, Funnel Chart, Bubble Chart, Choropleth Map, Radar Chart, Network Graph, Candlestick Chart, Gauge Chart, Box Plot, Point Map, Word Cloud, Violin Plot, and more

### Multiple Visualizations (5 types)
Comparison Views, Small Multiples, Coordinated Views, Overview+Detail

### Dashboards (5 types)
Analytical Dashboard, Operational Dashboard, Interactive Dashboard, Strategic Dashboard

## ⚠️ Limitations

- Performance degrades with increasing visualization complexity (dashboards are most challenging)
- Best suited for visualization types seen during training
- Aesthetic dimensions (especially Visual Composition in complex dashboards) remain challenging
- Inherits any biases present in the base Qwen2.5-VL model

## 📝 Citation

If you use VisJudge in your research, please cite:

```bibtex
@misc{xie2025visjudge,
      title={VisJudge-Bench: Aesthetics and Quality Assessment of Visualizations},
      author={Yupeng Xie and Zhiyang Zhang and Yifan Wu and Sirong Lu and Jiayi Zhang and Zhaoyang Yu and Jinlin Wang and Sirui Hong and Bang Liu and Chenglin Wu and Yuyu Luo},
      year={2025},
      eprint={2510.22373},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.22373}
}
```

## 🔗 Resources

- 📄 **Paper**: [arXiv:2510.22373](https://arxiv.org/abs/2510.22373)
- 🤗 **Dataset**: [VisJudgeBench on Hugging Face](https://huggingface.co/datasets/xypkent/VisJudgeBench)
- 💻 **GitHub**: [VisJudgeBench Repository](https://github.com/xypkent/VisJudgeBench)
- 📧 **Contact**: [email protected]

## 📜 License

This model is released under the Apache 2.0 License, consistent with the base Qwen2.5-VL model.

## 🙏 Acknowledgments

This model is built upon [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) by Alibaba Cloud. We thank the Qwen team for their excellent foundation model.

---

**Developed by**: Yupeng Xie and team at HKUST-GZ

**Framework Versions**: PEFT 0.14.0 | Transformers 4.x | PyTorch 2.x
adapter_config.json
ADDED
@@ -0,0 +1,29 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "/root/.cache/modelscope/hub/models/qwen/Qwen2___5-VL-7B-Instruct",
  "bias": "none",
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 128,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": [],
  "peft_type": "LORA",
  "r": 128,
  "rank_pattern": {},
  "revision": null,
  "target_modules": "^(model.*\\.(up_proj|q_proj|down_proj|k_proj|gate_proj|v_proj|o_proj))$",
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
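The adapter hyperparameters above can also be read back through PEFT instead of parsing the JSON by hand; a minimal sketch (repo id assumed from the README):

```python
# Minimal sketch: inspect the shipped adapter configuration via PEFT.
from peft import PeftConfig

config = PeftConfig.from_pretrained("xypkent/visjudge-qwen2.5-vl-7b-lora")
print(config.r, config.lora_alpha, config.lora_dropout)  # 128, 128, 0.05 per adapter_config.json
print(config.target_modules)                             # regex over the attention/MLP projections
```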
adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:417f2180e42957b2d3d760bc677ed134db8bfba79d5bf5ae9f6d69de5b2c8d82
size 645976488
additional_config.json
ADDED
@@ -0,0 +1 @@
{"lora_dtype": null, "lorap_lr_ratio": null, "lorap_emb_lr": 1e-06}
args.json
ADDED
@@ -0,0 +1,460 @@
| 1 |
+
{
|
| 2 |
+
"output_dir": "/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215",
|
| 3 |
+
"overwrite_output_dir": false,
|
| 4 |
+
"do_train": false,
|
| 5 |
+
"do_eval": false,
|
| 6 |
+
"do_predict": false,
|
| 7 |
+
"eval_strategy": "steps",
|
| 8 |
+
"prediction_loss_only": false,
|
| 9 |
+
"per_device_train_batch_size": 1,
|
| 10 |
+
"per_device_eval_batch_size": 2,
|
| 11 |
+
"per_gpu_train_batch_size": null,
|
| 12 |
+
"per_gpu_eval_batch_size": null,
|
| 13 |
+
"gradient_accumulation_steps": 4,
|
| 14 |
+
"eval_accumulation_steps": null,
|
| 15 |
+
"eval_delay": 0,
|
| 16 |
+
"torch_empty_cache_steps": null,
|
| 17 |
+
"learning_rate": 1e-05,
|
| 18 |
+
"weight_decay": 0.01,
|
| 19 |
+
"adam_beta1": 0.9,
|
| 20 |
+
"adam_beta2": 0.95,
|
| 21 |
+
"adam_epsilon": 1e-08,
|
| 22 |
+
"max_grad_norm": 0.5,
|
| 23 |
+
"num_train_epochs": 5.0,
|
| 24 |
+
"max_steps": -1,
|
| 25 |
+
"lr_scheduler_type": "cosine",
|
| 26 |
+
"lr_scheduler_kwargs": null,
|
| 27 |
+
"warmup_ratio": 0.1,
|
| 28 |
+
"warmup_steps": 0,
|
| 29 |
+
"log_level": "passive",
|
| 30 |
+
"log_level_replica": "warning",
|
| 31 |
+
"log_on_each_node": true,
|
| 32 |
+
"logging_dir": "/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215/runs",
|
| 33 |
+
"logging_strategy": "steps",
|
| 34 |
+
"logging_first_step": true,
|
| 35 |
+
"logging_steps": 1,
|
| 36 |
+
"logging_nan_inf_filter": true,
|
| 37 |
+
"save_strategy": "epoch",
|
| 38 |
+
"save_steps": 500,
|
| 39 |
+
"save_total_limit": 2,
|
| 40 |
+
"save_safetensors": true,
|
| 41 |
+
"save_on_each_node": false,
|
| 42 |
+
"save_only_model": false,
|
| 43 |
+
"restore_callback_states_from_checkpoint": false,
|
| 44 |
+
"no_cuda": false,
|
| 45 |
+
"use_cpu": false,
|
| 46 |
+
"use_mps_device": false,
|
| 47 |
+
"seed": 42,
|
| 48 |
+
"data_seed": 42,
|
| 49 |
+
"jit_mode_eval": false,
|
| 50 |
+
"use_ipex": false,
|
| 51 |
+
"bf16": true,
|
| 52 |
+
"fp16": false,
|
| 53 |
+
"fp16_opt_level": "O1",
|
| 54 |
+
"half_precision_backend": "auto",
|
| 55 |
+
"bf16_full_eval": false,
|
| 56 |
+
"fp16_full_eval": false,
|
| 57 |
+
"tf32": null,
|
| 58 |
+
"local_rank": 0,
|
| 59 |
+
"ddp_backend": null,
|
| 60 |
+
"tpu_num_cores": null,
|
| 61 |
+
"tpu_metrics_debug": false,
|
| 62 |
+
"debug": null,
|
| 63 |
+
"dataloader_drop_last": false,
|
| 64 |
+
"eval_steps": 25.0,
|
| 65 |
+
"dataloader_num_workers": 8,
|
| 66 |
+
"dataloader_prefetch_factor": null,
|
| 67 |
+
"past_index": -1,
|
| 68 |
+
"run_name": "/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215",
|
| 69 |
+
"disable_tqdm": null,
|
| 70 |
+
"remove_unused_columns": false,
|
| 71 |
+
"label_names": null,
|
| 72 |
+
"load_best_model_at_end": false,
|
| 73 |
+
"metric_for_best_model": "reward",
|
| 74 |
+
"greater_is_better": true,
|
| 75 |
+
"ignore_data_skip": false,
|
| 76 |
+
"fsdp": "",
|
| 77 |
+
"fsdp_min_num_params": 0,
|
| 78 |
+
"fsdp_config": null,
|
| 79 |
+
"tp_size": 0,
|
| 80 |
+
"fsdp_transformer_layer_cls_to_wrap": null,
|
| 81 |
+
"accelerator_config": {
|
| 82 |
+
"dispatch_batches": false
|
| 83 |
+
},
|
| 84 |
+
"deepspeed": {
|
| 85 |
+
"fp16": {
|
| 86 |
+
"enabled": "auto",
|
| 87 |
+
"loss_scale": 0,
|
| 88 |
+
"loss_scale_window": 1000,
|
| 89 |
+
"initial_scale_power": 16,
|
| 90 |
+
"hysteresis": 2,
|
| 91 |
+
"min_loss_scale": 1
|
| 92 |
+
},
|
| 93 |
+
"bf16": {
|
| 94 |
+
"enabled": "auto"
|
| 95 |
+
},
|
| 96 |
+
"zero_optimization": {
|
| 97 |
+
"stage": 2,
|
| 98 |
+
"offload_optimizer": {
|
| 99 |
+
"device": "none",
|
| 100 |
+
"pin_memory": true
|
| 101 |
+
},
|
| 102 |
+
"allgather_partitions": true,
|
| 103 |
+
"allgather_bucket_size": 200000000.0,
|
| 104 |
+
"overlap_comm": false,
|
| 105 |
+
"reduce_scatter": true,
|
| 106 |
+
"reduce_bucket_size": 200000000.0,
|
| 107 |
+
"contiguous_gradients": true
|
| 108 |
+
},
|
| 109 |
+
"gradient_accumulation_steps": "auto",
|
| 110 |
+
"gradient_clipping": "auto",
|
| 111 |
+
"steps_per_print": 2000,
|
| 112 |
+
"train_batch_size": "auto",
|
| 113 |
+
"train_micro_batch_size_per_gpu": "auto",
|
| 114 |
+
"wall_clock_breakdown": false
|
| 115 |
+
},
|
| 116 |
+
"label_smoothing_factor": 0.0,
|
| 117 |
+
"optim": "adamw_torch",
|
| 118 |
+
"optim_args": null,
|
| 119 |
+
"adafactor": false,
|
| 120 |
+
"group_by_length": false,
|
| 121 |
+
"length_column_name": "length",
|
| 122 |
+
"report_to": [
|
| 123 |
+
"wandb"
|
| 124 |
+
],
|
| 125 |
+
"ddp_find_unused_parameters": null,
|
| 126 |
+
"ddp_bucket_cap_mb": null,
|
| 127 |
+
"ddp_broadcast_buffers": null,
|
| 128 |
+
"dataloader_pin_memory": true,
|
| 129 |
+
"dataloader_persistent_workers": false,
|
| 130 |
+
"skip_memory_metrics": true,
|
| 131 |
+
"use_legacy_prediction_loop": false,
|
| 132 |
+
"push_to_hub": false,
|
| 133 |
+
"resume_from_checkpoint": "/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250912_145315/v0-20250912-145334/checkpoint-3627",
|
| 134 |
+
"hub_model_id": null,
|
| 135 |
+
"hub_strategy": "every_save",
|
| 136 |
+
"hub_token": null,
|
| 137 |
+
"hub_private_repo": null,
|
| 138 |
+
"hub_always_push": false,
|
| 139 |
+
"gradient_checkpointing": true,
|
| 140 |
+
"gradient_checkpointing_kwargs": null,
|
| 141 |
+
"include_inputs_for_metrics": false,
|
| 142 |
+
"include_for_metrics": [],
|
| 143 |
+
"eval_do_concat_batches": true,
|
| 144 |
+
"fp16_backend": "auto",
|
| 145 |
+
"push_to_hub_model_id": null,
|
| 146 |
+
"push_to_hub_organization": null,
|
| 147 |
+
"push_to_hub_token": null,
|
| 148 |
+
"mp_parameters": "",
|
| 149 |
+
"auto_find_batch_size": false,
|
| 150 |
+
"full_determinism": false,
|
| 151 |
+
"torchdynamo": null,
|
| 152 |
+
"ray_scope": "last",
|
| 153 |
+
"ddp_timeout": 18000000,
|
| 154 |
+
"torch_compile": false,
|
| 155 |
+
"torch_compile_backend": null,
|
| 156 |
+
"torch_compile_mode": null,
|
| 157 |
+
"include_tokens_per_second": false,
|
| 158 |
+
"include_num_input_tokens_seen": false,
|
| 159 |
+
"neftune_noise_alpha": null,
|
| 160 |
+
"optim_target_modules": null,
|
| 161 |
+
"batch_eval_metrics": false,
|
| 162 |
+
"eval_on_start": false,
|
| 163 |
+
"use_liger_kernel": false,
|
| 164 |
+
"eval_use_gather_object": false,
|
| 165 |
+
"average_tokens_across_devices": false,
|
| 166 |
+
"sortish_sampler": false,
|
| 167 |
+
"predict_with_generate": false,
|
| 168 |
+
"generation_max_length": null,
|
| 169 |
+
"generation_num_beams": null,
|
| 170 |
+
"generation_config": null,
|
| 171 |
+
"vit_gradient_checkpointing": null,
|
| 172 |
+
"check_model": true,
|
| 173 |
+
"acc_strategy": "token",
|
| 174 |
+
"train_dataloader_shuffle": true,
|
| 175 |
+
"max_epochs": null,
|
| 176 |
+
"aligner_lr": null,
|
| 177 |
+
"vit_lr": null,
|
| 178 |
+
"optimizer": null,
|
| 179 |
+
"use_logits_to_keep": null,
|
| 180 |
+
"channels": null,
|
| 181 |
+
"ds3_gather_for_generation": true,
|
| 182 |
+
"metric_warmup_step": 0,
|
| 183 |
+
"fsdp_num": 1,
|
| 184 |
+
"acc_steps": 1,
|
| 185 |
+
"eval_use_evalscope": false,
|
| 186 |
+
"eval_datasets": [],
|
| 187 |
+
"eval_limit": null,
|
| 188 |
+
"eval_datasets_args": null,
|
| 189 |
+
"eval_generation_config": null,
|
| 190 |
+
"model": "qwen/Qwen2.5-VL-7B-Instruct",
|
| 191 |
+
"model_type": "qwen2_5_vl",
|
| 192 |
+
"model_revision": null,
|
| 193 |
+
"task_type": "causal_lm",
|
| 194 |
+
"torch_dtype": "bfloat16",
|
| 195 |
+
"attn_impl": null,
|
| 196 |
+
"num_labels": null,
|
| 197 |
+
"problem_type": null,
|
| 198 |
+
"rope_scaling": null,
|
| 199 |
+
"device_map": null,
|
| 200 |
+
"max_memory": {},
|
| 201 |
+
"local_repo_path": null,
|
| 202 |
+
"init_strategy": null,
|
| 203 |
+
"template": "qwen2_5_vl",
|
| 204 |
+
"system": "You are a rigorous data visualization evaluation expert.\n\nFor each visualization, provide a detailed evaluation based on the classical \"Fidelity-Expressiveness-Elegance\" framework across the following six dimensions:\n\n- Fidelity: Data Fidelity - whether the visual representation accurately reflects the underlying data\n- Expressiveness: Semantic Readability, Insight Discovery - whether information is clearly conveyed and meaningful patterns are discoverable \n- Elegance: Design Style, Visual Composition, Color Harmony - whether the visualization has aesthetic appeal and professional design quality\n\nFor each dimension, give a score (1-5) and reasoning based on the evaluation criteria. The score for each metric should be an integer from 1 to 5, determined strictly according to the metric descriptions and scoring criteria.\n\nResponse format (STRICT):\n{\n \"data_fidelity\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"semantic_readability\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"insight_discovery\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"design_style\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"visual_composition\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"color_harmony\": {\"score\": 1-5, \"reasoning\": \"Your explanation here.\"},\n \"average_score\": \"the average of the above six scores, rounded to 2 decimals\"\n}\n\nWhere for each metric, score should be an integer from 1 to 5 based on the metric descriptions and scoring criteria, and reasoning should explain your choice. average_score is the average of all six scores rounded to 2 decimal places.\nDo NOT output any text before or after the JSON object.",
|
| 205 |
+
"max_length": null,
|
| 206 |
+
"truncation_strategy": "left",
|
| 207 |
+
"max_pixels": null,
|
| 208 |
+
"agent_template": null,
|
| 209 |
+
"norm_bbox": null,
|
| 210 |
+
"use_chat_template": true,
|
| 211 |
+
"padding_free": false,
|
| 212 |
+
"padding_side": "right",
|
| 213 |
+
"loss_scale": "last_round",
|
| 214 |
+
"sequence_parallel_size": 1,
|
| 215 |
+
"response_prefix": null,
|
| 216 |
+
"template_backend": "swift",
|
| 217 |
+
"dataset": [
|
| 218 |
+
"swift/plugin/grpo_dataset_expert_filtered/backup/train_dataset.jsonl"
|
| 219 |
+
],
|
| 220 |
+
"val_dataset": [],
|
| 221 |
+
"split_dataset_ratio": 0.01,
|
| 222 |
+
"dataset_num_proc": 1,
|
| 223 |
+
"load_from_cache_file": true,
|
| 224 |
+
"dataset_shuffle": true,
|
| 225 |
+
"val_dataset_shuffle": false,
|
| 226 |
+
"streaming": false,
|
| 227 |
+
"interleave_prob": null,
|
| 228 |
+
"stopping_strategy": "first_exhausted",
|
| 229 |
+
"shuffle_buffer_size": 1000,
|
| 230 |
+
"download_mode": "reuse_dataset_if_exists",
|
| 231 |
+
"columns": {},
|
| 232 |
+
"strict": false,
|
| 233 |
+
"model_name": null,
|
| 234 |
+
"model_author": null,
|
| 235 |
+
"custom_dataset_info": [],
|
| 236 |
+
"quant_method": null,
|
| 237 |
+
"quant_bits": null,
|
| 238 |
+
"hqq_axis": null,
|
| 239 |
+
"bnb_4bit_compute_dtype": "bfloat16",
|
| 240 |
+
"bnb_4bit_quant_type": "nf4",
|
| 241 |
+
"bnb_4bit_use_double_quant": true,
|
| 242 |
+
"bnb_4bit_quant_storage": null,
|
| 243 |
+
"max_new_tokens": 1024,
|
| 244 |
+
"temperature": 0.8,
|
| 245 |
+
"top_k": 50,
|
| 246 |
+
"top_p": 0.9,
|
| 247 |
+
"repetition_penalty": 1.0,
|
| 248 |
+
"num_beams": 1,
|
| 249 |
+
"stream": false,
|
| 250 |
+
"stop_words": [],
|
| 251 |
+
"logprobs": false,
|
| 252 |
+
"top_logprobs": null,
|
| 253 |
+
"ckpt_dir": null,
|
| 254 |
+
"lora_modules": [],
|
| 255 |
+
"tuner_backend": "peft",
|
| 256 |
+
"train_type": "lora",
|
| 257 |
+
"adapters": [],
|
| 258 |
+
"external_plugins": [
|
| 259 |
+
"swift/plugin/plugin.py"
|
| 260 |
+
],
|
| 261 |
+
"model_kwargs": {},
|
| 262 |
+
"load_args": false,
|
| 263 |
+
"load_data_args": false,
|
| 264 |
+
"packing": false,
|
| 265 |
+
"packing_cache": null,
|
| 266 |
+
"custom_register_path": [],
|
| 267 |
+
"use_hf": false,
|
| 268 |
+
"ignore_args_error": false,
|
| 269 |
+
"use_swift_lora": false,
|
| 270 |
+
"freeze_parameters": [
|
| 271 |
+
"visual",
|
| 272 |
+
"visual.merger"
|
| 273 |
+
],
|
| 274 |
+
"freeze_parameters_regex": null,
|
| 275 |
+
"freeze_parameters_ratio": 0.0,
|
| 276 |
+
"trainable_parameters": [],
|
| 277 |
+
"trainable_parameters_regex": null,
|
| 278 |
+
"freeze_llm": false,
|
| 279 |
+
"freeze_vit": true,
|
| 280 |
+
"freeze_aligner": true,
|
| 281 |
+
"target_modules": [
|
| 282 |
+
"all-linear"
|
| 283 |
+
],
|
| 284 |
+
"target_regex": null,
|
| 285 |
+
"modules_to_save": [],
|
| 286 |
+
"lora_rank": 128,
|
| 287 |
+
"lora_alpha": 128,
|
| 288 |
+
"lora_dropout": 0.05,
|
| 289 |
+
"lora_bias": "none",
|
| 290 |
+
"lora_dtype": null,
|
| 291 |
+
"lorap_lr_ratio": null,
|
| 292 |
+
"use_rslora": false,
|
| 293 |
+
"use_dora": false,
|
| 294 |
+
"lora_ga_batch_size": 2,
|
| 295 |
+
"lora_ga_iters": 2,
|
| 296 |
+
"lora_ga_max_length": 1024,
|
| 297 |
+
"lora_ga_direction": "ArB2r",
|
| 298 |
+
"lora_ga_scale": "stable",
|
| 299 |
+
"lora_ga_stable_gamma": 16,
|
| 300 |
+
"init_weights": true,
|
| 301 |
+
"fourier_n_frequency": 2000,
|
| 302 |
+
"fourier_scaling": 300.0,
|
| 303 |
+
"boft_block_size": 4,
|
| 304 |
+
"boft_block_num": 0,
|
| 305 |
+
"boft_n_butterfly_factor": 1,
|
| 306 |
+
"boft_dropout": 0.0,
|
| 307 |
+
"vera_rank": 256,
|
| 308 |
+
"vera_projection_prng_key": 0,
|
| 309 |
+
"vera_dropout": 0.0,
|
| 310 |
+
"vera_d_initial": 0.1,
|
| 311 |
+
"adapter_act": "gelu",
|
| 312 |
+
"adapter_length": 128,
|
| 313 |
+
"use_galore": false,
|
| 314 |
+
"galore_target_modules": null,
|
| 315 |
+
"galore_rank": 128,
|
| 316 |
+
"galore_update_proj_gap": 50,
|
| 317 |
+
"galore_scale": 1.0,
|
| 318 |
+
"galore_proj_type": "std",
|
| 319 |
+
"galore_optim_per_parameter": false,
|
| 320 |
+
"galore_with_embedding": false,
|
| 321 |
+
"galore_quantization": false,
|
| 322 |
+
"galore_proj_quant": false,
|
| 323 |
+
"galore_proj_bits": 4,
|
| 324 |
+
"galore_proj_group_size": 256,
|
| 325 |
+
"galore_cos_threshold": 0.4,
|
| 326 |
+
"galore_gamma_proj": 2,
|
| 327 |
+
"galore_queue_size": 5,
|
| 328 |
+
"adalora_target_r": 8,
|
| 329 |
+
"adalora_init_r": 12,
|
| 330 |
+
"adalora_tinit": 0,
|
| 331 |
+
"adalora_tfinal": 0,
|
| 332 |
+
"adalora_deltaT": 1,
|
| 333 |
+
"adalora_beta1": 0.85,
|
| 334 |
+
"adalora_beta2": 0.85,
|
| 335 |
+
"adalora_orth_reg_weight": 0.5,
|
| 336 |
+
"llamapro_num_new_blocks": 4,
|
| 337 |
+
"llamapro_num_groups": null,
|
| 338 |
+
"lisa_activated_layers": 0,
|
| 339 |
+
"lisa_step_interval": 20,
|
| 340 |
+
"reft_layer_key": null,
|
| 341 |
+
"reft_layers": null,
|
| 342 |
+
"reft_rank": 4,
|
| 343 |
+
"reft_intervention_type": "LoreftIntervention",
|
| 344 |
+
"reft_args": null,
|
| 345 |
+
"swanlab_token": null,
|
| 346 |
+
"swanlab_project": null,
|
| 347 |
+
"swanlab_workspace": null,
|
| 348 |
+
"swanlab_exp_name": null,
|
| 349 |
+
"swanlab_mode": "cloud",
|
| 350 |
+
"add_version": true,
|
| 351 |
+
"resume_only_model": false,
|
| 352 |
+
"create_checkpoint_symlink": false,
|
| 353 |
+
"lazy_tokenize": true,
|
| 354 |
+
"loss_type": "grpo",
|
| 355 |
+
"metric": null,
|
| 356 |
+
"zero_hpz_partition_size": null,
|
| 357 |
+
"sft_alpha": 0,
|
| 358 |
+
"reward_model": null,
|
| 359 |
+
"reward_adapters": [],
|
| 360 |
+
"reward_model_type": null,
|
| 361 |
+
"reward_model_revision": null,
|
| 362 |
+
"num_ppo_epochs": 4,
|
| 363 |
+
"whiten_rewards": false,
|
| 364 |
+
"kl_coef": 0.1,
|
| 365 |
+
"cliprange": 0.2,
|
| 366 |
+
"vf_coef": 0.1,
|
| 367 |
+
"cliprange_value": 0.2,
|
| 368 |
+
"gamma": 1.0,
|
| 369 |
+
"lam": 0.95,
|
| 370 |
+
"num_mini_batches": 1,
|
| 371 |
+
"local_rollout_forward_batch_size": 64,
|
| 372 |
+
"num_sample_generations": 10,
|
| 373 |
+
"response_length": 1024,
|
| 374 |
+
"missing_eos_penalty": null,
|
| 375 |
+
"epsilon": 0.2,
|
| 376 |
+
"epsilon_high": null,
|
| 377 |
+
"delta": null,
|
| 378 |
+
"num_infer_workers": null,
|
| 379 |
+
"vllm_mode": "colocate",
|
| 380 |
+
"vllm_device": null,
|
| 381 |
+
"vllm_gpu_memory_utilization": 0.8,
|
| 382 |
+
"vllm_max_model_len": 8192,
|
| 383 |
+
"vllm_max_num_seqs": null,
|
| 384 |
+
"vllm_enforce_eager": false,
|
| 385 |
+
"vllm_limit_mm_per_prompt": null,
|
| 386 |
+
"vllm_enable_prefix_caching": true,
|
| 387 |
+
"vllm_tensor_parallel_size": 1,
|
| 388 |
+
"vllm_server_base_url": null,
|
| 389 |
+
"vllm_server_host": null,
|
| 390 |
+
"vllm_server_port": 8000,
|
| 391 |
+
"vllm_server_timeout": 240.0,
|
| 392 |
+
"cosine_min_len_value_wrong": -0.5,
|
| 393 |
+
"cosine_max_len_value_wrong": 0.0,
|
| 394 |
+
"cosine_min_len_value_correct": 1.0,
|
| 395 |
+
"cosine_max_len_value_correct": 0.5,
|
| 396 |
+
"cosine_max_len": null,
|
| 397 |
+
"repetition_n_grams": 3,
|
| 398 |
+
"repetition_max_penalty": -1.0,
|
| 399 |
+
"reward_model_plugin": null,
|
| 400 |
+
"sync_ref_model": false,
|
| 401 |
+
"ref_model_sync_steps": 512,
|
| 402 |
+
"ref_model_mixup_alpha": 0.6,
|
| 403 |
+
"async_generate": false,
|
| 404 |
+
"tensor_parallel_size": null,
|
| 405 |
+
"sleep_level": 0,
|
| 406 |
+
"move_model_batches": null,
|
| 407 |
+
"offload_optimizer": false,
|
| 408 |
+
"offload_model": false,
|
| 409 |
+
"gc_collect_after_offload": false,
|
| 410 |
+
"multi_turn_func": null,
|
| 411 |
+
"multi_turn_scheduler": null,
|
| 412 |
+
"max_turns": null,
|
| 413 |
+
"completion_length_limit_scope": "per_round",
|
| 414 |
+
"dynamic_sample": false,
|
| 415 |
+
"max_resample_times": 3,
|
| 416 |
+
"overlong_filter": false,
|
| 417 |
+
"soft_max_length": null,
|
| 418 |
+
"soft_cache_length": null,
|
| 419 |
+
"scale_rewards": true,
|
| 420 |
+
"wandb_log_unique_prompts": null,
|
| 421 |
+
"generation_batch_size": null,
|
| 422 |
+
"steps_per_generation": null,
|
| 423 |
+
"num_generations": 8,
|
| 424 |
+
"reward_funcs": [
|
| 425 |
+
"external_visualization_json_combined"
|
| 426 |
+
],
|
| 427 |
+
"reward_weights": null,
|
| 428 |
+
"log_completions": true,
|
| 429 |
+
"use_vllm": true,
|
| 430 |
+
"num_iterations": 1,
|
| 431 |
+
"teacher_model": null,
|
| 432 |
+
"teacher_adapters": [],
|
| 433 |
+
"teacher_model_type": null,
|
| 434 |
+
"teacher_model_revision": null,
|
| 435 |
+
"rlhf_type": "grpo",
|
| 436 |
+
"ref_model": null,
|
| 437 |
+
"ref_model_type": null,
|
| 438 |
+
"ref_model_revision": null,
|
| 439 |
+
"beta": 0.01,
|
| 440 |
+
"label_smoothing": 0,
|
| 441 |
+
"max_completion_length": 1024,
|
| 442 |
+
"rpo_alpha": 1.0,
|
| 443 |
+
"cpo_alpha": 1.0,
|
| 444 |
+
"simpo_gamma": 1,
|
| 445 |
+
"desirable_weight": 1.0,
|
| 446 |
+
"undesirable_weight": 1.0,
|
| 447 |
+
"center_rewards_coefficient": null,
|
| 448 |
+
"lmbda": 0.5,
|
| 449 |
+
"seq_kd": false,
|
| 450 |
+
"rank": 0,
|
| 451 |
+
"global_world_size": 4,
|
| 452 |
+
"local_world_size": 4,
|
| 453 |
+
"model_suffix": "Qwen2.5-VL-7B-Instruct",
|
| 454 |
+
"model_info": "ModelInfo(model_type='qwen2_5_vl', model_dir='/root/.cache/modelscope/hub/models/qwen/Qwen2___5-VL-7B-Instruct', torch_dtype=torch.bfloat16, max_model_len=128000, quant_method=None, quant_bits=None, rope_scaling={'type': 'default', 'mrope_section': [16, 24, 24], 'rope_type': 'default'}, config=None, task_type='causal_lm', num_labels=None)",
|
| 455 |
+
"model_meta": "ModelMeta(model_type='qwen2_5_vl', model_groups=[ModelGroup(models=[Model(ms_model_id='Qwen/Qwen2.5-VL-3B-Instruct', hf_model_id='Qwen/Qwen2.5-VL-3B-Instruct', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-7B-Instruct', hf_model_id='Qwen/Qwen2.5-VL-7B-Instruct', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-32B-Instruct', hf_model_id='Qwen/Qwen2.5-VL-32B-Instruct', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-72B-Instruct', hf_model_id='Qwen/Qwen2.5-VL-72B-Instruct', model_path=None, ms_revision=None, hf_revision=None)], ignore_patterns=None, requires=None, tags=[]), ModelGroup(models=[Model(ms_model_id='Qwen/Qwen2.5-VL-3B-Instruct-AWQ', hf_model_id='Qwen/Qwen2.5-VL-3B-Instruct-AWQ', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-7B-Instruct-AWQ', hf_model_id='Qwen/Qwen2.5-VL-7B-Instruct-AWQ', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-32B-Instruct-AWQ', hf_model_id='Qwen/Qwen2.5-VL-32B-Instruct-AWQ', model_path=None, ms_revision=None, hf_revision=None), Model(ms_model_id='Qwen/Qwen2.5-VL-72B-Instruct-AWQ', hf_model_id='Qwen/Qwen2.5-VL-72B-Instruct-AWQ', model_path=None, ms_revision=None, hf_revision=None)], ignore_patterns=None, requires=None, tags=[])], template='qwen2_5_vl', get_function=<function get_model_tokenizer_qwen2_5_vl at 0x7f8943e59a20>, model_arch='qwen2_vl', architectures=['Qwen2_5_VLForConditionalGeneration'], additional_saved_files=[], torch_dtype=None, is_multimodal=True, is_reward=False, task_type=None, ignore_patterns=None, requires=['transformers>=4.49', 'qwen_vl_utils>=0.0.6', 'decord'], tags=['vision', 'video'])",
|
| 456 |
+
"model_dir": "/root/.cache/modelscope/hub/models/qwen/Qwen2___5-VL-7B-Instruct",
|
| 457 |
+
"hub": "<class 'swift.hub.hub.MSHub'>",
|
| 458 |
+
"evaluation_strategy": "steps",
|
| 459 |
+
"training_args": "GRPOConfig(output_dir='/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215', overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, eval_strategy=<IntervalStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=4, eval_accumulation_steps=None, eval_delay=0, torch_empty_cache_steps=None, learning_rate=1e-05, weight_decay=0.01, adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-08, max_grad_norm=0.5, num_train_epochs=5.0, max_steps=-1, lr_scheduler_type=<SchedulerType.COSINE: 'cosine'>, lr_scheduler_kwargs=None, warmup_ratio=0.1, warmup_steps=0, log_level='passive', log_level_replica='warning', log_on_each_node=True, logging_dir='/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215/runs', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=True, logging_steps=1, logging_nan_inf_filter=True, save_strategy=<SaveStrategy.EPOCH: 'epoch'>, save_steps=500, save_total_limit=2, save_safetensors=True, save_on_each_node=False, save_only_model=False, restore_callback_states_from_checkpoint=False, no_cuda=False, use_cpu=False, use_mps_device=False, seed=42, data_seed=42, jit_mode_eval=False, use_ipex=False, bf16=True, fp16=False, fp16_opt_level='O1', half_precision_backend='auto', bf16_full_eval=False, fp16_full_eval=False, tf32=None, local_rank=0, ddp_backend=None, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=True, eval_steps=25, dataloader_num_workers=8, dataloader_prefetch_factor=10, past_index=-1, run_name='/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250915_221157/v0-20250915-221215', disable_tqdm=False, remove_unused_columns=False, label_names=None, load_best_model_at_end=False, metric_for_best_model='reward', greater_is_better=True, ignore_data_skip=False, fsdp=[], fsdp_min_num_params=0, fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, tp_size=0, fsdp_transformer_layer_cls_to_wrap=None, accelerator_config=AcceleratorConfig(split_batches=False, dispatch_batches=False, even_batches=True, use_seedable_sampler=True, non_blocking=False, gradient_accumulation_kwargs=None, use_configured_state=False), deepspeed={'fp16': {'enabled': 'auto', 'loss_scale': 0, 'loss_scale_window': 1000, 'initial_scale_power': 16, 'hysteresis': 2, 'min_loss_scale': 1}, 'bf16': {'enabled': 'auto'}, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'pin_memory': True}, 'allgather_partitions': True, 'allgather_bucket_size': 200000000.0, 'overlap_comm': False, 'reduce_scatter': True, 'reduce_bucket_size': 200000000.0, 'contiguous_gradients': True}, 'gradient_accumulation_steps': 'auto', 'gradient_clipping': 'auto', 'steps_per_print': 2000, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'wall_clock_breakdown': False}, label_smoothing_factor=0.0, optim=<OptimizerNames.ADAMW_TORCH: 'adamw_torch'>, optim_args=None, adafactor=False, group_by_length=False, length_column_name='length', report_to=['wandb'], ddp_find_unused_parameters=None, ddp_bucket_cap_mb=None, ddp_broadcast_buffers=None, dataloader_pin_memory=True, dataloader_persistent_workers=False, skip_memory_metrics=True, use_legacy_prediction_loop=False, push_to_hub=False, 
resume_from_checkpoint='/ai/wuyifan/xyp/ms-swift/outputs/resumed_training_20250912_145315/v0-20250912-145334/checkpoint-3627', hub_model_id=None, hub_strategy=<HubStrategy.EVERY_SAVE: 'every_save'>, hub_token=None, hub_private_repo=None, hub_always_push=False, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, include_inputs_for_metrics=False, include_for_metrics=[], eval_do_concat_batches=True, fp16_backend='auto', push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=None, mp_parameters='', auto_find_batch_size=False, full_determinism=False, torchdynamo=None, ray_scope='last', ddp_timeout=18000000, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, include_tokens_per_second=None, include_num_input_tokens_seen=None, neftune_noise_alpha=None, optim_target_modules=None, batch_eval_metrics=False, eval_on_start=False, use_liger_kernel=False, eval_use_gather_object=False, average_tokens_across_devices=None, model_init_kwargs=None, disable_dropout=False, max_prompt_length=512, num_generations=8, max_completion_length=1024, ds3_gather_for_generation=True, shuffle_dataset=True, generation_batch_size=16, steps_per_generation=4, temperature=0.8, top_p=0.9, top_k=50, min_p=None, repetition_penalty=1.0, cache_implementation=None, use_vllm=True, vllm_server_base_url=None, vllm_mode='colocate', vllm_guided_decoding_regex=None, vllm_server_host=None, vllm_server_port=8000, vllm_server_timeout=240.0, vllm_gpu_memory_utilization=0.8, vllm_tensor_parallel_size=1, beta=0.01, num_iterations=1, epsilon=0.2, delta=None, epsilon_high=None, reward_weights=None, scale_rewards=True, loss_type='grpo', mask_truncated_completions=False, sync_ref_model=False, ref_model_mixup_alpha=0.6, ref_model_sync_steps=512, use_liger_loss=False, log_completions=True, num_completions_to_print=None, wandb_log_unique_prompts=None, vit_gradient_checkpointing=True, check_model=True, acc_strategy='token', train_dataloader_shuffle=True, max_epochs=None, aligner_lr=None, vit_lr=None, optimizer=None, use_logits_to_keep=None, channels=None, metric_warmup_step=0, fsdp_num=1, acc_steps=1, eval_use_evalscope=False, eval_datasets=[], eval_limit=None, eval_datasets_args=None, eval_generation_config=None, sft_alpha=0, train_type='lora', local_repo_path=None, galore_config=None, num_infer_workers=None, vllm_device=None, vllm_max_model_len=8192, vllm_max_num_seqs=None, vllm_enforce_eager=False, vllm_limit_mm_per_prompt={}, vllm_enable_prefix_caching=True, cosine_min_len_value_wrong=-0.5, cosine_max_len_value_wrong=0.0, cosine_min_len_value_correct=1.0, cosine_max_len_value_correct=0.5, cosine_max_len=1024, repetition_n_grams=3, repetition_max_penalty=-1.0, reward_model=None, reward_model_plugin=None, async_generate=False, tensor_parallel_size=None, sleep_level=0, move_model_batches=None, offload_optimizer=False, offload_model=False, gc_collect_after_offload=False, multi_turn_func=None, multi_turn_scheduler=None, max_turns=None, completion_length_limit_scope='per_round', dynamic_sample=False, max_resample_times=3, overlong_filter=False, soft_max_length=None, soft_cache_length=None, dataset_shuffle=True, stop_words=[])"
|
| 460 |
+
}
|
latest
ADDED
@@ -0,0 +1 @@
global_step6045
rng_state_0.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05a06cefa39bf540093ce3a65a88d749884c4315269cd6e0381475b11140eeb7
size 14960
rng_state_1.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73d82e879245b60e9ea994a5aee7a7261042f1da6f27e3d6e260b1627f9a8f7a
size 15024
rng_state_2.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5c3f5fdb88b73874960e735e8373a0d0ad467f338c625afbcc471a40b0593f5
size 14960
rng_state_3.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc1b9e46835c83f76fb2bb2d5dc3b65a3e2ce22cf3046d13660ef9f7166e082f
size 15024
scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9e10a0e9292d9f1ba58f960e50e56f7449036439a2a8b8d700733be7df90d94
size 1064
trainer_state.json
ADDED
The diff for this file is too large to render.
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec85f14ff1a0daea26ef94258111d9ff14e74889c4e24c456391381a29dbc937
size 10232
zero_to_fp32.py
ADDED
@@ -0,0 +1,760 @@
| 1 |
+
#!/usr/bin/env python
|
| 2 |
+
|
| 3 |
+
# Copyright (c) Microsoft Corporation.
|
| 4 |
+
# SPDX-License-Identifier: Apache-2.0
|
| 5 |
+
|
| 6 |
+
# DeepSpeed Team
|
| 7 |
+
|
| 8 |
+
# This script extracts fp32 consolidated weights from a zero 1, 2 and 3 DeepSpeed checkpoints. It gets
|
| 9 |
+
# copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
|
| 10 |
+
# the future. Once extracted, the weights don't require DeepSpeed and can be used in any
|
| 11 |
+
# application.
|
| 12 |
+
#
|
| 13 |
+
# example:
|
| 14 |
+
# python zero_to_fp32.py . output_dir/
|
| 15 |
+
# or
|
| 16 |
+
# python zero_to_fp32.py . output_dir/ --safe_serialization
|
| 17 |
+
|
| 18 |
+
import argparse
|
| 19 |
+
import torch
|
| 20 |
+
import glob
|
| 21 |
+
import math
|
| 22 |
+
import os
|
| 23 |
+
import re
|
| 24 |
+
import gc
|
| 25 |
+
import json
|
| 26 |
+
import numpy as np
|
| 27 |
+
from tqdm import tqdm
|
| 28 |
+
from collections import OrderedDict
|
| 29 |
+
from dataclasses import dataclass
|
| 30 |
+
|
| 31 |
+
# while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
|
| 32 |
+
# DeepSpeed data structures it has to be available in the current python environment.
|
| 33 |
+
from deepspeed.utils import logger
|
| 34 |
+
from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
|
| 35 |
+
FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
|
| 36 |
+
FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
@dataclass
|
| 40 |
+
class zero_model_state:
|
| 41 |
+
buffers: dict()
|
| 42 |
+
param_shapes: dict()
|
| 43 |
+
shared_params: list
|
| 44 |
+
ds_version: int
|
| 45 |
+
frozen_param_shapes: dict()
|
| 46 |
+
frozen_param_fragments: dict()
|
| 47 |
+
|
| 48 |
+
|
| 49 |
+
debug = 0
|
| 50 |
+
|
| 51 |
+
# load to cpu
|
| 52 |
+
device = torch.device('cpu')
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
def atoi(text):
|
| 56 |
+
return int(text) if text.isdigit() else text
|
| 57 |
+
|
| 58 |
+
|
| 59 |
+
def natural_keys(text):
|
| 60 |
+
'''
|
| 61 |
+
alist.sort(key=natural_keys) sorts in human order
|
| 62 |
+
http://nedbatchelder.com/blog/200712/human_sorting.html
|
| 63 |
+
(See Toothy's implementation in the comments)
|
| 64 |
+
'''
|
| 65 |
+
return [atoi(c) for c in re.split(r'(\d+)', text)]


def get_model_state_file(checkpoint_dir, zero_stage):
    if not os.path.isdir(checkpoint_dir):
        raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")

    # there should be only one file
    if zero_stage <= 2:
        file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
    elif zero_stage == 3:
        file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")

    if not os.path.exists(file):
        raise FileNotFoundError(f"can't find model states file at '{file}'")

    return file


def get_checkpoint_files(checkpoint_dir, glob_pattern):
    # XXX: need to test that this simple glob rule works for multi-node setup too
    ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)

    if len(ckpt_files) == 0:
        raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")

    return ckpt_files


def get_optim_files(checkpoint_dir):
    return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")


def get_model_state_files(checkpoint_dir):
    return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")


def parse_model_states(files):
    zero_model_states = []
    for file in files:
        state_dict = torch.load(file, map_location=device, weights_only=False)

        if BUFFER_NAMES not in state_dict:
            raise ValueError(f"{file} is not a model state checkpoint")
        buffer_names = state_dict[BUFFER_NAMES]
        if debug:
            print("Found buffers:", buffer_names)

        # recover just the buffers while restoring them to fp32 if they were saved in fp16
        buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
        param_shapes = state_dict[PARAM_SHAPES]

        # collect parameters that are included in param_shapes
        param_names = []
        for s in param_shapes:
            for name in s.keys():
                param_names.append(name)

        # update with frozen parameters
        frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
        if frozen_param_shapes is not None:
            if debug:
                print(f"Found frozen_param_shapes: {frozen_param_shapes}")
            param_names += list(frozen_param_shapes.keys())

        # handle shared params
        shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]

        ds_version = state_dict.get(DS_VERSION, None)

        frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)

        z_model_state = zero_model_state(buffers=buffers,
                                         param_shapes=param_shapes,
                                         shared_params=shared_params,
                                         ds_version=ds_version,
                                         frozen_param_shapes=frozen_param_shapes,
                                         frozen_param_fragments=frozen_param_fragments)
        zero_model_states.append(z_model_state)

    return zero_model_states


def parse_optim_states(files, ds_checkpoint_dir):
    total_files = len(files)
    state_dicts = []
    for f in tqdm(files, desc='Loading checkpoint shards'):
        state_dict = torch.load(f, map_location=device, mmap=True, weights_only=False)
        # immediately discard the two potentially huge optimizer states as we only care for fp32 master weights,
        # and also handle the case where they were already removed by another helper script
        state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
        state_dicts.append(state_dict)

    if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
        raise ValueError(f"{files[0]} is not a zero checkpoint")
    zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
    world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]

    # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
    # parameters can be different from data parallelism for non-expert parameters. So we can just
    # use the max of the partition_count to get the dp world_size.

    if type(world_size) is list:
        world_size = max(world_size)

    if world_size != total_files:
        raise ValueError(
            f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
            "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
        )

    # the groups are named differently in each stage
    if zero_stage <= 2:
        fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
    elif zero_stage == 3:
        fp32_groups_key = FP32_FLAT_GROUPS
    else:
        raise ValueError(f"unknown zero stage {zero_stage}")

    fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
    return zero_stage, world_size, fp32_flat_groups
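
# Descriptive note (added for clarity, not in the original script): fp32_flat_groups has one
# entry per rank. For ZeRO-2 each entry is a list of flat fp32 tensors, one per optimizer
# param group, holding that rank's partition of the master weights; for ZeRO-3 each entry
# holds that rank's flattened shard(s) of the parameters.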


def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
    """
    Returns fp32 state_dict reconstructed from ds checkpoint

    Args:
        - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)

    """
    print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")

    optim_files = get_optim_files(ds_checkpoint_dir)
    zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
    print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")

    model_files = get_model_state_files(ds_checkpoint_dir)

    zero_model_states = parse_model_states(model_files)
    print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')

    if zero_stage <= 2:
        return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
                                                          exclude_frozen_parameters)
    elif zero_stage == 3:
        return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
                                                          exclude_frozen_parameters)


def _zero2_merge_frozen_params(state_dict, zero_model_states):
    if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
        return

    frozen_param_shapes = zero_model_states[0].frozen_param_shapes
    frozen_param_fragments = zero_model_states[0].frozen_param_fragments

    if debug:
        num_elem = sum(s.numel() for s in frozen_param_shapes.values())
        print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')

    wanted_params = len(frozen_param_shapes)
    wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
    avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
    print(f'Frozen params: Have {avail_numel} numels to process.')
    print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')

    total_params = 0
    total_numel = 0
    for name, shape in frozen_param_shapes.items():
        total_params += 1
        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel

        state_dict[name] = frozen_param_fragments[name]

        if debug:
            print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")

    print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")


def _has_callable(obj, fn):
    attr = getattr(obj, fn, None)
    return callable(attr)


def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
    param_shapes = zero_model_states[0].param_shapes

    # Reconstruction protocol:
    #
    # XXX: document this

    if debug:
        for i in range(world_size):
            for j in range(len(fp32_flat_groups[0])):
                print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")

    # XXX: memory usage doubles here (zero2)
    num_param_groups = len(fp32_flat_groups[0])
    merged_single_partition_of_fp32_groups = []
    for i in range(num_param_groups):
        merged_partitions = [sd[i] for sd in fp32_flat_groups]
        full_single_fp32_vector = torch.cat(merged_partitions, 0)
        merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
    avail_numel = sum(
        [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])

    if debug:
        wanted_params = sum([len(shapes) for shapes in param_shapes])
        wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
        # not asserting if there is a mismatch due to possible padding
        print(f"Have {avail_numel} numels to process.")
        print(f"Need {wanted_numel} numels in {wanted_params} params.")

    # params
    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
    # out-of-core computing solution
    total_numel = 0
    total_params = 0
    for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
        offset = 0
        avail_numel = full_single_fp32_vector.numel()
        for name, shape in shapes.items():

            unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
            total_numel += unpartitioned_numel
            total_params += 1

            if debug:
                print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
            state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
            offset += unpartitioned_numel

        # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
        # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
        # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
        # live optimizer object, so we are checking that the numbers are within the right range
        align_to = 2 * world_size

        def zero2_align(x):
            return align_to * math.ceil(x / align_to)
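
        # Worked example (added comment, not in the original script): with world_size=4,
        # align_to == 8, so zero2_align(1000003) == math.ceil(1000003 / 8) * 8 == 1000008;
        # both offset and avail_numel are rounded up to this boundary before the sanity check below.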

        if debug:
            print(f"original offset={offset}, avail_numel={avail_numel}")

        offset = zero2_align(offset)
        avail_numel = zero2_align(avail_numel)

        if debug:
            print(f"aligned offset={offset}, avail_numel={avail_numel}")

        # Sanity check
        if offset != avail_numel:
            raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")

    print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")


def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
                                               exclude_frozen_parameters):
    state_dict = OrderedDict()

    # buffers
    buffers = zero_model_states[0].buffers
    state_dict.update(buffers)
    if debug:
        print(f"added {len(buffers)} buffers")

    if not exclude_frozen_parameters:
        _zero2_merge_frozen_params(state_dict, zero_model_states)

    _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)

    # recover shared parameters
    for pair in zero_model_states[0].shared_params:
        if pair[1] in state_dict:
            state_dict[pair[0]] = state_dict[pair[1]]

    return state_dict


def zero3_partitioned_param_info(unpartitioned_numel, world_size):
    remainder = unpartitioned_numel % world_size
    padding_numel = (world_size - remainder) if remainder else 0
    partitioned_numel = math.ceil(unpartitioned_numel / world_size)
    return partitioned_numel, padding_numel
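
# Worked example (added comment, not in the original script): a parameter with 10 elements
# partitioned across world_size=4 ranks gives
#   zero3_partitioned_param_info(10, 4) == (3, 2)
# i.e. each rank stores ceil(10/4) = 3 elements and 2 of the 4*3 = 12 stored numels are padding.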


def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
    if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
        return

    if debug:
        for i in range(world_size):
            num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
            print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')

    frozen_param_shapes = zero_model_states[0].frozen_param_shapes
    wanted_params = len(frozen_param_shapes)
    wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
    avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
    print(f'Frozen params: Have {avail_numel} numels to process.')
    print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')

    total_params = 0
    total_numel = 0
    for name, shape in zero_model_states[0].frozen_param_shapes.items():
        total_params += 1
        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel

        param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
        state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)

        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)

        if debug:
            print(
                f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
            )

    print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")


class GatheredTensor:
    """
    A pseudo tensor that collects partitioned weights.
    It is more memory efficient when there are multiple groups.
    """

    def __init__(self, flat_groups, flat_groups_offset, offset, partitioned_numel, shape):
        self.flat_groups = flat_groups
        self.flat_groups_offset = flat_groups_offset
        self.offset = offset
        self.partitioned_numel = partitioned_numel
        self.shape = shape
        self.dtype = self.flat_groups[0][0].dtype

    def contiguous(self):
        """
        Merge partitioned weights from flat_groups into a single tensor.
        """
        end_idx = self.offset + self.partitioned_numel
        world_size = len(self.flat_groups)
        pad_flat_param_chunks = []

        for rank_i in range(world_size):
            # for each rank, we need to collect weights from related group/groups
            flat_groups_at_rank_i = self.flat_groups[rank_i]
            start_group_id = None
            end_group_id = None
            for group_id in range(len(self.flat_groups_offset)):
                if self.flat_groups_offset[group_id] <= self.offset < self.flat_groups_offset[group_id + 1]:
                    start_group_id = group_id
                if self.flat_groups_offset[group_id] < end_idx <= self.flat_groups_offset[group_id + 1]:
                    end_group_id = group_id
                    break
            # collect weights from related group/groups
            for group_id in range(start_group_id, end_group_id + 1):
                flat_tensor = flat_groups_at_rank_i[group_id]
                start_offset = self.offset - self.flat_groups_offset[group_id]
                end_offset = min(end_idx, self.flat_groups_offset[group_id + 1]) - self.flat_groups_offset[group_id]
                pad_flat_param_chunks.append(flat_tensor[start_offset:end_offset])

        # collect weights from all ranks
        pad_flat_param = torch.cat(pad_flat_param_chunks, dim=0)
        param = pad_flat_param[:self.shape.numel()].view(self.shape).contiguous()
        return param
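
# Illustrative note (added comment, not in the original script): a GatheredTensor is a lazy
# placeholder; calling .contiguous() is what actually concatenates the per-rank shards into a
# real torch.Tensor, e.g. for a hypothetical entry produced by the merge below:
#   lazy = state_dict["some.weight"]   # GatheredTensor
#   weight = lazy.contiguous()         # materialized fp32 tensor on cpu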


def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
    param_shapes = zero_model_states[0].param_shapes
    avail_numel = sum([flat_group.numel() for flat_group in fp32_flat_groups[0]]) * world_size

    # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
    # param, re-consolidating each param, while dealing with padding if any

    # merge list of dicts, preserving order
    param_shapes = {k: v for d in param_shapes for k, v in d.items()}

    if debug:
        for i in range(world_size):
            print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")

        wanted_params = len(param_shapes)
        wanted_numel = sum(shape.numel() for shape in param_shapes.values())
        # not asserting if there is a mismatch due to possible padding
        avail_numel = fp32_flat_groups[0].numel() * world_size
        print(f"Trainable params: Have {avail_numel} numels to process.")
        print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")

    # params
    # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
    # out-of-core computing solution
    offset = 0
    total_numel = 0
    total_params = 0
    flat_groups_offset = [0] + list(np.cumsum([flat_tensor.numel() for flat_tensor in fp32_flat_groups[0]]))
    for name, shape in tqdm(param_shapes.items(), desc='Gathering sharded weights'):
        unpartitioned_numel = shape.numel()
        total_numel += unpartitioned_numel
        total_params += 1
        partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)

        if debug:
            print(
                f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
            )

        # memory efficient tensor
        tensor = GatheredTensor(fp32_flat_groups, flat_groups_offset, offset, partitioned_numel, shape)
        state_dict[name] = tensor
        offset += partitioned_numel

    offset *= world_size

    # Sanity check
    if offset != avail_numel:
        raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")

    print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")


def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
                                               exclude_frozen_parameters):
    state_dict = OrderedDict()

    # buffers
    buffers = zero_model_states[0].buffers
    state_dict.update(buffers)
    if debug:
        print(f"added {len(buffers)} buffers")

    if not exclude_frozen_parameters:
        _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)

    _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)

    # recover shared parameters
    for pair in zero_model_states[0].shared_params:
        if pair[1] in state_dict:
            state_dict[pair[0]] = state_dict[pair[1]]

    return state_dict


def to_torch_tensor(state_dict, return_empty_tensor=False):
    """
    Convert a state_dict of GatheredTensor objects to torch tensors
    """
    torch_state_dict = {}
    converted_tensors = {}
    for name, tensor in state_dict.items():
        tensor_id = id(tensor)
        if tensor_id in converted_tensors:  # shared tensors
            shared_tensor = torch_state_dict[converted_tensors[tensor_id]]
            torch_state_dict[name] = shared_tensor
        else:
            converted_tensors[tensor_id] = name
            if return_empty_tensor:
                torch_state_dict[name] = torch.empty(tensor.shape, dtype=tensor.dtype)
            else:
                torch_state_dict[name] = tensor.contiguous()
    return torch_state_dict


def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
                                             tag=None,
                                             exclude_frozen_parameters=False,
                                             lazy_mode=False):
    """
    Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
    ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
    via a model hub.

    Args:
        - ``checkpoint_dir``: path to the desired checkpoint folder
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
        - ``exclude_frozen_parameters``: exclude frozen parameters
        - ``lazy_mode``: get state_dict in lazy mode. It returns a dict of pseudo tensors instead of torch tensors, which is more memory efficient.
          Convert a pseudo tensor to a torch tensor by calling ``.contiguous()``

    Returns:
        - pytorch ``state_dict``

    A typical usage might be ::

        from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
        # do the training and checkpoint saving
        state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
        model = model.cpu() # move to cpu
        model.load_state_dict(state_dict)
        # submit to model hub or save the model to share with others

    In this example the ``model`` will no longer be usable in the deepspeed context of the same
    application. i.e. you will need to re-initialize the deepspeed engine, since
    ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.

    If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.

    Note: the above usage may not work if your application doesn't have sufficient free CPU memory.
    You may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
    the checkpoint. Or you can load the state_dict in lazy mode ::

        from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
        state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, lazy_mode=True) # not on cpu
        for name, lazy_tensor in state_dict.items():
            tensor = lazy_tensor.contiguous()  # to cpu
            print(name, tensor)
            # del tensor to release memory if it is no longer in use
    """
    if tag is None:
        latest_path = os.path.join(checkpoint_dir, 'latest')
        if os.path.isfile(latest_path):
            with open(latest_path, 'r') as fd:
                tag = fd.read().strip()
        else:
            raise ValueError(f"Unable to find 'latest' file at {latest_path}")

    ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)

    if not os.path.isdir(ds_checkpoint_dir):
        raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")

    state_dict = _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
    if lazy_mode:
        return state_dict
    else:
        return to_torch_tensor(state_dict)


def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir,
                                               output_dir,
                                               max_shard_size="5GB",
                                               safe_serialization=False,
                                               tag=None,
                                               exclude_frozen_parameters=False):
    """
    Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
    loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.

    Args:
        - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
        - ``output_dir``: directory to the pytorch fp32 state_dict output files
        - ``max_shard_size``: the maximum size for a checkpoint before being sharded, default value is 5GB
        - ``safe_serialization``: whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
        - ``exclude_frozen_parameters``: exclude frozen parameters
    """

    # Dependency pre-check
    if safe_serialization:
        try:
            from safetensors.torch import save_file
        except ImportError:
            print('If you want to use `safe_serialization`, please `pip install safetensors`')
            raise
    if max_shard_size is not None:
        try:
            from huggingface_hub import split_torch_state_dict_into_shards
        except ImportError:
            print('If you want to use `max_shard_size`, please `pip install huggingface_hub`')
            raise

    # Convert zero checkpoint to state_dict
    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
                                                          tag,
                                                          exclude_frozen_parameters,
                                                          lazy_mode=True)

    # Shard the model if it is too big.
    weights_name = "model.safetensors" if safe_serialization else "pytorch_model.bin"
    if max_shard_size is not None:
        filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
        # a memory-efficient approach for sharding
        empty_state_dict = to_torch_tensor(state_dict, return_empty_tensor=True)
        state_dict_split = split_torch_state_dict_into_shards(empty_state_dict,
                                                              filename_pattern=filename_pattern,
                                                              max_shard_size=max_shard_size)
    else:
        from collections import namedtuple
        StateDictSplit = namedtuple("StateDictSplit", ["is_sharded", "filename_to_tensors"])
        state_dict_split = StateDictSplit(is_sharded=False,
                                          filename_to_tensors={weights_name: list(state_dict.keys())})

    # Save the model by shard
    os.makedirs(output_dir, exist_ok=True)
    filename_to_tensors = state_dict_split.filename_to_tensors.items()
    for shard_file, tensors in tqdm(filename_to_tensors, desc="Saving checkpoint shards"):
        shard_state_dict = {tensor_name: state_dict[tensor_name] for tensor_name in tensors}
        shard_state_dict = to_torch_tensor(shard_state_dict)
        output_path = os.path.join(output_dir, shard_file)
        if safe_serialization:
            save_file(shard_state_dict, output_path, metadata={"format": "pt"})
        else:
            torch.save(shard_state_dict, output_path)
        # release the memory of current shard
        for tensor_name in list(shard_state_dict.keys()):
            del state_dict[tensor_name]
            del shard_state_dict[tensor_name]
        del shard_state_dict
        gc.collect()

    # Save index if sharded
    if state_dict_split.is_sharded:
        index = {
            "metadata": state_dict_split.metadata,
            "weight_map": state_dict_split.tensor_to_filename,
        }
        save_index_file = "model.safetensors.index.json" if safe_serialization else "pytorch_model.bin.index.json"
        save_index_file = os.path.join(output_dir, save_index_file)
        with open(save_index_file, "w", encoding="utf-8") as f:
            content = json.dumps(index, indent=2, sort_keys=True) + "\n"
            f.write(content)
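
# Illustrative usage (added comment, not part of the original utility): from Python, the
# DeepSpeed shards of a hypothetical "checkpoint-100" folder (the one containing the 'latest'
# file) could be consolidated into sharded safetensors files with something like
#   convert_zero_checkpoint_to_fp32_state_dict("checkpoint-100", "checkpoint-100-fp32",
#                                              safe_serialization=True)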


def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
    """
    1. Put the provided model to cpu
    2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
    3. Load it into the provided model

    Args:
        - ``model``: the model object to update
        - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
        - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``

    Returns:
        - ``model``: modified model

    Make sure you have plenty of CPU memory available before you call this function. If you don't
    have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
    conveniently placed for you in the checkpoint folder.

    A typical usage might be ::

        from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
        model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
        # submit to model hub or save the model to share with others

    Note that once this has been run, the ``model`` will no longer be usable in the deepspeed
    context of the same application. i.e. you will need to re-initialize the deepspeed engine,
    since ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.

    """
    logger.info(f"Extracting fp32 weights")
    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)

    logger.info(f"Overwriting model with fp32 weights")
    model = model.cpu()
    model.load_state_dict(state_dict, strict=False)

    return model


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("checkpoint_dir",
                        type=str,
                        help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
    parser.add_argument("output_dir",
                        type=str,
                        help="directory to the pytorch fp32 state_dict output files "
                        "(e.g. path/checkpoint-12-output/)")
    parser.add_argument(
        "--max_shard_size",
        type=str,
        default="5GB",
        help="The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size "
        "lower than this. If expressed as a string, it needs to be digits followed by a unit (like `5MB`). "
        "We default it to 5GB so that converted models can be loaded easily on free-tier Google Colab instances "
        "without CPU OOM issues.")
    parser.add_argument(
        "--safe_serialization",
        default=False,
        action='store_true',
        help="Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).")
    parser.add_argument("-t",
                        "--tag",
                        type=str,
                        default=None,
                        help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
    parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
    parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
    args = parser.parse_args()

    debug = args.debug

    convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
                                               args.output_dir,
                                               max_shard_size=args.max_shard_size,
                                               safe_serialization=args.safe_serialization,
                                               tag=args.tag,
                                               exclude_frozen_parameters=args.exclude_frozen_parameters)