lunahr committed · Commit b3a33ec · verified · 1 Parent(s): 523b42f · Upload readme
---
license: apache-2.0
tags:
- unsloth
- trl
- sft
- code
- reasoning
- abliterated
- baukit-abliterated
datasets:
- nvidia/OpenCodeReasoning
language:
- en
base_model:
- suayptalha/Qwen3-0.6B-Code-Expert
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-0.6B-Code-Expert (Abliterated)

This project performs full fine-tuning of the **Qwen3-0.6B** language model to enhance its code reasoning and generation capabilities. Training was conducted exclusively on the `nvidia/OpenCodeReasoning` dataset, and the model was trained in bfloat16 (bf16) precision.
Additionally, the model has been abliterated to suppress its built-in refusal (censorship) behavior.
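
Abliteration works by identifying a "refusal direction" in the model's activation space and projecting it out of the relevant activations or weights (the `baukit-abliterated` tag suggests `baukit` hooks were used here). As a toy illustration of the projection step only (pure Python, hypothetical `ablate` helper, not code from this repo):

```python
def ablate(v, d):
    """Remove the component of vector v that lies along direction d,
    i.e. project v onto the hyperplane orthogonal to d (the 'refusal direction')."""
    dot_vd = sum(a * b for a, b in zip(v, d))
    dot_dd = sum(a * a for a in d)
    scale = dot_vd / dot_dd
    return [a - scale * b for a, b in zip(v, d)]
```

In the usual abliteration recipe the direction is estimated from the difference in mean activations between refused and complied prompts, then removed at every layer.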

## Training Procedure

1. **Dataset Preparation**

   * The `nvidia/OpenCodeReasoning` dataset was used.
   * Each example consists of code paired with detailed step-by-step reasoning in Chain-of-Thought (CoT) style.
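
For SFT with TRL, each record is typically flattened into a chat-format example. A minimal sketch, assuming hypothetical field names (`problem`, `reasoning`, `solution` — check the dataset card for the actual schema) and Qwen3's `<think>` tag convention:

```python
def to_chat_example(problem: str, reasoning: str, solution: str) -> list:
    """Turn one record into chat messages for SFT (hypothetical schema)."""
    return [
        {"role": "user", "content": problem},
        # Qwen3-style: reasoning inside <think> tags, then the final code
        {"role": "assistant", "content": f"<think>\n{reasoning}\n</think>\n\n{solution}"},
    ]
```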

2. **Model Loading and Configuration**

   * The Qwen3-0.6B base model weights were loaded via the `unsloth` library in bf16 precision.
   * Full fine-tuning (`full_finetuning=True`) was applied to all layers for optimal adaptation to code reasoning.
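
bf16 keeps float32's 8-bit exponent (so the full dynamic range) but truncates the mantissa from 23 to 7 bits. A quick stdlib-only illustration of the resulting precision loss (not training code):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a Python float to bfloat16 precision (returned as a float).
    bf16 is just the top 16 bits of the float32 bit pattern."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)  # round to nearest even
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]
```

Powers of two survive exactly, while most other values keep only about three decimal digits of precision.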

3. **Supervised Fine-Tuning**

   * Training used the Hugging Face TRL library with the Supervised Fine-Tuning (SFT) approach.
   * The model was trained to generate correct code solutions along with the corresponding reasoning chains.
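
Under the hood, SFT is next-token cross-entropy where the loss is computed only on the target (reasoning + code) tokens, not the prompt. A minimal sketch of the label masking, using the Hugging Face convention of `-100` for ignored positions (hypothetical helper, not TRL's own code):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def build_sft_labels(prompt_ids: list, completion_ids: list):
    """Concatenate prompt and completion; mask prompt positions out of the loss."""
    input_ids = prompt_ids + completion_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(completion_ids)
    return input_ids, labels
```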

## Purpose and Outcome

* Specialized, single-dataset training in bf16 precision substantially improved the model's ability to understand, reason about, and generate code.
* Outputs include both intermediate reasoning steps and final code solutions, enabling transparent and interpretable code generation.

## License

This project is licensed under the Apache License 2.0. See the [LICENSE](./LICENSE) file for details.

## Support

<a href="https://www.buymeacoffee.com/suayptalha" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>