---

language: en
license: mit
tags:
- pytorch
- text-generation
- qwen3
- tinystories
---


# Qwen3-0.6B Pre-trained on TinyStories

This is a Qwen3-0.6B model pre-trained on the TinyStories dataset for 200k iterations.

## Model Details

- **Architecture**: Qwen3-0.6B
- **Training Data**: TinyStories dataset from HuggingFace
- **Training Iterations**: 200,000
- **Parameters**: ~596M unique parameters (see the rough count below)
- **Tokenizer**: GPT-2 tokenizer (tiktoken)
- **Training Loss**: Available in training history
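
The ~596M figure is consistent with the configuration used below if the input embedding and the output head share weights (weight tying, which Qwen3-0.6B uses). A rough back-of-the-envelope count, omitting the norm layers' few thousand parameters:

```python
# Rough parameter count from the config below (assumes tied input/output embeddings)
vocab, emb, layers = 151936, 1024, 28
n_heads, n_kv_groups, head_dim, hidden = 16, 8, 128, 3072

embedding = vocab * emb                     # ~155.6M, shared with the LM head
attn = emb * n_heads * head_dim             # Q projection
attn += 2 * emb * n_kv_groups * head_dim    # K and V projections (GQA)
attn += n_heads * head_dim * emb            # output projection
mlp = 3 * emb * hidden                      # gate, up, and down projections (SwiGLU)

total = embedding + layers * (attn + mlp)
print(f"~{total / 1e6:.0f}M")               # ~596M
```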

## Quick Start

### Download the Model

```python
from huggingface_hub import hf_hub_download

# Download model weights
model_path = hf_hub_download(
    repo_id="vuminhtue/qwen3-200k-tinystories",
    filename="Qwen3_200k_model_params.pt"
)

# Download config
config_path = hf_hub_download(
    repo_id="vuminhtue/qwen3-200k-tinystories",
    filename="config.json"
)
```
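
The snippet below hardcodes the configuration rather than reading the downloaded config.json. If you prefer to load it from the file instead, note that JSON cannot represent a torch dtype, so that entry has to be restored by hand (a sketch, assuming the file's other keys mirror the QWEN3_CONFIG dict below):

```python
import json
import torch

with open(config_path) as f:
    cfg = json.load(f)
cfg["dtype"] = torch.bfloat16  # assumption: restore the dtype that JSON cannot store
```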

### Load and Use

```python
import torch
import tiktoken

from Qwen3_model import Qwen3Model  # You need this file from the original code

# Set up configuration
QWEN3_CONFIG = {
    "vocab_size": 151936,     # Qwen3 vocab; GPT-2 token IDs (< 50257) fit within it
    "context_length": 40960,  # architectural maximum; training used 128-token sequences
    "emb_dim": 1024,
    "n_heads": 16,
    "n_layers": 28,
    "hidden_dim": 3072,
    "head_dim": 128,
    "qk_norm": True,
    "n_kv_groups": 8,
    "rope_base": 1000000.0,
    "dtype": torch.bfloat16,
}

# Load model
model = Qwen3Model(QWEN3_CONFIG)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.load_state_dict(torch.load(model_path, map_location=device))
model = model.to(device)
model.eval()

# Generate text
tokenizer = tiktoken.get_encoding("gpt2")
# Your generation code here...
```
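
The snippet above leaves generation open. A minimal sampling loop could look like the sketch below, assuming `Qwen3Model(input_ids)` returns logits of shape `(batch, seq_len, vocab_size)`; adapt it to the actual interface in `Qwen3_model.py`. The prompt is cropped to 128 tokens because that is the context length the model was trained with.

```python
def generate(prompt, max_new_tokens=100, temperature=0.8, context_length=128):
    token_ids = torch.tensor([tokenizer.encode(prompt)], device=device)
    for _ in range(max_new_tokens):
        context = token_ids[:, -context_length:]  # crop to the trained context length
        with torch.no_grad():
            logits = model(context)
        # Only the first 50,257 IDs are ever produced by the GPT-2 tokenizer,
        # so restrict sampling to that range
        logits = logits[:, -1, :tokenizer.n_vocab] / temperature
        probs = torch.softmax(logits.float(), dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        token_ids = torch.cat([token_ids, next_id], dim=1)
    return tokenizer.decode(token_ids[0].tolist())

print(generate("Once upon a time"))
```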

## Training Details

- **Optimizer**: AdamW with weight decay (0.1)
- **Learning Rate**: 1e-4 peak, with warmup and cosine decay (sketched below)
- **Batch Size**: 32, with gradient accumulation over 32 steps
- **Context Length**: 128 tokens per training sequence (the architecture supports up to 40,960)
- **Mixed Precision**: bfloat16 training
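
The training script itself is not included here; the sketch below shows one common way to wire up the optimizer and schedule described above. The warmup length and floor learning rate are illustrative assumptions, not values from this card.

```python
import math
import torch

max_steps = 200_000
warmup_steps = 2_000           # illustrative assumption
peak_lr, min_lr = 1e-4, 1e-5   # min_lr is an illustrative assumption

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=0.1)

def lr_at(step):
    # Linear warmup to peak_lr, then cosine decay down to min_lr
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

for step in range(max_steps):
    for group in optimizer.param_groups:
        group["lr"] = lr_at(step)
    # ... forward pass and loss.backward() on each micro-batch; call
    # optimizer.step() and optimizer.zero_grad() once every 32
    # gradient-accumulation steps ...
```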

## Model Architecture

- Grouped Query Attention (GQA) with 8 KV groups (shape sketch below)
- RoPE (Rotary Position Embeddings)
- RMSNorm for normalization
- SiLU activation function
- 28 transformer layers
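
As a shape-level illustration of the GQA layout (not this card's actual implementation): the 16 query heads share 8 key/value heads, so each K/V head serves two query heads and the KV cache is half the size it would be with full multi-head attention.

```python
import torch
import torch.nn.functional as F

b, t, n_heads, n_kv_groups, head_dim = 1, 8, 16, 8, 128

q = torch.randn(b, n_heads, t, head_dim)      # one Q head per attention head
k = torch.randn(b, n_kv_groups, t, head_dim)  # only 8 K heads are stored
v = torch.randn(b, n_kv_groups, t, head_dim)  # only 8 V heads are stored

# Expand each K/V head to cover its group of query heads (16 // 8 = 2)
k = k.repeat_interleave(n_heads // n_kv_groups, dim=1)
v = v.repeat_interleave(n_heads // n_kv_groups, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 16, 8, 128])
```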

## Performance

The model was trained on TinyStories, a dataset of simple stories for children. It can generate coherent short stories in a similar style.

## Citation

If you use this model, please cite:

```bibtex
@misc{qwen3-tinystories-2025,
  author = {Tue Vu},
  title = {Qwen3-0.6B Pre-trained on TinyStories},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/vuminhtue/qwen3-200k-tinystories}},
}
```

## License

MIT License

## Contact

For questions or issues, please open an issue on the HuggingFace model page.