---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
---
|
|
<div align="center">

# TinyLlama-1.1B

</div>

https://github.com/jzhang38/TinyLlama
|
|
|
|
|
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
|
|
|
|
|
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
|
|
|
|
|
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built on Llama. Moreover, with only 1.1B parameters, TinyLlama is compact enough for applications with tight computation and memory budgets.
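
Because the architecture matches Llama 2, any TinyLlama checkpoint loads with the standard `transformers` auto classes. Below is a minimal sketch using the intermediate checkpoint this chat model was tuned from; the dtype and generation settings are illustrative choices, not requirements:

```python
# Minimal sketch: load a TinyLlama checkpoint with Hugging Face transformers.
# dtype and generation settings are illustrative choices, not requirements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16)

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```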
|
|
|
|
|
#### This Model |
|
|
This is the chat model fine-tuned from [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used for fine-tuning is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
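
To inspect the fine-tuning data, the dataset loads with the `datasets` library. A minimal sketch (the `train` split name is an assumption about the dataset's default layout):

```python
# Minimal sketch: peek at the fine-tuning data with Hugging Face datasets.
# The "train" split name is assumed from the dataset's default layout.
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst_top1_2023-08-25")
print(ds)              # available splits and row counts
print(ds["train"][0])  # one conversation record
```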
|
|
|
|
|
**Updates from V0.1:**
1. Different dataset.
2. Different chat format (now [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md)-formatted conversations).
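
For reference, ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of assembling such a prompt by hand (the system and user messages below are made-up examples):

```python
# Minimal sketch: build a ChatML prompt by hand.
# The system and user messages below are made-up examples.
def format_chatml(messages):
    """Wrap each (role, content) turn in ChatML start/end markers."""
    prompt = ""
    for role, content in messages:
        prompt += f"<|im_start|>{role}\n{content}<|im_end|>\n"
    # Leave the assistant turn open so the model writes the reply.
    prompt += "<|im_start|>assistant\n"
    return prompt

print(format_chatml([
    ("system", "You are a helpful assistant."),
    ("user", "What is ChatML?"),
]))
```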