---
library_name: transformers
base_model: meta-llama/Llama-2-7b-chat-hf
language:
- vi
---
# Vietnamese Fine-tuned Llama-2-7b-chat-hf
This repository contains a Vietnamese-tuned version of the `Llama-2-7b-chat-hf` model, which has been fine-tuned on Vietnamese datasets using LoRA (Low-Rank Adaptation) techniques.
## Model Details
This model is a fine-tuned version of the Llama-2-7b-chat-hf model, specifically adapted for improved performance on Vietnamese language tasks. It uses LoRA fine-tuning to efficiently adapt the large language model to Vietnamese data while maintaining much of the original model's general knowledge and capabilities.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Daniel Du](https://github.com/danghoangnhan)
- **Model type:** Large Language Model
- **Language(s) (NLP):** Vietnamese
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
### Direct Use
You can use this model directly with the Hugging Face Transformers library:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model (half precision and device_map="auto" keep memory manageable)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the Vietnamese LoRA adapter
peft_model_id = "CallMeMrFern/Llama-2-7b-chat-hf_vn"
model = PeftModel.from_pretrained(base_model, peft_model_id)

# Load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Example usage
input_text = "Xin chào, hôm nay thời tiết thế nào?"  # "Hello, how is the weather today?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
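Llama-2 chat models were trained with a specific `[INST]` prompt template, so wrapping Vietnamese prompts in it will usually give better responses than raw text. A minimal sketch, assuming the fine-tuning kept the stock Llama-2 chat format (the Vietnamese system prompt below is illustrative, not taken from this model's training data):

```python
# Stock Llama-2 chat template; the tokenizer adds the leading <s> (BOS) itself.
system = "Bạn là một trợ lý hữu ích."  # "You are a helpful assistant." (illustrative)
user = "Xin chào, hôm nay thời tiết thế nào?"
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```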
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
- This model is specifically fine-tuned for Vietnamese and may not perform as well on other languages.
- The model inherits limitations from the base Llama-2-7b-chat-hf model.
- Performance may vary depending on the specific task and domain.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
See the [Direct Use](#direct-use) section above for a complete loading and generation example.
## Training Details
### Training Data
Dataset: `alpaca_translate_GPT_35_10_20k.json` (a Vietnamese translation of the Alpaca dataset)
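The original Alpaca data is a JSON list of `instruction`/`input`/`output` records; the translated file presumably follows the same schema, though the field names below are the standard Alpaca ones and have not been verified against this specific file. A quick way to inspect it:

```python
import json

# Assumes the standard Alpaca schema (instruction/input/output);
# field names are not confirmed against this specific translated file.
with open("alpaca_translate_GPT_35_10_20k.json", encoding="utf-8") as f:
    records = json.load(f)

sample = records[0]
print(sample["instruction"])    # task description, in Vietnamese
print(sample.get("input", ""))  # optional context, may be empty
print(sample["output"])         # reference response
```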
#### Training Hyperparameters
- **Training regime:** 8-bit quantized base model with LoRA adapters (see the `bitsandbytes` config under Training procedure below)
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
#### Summary
[More Information Needed]
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
### Model Architecture and Objective
Decoder-only transformer (Llama-2, 7B parameters) trained with a causal language-modeling objective; this release adds LoRA adapters on the `q_proj` and `v_proj` attention projections.
## Citation
If you use this model in your research, please cite:
```
@misc{vietnamese_llama2_7b_chat,
  author       = {Du, Daniel},
  title        = {Vietnamese Fine-tuned Llama-2-7b-chat-hf},
  year         = {2023},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/CallMeMrFern/Llama-2-7b-chat-hf_vn}}
}
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
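For reference, the settings above correspond roughly to the following `transformers` `BitsAndBytesConfig`; this is a reconstruction from the logged values, not the original training code:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the logged quantization settings; the bnb_4bit_* fields above are
# inert defaults because load_in_8bit takes precedence over 4-bit loading.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```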
### Framework versions
- PEFT 0.6.3.dev0
## Fine-tuning Details
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **LoRA Config:**
- Target Modules: `["q_proj", "v_proj"]`
- Precision: 8-bit
- **Dataset:** `alpaca_translate_GPT_35_10_20k.json` (Vietnamese translation of the Alpaca dataset)
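In PEFT terms, this corresponds to a `LoraConfig` along the following lines; the rank, alpha, and dropout values below are common Alpaca-LoRA defaults, since the values actually used are not recorded in this card:

```python
from peft import LoraConfig, get_peft_model

# target_modules matches the card; r, lora_alpha, and lora_dropout are
# typical Alpaca-LoRA defaults, NOT values recorded for this run.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```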
## Training Procedure
The model was fine-tuned using the following command:
```bash
python finetune/lora.py \
--base_model meta-llama/Llama-2-7b-chat-hf \
--model_type llama \
--data_dir data/general/alpaca_translate_GPT_35_10_20k.json \
--output_dir finetuned/meta-llama/Llama-2-7b-chat-hf \
--lora_target_modules '["q_proj", "v_proj"]' \
--micro_batch_size 1
```
For multi-GPU training, a distributed training approach was used.
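For deployment without a runtime PEFT dependency, the adapter can optionally be merged back into the base weights. A sketch using PEFT's `merge_and_unload` (an optional post-training step, not part of the procedure above); note that merging requires unquantized base weights:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model unquantized (merging into 8-bit weights is not supported),
# attach the adapter, then fold the LoRA deltas into the base weights.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "CallMeMrFern/Llama-2-7b-chat-hf_vn")
merged = model.merge_and_unload()

merged.save_pretrained("llama2-7b-chat-vi-merged")  # hypothetical output directory
```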
## Evaluation Results
[More Information Needed]
## Acknowledgements
- This project is part of the TF07 Course offered by ProtonX.
- We thank the creators of the original Llama-2-7b-chat-hf model and the Hugging Face team for their tools and resources.
- Appreciation to [VietnamAIHub/Vietnamese_LLMs](https://github.com/VietnamAIHub/Vietnamese_LLMs) for the translated dataset. |