---
license: apache-2.0
language:
- en
tags:
- rk3588
- rkllm
- Rockchip
---

# TinyLlama-1.1B-Chat_rkLLM

- [Chinese introduction](#tinyllama-11b-chinese-introduction)
- [English](#tinyllama-11b)

## TinyLlama-1.1B (Chinese Introduction)

### Introduction

TinyLlama-1.1B-Chat_rkLLM is an RKLLM model converted from [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) and optimized for Rockchip devices. The model runs on the NPU of the RK3588.

- **Model name**: TinyLlama-1.1B-Chat_rkLLM
- **Architecture**: identical to TinyLlama-1.1B-Chat-v1.0
- **Publisher**: FydeOS
- **Date**: 2024-06-03

### Model Details

TinyLlama-1.1B-Chat-v1.0 uses exactly the same architecture and tokenizer as Llama 2. TinyLlama is compact, with only 1.1B parameters, which lets it serve a wide range of applications that have limited compute and memory budgets.

### User Guide

> This model only supports devices equipped with a Rockchip RK3588 or RK3588S chip. Please check your device information and make sure the NPU is available.

#### openFyde System

> Make sure your system has been upgraded to the latest version.

1. Download the model file `XXX.rkllm`.
2. Create a folder named `model/` and place the model file inside it.
3. Launch FydeOS AI and complete the configuration on its settings page.

#### Other Systems

> Make sure the RKLLM-related NPU kernel update has been applied.

1. Download the model file `XXX.rkllm`.
2. Configure the model by following the [official documentation](https://github.com/airockchip/rknn-llm).

### FAQ

If you run into problems, please search the issue section first; if the problem remains unresolved, open a new issue.

### Limitations and Notes

- The model may have performance limitations in some scenarios.
- Comply with the relevant laws and regulations when using it.
- Appropriate parameter tuning may be needed to get the best results.

### Licence

This model is released under the same licence as [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).

### Contact

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/zh/)

## TinyLlama-1.1B

### Introduction

TinyLlama-1.1B-Chat_rkLLM is an RKLLM model derived from [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), specifically optimized for Rockchip devices. The model runs on the NPU of the RK3588 chip.

- **Model Name**: TinyLlama-1.1B-Chat_rkLLM
- **Architecture**: Identical to TinyLlama-1.1B-Chat-v1.0
- **Publisher**: FydeOS
- **Release Date**: 2024-06-03

### Model Details

TinyLlama-1.1B-Chat-v1.0, which shares the same architecture and tokenizer as Llama 2, is a compact language model of only 1.1 billion parameters. This compactness allows it to serve a variety of applications that require a limited compute and memory footprint.

### User Guide

> This model is only supported on devices with a Rockchip RK3588 or RK3588S chip. Please verify your device's chip information and ensure the NPU is operational.

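A quick way to confirm both points on the device is sketched below. This is only a sketch: the `/proc/device-tree/compatible` and `/sys/kernel/debug/rknpu/version` paths are assumptions based on common Rockchip BSP kernels (reading debugfs typically requires root), so adjust them if your image differs.

```python
# check_npu.py -- rough sanity check for an RK3588/RK3588S board with the RKNPU driver loaded.
# The paths below are assumptions based on common Rockchip BSP kernels; adjust for your image.
from pathlib import Path


def read_sys(path: str) -> str:
    """Read a sysfs/procfs file, replacing the NUL separators used by device-tree entries."""
    try:
        return Path(path).read_bytes().replace(b"\x00", b" ").decode(errors="replace").strip()
    except OSError:
        return ""


soc = read_sys("/proc/device-tree/compatible")      # should mention "rk3588" or "rk3588s"
npu = read_sys("/sys/kernel/debug/rknpu/version")   # RKNPU driver version; needs root + debugfs

print("SoC compatible:", soc or "<unreadable>")
print("RKNPU driver  :", npu or "<not found -- is the NPU kernel driver loaded?>")
```
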
#### openFyde System

> Ensure you have upgraded to the latest version of openFyde.

1. Download the model file `XXX.rkllm`.
2. Create a folder named `model/` and place the model file inside it (see the download sketch after this list).
3. Launch FydeOS AI and configure the settings on the settings page.

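If you prefer to script steps 1 and 2 instead of downloading through a browser, the minimal sketch below uses `huggingface_hub` (a recent version with `local_dir` support is assumed). The `repo_id` and `filename` values are placeholders; substitute this repository's actual ID and the published `.rkllm` filename.

```python
# download_model.py -- fetch the .rkllm file into a local model/ folder (steps 1 and 2).
# repo_id and filename are placeholders; use the real values from this model page.
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

MODEL_DIR = Path("model")
MODEL_DIR.mkdir(exist_ok=True)  # step 2: create the model/ folder

local_path = hf_hub_download(
    repo_id="your-namespace/TinyLlama-1.1B-Chat_rkLLM",  # placeholder repository ID
    filename="XXX.rkllm",                                # placeholder file name
    local_dir=MODEL_DIR,                                 # step 2: place the file in model/
)
print("Model saved to:", local_path)
```

Once the file is in place, continue with step 3 in FydeOS AI.
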
#### Other Systems

> Ensure you have updated the NPU kernel driver required by RKLLM.

1. Download the model file `XXX.rkllm`.
2. Follow the configuration guidelines provided in the [official documentation](https://github.com/airockchip/rknn-llm) (a runtime check is sketched after this list).

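After following the documentation, you can quickly confirm that the RKLLM runtime shared library is installed and loadable before building an application around it. This is only a sketch: `librkllmrt.so` is the runtime library name used by the rknn-llm project, and the fallback path below is an assumption that may not match your install.

```python
# check_runtime.py -- confirm the RKLLM runtime shared library is present and loadable.
# librkllmrt.so is the runtime shipped with rknn-llm; the fallback path is an assumption.
import ctypes

candidates = [
    "librkllmrt.so",           # resolved through the default dynamic-linker search path
    "/usr/lib/librkllmrt.so",  # common manual install location (assumption)
]

for name in candidates:
    try:
        ctypes.CDLL(name)
        print(f"OK: loaded {name}")
        break
    except OSError as err:
        print(f"Could not load {name}: {err}")
else:
    print("RKLLM runtime not found -- install it as described in the rknn-llm documentation.")
```

For actual inference code (model initialisation, prompts, callbacks), follow the examples in the rknn-llm repository rather than this sketch.
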
### FAQ

If you encounter issues, please refer to the issue section first. If your problem remains unresolved, submit a new issue.

### Limitations and Considerations

- The model may have performance limitations in certain scenarios.
- Ensure compliance with relevant laws and regulations during usage.
- Parameter tuning might be necessary to achieve optimal performance.

### Licence

This model is licensed under the same terms as [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).

### Contact Information

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/en/)