Commit 64f7a80 · Parent(s): 40635ab

Update README.md

README.md CHANGED

@@ -1,18 +1,59 @@

Previous README.md (front matter plus a placeholder body):

---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
---

To be added in detail after evaluation

Updated README.md:

---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
datasets:
- Open-Orca/SlimOrca
- HuggingFaceH4/no_robots
- Intel/orca_dpo_pairs
- rizerphe/glaive-function-calling-v2-zephyr
- codefuse-ai/Evol-instruction-66k
library_name: transformers
pipeline_tag: text-generation
---

# pic_7B_mistral_Full_v0.1

PIC_7B_Mistral (first phase).

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Curated, decontaminated subsets of the datasets listed in this model card were used for training.
All of the datasets used are public as of this model's release.

Collaborate with or consult me: [Twitter](https://twitter.com/4evaBehindSOTA), [Discord](https://discord.gg/ftEM63pzs2)

*The recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token.*

#### Chat Model Inference
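
As an illustrative sketch only (not yet part of the card), ChatML-style inference with `transformers` might look like the following; the repository id is a placeholder and the tokenizer is assumed to carry a ChatML chat template:

```python
# Illustrative sketch: minimal ChatML-style chat inference with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<user>/pic_7B_mistral_Full_v0.1"  # placeholder, not a confirmed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful, empathetic assistant."},
    {"role": "user", "content": "Write a short haiku about debugging."},
]

# ChatML wraps each turn as <|im_start|>role ... <|im_end|>.
# apply_chat_template uses the tokenizer's stored template when one is present.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The EOT caveat above matters most when prompting by hand: end each ChatML turn with `<|im_end|>` and make sure the end-of-turn token the model was trained with is registered as a stop token, otherwise generation may run on.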

## Model description

The first generic model of Project PIC (Partner-in-Crime) in the 7B range.
I am trying a bunch of things right now and seeing what sticks.

Empathy + coding + instruction/JSON/function-calling adherence is my game.
This effort keeps surfacing challenges and insights; patience is key.


## Intended uses & limitations

The model should be useful in a generic capacity; it demonstrates a little bit of everything.

Basic tests so far:
- Roleplay: adherence to the given character is present.
- JSON/function calling: passing (an illustrative prompt sketch follows this list).
- Coding: to be evaluated.
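
For concreteness, here is the flavor of JSON/function-calling check referred to in the list above; the tool schema, wording, and expected reply are invented for this sketch and are not taken from the card:

```python
# Illustrative function-calling check (hypothetical tool schema and expected output).
import json

tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

system = (
    "You are a function-calling assistant. Call the tool by replying with a single "
    'JSON object of the form {"name": <function name>, "arguments": <arguments object>}.\n'
    f"Available tool: {json.dumps(tool)}"
)

# ChatML-style prompt for the check.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\nWhat's the weather like in Mumbai right now?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# A "passing" response is parseable JSON that names the tool with valid arguments, e.g.:
expected = json.loads('{"name": "get_weather", "arguments": {"city": "Mumbai"}}')
print(prompt)
print(expected["name"], expected["arguments"])
```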

## Training procedure

SFT followed by DPO.

### Training results

To be evaluated.

### Framework versions

- Transformers 4.35.2
- PyTorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
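
The card names only the recipe (SFT followed by DPO), not the tooling. Purely as a sketch, assuming a TRL-style DPO stage over Intel/orca_dpo_pairs (the trainer choice, column mapping, and hyperparameters below are illustrative, not from the card):

```python
# Rough sketch of a DPO preference-tuning stage (trl 0.7.x-era API; the card does not
# specify the trainer, hyperparameters, or exact data preparation).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mistralai/Mistral-7B-v0.1"  # in practice this would be the SFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Intel/orca_dpo_pairs has system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected.
def to_dpo(example):
    prompt = (
        f"<|im_start|>system\n{example['system']}<|im_end|>\n"
        f"<|im_start|>user\n{example['question']}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    return {
        "prompt": prompt,
        "chosen": example["chosen"] + "<|im_end|>",
        "rejected": example["rejected"] + "<|im_end|>",
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo, remove_columns=["system", "question"]
)

args = TrainingArguments(
    output_dir="pic-7b-dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL clones the policy as the frozen reference model
    args=args,
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```

An SFT stage on the instruction datasets listed in the front matter would precede this; the DPO stage then optimizes the policy against the frozen reference using the chosen/rejected pairs.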