CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare
Paper: arXiv:2407.19705 (https://arxiv.org/abs/2407.19705)
This model is a fine-tuned version of internlm/internlm2_5-7b, trained on a collection of English and Chinese medical datasets as described in the paper *CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare*.

Official code repo: https://github.com/CAS-SIAT-XinHai/CollectiveSFT

The model may have limited chat functionality.
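For reference, a minimal inference sketch with the Hugging Face transformers library is shown below. The checkpoint path is a placeholder for this model's weights, and the prompt and generation settings are illustrative assumptions; InternLM2.5 checkpoints require `trust_remote_code=True`.

```python
# Minimal inference sketch. "path/to/CollectiveSFT-checkpoint" is a
# placeholder; substitute the actual model ID or local path of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/CollectiveSFT-checkpoint"  # placeholder, not a real hub ID
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",
    trust_remote_code=True,      # InternLM2.5 uses custom modeling code
).eval()

prompt = "What are common symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```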
Language: English
| Dataset Name | Style | Size (samples) |
|---|---|---|
| PubMedQA | QA | 273,518 |
| MedMCQA | MCQA | 182,822 |
| HeadQA | QA | 2,657 |
| Total |  | 458,997 |
Language: Chinese
| Dataset Name | Style | Size (samples) |
|---|---|---|
| cMedQA2 | QA | 100,000 |
| cMedDialogue | Dialogue | 792,099 |
| webMedQA | QA | 252,850 |
| MedicalDialog | Dialogue | 2,725,989 |
| CMID | NER | 12,254 |
| NLPEC | MCQA | 18,703 |
| CMB | MCQA | 269,359 |
| MLEC-QA | MCQA | 108,988 |
| DISC-Med-SFT | Dialogue | 464,898 |
| Total |  | 4,745,140 |
For detailed dataset specifications and access instructions, please refer to our paper.
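The datasets above mix several styles (QA, MCQA, dialogue, NER), so fine-tuning requires normalizing them into a single instruction format, which is the collective-instruction idea in the title. Below is a minimal sketch of such a normalization; the schema, field names, and prompt templates are illustrative assumptions, not the exact format used in the paper.

```python
# Hypothetical normalization of heterogeneous medical records into a
# single instruction-tuning schema. Field names and prompt templates
# are illustrative assumptions, not the format used in the paper.
from typing import Dict, List


def mcqa_to_instruction(question: str, options: List[str], answer: str) -> Dict[str, str]:
    """Flatten a multiple-choice item (e.g. from CMB or MLEC-QA) into one record."""
    labeled = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return {
        "instruction": "Answer the following medical multiple-choice question.",
        "input": f"{question}\n{labeled}",
        "output": answer,
    }


def dialogue_to_instruction(turns: List[str]) -> Dict[str, str]:
    """Use all but the last turn as context and the last turn as the target."""
    return {
        "instruction": "Continue the doctor-patient conversation.",
        "input": "\n".join(turns[:-1]),
        "output": turns[-1],
    }
```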
The following base configuration was used during training:

- Base model: internlm/internlm2_5-7b

For the full set of training hyperparameters, refer to the official code repository.