Update README.md
For all kinds of requests
README.md
CHANGED
@@ -1,9 +1,61 @@
 ---
 license: apache-2.0
-pipeline_tag: text-generation
 library_name: transformers
 tags:
 - vllm
+- code
+- not-for-all-audiences
+- music
+- text-generation-inference
+- moe
+- chemistry
+- biology
+- art
+- finance
+- merge
+- legal
+- climate
+- medical
+datasets:
+- fka/awesome-chatgpt-prompts
+- common-pile/caselaw_access_project
+- interstellarninja/hermes_reasoning_tool_use
+- jxm/gpt-oss20b-samples
+- MegaScience/MegaScience
+- openai/BrowseCompLongContext
+- levjam/openai_to_z_upload
+- deepset/prompt-injections
+- openai/gsm8k
+- HuggingFaceTB/smoltalk2
+language:
+- en
+- pt
+- fr
+- ar
+- yo
+- ha
+- hi
+- id
+- de
+- ig
+- he
+metrics:
+- accuracy
+- character
+- bertscore
+- bleu
+- code_eval
+- charcut_mt
+- chrf
+- brier_score
+- bleurt
+base_model:
+- openai/gpt-oss-120b
+- openai/gpt-oss-20b
+- xai-org/grok-1
+- openai/whisper-large-v3
+- deepseek-ai/DeepSeek-R1
+new_version: openai/gpt-oss-120b
 ---
 
 <p align="center">
@@ -165,4 +217,4 @@ The gpt-oss models are excellent for:
 
 Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
 
-This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
+This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
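The updated front matter keeps `library_name: transformers` and adds serving-oriented tags such as `text-generation-inference` alongside the existing `vllm` tag, while dropping `pipeline_tag: text-generation`. As a minimal sketch of the card's intended text-generation usage (none of this code is part of the commit; the model id comes from the `base_model` list and the dtype/device settings are assumptions):

```python
# Minimal sketch: running the card's text-generation use case with Transformers.
# The model id is taken from the card's base_model list; dtype/device settings are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # shard the weights across available GPUs
)

messages = [
    {"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}
]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```

Given the `vllm` tag, the same checkpoint could presumably also be served with vLLM (for example `vllm serve openai/gpt-oss-120b`), which exposes an OpenAI-compatible HTTP endpoint.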
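The new `metrics` block mixes generation metrics (`bleu`, `chrf`, `bertscore`, `bleurt`, `charcut_mt`) with classification-style ones (`accuracy`, `brier_score`), but the card reports no scores for any of them. If numbers were to be added, the Hugging Face `evaluate` library exposes most of these metrics under the same names; a small illustration with made-up strings:

```python
# Illustrative only: computing two of the metrics named in the card metadata.
# The predictions and references below are made-up placeholders, not model outputs.
import evaluate

predictions = ["The cat sat on the mat."]
references = [["A cat was sitting on the mat."]]

bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")

print(bleu.compute(predictions=predictions, references=references))
print(chrf.compute(predictions=predictions, references=references))
```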
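The body of the card still closes with the claim that `gpt-oss-120b` can be fine-tuned on a single H100 node while [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) fits on consumer hardware. The commit does not say how; for the consumer-hardware case a common route is parameter-efficient LoRA fine-tuning with `peft` and TRL, roughly as sketched below. The dataset, LoRA rank, and batch settings are placeholders, not values from this card:

```python
# Hypothetical LoRA fine-tuning sketch for the smaller checkpoint.
# Nothing here comes from the commit: the dataset, rank, and batch sizes are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any chat-formatted dataset with a "messages" column will do.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="gpt-oss-20b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```

Scaling the same recipe to the 120B variant would require sharding the weights across the GPUs of a node (for example with `device_map="auto"` or FSDP), which is presumably what the single-H100-node claim refers to.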