This model was converted to GGUF format from [`DavidAU/L3.1-Dark-Reasoning-Unholy-Hermes-R1-Uncensored-8B`](https://huggingface.co/DavidAU/L3.1-Dark-Reasoning-Unholy-Hermes-R1-Uncensored-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/L3.1-Dark-Reasoning-Unholy-Hermes-R1-Uncensored-8B) for more details on the model.
---

Context: 128k.

Required: Llama 3 Instruct template.

"Dark Reasoning" is a variable control reasoning model that is uncensored, operates at all temps/settings, and is intended for creative use cases as well as general usage.

This version's "thinking"/"reasoning" has been "darkened" by the original CORE model's DNA (see model tree) and will also be shorter and more compressed. Additional system prompts below take this a lot further: a lot darker, a lot more ... evil.

Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

This version retains all the functions and features of the original "DeepHermes" model at roughly 50%-67% of its original reasoning power. Please visit their repo for full information on features, test results, and so on.

---
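For reference, the Llama 3 Instruct template required above uses the following chat format (special tokens as defined by the Llama 3/3.1 tokenizer; most front-ends such as SillyTavern or llama.cpp's built-in chat templates apply this automatically):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```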
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
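For example, a minimal sketch (the repo and file names below are placeholders, not this repo's actual identifiers; substitute the GGUF repo name and quant filename you want):

```shell
# Install llama.cpp (provides the llama-cli and llama-server binaries)
brew install llama.cpp

# Run a model directly from the Hugging Face Hub.
# <user>/<repo> and <model-file>.gguf are placeholders; use this repo's values.
llama-cli --hf-repo <user>/<repo> --hf-file <model-file>.gguf -p "Your prompt here"
```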