Update README.md
README.md CHANGED
@@ -10,7 +10,9 @@ library_name: transformers
 
 ### Model Summary:
 
-<
+<p align="center">
+<img src="https://marcelbinz.github.io/imgs/centaur.png" width="200"/>
+</p>
 
 Llama-3.1-Centaur-70B is a foundation model of cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.
 
@@ -34,7 +36,7 @@ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloa
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 ```
 
-You can alternatively run the model with unsloth on a single 80GB GPU using the [low-rank adapter](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B-adapter).
+More details are provided in this [**example script**](https://github.com/marcelbinz/Llama-3.1-Centaur-70B/blob/main/test.py). You can alternatively run the model with unsloth on a single 80GB GPU using the [low-rank adapter](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B-adapter).
 
 
 ### Licensing Information
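The second hunk header only shows a truncated `from_pretrained` call from the README's usage section. For context, a minimal sketch of that Transformers loading path is below; the repository id, the `device_map="auto"` argument, and the example prompt are assumptions rather than lines taken from the diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; the hunk header truncates the original from_pretrained call.
model_name = "marcelbinz/Llama-3.1-Centaur-70B"

# bfloat16 matches the torch_dtype fragment visible in the hunk header;
# device_map="auto" is an assumption to spread the 70B weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompt the model with a behavioral experiment described in natural language
# (hypothetical prompt, not taken from the model card).
prompt = "You are taking part in a decision-making experiment with two options, A and B."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```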
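The added line also points at an adapter-based alternative through unsloth. A hedged sketch of what that single-GPU path could look like follows; the exact arguments live in the linked example script, and `max_seq_length`, `load_in_4bit`, and the dtype choice here are assumptions.

```python
from unsloth import FastLanguageModel

# Load the low-rank adapter repository; unsloth resolves the base model it was trained from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="marcelbinz/Llama-3.1-Centaur-70B-adapter",
    max_seq_length=32768,   # assumed context length
    dtype=None,             # let unsloth pick a suitable dtype for the GPU
    load_in_4bit=True,      # 4-bit base weights keep the 70B model within one 80GB GPU
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```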