update for chat template
README.md
print(processor.decode(output[0], skip_special_tokens=True))
```

From transformers>=v4.48, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.

The chat template will load the image for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
```
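Note that `generate` returns the prompt tokens followed by the newly generated ones, so decoding `output[0]` directly will echo the prompt. To show only the model's reply, trim the prompt length first. A minimal sketch of the slicing, using stand-in token lists in place of `inputs["input_ids"]` and the tensor returned by `model.generate`:

```python
# Stand-ins for inputs["input_ids"] and the model.generate() output;
# generate() returns the prompt tokens followed by the new tokens.
prompt_ids = [[101, 2023, 2003]]
output_ids = [[101, 2023, 2003, 7592, 999]]

# Keep only the tokens generated after the prompt.
prompt_len = len(prompt_ids[0])
new_tokens = [seq[prompt_len:] for seq in output_ids]
print(new_tokens)  # → [[7592, 999]]
```

With real tensors the same idea is `output[:, inputs["input_ids"].shape[-1]:]`, which you can then pass to `processor.decode`.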
### Model optimization

#### 4-bit quantization through `bitsandbytes` library