image-resize #9
by echarlaix (HF Staff)

blog/openvino_vlm/openvino-vlm.md CHANGED
@@ -14,7 +14,7 @@ That’s where tools like Intel [Hugging Face Optimum](https://docs.openvino.ai/
 
 Let’s first recap: A Vision Language Model (VLM) can understand both text and images. Instead of just reading or writing text, it can also “see” pictures, so you can ask it to describe a photo, answer a question about an image, or generate a caption. It’s like giving your LLM eyes.
 
-<figure>
+<figure style="width: 700px; margin: 0 auto;">
 <img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat1.png">
 </figure>
 
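The recap in the hunk above becomes concrete with a single call. Below is a minimal sketch of asking a VLM to describe a photo through the `transformers` chat-template API; the `HuggingFaceTB/SmolVLM-Instruct` checkpoint and the local `photo.jpg` path are illustrative assumptions, not something this diff pins down.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumed checkpoint for illustration; any chat-capable VLM works the same way.
model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

# One user turn containing an image placeholder plus a text question.
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this photo."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=prompt, images=[Image.open("photo.jpg")], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```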
@@ -24,10 +24,11 @@ In contrast, SmolVLM is purpose-built for low-resource environments, and it beco
 Launched by Hugging Face in July 2024, SmolVLM addresses the growing need for multimodal AI that runs locally without requiring high-end GPUs or cloud infrastructure. As vision-language models become essential in areas like accessibility, robotics, and on-device assistants, SmolVLM offers a path to efficient, privacy-preserving inference at the edge.
 Architecturally, SmolVLM pairs a lightweight vision encoder with a compact language decoder. This modular design enables it to interpret both images and text.
 
-<figure>
-<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/smolvlm.png">
-<figcaption>
-</figcaption>
+<figure style="width: 700px; margin: 0 auto;">
+<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/smolvlm.png" width=700>
+<figcaption style="text-align: center;">
+SmolVLM architecture (<b><i>Source: <a href="https://huggingface.co/blog/smolvlm#what-is-smolvlm">SmolVLM - small yet mighty Vision Language Model</a></i></b>).
+</figcaption>
 </figure>
 
 It offers a lightweight, efficient solution for running image-and-text models directly on laptops or edge devices.
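Since the surrounding post is about running this vision-encoder-plus-language-decoder checkpoint through OpenVINO, a one-step export sketch may help readers skimming the diff. It assumes a recent `optimum-intel` whose `OVModelForVisualCausalLM` covers this model family:

```python
from optimum.intel import OVModelForVisualCausalLM

# export=True converts the PyTorch weights to OpenVINO IR at load time
# (assumes an optimum-intel version that supports this architecture).
model = OVModelForVisualCausalLM.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct", export=True
)
model.save_pretrained("smolvlm_ov")  # saves the IR (*.xml graph, *.bin weights)
```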
@@ -73,7 +74,7 @@ Now it’s time to optimize the model for efficient execution using **quantizati
 
 Essentially, it’s a way to map values from a high-precision data type, such as 32-bit floating-point numbers (FP32), to a lower-precision format, typically 8-bit integers (INT8). While this process offers several key benefits, it can also result in a potential loss of accuracy.
 
-<figure>
+<figure style="width: 800px; margin: 0 auto;">
 <img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/quantization.png">
 </figure>
 
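The FP32-to-INT8 mapping described in the hunk above is an affine rescale, and the accuracy loss it warns about is exactly the rounding error that rescale introduces. A self-contained sketch of asymmetric 8-bit quantization (illustrative only; not the precise scheme NNCF/OpenVINO applies):

```python
import numpy as np

# Toy FP32 tensor standing in for a weight matrix.
x = np.random.randn(4, 4).astype(np.float32)

# Map the observed range [x.min(), x.max()] onto the 256 INT8 levels [-128, 127].
lo, hi = float(x.min()), float(x.max())
scale = (hi - lo) / 255.0                    # FP32 width of one INT8 step
zero_point = int(round(-lo / scale)) - 128   # INT8 value that represents 0.0

x_int8 = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantize to see what survived; per-element error is bounded by ~scale / 2.
x_back = (x_int8.astype(np.float32) - zero_point) * scale
print("max abs error:", np.abs(x - x_back).max())
```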