Update README.md #1
by bconsolvo · opened

README.md CHANGED
@@ -6,113 +6,70 @@ colorTo: gray
 sdk: static
 pinned: false
 ---
-<div class="lg:col-span-3">
-  <h1>Get Started</h1>
-  <h3>1. Intel Acceleration Libraries</h3>
-  <p class="mb-2">
-    To get started with Intel hardware and software optimizations, download and install the Optimum Intel
-    and Intel® Extension for Transformers libraries. Follow these documents to learn how to install and use these libraries:
-  </p>
-  <ul>
-    <li class="ml-6"><a href="https://github.com/huggingface/optimum-intel#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">🤗 Optimum Intel library</a></li>
-    <li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers#readme" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">Intel® Extension for Transformers</a></li>
-  </ul>
-  <p class="mb-2">
-    The Optimum Intel library provides primarily hardware acceleration, while the Intel® Extension
-    for Transformers is focused more on software acceleration. Both should be present to achieve ideal
-    performance and productivity gains in transfer learning and fine-tuning with Hugging Face.
-  </p>
-  <h3>2. Find Your Model</h3>
-  <p class="mb-2">
-    Next, find your desired model (and dataset) by using the search box at the top-left of Hugging Face’s website.
-    Add “intel” to your search to narrow your search to models pretrained by Intel.
-  </p>
-  <img alt="" src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-model_search.png" style="margin:auto;transform:scale(0.8);" />
-  <h3>3. Read Through the Demo, Dataset, and Quick-Start Commands</h3>
-  <p class="mb-2">
-    On the model’s page (called a “Model Card”) you will find description and usage information, an embedded
-    inferencing demo, and the associated dataset. In the upper-right of your screen, click “Use in Transformers”
-    for helpful code hints on how to import the model to your own workspace with an established Hugging Face pipeline and tokenizer.
-  </p>
-  <img alt="" src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-use_transformers.png" style="margin:auto;transform:scale(0.8);" />
-  <img alt="" src="https://huggingface.co/spaces/Intel/README/resolve/main/hf-quickstart.png" style="margin:auto;transform:scale(0.8);" />
-</div>
-</div>

Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.

### Models

Check out Intel's models here on our Hugging Face page or directly through the [Hugging Face Models Hub search](https://huggingface.co/models?sort=trending&search=intel). Here are some of Intel's models:

| Model | Type |
| :--- | :--- |
| [dpt-hybrid-midas](https://huggingface.co/Intel/dpt-hybrid-midas) | Monocular depth estimation |
| [llava-gemma-2b](https://huggingface.co/Intel/llava-gemma-2b) | Multimodal |
| [gpt2 on Gaudi](https://huggingface.co/Habana/gpt2) | Text generation |
| [neural-chat-7b-v3-3-int8-ov](https://huggingface.co/OpenVINO/neural-chat-7b-v3-3-int8-ov) | Text generation |
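
Any of these can be dropped into a standard 🤗 Transformers workflow. A minimal sketch, assuming `transformers`, `torch`, and `Pillow` are installed (the image URL is just an example):

```python
# Minimal sketch: monocular depth estimation with Intel/dpt-hybrid-midas.
from transformers import pipeline
from PIL import Image
import requests

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)        # returns {"predicted_depth": tensor, "depth": PIL image}
result["depth"].save("depth_map.png")  # save the rendered depth map
```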

### Datasets

Intel has created a number of [datasets](https://huggingface.co/Intel?sort_datasets=modified#datasets) for use in fine-tuning both vision and language models. Check out the datasets below on our page, including [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) for natural language processing tasks and [SocialCounterfactuals](https://huggingface.co/datasets/Intel/SocialCounterfactuals) for vision tasks.
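
These load like any other Hub dataset; a minimal sketch, assuming `pip install datasets`:

```python
# Minimal sketch: pull Intel/orca_dpo_pairs from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("Intel/orca_dpo_pairs", split="train")
print(ds)     # columns and row count
print(ds[0])  # one preference pair: a prompt with chosen/rejected responses
```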

### Collections

Our Collections categorize models that pertain to Intel hardware and software. Here are a few:

| Collection | Description |
| :--- | :--- |
| [DPT 3.1](https://huggingface.co/collections/Intel/dpt-31-65b2a13eb0a5a381b6df9b6b) | Monocular depth (MiDaS) models, leveraging state-of-the-art vision backbones such as BEiT and Swinv2 |
| [Whisper](https://huggingface.co/collections/Intel/whisper-65b3d8d2d5bf0d622a866e3a) | Whisper models for automatic speech recognition (ASR) and speech translation, quantized for faster inference |
| [Intel Neural Chat](https://huggingface.co/collections/Intel/intel-neural-chat-65b3d2f2d0ba0a801668ef2c) | Fine-tuned 7B-parameter LLMs, one of which reached the top of the 7B Hugging Face LLM Leaderboard |

### Spaces

Check out Intel's leaderboards and other demo applications from our [Spaces](https://huggingface.co/Intel?sort_spaces=modified#spaces):

| Space | Description |
| :--- | :--- |
| [Powered-by-Intel LLM Leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard) | Evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware 🦾 |
| [Intel Low-bit Quantized Open LLM Leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard) | Evaluation leaderboard for quantized language models |

### Blogs

Get started with deploying Intel's models on Intel architecture with these hands-on tutorials, written by staff from Hugging Face and Intel:

| Blog | Description |
| :--- | :--- |
| [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) | Develop and deploy RAG applications as part of OPEA, the Open Platform for Enterprise AI |
| [Running Large Multimodal Models on an AI PC's NPU](https://huggingface.co/blog/bconsolvo/llava-gemma-2b-aipc-npu) | Run the llava-gemma-2b model on an AI PC's NPU |
| [A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake](https://huggingface.co/blog/phi2-intel-meteor-lake) | Deploy Phi-2 on your local laptop with Intel OpenVINO in the Optimum Intel library |
| [Partnering to Democratize ML Hardware Acceleration](https://huggingface.co/blog/intel) | Intel and Hugging Face collaborate to build state-of-the-art hardware acceleration to train, fine-tune, and predict with Transformers |

### Documentation

To learn more about deploying models on Intel hardware with Transformers, visit the resources listed below.

*Optimum Habana* - To deploy on Intel Gaudi accelerators, check out [optimum-habana](https://github.com/huggingface/optimum-habana/), the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:

```bash
pip install --upgrade-strategy eager optimum[habana]
```
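
Once installed, training mostly amounts to swapping the stock `Trainer` for its Gaudi counterpart. A minimal sketch of that swap, assuming a Gaudi machine with optimum-habana installed (the checkpoint, Gaudi config, and toy dataset are illustrative):

```python
# Minimal sketch: fine-tune on Gaudi with GaudiTrainer instead of transformers.Trainer.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_id = "bert-base-uncased"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy dataset, just to make the sketch self-contained.
train_dataset = Dataset.from_dict({"text": ["great", "terrible"], "label": [1, 0]})
train_dataset = train_dataset.map(
    lambda ex: tokenizer(ex["text"], padding="max_length", truncation=True, max_length=32)
)

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,     # run on Gaudi HPUs
    use_lazy_mode=True,  # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # Gaudi config from the Hub
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```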

*Optimum Intel* - To deploy on all other Intel architectures, check out [optimum-intel](https://github.com/huggingface/optimum-intel), the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. Depending on your needs, you can use these backends:

| Accelerator | Installation |
| :--- | :--- |
| [Intel Neural Compressor](https://huggingface.co/docs/optimum/en/intel/optimization_inc) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"` |
| [OpenVINO](https://huggingface.co/docs/optimum/en/intel/inference) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
| [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |
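
With the OpenVINO backend, for example, an existing Transformers checkpoint can be exported and run through the drop-in `OVModel` classes; a minimal sketch, assuming `optimum[openvino]` is installed (the checkpoint is just an example):

```python
# Minimal sketch: causal-LM inference on Intel hardware via the OpenVINO backend.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "gpt2"  # illustrative; any supported causal-LM checkpoint works
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert to OpenVINO IR on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Intel and Hugging Face are", max_new_tokens=30)[0]["generated_text"])
```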

### Join Our Dev Community

Please join us on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t) to ask questions and interact with our AI developer community!