Why I think local, open-source models will eventually win.
The most useful AI applications are moving toward multi-turn agentic behavior: systems that take hundreds or even thousands of iterative steps to complete a task, e.g. Claude Code, or computer-control agents that click, type, and test repeatedly.
In this regime, the power of a model lies not in how smart it is per token, but in how quickly it can interact with its environment and tools across many steps. Model quality becomes secondary to latency.
An open-source model that can call tools quickly, check that the right thing was clicked, or verify that a code change actually passes tests can easily outperform a slightly "smarter" closed model that has to make remote API calls for every move.
Eventually, the balance tips: it becomes impractical for an agent to rely on remote inference for every micro-action. Just as no one would tolerate a keyboard that required a network request per keystroke, users won't accept agent workflows bottlenecked by latency. All devices will ship with local, open-source models that are "good enough," and the expectation will shift toward everything running locally. It'll happen sooner than most people think.
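To make the arithmetic concrete, here is a rough back-of-the-envelope sketch; the step count and per-step latencies below are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope sketch: cumulative latency of a long agent loop.
# All numbers are assumptions, not measurements.

STEPS = 1_000                  # iterative tool calls in one agent task
LOCAL_MS_PER_STEP = 60         # assumed: small local model, on-device inference
REMOTE_MS_PER_STEP = 60 + 400  # assumed: same compute plus ~400 ms network/API overhead

local_minutes = STEPS * LOCAL_MS_PER_STEP / 1000 / 60
remote_minutes = STEPS * REMOTE_MS_PER_STEP / 1000 / 60

print(f"local:  {local_minutes:.1f} min")   # ~1.0 min
print(f"remote: {remote_minutes:.1f} min")  # ~7.7 min
```

Even with generous assumptions for the remote model, the per-step overhead compounds into the dominant cost once the loop runs long enough.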
We're thrilled to announce that the Qwen3-VL family of vision-language models is now available on Azure AI Foundry, thanks to our collaboration with Microsoft.
We bring open-source innovation to enterprise-grade AI infrastructure, making it easier than ever for enterprises to securely deploy and scale the latest and greatest models from Hugging Face within Azure.
Highlights:
- Deploy Qwen3-VL instantly via managed endpoints
- Built-in governance, telemetry, and lifecycle management
- True multimodal reasoning: vision, language, and code understanding
- State-of-the-art performance, outperforming closed-source models like Gemini 2.5 Pro and GPT-5
- Available in both *Instruct* and *Thinking* modes, across 24 model sizes
Get started today: search for Qwen3-VL in the Hugging Face Collection on Azure AI Foundry.
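As a rough illustration of what querying such a deployment can look like, here is a minimal Python sketch. It assumes your managed endpoint exposes an OpenAI-compatible chat-completions API; the endpoint URL, API key, deployment name, and image URL are all placeholders, so use the connection details shown for your own deployment in Azure AI Foundry.

```python
# Minimal sketch: querying a deployed Qwen3-VL endpoint.
# Assumes an OpenAI-compatible chat-completions API; all values below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-endpoint>.inference.ai.azure.com/v1",  # placeholder endpoint
    api_key="<your-api-key>",                                       # placeholder key
)

response = client.chat.completions.create(
    model="Qwen3-VL",  # placeholder deployment name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```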
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.
We see a growing desire among companies to move from large LLM APIs to local models for better control and privacy, and that is reflected in the library's growth: in just the last 30 days, Sentence Transformers models have been downloaded more than 270 million times, second only to transformers.
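For readers who haven't tried the library yet, running an embedding model fully locally looks like this (the model name is just one popular choice, not a recommendation tied to this announcement):

```python
# Local embeddings with Sentence Transformers: everything runs on-device.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = [
    "Companies are moving from LLM APIs to local models.",
    "On-device embedding models give better control and privacy.",
    "The weather is nice today.",
]
embeddings = model.encode(sentences)                      # shape: (3, 384)
similarities = model.similarity(embeddings, embeddings)   # cosine similarity matrix
print(similarities)
```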
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice ended up being very valuable for the embedding & Information Retrieval community, and I think the choice of granting Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
Finally, our new paper is out: "FineVision: Open Data Is All You Need" (2510.17269)!
If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box, which makes replicating SOTA work impossible. We wanted to change that.
FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.
In the paper, we share how we built it:
- finding and cleaning data at scale
- removing excessive duplicates across sources
- decontaminating against 66 public benchmarks
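As a toy illustration of the dedup and decontamination steps (not the actual FineVision pipeline, which is detailed in the paper and also handles images and near-duplicates), hashing normalized text is the simplest version of the idea:

```python
# Toy sketch: exact-duplicate removal and benchmark decontamination via text hashing.
# Illustrative only; the real pipeline is described in the paper.
import hashlib

def text_key(text: str) -> str:
    """Hash of lowercased, whitespace-normalized text."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def clean(samples: list[dict], benchmark_texts: set[str]) -> list[dict]:
    benchmark_keys = {text_key(t) for t in benchmark_texts}
    seen, kept = set(), []
    for sample in samples:
        key = text_key(sample["question"] + " " + sample["answer"])
        if key in benchmark_keys:   # decontamination: overlaps a public benchmark
            continue
        if key in seen:             # dedup: exact duplicate across sources
            continue
        seen.add(key)
        kept.append(sample)
    return kept
```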
My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets. NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!
To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset! HuggingFaceM4/FineVision_full_shuffled
It's ready to stream, so you can start training your own models right away:
from datasets import load_dataset

# Stream the dataset instead of downloading it in full.
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))  # peek at the first sample
A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!