Embedding Atlas is an interactive visualization tool for exploring large embedding spaces. It enables you to visualize, cross-filter, and search embeddings alongside associated metadata, helping you understand patterns and relationships in high-dimensional data. All computation happens locally on your machine, ensuring your data remains private and secure.
Here is an example atlas for the MegaScience dataset hosted as a Static Space:
First, install Embedding Atlas:

```bash
pip install embedding-atlas
```
If you plan to load private datasets from the Hugging Face Hub, you'll also need to log in with your Hugging Face account:

```bash
hf auth login
```
Embedding Atlas provides seamless integration with the Hugging Face Hub, allowing you to visualize embeddings from any dataset directly.
The simplest way to visualize a Hugging Face dataset is through the command line interface. Try it with the IMDB dataset:

```bash
# Load the IMDB dataset from the Hub
embedding-atlas stanfordnlp/imdb

# Specify the text column for embedding computation
embedding-atlas stanfordnlp/imdb --text "text"

# Load only a sample for faster exploration
embedding-atlas stanfordnlp/imdb --text "text" --sample 5000
```

For your own datasets, use the same pattern:
```bash
# Load your dataset from the Hub
embedding-atlas username/dataset-name

# Load multiple splits
embedding-atlas username/dataset-name --split train --split test

# Specify a custom text column
embedding-atlas username/dataset-name --text "content"
```

You can also use Embedding Atlas in Jupyter notebooks for interactive exploration:
```python
from embedding_atlas.widget import EmbeddingAtlasWidget
from datasets import load_dataset

# Load the IMDB dataset from the Hugging Face Hub
dataset = load_dataset("stanfordnlp/imdb", split="train[:5000]")

# Convert to a pandas DataFrame
df = dataset.to_pandas()

# Create the interactive widget
widget = EmbeddingAtlasWidget(df)
widget
```

For your own datasets:
```python
from embedding_atlas.widget import EmbeddingAtlasWidget
from datasets import load_dataset

# Load your dataset from the Hub
dataset = load_dataset("username/dataset-name", split="train")
df = dataset.to_pandas()

# Create the interactive widget
widget = EmbeddingAtlasWidget(df)
widget
```

If you have datasets with pre-computed embeddings, you can load them directly:
```bash
# Load a dataset with pre-computed coordinates
embedding-atlas username/dataset-name \
    --x "embedding_x" \
    --y "embedding_y"

# Load with pre-computed nearest neighbors
embedding-atlas username/dataset-name \
    --neighbors "neighbors_column"
```

Embedding Atlas uses SentenceTransformers by default but supports custom embedding models:
```bash
# Use a specific embedding model
embedding-atlas stanfordnlp/imdb \
    --text "text" \
    --model "sentence-transformers/all-MiniLM-L6-v2"

# For models that require remote code execution
embedding-atlas username/dataset-name \
    --model "custom/model" \
    --trust-remote-code
```

Fine-tune the dimensionality reduction for your specific use case:
```bash
embedding-atlas stanfordnlp/imdb \
    --text "text" \
    --umap-n-neighbors 30 \
    --umap-min-dist 0.1 \
    --umap-metric "cosine"
```

Visualize and explore text corpora to identify clusters, outliers, and patterns:
```python
from embedding_atlas.widget import EmbeddingAtlasWidget
from datasets import load_dataset

# Load a text classification dataset
dataset = load_dataset("stanfordnlp/imdb", split="train[:5000]")
df = dataset.to_pandas()

# Visualize with metadata
widget = EmbeddingAtlasWidget(df)
widget
```
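If you want to supply your own pre-computed coordinates via the `--x`/`--y` flags shown earlier, the dataset just needs one row per point with two numeric coordinate columns. Here is a minimal sketch of such a table; the toy values and the `embedding_x`/`embedding_y` names are illustrative placeholders matching the earlier example flags, and pandas/NumPy are assumed to be installed:

```python
import numpy as np
import pandas as pd

# Toy 2D coordinates standing in for real projection output
# (in practice these would come from UMAP or a similar reduction)
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "text": ["first review", "second review", "third review"],
    "embedding_x": rng.normal(size=3),  # passed via --x "embedding_x"
    "embedding_y": rng.normal(size=3),  # passed via --y "embedding_y"
})

# Both coordinate columns are numeric, one row per point
print(df.shape)
```

A table shaped like this can be pushed to the Hub as a regular dataset and then loaded with the `--x`/`--y` flags, skipping embedding computation entirely.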