
Amazon Co-purchase Multi-modal Graph (ACMMG) Dataset

Paper | Personal Website

Figure: sampled_graph (visualization of a sampled subgraph from the dataset)

Dataset Description

The Amazon Co-purchase Multi-modal Graph (ACMMG) dataset is a large-scale, multimodal graph that integrates textual, visual, and structural signals to support product co-purchase analysis. Built upon the Amazon Products dataset introduced by Hou et al. (2024), ACMMG enriches product-level representations with detailed co-purchase relationships and heterogeneous modalities. This dataset is curated for use in our paper, SLIP: Structural-aware Language-Image Pretraining for Vision-Language Alignment (arXiv:2511.03019).

Each product in the dataset is represented through:

  • Text: Product titles, detailed specifications, and reviews
  • Vision: High-resolution product images
  • Structure: Co-purchase graph relationships
  • Class: The product's subcategory within its main category

In addition, we provide precomputed CLIP embeddings for product images, titles, descriptions, and combined textual fields. These embeddings can be directly used as feature inputs and serve as strong baselines for downstream experiments.
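
For instance, the embeddings can be attached directly to the graph as node features. The minimal sketch below assumes that row i of each embedding tensor corresponds to node i of the DGL graph and uses the file layout described under Dataset Structure; the category path is a placeholder.

import dgl
import torch

# Attach the precomputed CLIP embeddings as node features, assuming row i of
# each tensor corresponds to node i of the graph (placeholder path).
category_path = "path/to/category"
graph = dgl.load_graphs(f"{category_path}/dgl_graph.dgl")[0][0]

image_embeds = torch.load(f"{category_path}/embeddings/image_embeds.pt", map_location="cpu")
combined_embeds = torch.load(f"{category_path}/embeddings/combined_embeds.pt", map_location="cpu")

graph.ndata["image_feat"] = image_embeds      # [num_nodes, 512]
graph.ndata["text_feat"] = combined_embeds    # [num_nodes, 512]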

Dataset Construction

  • Edges: The co-purchase graph is derived from purchase records, which initially form a bipartite graph between consumers and products. We construct the co-purchase graph by connecting products that share common purchasers (second-order connections); a sketch of this projection follows this list.
  • Nodes: Each product includes its assigned category from the hierarchical taxonomy in Hou et al. (e.g., Electronics > Smartphones > Accessories). However, because many branches of the taxonomy are heavily imbalanced or not semantically useful, we select a specific hierarchy level for each main category to maintain diversity while achieving reasonable class balance. In addition, we include user reviews as part of the node information, keeping up to 25 reviews per node (product) to prevent overrepresentation of high-volume items.
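
The following is a minimal, illustrative sketch of the bipartite-to-co-purchase projection. The purchase records, user IDs, and ASINs are hypothetical; the released graph was built from the full Amazon Products records with our own pipeline.

from collections import defaultdict
from itertools import combinations

# Hypothetical purchase records: (user_id, product_asin) pairs forming a
# bipartite user-product graph.
purchases = [
    ("u1", "A"), ("u1", "B"),
    ("u2", "A"), ("u2", "B"), ("u2", "C"),
    ("u3", "B"), ("u3", "C"),
]

# Group products by purchaser.
baskets = defaultdict(set)
for user, asin in purchases:
    baskets[user].add(asin)

# Project onto products: two products are connected if they share a purchaser,
# weighted by the number of distinct users who bought both.
co_purchase_counts = defaultdict(int)
for items in baskets.values():
    for a, b in combinations(sorted(items), 2):
        co_purchase_counts[(a, b)] += 1

print(dict(co_purchase_counts))  # {('A', 'B'): 2, ('A', 'C'): 1, ('B', 'C'): 2}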

Data Quality Filters

To ensure data quality and statistical robustness, we employ two filtering mechanisms (sketched in code below):

  1. k-core decomposition (k=5): Recursively removes nodes with fewer than 5 connections until all remaining nodes have at least 5 connections. This preserves only the dense, stable subgraph where meaningful patterns can emerge.

  2. Co-purchase frequency filtering: Retains only edges representing products co-purchased at least 3 times by different users. This filtering is important for identifying meaningful product associations that represent actual shopping patterns rather than random coincidences.

This dual filtering approach:

  • Eliminates noise from sparse interactions
  • Reduces the impact of outliers
  • Ensures that captured co-purchase relationships reflect genuine consumer behaviors rather than coincidental or one-time purchasing decisions
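
A minimal sketch of the two filters using networkx, assuming a co_purchase_counts mapping of (asin_a, asin_b) -> number of distinct co-purchasers (as in the projection sketch above). The toy data and application order are illustrative only; the released graph was produced by our own pipeline.

import networkx as nx

# Hypothetical co-purchase counts (too small to survive a 5-core; illustrative only).
co_purchase_counts = {("A", "B"): 4, ("A", "C"): 2, ("B", "C"): 7}

G = nx.Graph()
for (a, b), count in co_purchase_counts.items():
    G.add_edge(a, b, weight=count)

# Co-purchase frequency filtering: keep edges co-purchased at least 3 times.
frequent = [(a, b) for a, b, w in G.edges(data="weight") if w >= 3]
G = G.edge_subgraph(frequent).copy()

# k-core decomposition (k=5): recursively drop nodes with degree < 5.
G = nx.k_core(G, k=5)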

Graph Structure Characteristics

The graph structure reveals intuitive patterns (illustrated by the code sketch after this list):

  • First-hop neighbors: Typically represent complementary products rather than similar ones, as consumers rarely purchase identical items repeatedly but instead buy components that work together (e.g., a laptop and its compatible charger).
  • Second-hop neighbors: Products connected through an intermediary node tend to be more similar to the original item. This pattern emerges naturally from consumer behavior: complementary purchases create first-hop connections, while similar products are indirectly linked through their shared complementary items.
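
As a hedged illustration of these patterns, the sketch below extracts the first- and second-hop neighborhoods of a single product node from the DGL graph; the category path and node index are placeholders.

import dgl
import torch

category_path = "path/to/category"
graph = dgl.load_graphs(f"{category_path}/dgl_graph.dgl")[0][0]

node_id = 0  # any product node index

# First-hop neighbors: direct co-purchase partners (often complementary items).
first_hop = graph.successors(node_id)

# Second-hop neighbors: products reached through one intermediary (often similar
# items), excluding the seed node and its direct neighbors.
two_hop = torch.cat([graph.successors(n) for n in first_hop.tolist()])
second_hop = torch.unique(two_hop)
exclude = torch.cat([first_hop, torch.tensor([node_id])])
second_hop = second_hop[~torch.isin(second_hop, exclude)]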

Data Quality Metrics

To ensure robust multimodal alignment, entries with incomplete data (missing titles, inadequate descriptions, or low-quality images) have been filtered out. Semantic alignment between textual and visual components is quantified using CLIP-T scores, which measure the coherence between images and their corresponding textual descriptions; higher scores reflect stronger semantic congruence. The dataset’s statistics are comprehensively documented in Table 1 of the paper.
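
As a minimal sketch, assuming CLIP-T here denotes the cosine similarity between each product's CLIP image embedding and its CLIP text embedding (see the paper for the exact definition), the scores can be reproduced from the precomputed tensors; the category path is a placeholder.

import torch
import torch.nn.functional as F

category_path = "path/to/category"
image_embeds = torch.load(f"{category_path}/embeddings/image_embeds.pt", map_location="cpu")
combined_embeds = torch.load(f"{category_path}/embeddings/combined_embeds.pt", map_location="cpu")

# Per-product image-text cosine similarity, shape [num_nodes].
clip_t = F.cosine_similarity(image_embeds, combined_embeds, dim=-1)
print(clip_t.mean().item(), clip_t.min().item(), clip_t.max().item())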

Dataset Structure

The dataset is organized by product categories. Each category directory contains:

  • node_attributes/: Directory containing node attribute files:

    • node_attributes_*.parquet: Node features including:
      • asin: Amazon Standard Identification Number (unique product identifier)
      • title: Product title
      • desc: Product description (array of description strings)
      • class: Category class label
      • reviews: Product reviews (array of review strings)
  • dgl_graph.dgl: DGL graph object containing:

    • Node indices mapped to products
    • Edge indices representing co-purchase relationships
    • Graph structure (undirected, weighted by co-purchase frequency)
  • node_categories.json: Category mapping from class indices to category names

  • images.tar: Tar archive containing all product images named as {asin}.jpg. Extract with: tar -xf images.tar

  • embeddings/: Precomputed CLIP embeddings:

    • image_embeds.pt: CLIP image embeddings (shape: [num_nodes, 512])
    • title_embeds.pt: CLIP title embeddings (shape: [num_nodes, 512])
    • description_embeds.pt: CLIP description embeddings (shape: [num_nodes, 512])
    • combined_embeds.pt: CLIP combined (title + description) embeddings (shape: [num_nodes, 512])

Supported Tasks

This dataset supports various tasks, including:

  • Product retrieval: Finding similar products based on text, image, or multimodal queries (see the retrieval sketch after this list)
  • Product classification: Categorizing products into hierarchical categories
  • Product recommendation: Leveraging graph structure for recommendation systems
  • Multimodal learning: Training models that integrate text, image, and graph information
  • Structure-aware multimodal modeling: Evaluating models' capabilities to integrate multimodal inputs and exploit relational contexts
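
As an example of the retrieval task, the hedged sketch below ranks products against a query product by cosine similarity of their precomputed combined (title + description) CLIP embeddings; the path and query index are placeholders.

import torch
import torch.nn.functional as F

category_path = "path/to/category"
combined_embeds = torch.load(f"{category_path}/embeddings/combined_embeds.pt", map_location="cpu")

query_idx = 0  # placeholder query product
query = F.normalize(combined_embeds[query_idx : query_idx + 1], dim=-1)
corpus = F.normalize(combined_embeds, dim=-1)

scores = (query @ corpus.T).squeeze(0)    # cosine similarities, [num_nodes]
scores[query_idx] = float("-inf")         # exclude the query itself
top_k = torch.topk(scores, k=10).indices  # 10 nearest products by text embedding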

Citation

If you use this dataset, please cite:

@misc{lu2025slipstructuralawarelanguageimagepretraining,
  title={SLIP: Structural-aware Language-Image Pretraining for Vision-Language Alignment},
  author={Wenbo Lu},
  year={2025},
  eprint={2511.03019},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2511.03019},
}

License

This dataset is released under the CC BY 4.0 license.

Usage

Loading the Dataset

The dataset can be used directly from the HuggingFace Hub. Each category is stored as a separate directory with all necessary files.

Example: Loading a Category

import dgl
import pandas as pd
import torch
from PIL import Image
import json
import tarfile
import os

# Extract images from tar archive (if not already extracted)
category_path = "path/to/category"
if not os.path.exists(f"{category_path}/images"):
    with tarfile.open(f"{category_path}/images.tar", "r") as tar:
        tar.extractall(category_path)

# Load node attributes
node_attributes = pd.read_parquet(f"{category_path}/node_attributes/node_attributes_0.parquet")

# Load graph
graph = dgl.load_graphs(f"{category_path}/dgl_graph.dgl")[0][0]

# Load embeddings
image_embeds = torch.load(f"{category_path}/embeddings/image_embeds.pt")
title_embeds = torch.load(f"{category_path}/embeddings/title_embeds.pt")

# Load category mapping
with open(f"{category_path}/node_categories.json", "r") as f:
    categories = json.load(f)

# Load images
image_path = f"{category_path}/images/{node_attributes.iloc[0]['asin']}.jpg"
image = Image.open(image_path)
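
As a small follow-up to the example above (hedged: the exact layout of node_categories.json, a dict keyed by stringified indices versus a plain list, may differ), a node's class label can be mapped to a human-readable category name:

# Map a node's class label to its category name (continues from the example above).
class_label = node_attributes.iloc[0]["class"]
if isinstance(categories, dict):
    class_name = categories.get(str(class_label), categories.get(class_label))
else:  # list-like mapping indexed by class id
    class_name = categories[int(class_label)]
print(class_label, "->", class_name)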

Acknowledgments

This dataset is based on the Amazon Products dataset introduced by Hou et al. (2024). We thank the original authors for making the data available. We also thank NYU HPC for providing computational resources.
