## 📖 Introduction
This repository hosts the I2E-Datasets, a comprehensive suite of neuromorphic datasets generated using the I2E (Image-to-Event) framework. This work has been accepted for Oral Presentation at AAAI 2026.
I2E bridges the data scarcity gap in Neuromorphic Computing and Spiking Neural Networks (SNNs). By simulating microsaccadic eye movements via highly parallelized convolution, I2E converts static images into high-fidelity event streams in real time (over 300× faster than prior methods).
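To build intuition for what such a converter produces, here is a minimal, hypothetical sketch of the microsaccade idea: translate the image by a few pixels and threshold the brightness differences into ON/OFF polarity channels. The function name, shift pattern, and threshold below are illustrative assumptions, not the actual I2E pipeline, which relies on highly parallelized convolution.

```python
import numpy as np

def microsaccade_events(img, shifts=((0, 1), (1, 0)), threshold=0.1):
    """Illustrative sketch (NOT the I2E implementation): shift a
    grayscale image by small pixel offsets, mimicking microsaccades,
    and emit ON/OFF events where the brightness change exceeds a
    threshold. img is a 2D float array in [0, 1]; the result has
    shape (T, 2, H, W) with one polarity pair per shift."""
    frames = []
    for dy, dx in shifts:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        diff = shifted - img
        on = (diff > threshold).astype(np.float32)    # brightness increased
        off = (diff < -threshold).astype(np.float32)  # brightness decreased
        frames.append(np.stack([on, off]))
    return np.stack(frames)
```

A step edge in the input produces a thin line of events along the edge, which is the qualitative behavior an event camera exhibits under fixational eye movement.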
## 👁️ Visualization
The following comparisons illustrate the high-fidelity conversion from static RGB images to dynamic event streams using I2E.
More visualization comparisons can be found in Visualization.md.
## 📦 Dataset Catalog
We provide a comprehensive collection of standard benchmarks converted into event streams via the I2E algorithm.
### 1. Standard Benchmarks (Classification)
| Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples (Train/Val) |
|---|---|---|---|---|---|
| `I2E-CIFAR10` | CIFAR-10 | 128 × 128 | 0.07 | 5.86% | 50k / 10k |
| `I2E-CIFAR100` | CIFAR-100 | 128 × 128 | 0.07 | 5.76% | 50k / 10k |
| `I2E-ImageNet` | ILSVRC2012 | 224 × 224 | 0.12 | 6.66% | 1.28M / 50k |
### 2. Transfer Learning & Fine-grained
| Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples |
|---|---|---|---|---|---|
| `I2E-Caltech101` | Caltech-101 | 224 × 224 | 0.12 | 6.25% | 8.677k |
| `I2E-Caltech256` | Caltech-256 | 224 × 224 | 0.12 | 6.04% | 30.607k |
| `I2E-Mini-ImageNet` | Mini-ImageNet | 224 × 224 | 0.12 | 6.65% | 60k |
### 3. Small Scale / Toy
| Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples |
|---|---|---|---|---|---|
| `I2E-MNIST` | MNIST | 32 × 32 | 0.10 | 9.56% | 60k / 10k |
| `I2E-FashionMNIST` | Fashion-MNIST | 32 × 32 | 0.15 | 10.76% | 60k / 10k |
> 🔜 **Coming Soon**: Object Detection and Semantic Segmentation datasets.
## 🛠️ Preprocessing Protocol
To ensure reproducibility, we specify the exact data augmentation pipeline applied to the static images before I2E conversion.
The (H, W) in the code below corresponds to the "Resolution" column in the Dataset Catalog above.
```python
import torch
from torchvision.transforms import v2

# Standard pre-processing pipeline used for I2E generation
transform_train = v2.Compose([
    # Ensure 3-channel RGB (crucial for grayscale datasets like MNIST)
    v2.Lambda(lambda x: x.convert('RGB')),
    v2.PILToTensor(),
    v2.Resize((H, W), interpolation=v2.InterpolationMode.BICUBIC),
    v2.ToDtype(torch.float32, scale=True),
])
```
## 💻 Usage
### 🚀 Quick Start
You do not need to download any extra scripts. Just copy the code below. It handles the binary unpacking (converting Parquet bytes to PyTorch Tensors) automatically.
```python
import io
import torch
import numpy as np
from datasets import load_dataset
from torch.utils.data import Dataset, DataLoader


# ==================================================================
# 1. Core Decoding Function (Handles the binary packing)
# ==================================================================
def unpack_event_data(item, use_io=True):
    """
    Decodes the custom binary format:
    Header (8 bytes) -> Shape (T, C, H, W) -> Body (Packed Bits)
    """
    if use_io:
        with io.BytesIO(item['data']) as f:
            raw_data = np.load(f)
    else:
        raw_data = np.load(item)

    # Parse header: the first 8 bytes hold four uint16 shape values
    header_size = 4 * 2
    shape_header = raw_data[:header_size].view(np.uint16)
    original_shape = tuple(shape_header)  # (T, C, H, W)

    # Parse body: bit-unpack, then drop the trailing padding bits
    packed_body = raw_data[header_size:]
    unpacked = np.unpackbits(packed_body)
    num_elements = np.prod(original_shape)
    event_flat = unpacked[:num_elements]

    event_data = event_flat.reshape(original_shape).astype(np.float32).copy()
    return torch.from_numpy(event_data)


# ==================================================================
# 2. Dataset Wrapper
# ==================================================================
class I2E_Dataset(Dataset):
    def __init__(self, cache_dir, config_name, split='train',
                 transform=None, target_transform=None):
        print(f"🚀 Loading {config_name} [{split}] from Hugging Face...")
        self.ds = load_dataset('UESTC-BICS/I2E', config_name, split=split,
                               cache_dir=cache_dir, keep_in_memory=False)
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        item = self.ds[idx]
        event = unpack_event_data(item)
        label = item['label']
        if self.transform:
            event = self.transform(event)
        if self.target_transform:
            label = self.target_transform(label)
        return event, label


# ==================================================================
# 3. Run Example
# ==================================================================
if __name__ == "__main__":
    import os
    os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'  # Use HF mirror server in some regions

    DATASET_NAME = 'I2E-CIFAR10'        # Choose your config: 'I2E-CIFAR10', 'I2E-ImageNet', etc.
    CACHE_DIR = 'Your cache path here'  # e.g., './hf_datasets_cache/'

    train_dataset = I2E_Dataset(CACHE_DIR, DATASET_NAME, split='train')
    val_dataset = I2E_Dataset(CACHE_DIR, DATASET_NAME, split='validation')

    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True,
                              num_workers=32, persistent_workers=True)
    val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False,
                            num_workers=32, persistent_workers=True)

    events, labels = next(iter(train_loader))
    print(f"✅ Loaded Batch Shape: {events.shape}")  # Expect: [32, T, 2, H, W]
    print(f"✅ Labels: {labels}")
```
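To sanity-check the decoding logic offline, the binary layout described above (four uint16 shape values followed by `np.packbits` output, wrapped in a `.npy` buffer) can be exercised with a small round-trip. `pack_event_data` below is a hypothetical writer reconstructed from the loader, not an official utility:

```python
import io
import numpy as np

def pack_event_data(event):
    """Hypothetical inverse of unpack_event_data: store the (T, C, H, W)
    shape as four uint16 values, followed by the bit-packed binary
    event tensor, inside a single uint8 array."""
    header = np.asarray(event.shape, dtype=np.uint16).view(np.uint8)
    body = np.packbits(event.astype(np.uint8).ravel())
    return np.concatenate([header, body])

# Round-trip: pack a random binary tensor, serialize as .npy, decode it back.
rng = np.random.default_rng(0)
event = (rng.random((4, 2, 8, 8)) > 0.5).astype(np.uint8)

buf = io.BytesIO()
np.save(buf, pack_event_data(event))
buf.seek(0)
raw = np.load(buf)

shape = tuple(raw[:8].view(np.uint16))              # (T, C, H, W)
unpacked = np.unpackbits(raw[8:])[:np.prod(shape)]  # drop padding bits
decoded = unpacked.reshape(shape)
assert (decoded == event).all()
```

The slice after `np.unpackbits` matters: `packbits` pads the last byte with zeros, so the decoder must keep only the first `T*C*H*W` bits before reshaping.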
## 📊 Results (SOTA)
Our I2E-pretraining sets new benchmarks for Sim-to-Real transfer on CIFAR10-DVS.
| Dataset | Architecture | Method | Top-1 Acc |
|---|---|---|---|
| CIFAR10-DVS (Real) | MS-ResNet18 | Baseline | 65.6% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-I | 83.1% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-II (Sim-to-Real) | 92.5% |
For full results and model weights, please visit our GitHub Repo.
## 📜 Citation
If you find this work or the models useful, please cite our AAAI 2026 paper:
```bibtex
@inproceedings{ma2026i2e,
  title={I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks},
  author={Ma, Ruichen and Meng, Liwei and Qiao, Guanchao and Ning, Ning and Liu, Yang and Hu, Shaogang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={40},
  number={3},
  pages={1982--1990},
  year={2026}
}
```
## 🖼️ Poster