# UnifiedNeuroGen: EEG-to-fMRI Generation Demo (Naturalistic Viewing, Within-Subject)
## Model Description
This repository contains the official demonstration model for the UnifiedNeuroGen project, a pre-trained generative framework for the unified representation of neural signals. This model is trained to translate electroencephalography (EEG) signals into functional magnetic resonance imaging (fMRI) BOLD signals.
### Abstract from the Paper
Multimodal functional neuroimaging enables systematic analysis of brain mechanisms and provides discriminative representations for brain-computer interface (BCI) decoding. However, its acquisition is constrained by high costs and feasibility limitations. Moreover, underrepresentation of specific groups undermines the fairness of BCI decoding models. To address these challenges, we propose a unified representation framework for multimodal functional neuroimaging via generative artificial intelligence (AI). By mapping multimodal functional neuroimaging into a unified representation space, the proposed framework is capable of generating data for acquisition-constrained modalities and underrepresented groups. Experiments show that the framework can generate data consistent with real brain activity patterns, provide insights into brain mechanisms, and improve performance on downstream tasks. More importantly, it can enhance model fairness by augmenting data for underrepresented groups.
This specific model is a within-subject demo version, trained and evaluated on a naturalistic viewing task. It is specialized in generating fMRI signals for a subject whose data (from a different session) was used during training.
## Model Details
- Model Type: Generative, Diffusion Transformer (DiT)
- Architecture: The model leverages pre-trained feature extractors, a hyperdimensional integration strategy for cross-modal alignment, and a DiT-based module for unified representation learning and generation. The core architecture is defined in `models.py` in the official code repository.
- Parameters: 407.27 M
- Model Size: 8.69 GB
- Training Data: The model was trained on an open-access dataset of simultaneous EEG-fMRI recordings from 22 healthy adults performing a naturalistic viewing task. For the within-subject scenario, each participant's Day 1 data served as the training set, and their Day 2 data was used for testing.
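The within-subject split described above can be sketched as follows. This is an illustrative outline only: the `subjects` dictionary, array shapes, and session keys are hypothetical placeholders, not the dataset's actual layout.

```python
import numpy as np

# Illustrative within-subject split: Day 1 data trains, Day 2 data tests.
# The data layout and shapes here are hypothetical placeholders.
rng = np.random.default_rng(0)
subjects = {
    f"sub-{i:02d}": {
        "day1": rng.standard_normal((100, 64)),  # e.g. 100 EEG segments
        "day2": rng.standard_normal((100, 64)),
    }
    for i in range(1, 23)  # 22 participants
}

# Each participant contributes Day 1 to training and Day 2 to testing,
# so the model has seen the subject, but never the test session.
train_set = {sid: days["day1"] for sid, days in subjects.items()}
test_set = {sid: days["day2"] for sid, days in subjects.items()}

print(len(train_set), len(test_set))  # 22 22
```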
## How to Use
This model is a raw PyTorch checkpoint (`.pt`) and is designed to be used with the official code from the UnifiedNeuroGen GitHub repository.
### Step 1: Set up the Environment
First, clone the official repository and install the required dependencies.
```bash
# Clone the repository
git clone https://github.com/xkoo115/UnifiedNeuroGen
cd UnifiedNeuroGen

# Install the required packages
pip install -r requirements.txt
```
### Step 2: Download the Model
You can download the model checkpoint file from this repository's "Files" tab, or programmatically using `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="xkoo115/unifiedneurogen-eeg2fmri-nat-view-within-subject-demo",
    filename="unifiedneurogen-eeg2fmri-nat-view-within-subject-demo.pt",
    local_dir="./checkpoints"  # Download to a local 'checkpoints' folder
)
print(f"Model downloaded to: {model_path}")
```
### Step 3: Run Inference
Use the `sample.py` script from the cloned repository to generate fMRI data from EEG inputs. You will need a test set of EEG encodings; sample data is available in the UnifiedNeuroGen-Demo-Dataset repository.
```bash
python sample.py \
    --model DiT_fMRI \
    --ckpt ./checkpoints/unifiedneurogen-eeg2fmri-nat-view-within-subject-demo.pt \
    --eeg-path /path/to/your/test/eeg/encodings \
    --save-path /path/to/save/generated/data
```
- `--ckpt`: The path to the model checkpoint you downloaded in Step 2.
- `--eeg-path`: The path to the input EEG encodings you wish to translate.
- `--save-path`: The directory where the generated fMRI data will be saved.
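After generation, a common sanity check is to correlate the generated signals against held-out real recordings frame by frame. The sketch below illustrates that idea with synthetic stand-in arrays; the shapes, the stand-in data, and the `framewise_correlation` helper are assumptions for illustration, not part of the repository's evaluation pipeline.

```python
import numpy as np

# Hypothetical post-hoc check: correlate generated fMRI signals with
# held-out real recordings. Shapes and stand-in data are illustrative.
rng = np.random.default_rng(0)
real = rng.standard_normal((10, 4096))                     # stand-in real BOLD (frames x voxels)
generated = real + 0.1 * rng.standard_normal((10, 4096))   # stand-in model output

def framewise_correlation(a, b):
    """Pearson correlation between matching frames (rows) of two arrays."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return num / den

r = framewise_correlation(generated, real)
print(r.shape)  # (10,) -- one correlation per frame
```

For these highly correlated stand-ins the mean correlation is close to 1; real generated-vs-recorded correlations would of course be lower.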
## Citation
If you use this model or the UnifiedNeuroGen framework in your research, please cite the original paper:
```bibtex
@misc{yao2025empoweringfunctionalneuroimagingpretrained,
      title={Empowering Functional Neuroimaging: A Pre-trained Generative Framework for Unified Representation of Neural Signals},
      author={Weiheng Yao and Xuhang Chen and Shuqiang Wang},
      year={2025},
      eprint={2506.02433},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.02433},
}
```