Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning
Paper: arXiv:2510.23640
This model was trained with the MuMo (Multi-Modal Molecular) framework, as presented in the paper *Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning*. The official code repository is available at https://github.com/selmiss/MuMo.
MuMo uses a custom loading function rather than the standard `AutoModel` API. First, clone the repository so that the `model.load_model` module is importable, and run the Python code below from the repository root:

```bash
git clone https://github.com/selmiss/MuMo.git
cd MuMo
```

Then load the pretrained model:
```python
from dataclasses import dataclass
from typing import Optional

from transformers import AutoConfig, AutoTokenizer

from model.load_model import load_model  # provided by the cloned MuMo repository

# Load configuration and tokenizer from the Hub
repo = "zihaojing/MuMo-Pretrained"
config = AutoConfig.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Set up model arguments
@dataclass
class ModelArgs:
    model_name_or_path: str = repo
    model_class: str = "MuMoFinetune"  # or "MuMoPretrain" for pretraining
    cache_dir: Optional[str] = None
    model_revision: str = "main"
    use_auth_token: bool = False
    task_type: Optional[str] = None  # e.g., "classification" or "regression" for finetuning

model_args = ModelArgs()

# Load the model
model = load_model(config, tokenizer=tokenizer, model_args=model_args)
```
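Once loaded, a quick smoke test might look like the following. This is a minimal sketch, assuming the model's forward pass accepts standard `input_ids`/`attention_mask` tensors produced by the tokenizer; since MuMo is multimodal, the actual forward method may also expect graph-side inputs, so check the repository for the exact interface.

```python
import torch

# Hypothetical smoke test: tokenize a SMILES string and run one forward pass.
smiles = "CCO"  # ethanol
inputs = tokenizer(smiles, return_tensors="pt")

model.eval()
with torch.no_grad():
    outputs = model(**inputs)  # exact inputs/outputs depend on MuMo's forward signature

print(type(outputs))
```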
Notes:
- Use `model_class="MuMoPretrain"` for pretraining or inference, and `model_class="MuMoFinetune"` for finetuning tasks (see the sketch below).
- Set `task_type` to `"classification"` or `"regression"` when using `MuMoFinetune`.
- `model_name_or_path` accepts both Hub repo IDs (e.g., `"zihaojing/MuMo-Pretrained"`) and local paths (e.g., `"/path/to/model"`).
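Putting those notes together, a finetuning configuration might look like the sketch below. It reuses `ModelArgs`, `config`, and `tokenizer` from the loading example above and shows only the configuration step, not a training loop; treat the field values as illustrative.

```python
# Sketch: configure the finetuning variant of the model.
finetune_args = ModelArgs(
    model_class="MuMoFinetune",   # finetuning model class
    task_type="classification",   # or "regression", depending on the task
)
finetune_model = load_model(config, tokenizer=tokenizer, model_args=finetune_args)
```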
If you use this model or the MuMo framework, please cite our paper:

```bibtex
@inproceedings{jing2025mumo,
  title     = {MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection},
  author    = {Jing, Zihao and Sun, Yan and Li, Yan Yi and Janarthanan, Sugitha and Deng, Alana and Hu, Pingzhao},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}
```