---
license: apache-2.0
tags:
- pathology
- foundation-model
- medical-imaging
- computational-pathology
- histopathology
- vision-transformer
- dinov2
- vision
extra_gated_prompt: >
  The OpenMidnight model weights and associated code are released under the
  Apache License, Version 2.0 (the "License"). You may obtain a copy of the
  License at: http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
  License for the specific language governing permissions and limitations
  under the License.

  Please note that the primary email used to sign up for your Hugging Face
  account must match your institutional email to receive approval. By
  downloading the OpenMidnight model weights, you attest that all information
  (affiliation, research use) is correct and up-to-date. Downloading the model
  requires prior registration on Hugging Face and agreeing to the terms of
  use. By using the OpenMidnight model, you acknowledge that you have read and
  understood these terms.
extra_gated_fields:
  First and Last Name: text
  Institutional Email (must match your primary HuggingFace email): text
  I agree to the license and terms of use described above: checkbox
---

# OpenMidnight
![figure1](https://cdn-uploads.huggingface.co/production/uploads/6057b823861b9d53d9c4b8df/2PJpVl2k51pfjFg2Vs-oS.png)

Overview of the OpenMidnight pathology foundation model

*State-of-the-art pathology foundation model trained on 12K slides*

**Developed by [Sophont](https://sophont.med)**

[Blog Post](https://sophont.med/blog/openmidnight) | [GitHub](https://github.com/MedARC-AI/OpenMidnight) | [Demo](https://huggingface.co/spaces/SophontAI/OpenMidnightDemo)
---

## What is OpenMidnight?

OpenMidnight is our open replication of Kaiko.AI's Midnight, a 1.1-billion-parameter Vision Transformer foundation model for computational pathology. OpenMidnight achieves state-of-the-art performance despite being trained on significantly less data than Kaiko.AI's Midnight or comparable models.

**Key advantages:**

- 🏆 **State-of-the-art performance**: Achieves a 0.775 average score across 14 benchmarks
- ⚡ **Efficient training**: Trained in ~83 hours on 8× H100 GPUs for only **$1,600 USD** (estimated)
- 📊 **Minimal data requirements**: Uses only **12K slides from TCGA for training**
- 🔓 **Fully open source**: Complete model weights, training code, and pipeline publicly available

OpenMidnight is intended for computational pathology tasks including:

- Tumor detection and classification
- Histological grading
- Tissue segmentation
- Margin assessment
- Clinical outcome prediction

---

## Model Description

- **Developed by**: [Sophont](https://sophont.med)
- **Model type**: Finetuned DINOv2 ViT-G for H&E pathology images
- **Training data**: TCGA, 12K H&E WSIs
- **Training repository**: https://github.com/MedARC-AI/OpenMidnight/tree/main

## Usage

### Requirements

```bash
pip install torch torchvision huggingface_hub
```

**Recommended**: Run on GPU with mixed precision for optimal performance.

### Quick Start: Loading the Model

```python
import torch
from huggingface_hub import hf_hub_download

# Downloads to the Hugging Face cache location
download_location = hf_hub_download(repo_id="SophontAI/OpenMidnight", filename="teacher_checkpoint_load.pt")

# Instantiate the DINOv2 ViT-g/14 (with registers) architecture without pretrained weights
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg', pretrained=False)

# Load OpenMidnight weights
checkpoint = torch.load(download_location, map_location="cpu")

# Required because DINOv2 is baseline 392 and we are baseline 224 resolution:
# replace the position embedding before loading the state dict
pos_embed = checkpoint["pos_embed"]
model.pos_embed = torch.nn.parameter.Parameter(pos_embed)
model.load_state_dict(checkpoint)
model.eval()

print(f"Model loaded with {sum(p.numel() for p in model.parameters()):,} parameters")
```

### Extracting Embeddings from Tissue Patches

```python
from PIL import Image
import torchvision.transforms as transforms

# Standard preprocessing for pathology images
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],  # ImageNet normalization
        std=[0.229, 0.224, 0.225]
    )
])

# Load and preprocess an H&E tissue patch
image = Image.open("path/to/tissue_patch.jpg")
input_tensor = transform(image).unsqueeze(0)  # Shape: [1, 3, 224, 224]

# Extract embeddings
with torch.no_grad():
    embeddings = model(input_tensor)  # Shape: [1, 1536]

print(f"Embedding shape: {embeddings.shape}")
print(f"Embedding norm: {embeddings.norm().item():.4f}")
```

---

## Model Performance

OpenMidnight achieves **competitive or superior performance** compared to models trained on 8-30× more data:

### Benchmark Comparison
![performance_barplot](https://cdn-uploads.huggingface.co/production/uploads/6057b823861b9d53d9c4b8df/1iTHzUK75HTxkbJKv411S.png)

Average performance of top pathology foundation models and baselines across computational pathology benchmarks
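As a sanity check on how these numbers roll up: the **Average** column (in the plot above and the detailed table below) is consistent with an unweighted mean over the 13 per-task scores, with HEST itself already being the average of its nine regression tasks. A quick illustration in Python, using OpenMidnight's row from the table below:

```python
# OpenMidnight's 13 per-task scores from the detailed results table below
scores = [0.790, 0.916, 0.661, 0.873, 0.961, 0.817, 0.844,
          0.938, 0.946, 0.652, 0.631, 0.655, 0.390]

# Unweighted mean across tasks, rounded to three decimals
print(round(sum(scores) / len(scores), 3))  # 0.775, matching the Average column
```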

### Detailed Benchmark Results
| Model | #WSIs | PCam (10 shots) | BACH | BRACS | BreakHis | CRC-100K | Gleason | MHIST | PCam | Cam16 (small) | Panda (small) | CoNSeP | MoNuSAC | HEST | Average |
|-------|-------|-----------------|------|-------|----------|----------|---------|-------|------|---------------|---------------|--------|---------|------|---------|
| OpenMidnight (Ours) | 12K | 0.790 | 0.916 | 0.661 | **0.873** | 0.961 | 0.817 | 0.844 | 0.938 | **0.946** | 0.652 | 0.631 | 0.655 | 0.390 | **0.775** |
| Midnight | 92K | **0.900** | 0.906 | 0.642 | 0.850 | 0.964 | 0.809 | 0.825 | 0.951 | 0.831 | 0.633 | **0.663** | **0.707** | 0.384 | 0.774 |
| UNI-2 | 350K | 0.887 | 0.914 | 0.661 | 0.860 | 0.965 | 0.778 | 0.823 | 0.949 | 0.868 | 0.659 | 0.628 | 0.644 | 0.414 | 0.773 |
| UNI-2/392 | 350K | 0.821 | **0.917** | **0.663** | 0.829 | 0.965 | 0.791 | 0.849 | 0.927 | 0.858 | 0.653 | 0.629 | 0.659 | 0.407 | 0.767 |
| Virchow2 | 3.1M | 0.851 | 0.884 | 0.624 | 0.823 | 0.966 | 0.778 | 0.861 | 0.936 | 0.865 | 0.656 | 0.639 | 0.676 | 0.398 | 0.766 |
| Midnight 92k | 92K | 0.876 | 0.896 | 0.616 | 0.789 | 0.966 | **0.820** | 0.811 | 0.950 | 0.861 | 0.625 | 0.629 | 0.656 | 0.392 | 0.761 |
| Midnight 12k | 12K | 0.791 | 0.904 | 0.644 | 0.841 | 0.966 | 0.801 | 0.807 | 0.930 | 0.850 | 0.663 | 0.626 | 0.663 | 0.395 | 0.760 |
| H-Optimus-0 | 500K | 0.824 | 0.757 | 0.615 | 0.808 | 0.956 | 0.771 | 0.842 | 0.942 | 0.838 | **0.670** | 0.644 | 0.685 | **0.415** | 0.751 |
| Kaiko-B8 | 29K | 0.786 | 0.872 | 0.617 | 0.825 | 0.957 | 0.748 | 0.828 | 0.917 | 0.831 | 0.642 | 0.643 | 0.686 | 0.373 | 0.748 |
| TCGA-100M | 12K | 0.774 | 0.864 | 0.615 | 0.779 | **0.967** | 0.799 | 0.792 | 0.927 | 0.852 | 0.667 | 0.622 | 0.656 | 0.396 | 0.747 |
| Prov-GigaPath | 171K | 0.852 | 0.766 | 0.616 | 0.821 | 0.951 | 0.720 | 0.831 | 0.942 | 0.791 | 0.660 | 0.626 | 0.687 | 0.393 | 0.743 |
| Hibou-L | 1.1M | 0.804 | 0.811 | 0.637 | 0.740 | 0.933 | 0.763 | 0.839 | **0.952** | 0.823 | 0.634 | 0.645 | 0.668 | 0.388 | 0.740 |
| UNI | 100K | 0.815 | 0.791 | 0.593 | 0.789 | 0.948 | 0.757 | 0.840 | 0.938 | 0.822 | 0.655 | 0.627 | 0.659 | 0.386 | 0.740 |
| UNI/512 | 100K | 0.737 | 0.877 | 0.612 | 0.732 | 0.950 | 0.754 | 0.814 | 0.883 | 0.814 | 0.654 | 0.621 | 0.658 | 0.364 | 0.728 |
| Phikon | 12K | 0.820 | 0.735 | 0.568 | 0.713 | 0.942 | 0.729 | 0.804 | 0.923 | 0.809 | 0.644 | 0.623 | 0.644 | 0.367 | 0.717 |
| Phikon v2 | 60K | 0.741 | 0.734 | 0.600 | 0.716 | 0.939 | 0.755 | 0.784 | 0.893 | 0.803 | 0.631 | 0.626 | 0.645 | 0.375 | 0.711 |
| DINOv2-giant (pretrained) | 0 | 0.719 | 0.725 | 0.583 | 0.832 | 0.935 | 0.744 | **0.862** | 0.874 | 0.507 | 0.382 | 0.564 | 0.614 | 0.342 | 0.668 |
| DINOv2-giant (random) | 0 | 0.649 | 0.473 | 0.411 | 0.427 | 0.748 | 0.464 | 0.569 | 0.755 | 0.566 | 0.308 | 0.461 | 0.428 | 0.172 | 0.495 |

Performance comparison of OpenMidnight to existing pathology foundation models on the eva+HEST benchmarks. Scores for existing models are taken from the Midnight paper. We report balanced accuracy for the classification tasks, Dice score for semantic segmentation (CoNSeP and MoNuSAC), and the average Pearson correlation for the nine HEST regression tasks. Only performance with the [CLS] token is reported. The best score per dataset is bolded.
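These benchmarks all evaluate the frozen backbone: embeddings are extracted once and lightweight heads are trained on top. As a minimal sketch of that workflow (not the exact eva/HEST pipeline; the batching, mixed precision, and scikit-learn probe are our own assumptions, and `train_patches`/`train_labels` are hypothetical placeholders for your labeled data), a linear probe with balanced accuracy might look like:

```python
import contextlib

import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)  # OpenMidnight model loaded as in the Quick Start above

@torch.no_grad()
def embed_patches(patches, batch_size=64):
    """Embed a list of preprocessed patch tensors ([3, 224, 224]) in batches."""
    feats = []
    for i in range(0, len(patches), batch_size):
        batch = torch.stack(patches[i:i + batch_size]).to(device)
        # Mixed precision on GPU, as recommended in the Usage section
        amp = torch.autocast("cuda", dtype=torch.float16) if device == "cuda" else contextlib.nullcontext()
        with amp:
            feats.append(model(batch).float().cpu())
    return torch.cat(feats).numpy()

# Hypothetical labeled patches, preprocessed with the transform from the Usage section
X_train, X_test = embed_patches(train_patches), embed_patches(test_patches)
probe = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
print("Balanced accuracy:", balanced_accuracy_score(test_labels, probe.predict(X_test)))
```

The eva harness behind the table additionally handles dataset splits, hyperparameter choices, and the segmentation heads for CoNSeP/MoNuSAC; this sketch only illustrates the frozen-backbone probing idea, and it assumes `scikit-learn`, which is not in the Requirements list above.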

---

## Model Details

### Architecture

| Parameter | Value |
|-----------|-------|
| **Base Architecture** | ViT-G/14 |
| **Parameters** | 1.1 billion |
| **Patch Size** | 14×14 pixels |
| **Input Resolution** | 224×224 pixels |
| **Embedding Dimension** | 1536 |
| **Number of Layers** | 40 |
| **Number of Heads** | 24 |
| **Initialization** | Meta's DINOv2 pre-trained weights |

### Training Data

- **Dataset**: TCGA (The Cancer Genome Atlas)
- **Slides**: 12K FFPE H&E-stained whole slide images
- **Cancer Types**: 32 different cancer types
- **Total Patches**: 96 million
- **Unique Patches**: 29 million
- **Stain Type**: Hematoxylin and Eosin (H&E)
- **Preprocessing**: Non-informative patch filtering

### Training Configuration

- **Hardware**: 8× NVIDIA H100 GPUs (80GB each)
- **Batch Size**: 48 per GPU (384 global batch size)
- **Training Steps**: 250,000
- **Optimizer**: AdamW
- **Learning Rate**: 2.0e-4
- **Regularization**: KDE regularizer for training stability
- **Augmentation**: Hematoxylin-Eosin-DAB colorspace transformations
- **Training Time**: ~83 hours wall-clock time (667 GPU-hours)
- **Training Cost**: ~$1,600 USD (at $2.50/H100/hour)

---

## Blog Post

For an in-depth discussion of OpenMidnight, **[read the full blog post](https://sophont.med/blog/openmidnight)**.

---

## Contact

For questions, feedback, or collaboration opportunities:

- **📧 Email**: [contact@sophont.med](mailto:contact@sophont.med)
- **🌐 Website**: [sophont.med](https://sophont.med)
- **🐦 Twitter/X**: [@SophontAI](https://twitter.com/SophontAI)
- **💬 GitHub Issues**: [github.com/MedARC-AI/OpenMidnight/issues](https://github.com/MedARC-AI/OpenMidnight/issues)

We welcome:

- Bug reports and feature requests
- Contributions to the training code
- Benchmark results on new datasets
- Applications of OpenMidnight to novel tasks

---

## Acknowledgments

We thank Mikhail Karasikov for answering questions about Midnight. We thank Nicolas Känzig for answering questions about eva. We thank the members of MedARC and the broader research community for their feedback and support. We are very grateful to FAL AI for granting compute to support this open-source research.

---

## Citation

If you use OpenMidnight in your research, please cite:

```bibtex
@article{kaplan2025openmidnight,
  author = {Kaplan, Daniel and Grandhi, Ratna Sagari and Lane, Connor and Warner, Benjamin and Abraham, Tanishq Mathew and Scotti, Paul S.},
  title  = {How to Train a State-of-the-Art Pathology Foundation Model with \$1.6k},
  year   = {2025},
  url    = {https://sophont.med/blog/openmidnight},
}
```

---

## License

This model is released under the **Apache 2.0 License**.

---

## Terms of Use

**Research Use**: This model is primarily intended for research purposes in computational pathology, medical imaging, and related fields.

**Clinical Use**: This model is not intended for use in the medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.

**Responsible Use**: Users should:

- Validate model performance on their specific use cases
- Be aware of potential biases in the training data (TCGA)
- Consider demographic and geographic limitations
- Respect privacy rights and comply with applicable data protection laws
- Follow applicable regulations and ethical guidelines

---