Brain Tumor Classification using Deep Learning

🎯 Project Overview

A comprehensive deep learning solution for automated brain tumor classification using PyTorch and ResNet18. This full-stack application classifies brain MRI scans into four categories: Glioma, Meningioma, No Tumor, and Pituitary tumors, with explainable AI features for medical professionals.

🔬 Technical Stack

  • Framework: PyTorch with ResNet18 (Transfer Learning)
  • Backend: FastAPI for inference endpoints
  • Frontend: React.js with Material-UI
  • Data Processing: Pandas, NumPy
  • Visualization: Matplotlib, Seaborn
  • Explainability: Grad-CAM for visual explanations
  • Evaluation: Scikit-learn metrics

💼 Business Value

  • Healthcare Impact: Assists radiologists in tumor detection and classification
  • Efficiency: Reduces diagnosis time with automated analysis
  • Interpretability: Provides visual explanations through Grad-CAM
  • Scalability: Full-stack solution ready for clinical deployment

🚀 Key Features

  • Transfer Learning: ResNet18 with progressive fine-tuning (freeze → partial unfreeze → full unfreeze)
  • Explainable AI: Grad-CAM visualizations for interpretable predictions
  • Comprehensive Evaluation: Confusion matrices, classification reports, and performance metrics
  • Full-Stack Solution: FastAPI backend + React frontend
  • Production Ready: Robust data pipeline with proper preprocessing and augmentation
  • GPU Acceleration: Uses CUDA when available, with CPU fallback for deployment

📊 Dataset & Performance

  • Classes: 4 (Glioma, Meningioma, No Tumor, Pituitary)
  • Format: Medical MRI images with grayscale-to-RGB conversion
  • Training Strategy: Progressive unfreezing with cosine annealing learning rate
  • Evaluation: Per-class precision, recall, and F1-score metrics

πŸ—οΈ Architecture Overview

🧠 Deep Learning Model

  • Base Architecture: ResNet18 with pre-trained ImageNet weights
  • Custom Classifier: 4-class output layer for tumor classification
  • Training Strategy: Three-phase progressive unfreezing with cosine annealing LR
  • Data Pipeline: CSV-based dataset loading with robust preprocessing
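
A minimal sketch of what such a wrapper can look like is shown below; the constructor signature and attribute names are illustrative assumptions, not the exact contents of src/models/model.py:

# Minimal sketch of a ResNet18-based classifier (illustrative; the actual
# TumorClassifier in src/models/model.py may differ in details).
import torch.nn as nn
from torchvision import models
from torchvision.models import ResNet18_Weights

class TumorClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Pre-trained ImageNet backbone
        self.backbone = models.resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        # Swap the 1000-class ImageNet head for a 4-class tumor head
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(x)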

πŸ” Explainable AI

  • Grad-CAM Integration: Visual explanations for model predictions
  • Heatmap Generation: Highlights important regions in MRI scans
  • Clinical Interpretability: Helps medical professionals understand model decisions

🌐 Full-Stack Application

  • Backend API: FastAPI with /predict endpoint and image serving
  • Frontend Interface: React.js with Material-UI for intuitive user experience
  • Real-time Processing: Upload MRI scans and receive instant classification results

πŸ“ Project Structure

tumor/
├── src/
│   ├── models/model.py       # TumorClassifier with ResNet18 backbone
│   ├── data/dataset.py       # Data loading and preprocessing utilities
│   ├── train.py              # Three-phase training pipeline
│   ├── evaluate.py           # Comprehensive model evaluation
│   ├── grad_cam.py           # Grad-CAM visualization implementation
│   ├── api/main.py           # FastAPI backend with inference endpoints
│   └── img_data/             # MRI image dataset
├── frontend/                 # React.js web interface with Material-UI
├── outputs/                  # Generated Grad-CAM visualizations
├── grad_cam_outputs/         # Standalone Grad-CAM outputs
├── best_model2.pth           # Trained model weights
├── confusion_matrix1.png     # Performance visualization
└── training_history2.csv     # Training metrics and logs

πŸ› οΈ Installation & Usage

Prerequisites

  • Python 3.10+ with pip
  • Node.js 18+ with npm (for the React frontend)
  • PyTorch with CUDA support (optional, for GPU acceleration)

Quick Start

# Install dependencies
pip install -r requirements.txt

# Run model evaluation
python src/evaluate.py

# Start FastAPI backend
cd src/api && python main.py

# Start React frontend (in separate terminal)
cd frontend && npm install && npm start

API Endpoints

  • POST /predict: Upload MRI image for classification
  • GET /visualization/{filename}: Retrieve Grad-CAM visualizations
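
For example, with the backend running locally, a prediction request can be sent from Python as follows; sample_mri.jpg is a placeholder path, and the field name and response keys match the API reference later in this document:

# Example client call to the /predict endpoint (assumes the API runs on
# http://localhost:8000 and accepts a multipart "file" field).
import requests

with open("sample_mri.jpg", "rb") as f:               # placeholder image path
    resp = requests.post("http://localhost:8000/predict", files={"file": f})

resp.raise_for_status()
result = resp.json()
print(result["predicted_class"], result["confidence"])
print(result["class_probabilities"])                  # per-class probabilities
print(result["visualization_url"])                    # Grad-CAM overlay path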

🎯 Results & Performance

  • Comprehensive Metrics: Accuracy, precision, recall, and F1-score for each tumor type
  • Visual Analytics: Confusion matrix heatmaps for performance analysis
  • Explainable Predictions: Grad-CAM overlays showing model attention regions
  • Production Logs: Detailed training history with loss and accuracy tracking

🌟 Professional Highlights

  • Medical AI Expertise: Specialized in healthcare image analysis
  • Full-Stack Development: End-to-end solution from model to deployment
  • Explainable AI: Critical for medical applications requiring interpretability
  • Production Quality: Robust error handling, logging, and documentation
  • Scalable Architecture: Designed for clinical environment deployment

πŸ‘¨β€πŸ’» Technical Skills Demonstrated

  • Deep Learning with PyTorch
  • Transfer Learning and Fine-tuning
  • Medical Image Processing
  • Explainable AI (Grad-CAM)
  • FastAPI Backend Development
  • React.js Frontend Development
  • Model Evaluation and Metrics
  • Production-Ready Code Architecture

📄 License

This project is released under the MIT License and is shared for portfolio demonstration purposes.

Backend (FastAPI) setup

From the repository root on Windows PowerShell:

python -m venv .venv ; .\.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt
uvicorn src.api.main:app --host 0.0.0.0 --port 8000 --reload

The API will be available at http://localhost:8000

Frontend (React) setup

From the repository root:

cd frontend
npm install
npm start

The app will open at http://localhost:3000 and call the backend at http://localhost:8000


Dataset format

CSV files in src/ (the src/brain_tumor_*.csv files) define the dataset splits. Required columns:

  • filename: image filename under src/img_data/
  • label: one of glioma, meningioma, notumor, pituitary

Images are expected in src/img_data/ (JPEG/PNG). During training, grayscale images are converted to 3-channel RGB and normalized with ImageNet mean and standard deviation.
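
A minimal sketch of this loading and preprocessing step is shown below; the class name and transform details are illustrative assumptions, not the exact contents of src/data/dataset.py:

# Sketch of CSV-driven dataset loading with grayscale-to-RGB conversion and
# ImageNet normalization (illustrative; see src/data/dataset.py for the real code).
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

CLASSES = ["glioma", "meningioma", "notumor", "pituitary"]

TRANSFORM = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

class BrainTumorDataset(Dataset):                       # class name assumed
    def __init__(self, csv_path: str, img_dir: str = "src/img_data"):
        self.df = pd.read_csv(csv_path)                 # expects "filename" and "label"
        self.img_dir = img_dir

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        # Convert grayscale MRI slices to 3-channel RGB for the ResNet backbone
        image = Image.open(f"{self.img_dir}/{row['filename']}").convert("RGB")
        return TRANSFORM(image), CLASSES.index(row["label"])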


Training

Run three-phase fine-tuning with cosine LR scheduling:

# From repo root (venv activated)
python src\train.py

Training details (a sketch of this schedule follows the list):

  • Phase 1 (epochs=5): train classifier only (backbone frozen)
  • Phase 2 (epochs=5): partially unfreeze last layers
  • Phase 3 (epochs=10): train all layers
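
The sketch below condenses this schedule; the optimizer, learning rates, and helper names are illustrative assumptions, while src/train.py (with its logging, validation, and checkpointing) remains the reference implementation:

# Sketch of three-phase progressive unfreezing with cosine annealing
# (illustrative; src/train.py is the reference implementation).
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

def set_trainable(model, train_backbone: bool, last_blocks_only: bool = False):
    """Freeze or unfreeze the ResNet backbone; optionally unfreeze only layer4 + fc."""
    for name, param in model.backbone.named_parameters():
        if not train_backbone:
            param.requires_grad = name.startswith("fc")           # classifier only
        elif last_blocks_only:
            param.requires_grad = name.startswith(("layer4", "fc"))
        else:
            param.requires_grad = True

def run_phase(model, loader, criterion, epochs, lr, device):
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = AdamW(params, lr=lr)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)        # cosine-annealed LR
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()

# Phase 1: classifier only  -> set_trainable(model, False);       run_phase(model, loader, criterion, 5, 1e-3, device)
# Phase 2: last blocks + fc -> set_trainable(model, True, True);  run_phase(model, loader, criterion, 5, 1e-4, device)
# Phase 3: everything       -> set_trainable(model, True);        run_phase(model, loader, criterion, 10, 1e-5, device)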

Artifacts:

  • Metrics logged to training_history2.csv
  • Best checkpoint saved as best_model2.pth

Evaluation

Evaluate on the test split and generate a confusion matrix:

python src\evaluate.py

Outputs:

  • Overall accuracy and per-class precision/recall/F1 in console
  • Confusion matrix saved as confusion_matrix1.png
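
These outputs can be produced with scikit-learn, seaborn, and matplotlib roughly as follows (a sketch; the exact plotting options in src/evaluate.py may differ):

# Sketch: per-class report and confusion-matrix heatmap from collected predictions
# (illustrative; src/evaluate.py is the reference implementation).
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import classification_report, confusion_matrix

CLASSES = ["glioma", "meningioma", "notumor", "pituitary"]

def report_and_plot(y_true, y_pred, out_path="confusion_matrix1.png"):
    # Overall accuracy plus per-class precision/recall/F1 in the console
    print(classification_report(y_true, y_pred, target_names=CLASSES))

    cm = confusion_matrix(y_true, y_pred)
    sns.heatmap(cm, annot=True, fmt="d",
                xticklabels=CLASSES, yticklabels=CLASSES, cmap="Blues")
    plt.xlabel("Predicted")
    plt.ylabel("True")
    plt.tight_layout()
    plt.savefig(out_path)
    plt.close()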

Confusion Matrix

The rendered matrix is saved as confusion_matrix1.png in the repository root.

Note: evaluate.py by default constructs a plain ResNet18 head and loads best_model.pth. If your best training checkpoint is best_model2.pth (which uses the custom classifier), adapt evaluate.py to instantiate TumorClassifier and load that file, as sketched below.
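
A sketch of that adaptation, assuming a TumorClassifier(num_classes=4) constructor and a checkpoint that may or may not be wrapped in a dictionary:

# Sketch: loading the custom-classifier checkpoint instead of the plain head
# (illustrative; verify the constructor and checkpoint keys in your repo).
import torch
from src.models.model import TumorClassifier   # import path per the project structure

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TumorClassifier(num_classes=4)          # constructor signature assumed
state = torch.load("best_model2.pth", map_location=device)
if isinstance(state, dict) and "model_state_dict" in state:
    state = state["model_state_dict"]           # unwrap a wrapped checkpoint
model.load_state_dict(state)
model.to(device).eval()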


API reference

  • POST /predict
    • Body: multipart/form-data with file (image)
    • Response JSON:
      • predicted_class: string
      • confidence: float (0-1)
      • class_probabilities: map of class → probability
      • visualization_url: path to the generated Grad-CAM (e.g., /visualization/<id>.png)
  • GET /visualization/{filename}
    • Serves the Grad-CAM overlay image generated for your upload.
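
A minimal server-side sketch of this contract is given below. It reuses the TumorClassifier and preprocessing from the earlier sketches; the real src/api/main.py also generates and saves the Grad-CAM overlay before returning visualization_url:

# Sketch of the /predict and /visualization endpoints described above
# (illustrative; src/api/main.py is the actual service).
import io, uuid
import torch
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import FileResponse
from PIL import Image
from torchvision import transforms

from src.models.model import TumorClassifier    # import path per the project structure

CLASSES = ["glioma", "meningioma", "notumor", "pituitary"]
TRANSFORM = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

app = FastAPI()
model = TumorClassifier(num_classes=4)          # constructor signature assumed
model.load_state_dict(torch.load("best_model2.pth", map_location="cpu"))
model.eval()

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    with torch.no_grad():
        probs = torch.softmax(model(TRANSFORM(image).unsqueeze(0)), dim=1)[0]
    top = int(probs.argmax())
    viz_name = f"{uuid.uuid4().hex}.png"        # real service writes the Grad-CAM overlay here
    return {
        "predicted_class": CLASSES[top],
        "confidence": float(probs[top]),
        "class_probabilities": {c: float(p) for c, p in zip(CLASSES, probs)},
        "visualization_url": f"/visualization/{viz_name}",
    }

@app.get("/visualization/{filename}")
async def visualization(filename: str):
    return FileResponse(f"outputs/{filename}")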

Grad-CAM visualization

The Grad-CAM pipeline highlights image regions that contribute most to the model’s decision. You can:

  • Use the API’s /predict to automatically generate and retrieve overlays, or
  • Run the standalone script:
python src\grad_cam.py

Overlays are saved under grad_cam_outputs/.
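
For reference, the core Grad-CAM computation for a ResNet18 backbone looks roughly like this (a sketch using forward/backward hooks on the last convolutional block; attribute names assume the wrapper sketched earlier, and src/grad_cam.py remains the actual implementation):

# Sketch of Grad-CAM on the last ResNet18 block (illustrative; see src/grad_cam.py).
import torch
import torch.nn.functional as F

def grad_cam(model, image_tensor, target_class=None):
    """image_tensor: normalized (1, 3, H, W) batch; returns an HxW heatmap in [0, 1]."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, __, grad_output):
        gradients["value"] = grad_output[0]

    layer = model.backbone.layer4               # last convolutional block (name assumed)
    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)

    try:
        logits = model(image_tensor)
        cls = int(logits.argmax()) if target_class is None else target_class
        model.zero_grad()
        logits[0, cls].backward()

        # Weight each feature map by its average gradient, then combine and upsample
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image_tensor.shape[2:], mode="bilinear",
                            align_corners=False)[0, 0]
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        h1.remove()
        h2.remove()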


Frontend UX

Upload a scan, run analysis, and view:

  • Primary predicted class and confidence
  • Per-class probability bars
  • A Grad-CAM overlay beside the original image

The interface uses Material UI components and is localized primarily in French.


Known notes and tips

  • TorchVision weights: new versions prefer the ResNet18_Weights enum (weights=...) rather than pretrained=True. Align your environment or update the code accordingly (see the snippet after this list).
  • Architectural alignment: training uses TumorClassifier while evaluation/API may use a plain resnet18 head. Keep using best_model.pth for those scripts, or refactor them to reuse TumorClassifier and load best_model2.pth.
  • CSV and paths: ensure src/brain_tumor_*.csv and src/img_data/ paths are correct on your machine.
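
For the TorchVision weights note in particular, the migration is a one-argument change (torchvision ≥ 0.13):

# Old API (deprecated in recent torchvision releases):
#   model = torchvision.models.resnet18(pretrained=True)
# New API:
from torchvision.models import resnet18, ResNet18_Weights
model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)   # or ResNet18_Weights.DEFAULT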

Roadmap

  • Unify checkpoint loading across training, evaluation, and API
  • Add test-time augmentation (TTA) and confidence calibration
  • Add dataset sanity checks and a small validation dashboard
  • Containerize (Docker) for reproducible deployment

Project structure (excerpt)

frontend/           # React UI (MUI)
src/
  api/              # FastAPI service
  data/             # Datasets and dataloaders
  models/           # Model definitions
  grad_cam.py       # Grad-CAM utilities
  train.py          # Training script
  evaluate.py       # Evaluation script
best_model*.pth     # Checkpoints
confusion_matrix*.png
