---
title: MedGemma Symptom Analyzer
emoji: 🏥
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
license: apache-2.0
---

# MedGemma Symptom Analyzer

A modern medical AI application using Google's MedGemma model via the HuggingFace Inference API for symptom analysis and medical consultation.

## 🏥 Features

- **AI-Powered Symptom Analysis**: Uses Google's MedGemma-4B model for medical insights
- **Comprehensive Medical Reports**: Provides differential diagnoses, next steps, and red flags
- **Interactive Web Interface**: Built with Gradio for easy use
- **Demo Mode**: Fallback functionality when the API is unavailable
- **Medical Safety**: Includes appropriate disclaimers and safety guidance

## 🚀 Quick Start

### 1. Installation

```bash
# Clone the repository
git clone <repository-url>
cd medgemma-symptomps

# Install dependencies
pip install -r requirements.txt
```

### 2. HuggingFace Access Setup

The app uses Google's MedGemma model, which requires special access:

1. **Get a HuggingFace Token**:
   - Visit [HuggingFace Settings](https://huggingface.co/settings/tokens)
   - Create a new token with `read` permissions
2. **Request MedGemma Access**:
   - Visit [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it)
   - Click "Request access to this model"
   - Wait for approval from Google (may take some time)
3. **Set the Environment Variable**:
   ```bash
   export HF_TOKEN="your_huggingface_token_here"
   ```

### 3. Run the Application

```bash
python3 app.py
```

The app will start on `http://localhost:7860` (or the next available port).

## 🔧 Configuration

### Environment Variables

- `HF_TOKEN`: Your HuggingFace API token (required for model access)
- `FORCE_CPU`: Set to `true` to force CPU usage (not needed for the API version)

### Model Access Status

The app handles different access scenarios:

- ✅ **Full Access**: MedGemma model available via the API
- ⚠️ **Pending Access**: Waiting for model approval (uses demo mode)
- ❌ **No Access**: Falls back to demo responses

## 🧪 Testing

Test the API connection:

```bash
python3 test_api.py
```

This will verify:

- HuggingFace API connectivity
- Token validity
- Model access permissions

An illustrative sketch of these checks appears below, just before the Architecture section.

## 📋 Usage

### Web Interface

1. Open the app in your browser
2. Enter the patient's symptoms in the text area
3. Adjust the creativity slider if desired
4. Click "Analyze Symptoms"
5. Review the comprehensive medical analysis

### Example Symptoms

Try these example symptom descriptions:

- **Flu-like**: "Fever, headache, body aches, and fatigue for 3 days"
- **Chest pain**: "Sharp chest pain worsening with breathing, shortness of breath"
- **Digestive**: "Abdominal pain, nausea, and diarrhea after eating"

## 🔒 Medical Disclaimer

**⚠️ IMPORTANT**: This tool is for educational purposes only. It should never replace professional medical advice, diagnosis, or treatment. Always consult qualified healthcare professionals for medical concerns.
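## 🧪 API Access Check (Sketch)

The Testing section above refers to `test_api.py` without reproducing it. The sketch below shows one plausible way to perform the same three checks (API connectivity, token validity, model access) with `huggingface_hub`; the function name `check_access` and the exact error handling are illustrative assumptions, not the actual contents of `test_api.py`.

```python
import os

from huggingface_hub import HfApi
from huggingface_hub.utils import HfHubHTTPError


def check_access(model_id: str = "google/medgemma-4b-it") -> None:
    """Probe token validity and gated-model access (illustrative only)."""
    token = os.getenv("HF_TOKEN")
    if not token:
        print("HF_TOKEN is not set; the app would fall back to demo mode.")
        return

    api = HfApi(token=token)

    # Token validity: whoami() raises an HTTP error for invalid tokens.
    try:
        user = api.whoami()
        print(f"Token OK, authenticated as: {user.get('name', 'unknown')}")
    except HfHubHTTPError as err:
        print(f"Token check failed: {err}")
        return

    # Model access: gated repos return 401/403-style errors until Google
    # approves the access request made on the model page.
    try:
        info = api.model_info(model_id)
        print(f"Model access OK: {info.id}")
    except HfHubHTTPError as err:
        print(f"No access to {model_id} yet: {err}")


if __name__ == "__main__":
    check_access()
```

Running this with a valid token but a still-pending access request should hit the "No access" branch, matching the ⚠️ Pending Access state described under Configuration.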
## ๐Ÿ—๏ธ Architecture ### API-Based Design The app now uses HuggingFace Inference API instead of local model loading: - **Advantages**: - No local GPU/CPU requirements - Faster startup time - Always up-to-date model - Reduced memory usage - **Requirements**: - Internet connection - Valid HuggingFace token - Model access approval ### File Structure ``` medgemma-symptomps/ โ”œโ”€โ”€ app.py # Main Gradio application โ”œโ”€โ”€ test_api.py # API connection test script โ”œโ”€โ”€ requirements.txt # Python dependencies โ”œโ”€โ”€ README.md # This file โ””โ”€โ”€ medgemma_app.log # Application logs ``` ## ๐Ÿ› ๏ธ Development ### Key Components 1. **MedGemmaSymptomAnalyzer**: Main class handling API connections 2. **Gradio Interface**: Web UI with symptom input and analysis display 3. **Demo Responses**: Fallback functionality for offline use ### API Integration ```python from huggingface_hub import InferenceClient client = InferenceClient(token=hf_token) response = client.text_generation( prompt=medical_prompt, model="google/medgemma-4b-it", max_new_tokens=400, temperature=0.7 ) ``` ## ๐Ÿ” Troubleshooting ### Common Issues 1. **404 Model Not Found**: - Ensure you have requested access to MedGemma - Wait for Google's approval - Verify your HuggingFace token is valid 2. **Demo Mode Only**: - Check your internet connection - Verify HF_TOKEN environment variable - Confirm model access approval status 3. **Slow Responses**: - API responses may take 10-30 seconds - Consider adjusting max_tokens parameter ### Getting Help - Check the application logs: `tail -f medgemma_app.log` - Test API connection: `python3 test_api.py` - Verify model access: Visit the HuggingFace model page ## ๐Ÿ“š Resources - [MedGemma Model Card](https://huggingface.co/google/medgemma-4b-it) - [HuggingFace Inference API](https://huggingface.co/docs/api-inference/index) - [Gradio Documentation](https://gradio.app/docs/) ## ๐Ÿ“„ License This project uses the MedGemma model which has its own licensing terms. Please review the [model license](https://huggingface.co/google/medgemma-4b-it) before use. --- **Remember**: Always prioritize patient safety and consult healthcare professionals for medical decisions.