huggingface_readme.md

# GemmaXRayAnalyzer: Fine-tuned Gemma 3 for X-ray Analysis

This model is a fine-tuned version of [unsloth/gemma-3-4b-it](https://huggingface.co/unsloth/gemma-3-4b-it) specifically optimized for analyzing and describing medical X-ray images. It leverages LoRA (Low-Rank Adaptation) to efficiently adapt the model's capabilities to the medical imaging domain.

## Model Description

GemmaXRayAnalyzer was trained to provide detailed, medically accurate descriptions of X-ray images. It can identify common radiological findings, describe anatomical structures, and suggest potential conditions based on the image characteristics described in the prompt.

### Model Architecture

- **Base Model**: Gemma 3 4B (Instruction Tuned)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Parameters** (see the configuration sketch below):
  - Rank: 16
  - Alpha: 16
  - Dropout: 0
  - Target Modules: Language layers, attention modules, MLP modules

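The adapter settings above roughly correspond to the following `peft` configuration. This is a minimal sketch for orientation only: the `target_modules` names are an assumption (the usual attention and MLP projection layers), since the released adapter was created through Unsloth's own wrappers rather than this exact call.

```python
from peft import LoraConfig

# Hedged sketch: approximate LoRA configuration matching the parameters listed above.
# The target_modules names are an assumption (standard attention/MLP projections).
lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=16,      # LoRA alpha
    lora_dropout=0.0,   # no dropout, as listed above
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention modules
        "gate_proj", "up_proj", "down_proj",      # MLP modules
    ],
)
```
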
### Training Data

This model was fine-tuned on the [unsloth/Radiology_mini](https://huggingface.co/datasets/unsloth/Radiology_mini) dataset, which contains a collection of X-ray images paired with radiological descriptions.

The dataset includes various types of X-ray images with corresponding professional radiological assessments, covering a wide range of anatomical structures and pathological conditions.

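To inspect the training data yourself, the dataset can be loaded with the `datasets` library. A minimal sketch (the split name is an assumption; check the dataset card for the exact splits):

```python
from datasets import load_dataset

# Hedged sketch: load the Radiology_mini dataset used for fine-tuning.
dataset = load_dataset("unsloth/Radiology_mini", split="train")
print(dataset)     # number of rows and column names
print(dataset[0])  # one image/description pair
```
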

## Intended Use

This model is designed for:

- Medical education and training
- Assisting radiologists with preliminary analyses
- Research in medical NLP and diagnostic AI
- Development of medical imaging tools

### Limitations and Biases

- This model is **NOT intended for clinical use** and should not be used for making medical diagnoses without professional oversight.
- The model may have limited knowledge of rare conditions or specialized imaging techniques.
- The model may reflect biases present in its training data, including potential geographic, demographic, or institutional biases in radiological practice.
- This model is not a replacement for professional medical advice or diagnosis.

## How to Use the Model

You can use this model to generate radiological descriptions from textual descriptions of X-ray findings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_id = "YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Create a prompt that describes an X-ray image
prompt = """<start_of_turn>user
You are an expert radiologist. Analyze this chest X-ray showing increased opacity in the right lower lobe.
<end_of_turn>
<start_of_turn>model
"""

# Generate a response (sampling must be enabled for temperature/top_p to take effect)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, top_p=0.9, do_sample=True)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```

### Sample Input/Output

**Input:**
```
You are an expert radiologist. Analyze this chest X-ray showing increased opacity in the right lower lobe.
```

**Output:**
```
The chest X-ray demonstrates an area of increased opacity in the right lower lobe, which appears as a patchy consolidation with air bronchograms visible within it. This finding is most consistent with pneumonia, particularly bacterial pneumonia given the lobar distribution.

The remainder of the lung fields appears clear without evidence of pleural effusion, pneumothorax, or pulmonary edema. The cardiac silhouette is normal in size, and the mediastinum is not widened. No bony abnormalities are evident.

Impression: Right lower lobe consolidation, most likely representing pneumonia. Clinical correlation and follow-up imaging after treatment would be recommended to ensure resolution.
```

## Training Details

- **Training Process**: The model was fine-tuned using the Unsloth library for efficient training (a configuration sketch follows this list)
- **Optimization**: AdamW 8-bit optimizer
- **Learning Rate**: 2e-4 with a linear scheduler
- **Batch Size**: 2 per device with 4 gradient accumulation steps (effective batch size 8)
- **Training Steps**: Approximately 50 steps

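These hyperparameters correspond to a TRL `SFTConfig` along the lines of the sketch below. It mirrors the configuration reproduced in `readme_parameters.md` and is not the complete training script; `output_dir` is an arbitrary placeholder.

```python
from trl import SFTConfig

# Hedged sketch: the hyperparameters listed above expressed as a TRL SFTConfig.
training_args = SFTConfig(
    per_device_train_batch_size=2,   # batch size 2 per device
    gradient_accumulation_steps=4,   # effective batch size 8
    max_steps=50,                    # roughly 50 training steps
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    optim="adamw_8bit",              # AdamW 8-bit optimizer
    output_dir="outputs",            # placeholder output directory
)
```
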
## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{gemmaXrayAnalyzer2025,
  author       = {Your Name},
  title        = {GemmaXRayAnalyzer: Fine-tuned Gemma 3 for X-ray Analysis},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b}}
}
```

### Dataset Citation

This model was fine-tuned on the Radiology_mini dataset. Please also cite:

```bibtex
@misc{unslothRadiologyMini2024,
  author       = {Unsloth Team},
  title        = {Radiology_mini: A Dataset for Medical X-ray Image Analysis},
  year         = {2024},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/unsloth/Radiology_mini}}
}
```

## Ethical Considerations

This model is intended for research, educational purposes, and assisted diagnosis only. It should not be used as the sole basis for clinical decisions. Always consult qualified healthcare professionals for medical advice and diagnosis.

## License

This model is subject to the same license as the base Gemma 3 model. Please refer to the [Gemma 3 license](https://huggingface.co/unsloth/gemma-3-4b-it) for details.

## Acknowledgments

- Google DeepMind for creating the Gemma 3 model
- The Unsloth team for providing efficient fine-tuning tools
- Hugging Face for the infrastructure to share models
- The creators of the Radiology_mini dataset

readme_parameters.md

# GemmaXRayAnalyzer: Model Parameters and Usage Guide

This document provides detailed information about the parameters and recommended usage for the GemmaXRayAnalyzer model.

## Model Loading Parameters

When loading the model, you can use the following parameters:

```python
from unsloth import FastModel  # Unsloth's loader, used to train and load this model

model, tokenizer = FastModel.from_pretrained(
    model_name="YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b",
    max_seq_length=2048,   # Maximum sequence length (adjust based on needs)
    dtype=None,            # Auto-detect best dtype
    load_in_4bit=True,     # Use 4-bit quantization for GPU memory efficiency
    device_map="auto",     # Automatically distribute the model across available devices
)
```

### Key Loading Parameters:

| Parameter | Description | Recommended Value |
|-----------|-------------|-------------------|
| `max_seq_length` | Maximum sequence length | 2048 (or 4096 if memory allows) |
| `load_in_4bit` | Use 4-bit quantization | `True` for GPU, `False` for CPU |
| `device_map` | Device allocation strategy | `"auto"` |
| `dtype` | Data type for model weights | `None` (auto-detect) |

## Generation Parameters

For optimal results when generating text with the model, we recommend the following parameters:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=256,      # Adjust based on desired response length
    temperature=0.7,         # Controls randomness (0.5-0.8 recommended)
    top_p=0.9,               # Nucleus sampling parameter
    top_k=50,                # Top-k sampling parameter
    repetition_penalty=1.1,  # Helps avoid repetitive text
    do_sample=True,          # Enable sampling (vs. greedy decoding)
)
```

### Key Generation Parameters:

| Parameter | Description | Range | Recommended |
|-----------|-------------|-------|-------------|
| `max_new_tokens` | Maximum length of generated response | 64-512 | 256 |
| `temperature` | Randomness of generation | 0.1-1.5 | 0.7 |
| `top_p` | Nucleus sampling (probability cutoff) | 0.5-1.0 | 0.9 |
| `top_k` | Limits vocabulary to top K tokens | 10-100 | 50 |
| `repetition_penalty` | Penalizes repetition | 1.0-1.2 | 1.1 |

### Parameter Selection Guide:

- **For deterministic/consistent responses**: Lower temperature (0.3-0.5); example presets are sketched after this list
- **For creative/varied responses**: Higher temperature (0.7-1.0)
- **For detailed medical analyses**: Use a longer `max_new_tokens` (256-384)
- **For concise summaries**: Use a shorter `max_new_tokens` (128-192)

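To make these recommendations concrete, here is a hedged sketch of two generation presets using `transformers.GenerationConfig`; the preset names and exact values are illustrative, not settings shipped with the model.

```python
from transformers import GenerationConfig

# Hypothetical presets illustrating the guidance above.
conservative_report = GenerationConfig(   # consistent, factual phrasing
    do_sample=True, temperature=0.4, top_p=0.9, top_k=50,
    repetition_penalty=1.1, max_new_tokens=256,
)
exploratory_teaching = GenerationConfig(  # more varied wording, e.g. for teaching examples
    do_sample=True, temperature=0.9, top_p=0.95, top_k=50,
    repetition_penalty=1.1, max_new_tokens=384,
)

# Usage: outputs = model.generate(**inputs, generation_config=conservative_report)
```
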
## Prompt Formatting

The model works best with prompts formatted using Gemma's chat template. Here are example formats:

### Basic Format:

```
<start_of_turn>user
You are an expert radiologist. Analyze this chest X-ray showing [description of X-ray findings].
<end_of_turn>
<start_of_turn>model
```

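Rather than writing the turn markers by hand, the tokenizer can usually build this format for you. A minimal sketch, assuming the tokenizer shipped with the model carries Gemma's chat template:

```python
# Build the basic prompt format with the tokenizer's chat template
# (assumes `tokenizer` was loaded as shown in "Model Loading Parameters").
messages = [
    {
        "role": "user",
        "content": "You are an expert radiologist. Analyze this chest X-ray "
                   "showing increased opacity in the right lower lobe.",
    },
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return a string instead of token IDs
    add_generation_prompt=True,  # append the <start_of_turn>model marker
)
print(prompt)
```
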
## Example Prompt Templates

### Clinical Case Analysis:

```
<start_of_turn>user
You are an expert radiologist. Analyze this chest X-ray of a 65-year-old male with a history of smoking, presenting with shortness of breath and cough for 3 days. The X-ray shows increased opacity in the right lower lobe with some air bronchograms visible.
<end_of_turn>
<start_of_turn>model
```

### Educational Explanation:

```
<start_of_turn>user
You are an expert radiologist teaching medical students. Explain what characteristics differentiate bacterial pneumonia from viral pneumonia on a chest X-ray. Describe the typical presentation of each.
<end_of_turn>
<start_of_turn>model
```

### Comparative Analysis:

```
<start_of_turn>user
You are an expert radiologist. Compare the findings in a chest X-ray showing pulmonary edema versus one showing pneumonia. What are the key differentiating features?
<end_of_turn>
<start_of_turn>model
```

## Memory Usage Optimization

When using the model on resource-constrained systems, consider these optimizations (a combined loading sketch follows the list):

1. **Reduce Sequence Length**: Lower `max_seq_length` to reduce memory usage
2. **Use 4-bit Quantization**: Enable `load_in_4bit=True` for significant memory savings
3. **CPU Offloading**: Use `device_map="auto"` to offload parts of the model to the CPU when needed
4. **Batch Size**: Use smaller batch sizes for inference (`batch_size=1`)

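Putting these options together, a low-memory loading call might look like the sketch below (the reduced `max_seq_length` is an example value, not a requirement):

```python
from unsloth import FastModel

# Hedged sketch: low-memory loading that combines the optimizations above.
model, tokenizer = FastModel.from_pretrained(
    model_name="YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b",
    max_seq_length=1024,   # smaller context window to save memory (example value)
    load_in_4bit=True,     # 4-bit weights
    device_map="auto",     # spill to CPU if the GPU runs out of memory
)
# For inference, process one prompt at a time (batch size 1).
```
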
## Performance Benchmarks

The model has been tested on various GPU configurations with the following performance characteristics:

| Hardware | Loading Time | Inference Time (256 tokens) | Memory Usage |
|----------|--------------|-----------------------------|--------------|
| NVIDIA A100 | ~10 s | ~1.5 s | ~8 GB |
| NVIDIA T4 | ~15 s | ~3.0 s | ~6 GB |
| CPU (16 cores) | ~40 s | ~15.0 s | ~12 GB |

## Command-Line Usage (10_load_finetuned.py)

The included utility script supports these parameters:

```bash
# Interactive mode
python 10_load_finetuned.py --interactive --temperature 0.7

# Single prompt
python 10_load_finetuned.py --prompt "Analyze this chest X-ray showing a small pleural effusion on the right side."

# Load from Hugging Face Hub
python 10_load_finetuned.py --hub YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b --max-tokens 384

# With system prompt
python 10_load_finetuned.py --interactive --system-prompt "You are an expert pediatric radiologist specializing in chest X-rays."
```

## API Usage (11_use_model_api.py)

Parameters for using the model via the Hugging Face Inference API:

```bash
# Interactive API session
python 11_use_model_api.py --interactive --model-id YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b

# Single prompt via API
python 11_use_model_api.py --prompt "Analyze this chest X-ray showing cardiomegaly." --temperature 0.5
```

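If you prefer to call the Inference API directly from Python rather than through the helper script, a minimal sketch with `huggingface_hub`'s `InferenceClient` is shown below; it assumes the checkpoint is actually served by an inference endpoint, which may not be the case for every LoRA upload.

```python
from huggingface_hub import InferenceClient

# Hedged sketch: direct Inference API call, bypassing 11_use_model_api.py.
client = InferenceClient(model="YOUR_USERNAME/GemmaXRayAnalyzer_Finetune_Gemma_3_4b")

prompt = (
    "<start_of_turn>user\n"
    "Analyze this chest X-ray showing cardiomegaly.\n"
    "<end_of_turn>\n<start_of_turn>model\n"
)
print(client.text_generation(prompt, max_new_tokens=256, temperature=0.5))
```
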
## Fine-tuning Parameters

If you wish to further fine-tune the model, these parameters were used in the original training and are recommended:

```python
training_args = SFTConfig(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    max_steps=50,
    learning_rate=2e-4,
    fp16=not is_bf16_supported(),  # fall back to fp16 only when bf16 is unavailable
    bf16=is_bf16_supported(),      # use bf16 if the GPU supports it
    logging_steps=5,
    eval_strategy="steps",
    eval_steps=10,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_steps=5,
)
```

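For completeness, here is a minimal sketch of how a config like this is typically handed to TRL's `SFTTrainer`. The `train_ds`/`eval_ds` variables are assumptions (prepared Radiology_mini splits already formatted with the chat template); the actual training script is not reproduced here.

```python
from trl import SFTTrainer

# Hedged sketch: wiring the config above into a trainer.
# `model` and `tokenizer` come from FastModel.from_pretrained();
# `train_ds` / `eval_ds` are assumed to be prepared Radiology_mini splits.
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```
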
## Limitations and Best Practices

- **Not for Clinical Use**: This model is for research and educational purposes only
- **Verify Outputs**: Always have a medical professional verify any generated analyses
- **Complex Cases**: The model may struggle with rare or complex pathologies
- **Prompt Design**: Be specific in your prompts about the features visible in the X-ray
- **Temperature Tuning**: Lower temperatures (0.3-0.5) provide more conservative and factual responses

## Ethical Considerations

When using this model, please adhere to these ethical guidelines:

1. Do not present the model's outputs as medical advice
2. Clearly distinguish between AI-generated content and human expert analyses
3. Respect patient privacy and confidentiality
4. Consider potential biases in the model's training data
5. Use the model as an assistive tool, not a replacement for professional expertise

### With System Prompt:

In addition to the basic format shown under "Prompt Formatting", a system-style preamble can be placed in its own turn before the user turn:

```
<start_of_turn>system
You are an expert radiologist with 20 years of experience in chest radiology. Provide detailed, accurate analyses of X-ray images.
<end_of_turn>
<start_of_turn>user
Analyze this chest X-ray showing [description of X-ray findings].
<end_of_turn>
<start_of_turn>model
```
|