---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- medical
license: apache-2.0
language:
- en
metrics:
- Dice
- Jaccard
- 95HD
- ASD
pipeline_tag: image-segmentation
library_name: pytorch
---

This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration (a minimal loading sketch follows the steps below):

- **Hugging Face Space** (available now): https://huggingface.co/spaces/Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image
- Code: https://github.com/Tournesol-Saturday/RAIL
- Paper: [RAIL: Region-Aware Instructive Learning for Semi-Supervised Tooth Segmentation in CBCT](https://huggingface.co/papers/2505.03538)

**Steps to use our model in this repository:**

1. Clone this repository:

```bash
git clone https://huggingface.co/Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image
cd railNet-tooth-segmentation-in-CBCT-image
```

2. Create a virtual environment, install the dependencies, and launch the demo:

```bash
conda create -n railnet python=3.10
conda activate railnet
pip install -r requirements.txt
python gradio_app.py
```

3. In the current working directory, find the `example_input_file` folder. Select any `.h5` file in this folder and drag it into the `Gradio` interface to run model inference (see the input-inspection sketch below if you want to examine these files first).

4. After roughly 1 to 2.5 minutes, inference completes and the interface shows the segmentation result together with a 3D rendering. Both the original image and the segmentation result are saved in `.nii.gz` format in the `output` folder of the same directory.

5. Because `Gradio` downsamples the 3D segmentation visualization by a factor of 2, the on-screen rendering is coarser than the actual result. Drag the `.nii.gz` files from the `output` folder into the `ITK-SNAP` software to view the segmentation at full resolution (or inspect the files programmatically with the output sketch below).
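
**Loading the model programmatically (optional):** because the weights were pushed with `PyTorchModelHubMixin`, they can also be pulled down in code. The sketch below is a minimal illustration, not the repository's actual code: the class name `RAILNet` and its placeholder layer are assumptions, and `from_pretrained` only succeeds when the class definition matches the architecture stored in this repo, so in practice use the network definition from the GitHub code.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical stand-in: the real 3D segmentation network lives in the
# RAIL GitHub repository. Any nn.Module that also inherits
# PyTorchModelHubMixin gains save_pretrained() / from_pretrained().
class RAILNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Placeholder layer only; replace with the architecture from the code repo.
        self.net = nn.Conv3d(in_channels, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

# Downloads the config and weights from the Hub; this succeeds only when
# the class definition above matches the stored architecture.
model = RAILNet.from_pretrained(
    "Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image"
)
model.eval()
```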
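
**Inspecting an example `.h5` input (optional):** if you want to examine the demo inputs outside `Gradio`, the sketch below may help. The file name is a placeholder, and the dataset key `"image"` is an assumption (a common convention in CBCT segmentation pipelines), so list the file's keys first and adjust.

```python
import h5py
import numpy as np

# Placeholder file name -- pick any .h5 file from example_input_file/.
path = "example_input_file/case.h5"

with h5py.File(path, "r") as f:
    print("datasets in file:", list(f.keys()))  # check the real key names
    # "image" is an assumed key; substitute whatever the listing shows.
    volume = np.asarray(f["image"])

print("volume shape:", volume.shape, "dtype:", volume.dtype)
```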
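
**Inspecting the saved outputs (optional):** besides `ITK-SNAP`, the full-resolution `.nii.gz` volumes written to `output` in step 4 can be read with `nibabel`. The file names below are placeholders for whatever the app actually writes.

```python
import nibabel as nib
import numpy as np

# Placeholder names -- use the files the app wrote to output/.
image = nib.load("output/image.nii.gz")
seg = nib.load("output/segmentation.nii.gz")

seg_data = seg.get_fdata()
print("image shape:", image.shape)
print("label values in segmentation:", np.unique(seg_data.astype(int)))
```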