Spaces: Build error
Commit 059e3b8 · Parent(s): aff0cf3
Update pages/2_SmoothGrad.py

pages/2_SmoothGrad.py CHANGED (+19 -3)
@@ -15,8 +15,8 @@ st.set_page_config(layout='wide')
 BACKGROUND_COLOR = '#bcd0e7'
 
 
-st.title('Feature attribution with SmoothGrad')
-st.write("""> **Which features are responsible for the current prediction?**
+st.title('Feature attribution visualization with SmoothGrad')
+st.write("""> **Which features are responsible for the current prediction of ConvNeXt?**
 
 In machine learning, it is helpful to identify the significant features of the input (e.g., pixels for images) that affect the model's prediction.
 If the model makes an incorrect prediction, we might want to determine which features contributed to the mistake.
@@ -26,9 +26,25 @@ The brightness of each pixel in the mask represents the importance of that feature
 There are various methods to calculate an image sensitivity mask for a specific prediction.
 One simple way is to use the gradient of a class prediction neuron with respect to the input pixels, indicating how the prediction is affected by small pixel changes.
 However, this method usually produces a noisy mask.
-To reduce the noise, the [SmoothGrad](https://arxiv.org/abs/1706.03825)
+To reduce the noise, the SmoothGrad technique described in [SmoothGrad: Removing noise by adding noise](https://arxiv.org/abs/1706.03825) by Smilkov _et al._ is used,
+which adds Gaussian noise to multiple copies of the image and averages the resulting gradients.
 """)
 
+instruction_text = """Users need to input the model(s), the type of image set and the image set setting to use this functionality.
+1. Choose model: Users can choose one or more models for comparison.
+There are 3 models supported: [ConvNeXt](https://huggingface.co/facebook/convnext-tiny-224),
+[ResNet](https://huggingface.co/microsoft/resnet-50) and [MobileNet](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/).
+These 3 models have a similar number of parameters.
+2. Choose type of Image set: There are 2 types of Image set: _User-defined set_ and _Random set_.
+3. Image set setting: If users choose _User-defined set_ in Image set,
+they need to enter a list of image IDs separated by commas (,). For example, `0,1,4,7` is a valid input.
+Check the page **ImageNet1k** (in the sidebar) to see all the Image IDs.
+If users choose _Random set_ in Image set, they just need to choose the number of random images to display here.
+"""
+with st.expander("See more instructions", expanded=False):
+    st.write(instruction_text)
+
+
 imagenet_df = pd.read_csv('./data/ImageNet_metadata.csv')
 
 # --------------------------- LOAD function -----------------------------
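The lines added at 29–30 summarize the SmoothGrad idea: average the input gradients over several noisy copies of the image. Below is a minimal sketch of that averaging step, assuming a generic PyTorch classifier that returns class logits; `model`, `image`, `target_class`, `n_samples` and `noise_level` are illustrative placeholders, not objects defined in `pages/2_SmoothGrad.py`.

```python
# Minimal SmoothGrad sketch (PyTorch), independent of the app's own code.
# `model` is assumed to be any torch.nn.Module that maps a (1, C, H, W) image
# tensor to class logits; all argument names are placeholders, not repo code.
import torch

def smoothgrad_mask(model, image, target_class, n_samples=25, noise_level=0.15):
    """Average input gradients over noisy copies of the image."""
    model.eval()
    # Noise scale defined relative to the image's dynamic range.
    sigma = noise_level * (image.max() - image.min())
    grad_sum = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy)[0, target_class]   # class prediction neuron
        score.backward()                        # gradient w.r.t. the input pixels
        grad_sum += noisy.grad.detach()
    # The magnitude of the averaged gradient, collapsed over colour channels,
    # serves as the smoothed (1, H, W) sensitivity mask.
    return (grad_sum / n_samples).abs().max(dim=1).values
```

The paper reports that a noise level of roughly 10–20% of the image's dynamic range and a few dozen samples give stable masks; the exact settings used by this page are not shown in the diff.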
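The `instruction_text` added at lines 33–43 describes a three-step input flow: pick one or more models, pick the image-set type, then configure that set. The sketch below shows how such controls could be wired up with standard Streamlit widgets; the widget labels, the `CHECKPOINTS` mapping, the variable names and the numeric limits are assumptions for illustration and are not taken from the actual page.

```python
# Illustrative sketch of the input flow described in instruction_text.
# Widget labels, variable names and limits are assumptions; the real page may differ.
import streamlit as st

# Checkpoints referenced by the links in the instruction text.
CHECKPOINTS = {
    'ConvNeXt': 'facebook/convnext-tiny-224',   # Hugging Face hub
    'ResNet': 'microsoft/resnet-50',            # Hugging Face hub
    'MobileNet': 'mobilenet_v2',                # TorchVision hub
}

# 1. Choose model: one or more models for comparison.
models = st.multiselect('Choose model', list(CHECKPOINTS), default=['ConvNeXt'])

# 2. Choose type of Image set.
set_type = st.radio('Image set', ['User-defined set', 'Random set'])

# 3. Image set setting.
if set_type == 'User-defined set':
    # Comma-separated image IDs, e.g. "0,1,4,7"; IDs are listed on the ImageNet1k page.
    raw_ids = st.text_input('Image IDs (comma-separated)', value='0,1,4,7')
    image_ids = [int(s) for s in raw_ids.split(',') if s.strip().isdigit()]
else:
    # For a random set, only the number of images to display is needed.
    n_images = st.number_input('Number of random images', min_value=1, max_value=50, value=5)
```

The checkpoint identifiers themselves come from the links in the instruction text (facebook/convnext-tiny-224, microsoft/resnet-50 and the TorchVision mobilenet_v2 hub model).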