---
license: mit
task_categories:
- image-to-3d
- image-segmentation
- text-retrieval
tags:
- 3d-reconstruction
- semantic-segmentation
- instance-segmentation
- panoptic-segmentation
- referring-segmentation
- scannet
- english
---

This is the official Hugging Face repository for [SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond Feature Alignment](https://huggingface.co/papers/2507.02705).

Project Page: https://insomniaaac.github.io/siu3r/
Code: https://github.com/WU-CVGL/SIU3R

# Pretrained Models for SIU3R
We provide pretrained models for the Panoptic Segmentation task. We train the MASt3R backbone with an adapter on the COCO dataset to initialize SIU3R.

# Preprocessed ScanNet Dataset for SIU3R Training
This dataset is a processed version of the ScanNet dataset, which is available at http://www.scan-net.org/. The dataset is provided by WU-CVGL (https://github.com/WU-CVGL) for research purposes only.

The dataset is split into two parts: train and val. Both splits provide:

- color images;
- depth images in millimeters (convert to meters by dividing by 1000.0);
- ground-truth camera-to-world (c2w) poses in txt files;
- ground-truth camera intrinsics in txt files;
- ground-truth annotations for 2D semantic segmentation, 2D instance segmentation, and 2D panoptic segmentation;
- IoU overlap values between images, stored in an iou.pt file.

The annotations are provided in the formats described below; a loading sketch follows the list.

- 2D semantic segmentation: a single-channel uint8 image with pixel-wise class labels. The classes are defined as follows:
    ```yaml
    0: "unlabeled",
    1: "wall",
    2: "floor",
    3: "cabinet",
    4: "bed",
    5: "chair",
    6: "sofa",
    7: "table",
    8: "door",
    9: "window",
    10: "bookshelf",
    11: "picture",
    12: "counter",
    13: "desk",
    14: "curtain",
    15: "refrigerator",
    16: "shower curtain",
    17: "toilet",
    18: "sink",
    19: "bathtub",
    20: "otherfurniture",
    ```
- 2D instance segmentation: a 3-channel uint8 image, encoded as follows:
    The segment_id is defined as 1000 * semantic_label + instance_label. Note that this semantic_label is NOT the same as in the 2D semantic segmentation annotations. The instance_label is a unique id for each instance within the same semantic class.
    The semantic labels are defined as follows:
    ```yaml
    0: "unlabeled",
    1: "cabinet",
    2: "bed",
    3: "chair",
    4: "sofa",
    5: "table",
    6: "door",
    7: "window",
    8: "bookshelf",
    9: "picture",
    10: "counter",
    11: "desk",
    12: "curtain",
    13: "refrigerator",
    14: "shower curtain",
    15: "toilet",
    16: "sink",
    17: "bathtub",
    18: "otherfurniture",
    ```
    Then, the segment_id is encoded in the 3-channel image as follows:
    ```yaml
    R: segment_id % 256,
    G: segment_id // 256,
    B: segment_id // 256 // 256.
    ```
- 2D panoptic segmentation: a 3-channel uint8 image, encoded exactly like the instance segmentation annotations, except that the semantic labels here are the same as in 2D semantic segmentation.

- The iou.pt file stores the IoU overlap values between images as a Tensor of shape (N, N), where N is the maximum image index in the dataset (note that we remove images whose pose is unavailable or whose semantic annotations are blank). The value iou[i, j] is computed by unprojecting depth[i] into 3D space and then projecting it into image j's camera coordinates; the detailed calculation can be found in the code.
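
As a minimal loading sketch (the file names and directory layout below are hypothetical placeholders; adapt them to the downloaded splits):
```python
import numpy as np
import torch
from PIL import Image

# NOTE: all paths below are hypothetical placeholders.

# Depth is stored in millimeters; divide by 1000.0 to get meters.
depth_m = np.asarray(Image.open("depth/000000.png"), dtype=np.float32) / 1000.0

# Ground-truth c2w pose and camera intrinsics are plain-text matrices.
c2w = np.loadtxt("pose/000000.txt")
K = np.loadtxt("intrinsic/intrinsic.txt")

# Instance/panoptic annotations: segment_id packed into the R, G, B channels
# as described above (R = id % 256, G = id // 256, B = id // 256 // 256).
rgb = np.asarray(Image.open("instance/000000.png"), dtype=np.int64)
segment_id = rgb[..., 0] + rgb[..., 1] * 256 + rgb[..., 2] * 256 * 256

# segment_id = 1000 * semantic_label + instance_label
semantic_label = segment_id // 1000
instance_label = segment_id % 1000

# Pairwise IoU overlap matrix of shape (N, N).
iou = torch.load("iou.pt")
print(iou.shape)
```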

We also provide image pairs for validation and testing, which are stored in the val_pair.json file. The image pairs are defined as follows:
```json
[
    {
        "scan": "scene0011_00",
        "context_ids": [
            1727,
            1744
        ],
        "target_ids": [
            1727,
            1729,
            1732,
            1738,
            1739,
            1744
        ],
        "iou": 0.38273486495018005
    },
    {
        "scan": "scene0011_00",
        "context_ids": [
            255,
            337
        ],
        "target_ids": [
            255,
            267,
            310,
            325,
            331,
            337
        ],
        "iou": 0.47921222448349
    },
    ...
]
```
The "scan" field is the scan name, the "context_ids" field is the image ids of context images, the "target_ids" field is the image ids of target images, and the "iou" field is the iou overlap value between 2 context images. The context images are used as input to the model, and the target images are used as ground truth for evaluation.
For the referring segmentation task, we provide annotations in train_refer_seg_data.json and val_refer_seg_data.json, in the format described below:
```json
{
    "scene0011_00": {
        "2": {
            "object_name": "kitchen_cabinets",
            "instance_label_id": 1,
            "panoptic_label_id": 3,
            "frame_id": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, ...],
            "text": ["there are brwon wooden cabinets. placed on the side of the kitchen.", "there is a set of bottom kitchen cabinets in the room. it has a microwave in the middle of it.", "there is a set of bottom kitchen cabinets in the room. there is a microwave in the middle of them.", "brown kitchen cabinets, the top is decorated with marble layers it is placed on the left in the direction of view. the right are 4 brown chairs.", "the kitchen cabinets are located along the right wall. they are below the counter top. the kitchen cabinets are located to the right of the table and chairs."],
            "text_token": [
                [49406, 997, 631, 711, 1749, 9057, 33083, 269, 9729, 525, 518, 1145, 539, 518, 4485, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 997, 533, 320, 1167, 539, 5931, 4485, 33083, 530, 518, 1530, 269, 585, 791, 320, 24240, 530, 518, 3694, 539, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 997, 533, 320, 1167, 539, 5931, 4485, 33083, 530, 518, 1530, 269, 997, 533, 320, 24240, 530, 518, 3694, 539, 1180, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 2866, 4485, 33083, 267, 518, 1253, 533, 15917, 593, 13071, 15900, 585, 533, 9729, 525, 518, 1823, 530, 518, 5407, 539, 1093, 269, 518, 1155, 631, 275, 2866, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 518, 4485, 33083, 631, 5677, 2528, 518, 1155, 2569, 269, 889, 631, 3788, 518, 7352, 1253, 269, 518, 4485, 33083, 631, 5677, 531, 518, 1155, 539, 518, 2175, 537, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
            ]
        },
        "3": {
            "object_name": "table",
            "instance_label_id": 5,
            "panoptic_label_id": 7,
            "frame_id": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, ...],
            "text": ["this is a long table. there are three brown chairs behind it.", "this is a long table. it is surrounded by chairs.", "there is a large table in the room. it has ten chairs pulled up to it.", "a brown table, placed in the middle of the room, on the left is 4 brown chairs, on the right are 4 brown chairs. the front is a brown door with light shining on.", "this is a brown table. it is surrounded by quite a few matching chairs."],
            "text_token": [
                [49406, 589, 533, 320, 1538, 2175, 269, 997, 631, 2097, 2866, 12033, 2403, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 589, 533, 320, 1538, 2175, 269, 585, 533, 13589, 638, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 997, 533, 320, 3638, 2175, 530, 518, 1530, 269, 585, 791, 2581, 12033, 8525, 705, 531, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 320, 2866, 2175, 267, 9729, 530, 518, 3694, 539, 518, 1530, 267, 525, 518, 1823, 530, 518, 5407, 539, 1093, 269, 518, 1155, 631, 275, 2866, 12033, 269, 518, 2184, 533, 320, 2866, 2489, 593, 1395, 10485, 525, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                [49406, 589, 533, 320, 2866, 2175, 269, 585, 533, 13589, 638, 4135, 320, 1939, 11840, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
            ]
        },
        ...
    }
    ...
}
```
The "scene0011_00" field is the scan name, the "2" field is the object id (also instance_label), the "object_name" field is the object name, the "instance_label_id" field is the semantic label id in instance segmentation task, the "panoptic_label_id" field is the semantic label id in panoptic segmentation task, the "frame_id" field is the frame ids of images which contain this object, the "text" field is the refer segmentation text description, and the "text_token" field is the tokenized refer segmentation text by openclip (https://github.com/mlfoundations/open_clip), note that we use `convnext_large_d_320` model (https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup). The refer segmentation task is to segment the object in the image based on the refer segmentation text. This part of data is obtained from the uniseg3d repository (https://github.com/dk-liang/UniSeg3D), thanks for their great work.

## Sample Usage
To run inference with the SIU3R model using this dataset, you first need to download the pre-trained model checkpoint and place it in the `pretrained_weights` directory (as described in the [GitHub repository](https://github.com/WU-CVGL/SIU3R)).

Then, you can run the inference script:
```bash
python inference.py --image_path1 <path_to_image1> --image_path2 <path_to_image2> --output_path <output_directory> [--cx <cx_value>] [--cy <cy_value>] [--fx <fx_value>] [--fy <fy_value>]
```
An `output.ply` file will be generated in the specified output directory, containing the reconstructed Gaussian splats. The `cx`, `cy`, `fx`, and `fy` parameters are optional and specify the camera intrinsics; if not provided, default values are used.

You can view the results in the online viewer by running:
```bash
python viewer.py --output_ply <output_directory/output.ply>
```

# Citation
If you find our work useful, please consider citing our paper:
```bibtex
@misc{xu2025siu3r,
      title={SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond Feature Alignment}, 
      author={Qi Xu and Dongxu Wei and Lingzhe Zhao and Wenpu Li and Zhangchi Huang and Shunping Ji and Peidong Liu},
      year={2025},
      eprint={2507.02705},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.02705}, 
}
```