---
license: cc-by-4.0
task_categories:
- image-to-3d
- text-to-3d
tags:
- 3d-reconstruction
- gaussian-splatting
- video-diffusion
- synthetic-data
---

# Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

**[Paper](https://arxiv.org/abs/2509.19296), [Project Page](https://research.nvidia.com/labs/toronto-ai/lyra/), [Code](https://github.com/nv-tlabs/lyra)**

[Sherwin Bahmani](https://sherwinbahmani.github.io/), [Tianchang Shen](https://www.cs.toronto.edu/~shenti11/), [Jiawei Ren](https://jiawei-ren.github.io/), [Jiahui Huang](https://huangjh-pub.github.io/), [Yifeng Jiang](https://cs.stanford.edu/~yifengj/), [Haithem Turki](https://haithemturki.com/), [Andrea Tagliasacchi](https://theialab.ca/), [David B. Lindell](https://davidlindell.com/), [Zan Gojcic](https://zgojcic.github.io/), [Sanja Fidler](https://www.cs.utoronto.ca/~fidler/), [Huan Ling](https://www.cs.utoronto.ca/~linghuan/), [Jun Gao](https://www.cs.utoronto.ca/~jungao/), [Xuanchi Ren](https://xuanchiren.com/)
## Dataset Description:

The PhysicalAI-SpatialIntelligence-Lyra-SDG Dataset is a multi-view 3D and 4D dataset generated using [GEN3C](https://github.com/nv-tlabs/GEN3C). The 3D reconstruction setup uses 59,031 images, while the 4D setup uses 7,378 videos. All data are generated from diverse text prompts spanning a wide range of scenarios, including indoor and outdoor environments, humans, animals, and both realistic and imaginative content. We synthesize 6 camera trajectories for each image (3D) or video (4D), yielding 354,186 videos for the 3D setup and 44,268 videos for the 4D setup. Each video comes with RGB frames, camera poses, and depth maps.

This dataset is ready for commercial use.

## Dataset Owner(s):

NVIDIA Corporation

## Dataset Creation Date:

2025/09/23

## License/Terms of Use:

This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).

## Intended Usage:

Researchers and academics working on spatial intelligence problems can use this dataset to train AI models for multi-view video generation or reconstruction.

## Dataset Characterization:

**Data Collection Method**
* Synthetic

**Labeling Method**
* Synthetic

## Dataset Format:

RGB videos in .mp4, camera poses in .npz, and depth maps in .zip format.

## Dataset Quantification:

The 3D reconstruction setup has 59,031 multi-view examples, while the 4D setup has 7,378 multi-view examples. Each multi-view example has 6 views, and each view provides an RGB video along with its camera poses and depth.

| Field | Format |
|-------------|--------|
| Video | mp4 |
| Camera pose | .npz |
| Depth | .zip |

Storage: 25 TB

A minimal sketch for inspecting these files is included at the end of the Sample Usage section below.

## Sample Usage

Lyra supports both images and videos as input for 3D Gaussian generation. First, you need to download the demo samples:

```bash
# Download test samples from Hugging Face
huggingface-cli download nvidia/Lyra-Testing-Example --repo-type dataset --local-dir assets/demo
```

### Example 1: Single Image to 3D Gaussians Generation

1) Generate multi-view video latents from the input image using `scripts/bash/static_sdg.sh`:

```bash
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_single_image_sdg.py \
    --checkpoint_dir checkpoints \
    --num_gpus 1 \
    --input_image_path assets/demo/static/diffusion_input/images/00172.png \
    --video_save_folder assets/demo/static/diffusion_output_generated \
    --foreground_masking \
    --multi_trajectory
```

2) Reconstruct multi-view video latents with the 3DGS decoder:

```bash
accelerate launch sample.py --config configs/demo/lyra_static.yaml
```

### Example 2: Single Video to Dynamic 3D Gaussians Generation

1) Generate multi-view video latents from the input video and ViPE-estimated depth using `scripts/bash/dynamic_sdg.sh`:

```bash
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_dynamic_sdg.py \
    --checkpoint_dir checkpoints \
    --vipe_path assets/demo/dynamic/diffusion_input/rgb/6a71ee0422ff4222884f1b2a3cba6820.mp4 \
    --video_save_folder assets/demo/dynamic/diffusion_output \
    --disable_prompt_upsampler \
    --num_gpus 1 \
    --foreground_masking \
    --multi_trajectory
```

2) Reconstruct multi-view video latents with the 3DGS decoder:

```bash
accelerate launch sample.py --config configs/demo/lyra_dynamic.yaml
```

### Training

To train, you need to download the full training data (this dataset) from Hugging Face:

```bash
# Download our training datasets from Hugging Face and untar them into a static/dynamic folder
huggingface-cli download nvidia/PhysicalAI-SpatialIntelligence-Lyra-SDG --repo-type dataset --local-dir lyra_dataset/tar
```

Then you can use the provided progressive training script (as detailed in the GitHub repository):

```bash
bash train.sh
```

For more detailed usage instructions, including how to test on your own videos or perform training, please refer to the [Lyra GitHub repository](https://github.com/nv-tlabs/lyra).
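### Inspecting the Data

Before training, it can help to check how a downloaded sample is laid out on disk. The following is a minimal sketch, not part of the official tooling: the file paths are hypothetical placeholders, and the array keys inside the `.npz` files and the entry names inside the depth `.zip` archives should be checked against the files you actually extract.

```python
# Minimal inspection sketch. The paths below are illustrative placeholders;
# point them at the camera-pose and depth files extracted from the dataset.
import zipfile
import numpy as np

pose_path = "lyra_dataset/static/example/camera_poses.npz"  # hypothetical path
depth_path = "lyra_dataset/static/example/depth.zip"        # hypothetical path

# Camera poses ship as .npz archives; list the stored arrays and their shapes
# rather than assuming specific key names.
with np.load(pose_path) as poses:
    for key in poses.files:
        print(f"{key}: shape={poses[key].shape}, dtype={poses[key].dtype}")

# Depth ships as a .zip archive; enumerate its entries to see how the
# per-frame depth maps are organized before loading them.
with zipfile.ZipFile(depth_path) as zf:
    for name in zf.namelist()[:10]:
        print(name)
```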
## Reference(s):

- [GEN3C](https://github.com/nv-tlabs/GEN3C)

## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citation

```
@inproceedings{bahmani2025lyra,
  title={Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation},
  author={Bahmani, Sherwin and Shen, Tianchang and Ren, Jiawei and Huang, Jiahui and Jiang, Yifeng and Turki, Haithem and Tagliasacchi, Andrea and Lindell, David B. and Gojcic, Zan and Fidler, Sanja and Ling, Huan and Gao, Jun and Ren, Xuanchi},
  booktitle={arXiv preprint arXiv:2509.19296},
  year={2025}
}
```

```
@inproceedings{ren2025gen3c,
  title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
  author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and Lu, Yifan and Nimier-David, Merlin and M{\"u}ller, Thomas and Keller, Alexander and Fidler, Sanja and Gao, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```