---
license: other
license_name: license
license_link: LICENSE
library_name: DepthCrafter
arxiv: 2409.02095
tags:
- vision
pipeline_tag: depth-estimation
widget:
- inference: false
---

## ___***DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos***___
<div align="center">
<img src='https://depthcrafter.github.io/img/logo.png' style="height:140px">
<a href='https://arxiv.org/abs/2409.02095'><img src='https://img.shields.io/badge/arXiv-2409.02095-b31b1b.svg'></a> <a href='https://depthcrafter.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a>

_**[Wenbo Hu<sup>1* †</sup>](https://wbhu.github.io),
[Xiangjun Gao<sup>2*</sup>](https://scholar.google.com/citations?user=qgdesEcAAAAJ&hl=en),
[Xiaoyu Li<sup>1* †</sup>](https://xiaoyu258.github.io),
[Sijie Zhao<sup>1</sup>](https://scholar.google.com/citations?user=tZ3dS3MAAAAJ&hl=en),
[Xiaodong Cun<sup>1</sup>](https://vinthony.github.io/academic), <br>
[Yong Zhang<sup>1</sup>](https://yzhang2016.github.io),
[Long Quan<sup>2</sup>](https://home.cse.ust.hk/~quan),
[Ying Shan<sup>3, 1</sup>](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)**_
<br><br>
<sup>1</sup>Tencent AI Lab
<sup>2</sup>The Hong Kong University of Science and Technology
<sup>3</sup>ARC Lab, Tencent PCG

arXiv preprint, 2024

</div>


If you find DepthCrafter useful, please help ⭐ the
<a style='font-size:18px;color: #FF5DB0' href='https://github.com/Tencent/DepthCrafter'>[Github Repo]</a>,
which helps open-source projects. Thanks!

## Introduction
DepthCrafter can generate temporally consistent long depth sequences with fine-grained details for open-world videos,
without requiring additional information such as camera poses or optical flow.

## Visualization
We provide some demos of unprojected point-cloud sequences, with the reference RGB and estimated depth videos.
Please refer to our [project page](https://depthcrafter.github.io) for more details.
<img src="./assets/visualization.gif">
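The point-cloud demos above come from unprojecting each estimated depth map into 3D through a camera model. As a minimal sketch of that unprojection step (the function name and the pinhole intrinsics `fx`, `fy`, `cx`, `cy` are illustrative assumptions, not part of the DepthCrafter codebase):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (H*W, 3) point cloud
    using a pinhole camera model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a constant-depth 2x2 map with the principal point at pixel (0, 0).
pts = unproject_depth(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(pts.shape)  # (4, 3)
```

Applying this per frame (with fixed intrinsics) yields the point-cloud sequences shown in the demo; DepthCrafter itself supplies only the depth maps.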