Dataset preview: ~2.5k images (width 768 px), each with a 6-class `label` field:

| label | class |
|---|---|
| 0 | Ref-Cartoon-Movie |
| 1 | Ref-Female-Upper-Body |
| 2 | Ref-Female-Whole-Body |
| 3 | Ref-Male-Upper-Body |
| 4 | Ref-Male-Whole-Body |
| 5 | assets |
SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation

Jiaming Zhang · Shengming Cao · Rui Li · Xiaotong Zhao · Yutao Cui · Xinglin Hou · Gangshan Wu · Haolan Chen · Yu Xu · Limin Wang · Kai Ma

Multimedia Computing Group, Nanjing University | Platform and Content Group (PCG), Tencent
This repository hosts X-Dance, the test dataset accompanying the paper "SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation".

SteadyDancer is a strong animation framework built on the Image-to-Video paradigm, ensuring robust first-frame preservation. In contrast to prior Reference-to-Video approaches, which often suffer from identity drift under the spatio-temporal misalignments common in real-world applications, SteadyDancer generates high-fidelity and temporally coherent human animations, outperforming existing methods in visual quality and control while requiring significantly fewer training resources.
Standard benchmarks, such as TikTok and RealisDance, source both the reference image and the pose sequence from the same video. This idealized setup fails to reflect the spatio-temporal misalignment challenges prevalent in real-world applications. To evaluate a model's generalization more robustly under such conditions, we curated a new different-source evaluation dataset, X-Dance.

We first collected 12 distinct driving videos: 8 sequences of intricate, high-dynamic dance movements and 4 sequences of low-amplitude daily activities. These sequences are replete with non-ideal real-world factors such as motion blur, severe occlusion, and drastic pose changes. Tailored to these motions, we then curated a diverse set of reference images to simulate real-world misalignments. The collection contains: (1) anime characters, to introduce stylistic domain gaps; (2) half-body shots, to represent compositional inconsistencies; (3) cross-gender or anime characters, to simulate significant skeletal structural discrepancies; and (4) subjects in distinct postures, to maximize the initial action gap.

By systematically pairing these reference images with the 12 driving videos, we simulate two critical real-world challenges: (1) spatial pose-structure inconsistency (e.g., an anime character driven by a real-world pose), and (2) temporal discontinuity, i.e., a significant gap between the reference pose and the initial driving pose.
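The pairing protocol above can be sketched as a cross-product of reference categories and driving videos. The sketch below is illustrative only: the class names mirror the dataset's label set, but the video identifiers (`dance_*`, `daily_*`) and the `build_eval_pairs` helper are hypothetical stand-ins, not part of the released dataset.

```python
from itertools import product

# Reference-image categories, mirroring the dataset's label classes
# (the "assets" class is excluded, as it is not a reference category).
reference_classes = [
    "Ref-Cartoon-Movie",
    "Ref-Female-Upper-Body",
    "Ref-Female-Whole-Body",
    "Ref-Male-Upper-Body",
    "Ref-Male-Whole-Body",
]

# Hypothetical IDs for the 12 driving videos:
# 8 high-dynamic dance sequences + 4 low-amplitude daily activities.
driving_videos = (
    [f"dance_{i:02d}" for i in range(8)]
    + [f"daily_{i:02d}" for i in range(4)]
)

def build_eval_pairs(refs, videos):
    """Pair every reference category with every driving video."""
    return [(r, v) for r, v in product(refs, videos)]

pairs = build_eval_pairs(reference_classes, driving_videos)
print(len(pairs))  # 5 categories x 12 videos = 60 pairs
```

Each resulting pair is a different-source combination, so every evaluation sample carries some spatial or temporal misalignment by construction.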
📚 Citation
If you find our paper or this codebase useful for your research, please cite us.
@misc{zhang2025steadydancer,
  title={SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation},
  author={Jiaming Zhang and Shengming Cao and Rui Li and Xiaotong Zhao and Yutao Cui and Xinglin Hou and Gangshan Wu and Haolan Chen and Yu Xu and Limin Wang and Kai Ma},
  year={2025},
  eprint={2511.19320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.19320},
}
