SIM1: Physics-Aligned Simulator as Zero-Shot Data Scaler in Deformable Worlds
Abstract
A physics-aligned simulation framework enables effective robotic manipulation of deformable objects by creating metric-consistent synthetic data on which trained policies match real-world performance.
Robotic manipulation of deformable objects represents a data-intensive regime in embodied learning, where shape, contact, and topology co-evolve in ways that far exceed the variability of rigid bodies. Although simulation promises relief from the cost of real-world data acquisition, prevailing sim-to-real pipelines remain rooted in rigid-body abstractions, producing mismatched geometry, fragile soft dynamics, and motion primitives poorly suited for cloth interaction. We posit that simulation fails not for being synthetic, but for being ungrounded. To address this, we introduce SIM1, a physics-aligned real-to-sim-to-real data engine that grounds simulation in the physical world. Given limited demonstrations, the system digitizes scenes into metric-consistent twins, calibrates deformable dynamics through elastic modeling, and expands behaviors via diffusion-based trajectory generation with quality filtering. This pipeline transforms sparse observations into scaled synthetic supervision with near-demonstration fidelity. Experiments show that policies trained on purely synthetic data achieve parity with real-data baselines at a 1:15 equivalence ratio, while delivering 90% zero-shot success and 50% generalization gains in real-world deployment. These results validate physics-aligned simulation as scalable supervision for deformable manipulation and a practical pathway for data-efficient policy learning.
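The abstract's final pipeline stage, expanding a few demonstrations into many filtered synthetic trajectories, can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the function name, the Gaussian perturbation (in place of the paper's diffusion-based generator), and the RMS-deviation threshold (in place of its quality filter) are illustrative assumptions, not SIM1's actual method.

```python
import math
import random

def expand_demonstration(demo, n_samples=200, noise=0.05, max_dev=0.1, seed=0):
    """Perturb a demonstration trajectory and keep only candidates that
    stay within an RMS deviation budget of the original.

    Toy stand-in for 'diffusion-based trajectory generation with
    quality filtering'; all parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        # Add independent Gaussian noise to every waypoint.
        candidate = [p + rng.gauss(0.0, noise) for p in demo]
        # RMS deviation from the demonstration acts as the quality score.
        deviation = math.sqrt(
            sum((c - p) ** 2 for c, p in zip(candidate, demo)) / len(demo)
        )
        if deviation <= max_dev:
            kept.append(candidate)
    return kept

# Toy 1-D trajectory (e.g. gripper height over five timesteps).
demo = [0.0, 0.1, 0.25, 0.4, 0.5]
synthetic = expand_demonstration(demo)
print(f"{len(synthetic)} filtered synthetic trajectories from 1 demonstration")
```

The design point this sketch captures is the 1:N data-expansion ratio reported in the abstract: one demonstration yields many synthetic supervision samples, with a filter discarding generations that drift too far from demonstrated behavior.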
Community
SIM1: a world in which simulation and reality coincide, making simulated experience directly executable in the physical world, at scale, without loss.
A new scaling law emerges: intelligence scales, while real-world data does not.
Simulation is no longer a proxy. It is supervision.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- V-Dreamer: Automating Robotic Simulation and Trajectory Synthesis via Video Generation Priors (2026)
- D-REX: Differentiable Real-to-Sim-to-Real Engine for Learning Dexterous Grasping (2026)
- URDF-Anything+: Autoregressive Articulated 3D Models Generation for Physical Simulation (2026)
- SoftMimicGen: A Data Generation System for Scalable Robot Learning in Deformable Object Manipulation (2026)
- Real-to-Sim for Highly Cluttered Environments via Physics-Consistent Inter-Object Reasoning (2026)
- CRAFT: Video Diffusion for Bimanual Robot Data Generation (2026)
- MeshMimic: Geometry-Aware Humanoid Motion Learning through 3D Scene Reconstruction (2026)
Get this paper in your agent:

hf papers read 2604.08544

Don't have the latest CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash