Stable-Sim2Real:
Exploring Simulation of Real-Captured 3D Data with Two-Stage Depth Diffusion

SSE, CUHKSZ · FNii-Shenzhen · Guangdong Provincial Key Laboratory of Future Networks of Intelligence, CUHKSZ · Tencent Hunyuan3D · ByteDance Games

ICCV 2025 (Highlight)


We introduce Stable-Sim2Real for simulating real-captured 3D data in a data-driven manner.

Abstract

3D data simulation aims to bridge the gap between simulated and real-captured 3D data, a fundamental problem for real-world 3D visual tasks. Most 3D data simulation methods inject predefined physical priors but struggle to capture the full complexity of real data. An ideal alternative is to learn an implicit mapping from synthetic to realistic data in a data-driven manner, yet progress along this path has stagnated in recent studies.
This work explores a new path for data-driven 3D simulation, called Stable-Sim2Real, based on a novel two-stage depth diffusion model. The first stage fine-tunes Stable Diffusion to generate the residual between paired real and synthetic depth maps, producing stable but coarse depth in which some local regions may deviate from realistic patterns. To enhance these regions, both the synthetic depth and the first-stage output are fed into a second-stage diffusion model, whose loss is re-weighted to prioritize the deviating areas identified by a 3D discriminator. We also provide a new benchmark scheme for evaluating 3D data simulation methods. Extensive experiments show that training networks with 3D data simulated by our method significantly improves performance on real-world 3D visual tasks. Moreover, the evaluation demonstrates the high similarity between our simulated 3D data and real-captured patterns.
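To make the Stage-I idea concrete, here is a minimal PyTorch sketch, assuming standard DDPM noise-prediction training on the depth residual. TinyDenoiser, the tensor shapes, and the noise schedule are illustrative placeholders for the fine-tuned Stable Diffusion backbone, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyDenoiser(nn.Module):
        """Placeholder conditional denoiser (stand-in for the fine-tuned UNet)."""
        def __init__(self, ch=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, ch, 3, padding=1), nn.SiLU(),   # input: [noisy residual, CAD depth]
                nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
                nn.Conv2d(ch, 1, 3, padding=1),              # output: predicted noise
            )

        def forward(self, x_t, cad_depth, t):
            # A real backbone would also embed the timestep t; omitted for brevity.
            return self.net(torch.cat([x_t, cad_depth], dim=1))

    def stage1_loss(model, real_depth, cad_depth, alphas_cumprod):
        """Standard DDPM noise-prediction loss, applied to the depth residual."""
        residual = real_depth - cad_depth                    # target signal: real minus CAD
        t = torch.randint(0, len(alphas_cumprod), (real_depth.size(0),))
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        noise = torch.randn_like(residual)
        x_t = a.sqrt() * residual + (1 - a).sqrt() * noise   # forward diffusion step
        return F.mse_loss(model(x_t, cad_depth, t), noise)

    # Toy usage with random tensors standing in for paired depth maps.
    model = TinyDenoiser()
    alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)      # toy noise schedule
    loss = stage1_loss(model, torch.randn(4, 1, 64, 64),
                       torch.randn(4, 1, 64, 64), alphas_cumprod)
    loss.backward()

Predicting the residual rather than the full real depth keeps Stage-I training stable: the model only has to learn the sensor-specific deviation from the clean CAD depth.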


Method Overview


An overview of our framework. Conditioned on CAD (i.e., synthetic) depth maps, our model first generates the residual (i.e., difference) between real-captured and CAD depth maps, producing stable but coarse depth maps. Next, the Stage-I output is projected into 3D and passed to a 3D discriminator, which identifies the unsatisfactory local areas that deviate from real-captured patterns. Finally, our Stage-II diffusion is conditioned on both the CAD depth and the Stage-I output, with its training loss re-weighted to focus on enhancing the unsatisfactory areas of the Stage-I output.
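As a concrete illustration of this loop, below is a minimal PyTorch sketch of two assumed ingredients: back-projecting depth to a 3D point cloud (so a 3D discriminator can score it) and re-weighting the diffusion loss by a per-pixel realism map. The function names, the pinhole-intrinsics convention, and the 1 + boost * (1 - realism) weighting form are illustrative assumptions, not the paper's exact formulation.

    import torch

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project an (H, W) depth map to an (H*W, 3) point cloud (pinhole model)."""
        H, W = depth.shape
        v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)

    def reweighted_diffusion_loss(pred_noise, true_noise, realism, boost=4.0):
        """Noise-prediction MSE, up-weighted where per-pixel realism is low.

        `realism` in [0, 1] (1 = indistinguishable from real) is assumed to come
        from the 3D discriminator's scores on the projected Stage-I output,
        splatted back onto the image grid.
        """
        weight = 1.0 + boost * (1.0 - realism)           # bad regions dominate the loss
        return (weight * (pred_noise - true_noise) ** 2).mean()

    # Toy usage.
    points = depth_to_points(torch.rand(64, 64) * 2.0, fx=60.0, fy=60.0, cx=32.0, cy=32.0)
    loss = reweighted_diffusion_loss(torch.randn(1, 1, 64, 64),
                                     torch.randn(1, 1, 64, 64),
                                     realism=torch.rand(1, 1, 64, 64))

Scoring realism in 3D rather than on the 2D depth map lets the discriminator judge geometric patterns (e.g., point-cloud noise structure) that are hard to see in image space.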

Animated Qualitative Comparison

Qualitative Comparison

BibTeX

@inproceedings{xu2025sim2real,
  title     = {Stable-Sim2Real: Exploring Simulation of Real-Captured 3D Data with Two-Stage Depth Diffusion},
  author    = {Mutian Xu and Chongjie Ye and Haolin Liu and Yushuang Wu and Jiahao Chang and Xiaoguang Han},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2025}
}