
MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model

  • Shuwei Shao
  • Zhongcai Pei
  • Weihai Chen*
  • Dingchi Sun
  • Peter C.Y. Chen
  • Zhengguo Li*
  • *Corresponding author of this work
  • Beihang University
  • National University of Singapore
  • Agency for Science, Technology and Research, Singapore

Research output: Contribution to journal › Article › peer-review

Abstract

Over the past few years, self-supervised monocular depth estimation has received widespread attention. Most efforts focus on designing different types of network architectures and loss functions, or on handling edge cases such as occlusion and dynamic objects. In this work, we take another path and propose a novel conditional diffusion-based generative framework for self-supervised monocular depth estimation, dubbed MonoDiffusion. Because depth ground-truth is unavailable in a self-supervised setting, we develop a new pseudo ground-truth diffusion process to assist the diffusion model during training. Instead of diffusing at a fixed high resolution, we perform diffusion in a coarse-to-fine manner, which allows for faster inference without sacrificing accuracy, and in some cases even improves it. Furthermore, we develop a simple yet effective contrastive depth reconstruction mechanism to enhance the denoising ability of the model. It is worth noting that MonoDiffusion naturally acquires depth uncertainty estimates, which are essential in safety-critical applications. Extensive experiments on the KITTI, Make3D and DIML datasets indicate that our MonoDiffusion outperforms prior state-of-the-art self-supervised competitors. The source code will be made publicly available upon acceptance.
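The pseudo ground-truth diffusion process described above can be sketched as a standard closed-form forward diffusion applied to a pseudo ground-truth depth map instead of real ground truth. The sketch below is a minimal illustration, not the paper's implementation: the linear beta schedule, the NumPy tensors, and the function name `forward_diffuse` are all assumptions, and the conditioned denoising network and the photometric self-supervision that produce the pseudo depth are not shown.

```python
import numpy as np

def forward_diffuse(d0, t, alphas_cumprod, rng):
    """Forward diffusion q(d_t | d_0): noise the pseudo ground-truth
    depth d0 to timestep t via the Gaussian reparameterization
    d_t = sqrt(a_bar_t) * d0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(d0.shape)
    d_t = np.sqrt(a_bar) * d0 + np.sqrt(1.0 - a_bar) * noise
    return d_t, noise

# Hypothetical linear beta schedule and its cumulative alpha products.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
# Stand-in pseudo ground-truth depth map (e.g. from a teacher network).
pseudo_depth = rng.uniform(0.1, 1.0, size=(8, 8))
d_t, noise = forward_diffuse(pseudo_depth, t=50,
                             alphas_cumprod=alphas_cumprod, rng=rng)

# A denoiser conditioned on image features would be trained to recover
# the clean depth (or the added noise) from d_t at each sampled t.
```

At inference, sampling would start from pure noise and iteratively denoise conditioned on the input image, which is where the coarse-to-fine schedule and the natural per-pixel uncertainty mentioned in the abstract arise.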

Original language: English
Pages (from-to): 3664-3678
Number of pages: 15
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 35
Issue number: 4
DOI
Publication status: Published - 2025
