Vision-based 3D occupancy prediction in autonomous driving: a review and outlook

Yanan Zhang, Jinqing Zhang, Zengran Wang, Junhao Xu, Di Huang*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

In recent years, autonomous driving has garnered escalating attention for its potential to relieve drivers’ burdens and improve driving safety. Vision-based 3D occupancy prediction, which predicts the spatial occupancy status and semantics of 3D voxel grids around the autonomous vehicle from image inputs, is an emerging perception task well suited to cost-effective perception systems in autonomous driving. Although numerous studies have demonstrated the advantages of 3D occupancy prediction over object-centric perception tasks, a dedicated review of this rapidly developing field is still lacking. In this paper, we first introduce the background of vision-based 3D occupancy prediction and discuss the challenges of this task. Second, we conduct a comprehensive survey of progress in vision-based 3D occupancy prediction from three aspects: feature enhancement, deployment friendliness, and label efficiency, and provide an in-depth analysis of the potential and challenges of each category of methods. Finally, we summarize prevailing research trends and propose some inspiring future outlooks. To provide a valuable reference for researchers, a regularly updated collection of related papers, datasets, and code is maintained at github.com/zya3d/Awesome-3D-Occupancy-Prediction.

Original language: English
Article number: 2001301
Journal: Frontiers of Computer Science
Volume: 20
Issue number: 1
DOIs
State: Published - Jan 2026

Keywords

  • 3D occupancy prediction
  • Transformer
  • autonomous driving
  • bird’s-eye-view (BEV)
