TY - GEN
T1 - Towards Cross-Modal Point Cloud Retrieval for Indoor Scenes
AU - Yu, Fuyang
AU - Wang, Zhen
AU - Li, Dongyuan
AU - Zhu, Peide
AU - Liang, Xiaohui
AU - Wang, Xiaochuan
AU - Okumura, Manabu
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Cross-modal retrieval, an important emerging information retrieval task, benefits from recent advances in multimodal technologies. However, current cross-modal retrieval methods mainly focus on the interaction between textual information and 2D images and lack research on 3D data, especially scene-level point clouds, despite the increasing role point clouds play in daily life. Therefore, in this paper, we propose a cross-modal point cloud retrieval benchmark that focuses on using text or images to retrieve point clouds of indoor scenes. Given the high cost of obtaining point clouds compared to text and images, we first design a pipeline to automatically generate a large number of indoor scenes and their corresponding scene graphs. Based on this pipeline, we collect a balanced dataset called CRISP, which contains 10K point cloud scenes along with their corresponding scene images and descriptions. We then use state-of-the-art models to design baseline methods on CRISP. Our experiments demonstrate that point cloud retrieval accuracy is much lower than that of cross-modal 2D image retrieval, especially for textual queries. Furthermore, we propose ModalBlender, a tri-modal framework that greatly improves Text-PointCloud retrieval performance. Through extensive experiments, CRISP proves to be a valuable dataset that merits further research. (The dataset can be downloaded at https://github.com/CRISPdataset/CRISP.)
AB - Cross-modal retrieval, an important emerging information retrieval task, benefits from recent advances in multimodal technologies. However, current cross-modal retrieval methods mainly focus on the interaction between textual information and 2D images and lack research on 3D data, especially scene-level point clouds, despite the increasing role point clouds play in daily life. Therefore, in this paper, we propose a cross-modal point cloud retrieval benchmark that focuses on using text or images to retrieve point clouds of indoor scenes. Given the high cost of obtaining point clouds compared to text and images, we first design a pipeline to automatically generate a large number of indoor scenes and their corresponding scene graphs. Based on this pipeline, we collect a balanced dataset called CRISP, which contains 10K point cloud scenes along with their corresponding scene images and descriptions. We then use state-of-the-art models to design baseline methods on CRISP. Our experiments demonstrate that point cloud retrieval accuracy is much lower than that of cross-modal 2D image retrieval, especially for textual queries. Furthermore, we propose ModalBlender, a tri-modal framework that greatly improves Text-PointCloud retrieval performance. Through extensive experiments, CRISP proves to be a valuable dataset that merits further research. (The dataset can be downloaded at https://github.com/CRISPdataset/CRISP.)
KW - Cross-modal Retrieval
KW - Indoor Scene
KW - Point Cloud
UR - https://www.scopus.com/pages/publications/85184797734
U2 - 10.1007/978-3-031-53302-0_7
DO - 10.1007/978-3-031-53302-0_7
M3 - Conference contribution
AN - SCOPUS:85184797734
SN - 9783031533013
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 89
EP - 102
BT - MultiMedia Modeling - 30th International Conference, MMM 2024, Proceedings
A2 - Rudinac, Stevan
A2 - Worring, Marcel
A2 - Liem, Cynthia
A2 - Hanjalic, Alan
A2 - Jónsson, Björn Þór
A2 - Yamakata, Yoko
A2 - Liu, Bei
PB - Springer Science and Business Media Deutschland GmbH
T2 - 30th International Conference on MultiMedia Modeling, MMM 2024
Y2 - 29 January 2024 through 2 February 2024
ER -