TY - GEN
T1 - Delving into Light-Dark Semantic Segmentation for Indoor Scenes Understanding
AU - Ying, Xiaowen
AU - Lang, Bo
AU - Zheng, Zhihao
AU - Chuah, Mooi Choo
N1 - Publisher Copyright:
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2022/10/10
Y1 - 2022/10/10
N2 - State-of-the-art segmentation models are mostly trained with large-scale datasets collected under favorable lighting conditions, and hence directly applying such trained models to dark scenes will result in unsatisfactory performance. In this paper, we present the first benchmark dataset and evaluation methodology to study the problem of semantic segmentation under different lighting conditions for indoor scenes. Our dataset, namely LDIS, consists of samples collected from 87 different indoor scenes under both well-illuminated and low-light conditions. Different from existing work, our benchmark provides a new task setting, namely Light-Dark Semantic Segmentation (LDSS), which adopts four different evaluation metrics that assess the performance of a model from multiple aspects. We perform extensive experiments and ablation studies to compare the effectiveness of different existing techniques with our standardized evaluation protocol. In addition, we propose a new technique, namely DepthAux, that utilizes the consistency of depth images under different lighting conditions to help a model learn a unified and illumination-invariant representation. Our experimental results show that the proposed DepthAux can provide consistent and significant improvements when applied to a variety of different models. Our dataset and other resources are publicly available on our project page: http://mercy.cse.lehigh.edu/LDIS.
AB - State-of-the-art segmentation models are mostly trained with large-scale datasets collected under favorable lighting conditions, and hence directly applying such trained models to dark scenes will result in unsatisfactory performance. In this paper, we present the first benchmark dataset and evaluation methodology to study the problem of semantic segmentation under different lighting conditions for indoor scenes. Our dataset, namely LDIS, consists of samples collected from 87 different indoor scenes under both well-illuminated and low-light conditions. Different from existing work, our benchmark provides a new task setting, namely Light-Dark Semantic Segmentation (LDSS), which adopts four different evaluation metrics that assess the performance of a model from multiple aspects. We perform extensive experiments and ablation studies to compare the effectiveness of different existing techniques with our standardized evaluation protocol. In addition, we propose a new technique, namely DepthAux, that utilizes the consistency of depth images under different lighting conditions to help a model learn a unified and illumination-invariant representation. Our experimental results show that the proposed DepthAux can provide consistent and significant improvements when applied to a variety of different models. Our dataset and other resources are publicly available on our project page: http://mercy.cse.lehigh.edu/LDIS.
KW - Dataset
KW - Evaluation
KW - Low-light
KW - Semantic Segmentation
UR - https://www.scopus.com/pages/publications/85142649363
U2 - 10.1145/3552482.3556556
DO - 10.1145/3552482.3556556
M3 - Conference contribution
AN - SCOPUS:85142649363
T3 - PIES-ME 2022 - Proceedings of the 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments
SP - 3
EP - 9
BT - PIES-ME 2022 - Proceedings of the 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments
PB - Association for Computing Machinery, Inc
T2 - 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments, PIES-ME 2022
Y2 - 14 October 2022 through 14 October 2022
ER -