TY - GEN
T1 - TOWARDS GENERALIZABLE REFERRING IMAGE SEGMENTATION VIA TARGET PROMPT AND VISUAL COHERENCE
AU - Liu, Yajie
AU - Ge, Pu
AU - Ma, Haoxiang
AU - Fan, Shichao
AU - Liu, Qingjie
AU - Huang, Di
AU - Wang, Yunhong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Referring image segmentation (RIS) aims to segment objects in an image conditioned on free-form text descriptions. Despite substantial progress, current approaches still struggle on cases with varied text expressions or unseen visual entities, limiting their further application. In this paper, we present a novel RIS approach that substantially improves generalization ability by addressing the two dilemmas mentioned above. Specifically, to deal with unconstrained texts, we propose to boost a given expression with an explicit and crucial prompt, which complements the expression in a unified context and facilitates target capturing in the presence of linguistic style changes. Furthermore, we introduce a multi-modal fusion aggregation module with visual guidance from a powerful pretrained model to leverage spatial relations and pixel coherence, handling the incomplete target masks and false-positive irregular clumps that often appear on unseen visual entities. Extensive experiments are conducted in zero-shot cross-dataset settings, and the proposed approach achieves consistent gains over the state of the art, e.g., mIoU increases of 4.15%, 5.45%, and 4.64% on RefCOCO, RefCOCO+, and ReferIt, respectively, demonstrating its effectiveness.
KW - Referring image segmentation
KW - generalization
KW - zero-shot cross-dataset
UR - https://www.scopus.com/pages/publications/85216880354
U2 - 10.1109/ICIP51287.2024.10647728
DO - 10.1109/ICIP51287.2024.10647728
M3 - Conference contribution
AN - SCOPUS:85216880354
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 2599
EP - 2605
BT - 2024 IEEE International Conference on Image Processing, ICIP 2024 - Proceedings
PB - IEEE Computer Society
T2 - 31st IEEE International Conference on Image Processing, ICIP 2024
Y2 - 27 October 2024 through 30 October 2024
ER -