TY - JOUR
T1 - Exploiting vector attention and context prior for ultrasound image segmentation
AU - Xu, Lu
AU - Gao, Shengbo
AU - Shi, Lijuan
AU - Wei, Boxuan
AU - Liu, Xiaowei
AU - Zhang, Jicong
AU - He, Yihua
N1 - Publisher Copyright:
© 2021
PY - 2021/9/24
Y1 - 2021/9/24
N2 - Automatic ultrasound image segmentation is crucial for clinical diagnosis and treatment. However, ultrasound image segmentation is challenging because of ambiguous structures, incomplete boundaries and analogous appearances among different categories. To address the above challenges, we propose a flexible plug-and-play module called the vector self-attention layer (VSAL) to conduct long-range spatial and channel reasoning simultaneously. Moreover, it preserves translational equivariance and considers multi-scale information by using geometric priors and multi-scale calibration. In addition, a novel context aggregation loss (CAL) is designed to model inter-class and intra-class contextual dependencies based on a context prior. The proposed methods, VSAL and CAL, are flexible enough to be integrated into any CNN-based method. We validate the effectiveness of the modules on two different ultrasound datasets, the multi-target Fetal Apical Four-chamber dataset and the one-target Fetal Head dataset. Experimental results reveal significant performance gains when using the proposed modules.
AB - Automatic ultrasound image segmentation is crucial for clinical diagnosis and treatment. However, ultrasound image segmentation is challenging because of ambiguous structures, incomplete boundaries and analogous appearances among different categories. To address the above challenges, we propose a flexible plug-and-play module called the vector self-attention layer (VSAL) to conduct long-range spatial and channel reasoning simultaneously. Moreover, it preserves translational equivariance and considers multi-scale information by using geometric priors and multi-scale calibration. In addition, a novel context aggregation loss (CAL) is designed to model inter-class and intra-class contextual dependencies based on a context prior. The proposed methods, VSAL and CAL, are flexible enough to be integrated into any CNN-based method. We validate the effectiveness of the modules on two different ultrasound datasets, the multi-target Fetal Apical Four-chamber dataset and the one-target Fetal Head dataset. Experimental results reveal significant performance gains when using the proposed modules.
KW - Context prior
KW - Convolutional neural network
KW - Self-attention mechanism
KW - Ultrasound image segmentation
UR - https://www.scopus.com/pages/publications/85110354777
U2 - 10.1016/j.neucom.2021.05.033
DO - 10.1016/j.neucom.2021.05.033
M3 - Article
AN - SCOPUS:85110354777
SN - 0925-2312
VL - 454
SP - 461
EP - 473
JO - Neurocomputing
JF - Neurocomputing
ER -