TY - GEN
T1 - Local shape transfer for image co-segmentation
AU - Teng, Wei
AU - Zhang, Yu
AU - Chen, Xiaowu
AU - Li, Jia
AU - He, Zhiqiang
N1 - Publisher Copyright:
© 2016 The copyright of this document resides with its authors.
PY - 2016
Y1 - 2016
N2 - Image co-segmentation is a challenging computer vision task that aims to segment all pixels of the common objects in an image set. In real-world cases, however, the common objects often vary greatly in pose, location and scale, making their global shapes highly inconsistent across images and difficult to segment. To address this problem, this paper proposes a novel co-segmentation approach that transfers patch-level local object shapes, which appear more consistently across different images. In our approach, we first employ dense correspondences to construct a patch neighbourhood system, which is refined using Locally Linear Embedding. Based on the patch relationships, an efficient algorithm is developed to jointly segment the objects in each image while transferring their local shapes across different images. Experiments show that our approach performs comparably with or better than state-of-the-art methods on the iCoseg dataset [2], while achieving a relative improvement of more than 31% on the challenging Fashionista benchmark [31].
AB - Image co-segmentation is a challenging computer vision task that aims to segment all pixels of the common objects in an image set. In real-world cases, however, the common objects often vary greatly in pose, location and scale, making their global shapes highly inconsistent across images and difficult to segment. To address this problem, this paper proposes a novel co-segmentation approach that transfers patch-level local object shapes, which appear more consistently across different images. In our approach, we first employ dense correspondences to construct a patch neighbourhood system, which is refined using Locally Linear Embedding. Based on the patch relationships, an efficient algorithm is developed to jointly segment the objects in each image while transferring their local shapes across different images. Experiments show that our approach performs comparably with or better than state-of-the-art methods on the iCoseg dataset [2], while achieving a relative improvement of more than 31% on the challenging Fashionista benchmark [31].
UR - https://www.scopus.com/pages/publications/85047720185
U2 - 10.5244/C.30.3
DO - 10.5244/C.30.3
M3 - Conference contribution
AN - SCOPUS:85047720185
SN - 1901725596
T3 - British Machine Vision Conference 2016, BMVC 2016
SP - 3.1-3.12
BT - British Machine Vision Conference 2016, BMVC 2016
PB - British Machine Vision Conference, BMVC
T2 - 27th British Machine Vision Conference, BMVC 2016
Y2 - 19 September 2016 through 22 September 2016
ER -