Survey on recent progresses of semantic image segmentation with CNNs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Convolutional neural networks (CNNs) have become the mainstream approach in many computer vision tasks, such as image classification, object detection, and face recognition. We survey the state-of-the-art results on the Pascal VOC 2012 semantic segmentation challenge, which saw great progress in 2015. We investigate the effectiveness of the new layers, structures, and strategies behind these results, which were proposed to produce more refined segmentations. Their main contributions focus on exploiting more structural and contextual information in the image or feature spaces. Most of these approaches address separate, independent stages of semantic image segmentation. In this paper, we discuss possible architectures that incorporate existing structures and strategies. Finally, we propose possible directions for enhancing CNNs to segment given semantic objects.

Original language: English
Title of host publication: Proceedings - 2016 International Conference on Virtual Reality and Visualization, ICVRV 2016
Editors: Dandan Ding, Dangxiao Wang, Jian Chen, Xun Luo
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 158-163
Number of pages: 6
ISBN (Electronic): 9781509051885
DOIs
State: Published - 1 Jun 2017
Event: 6th International Conference on Virtual Reality and Visualization, ICVRV 2016 - Hangzhou, Zhejiang, China
Duration: 24 Sep 2016 - 26 Sep 2016

Publication series

Name: Proceedings - 2016 International Conference on Virtual Reality and Visualization, ICVRV 2016

Conference

Conference: 6th International Conference on Virtual Reality and Visualization, ICVRV 2016
Country/Territory: China
City: Hangzhou, Zhejiang
Period: 24/09/16 - 26/09/16

Keywords

  • CNN
  • Pascal VOC 2012 challenge
  • Semantic image segmentation

