PLFCN: Pyramid loss reinforced fully convolutional network

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In the field of remote sensing, semantic segmentation networks for orthophotos have received wide attention. However, it is usually difficult to achieve high accuracy and high efficiency at the same time. In this paper, we propose a novel pyramid loss reinforced fully convolutional network (PLFCN) to address this issue. By introducing deep pyramid supervision, the network exploits multi-scale spatial context information to improve semantic segmentation performance. The auxiliary pyramid loss structure can be discarded during testing, so the network runs inference as fast as FCN. The main contributions of this paper are as follows: 1) an auxiliary pyramid loss structure is proposed to enhance the performance of FCN through multi-scale, deep supervision; 2) the advantages of multi-scale structures and auxiliary losses are combined to improve performance while maintaining efficiency. The results show that semantic segmentation performance is significantly improved while the high efficiency of FCN is retained.
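The abstract describes an auxiliary loss computed at several pyramid scales during training and dropped at test time. A minimal NumPy sketch of that idea is below; the function names, pyramid factors, and loss weights are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy.
    probs: (H, W, C) softmax outputs; labels: (H, W) integer class map."""
    h, w = labels.shape
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(p + 1e-12))

def downsample(arr, factor):
    """Nearest-neighbour downsampling by an integer stride (hypothetical
    stand-in for the pyramid construction used in the paper)."""
    return arr[::factor, ::factor]

def pyramid_loss(probs, labels, factors=(1, 2, 4), weights=(1.0, 0.4, 0.4)):
    """Weighted sum of cross-entropies over a pyramid of scales.
    The factor > 1 terms act as auxiliary deep supervision during training;
    at inference only the full-resolution prediction is used, so the
    network runs as fast as a plain FCN."""
    total = 0.0
    for f, w in zip(factors, weights):
        total += w * cross_entropy(downsample(probs, f), downsample(labels, f))
    return total
```

With uniform two-class predictions, each scale contributes ln 2, so the total is the sum of the weights times ln 2; in a real training loop the auxiliary heads would be detached or removed for deployment.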

Original language: English
Title of host publication: ICDSC 2019 - 13th International Conference on Distributed Smart Cameras
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450371896
DOIs
State: Published - 9 Sep 2019
Event: 13th International Conference on Distributed Smart Cameras, ICDSC 2019 - Trento, Italy
Duration: 9 Sep 2019 – 11 Sep 2019

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 13th International Conference on Distributed Smart Cameras, ICDSC 2019
Country/Territory: Italy
City: Trento
Period: 9/09/19 – 11/09/19

Keywords

  • FCN
  • Pyramid Loss
  • Remote Sensing
  • Semantic Segmentation
