Segment-Oriented Depiction and Analysis for Hyperspectral Image Data

Research output: Contribution to journal › Article › peer-review

Abstract

A novel segment-oriented dictionary learning (SeODL) framework for hyperspectral image (HSI) classification is proposed. Unlike existing HSI classification methods, which directly process the whole original spectral curves of pixels, our work focuses on local segment analysis to achieve fine depiction and effective exploitation of spectral information. Treating each separated segment as a basic processing unit, we first cluster the segments into two sets that are homogeneous in trend and fluctuation, from which two small dictionaries can be quickly learned. Second, to obtain meticulous, discriminability-enhanced segment-oriented representations (SORs), the segments of the training and test pixels are coded with a novel binary-separated coding strategy. The coding stage for obtaining SORs is accelerated by our proposed enhanced orthogonal matching pursuit technique. A high-performance characteristic splicing classifier is then trained on the SORs of the training pixels. Finally, a spiral searching strategy and a multiple majority-voting method are adopted to fully incorporate the spatial information of the test pixels, whose final SORs are fed into the trained characteristic splicing classifier to determine the labels. Experimental results on three real HSI data sets demonstrate the superiority of the proposed SeODL framework over several well-known classification algorithms in terms of classification accuracy.
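The coding stage above builds on an enhanced variant of orthogonal matching pursuit (OMP); the paper's specific enhancements are not detailed in this abstract, but a minimal sketch of standard OMP, which greedily codes a segment over a learned dictionary, looks roughly like the following (the dictionary `D`, segment `x`, and sparsity level `k` are illustrative placeholders, not the paper's actual variables):

```python
import numpy as np

def omp(D, x, k):
    """Standard orthogonal matching pursuit (a simplified stand-in for the
    paper's enhanced OMP): greedily select up to k dictionary atoms, then
    re-fit the coefficients on the selected support by least squares."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    sub = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal projection: re-solve the fit on the chosen atoms.
        sub, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sub
    coef[support] = sub
    return coef

# Toy usage: a random dictionary with unit-norm atoms and a segment that
# is exactly twice the first atom, so a 1-sparse code recovers it.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 0]
code = omp(D, x, 1)
```

In the SeODL pipeline, such sparse codes (computed against the two small segment dictionaries) would serve as the building blocks of the SORs; the binary-separated coding strategy and the classifier itself are beyond what this sketch covers.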

Original language: English
Article number: 7898839
Pages (from-to): 3982-3996
Number of pages: 15
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 55
Issue number: 7
DOIs
State: Published - Jul 2017

Keywords

  • Dictionary learning
  • hyperspectral image (HSI) classification
  • segment
  • sparse representation
