TY - JOUR
T1 - Sparsity-guided saliency detection for remote sensing images
AU - Zhao, Danpei
AU - Wang, Jiajia
AU - Shi, Jun
AU - Jiang, Zhiguo
N1 - Publisher Copyright:
© 2015 The Authors.
PY - 2015/1/1
Y1 - 2015/1/1
N2 - Traditional saliency detection can effectively detect possible objects using an attentional mechanism instead of automatic object detection, and thus is widely used in natural scene detection. However, it may fail to extract salient objects accurately from remote sensing images, which have their own characteristics such as large data volumes, multiple resolutions, illumination variation, and complex texture structure. We propose a sparsity-guided saliency detection model for remote sensing images that uses a sparse representation to obtain the high-level global and background cues for saliency map integration. Specifically, it first uses pixel-level global cues and background prior information to construct two dictionaries that are used to characterize the global and background properties of remote sensing images. It then employs a sparse representation for the high-level cues. Finally, a Bayesian formula is applied to integrate the saliency maps generated by both types of high-level cues. Experimental results on remote sensing image datasets that include various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
AB - Traditional saliency detection can effectively detect possible objects using an attentional mechanism instead of automatic object detection, and thus is widely used in natural scene detection. However, it may fail to extract salient objects accurately from remote sensing images, which have their own characteristics such as large data volumes, multiple resolutions, illumination variation, and complex texture structure. We propose a sparsity-guided saliency detection model for remote sensing images that uses a sparse representation to obtain the high-level global and background cues for saliency map integration. Specifically, it first uses pixel-level global cues and background prior information to construct two dictionaries that are used to characterize the global and background properties of remote sensing images. It then employs a sparse representation for the high-level cues. Finally, a Bayesian formula is applied to integrate the saliency maps generated by both types of high-level cues. Experimental results on remote sensing image datasets that include various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
KW - Bayesian integration
KW - background prior
KW - global cues
KW - remote sensing images
KW - sparse representation
KW - sparsity-guided saliency model
UR - https://www.scopus.com/pages/publications/84942563853
U2 - 10.1117/1.JRS.9.095055
DO - 10.1117/1.JRS.9.095055
M3 - Article
AN - SCOPUS:84942563853
SN - 1931-3195
VL - 9
JO - Journal of Applied Remote Sensing
JF - Journal of Applied Remote Sensing
IS - 1
M1 - 095055
ER -