TY - GEN
T1 - Ordinal palmprint representation for personal identification
AU - Sun, Zhenan
AU - Tan, Tieniu
AU - Wang, Yunhong
AU - Li, Stan Z.
PY - 2005
Y1 - 2005
N2 - Palmprint-based personal identification, as a new member in the biometrics family, has become an active research topic in recent years. Although great progress has been made, how to represent palmprint for effective classification is still an open problem. In this paper, we present a novel palmprint representation - ordinal measure, which unifies several major existing palmprint algorithms into a general framework. In this framework, a novel palmprint representation method, namely orthogonal line ordinal features, is proposed. The basic idea of this method is to qualitatively compare two elongated, line-like image regions, which are orthogonal in orientation and generate one bit feature code. A palmprint pattern is represented by thousands of ordinal feature codes. In contrast to the state-of-the-art algorithm reported in the literature, our method achieves higher accuracy, with the equal error rate reduced by 42% for a difficult set, while the complexity of feature extraction is halved.
AB - Palmprint-based personal identification, as a new member in the biometrics family, has become an active research topic in recent years. Although great progress has been made, how to represent palmprint for effective classification is still an open problem. In this paper, we present a novel palmprint representation - ordinal measure, which unifies several major existing palmprint algorithms into a general framework. In this framework, a novel palmprint representation method, namely orthogonal line ordinal features, is proposed. The basic idea of this method is to qualitatively compare two elongated, line-like image regions, which are orthogonal in orientation and generate one bit feature code. A palmprint pattern is represented by thousands of ordinal feature codes. In contrast to the state-of-the-art algorithm reported in the literature, our method achieves higher accuracy, with the equal error rate reduced by 42% for a difficult set, while the complexity of feature extraction is halved.
UR - https://www.scopus.com/pages/publications/24644458229
U2 - 10.1109/CVPR.2005.267
DO - 10.1109/CVPR.2005.267
M3 - Conference contribution
AN - SCOPUS:24644458229
SN - 0769523722
SN - 9780769523729
T3 - Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
SP - 279
EP - 284
BT - Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
PB - IEEE Computer Society
T2 - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Y2 - 20 June 2005 through 25 June 2005
ER -