
A Robust Method for Hands Gesture Recognition from Egocentric Depth Sensor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present a method for robust and accurate hand pose recognition from egocentric depth cameras. Our method combines CNN-based hand pose estimation with hand gesture recognition based on joint locations. In the pose estimation stage, we use a hand geometry prior network to estimate the hand pose. In the gesture recognition stage, we define a hand language based on a set of predefined basic propositions, obtained by applying four predicate types to the finger and palm states. The hand language is used to convert the estimated joint locations into hand gestures. Our experimental results indicate that the method enables robust and accurate gesture recognition in self-occlusion environments.
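The abstract describes converting estimated joint locations into gestures by evaluating boolean predicates on finger and palm states and looking the result up in a predefined "hand language". A minimal sketch of that idea is below; the `is_extended` predicate, the finger ordering, and the gesture table are illustrative assumptions, not the authors' actual four predicate types or rules.

```python
# Hedged sketch of a predicate-based "hand language": each finger's
# estimated joints are reduced to a boolean proposition, and the
# resulting tuple is matched against a gesture table. The predicate
# and table below are assumptions for illustration only.
import math

def is_extended(joints):
    """Treat a finger as extended if its joint chain is nearly straight.

    `joints` is a list of (x, y, z) positions from base to tip; compare
    the direct base-to-tip distance with the summed segment lengths.
    """
    chain = sum(math.dist(joints[i], joints[i + 1])
                for i in range(len(joints) - 1))
    direct = math.dist(joints[0], joints[-1])
    return chain > 0 and direct / chain > 0.95

# Illustrative gesture table keyed by (thumb, index, middle, ring, pinky).
GESTURES = {
    (False, True, True, False, False): "victory",
    (True, True, True, True, True): "open_palm",
    (False, False, False, False, False): "fist",
}

def classify(hand):
    """`hand` maps finger name -> joint list; returns a gesture label."""
    order = ("thumb", "index", "middle", "ring", "pinky")
    key = tuple(is_extended(hand[f]) for f in order)
    return GESTURES.get(key, "unknown")
```

The paper's actual system would apply such predicates to joints predicted by the CNN pose estimator; here the joints are supplied directly to keep the example self-contained.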

Original language: English
Title of host publication: Proceedings - 8th International Conference on Virtual Reality and Visualization, ICVRV 2018
Editors: Kai Xu, Bin Zhou, Xun Luo, Yanwen Guo
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 40-45
Number of pages: 6
ISBN (Electronic): 9781538684979
DOIs
State: Published - 2 Jul 2018
Event: 8th International Conference on Virtual Reality and Visualization, ICVRV 2018 - Qingdao, China
Duration: 22 Oct 2018 - 24 Oct 2018

Publication series

Name: Proceedings - 8th International Conference on Virtual Reality and Visualization, ICVRV 2018

Conference

Conference: 8th International Conference on Virtual Reality and Visualization, ICVRV 2018
Country/Territory: China
City: Qingdao
Period: 22/10/18 - 24/10/18

Keywords

  • CNN
  • depth image
  • hand gesture recognition
  • hand pose estimation
