
LPQ based static and dynamic modeling of facial expressions in 3D videos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Automatic Facial Expression Recognition (FER) is one of the most active topics in computer vision and pattern recognition. In this paper, we focus on discrete facial expression recognition using 4D data (i.e. 3D range-image sequences) and present a novel method to address this problem. The Local Phase Quantisation from Three Orthogonal Planes (LPQ-TOP) descriptor is applied to extract both the static and dynamic cues conveyed by facial expressions. On the one hand, it locally captures the shape attributes of each 3D face model (facial range image); on the other hand, it detects latent temporal information and represents the dynamic changes occurring in facial muscle actions. An SVM classifier is then used to predict the expression type. Experiments are carried out on the BU-4DFE database, and the achieved results demonstrate the effectiveness of the proposed method.
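To make the pipeline concrete, the following is a minimal illustrative sketch, not the authors' implementation, of LPQ-style feature extraction over three orthogonal planes followed by SVM classification. It assumes NumPy, SciPy and scikit-learn; the function names (lpq_histogram, lpq_top), the window size of 7, and the use of a single central slice per plane are simplifying assumptions, since full LPQ-TOP aggregates codes over all slices of each plane.

import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVC

def lpq_histogram(img, win_size=7):
    """256-bin LPQ histogram of a 2D image (basic STFT variant of LPQ)."""
    img = np.asarray(img, dtype=np.float64)
    r = (win_size - 1) // 2
    x = np.arange(-r, r + 1)
    a = 1.0 / win_size                       # lowest non-zero frequency
    w0 = np.ones_like(x, dtype=complex)      # DC component along one axis
    w1 = np.exp(-2j * np.pi * a * x)         # frequency +a along one axis
    w2 = np.conj(w1)                         # frequency -a along one axis
    rows, cols = (lambda w: w.reshape(-1, 1)), (lambda w: w.reshape(1, -1))
    conv = lambda k: convolve2d(img, k, mode='valid')
    # STFT coefficients at the four low frequencies (a,0), (0,a), (a,a), (a,-a)
    F = [conv(rows(w0) * cols(w1)), conv(rows(w1) * cols(w0)),
         conv(rows(w1) * cols(w1)), conv(rows(w1) * cols(w2))]
    # Quantise the signs of the real/imaginary parts into an 8-bit code
    code = np.zeros(F[0].shape, dtype=np.int32)
    for i, f in enumerate(F):
        code += (f.real >= 0).astype(np.int32) << (2 * i)
        code += (f.imag >= 0).astype(np.int32) << (2 * i + 1)
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / (hist.sum() + 1e-12)       # normalised histogram

def lpq_top(volume, win_size=7):
    """Concatenate LPQ histograms from the XY, XT and YT planes of a
    (T, H, W) range-image sequence. Only the central slice of each plane
    is used here, a simplification of full LPQ-TOP."""
    T, H, W = volume.shape
    xy = lpq_histogram(volume[T // 2], win_size)        # spatial appearance
    xt = lpq_histogram(volume[:, H // 2, :], win_size)  # horizontal motion
    yt = lpq_histogram(volume[:, :, W // 2], win_size)  # vertical motion
    return np.concatenate([xy, xt, yt])                 # 768-dim descriptor

# Toy usage with random volumes standing in for BU-4DFE range-image clips.
rng = np.random.default_rng(0)
X = np.stack([lpq_top(rng.random((20, 64, 64))) for _ in range(12)])
y = rng.integers(0, 6, size=12)              # six prototypic expressions
clf = SVC(kernel='linear').fit(X, y)
print(clf.predict(X[:3]))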

Original language: English
Title of host publication: Biometric Recognition - 8th Chinese Conference, CCBR 2013, Proceedings
Pages: 122-129
Number of pages: 8
DOIs
State: Published - 2013
Event: 8th Chinese Conference on Biometric Recognition, CCBR 2013 - Jinan, China
Duration: 16 Nov 2013 → 17 Nov 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8232 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 8th Chinese Conference on Biometric Recognition, CCBR 2013
Country/Territory: China
City: Jinan
Period: 16/11/13 → 17/11/13

Keywords

  • 4D facial expression recognition
  • LPQ-TOP
