Exploring a rich spatial–temporal dependent relational model for skeleton-based action recognition by bidirectional LSTM-CNN

  • Aichun Zhu*
  • Qianyu Wu
  • Ran Cui
  • Tian Wang
  • Wenlong Hang
  • Gang Hua
  • Hichem Snoussi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the rapid development of effective and low-cost human skeleton capture systems, skeleton-based action recognition has recently attracted much attention. Most existing methods using Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) have achieved promising performance for skeleton-based action recognition. However, these approaches are limited in their ability to explore rich spatial–temporal relational information. In this paper, we propose a new spatial–temporal model with an end-to-end bidirectional LSTM-CNN (BiLSTM-CNN). First, a hierarchical spatial–temporal dependent relational model is used to explore rich spatial–temporal information in the skeleton data. Then a new framework is proposed to fuse CNN and LSTM. In this framework, the skeleton data are structured by the dependent relational model and serve as the input of the proposed network. An LSTM is then used to extract temporal features, followed by a standard CNN that explores spatial information from the LSTM output. Finally, experimental results demonstrate the effectiveness of the proposed model on the NTU RGB+D, SBU Interaction and UTD-MHAD datasets.
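The abstract describes a pipeline in which a bidirectional LSTM first extracts temporal features from the relationally structured skeleton sequence, and a standard CNN then mines spatial structure from the LSTM's output. The following PyTorch snippet is a minimal sketch of that BiLSTM-then-CNN ordering only; all layer sizes, joint counts, and class counts are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    """Hedged sketch of a BiLSTM-CNN ordering: temporal features first,
    then a CNN over the LSTM output. Dimensions are assumptions
    (e.g. 25 joints x 3 coords as in NTU RGB+D, 60 action classes)."""

    def __init__(self, num_joints=25, coords=3, hidden=64, num_classes=60):
        super().__init__()
        # Bidirectional LSTM over the frame sequence (temporal modeling).
        self.bilstm = nn.LSTM(input_size=num_joints * coords,
                              hidden_size=hidden,
                              batch_first=True,
                              bidirectional=True)
        # CNN treats the (frames x 2*hidden) LSTM output as a 2D map.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        # x: (batch, frames, num_joints * coords)
        h, _ = self.bilstm(x)       # (batch, frames, 2 * hidden)
        h = h.unsqueeze(1)          # add a channel dim for Conv2d
        f = self.cnn(h).flatten(1)  # spatial features from LSTM output
        return self.fc(f)           # class scores

model = BiLSTMCNN()
scores = model(torch.randn(2, 30, 75))  # 2 clips, 30 frames each
print(scores.shape)                     # torch.Size([2, 60])
```

Note that the actual model additionally builds its input with the hierarchical spatial–temporal dependent relational model described in the paper; this sketch only illustrates the network-fusion order.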

Original language: English
Pages (from-to): 90-100
Number of pages: 11
Journal: Neurocomputing
Volume: 414
DOIs
State: Published - 13 Nov 2020

Keywords

  • Action recognition
  • Dependent relational model
  • Spatial–temporal information
