Deepfake Video Detection via Facial Action Dependencies Estimation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deepfake video detection has drawn significant attention from researchers due to the security issues induced by deepfake videos. Unfortunately, most existing deepfake detection approaches do not competently model the natural structures and movements of human faces. In this paper, we formulate deepfake video detection as a graph classification task and propose a novel paradigm named Facial Action Dependency Estimation (FADE). We propose a Multi-Dependency Graph Module (MDGM) to capture abundant dependencies among facial action units and extract subtle clues from these dependencies. MDGM can be easily integrated into existing frame-level detection schemes to provide significant performance gains. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods.
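The abstract's core idea can be illustrated with a minimal, hypothetical sketch: treat each facial action unit (AU) as a graph node, aggregate node features under several adjacency matrices (one per hypothesized dependency pattern, loosely mirroring the "multi-dependency" idea of MDGM), and classify the pooled graph embedding as real or fake. All dimensions, weights, and function names below are illustrative assumptions, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AU = 17   # number of facial action units (illustrative choice)
D = 8       # per-AU feature dimension (illustrative)
N_DEP = 3   # number of dependency graphs (illustrative)

# Node features: one feature vector per facial action unit for a single frame
X = rng.standard_normal((N_AU, D))

# Several adjacency matrices, each encoding a different dependency pattern among AUs
adjs = [rng.random((N_AU, N_AU)) for _ in range(N_DEP)]

def normalize(A):
    """Add self-loops and row-normalize so aggregation averages over neighbors."""
    A = A + np.eye(A.shape[0])
    return A / A.sum(axis=1, keepdims=True)

def multi_dependency_embed(X, adjs, W):
    """Aggregate AU features under each dependency graph, then average the results."""
    outs = [normalize(A) @ X @ W for A in adjs]
    H = np.mean(outs, axis=0)
    return np.maximum(H, 0.0)  # ReLU nonlinearity

W = rng.standard_normal((D, D)) * 0.1
H = multi_dependency_embed(X, adjs, W)  # refined per-AU embeddings

# Graph-level readout (mean pooling) + linear classifier -> real/fake score
g = H.mean(axis=0)
w_cls = rng.standard_normal(D)
score = 1.0 / (1.0 + np.exp(-(g @ w_cls)))  # sigmoid probability of "fake"
print(H.shape, float(score))
```

In this toy version the dependency graphs are random; in practice they would be learned or derived from AU co-occurrence statistics, and the per-frame scores would feed into whatever frame-level detection scheme the module is integrated with.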

Original language: English
Title of host publication: AAAI-23 Technical Tracks 4
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Pages: 5276-5284
Number of pages: 9
ISBN (Electronic): 9781577358800
DOIs
State: Published - 27 Jun 2023
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: 7 Feb 2023 - 14 Feb 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume: 37

Conference

Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States
City: Washington
Period: 7/02/23 - 14/02/23
