TY - GEN
T1 - Attacking gait recognition systems via silhouette guided GANs
AU - Jia, Meijuan
AU - Huang, Di
AU - Yang, Hongyu
AU - Wang, Yunhong
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/10/15
Y1 - 2019/10/15
N2 - This paper investigates a new attack method against gait recognition systems. Unlike typical spoofing attacks, which require impostors to mimic certain clothing or walking styles, it proposes intercepting the video stream captured by the on-site camera and replacing it with synthesized samples. To this end, we present a novel Generative Adversarial Network (GAN) based approach that renders a fake video from the source walking sequence of a specified subject and a target scene image, with both good visual quality and sufficient discriminative detail. A new generator architecture is built in which the features of the source foreground sequence and the target background image are combined at multiple scales, making the synthesized video vivid. To fool recognition systems, silhouette-conditioned losses are specially designed to constrain the static and dynamic consistency between the subjects in the source and generated videos. A triplet loss based on person re-identification similarity is exploited to guide the generator, keeping the personalized appearance properties stable. Edge- and flow-related losses further regulate the generation of the attacking video. Two state-of-the-art gait recognition systems, GaitSet and CNN-Gait, are used for evaluation, and we analyze their performance under attack. Both the visual fidelity and the attacking ability of the generated videos validate the effectiveness of the proposed method.
AB - This paper investigates a new attack method against gait recognition systems. Unlike typical spoofing attacks, which require impostors to mimic certain clothing or walking styles, it proposes intercepting the video stream captured by the on-site camera and replacing it with synthesized samples. To this end, we present a novel Generative Adversarial Network (GAN) based approach that renders a fake video from the source walking sequence of a specified subject and a target scene image, with both good visual quality and sufficient discriminative detail. A new generator architecture is built in which the features of the source foreground sequence and the target background image are combined at multiple scales, making the synthesized video vivid. To fool recognition systems, silhouette-conditioned losses are specially designed to constrain the static and dynamic consistency between the subjects in the source and generated videos. A triplet loss based on person re-identification similarity is exploited to guide the generator, keeping the personalized appearance properties stable. Edge- and flow-related losses further regulate the generation of the attacking video. Two state-of-the-art gait recognition systems, GaitSet and CNN-Gait, are used for evaluation, and we analyze their performance under attack. Both the visual fidelity and the attacking ability of the generated videos validate the effectiveness of the proposed method.
KW - Gait Recognition
KW - Generative Adversarial Networks
KW - Spoofing Attacks
KW - Video Generation
UR - https://www.scopus.com/pages/publications/85074854976
U2 - 10.1145/3343031.3351018
DO - 10.1145/3343031.3351018
M3 - Conference contribution
AN - SCOPUS:85074854976
T3 - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
SP - 638
EP - 646
BT - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
T2 - 27th ACM International Conference on Multimedia, MM 2019
Y2 - 21 October 2019 through 25 October 2019
ER -