Enhanced Accuracy and Robustness via Multi-teacher Adversarial Distillation

  • Shiji Zhao
  • Jie Yu
  • Zhenlong Sun
  • Bo Zhang
  • Xingxing Wei*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Adversarial training is an effective approach for improving the robustness of deep neural networks against adversarial attacks. Although adversarial training (AT) brings reliable robustness, it reduces accuracy on clean examples. Moreover, adversarial training yields greater robustness for large models than for small ones. To improve both the robust and clean accuracy of small models, we introduce Multi-Teacher Adversarial Robustness Distillation (MTARD) to guide the adversarial training process of a small model. Specifically, MTARD uses multiple large teacher models, an adversarial teacher and a clean teacher, to guide a small student model during adversarial training via knowledge distillation. In addition, we design a dynamic training algorithm to balance the influence of the adversarial teacher and the clean teacher. A series of experiments demonstrates that MTARD outperforms state-of-the-art adversarial training and distillation methods against various adversarial attacks. Our code is available at https://github.com/zhaoshiji123/MTARD.
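The abstract describes the training procedure only at a high level. The sketch below gives one plausible reading of a single MTARD-style training step in PyTorch: the student is distilled from the adversarial teacher on adversarial examples and from the clean teacher on clean examples. The PGD attack settings, the fixed weight `w_adv` (standing in for the paper's dynamic balancing algorithm), and all helper names are illustrative assumptions, not the authors' implementation; see the official repository for the exact method.

```python
# Illustrative sketch of a multi-teacher adversarial distillation step.
# All hyperparameters, names, and the fixed loss weight are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate adversarial examples with a standard PGD attack (assumed settings)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def kd_loss(student_logits, teacher_logits, T=1.0):
    """Standard knowledge-distillation KL loss between softened distributions."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)

def mtard_step(student, adv_teacher, clean_teacher, x, y, optimizer, w_adv=0.5):
    """One training step: the adversarial teacher guides the student on
    adversarial inputs, the clean teacher on clean inputs. The scalar w_adv
    is a placeholder for the paper's dynamic balancing of the two teachers."""
    x_adv = pgd_attack(student, x, y)
    with torch.no_grad():
        t_adv = adv_teacher(x_adv)     # robust teacher's targets on adversarial inputs
        t_clean = clean_teacher(x)     # clean teacher's targets on natural inputs
    s_adv, s_clean = student(x_adv), student(x)
    loss = w_adv * kd_loss(s_adv, t_adv) + (1 - w_adv) * kd_loss(s_clean, t_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```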

Original language: English
Title of host publication: Computer Vision – ECCV 2022 - 17th European Conference, Proceedings
Editors: Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, Tal Hassner
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 585-602
Number of pages: 18
ISBN (Print): 9783031197710
DOIs
State: Published - 2022
Event: 17th European Conference on Computer Vision, ECCV 2022 - Tel Aviv, Israel
Duration: 23 Oct 2022 – 27 Oct 2022

Publication series

Name: Lecture Notes in Computer Science
Volume: 13664 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th European Conference on Computer Vision, ECCV 2022
Country/Territory: Israel
City: Tel Aviv
Period: 23/10/22 – 27/10/22

Keywords

  • Adversarial training
  • DNNs
  • Knowledge distillation
