GROUPCOVER: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on TEEs

  • Zheng Zhang
  • Na Wang*
  • Ziqi Zhang
  • Yao Zhang
  • Tianyi Zhang
  • Jianwei Liu
  • Ye Wu

  *Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Due to the high cost of training DNN models, how to protect the intellectual property of DNN models, especially when the models are deployed to users' devices, is becoming an important topic. One practical solution is to use Trusted Execution Environments (TEEs), and researchers have proposed various model obfuscation solutions to make full use of the high security guarantees of TEEs and the high performance of collocated GPUs. In this paper, we first identify a common vulnerability, namely the fragility of randomness, that is shared by existing TEE-based model obfuscation solutions. This vulnerability benefits model-stealing attacks and allows the adversary to recover about 97% of the secret model. To improve the security of TEE-shielded DNN models, we further propose a new model obfuscation approach, GROUPCOVER, which uses sufficient randomization and mutual covering obfuscation to protect model weights. Experimental results demonstrate that GROUPCOVER can achieve a security level comparable to the upper bound (black-box protection), which is over 3× higher than existing solutions. Besides, GROUPCOVER introduces 19% overhead and negligible accuracy loss compared to an unprotected model.
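To illustrate the general TEE-plus-GPU obfuscation pattern the abstract refers to (this is a minimal generic sketch, not the GROUPCOVER algorithm itself; all names, shapes, and the choice of a random mixing matrix are illustrative assumptions), one can deploy a linear layer's weights to the untrusted GPU only after mixing them with a secret random invertible matrix, and let the TEE undo the mixing on the layer's output:

```python
import numpy as np

# Sketch: obfuscate a linear layer's weights W with a secret random
# invertible matrix R. The untrusted GPU only ever sees W_obf = R @ W;
# the TEE holds R^-1 and de-obfuscates the GPU's output.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4

W = rng.standard_normal((d_out, d_in))    # secret model weights
R = rng.standard_normal((d_out, d_out))   # random mixing matrix (invertible w.h.p.)
W_obf = R @ W                             # obfuscated weights shipped to the GPU
R_inv = np.linalg.inv(R)                  # kept inside the TEE

x = rng.standard_normal(d_in)             # inference input
y_gpu = W_obf @ x                         # heavy computation on the untrusted GPU
y = R_inv @ y_gpu                         # cheap de-obfuscation inside the TEE

assert np.allclose(y, W @ x)              # matches the unprotected computation
```

The "fragility of randomness" vulnerability the paper identifies concerns exactly this kind of scheme: if the random transformation is too structured or reused, an adversary observing the obfuscated weights can largely undo it, which motivates GROUPCOVER's stronger randomization and mutual covering.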

Original language: English
Pages (from-to): 59992-60003
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

