Abstract
Deep learning techniques based on neural networks have achieved significant results in many fields of artificial intelligence. However, model training requires large-scale datasets; because these datasets are often crowd-sourced, the model parameters can encode private information, creating a risk of privacy leakage. With the trend toward sharing pretrained models, the risk of recovering training data through membership inference attacks and model inversion attacks is further heightened. To tackle privacy-preserving problems in deep learning tasks, we propose an improved Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm that uses a Simulated Annealing algorithm and a Laplace-smoothing denoising mechanism to optimize the allocation of the privacy loss, and that replaces constant gradient clipping with an adaptive clipping method to improve model accuracy. We also analyze in detail the privacy cost of the random-shuffle batching method within the framework of Subsampled Rényi Differential Privacy. Compared with existing privacy-preserving training methods using fixed or dynamic privacy parameters on classification tasks, our implementation and experiments show that we can train deep neural networks with nonconvex objective functions using a smaller privacy budget, obtain higher model evaluation scores, and incur almost zero additional cost in model complexity, training efficiency, and model quality.
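The core mechanism the abstract refers to — per-example gradient clipping followed by calibrated noise addition — can be sketched roughly as below. The quantile-based adaptive clipping rule, the function name, and all parameter values here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_quantile=0.5,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD step with an adaptive clipping threshold.

    Instead of a fixed clipping norm C, the threshold is set to a
    quantile of the current batch's per-example gradient norms
    (an illustrative adaptive rule, standing in for the paper's method).
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_example_grads, axis=1)
    clip_norm = np.quantile(norms, clip_quantile)  # adaptive C
    # Scale each per-example gradient so its L2 norm is at most clip_norm.
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale[:, None]
    # Sum the clipped gradients and add Gaussian noise whose standard
    # deviation is calibrated to the clipping norm (Gaussian mechanism).
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=weights.shape)
    grad_estimate = noisy_sum / len(per_example_grads)
    return weights - lr * grad_estimate

# Usage: a batch of 8 examples, a model with 3 parameters.
rng = np.random.default_rng(0)
w = np.zeros(3)
g = rng.normal(size=(8, 3))
w_new = dp_sgd_step(w, g, rng=rng)
```

Because the clipping threshold adapts to the observed gradient norms, the noise scale shrinks as training converges, which is the intuition behind replacing a constant clipping norm.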
| Original language | English |
|---|---|
| Pages (from-to) | 10694-10714 |
| Number of pages | 21 |
| Journal | International Journal of Intelligent Systems |
| Volume | 37 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2022 |
Keywords
- deep learning
- differential privacy
- nonconvex optimization
- simulated annealing
- stochastic gradient descent
Title: An improved stochastic gradient descent algorithm based on Rényi differential privacy