Abstract
Deep learning has shown outstanding performance in various fields and is increasingly deployed in privacy-critical domains. If the sensitive data used to train a deep learning model are exposed, serious privacy threats can result. To protect individual privacy, we propose a novel activation function and a stochastic gradient descent method for applying differential privacy to deep learning. Experiments show that the proposed method effectively protects privacy and outperforms previous approaches.
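The abstract does not spell out the mechanism, so the following is only a minimal sketch of the standard differentially private SGD recipe (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al.'s DP-SGD), with `tanh` standing in for the paper's proposed activation function. The names `dp_sgd_step`, `clip_norm`, and `noise_multiplier` are illustrative assumptions, not the paper's own API.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient, sum, add Gaussian noise.

    Hypothetical sketch; tanh stands in for the paper's bounded activation.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    clipped = np.zeros((n, w.size))
    for i in range(n):
        z = X[i] @ w
        a = np.tanh(z)                        # bounded activation (stand-in)
        g = (a - y[i]) * (1.0 - a**2) * X[i]  # grad of 0.5*(a - y)^2 w.r.t. w
        norm = np.linalg.norm(g)
        clipped[i] = g * min(1.0, clip_norm / max(norm, 1e-12))  # L2-clip to C
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.size)
    return w - lr * (clipped.sum(axis=0) + noise) / n

# Toy usage on random data
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))
y = rng.choice([-1.0, 1.0], size=32)
w = np.zeros(5)
for _ in range(20):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The intuition for pairing a bounded activation with DP-SGD (known from prior work on tempered activations) is that bounded activations keep per-example gradient norms small, so clipping to C discards less signal and the added noise dominates less of the update.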
Original language | English
---|---
Pages (from-to) | 905-908
Number of pages | 4
Journal | IEICE Transactions on Information and Systems
Volume | 104
Issue number | 6
DOIs | 10.1587/transinf.2021EDL8007
Publication status | Published - 2021
Bibliographical note
Funding Information: Manuscript received January 25, 2021; revised March 9, 2021; publicized March 18, 2021. †The authors are with the Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea. ∗This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00269, A research on safe and convenient big data processing methods). This research was also supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (No. NRF-2019H1D8A2105513). a) E-mail: [email protected] (Corresponding author). DOI: 10.1587/transinf.2021EDL8007
Publisher Copyright:
Copyright © 2021 The Institute of Electronics, Information and Communication Engineers
Keywords
- Activation function
- Deep learning
- Differential privacy
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering
- Artificial Intelligence