Dec 27, 2016
Secure and Privacy-Preserving Deep Learning
ABOUT THE PROJECT
At a glance
Protecting privacy requires both preventing leakage of the training data and ensuring that the final model does not reveal private information. Given the opaque nature of deep neural networks, preventing any and all leaks is a significant challenge; existing systems developed for deep learning, such as Caffe, Torch, Theano, and TensorFlow, were not designed with security in mind.
This project will investigate a novel combination of techniques enabling secure, privacy-preserving deep learning. The team’s approach employs trusted hardware to provide end-to-end security for data collection, and uses differentially private deep learning algorithms to provide guaranteed privacy for individuals. The combination provides strong guarantees of both security and privacy: first, the original training data will not be revealed to any party, and second, the results of deep learning tasks will be differentially private and will not reveal new information about any individual in the original training data. The combination also enables a high-performance solution: unlike software-based approaches such as secure multiparty computation (SMC), trusted hardware guarantees security while running at full speed. By guaranteeing security and privacy for individuals, this solution will enable the collection of enormous amounts of new data for deep learning.
Dawn Song, Joe Near
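To make the differential-privacy half of the approach concrete, below is a minimal NumPy sketch of the gradient-perturbation recipe commonly used for differentially private deep learning: clip each example's gradient, then add calibrated Gaussian noise, in the style of DP-SGD (Abadi et al., "Deep Learning with Differential Privacy"). The function name and hyperparameter values are illustrative assumptions, not details taken from this project.

```python
# Minimal sketch of one differentially private gradient step (DP-SGD style).
# Hyperparameters (lr, clip_norm, noise_mult) are illustrative placeholders.
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One noisy gradient step.

    per_example_grads has shape (batch_size, n_params): one gradient per
    training example, so each individual's influence can be bounded.
    """
    batch_size = per_example_grads.shape[0]

    # 1. Clip each example's gradient to L2 norm <= clip_norm, bounding the
    #    sensitivity of the gradient sum to any single individual.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # 2. Add Gaussian noise scaled to the clipping bound; the noise
    #    multiplier determines the (epsilon, delta) privacy guarantee.
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=weights.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch_size

    # 3. Ordinary SGD update on the privatized gradient.
    return weights - lr * noisy_mean_grad

# Toy usage: a linear model with 3 parameters and a batch of 4 examples.
w = np.zeros(3)
grads = np.random.randn(4, 3)
w = dp_sgd_step(w, grads)
```

Because each example's contribution is clipped before noise is added, the published model update reveals a strictly bounded amount about any one individual, which is the property the project relies on for its privacy guarantee.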
BAIR/CPAR/BDD Internal Weekly Seminar
The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. Seminars take place every Friday from 3:10 to 4:10 PM in room 250 Sutardja Dai Hall and are open to BAIR/BDD faculty, students, and sponsors. Seminars will be webcast live, and recorded talks will be available online after each seminar.