Secure and Privacy-Preserving Deep Learning


At a glance

Deep learning with neural networks has become a highly popular machine learning method due to recent breakthroughs in computer vision, speech recognition, and other areas. These successes are a direct result of the ability to train on large-scale data sets, from labeled photographs for object recognition to parallel texts for machine translation. While such data has so far come largely from public sources, private data aggregated from individuals would not only boost existing applications but also enable new ones. The increasing prevalence of autonomous vehicles likewise presents new opportunities for collecting and learning from enormous amounts of unstructured data. However, some types of private data, such as e-mails and vehicle location records, are particularly sensitive. To convince individuals to allow deep learning on such data, strong security and privacy guarantees must be provided.

Protecting privacy requires both preventing leakage of the training data and ensuring that the final model does not reveal private information. Given the opaque nature of deep neural networks, preventing any and all leaks is a significant challenge; existing systems developed for deep learning, such as Caffe, Torch, Theano, and TensorFlow, were not designed with security in mind.

This project will investigate a novel combination of techniques enabling secure, privacy-preserving deep learning. The team’s approach employs trusted hardware to provide end-to-end security for data collection, and uses differentially private deep learning algorithms to provide guaranteed privacy for individuals. The combination provides strong guarantees of both security and privacy: first, the original training data will not be revealed to any party; second, the results of deep learning tasks will be differentially private and will not reveal new information about any individual in the original training data. The combination also enables a high-performance solution: unlike purely software-based approaches such as secure multiparty computation (SMC), trusted hardware guarantees security while running at full speed. By guaranteeing both security and privacy for individuals, this solution will enable the collection of enormous amounts of new data for deep learning.
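The differentially private training step can be sketched as follows. This is a minimal NumPy illustration of the standard recipe for differentially private stochastic gradient descent (per-example gradient clipping followed by calibrated Gaussian noise), not the project’s actual code; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One differentially private SGD step: clip each example's
    gradient to at most `clip_norm`, average the clipped gradients,
    then add Gaussian noise scaled to the clipping norm so that no
    single example can dominate the update."""
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)

# Illustrative use with two per-example gradients
params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.0, 0.0, 2.0])]
new_params = dp_sgd_step(params, grads)
```

Clipping bounds each individual’s influence on the update, and the added noise masks whatever influence remains; tracking the noise scale across training steps is what yields the formal differential privacy guarantee.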
Principal investigators: Dawn Song
Researchers: Joe Near, Richard Shin
Themes: Deep Learning, Data Security


Project Update for Secure and Privacy-Preserving Deep Learning

BAIR/CPAR/BDD Internal Weekly Seminar

Event Location: 250 Sutardja Dai Hall

The Berkeley Artificial Intelligence Research (BAIR) Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive (BDD) Consortium. Seminars are held every Friday from 3:10 to 4:10 PM in room 250 Sutardja Dai Hall and are open to BAIR/BDD faculty, students, and sponsors. Seminars are webcast live, and recorded talks are made available online afterward.