Low Latency Deep Inference for Self-Driving Vehicles


At a glance

Deep Neural Networks (DNNs) have revolutionized computer vision and speech recognition, and have recently proved themselves in reinforcement learning and policy optimization for robot and vehicle control. DNNs deliver great accuracy, but they are also relatively expensive for both training and evaluation: training a state-of-the-art CNN image recognizer or RNN semantic model takes several days on a single GPU machine, and hours on a mid-sized compute cluster. For driving applications, low latency is at least as important as accuracy. Human reflexes work in tenths of a second, and vehicles and driving infrastructure have been developed around this time scale; DNNs must complete perception-action cycles with similar latencies. The focus of this work is to improve DNN inference latency and to develop software that executes perception-action cycles within a few tenths of a second, delivering this performance on state-of-the-art embedded computing hardware.

The team will approach this on two fronts:
- Embedded Kernel Optimization: Most deep learning toolkits have been developed to minimize training time. They use minibatch sizes of several hundred inputs, and their performance depends heavily on the parallelism gained by processing data in batches. By contrast, the team will develop kernels and algorithms optimized for single inputs and minimum latency.
- End-to-End Kernel Development: The team’s past work on an NVIDIA GPU demonstrated an order-of-magnitude speedup over Google’s reference CPU implementation. While such speedups are not unusual for GPUs, the team’s implementation used several new techniques and substantially improved on the throughput (to 200 Gflops) demonstrated to date for inference on a sparse, power-law dataset. This was achieved through end-to-end design, performing the entire forward-backward model update pipeline with one “touch” of the input data. The calculation was also moved into register memory, leveraging the large register files on current GPUs, and finally the team developed a complementary algorithmic technique (negative sample sharing) to reduce memory bandwidth. While the goal in that project was to minimize training time, end-to-end design is also a key pattern for minimum latency.
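The batching trade-off behind the first bullet can be sketched as follows. This is a minimal NumPy stand-in for a GPU kernel, not the team's implementation; the layer size and batch size are illustrative assumptions:

```python
import time
import numpy as np

# Illustrative fully connected layer (4096 -> 4096); sizes are assumptions.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32)

def forward(x):
    # One dense layer with a ReLU activation.
    return np.maximum(W @ x, 0.0)

batch = rng.standard_normal((4096, 256)).astype(np.float32)   # 256-input minibatch
single = batch[:, :1]                                         # one input alone

t0 = time.perf_counter(); forward(batch);  t1 = time.perf_counter()
forward(single); t2 = time.perf_counter()

amortized_per_input = (t1 - t0) / 256   # cost per input when batched
single_latency      = t2 - t1           # latency a driving stack actually sees

# Batching raises throughput by amortizing fixed costs across inputs,
# but a single input still pays the full per-call cost -- which is why
# batch-1 kernels need their own optimization.
```

The point of the sketch is that toolkits tuned for the first number (amortized throughput) can still be slow on the second (single-input latency), which is the quantity that matters for a perception-action loop.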
John Canny
Deep Neural Nets; Kernel Optimization and Development


BAIR/CPAR/BDD Internal Weekly Seminar

Event Location: 250 Sutardja Dai Hall

The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. The seminars are every Friday afternoon in room 250 Sutardja Dai Hall from 3:10-4:10 PM, and are open to BAIR/BDD faculty, students, and sponsors. Seminars will be webcast live and recorded talks will be available online following the seminar.