Dec 22, 2016
FPGA PRET Accelerators of Deep Learning Classifiers for Autonomous Vehicles
About This Project
At a glance
In recent work, the team focused on PRET challenges in numerical computation for high-performance control system design. A highly parameterized processor template based on a Very Long Instruction Word (VLIW) architecture was developed, along with a set of highly transparent programming tools. The architecture is characterized by complete predictability and repeatability of computation timing, which facilitates the design and verification of control systems. The template-based generator approach allows researchers to quickly and easily specialize a processor instance to the needs of the computation being accelerated: the scalar type, the number of parallel units, and the memories can all be changed at any point during design. Combined with the transparent programming tools, this lets the team rapidly explore latency versus energy-efficiency tradeoffs and quickly configure processor instances.
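The template-based generator idea can be sketched as a small design-space enumeration: each candidate processor instance is one combination of template parameters. The sketch below is purely illustrative; the class and parameter names (`ProcessorConfig`, `scalar_type`, `num_lanes`, `mem_kib`) are hypothetical and do not reflect the team's actual generator API.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ProcessorConfig:
    """One point in the template's design space (hypothetical parameters)."""
    scalar_type: str  # e.g. a fixed-point width; assumed naming
    num_lanes: int    # number of parallel VLIW functional units
    mem_kib: int      # local memory size in KiB

def enumerate_configs(scalar_types, lane_counts, mem_sizes):
    """Enumerate candidate instances for a latency/energy sweep."""
    return [ProcessorConfig(s, n, m)
            for s, n, m in product(scalar_types, lane_counts, mem_sizes)]

# Example sweep: 2 scalar types x 3 lane counts x 2 memory sizes
configs = enumerate_configs(["fix16", "fix32"], [2, 4, 8], [32, 64])
print(len(configs))  # 12 candidate instances
```

Each enumerated configuration would then be handed to the generator to produce a concrete FPGA instance, and the transparent programming tools would report its timing so the tradeoff space can be mapped out.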
For this project, the team aims to create deep learning classifier implementations on embedded hardware (FPGAs) by interfacing their generator tools directly to Caffe.
Ranko Sredojevic (Deep Learning)
BAIR/CPAR/BDD Internal Weekly Seminar
The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. The seminars are every Friday afternoon in room 250 Sutardja Dai Hall from 3:10-4:10 PM, and are open to BAIR/BDD faculty, students, and sponsors. Seminars will be webcast live and recorded talks will be available online following the seminar.