Dec 22, 2016
Real-Time Perception/Prediction of Traffic Scene with Deep Learning for Autonomous Driving
At a glance
Commercial products that fuse a video camera with Doppler radar are currently available for real-time detection, tracking, and state estimation of front targets. In principle, such a system can track multiple targets, including vehicles, and provide their distance, speed, and acceleration with respect to the subject vehicle. In practice, however, only detection and tracking of the immediate front target is currently reliable; detection and tracking of targets in adjacent lanes still needs improvement in accuracy and reliability.
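To make the kind of output such a fusion system produces concrete, here is a minimal sketch (not from the project) of one common approach: a Kalman filter with a constant-acceleration motion model that sequentially fuses a radar measurement (distance and range rate) with a noisier camera-derived distance for a single front target. The update rate and all noise values are illustrative assumptions.

```python
# Minimal sketch: Kalman-filter fusion of radar and camera for a lead
# vehicle's longitudinal state relative to the subject vehicle.
# State x = [distance, relative speed, relative acceleration].
# All noise covariances and the update rate are illustrative assumptions.
import numpy as np

DT = 0.05  # assumed 20 Hz fusion cycle

# Constant-acceleration motion model.
F = np.array([[1.0, DT, 0.5 * DT**2],
              [0.0, 1.0, DT],
              [0.0, 0.0, 1.0]])
Q = np.diag([0.05, 0.1, 0.5])           # process noise (illustrative)

H_RADAR = np.array([[1.0, 0.0, 0.0],    # radar measures distance + range rate
                    [0.0, 1.0, 0.0]])
R_RADAR = np.diag([0.25, 0.1])

H_CAM = np.array([[1.0, 0.0, 0.0]])     # camera: distance only, noisier
R_CAM = np.array([[4.0]])

def predict(x, P):
    """Propagate state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Standard Kalman measurement update for one sensor."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One fusion cycle: predict, then apply each sensor's measurement in turn.
x = np.array([30.0, -1.0, 0.0])          # 30 m ahead, closing at 1 m/s
P = np.eye(3)
x, P = predict(x, P)
x, P = update(x, P, np.array([29.8, -1.1]), H_RADAR, R_RADAR)
x, P = update(x, P, np.array([31.0]), H_CAM, R_CAM)
print(f"distance={x[0]:.1f} m, rel speed={x[1]:.2f} m/s, rel accel={x[2]:.2f} m/s^2")
```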
In this project, the team will assume that real-time traffic scene perception from a video camera, referenced to the subject vehicle, is already available. Building on this, they will investigate how to improve sensor fusion by exploiting more of the video image data for real-time target detection, tracking, and prediction, and for traffic scene prediction, capabilities critical for autonomous vehicles operating on public roadways. The first-year objectives are to: collect freeway traffic scene data with space- and time-synchronized video camera and radar/lidar; fuse the video data with the radar/lidar data for real-time traffic scene perception; and predict the intentions of nearby vehicles with respect to the subject vehicle using an existing learning algorithm, such as stochastic gradient descent with offline learning.
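As a rough illustration of what the offline-learning step could look like, the sketch below trains a logistic-regression classifier with stochastic gradient descent to predict whether a tracked neighbor vehicle intends to cut into the subject vehicle's lane. The feature set (lateral offset, lateral speed, heading deviation) and the synthetic training data are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch: offline intent prediction via stochastic gradient
# descent. Logistic regression classifies whether a tracked neighbor
# vehicle intends to cut into the subject vehicle's lane. The features
# and synthetic data below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    """Synthetic tracks: cutting-in vehicles drift laterally toward us."""
    cut_in = rng.integers(0, 2, n)                            # label: 1 = cut-in
    lateral_offset = rng.normal(3.5 - 1.5 * cut_in, 0.5, n)   # m from our lane
    lateral_speed = rng.normal(0.8 * cut_in, 0.2, n)          # m/s toward us
    heading_dev = rng.normal(0.05 * cut_in, 0.02, n)          # rad off lane axis
    X = np.column_stack([lateral_offset, lateral_speed, heading_dev])
    return X, cut_in.astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Offline SGD training loop over logged (simulated) tracks.
w, b, lr = np.zeros(3), 0.0, 0.1
X_train, y_train = make_batch(5000)
for epoch in range(20):
    for i in rng.permutation(len(y_train)):
        p = sigmoid(X_train[i] @ w + b)
        g = p - y_train[i]                 # gradient of the log loss
        w -= lr * g * X_train[i]
        b -= lr * g

# Evaluate on held-out synthetic tracks.
X_test, y_test = make_batch(1000)
pred = sigmoid(X_test @ w + b) > 0.5
print(f"held-out accuracy: {np.mean(pred == y_test):.2%}")
```

In a real system, the hand-crafted features here would come from the fused camera/radar/lidar tracks, and a deep model trained offline on logged data could replace the linear classifier while keeping the same SGD training loop.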
BAIR/CPAR/BDD Internal Weekly Seminar
The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. The seminars are held every Friday from 3:10 to 4:10 PM in room 250 Sutardja Dai Hall and are open to BAIR/BDD faculty, students, and sponsors. Seminars will be webcast live, and recorded talks will be available online following each seminar.