Learning to Drive Under Unstructured Conditions

(A) Self-driving model car, first version. (B) View from the car's camera as its trained deep network negotiates a gentle curve in the type of environment in which training data was originally obtained. (C) In a novel environment (a large open space) with an unfamiliar obstacle (another model car), the self-driving model car performs avoidance behavior to prevent collision. An advantage of using model cars is that 'dangerous' situations such as this can be explored at low cost.

ABOUT THE PROJECT

At a glance

Safe driving requires correct decision-making in the face of the unexpected. A system trained to follow the 'rules of the road' may mimic human driving behavior under some conditions; under unusual conditions that pose safety challenges, however, a human driver can fall back on core evolved behavior for locomotion on an arbitrary ground plane. To advance the development of robust and safe self-driving vehicles, this project focuses on driving in unstructured conditions, learning basic principles of locomotion and navigation that can then be fine-tuned for road driving. The team will use self-driving model cars (SDMCs), which retain the complexities and constraints of the physical world while avoiding the obvious dangers and costs of trial-and-error learning in real automobiles. The goal of this project is to run a set of SDMCs driving simultaneously in an obstacle-filled arena in order to generate a repository of data rich in the unstructured and 'unsafe' conditions lacking in existing driving datasets. These data will be a valuable contribution to the study of collision-free navigation of robots in complex environments, and will provide the basis for the team's research on neural network representations for ground locomotion and following behavior.

Different methods for training the SDMCs will be used. The first is the classic approach of mapping directly from camera output to steering angle and accelerator level, which will serve as a baseline for comparison. The second is a multi-stage approach: one deep network maps from the camera output to a potential function, and a second (pre-trained) network maps from the potential function to steering angle and speed. The potential function is a proven technique for finding goal-directed, obstacle-avoiding trajectories: within the potential function, goals are represented as dips, obstacles as hills, and the preferred behavior is found by descending the gradient.
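The dips-and-hills idea can be made concrete with a minimal sketch. This is not the project's implementation — the function names, gain constants, and the numerical-gradient shortcut below are illustrative assumptions — but it shows the core mechanism: a quadratic attractive dip at the goal, inverse-square repulsive hills at obstacles, and a trajectory obtained by stepping down the gradient.

```python
import numpy as np

def potential(pos, goal, obstacles, k_goal=1.0, k_obs=1.0):
    """Attractive quadratic dip at the goal plus a repulsive hill at each obstacle.
    k_goal and k_obs are illustrative gain constants."""
    U = k_goal * np.sum((pos - goal) ** 2)
    for obs in obstacles:
        d2 = np.sum((pos - obs) ** 2)
        U += k_obs / (d2 + 1e-6)  # hill grows steeply as the car nears an obstacle
    return U

def gradient(pos, goal, obstacles, eps=1e-4):
    """Numerical gradient of the potential via central finite differences."""
    g = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = eps
        g[i] = (potential(pos + dp, goal, obstacles)
                - potential(pos - dp, goal, obstacles)) / (2 * eps)
    return g

def plan(start, goal, obstacles, step=0.05, iters=500):
    """Follow the negative gradient: the path slides into the goal's dip
    while the obstacle hills push it onto a collision-free detour."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(iters):
        g = gradient(pos, goal, obstacles)
        pos -= step * g / (np.linalg.norm(g) + 1e-9)  # unit step downhill
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < 0.1:  # close enough to the goal
            break
    return np.array(path)

# Example: drive from the origin to (5, 0) around an obstacle near the straight line.
path = plan((0.0, 0.0), np.array([5.0, 0.0]), [np.array([2.5, 0.1])])
```

In the full system, the first deep network would produce such a potential landscape from camera input, and the second network would play the role of `plan`, converting the landscape into steering and speed commands.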

 

Principal Investigators

Bruno Olshausen
Karl Zipser

Themes

Autonomous Vehicles
Machine Learning

BAIR/CPAR/BDD Internal Weekly Seminar

Event Location: 
250 Sutardja Dai Hall

The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. The seminars are held every Friday from 3:10 to 4:10 PM in room 250 Sutardja Dai Hall and are open to BAIR/BDD faculty, students, and sponsors. Seminars will be webcast live, and recorded talks will be available online following each seminar.

Schedule:

http://citris-uc.org/bair-seminar-series/

Annual Fall Meeting

Event Location: 
International House, 2299 Piedmont Ave, Berkeley, CA 94720
Parking: Stadium Parking Garage, 2175 Gayley Road, Berkeley, CA 94720

Schedule:

8:00am - 9:00am: Coffee and breakfast available
9:00am - 11:30am: Presentations
11:30am - 1:00pm: Lunch and poster session
1:00pm - 5:00pm: Presentations
5:00pm - 7:00pm: Reception - University Club, Memorial Stadium

Please RSVP by October 5, 2016