Dec 22, 2016
Human-Machine Arbitration in Hybrid Driving Systems
About the Project
At a glance
A number of companies have announced that they will have highly or fully automated vehicles ready for the market by 2020. While select companies may indeed offer driverless cars (Level 5 per SAE J3016), most market-ready products in the coming decade will likely be partially automated cars (Levels 2-4 per SAE J3016). We refer to these automated systems, in which machine and driver share driving tasks, as hybrid driving systems (HDS).
The implementation of HDS faces problems that are equally challenging, if not greater, than those of fully automated systems, albeit somewhat different in nature. The challenges are particularly acute when a human operator is in the decision-making loop and during transitions between automated and manual modes. For example, serious risks may arise when control is handed back to a human under various unknowns and uncertainties, and concerns have been raised as to whether HDS can be safely deployed in the near future. For hybrid driving systems to function properly, human and machine inputs must be arbitrated to achieve safe and effective interaction with the surrounding environment. The primary objective of this project is to explore the potential of deep-learning methods for human-machine arbitration, with an emphasis on decision-making and control strategies for HDS.
Oct 17, 2016
Project Update: Human-Machine Arbitration in Hybrid Driving Systems
BAIR/CPAR/BDD Internal Weekly Seminar
The Berkeley Artificial Intelligence Research Lab co-hosts a weekly internal seminar series with the CITRIS People and Robots Initiative and the Berkeley Deep Drive Consortium. Seminars are held every Friday from 3:10-4:10 PM in room 250 Sutardja Dai Hall and are open to BAIR/BDD faculty, students, and sponsors. Seminars are webcast live, and recorded talks are made available online afterward.