Verifiable Control for (Semi)Autonomous Cars that Learns from Human (Re)Actions

Left image: the car merges ahead of a human driver, anticipating that the human will brake. Right image: the car backs up at a four-way stop, anticipating that the human will proceed.

ABOUT THE PROJECT

At a glance

Currently, autonomous cars are overly defensive, cautiously braking whenever another car starts to go at an intersection or begins to merge into their lane. Being defensive, however, does not necessarily mean being safe; sometimes defensiveness itself can lead to accidents. Systematic testing and verification techniques often fail to catch these risks because they rely on assumptions about the behavior of other cars that are not always realistic and do not capture the full range of human driving behavior. In particular, a human driver's actions do not happen in a vacuum: they are influenced by the actions of the autonomous car.
The goal of this project is two-fold: to make autonomous cars both more aggressive and safer. These goals may seem contradictory at first, but the team will show that more sophisticated models of other (human) drivers enable both. The foundation of the team's research is that a human driver's actions depend on the autonomous car's actions. The proposed work will learn such models (Challenge 1), develop controllers that leverage them and purposefully take actions that trigger human reactions, such as causing a human driver to accelerate or brake (Challenge 2), and contribute verification tools for these controllers (Challenge 3).
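As a rough illustration of the Challenge 2 idea, the sketch below is a hypothetical, highly simplified one-dimensional merging scenario (all function names, rewards, and parameters are assumptions for illustration, not the project's actual models or code). The autonomous car picks an acceleration by anticipating the human driver's best response to it, where the human is assumed to trade off keeping speed against maintaining a safe gap.

import numpy as np

# Hypothetical 1-D merging scenario: the robot (autonomous car) picks an
# acceleration; the human behind it is assumed to respond with the
# acceleration that maximizes a simple reward (keep speed, keep a safe gap).
# The robot then chooses the action whose anticipated human response
# leaves it best off. All rewards and parameters are illustrative.

DT = 0.5          # planning time step [s]
SAFE_GAP = 8.0    # desired bumper-to-bumper gap [m]
ACTIONS = np.linspace(-3.0, 3.0, 13)  # candidate accelerations [m/s^2]


def human_reward(gap, human_speed, desired_speed=13.0):
    """Assumed human objective: stay near a desired speed, avoid small gaps."""
    return -(human_speed - desired_speed) ** 2 - 50.0 * np.exp(-gap / SAFE_GAP)


def human_best_response(gap, human_speed, robot_accel, robot_speed):
    """Predict the human acceleration that maximizes the human's reward."""
    best_a, best_r = 0.0, -np.inf
    for a_h in ACTIONS:
        new_robot_speed = robot_speed + robot_accel * DT
        new_human_speed = max(0.0, human_speed + a_h * DT)
        new_gap = gap + (new_robot_speed - new_human_speed) * DT
        r = human_reward(new_gap, new_human_speed)
        if r > best_r:
            best_a, best_r = a_h, r
    return best_a


def robot_reward(gap, robot_speed, target_speed=15.0):
    """Robot objective: make progress while keeping the interaction safe."""
    return -(robot_speed - target_speed) ** 2 - 100.0 * (gap < 2.0)


def plan_robot_action(gap, robot_speed, human_speed):
    """Pick the robot acceleration whose anticipated human response is best."""
    best_a, best_r = 0.0, -np.inf
    for a_r in ACTIONS:
        a_h = human_best_response(gap, human_speed, a_r, robot_speed)
        new_robot_speed = robot_speed + a_r * DT
        new_human_speed = max(0.0, human_speed + a_h * DT)
        new_gap = gap + (new_robot_speed - new_human_speed) * DT
        r = robot_reward(new_gap, new_robot_speed)
        if r > best_r:
            best_a, best_r = a_r, r
    return best_a


if __name__ == "__main__":
    # Merging example: small initial gap, robot slightly slower than the human.
    print(plan_robot_action(gap=5.0, robot_speed=12.0, human_speed=14.0))

In this toy setup the robot may choose to accelerate into the gap precisely because it anticipates the human will brake in response, which is the kind of action-reaction reasoning the project studies; the actual work replaces the hand-written human reward with one learned from data and subjects the resulting controllers to verification.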
 
Principal Investigators
Dorsa Sadigh

Researchers
Chandrayee Basu

Themes
Autonomous Vehicles
Interaction with Human Drivers