Hybrid Confidence-Based Human Driver Modeling

ABOUT THE PROJECT

At a glance

In this project, we focus on predictive models of human driving. Anticipating what other drivers will do in new situations is crucial for safe autonomous driving, and anticipating what passengers would want the car to do can increase user comfort and adoption of the technology.

One way to acquire a predictive user model is Inverse Reinforcement Learning (IRL). The idea is to assume that when people drive, they are approximately optimizing some reward function. IRL then treats human driving data as evidence about the parameters of this unobserved reward function. In any new situation, we can optimize the learned reward function to produce a prediction of how a human would drive.
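As an illustration, here is a minimal maximum-entropy-style IRL sketch in Python. It assumes a linear reward over hand-designed driving features; the feature names, the sampling-based gradient, and all constants are hypothetical, not the project's actual model.

    import numpy as np

    def features(state):
        # Hypothetical hand-designed features: speed, lane offset, headway.
        speed, lane_offset, headway = state
        return np.array([speed, -abs(lane_offset), headway])

    def trajectory_reward(theta, traj):
        # Total linear reward theta . phi(s) accumulated along a trajectory.
        return sum(theta @ features(s) for s in traj)

    def irl_gradient_step(theta, demo_traj, sampled_trajs, lr=0.01):
        # Max-ent-style update: raise the reward of the demonstrated
        # feature counts, lower the expected feature counts under the
        # current reward (approximated by importance-weighted samples).
        demo_feats = sum(features(s) for s in demo_traj)
        weights = np.array([np.exp(trajectory_reward(theta, t))
                            for t in sampled_trajs])
        weights /= weights.sum()
        expected_feats = sum(w * sum(features(s) for s in t)
                             for w, t in zip(weights, sampled_trajs))
        return theta + lr * (demo_feats - expected_feats)

Once theta has converged, predicting a human driver's behavior in a new situation amounts to optimizing trajectories against the learned reward.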

IRL has been successful in capturing user driving styles, as well as in our own work on endowing autonomous car planners with better models of what the other cars around them will do. Unfortunately, IRL predictions are not always accurate. An alternative approach is behavior cloning: rather than assuming a particular reward function structure and focusing learning on its parameters, behavior cloning directly learns a policy that maps state to human action, i.e. it performs function approximation on data observed from human driving. The two approaches tend to be complementary: behavior cloning works well locally, around situations in the training data, while IRL fits the training data less well but can often extrapolate better.
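For contrast, a minimal behavior-cloning sketch, assuming scikit-learn and stand-in arrays in place of logged driving data; nothing here reflects the project's actual data or architecture:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Stand-in data: rows are observed driving states, targets are the
    # human's recorded actions (e.g. steering, acceleration).
    rng = np.random.default_rng(0)
    states = rng.random((1000, 4))
    actions = rng.random((1000, 2))

    # Fit a direct state -> action mapping by regression.
    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    policy.fit(states, actions)

    def predict_human_action(state):
        # Prediction is a forward pass; no reward function is involved.
        return policy.predict(state.reshape(1, -1))[0]

Because the policy is fit directly to the data, it tends to be accurate near training situations and unreliable far from them, which is exactly where a reward-based model can help.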

In this project, we propose that it doesn't have to be an either-or: the prediction algorithm should decide, based on the input data, which learned model, if any, to trust.
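One minimal way to realize such gating, sketched under the assumption that distance to the training data serves as a confidence proxy (the actual confidence measure is the subject of this project, and the threshold and policy arguments are hypothetical):

    import numpy as np

    def gated_prediction(state, bc_policy, irl_policy, train_states,
                         near_threshold=0.5):
        # Crude confidence proxy: distance from the query state to the
        # nearest training state.
        dist = np.min(np.linalg.norm(train_states - state, axis=1))
        if dist < near_threshold:
            return bc_policy(state)   # near the data: trust behavior cloning
        return irl_policy(state)      # far from the data: trust IRL

A fuller version could also return a conservative default prediction when neither model is trusted.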



Principal investigators: Anca Dragan
Researchers: Andrew Cui, Gil Lederman
Themes: Interaction with Human Drivers; Passenger Comfort