Acting and Learning under Uncertainty

ABOUT THE PROJECT

At a glance

Modern deep learning methods achieve high accuracy when the training and test data distributions line up well, but it can prove far more difficult to perform well on test data drawn from a different distribution. Recent work on domain adaptation and domain confusion can improve performance, but the issue remains that trained neural networks may make fairly arbitrary decisions when faced with out-of-distribution data. Rather than risking such potentially catastrophic behavior, it would be preferable for the system to explicitly account for its uncertainty rather than simply acting on its single most likely prediction.

The goal of this project is to enable detection of out-of-distribution data, safe planning and control under uncertainty, and safe, fast transfer of controllers learned in simulation to the real world. We have started to investigate detection of out-of-distribution perceptual inputs and how to adjust planning and control to minimize risk in such situations while still allowing for exploration and learning. To obtain uncertainty estimates from the neural network, we propose an uncertainty estimation method for discriminatively trained neural networks based on bootstrapping and dropout. A model-based reinforcement learning algorithm then gathers samples using the neural network collision prediction model; these samples are aggregated and used to further improve the model. Our empirical results demonstrate that a robot equipped with our uncertainty-aware collision prediction model experiences substantially fewer dangerous collisions during training.
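To make the bootstrapping-plus-dropout idea concrete, the sketch below shows one way such an uncertainty estimate could be computed for a collision prediction network. This is a minimal illustration under our own assumptions, not the project's released code: the names CollisionNet, train_bootstrap_ensemble, predict_with_uncertainty, num_bootstraps, and mc_samples are hypothetical, and the network architecture and training loop are placeholders.

```python
# Hedged sketch: uncertainty estimation for a discriminatively trained collision
# predictor via a bootstrap ensemble with dropout kept active at prediction time.
# All class/function names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class CollisionNet(nn.Module):
    """Small MLP that predicts collision probability from an observation vector."""

    def __init__(self, obs_dim, hidden_dim=64, dropout_p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout_p),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout_p),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, obs):
        # Probability of collision for each observation.
        return torch.sigmoid(self.net(obs))


def train_bootstrap_ensemble(obs_data, labels, num_bootstraps=5, epochs=50):
    """Train each ensemble member on its own bootstrap resample of the dataset."""
    n = obs_data.shape[0]
    ensemble = []
    for _ in range(num_bootstraps):
        idx = torch.randint(0, n, (n,))  # sample with replacement
        obs, y = obs_data[idx], labels[idx]
        model = CollisionNet(obs_dim=obs_data.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(obs).squeeze(-1), y)
            loss.backward()
            opt.step()
        ensemble.append(model)
    return ensemble


def predict_with_uncertainty(ensemble, obs, mc_samples=10):
    """Mean and spread of collision probability across bootstraps and dropout samples."""
    preds = []
    for model in ensemble:
        model.train()  # keep dropout active so each forward pass is a sample
        with torch.no_grad():
            for _ in range(mc_samples):
                preds.append(model(obs).squeeze(-1))
    preds = torch.stack(preds)  # shape: (num_bootstraps * mc_samples, batch)
    return preds.mean(dim=0), preds.std(dim=0)
```

In a setup like this, a risk-aware planner could penalize candidate actions whose predicted collision probability has a large spread across the sampled predictions, which is one way to adjust planning and control in the face of out-of-distribution inputs while still permitting exploration.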

Principal investigators: Pieter Abbeel
Researchers: Gregory Kahn