Training Real-World Robotic Skills in Diverse Simulated Environments


At a glance

Deep reinforcement learning methods have shown remarkable success at learning complex behaviors from raw sensory readings in domains ranging from video games to robotic control. However, reinforcement learning requires learning from trial and error: before the robot (which may be an autonomous car, a drone, or a robotic manipulator) can learn how to perform a task reliably and safely, it must experience both successes and failures. This project will explore how simulated robotic experience can be used to train deep neural network policies that readily transfer from simulation into the real world, by means of highly diverse and randomized simulators, domain adaptation, and specialized reinforcement learning algorithms that can learn stably and reliably in simulation.

The technical approach for transferring simulated experience to the real world will consist of three key components: (1) diverse and highly randomized simulated environments; (2) domain adaptation to handle both perceptual discrepancies and physical mismatch between simulation and reality; and (3) robust and reliable reinforcement learning algorithms that use simulated environments to acquire generalizable and powerful policies. Our favorable preliminary results on simulation-to-real-world transfer include an exploration of both visual transfer and physical transfer. For visual transfer, we conducted experiments on autonomous quadrotor navigation through indoor environments. For physical transfer, we used multiple simulated environments as a proxy for simulation-to-real-world transfer. These preliminary experiments again examined randomization of the training scenarios, and showed, with experimental results on simulated locomotion, that this type of randomization substantially improves transfer.
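To make the randomization idea concrete, a common pattern is to resample simulator parameters (physics coefficients, rendering settings) at the start of every training episode, so the policy never overfits to a single simulator configuration. The sketch below is illustrative only; the parameter names and ranges are hypothetical, not taken from the project's actual simulators.

```python
import random

# Hypothetical parameter ranges for per-episode domain randomization.
# Continuous entries (floats) model physical or rendering variation;
# integer entries model discrete choices such as a texture index.
PARAM_RANGES = {
    "friction": (0.5, 1.5),        # ground friction coefficient scale
    "mass_scale": (0.8, 1.2),      # multiplier on link masses
    "light_intensity": (0.2, 2.0), # scene lighting for visual variation
    "texture_id": (0, 9),          # which of 10 floor textures to render
}

def sample_randomized_params(rng=random):
    """Draw one randomized simulator configuration for a new episode."""
    params = {}
    for name, (low, high) in PARAM_RANGES.items():
        if isinstance(low, int) and isinstance(high, int):
            params[name] = rng.randint(low, high)   # discrete choice
        else:
            params[name] = rng.uniform(low, high)   # continuous parameter
    return params
```

At the start of each episode, the training loop would call `sample_randomized_params()` and apply the result to the simulator before rolling out the policy, forcing the learned behavior to be robust across the whole parameter distribution rather than tuned to one setting.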

Principal investigators: Sergey Levine
Themes: Simulation to Real World Transfer