Forget Boston Dynamics. This robot taught itself to walk
A pair of robot legs called Cassie has been taught to walk using reinforcement learning, the training technique that teaches AIs complex behavior via trial and error. The two-legged robot learned a range of movements from scratch, including walking in a crouch and while carrying an unexpected load.
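To make "trial and error" concrete, here is a minimal sketch of a reinforcement-learning loop in Python. It is written against the open-source Gymnasium toolkit and its stock BipedalWalker-v3 benchmark (which requires the box2d extra); the randomly sampled action is a placeholder for a learned policy, and none of this is Cassie's simulator or the Berkeley team's training code.

```python
# Minimal reinforcement-learning loop: act, observe a reward, reset on failure.
# BipedalWalker-v3 is a stock 2-D walking benchmark, not a Cassie model.
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(1000):
    action = env.action_space.sample()  # placeholder: a trained agent chooses this
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward              # reward encodes progress, e.g. forward motion
    if terminated or truncated:         # the walker fell over: reset and try again
        obs, info = env.reset()

print(f"accumulated reward: {total_reward:.1f}")
env.close()
```

An agent trained this way keeps the actions that earned reward and discards the ones that led to falls; over millions of such steps, the behavior that emerges is walking.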
But can it boogie? Expectations for what robots can do run high thanks to viral videos put out by Boston Dynamics, which show its humanoid Atlas robot standing on one leg, jumping over boxes, and dancing. These videos have racked up millions of views and have even been parodied. The control Atlas has over its movements is impressive, but the choreographed sequences probably involve a lot of hand-tuning. (Boston Dynamics has not published details, so it's hard to say how much.)
"These videos may lead some people to believe that this is a solved and easy problem," says Zhongyu Li at the University of California, Berkeley, who worked on Cassie with his colleagues. "But we still have a long way to go to have humanoid robots reliably operate and live in human environments." Cassie can't yet dance, but teaching the human-size robot to walk by itself puts it several steps closer to being able to handle a wide range of terrain and recover when it stumbles or damages itself.
Virtual limitations: Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. "Many of the videos that you see of virtual agents are not at all realistic," says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it, such as how friction works between a robot's feet and the ground, can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.
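To see how a small mismatch compounds, consider this toy Python model, invented purely for illustration and unrelated to Cassie's real dynamics: the same open-loop control sequence is replayed under two friction coefficients that differ by 10 percent, and the two trajectories end up well apart.

```python
# Toy sim-to-real gap: one control plan, two friction values, diverging outcomes.

def rollout(friction: float, pushes: list[float]) -> float:
    """Crude sliding-block model: each push adds velocity, friction bleeds
    it off, and the final position is returned."""
    pos, vel = 0.0, 0.0
    for push in pushes:
        vel += push
        vel *= 1.0 - friction  # friction removes a fixed fraction of velocity
        pos += vel
    return pos

controls = [0.1] * 200  # an open-loop plan "tuned" in simulation
sim_end = rollout(friction=0.050, pushes=controls)
real_end = rollout(friction=0.055, pushes=controls)  # 10% friction mismatch

print(f"end position in simulation: {sim_end:.1f}")
print(f"end position in the 'real' world: {real_end:.1f}")
print(f"drift from the mismatch: {abs(sim_end - real_end):.1f}")
```

In a sliding-block toy the mismatch only costs position; on a heavy biped, the same kind of drift is the difference between a step and a fall.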
Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around these problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. This simulation was then transferred to a second virtual environment called SimMechanics, which mirrors real-world physics with a high degree of accuracy, but at a cost in running speed. Only when Cassie seemed to walk well there was the learned walking model loaded into the actual robot.
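In code, that two-level pipeline might be organized along the lines below. This is a hedged sketch only: the class names, the evaluation score, and the 0.95 threshold are hypothetical stand-ins, since neither the Berkeley team's training code nor SimMechanics' actual interface is reproduced here.

```python
# Hypothetical sketch of a train-validate-deploy pipeline for sim-to-real work.

class FastTrainingSim:
    """Stage 1: cheap, approximate physics that can absorb millions of trials."""
    def train_policy(self, motion_database: str) -> dict:
        # Reinforcement learning against reference motions from the database.
        return {"weights": "..."}  # placeholder for a trained walking policy

class HighFidelitySim:
    """Stage 2: slow but accurate physics (the role SimMechanics plays)."""
    def evaluate(self, policy: dict) -> float:
        return 0.97  # placeholder: fraction of trials walked without falling

def deploy_to_robot(policy: dict) -> None:
    print("loading walking policy onto the hardware...")

policy = FastTrainingSim().train_policy(motion_database="reference gaits")
score = HighFidelitySim().evaluate(policy)
if score > 0.95:             # only a policy that walks reliably in the
    deploy_to_robot(policy)  # accurate simulator reaches the real robot
```

The point of the split is cost: the fast simulator soaks up the millions of trial-and-error steps, while the slow, accurate one acts as a gatekeeper before anything touches hardware.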
The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks that this is exciting work. Edward Johns, who leads the Robot Learning Lab at Imperial College London, agrees. "This is one of the most successful examples I have seen," he says.
The Berkeley team hopes to use their approach to add to Cassie's repertoire of movements. But don't expect a dance-off anytime soon.