Virtual Obstacle Courses Teach Real Robots How to Walk

An army of robots in a simulation. Robotic Systems Lab / Nvidia / YouTube

According to an initial report from Wired, a virtual army of 4,000 doglike robots was used to train an algorithm capable of improving the legwork of real-world robots.

Simulated Robot Army Mastered Step and Block Navigation

The simulated army was created by researchers at ETH Zurich in Switzerland, as well as engineers from chip manufacturer Nvidia.


They worked together to teach simulated copies of the quadruped robot ANYmal to overcome obstacles that are difficult for legged machines, such as steps, slopes, and sharp drops carved into a virtual landscape.

Every time a robot solves a navigational problem, the researchers hand it a more difficult one, nudging the algorithm through an increasingly unforgiving obstacle course whose sole purpose is to teach the simulated robots to handle terrain that would otherwise stump them.
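The article doesn't describe the training pipeline itself, but this "solve it, then get something harder" pattern is essentially a difficulty curriculum. The sketch below is a rough illustration of that idea only; the class name, terrain parameters, and promotion thresholds are hypothetical, not the researchers' code.

```python
import random

class TerrainCurriculum:
    """Illustrative curriculum: a simulated robot gets harder terrain
    as it succeeds, and easier terrain when it keeps failing."""

    def __init__(self, num_levels=10):
        self.num_levels = num_levels   # level 0 = flat ground, higher = rougher
        self.level = 0

    def terrain_params(self):
        # Map the difficulty level to terrain features (step height, slope, gaps).
        scale = self.level / (self.num_levels - 1)
        return {
            "step_height_m": 0.05 + 0.20 * scale,   # taller steps
            "slope_deg": 5 + 30 * scale,             # steeper slopes
            "gap_width_m": 0.10 * scale,             # wider drops
        }

    def update(self, traversed_fraction):
        # Promote the robot if it crossed most of the course, demote it if it barely moved.
        if traversed_fraction > 0.8 and self.level < self.num_levels - 1:
            self.level += 1
        elif traversed_fraction < 0.2 and self.level > 0:
            self.level -= 1


# Hypothetical usage: one curriculum object per simulated robot.
curricula = [TerrainCurriculum() for _ in range(4000)]
for c in curricula:
    c.update(traversed_fraction=random.random())
print(curricula[0].terrain_params())
```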

In the visualization, the result looks like an army of ants swarming across a vast, jumbled geometric landscape. During training, the robots mastered walking up and down stairs with little difficulty. Slopes, on the other hand, threw them for a loop.


Only a few of the simulated robots figured out how to slide down a slope. Once the final algorithm was transferred to a real-world version of ANYmal, a four-legged doglike robot with sensors in its head and a detachable robot arm, it successfully navigated blocks and stairs but struggled at higher speeds.


Robot Army in a Negative Feedback Loop with AI

The algorithm is not to blame, according to the researchers. Instead, they believe that a mismatch between how the sensors perceive the real world and the virtual one is causing coordination problems.

Even so, this type of fast-track training could dramatically shorten the time it takes robots and other machines to pick up a wide range of skills, from sewing clothes and harvesting crops to sorting packages in a massive Amazon warehouse.


The project also reaffirms the importance of using simulation to advance artificial intelligence (AI) capabilities. “At a high level, very fast simulation is a really great thing to have,” UC Berkeley Professor Pieter Abbeel said in a Wired report. Abbeel is also a cofounder of Covariant, a company that uses artificial intelligence in simulations to train robot arms in the art of object sorting for logistics companies.


According to the report, Abbeel believes the Swiss and Nvidia researchers’ work on robotic algorithms “got some nice speed-ups,” and that AI has progressed to the point where it can improve robots’ ability to perform everyday tasks that are difficult to translate into software.

The ability to grasp objects with awkward, unusual, or slippery surfaces, for example, cannot be reduced to a few lines of simple code.


This is why 4,000 simulated robots were trained with reinforcement learning, an AI method that mimics how animals learn through positive and negative feedback.


When a robot moves its legs, a judging algorithm assesses how the movement affects its ability to keep walking and adjusts the control algorithm accordingly, so the robot adapts as motion continues.
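That loop of acting, scoring the action, and nudging the controller is the core reinforcement-learning cycle the article is describing. The snippet below is a deliberately tiny, generic sketch of such a cycle; the linear policy, the reward terms, the toy simulator stand-in, and the crude policy-gradient-style update are all assumptions for illustration, not the researchers' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear policy: maps a small observation vector to joint targets.
obs_dim, act_dim = 8, 4
weights = rng.normal(scale=0.1, size=(act_dim, obs_dim))

def policy(obs, noise_scale=0.1):
    """Pick joint targets, with a little exploration noise added."""
    return weights @ obs + rng.normal(scale=noise_scale, size=act_dim)

def reward(forward_progress, fell_over):
    """The 'judge': positive feedback for forward progress, a penalty for falling."""
    return forward_progress - (5.0 if fell_over else 0.0)

def fake_env_step(obs, action):
    """Stand-in for the simulator: next observation, progress made, and a fall flag."""
    next_obs = np.tanh(obs + 0.05 * rng.normal(size=obs_dim))
    progress = float(np.clip(action.mean(), -0.1, 0.1))
    fell = rng.random() < 0.02
    return next_obs, progress, fell

# One training iteration: collect experience, then nudge the policy toward
# actions that scored above-average reward (a crude policy-gradient step).
obs = rng.normal(size=obs_dim)
grads, rewards = [], []
for _ in range(100):
    action = policy(obs)
    next_obs, progress, fell = fake_env_step(obs, action)
    grads.append(np.outer(action - weights @ obs, obs))  # direction of the exploration noise
    rewards.append(reward(progress, fell))
    obs = next_obs

advantage = np.array(rewards) - np.mean(rewards)
weights += 1e-3 * sum(a * g for a, g in zip(advantage, grads))
```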

The simulations were made possible by Nvidia’s specialized AI chips, which allowed the researchers to train an army of robots in one-hundredth the time it would have taken otherwise.
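Much of that speed-up comes from stepping thousands of simulated robots in parallel on the GPU rather than one at a time. The fragment below is a schematic, CPU-only illustration of that batching pattern, with NumPy standing in for a GPU-based physics simulator; the dimensions and toy dynamics are made up for the example.

```python
import numpy as np

# Schematic batched stepping: thousands of simulated robots advance in lockstep,
# so a single call updates every copy at once instead of looping robot by robot.
NUM_ROBOTS, OBS_DIM, ACT_DIM = 4000, 8, 4
rng = np.random.default_rng(1)

observations = rng.normal(size=(NUM_ROBOTS, OBS_DIM))
policy_weights = rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM))

def step_all(obs, actions):
    """Stand-in for a GPU physics step applied to the whole batch at once."""
    next_obs = np.tanh(obs + 0.05 * np.tile(actions, (1, OBS_DIM // ACT_DIM)))  # toy dynamics
    rewards = actions.mean(axis=1)                                              # toy reward
    return next_obs, rewards

actions = observations @ policy_weights          # (4000, 4): every robot acts in one matmul
observations, rewards = step_all(observations, actions)
print(observations.shape, rewards.shape)         # (4000, 8) (4000,)
```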


We’ve finally arrived at the beginning of self-learning robots, and by combining reinforcement learning with recent AI advances, the gap between how robots move in simulation and how they move in the physical world may be starting to close.
