New physics sim trains robots 430,000 times faster than reality

Benj Edwards – Ars Technica

On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system that lets robots practice tasks in simulated reality 430,000 times faster than in the real world. Researchers also plan to introduce an AI agent to generate 3D physics simulations from text prompts.

The accelerated simulation means a neural network for piloting robots can spend the virtual equivalent of decades learning to pick up objects, walk, or manipulate tools during just hours of real computer time.

“One hour of compute time gives a robot 10 years of training experience. That’s how Neo was able to learn martial arts in a blink of an eye in the Matrix Dojo,” wrote Jim Fan, a co-author of the Genesis paper who says he played a “minor part” in the research, in a post on X. Fan has previously worked on several robotics simulation projects for Nvidia.
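As a rough sanity check on these figures, assuming the headline 430,000× factor applies directly to wall-clock time:

```python
# Back-of-the-envelope arithmetic: how much simulated experience does
# one hour of wall-clock compute buy at a 430,000x real-time factor?
SPEEDUP = 430_000          # claimed simulation speed vs. reality
HOURS_PER_YEAR = 24 * 365  # ignoring leap years

sim_hours_per_real_hour = 1 * SPEEDUP
sim_years_per_real_hour = sim_hours_per_real_hour / HOURS_PER_YEAR

print(round(sim_years_per_real_hour, 1))  # about 49 simulated years
```

By that arithmetic, one real hour buys roughly 49 simulated years, so Fan’s “10 years” figure is, if anything, conservative, leaving headroom for the overhead of actually training a neural network rather than just stepping the physics.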

Genesis arrives as robotics researchers hunt for better tools to test and train robots in virtual environments before deploying them in the real world. Fast, accurate simulation helps robots learn complex tasks more quickly while reducing the need for expensive physical testing. For example, on the project page, the researchers show techniques developed in Genesis physics simulations (such as doing backflips) being applied to quadruped robots and soft robots.

The Genesis platform, developed by a group led by Zhou Xian of Carnegie Mellon University, processes physics calculations up to 80 times faster than existing robot simulators (like Nvidia’s Isaac Gym). It uses graphics cards similar to those that power video games to run up to 100,000 copies of a simulation at once. That’s important when it comes to training the neural networks that will control future real-world robots.
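The parallelism claim can be illustrated with a toy, non-Genesis sketch: batch the state of many simulation copies into arrays and step them all at once. This is the same vectorization idea that GPU simulators scale up (the falling point mass below is invented purely for illustration):

```python
import numpy as np

# Conceptual sketch (not Genesis code): step many copies of a simple
# simulation at once by storing one state per environment in arrays,
# so a single array operation advances every environment together.
N_ENVS = 100_000   # number of simulation copies
DT = 0.01          # timestep in seconds
GRAVITY = -9.81

# One falling point mass per environment: position and velocity.
pos = np.full(N_ENVS, 10.0)  # all start 10 m above the ground
vel = np.zeros(N_ENVS)

def step(pos, vel):
    """Advance every environment by one timestep in a single
    vectorized operation instead of a Python loop per robot."""
    vel = vel + GRAVITY * DT
    pos = np.maximum(pos + vel * DT, 0.0)  # clamp at the ground plane
    return pos, vel

for _ in range(50):
    pos, vel = step(pos, vel)

print(pos.shape)  # one state per environment: (100000,)
```

On a GPU, the same batched layout lets tens of thousands of environments advance in lockstep, which is what makes collecting years of training experience per hour feasible.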

“If an AI can control 1,000 robots to perform 1 million skills in 1 billion different simulations, then it may ‘just work’ in our real world, which is simply another point in the vast space of possible realities,” wrote Fan in his X post. “This is the fundamental principle behind why simulation works so effectively for robotics.”

Generating dynamic worlds

The team also announced it is working on the ability to generate what it calls “4D dynamic worlds,” perhaps using “4D” because the system can simulate a 3D world in motion over time. The system will reportedly use vision-language models (VLMs) to generate complete virtual environments from text descriptions (similar to “prompts” in other AI models), utilizing Genesis’s own simulation infrastructure APIs to create the worlds.

The AI-generated worlds will reportedly include realistic physics, camera movements, and object behaviors, all from text commands. The system then creates physically accurate ray-traced videos and data that robots can use for training. Of course, we have not tested this, so these claims should be taken with a grain of salt at the moment.

This prompt-based system may let researchers create complex robot testing environments by typing natural language commands instead of programming them by hand. “Traditionally, simulators require a huge amount of manual effort from artists: 3D assets, textures, scene layouts, etc. But every component in the workflow can be automated,” wrote Fan.

Using its engine, Genesis can also generate character motion, interactive 3D scenes, facial animation, and more, which may allow the creation of artistic assets for creative projects. The same capability may also lead to more realistic AI-generated games and videos in the future, since it constructs a simulated world in data rather than operating on the statistical appearance of pixels, as a video synthesis diffusion model does.

The generative system isn’t yet part of the code currently available on GitHub, but the team plans to release it in the future.

Training tomorrow’s robots today (using Python)

Genesis remains under active development on GitHub, where the team accepts community contributions.

The platform stands out from other 3D world simulators for robot training by using Python for both its user interface and its core physics engine. Other engines typically use C++ or CUDA for their underlying calculations while wrapping them in Python APIs; Genesis takes a Python-first approach.

Notably, because Genesis is open source rather than proprietary, any researcher can run high-speed robot training simulations for free, using simple Python commands on regular computers with off-the-shelf hardware.
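To illustrate the idea rather than Genesis’s actual interface, here is a hypothetical, minimal “Python-first” simulator sketch in which both the user-facing API and the physics step are plain Python (the Scene, Entity, add_entity, and step names below are invented for illustration, not taken from Genesis):

```python
# Hypothetical sketch of a "Python-first" simulator: the user-facing
# API and the physics integration both live in ordinary Python (here,
# a toy point mass under gravity), rather than Python merely wrapping
# a compiled C++/CUDA core.

class Entity:
    def __init__(self, mass, pos, vel=0.0):
        self.mass, self.pos, self.vel = mass, pos, vel

class Scene:
    GRAVITY = -9.81

    def __init__(self, dt=0.01):
        self.dt = dt
        self.entities = []

    def add_entity(self, entity):
        self.entities.append(entity)
        return entity

    def step(self):
        # The core integration loop is readable Python, so a
        # researcher can inspect, modify, and debug it directly.
        for e in self.entities:
            e.vel += self.GRAVITY * self.dt
            e.pos = max(e.pos + e.vel * self.dt, 0.0)  # ground plane

scene = Scene(dt=0.01)
ball = scene.add_entity(Entity(mass=1.0, pos=5.0))
for _ in range(100):
    scene.step()
```

In a Python-first design like this, the physics code itself is part of what the researcher can read and change, instead of sitting behind a binding layer as a compiled black box.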

Previously, running robot simulations required complex programming and specialized hardware, Fan says in his post announcing Genesis, and he argues that shouldn’t be the case. “Robotics should be a moonshot initiative owned by all of humanity,” he wrote.
