🧞 Genie: Generative Interactive Environments


Genie Team


We introduce Genie, a foundation world model trained from Internet videos that can generate an endless variety of playable (action-controllable) worlds from synthetic images, photographs, and even sketches.

A Foundation Model for Playable Worlds

The last few years have seen the emergence of generative AI, with models capable of generating novel and creative content in the form of language, images, and even videos. Today, we introduce a new paradigm for generative AI: generative interactive environments (Genie), whereby interactive, playable environments can be generated from a single image prompt.


Genie can be prompted with images it has never seen before, such as real-world photographs or sketches, enabling people to interact with their imagined virtual worlds; in this sense, Genie acts as a foundation world model. This is possible despite training without any action labels. Instead, Genie is trained from a large dataset of publicly available Internet videos. We focus on videos of 2D platformer games and robotics, but our method is general, should work for any domain, and is scalable to ever larger Internet datasets.

Learning to control without action labels

What makes Genie unique is its ability to learn fine-grained controls exclusively from Internet videos. This is a challenge because Internet videos do not typically have labels regarding which action is being performed, or even which part of the image should be controlled. Remarkably, Genie learns not only which parts of an observation are generally controllable, but also infers diverse latent actions that are consistent across the generated environments. Note here how the same latent actions yield similar behaviors across different prompt images. 

latent actions: 6, 6, 7, 6, 7, 6, 5, 5, 2, 7

latent actions: 5, 6, 2, 2, 6, 2, 5, 7, 7, 7
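
To make this concrete, here is a minimal, self-contained sketch (in PyTorch, our choice purely for illustration) of the general idea behind learning discrete latent actions without labels: an encoder proposes an action embedding from two consecutive frames, that embedding is snapped to one of a small set of learnable codes, and a decoder must predict the next frame from the previous frame plus the chosen code. The layer sizes, toy flattened frames, and training details are illustrative assumptions, not the Genie architecture; only the eight-way action vocabulary mirrors the 0–7 indices shown above.

```python
# A minimal sketch of the idea behind learning latent actions without labels.
# This is NOT the Genie architecture: layer sizes, flattened toy frames, and the
# plain MLPs are illustrative assumptions. Only the 8-way action vocabulary is
# taken from the 0-7 latent action indices shown in the captions above.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LATENT_ACTIONS = 8   # matches the 0-7 indices in the captions above
EMBED_DIM = 64
FRAME_DIM = 32 * 32      # toy flattened greyscale frames

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder sees frame t and frame t+1 and proposes a continuous action embedding.
        self.encoder = nn.Sequential(
            nn.Linear(2 * FRAME_DIM, 256), nn.ReLU(), nn.Linear(256, EMBED_DIM)
        )
        # Small VQ-style codebook: each row is one learnable discrete latent action.
        self.codebook = nn.Embedding(NUM_LATENT_ACTIONS, EMBED_DIM)
        # Decoder must reconstruct frame t+1 from frame t plus the quantised action,
        # which pushes the codes to capture the controllable change between frames.
        self.decoder = nn.Sequential(
            nn.Linear(FRAME_DIM + EMBED_DIM, 256), nn.ReLU(), nn.Linear(256, FRAME_DIM)
        )

    def quantise(self, z):
        # Snap each embedding to its nearest codebook entry.
        dists = torch.cdist(z, self.codebook.weight)        # (batch, NUM_LATENT_ACTIONS)
        idx = dists.argmin(dim=-1)                          # the inferred discrete action
        z_q = self.codebook(idx)
        # Standard VQ losses: move codes toward the encoder outputs and vice versa.
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                        # straight-through estimator
        return z_q, idx, vq_loss

    def forward(self, frame_t, frame_tp1):
        z = self.encoder(torch.cat([frame_t, frame_tp1], dim=-1))
        z_q, idx, vq_loss = self.quantise(z)
        pred_tp1 = self.decoder(torch.cat([frame_t, z_q], dim=-1))
        return F.mse_loss(pred_tp1, frame_tp1) + vq_loss, idx

# Toy usage: random frame pairs stand in for consecutive Internet video frames.
model = LatentActionModel()
frame_t, frame_tp1 = torch.rand(16, FRAME_DIM), torch.rand(16, FRAME_DIM)
loss, actions = model(frame_t, frame_tp1)
loss.backward()
print(actions.tolist())   # a sequence of discrete latent actions, e.g. [6, 6, 7, ...]
```

Because the only training signal is predicting the next frame, the codes end up encoding whatever change between frames is controllable, and no action labels are ever involved.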

Enabling a new generation of creators

Amazingly, it only takes a single image to create an entire new interactive environment. This opens the door to a variety of new ways to generate and step into virtual worlds. For instance, we can take a state-of-the-art text-to-image generation model and use it to produce starting frames that we then bring to life with Genie. Here we generate images with Imagen2 and bring them to life with Genie.

But it doesn’t stop there: we can even step into human-designed creations such as sketches! 🧑‍🎨

Or real world images 🤯
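
Schematically, the workflow boils down to: take any single image as the first frame, then repeatedly pick a discrete latent action and let the model dream up the next frame. The sketch below is a hypothetical, self-contained illustration of that loop; `dynamics_model`, the frame size, and the random prompt frame are all stand-ins for illustration, not a published Genie API.

```python
# A hypothetical, self-contained sketch of the prompt-to-playable-world loop
# described above. Everything here is a stand-in: `dynamics_model` is a dummy
# function in place of Genie's learned dynamics model, and the prompt frame is
# random noise in place of an Imagen2 output, a sketch, or a photograph.

import numpy as np

NUM_LATENT_ACTIONS = 8          # matches the 0-7 indices shown in the captions above
FRAME_SHAPE = (64, 64, 3)       # toy frame size, for illustration only

def dynamics_model(history, latent_action):
    """Stand-in for a learned dynamics model: next frame from history + action."""
    # A real model would attend over past frame tokens; here we just nudge the
    # last frame so the loop has something to produce.
    return np.clip(history[-1] + 0.01 * (latent_action - 3.5), 0.0, 1.0)

# 1. Start from any single image: a generated frame, a sketch, or a photo.
prompt_frame = np.random.rand(*FRAME_SHAPE)

# 2. Play: at every step the player picks a discrete latent action and the
#    model predicts the next frame, turning the image into a controllable world.
frames = [prompt_frame]
for step in range(10):
    latent_action = np.random.randint(NUM_LATENT_ACTIONS)   # player input in a real UI
    frames.append(dynamics_model(frames, latent_action))

print(f"rolled out {len(frames) - 1} frames from one prompt image")
```

In a real interface the latent action would of course come from the player rather than a random draw.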

A stepping stone for generalist agents

Genie also has implications for training generalist agents. Previous work has shown that game environments can be an effective testbed for developing AI agents, but we are often limited by the number of games available. With Genie, our future AI agents can be trained in a never-ending curriculum of new, generated worlds. In our paper, we present a proof of concept showing that the latent actions learned by Genie can transfer to real, human-designed environments, but this is just scratching the surface of what may be possible in the future.

The future of generative virtual worlds

Finally, while we have focused on results from Platformers on this website, Genie is a general method and can be applied to a multitude of domains without requiring any additional domain knowledge. 

We trained a smaller 2.5B-parameter model on action-free videos from RT1. As was the case for Platformers, trajectories with the same latent action sequence typically display similar behaviors. This indicates that Genie is able to learn a consistent action space, which may be amenable to training embodied generalist agents.

latent actions: 0, 0, 1, 2, 6, 1, 1, 4, 0, 0

Genie can also simulate deformable objects 👕, a challenging task for human-designed simulators, but one that can instead be learned from data.

Genie ushers in an era in which entire interactive worlds can be generated from images or text. We also believe it will be a catalyst for training the generalist AI agents of the future. 🤖

The Genie Team 🫶

Jake Bruce*, Michael Dennis*, Ashley Edwards*, Jack Parker-Holder*, Yuge (Jimmy) Shi*, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Bechtle, Feryal Behbahani, Stephanie Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando de Freitas, Satinder Singh, Tim Rocktäschel*

*Equal contribution

Please contact Ashley Edwards (edwardsashley@google.com), Jack Parker-Holder (jparkerholder@google.com), or Jake Bruce (jacobbruce@google.com) with any additional questions :)



More details in the paper!