Runway Research: Multimodal AI and Video Generation
Phenaki is an advanced AI model that generates realistic, long-form videos from changing text prompts. Create dynamic visual stories, animations, and scenes from plain descriptions.
Phenaki is a cutting-edge video generation model that transforms sequences of text prompts into long-form videos. Unlike traditional video synthesis tools that work frame-by-frame or condition on a single, fixed prompt, Phenaki is designed to handle evolving narratives. It can generate high-quality, coherent videos that span several minutes, transitioning seamlessly between scenes and contexts as the prompt changes.
Phenaki uses a novel video representation system based on discrete tokens and causal temporal attention. This approach allows it to work with videos of variable length while preserving both spatial and temporal coherence. It is one of the first models capable of creating continuous videos based on a dynamic series of text inputs, making it ideal for storytelling and animated content creation.
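To make the temporal structure concrete, here is a minimal sketch of a frame-causal attention mask, assuming video tokens are laid out frame by frame; the function name and token layout are illustrative assumptions, not Phenaki's actual code.

```python
# Minimal sketch of causal attention over time (illustrative, not Phenaki's implementation).
# Assumes video tokens are ordered frame by frame: frame 0's tokens, then frame 1's, and so on.
import torch

def causal_temporal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Boolean mask where entry (i, j) is True if query token i may attend to key token j:
    tokens see their own frame and all earlier frames, never later ones."""
    frame_idx = torch.arange(num_frames).repeat_interleave(tokens_per_frame)
    return frame_idx[:, None] >= frame_idx[None, :]

mask = causal_temporal_mask(num_frames=4, tokens_per_frame=6)
print(mask.shape)  # torch.Size([24, 24]); entries pointing at future frames are False
```

Because each frame only attends backward in time, the same model can keep extending a clip one frame group at a time, which is what makes variable-length generation possible.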
The process begins with a text prompt or a sequence of prompts over time. These are converted into text tokens, which condition a masked transformer model. The transformer outputs compressed video tokens that are then decoded into a full-resolution video.
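That flow can be summarized in a toy, runnable sketch. Every component below is a simplified stand-in (random sampling instead of learned prediction), not Phenaki's published modules; the vocabulary size and token counts are assumptions for illustration.

```python
# Toy sketch of the described pipeline: text tokens condition a masked transformer that
# emits compressed video tokens, which a decoder turns back into frames. All parts are stand-ins.
import torch

VOCAB = 1024           # size of the discrete video-token codebook (illustrative)
TOKENS_PER_FRAME = 64  # compressed tokens per frame (illustrative)

def encode_text(prompt: str) -> torch.Tensor:
    """Stand-in text encoder: map characters to integer ids."""
    return torch.tensor([ord(c) % 256 for c in prompt])

def sample_video_tokens(text_tokens: torch.Tensor, num_frames: int) -> torch.Tensor:
    """Stand-in for the masked transformer: samples random codebook ids;
    the real model predicts them conditioned on the text tokens."""
    return torch.randint(0, VOCAB, (num_frames, TOKENS_PER_FRAME))

def decode_video(video_tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in detokenizer: expand each token row into an 8x8 grid of RGB values."""
    return video_tokens.reshape(-1, 8, 8, 1).repeat(1, 1, 1, 3).float() / VOCAB

tokens = sample_video_tokens(encode_text("a teddy bear swimming"), num_frames=16)
frames = decode_video(tokens)
print(frames.shape)  # torch.Size([16, 8, 8, 3])
```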
Phenaki stands out by supporting prompt sequences that evolve across time. This enables the creation of stories or scene transitions without needing manual video editing. For example, a video could begin with "a teddy bear swimming," then shift to "the bear walks on the beach," and end with "the bear by the campfire," all within the same clip.
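A prompt schedule like that can be expressed as simple data. In the sketch below, generate_segment is a hypothetical placeholder: in the real system each scene is conditioned on both its prompt and the video tokens generated so far, which is what keeps the clip coherent across scene changes.

```python
# Illustrative prompt schedule for a multi-scene clip; generate_segment is a placeholder.
from typing import List, Tuple

def generate_segment(prompt: str, prefix: List[int], num_tokens: int = 32) -> List[int]:
    # Placeholder: the real model would sample video tokens conditioned on the prompt
    # and on the prefix tokens; here we just return dummy ids.
    return [hash((prompt, len(prefix), i)) % 1024 for i in range(num_tokens)]

story: List[Tuple[str, float]] = [
    ("a teddy bear swimming in the ocean", 5.0),   # (prompt, seconds)
    ("the teddy bear walks on the beach", 5.0),
    ("the teddy bear sits by a campfire", 5.0),
]

video_tokens: List[int] = []
for prompt, _seconds in story:
    video_tokens += generate_segment(prompt, prefix=video_tokens)
print(len(video_tokens))  # 96 dummy tokens across three scenes
```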
A specialized video encoder compresses each scene into tokens using causal attention over time. This compression method significantly reduces computational load while preserving video quality, enabling longer and more detailed generations.
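As a back-of-the-envelope illustration of why this compression matters, the sketch below counts tokens versus raw pixel values; the specific factors (an 8x8 spatial patch, a temporal stride of 4 after the first frame) are assumptions for illustration, not the model's actual settings.

```python
# Rough arithmetic: how many discrete tokens represent a clip versus raw pixel values.
def token_count(frames: int, height: int, width: int,
                patch: int = 8, t_stride: int = 4) -> int:
    spatial = (height // patch) * (width // patch)   # tokens per kept frame
    temporal = 1 + (frames - 1) // t_stride          # first frame kept, the rest strided
    return spatial * temporal

raw_values = 64 * 128 * 128 * 3                       # pixel values in a 64-frame RGB clip
tokens = token_count(frames=64, height=128, width=128)
print(raw_values, tokens, raw_values // tokens)        # 3145728 4096 768: a large reduction
```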
Phenaki is ideal for artists, writers, and animators looking to bring stories to life. The ability to craft complex sequences from evolving text makes it suitable for concept videos, experimental films, and narrative art pieces.
Educators can describe learning scenarios—like scientific simulations, historical reenactments, or animated demonstrations—and instantly generate relevant videos that enhance student engagement.
Film studios and content creators can use Phenaki to prototype storyboards and visual sequences quickly. Instead of spending hours on sketches or mockups, creators can visualize their concepts directly from the script.
Phenaki can generate multi-minute stories: from a futuristic city traffic jam → to an alien spaceship arrival → to an astronaut in a blue room → and ending with a lion in a suit in a high-rise office.
Phenaki also allows generation from a static image and a text prompt, producing consistent forward motion from the given frame.
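A minimal sketch of that idea follows, under the assumption that the supplied frame is tokenized and kept fixed while the remaining frames are sampled; encode_frame and sample_remaining are hypothetical placeholders, not Phenaki's API.

```python
# Sketch of image-conditioned generation: the first frame's tokens are fixed, later frames are filled in.
import torch

def encode_frame(frame: torch.Tensor, patch: int = 8, vocab: int = 1024) -> torch.Tensor:
    """Placeholder tokenizer: quantize each patch's mean intensity to a codebook id."""
    patches = frame.unfold(1, patch, patch).unfold(2, patch, patch)   # (C, H/p, W/p, p, p)
    return (patches.mean(dim=(0, 3, 4)) * (vocab - 1)).long()          # (H/p, W/p) of ids

def sample_remaining(first_frame_tokens: torch.Tensor, num_frames: int, vocab: int = 1024) -> torch.Tensor:
    """Placeholder sampler: keep the given frame's tokens fixed and fill the rest randomly;
    the real model would predict them from the text prompt and the fixed prefix."""
    rest = torch.randint(0, vocab, (num_frames - 1, *first_frame_tokens.shape))
    return torch.cat([first_frame_tokens[None], rest], dim=0)

frame = torch.rand(3, 64, 64)                 # the user-supplied starting image
tokens = sample_remaining(encode_frame(frame), num_frames=8)
print(tokens.shape)                           # torch.Size([8, 8, 8])
```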
The model compresses video data into discrete tokens using a temporally aware encoder. This enables the processing of longer clips while reducing hardware requirements.
Phenaki was trained using both image-text and video-text pairs. This hybrid dataset design improves generalization and makes the model capable of generating content across a broad range of scenarios, even with limited video data.
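One simple way such a hybrid dataset can be handled is to treat images as single-frame videos so both modalities share one format. The loader below is an illustrative assumption, not the paper's actual data pipeline.

```python
# Illustrative loader that mixes image-text and video-text samples in one format.
import random
import torch

def as_video(sample):
    """Normalize both modalities to (frames, C, H, W) plus a caption."""
    media, caption = sample
    if media.dim() == 3:                  # (C, H, W) image -> one-frame video
        media = media.unsqueeze(0)
    return media, caption

image_text = [(torch.rand(3, 64, 64), "a red bicycle")]
video_text = [(torch.rand(16, 3, 64, 64), "a bicycle rolling downhill")]

batch = [as_video(random.choice(image_text + video_text)) for _ in range(4)]
print([clip.shape[0] for clip, _ in batch])   # frame counts: a mix of 1 and 16
```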
Phenaki achieves better spatial detail and temporal consistency than prior video generation models. Its transformer-based architecture and efficient tokenizer design help reduce artifacts while improving coherence across frames.
Although currently presented as a research preview, Phenaki demonstrates the future of open-domain video generation. Future versions may allow public access or developer tools for integrating its capabilities into creative workflows.
Visit phenaki.video to explore generated videos and read the full research paper.