Runway says its latest AI video model can actually generate consistent scenes and people
Briefly

Runway has introduced its latest AI video synthesis model, Gen-4, which aims to improve storytelling consistency by generating characters and objects that remain uniform across multiple shots from just one reference image. Users describe the desired composition, and the model produces consistent outputs from different angles. This addresses a common challenge in AI-generated video: maintaining continuity and control across a story. The rollout is currently limited to paid and enterprise users, and follows the controversial launch of Gen-3 Alpha, which drew scrutiny over its training-data sourcing.
Runway's new Gen-4 video model enables users to create consistent characters and scenes across multiple shots, significantly improving AI-generated storytelling.
The new model facilitates continuity and control, allowing users to maintain character appearance and scene integrity, even under varying lighting conditions.
Read at The Verge