Not to be outdone by Meta’s Make-A-Video, Google today detailed its work on Imagen Video, an AI system that can generate video clips given a text prompt (e.g., “a teddy bear washing dishes”). While the results aren’t perfect — the looping clips the system generates tend to have artifacts and noise — Google claims that Imagen Video is a step toward a system with a “high degree of controllability” and world knowledge, including the ability to generate footage in a range of artistic styles.
As my colleague Devin Coldewey noted in his piece about Make-A-Video (toptechtrends.com/2022/09/29/meta-make-a-video-ai-achieves-a-new-creepy-state-of-the-art/), text-to-video systems aren’t new. Earlier this year, a group of researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence released CogVideo, which can translate text into reasonably high-fidelity short clips. But Imagen Video appears to be a significant leap over the previous state of the art, showing an aptitude for animating captions that existing systems would have trouble understanding.
“It’s definitely an improvement,” Matthew Guzdial, an assistant professor at the University of Alberta studying AI and machine learning, told TechCrunch via email. “As you can see from the video examples, even though the comms team is selecting the best outputs, there’s still weird blurriness and artifacting. So this definitely is not going to be used directly in animation or TV anytime soon. But it, or something like it, could definitely be embedded in tools to help speed some things up.”
Imagen Video builds on Google’s Imagen (toptechtrends.com/2022/05/23/openai-look-at-our-awesome-image-generator-google-hold-my-shiba-inu/), an image-generating system comparable to OpenAI’s DALL-E 2 (toptechtrends.com/2022/07/20/openai-expands-access-to-dall-e-2-its-powerful-image-generating-ai-system/) and Stable Diffusion (toptechtrends.com/tag/stable-diffusion/). Imagen is what’s known as a “diffusion” model: during training, it learns to “destroy” existing samples of data by progressively corrupting them with noise, then to “recover” the originals by reversing that corruption. Once trained, the model can start from pure noise and denoise its way to an entirely new sample, guided by a text prompt.
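To make the “destroy and recover” loop concrete, here’s a minimal, hypothetical sketch of diffusion training in PyTorch. Everything in it is a stand-in: the tiny fully connected denoiser, the noise schedule values, and the random vectors playing the role of flattened video frames. Real systems like Imagen Video use large U-Net-style models conditioned on the timestep and the text prompt, which this sketch omits for brevity.

```python
import torch
import torch.nn as nn

T = 1000                                           # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)              # noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Toy stand-in for a large text- and timestep-conditioned video U-Net.
denoiser = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def train_step(x0):
    """One step: 'destroy' clean data x0 with noise, learn to 'recover' it."""
    t = torch.randint(0, T, (x0.shape[0],))         # random corruption level
    a = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward process (destroy)
    pred_noise = denoiser(x_t)                      # predict the added noise
    loss = ((pred_noise - noise) ** 2).mean()       # denoising (recover) loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random 64-dim vectors standing in for flattened video data.
for _ in range(100):
    train_step(torch.randn(8, 64))
```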
As the Google research team behind Imagen Video explains in a paper, the system works as a cascade: a base model takes a text description and generates a 16-frame, three-frames-per-second video at 24-by-48-pixel resolution, and a chain of spatial and temporal super-resolution models then upscales the frames and “predicts” the frames in between, producing a final 128-frame, 24-frames-per-second video at 1280×768 (roughly 720p).
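Here’s that cascade sketched as data. Only the base and final stages reflect numbers from the article and paper; the intermediate stage names and sizes are assumptions made up for illustration, since the paper interleaves several temporal (frame-count) and spatial (resolution) super-resolution models in between.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str      # illustrative labels, not Google's terminology
    frames: int
    fps: int
    width: int
    height: int

pipeline = [
    Stage("base",        16,  3,   48,  24),  # from the paper
    Stage("mid_stage_1", 32,  6,  192,  96),  # sizes are assumptions
    Stage("mid_stage_2", 64, 12,  640, 320),  # sizes are assumptions
    Stage("final",      128, 24, 1280, 768),  # from the paper
]

# Print how each hop grows the clip temporally and spatially.
for prev, nxt in zip(pipeline, pipeline[1:]):
    print(f"{prev.name} -> {nxt.name}: "
          f"{nxt.frames / prev.frames:.0f}x frames, "
          f"{nxt.width / prev.width:.1f}x width")
```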
Google says that Imagen Video was trained on 14 million video-text pairs and 60 million image-text pairs, as well as the publicly available LAION-400M image-text data set, which enabled it to generalize to a range of aesthetics. In experiments, the researchers found that Imagen Video could create videos in the style of Van Gogh paintings and watercolors. Perhaps more impressively, they claim that Imagen Video demonstrated an understanding of depth and three-dimensionality, allowing it to create videos like drone flythroughs that rotate around and capture objects from different angles without distorting them.
In a major improvement over the image-generating systems available today, Imagen Video can also render text properly. While both Stable Diffusion and DALL-E 2 struggle to translate prompts like “a logo for ‘Diffusion’” into readable type, Imagen Video renders it without issue — at least judging by the paper.
That’s not to suggest that Imagen Video is without limitations. As is the case with Make-A-Video, even the clips cherry-picked from Imagen Video are jittery and distorted in parts, as Guzdial alluded to, with objects that blend together in physically unnatural — and impossible — ways. To improve on this, the Imagen Video team plans to combine forces with the researchers behind Phenaki, another Google text-to-video system that can turn long, detailed prompts into two-plus-minute videos, albeit at lower quality.
It’s worth peeling back the curtains on Phenaki a bit to see where a collaboration between the teams might lead. While Imagen Video focuses on quality, Phenaki prioritizes coherency and length. The system can turn paragraph-long prompts into films of arbitrary length, from a scene of a person riding a motorcycle to an alien spaceship flying over a futuristic city. Phenaki-generated clips suffer from the same glitches as Imagen Video’s, but it’s remarkable to me how closely they follow the long and nuanced text descriptions that prompted them.
For example, here’s a prompt fed to Phenaki:
Lots of traffic in futuristic city. An alien spaceship arrives to the futuristic city. The camera gets inside the alien spaceship. The camera moves forward until showing an astronaut in the blue room. The astronaut is typing in the keyboard. The camera moves away from the astronaut. The astronaut leaves the keyboard and walks to the left. The astronaut leaves the keyboard and walks away. The camera moves beyond the astronaut and looks at the screen. The screen behind the astronaut displays fish swimming in the sea. Crash zoom into the blue fish. We follow the blue fish as it swims in the dark ocean. The camera points up to the sky through the water. The ocean and the coastline of a futuristic city. Crash zoom towards a futuristic skyscraper. The camera zooms into one of the many windows. We are in an office room with empty desks. A lion runs on top of the office desks. The camera zooms into the lion’s face, inside the office. Zoom out to the lion wearing a dark suit in an office room. The lion wearing looks at the camera and smiles. The camera zooms out slowly to the skyscraper exterior. Timelapse of sunset in the modern city.
And here’s the generated video (embedded in the original post), which tracks the prompt beat by beat.
Back to Imagen Video, the researchers also note that the data used to train the system contained problematic content, which could result in Imagen Video producing graphically violent or sexually explicit clips. Google says it won’t release the Imagen Video model or source code “until these concerns are mitigated.”
Still, with text-to-video tech progressing at a rapid clip, it might not be long before an open-source model emerges — both supercharging creativity and presenting an intractable challenge when it comes to deepfakes and misinformation.