Technology · May 10, 2023

Google brings new generative models to Vertex AI, including Imagen

To paraphrase Andreessen Horowitz, generative AI, particularly on the text-to-art side, is eating the world. At least, investors believe so — judging by the billions of dollars they’ve poured into startups developing AI that creates text and images from prompts.

Not to be left behind, Big Tech is investing in its own generative AI art solutions, whether through partnerships with the aforementioned startups or in-house R&D. (See: Microsoft teaming up with OpenAI for Image Creator.) Google, leveraging its robust R&D wing, has decided to go the latter route, commercializing its work in generative AI to compete with the platforms already out there.

Today at its annual I/O developer conference, Google announced new AI models heading to Vertex AI, its fully managed AI service, including a text-to-image model called Imagen. Imagen, which Google previewed via its AI Test Kitchen app last November, can generate and edit images as well as write captions for existing images.

“Any developer can use this technology using Google Cloud,” Nenshad Bardoliwalla, director of Vertex AI at Google Cloud, told TechCrunch in a phone interview. “You don’t need to be a data scientist or developer.”

Imagen in Vertex

Getting started with Imagen in Vertex is, indeed, a relatively straightforward process. A UI for the model is accessible from what Google calls the Model Garden, a selection of Google-developed models alongside curated open source models. Within the UI, similar to generative art platforms such as Midjourney and NightCafe, customers can enter prompts (e.g. “a purple handbag”) to have Imagen generate a handful of candidate images.
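
For developers who skip the Model Garden UI, the same flow can in principle be driven from the Vertex AI Python SDK. The sketch below is illustrative only: Imagen access was limited at the time of writing, and the model name ("imagegeneration@002"), the preview module path and the project/location placeholders are assumptions that may differ from the shipped service.

```python
# Hedged sketch: generating candidate images from a prompt via the Vertex AI
# Python SDK (google-cloud-aiplatform). Model name and preview path are assumptions.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholders

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
images = model.generate_images(prompt="a purple handbag", number_of_images=4)

# Save each candidate locally for review or further editing.
for i, image in enumerate(images):
    image.save(location=f"handbag_{i}.png")
```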

Editing tools and follow-up prompts refine the Imagen-generated images, for example adjusting the color of the objects depicted in them. Vertex also offers upscaling for sharpening images, in addition to fine-tuning that allows customers to steer Imagen toward certain styles and preferences.
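
The editing and upscaling steps described above could look roughly like the following continuation of the earlier sketch. The method names (edit_image, upscale_image) and their parameters are assumptions about the preview SDK, not confirmed details of Google's service.

```python
# Hedged sketch: refining a previously generated candidate with a follow-up
# prompt, then upscaling the result. API names are assumptions.
from vertexai.preview.vision_models import Image, ImageGenerationModel

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
base = Image.load_from_file("handbag_0.png")  # a candidate saved earlier

# Follow-up prompt to adjust the color of the object in the existing image.
edited = model.edit_image(base_image=base, prompt="make the handbag red")[0]

# Upscale the edited result for a sharper final image.
upscaled = model.upscale_image(image=edited)
upscaled.save(location="handbag_red_upscaled.png")
```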

As alluded to earlier, Imagen can also generate captions for images, optionally translating those captions using Google Translate. To comply with privacy regulations like GDPR, generated images that aren’t saved are deleted within 24 hours, Bardoliwalla says.
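
A captioning call might look something like the sketch below. The ImageTextModel class, the "imagetext@001" model name and the get_captions parameters are assumptions about the preview SDK.

```python
# Hedged sketch: asking the Imagen captioning model to describe an existing image.
from vertexai.preview.vision_models import Image, ImageTextModel

caption_model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("handbag_0.png")

# Request a couple of candidate captions in English.
captions = caption_model.get_captions(image=image, number_of_results=2, language="en")
for caption in captions:
    print(caption)
```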

“We make it very easy for people to start working with generative AI and their images,” he added.

Of course, there’s a host of ethical and legal challenges associated with all forms of generative AI — no matter how polished the UI. AI models like Imagen “learn” to generate images from text prompts by “training” on existing images, which often come from data sets that were scraped together by trawling public image hosting websites. Some experts suggest that training models using public images, even copyrighted ones, will be covered by the fair use doctrine in the U.S. But it’s a matter that’s unlikely to be settled anytime soon.

Google I/O 2023 Vertex AI

Google’s Imagen model in action, in Vertex AI.

Case in point: two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images. Stock image supplier Getty Images has separately taken Stability AI to court for reportedly using millions of images from its site without permission to train the art-generating model Stable Diffusion.

I asked Bardoliwalla whether Vertex customers should be concerned that Imagen might’ve been trained on copyrighted materials. Understandably, they might be deterred from using it if that were the case. 

Bardoliwalla didn’t say outright that Imagen wasn’t trained on copyrighted images — only that Google conducts broad “data governance reviews” to “look at the source data” inside its models to ensure that they’re “free of copyright claims.” (The hedged language doesn’t come as a massive surprise considering that the original Imagen was trained on a public data set, LAION, known to contain copyrighted works.)

“We have to make sure that we’re completely within the balance of respecting all of the laws that pertain to copyright information,” Bardoliwalla continued. “We’re very clear with customers that we provide them with models that they can feel confident they can use in their work, and that they own the IP generated from their trained models in a completely secure fashion.”

Owning the IP is another matter. In the U.S. at least, it isn’t clear whether AI-generated art is copyrightable.

One solution — not to the problem of ownership, per se, but to questions around copyrighted training data — is allowing artists to “opt out” of AI training altogether. AI startup Spawning is attempting to establish industry-wide standards and tools for opting out of generative AI tech. Adobe is pursuing its own opt-out mechanisms and tooling. So is DeviantArt, which in November launched an HTML-tag-based protection to prohibit software robots from crawling pages for images.

Google I/O 2023 Vertex AI

Image Credits: Google

Google doesn’t offer an opt-out option. (To be fair, neither does one of its chief rivals, OpenAI.) Bardoliwalla didn’t say whether this might change in the future, only that Google is “inordinately concerned” with making sure that it trains models in a way that’s “ethical and responsible.”

That’s a bit rich, I think, coming from a company that canceled an outside AI ethics board, forced out prominent AI ethics researchers and is curtailing the publication of AI research to “compete and keep knowledge in house.” But interpret Bardoliwalla’s words as you will.

I also asked Bardoliwalla about steps Google’s taking, if any, to limit the amount of toxic or biased content that Imagen creates — another problem with generative AI systems. Just recently, researchers at AI startup Hugging Face and Leipzig University published a tool demonstrating that models like Stable Diffusion and OpenAI’s DALL-E 2 tend to produce images of people that look white and male, especially when asked to depict people in positions of authority.

Bardoliwalla had a more detailed answer prepped for this question, claiming that every API call to Vertex-hosted generative models is evaluated for “safety attributes” including toxicity, violence and obscenity. Vertex scores models on these attributes and, for certain categories, blocks the response or gives customers the choice as to how to proceed, Bardoliwalla said. 
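
In practice, a caller would inspect those per-call scores before using a response. The sketch below mocks what that might look like; the "safetyAttributes" block with parallel "categories"/"scores" lists and a "blocked" flag is an assumption modeled on Vertex AI text-model responses at the time, and the real field names and thresholds may differ.

```python
# Hedged sketch: reviewing safety scores attached to a prediction before use.
# The response shape is an assumption, shown here with a mocked payload.
def review_safety(prediction: dict, threshold: float = 0.5) -> None:
    attrs = prediction.get("safetyAttributes", {})
    if attrs.get("blocked"):
        print("Response was blocked by the service.")
        return
    for category, score in zip(attrs.get("categories", []), attrs.get("scores", [])):
        flag = "review" if score >= threshold else "ok"
        print(f"{category}: {score:.2f} [{flag}]")

# Example with a mocked prediction payload:
review_safety({"safetyAttributes": {"categories": ["Violent", "Toxic"],
                                    "scores": [0.1, 0.7],
                                    "blocked": False}})
```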

“We have a very good sense from our consumer properties of the type of content that may not be the kind of content that our customers are looking for these generative AI models to produce,” he continued. “This is an area of significant investment as well as market leadership for Google — for us to make sure that our customers are able to produce the results that they’re looking for that doesn’t harm or damage their brand value.”

To that end, Google is launching reinforcement learning from human feedback (RLHF) as a managed service offering in Vertex, which it claims will help organizations maintain model performance over time and deploy safer — and measurably more accurate — models in production. RLHF, a popular technique in machine learning, trains a “reward model” directly from human feedback, like asking contract workers to rate responses from an AI chatbot. It then uses this reward model to optimize a generative AI model along the lines of Imagen.
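
To make the two stages concrete, here is a deliberately toy sketch of the idea: fit a reward signal from human ratings, then use it to prefer higher-reward outputs. This is not Google's managed service or a real RLHF trainer (which would train a neural reward model and update the generator's weights, e.g. with PPO); it only illustrates the shape of the loop.

```python
# Toy illustration of the RLHF stages described above (not a real implementation).
from typing import Callable, List, Tuple

def fit_reward_model(rated: List[Tuple[str, float]]) -> Callable[[str], float]:
    """Stand-in 'reward model': scores a response by word overlap with rated
    examples, weighted by the human rating. A real system trains a network."""
    def reward(response: str) -> float:
        words = set(response.lower().split())
        score = 0.0
        for example, rating in rated:
            score += rating * len(words & set(example.lower().split()))
        return score
    return reward

def pick_best(candidates: List[str], reward: Callable[[str], float]) -> str:
    """Full RLHF would update the generator's weights; here we just rerank."""
    return max(candidates, key=reward)

ratings = [("a clear, polite answer", 1.0), ("an offensive reply", -1.0)]
reward = fit_reward_model(ratings)
print(pick_best(["a polite and clear answer", "an offensive reply"], reward))
```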

Google I/O 2023 Vertex AI

Image Credits: Google

Bardoliwalla says that the amount of fine-tuning needed through RLHF will depend on the scope of the problem a customer’s trying to solve. There’s debate within academia as to whether RLHF is always the right approach — AI startup Anthropic, for one, argues that it isn’t, in part because RLHF can entail hiring scores of low-paid contractors who are forced to rate extremely toxic content. But Google feels differently.

“With our RLHF service, a customer can choose a modality and the model and then rate responses that come from the model,” Bardoliwalla said. “Once they submit those responses to the reinforcement learning service, it tunes the model to generate better responses that are aligned with … what an organization is looking for.”

New models and tools

Beyond Imagen, Google announced today that two other generative AI models are now available to select Vertex customers: Codey and Chirp.

Codey, Google’s answer to GitHub’s Copilot, can generate code in over 20 languages including Go, Java, JavaScript, Python and TypeScript. Codey can suggest the next few lines based on the context of code entered into a prompt or, like OpenAI’s ChatGPT, answer questions about debugging, documentation and high-level coding concepts.
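
Calling a Codey-style model from the SDK might look like the sketch below. Access was restricted to trusted testers at the time of writing, and the model name ("code-bison@001") and preview module path are assumptions rather than confirmed details.

```python
# Hedged sketch: asking a Codey-style code model to generate code via the
# Vertex AI Python SDK. Model name and module path are assumptions.
import vertexai
from vertexai.preview.language_models import CodeGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholders

model = CodeGenerationModel.from_pretrained("code-bison@001")
response = model.predict(
    prefix="Write a Python function that reverses a string.",
    max_output_tokens=256,
)
print(response.text)
```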

Google I/O 2023 Vertex AI

Image Credits: Google

As for Chirp, it’s a speech model trained on “millions” of hours of audio that supports more than 100 languages and can be used to caption videos, offer voice assistance and generally power a range of speech tasks and apps.

In a related announcement at I/O, Google launched the Embeddings API for Vertex in preview, which converts text and image data into vector representations that capture semantic relationships. Google says it can be used to build functionality like semantic search, text classification, Q&A chatbots grounded in an organization’s data, sentiment analysis and anomaly detection.
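
A minimal semantic-search sketch using text embeddings could look like the following. The TextEmbeddingModel class and "textembedding-gecko@001" model name are assumptions about the preview SDK; the ranking logic (cosine similarity over returned vectors) is generic.

```python
# Hedged sketch: embedding documents and a query, then ranking by cosine similarity.
import numpy as np
import vertexai
from vertexai.preview.language_models import TextEmbeddingModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholders

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
docs = ["How do I reset my password?", "Quarterly revenue grew 8%."]
query = "password help"

doc_vecs = [np.array(e.values) for e in model.get_embeddings(docs)]
query_vec = np.array(model.get_embeddings([query])[0].values)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pick the document whose embedding is closest to the query embedding.
best = max(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
print("Best match:", docs[best])
```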

Codey, Imagen, the Embeddings API for images and RLHF are available in Vertex AI to “trusted testers,” Google says. Chirp, the Embeddings API and Generative AI Studio, a suite for interacting with and deploying AI models, meanwhile, are accessible in preview in Vertex to anyone with a Google Cloud account.

Read more about Google I/O 2023 on TechCrunch

Google brings new generative models to Vertex AI, including Imagen by Kyle Wiggers originally published on TechCrunch
