Keeping up with an industry as fast-moving as [AI](toptechtrends.com/category/artificial-intelligence/) is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week in AI, we saw OpenAI, Anthropic, Google, Inflection, Microsoft, Meta and Amazon voluntarily [commit to pursuing shared AI safety and transparency goals](toptechtrends.com/2023/07/21/top-ai-companies-visit-the-white-house-to-make-voluntary-safety-commitments/) ahead of a planned Executive Order from the Biden administration.
As my colleague Devin Coldewey writes, there’s no rule or enforcement being proposed here — the practices agreed to are purely voluntary. But the pledges indicate, in broad strokes, the AI regulatory approaches and policies that each vendor might find amenable in the U.S. as well as abroad.
Among other commitments, the companies volunteered to conduct security tests of AI systems before release, share information on AI mitigation techniques and develop watermarking techniques that make AI-generated content easier to identify. They also said that they would invest in cybersecurity to protect private AI data and facilitate the reporting of vulnerabilities, as well as prioritize research on societal risks like systemic bias and privacy issues.
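Of those commitments, watermarking is perhaps the least settled technically, and none of the signatories has said exactly how they would implement it for text. For a rough sense of how it can work, here's a toy sketch of a "green-list" logit-biasing scheme explored in recent academic work on LLM watermarking; random logits stand in for an actual model, and the constants are arbitrary:

```python
# Toy illustration of "green-list" text watermarking. Real systems hook this
# into an LLM's decoder; here the "model" is just random logits so the script
# runs on its own.
import numpy as np

VOCAB_SIZE = 1000
GREEN_FRACTION = 0.5   # fraction of the vocabulary favored at each step
BIAS = 4.0             # logit boost applied to green-listed tokens

def green_list(prev_token: int) -> np.ndarray:
    """Pseudo-randomly pick the 'green' tokens, seeded by the previous token."""
    rng = np.random.default_rng(prev_token)
    return rng.permutation(VOCAB_SIZE)[: int(GREEN_FRACTION * VOCAB_SIZE)]

def generate(n_tokens: int, watermark: bool, seed: int = 0) -> list[int]:
    rng = np.random.default_rng(seed)
    tokens = [0]
    for _ in range(n_tokens):
        logits = rng.normal(size=VOCAB_SIZE)        # stand-in for model logits
        if watermark:
            logits[green_list(tokens[-1])] += BIAS  # nudge sampling toward green tokens
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return tokens[1:]

def detect(tokens: list[int]) -> float:
    """Return a z-score: how far the green-token count sits above chance."""
    hits = sum(t in set(green_list(prev)) for prev, t in zip([0] + tokens, tokens))
    n = len(tokens)
    expected = GREEN_FRACTION * n
    std = np.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

print("watermarked z-score:", round(detect(generate(200, watermark=True)), 1))
print("plain z-score:      ", round(detect(generate(200, watermark=False)), 1))
```

Note that detection only works if you know the seeding scheme, which is why proposals like this generally assume vendor-run detection tools or some form of disclosure.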
The commitments are an important step, to be sure — even if they’re not enforceable. But one wonders if there are ulterior motives on the part of the undersigners.
Reportedly, OpenAI drafted an internal policy memo that shows the company supports the idea of requiring government licenses from anyone who wants to develop AI systems. CEO Sam Altman first raised the idea at a U.S. Senate hearing in May, during which he backed the creation of an agency that could issue licenses for AI products — and revoke them should anyone violate set rules.
In a recent interview with the press, Anna Makanju, OpenAI’s VP of global affairs, insisted that OpenAI wasn’t “pushing” for licenses and that the company only supports licensing regimes for AI models more powerful than OpenAI’s current GPT-4. But government-issued licenses, should they be implemented in the way that OpenAI proposes, would set the stage for a potential clash with startups and open source developers who may see them as an attempt to make it more difficult for others to break into the space.
Devin said it best, I think, when he described it to me as “dropping nails on the road behind them in a race.” At the very least, it illustrates the two-faced nature of AI companies that seek to placate regulators while shaping policy in their favor (in this case, putting small challengers at a disadvantage) behind the scenes.
It’s a worrisome state of affairs. But, if policymakers step up to the plate, there’s hope yet for sufficient safeguards without undue interference from the private sector.
Here are other AI stories of note from the past few days:
- [OpenAI’s trust and safety head steps down](toptechtrends.com/2023/07/21/openais-head-of-trust-and-safety-dave-willner-steps-down/): Dave Willner, an industry veteran who was OpenAI’s head of trust and safety, announced in a post on LinkedIn that he’s left the job and transitioned to an advisory role. OpenAI said in a statement that it’s seeking a replacement and that CTO Mira Murati will manage the team on an interim basis.
- [Customized instructions for ChatGPT](toptechtrends.com/2023/07/20/openai-launches-customized-instructions-for-chatgpt/): In more OpenAI news, the company has launched custom instructions for [ChatGPT](toptechtrends.com/2023/07/20/chatgpt-everything-you-need-to-know-about-the-open-ai-powered-chatbot/) users so that they don’t have to write the same instruction prompts to the chatbot every time they interact with it. (A rough sketch of approximating this behavior through the API appears after this list.)
- [Google news-writing AI](toptechtrends.com/2023/07/20/google-reportedly-testing-ai-tool-write-news-articles/): Google is testing a tool that uses AI to write news stories and has started demoing it to publications, according to a new report from The New York Times. The tech giant has pitched the AI system to The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp.
- [Apple tests a ChatGPT-like chatbot](toptechtrends.com/2023/07/19/apple-is-testing-chatgpt-like-ai-chatbot/): Apple is developing AI to challenge OpenAI, Google and others, according to a new report from Bloomberg’s Mark Gurman. Specifically, the tech giant has created a chatbot that some engineers are internally referring to as “Apple GPT.”
- [Meta releases Llama 2](toptechtrends.com/2023/07/18/meta-releases-llama-2-a-more-helpful-set-of-text-generating-models/): Meta unveiled a new family of AI models, Llama 2, designed to drive apps along the lines of OpenAI’s [ChatGPT](toptechtrends.com/tag/chatgpt/), [Bing Chat](toptechtrends.com/2023/07/18/microsoft-brings-bing-chat-to-the-enterprise/) and other modern chatbots. Meta claims that Llama 2, which was trained on a mix of publicly available data, performs significantly better than the previous generation of Llama models.
- [Authors protest against generative AI](toptechtrends.com/2023/07/18/thousands-of-authors-sign-letter-urging-ai-makers-to-stop-stealing-books/): Generative AI systems like ChatGPT are trained on publicly available data, including books — and not all content creators are pleased with the arrangement. In an open letter signed by more than 8,500 authors of fiction, non-fiction and poetry, the tech companies behind large language models like [ChatGPT](toptechtrends.com/2023/07/13/chatgpt-everything-you-need-to-know-about-the-open-ai-powered-chatbot/), Bard, LLaMa and more are taken to task for using their writing without permission or compensation.
- [Microsoft brings Bing Chat to the enterprise](toptechtrends.com/2023/07/18/microsoft-brings-bing-chat-to-the-enterprise/): At its annual Inspire conference, Microsoft announced Bing Chat Enterprise, a version of its Bing Chat AI-powered chatbot with business-focused data privacy and governance controls. With Bing Chat Enterprise, chat data isn’t saved, Microsoft can’t view a customer’s employee or business data and customer data isn’t used to train the underlying AI models.
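About those custom instructions: they’re a ChatGPT app feature rather than an API parameter, but developers can approximate the “write it once” behavior by attaching a standing system message to every request. A minimal sketch using the OpenAI Python library as it existed at the time; the model name and instruction text are placeholders, not anything OpenAI prescribes:

```python
# Rough approximation of ChatGPT's custom instructions via the API: the
# "instructions" are simply prepended as a system message on every call,
# so you never retype them in individual prompts.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

CUSTOM_INSTRUCTIONS = (
    "You are answering for a Python developer. "
    "Prefer short answers with runnable code snippets."
)

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # persists across calls
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message["content"]

print(ask("How do I read a CSV file?"))
```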
More machine learnings
Technically this was also a news item, but it bears mentioning here in the research section. Fable Studios, which previously made CG and 3D short films for VR and other media, [showed off an AI model it calls Showrunner](toptechtrends.com/2023/07/18/maybe-showing-off-an-ai-generated-fake-tv-episode-during-a-writers-strike-is-a-bad-idea/) that (it claims) can write, direct, act in and edit an entire TV show — in their demo, it was South Park.
I’m of two minds on this. On one hand, I think pursuing this at all, let alone during a huge Hollywood strike that involves issues of compensation and AI, is in rather poor taste. Though CEO Edward Saatchi said he believes that the tool puts power in the hands of creators, the opposite is also arguable. At any rate it was not received particularly well by people in the industry.
On the other hand, if someone on the creative side (which Saatchi is) does not explore and demonstrate these capabilities, then they will be explored and demonstrated by others with less compunction about putting them to use. Even if the claims Fable makes are a bit expansive for what they actually showed (which has serious limitations), it is like the original DALL-E in that it prompted discussion and indeed worry, even though it was no replacement for a real artist. AI is going to have a place in media production one way or the other — but for a whole sack of reasons it should be approached with caution.
On the policy side, a little while back we had the National Defense Authorization Act going through with (as usual) some really ridiculous policy amendments that have nothing to do with defense. But among them was one addition requiring the government to host an event where researchers and companies can do their best to detect AI-generated content. This kind of thing is definitely approaching “national crisis” levels, so it’s probably good this got slipped in there.
Over at Disney Research, they’re always trying to find a way to bridge the digital and the real — for park purposes, presumably. In this case they have developed a way to map virtual movements of a character or motion capture (say for a CG dog in a film) onto an actual robot, even if that robot is a different shape or size. It relies on two optimization systems each informing the other of what is ideal and what is possible, sort of like a little ego and super-ego. This should make it much easier to make robot dogs act like regular dogs, but of course it’s generalizable to other stuff as well.
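This isn’t Disney’s algorithm, just a toy illustration of the alternating pattern the description suggests: one step chases the reference motion (what’s ideal), the other projects the result onto the robot’s physical limits (what’s possible). Everything here, from the joint limits to the trajectory, is made up:

```python
# Toy sketch of two optimizers "informing each other": a style step pulls a
# robot joint trajectory toward the artist's reference motion, and a
# feasibility step projects it onto the robot's limits.
import numpy as np

T = 200
t = np.linspace(0, 2 * np.pi, T)
reference = 1.2 * np.sin(3 * t)          # artist/mocap joint-angle trajectory (radians)

JOINT_MIN, JOINT_MAX = -0.8, 0.8         # the robot can't reach the full range
MAX_STEP = 0.05                          # max change per timestep (velocity limit)

def style_step(traj: np.ndarray, ref: np.ndarray, lr: float = 0.5) -> np.ndarray:
    """Move the trajectory toward the reference (gradient step on tracking error)."""
    return traj + lr * (ref - traj)

def feasibility_step(traj: np.ndarray) -> np.ndarray:
    """Project onto what the robot can do: joint limits plus a velocity cap."""
    traj = np.clip(traj, JOINT_MIN, JOINT_MAX)
    for i in range(1, len(traj)):        # enforce the per-step velocity limit
        delta = np.clip(traj[i] - traj[i - 1], -MAX_STEP, MAX_STEP)
        traj[i] = traj[i - 1] + delta
    return traj

trajectory = np.zeros(T)
for _ in range(50):                      # the two steps alternate until they agree
    trajectory = style_step(trajectory, reference)
    trajectory = feasibility_step(trajectory)

error = np.abs(trajectory - reference).mean()
print(f"mean tracking error after projection: {error:.3f} rad")
```

The result is the closest motion the robot can actually execute, which is the gist of retargeting a stylized character onto hardware of a different shape or size.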
And here’s hoping AI can help us steer the world away from sea-bottom mining for minerals, because that is definitely a bad idea. A multi-institutional study put AI’s ability to sift signal from noise to work predicting the location of valuable minerals around the globe. As they write in the abstract:
In this work, we embrace the complexity and inherent “messiness” of our planet’s intertwined geological, chemical, and biological systems by employing machine learning to characterize patterns embedded in the multidimensionality of mineral occurrence and associations.
The study actually predicted and verified locations of uranium, lithium, and other valuable minerals. And how about this for a closing line: the system “will enhance our understanding of mineralization and mineralizing environments on Earth, across our solar system, and through deep time.” Awesome.
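The paper’s actual pipeline is far richer than this, but the core intuition, that minerals which tend to occur together point you toward localities where an unrecorded mineral is likely, fits in a few lines. A toy, entirely made-up example in the spirit of mineral association analysis:

```python
# Toy "mineral association" recommender: localities that host a mineral's
# frequent companions are promising places to look for it. Data is invented.
import numpy as np

minerals = ["quartz", "spodumene", "lepidolite", "uraninite", "calcite"]
localities = ["site_A", "site_B", "site_C", "site_D"]

# Binary occurrence matrix: rows are localities, columns are minerals (1 = recorded).
occurrence = np.array([
    [1, 1, 1, 0, 0],   # site_A: a lithium-bearing pegmatite assemblage
    [1, 1, 0, 0, 0],   # site_B: looks similar, but lepidolite not yet recorded
    [1, 0, 0, 1, 1],   # site_C
    [0, 0, 0, 1, 1],   # site_D
])

# Mineral-to-mineral association: how often two minerals share a locality.
co_occurrence = occurrence.T @ occurrence
np.fill_diagonal(co_occurrence, 0)

# Score unrecorded (locality, mineral) pairs by the associations of what's already there.
scores = (occurrence @ co_occurrence).astype(float)
scores[occurrence == 1] = -np.inf        # ignore minerals already recorded

loc, mineral = np.unravel_index(np.argmax(scores), scores.shape)
print(f"Most promising unrecorded occurrence: {minerals[mineral]} at {localities[loc]}")
# With this toy data, lepidolite (a lithium mineral) at site_B comes out on top,
# loosely mirroring how association patterns can flag likely lithium occurrences.
```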