Fresh from telling US lawmakers he’s a fan of regulation and that laws are needed to mitigate the risks around artificial intelligence (and, indeed, calling for an international regulatory body for AI: toptechtrends.com/2023/05/22/openai-leaders-propose-international-regulatory-body-for-ai/), OpenAI CEO Sam Altman is on a tour of Europe this week to meet European regulators and warn against, er, too much regulation of AI.
This is a familiar dance. Big Tech CEOs love to claim they support regulation — but, goodness, just not that regulation. The only rules they’re happy to take are the rules they suggest themselves. So it turns out that shiny new tech giants with babyfaced CEOs are much like the old, tarnished platform giants in this regard.
But onward to Altman’s tour…
So far, the OpenAI CEO’s dash around European capitals has produced a string of flesh-pressing photo ops with heads of government in Spain, Poland, France and the UK.
Why did Spain’s prime minister, Pedro Sanchez, get to be first on Altman’s charm offensive/hit list? The salient detail here is that the Southern European nation takes up the rotating six-month presidency of the Council of the EU this summer, which will give it a lot of leeway to shape discussions at a crucial time in negotiations over the bloc’s AI rulebook. Spain has also said it’s eager to get the file over the line during its tenure.
Alongside this high-profile flesh-pressing parade, Altman has been sounding off in public, and his tour has duly generated a bunch of headlines warning that OpenAI could shut up shop in the region because of European regulation. So, er, nil points for subtlety.
Tech watchers may recall similar whistlestop tours in recent years, undertaken by Meta’s Mark Zuckerberg and Google’s Sundar Pichai, as last season’s tech giants sought to lobby heads of government on major pieces of EU digital policy, such as the Digital Services Act and Digital Markets Act.
So OpenAI is reaching for the familiar Big Tech playbook here, albeit repeating the EU lobbying pattern on what’s (apparently) a relatively shoestring budget* vs the multiple millions of dollars tech giants like Google and Meta routinely spend annually on lobbying Brussels (toptechtrends.com/2021/08/31/us-giants-top-tech-industrys-100m-a-year-lobbying-blitz-in-eu/).
Altman also appears to have gone for a bit of a two-track approach. As well as schmoozing regional heads of government who, in the case of France, Poland and Spain, have influence over the final shape of the EU’s AI rulebook via the Council, he’s been lobbying loudly in public too: taking part in a discussion event at University College London, where he used an on-stage interview to discuss his preference for regulation that was “something between the traditional European approach and the traditional U.S. approach”, per a write-up in Time.
He also spilled more feels to attendant members of the press, telling Time and Reuters that his company might just stop operating in Europe if it could not comply with incoming rules for AI. “We’re gonna try to comply,” Time reported Altman telling it. But he griped that he had “a lot” of criticisms of the wording of the EU AI Act, which presumably means he’s unhappy with amendments recently proposed by lawmakers in the European Parliament.
Earlier this month, Members of the European Parliament on two key committees backed a series of amendments to the Commission’s original April 2021 proposal for a risk-based framework for regulating AI (toptechtrends.com/2021/04/21/europe-lays-out-plan-for-risk-based-ai-rules-to-boost-trust-and-uptake/) that aim to ensure general purpose AI, foundation models and generative AI do not fall outside the rules.
As we reported at the time (toptechtrends.com/2023/05/11/eu-ai-act-mep-committee-votes/), MEPs backed obligations on providers of foundation models to apply safety checks, data governance measures and risk mitigations prior to putting their models on the market, including obligating them to consider “foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law”.
The amendments would also commit foundation model makers to reduce the energy consumption and resource use of their systems and to register them in an EU database that will be established by the regulation. Providers of generative AI technologies (such as OpenAI’s ChatGPT), meanwhile, would get transparency obligations, meaning they must ensure users are informed that content is machine generated; apply “adequate safeguards” in relation to content their systems generate; and provide a summary of any copyrighted materials used to train their AIs.
So Altman appears to be taking issue with all that. (We’ve reached out to OpenAI about its position on the EU AI Act and will update this report with any response.)
In further remarks to Reuters, Altman also said: “The current draft of the EU AI Act would be over-regulating but we have heard it’s going to get pulled back” — which, again, appears to be a reference to the parliament’s proposed amendments. And sounds like an attempt to lobby the Council, a body composed of representatives of EU Member States’ governments, to push back in the upcoming trilogue discussions with MEPs in order that the amendments “get pulled back”.
I mean, it’s a safe bet that Altman was repeating his concerns about EU “over-regulation” to Spain’s prime minister Pedro Sanchez, Poland’s prime minister Mateusz Morawiecki and France’s president Emmanuel Macron in his face time with them this week.
Helpfully, the Polish government’s tweet to mark the visit confirms that “issues related to legal regulations regarding the use of AI” were on the discussion agenda in Warsaw, alongside talk of “opportunities for Polish companies to participate” in the development of AI… (So, er, quid pro quo guys! Do you want those OpenAI engineer jobs or not?!)
Given the timing of Altman’s trip to Europe, OpenAI may also have its eye on a plenary vote in parliament, expected early next month, that will confirm MEPs’ negotiating mandate with the Council on the AI Act file. It may therefore be hoping its lobbying of heads of government will translate into pressure on certain political factions in the parliament to vote against the amendments backed by the two key committees.
This, friends, is how the EU legislative sausage gets made!
Altman’s tour probably won’t end here either. We’re told the EU’s internal market commissioner, Thierry Breton, is expecting to meet with Altman in the coming weeks.
But why is OpenAI’s CEO taking time out from his eyeball-taxing tour of EU Member State government bigwigs to meet with UK prime minister Rishi Sunak, in a meeting that took place alongside execs from a couple of AI rivals (Google-DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei)?
The UK does of course remain a major economy in Europe. Plus there have been some rumors that OpenAI has been considering setting up a local HQ in the country, while Anthropic has just announced its own London office. But, on the regulatory side at least, the UK is no longer a member of the EU, which means it has no lawmakers in Brussels who can shape the EU’s AI Act. So it’s a relative minnow in AI rule-making terms.
Add to that, Sunak’s government has signalled it’s not planning any new domestic legislation to regulate AI. A recent government white paper (toptechtrends.com/2023/03/29/uk-ai-white-paper/) laid out its preferred approach of relying upon existing regulatory bodies, such as the competition authority and privacy watchdog, to provide guidance on the safe development of AI, rather than legislating bespoke guardrails to regulate uses of the technology.
So this meeting is certainly the odd one out for Altman, not least because he was not the only AI CEO in the room.
Sunak’s tweet about the meeting offers only a self-serving observation that discussions focused on “how the UK can provide international leadership on AI”. (We’ve reached out to Number 10 Downing Street with questions and will update this report with any response.)
DeepMind’s Hassabis also tweeted about having a “good conversation” on “developing AI responsibly”, before offering the usual technosolutionist promo claim: “AI has the potential to improve life dramatically, transform industries, deliver scientific and medical breakthroughs if government and industry work together.”
(Asked about the meeting, Google declined public comment but in background remarks it suggested the participants shared a strong conviction on the promise of AI and important challenges that will require international action, adding that it expects more dialogue as things move forward.)
Sunak’s meeting with AI CEOs may offer a hint of a third strategic track for these tech execs’ lobbying of governments and regulators: one that’s focused on trying to defer regulation and dazzle lawmakers with talk of flashy potential and big future fears, while pandering to political self-interest (local jobs!) and provincial self-importance.
The goal is to reframe what responsible AI development means — by zooming attention out, not in — onto talk of achieving consensus on some broad-brush international principles/standards, rather than having lots of prescriptive local rules. And doing that at the same time as distracting lawmakers (and the media) with talk of “superintelligent” AIs that don’t actually exist. (To wit: Time’s report of Altman’s London talk covers a “handful” of protestors who are photographed holding signs outside the venue apparently protesting such hypothetical superintelligence.)
So the strategy is really about drawing lawmakers’ attention away from harms that already exist and are demonstrably flowing from use of current AI systems — whether that’s mass copyright infringement, supercharged disinformation, systematic privacy abuse, speech and safety issues or indeed economic concerns related to the impact of generative automation on all sorts of jobs — with specious talk of existential risks to human civilization posed by non-existent AGI (artificial general intelligence) so that regulators expend their (limited) bandwidth chasing AI ghosts.
Earlier this week (toptechtrends.com/2023/05/22/openai-leaders-propose-international-regulatory-body-for-ai/), OpenAI published a blog post entitled “Governance of superintelligence” in which current-gen AI risks (i.e. the ones which do actually exist) were framed as a secondary concern, whereas the company pressed for the spotlight to be trained on risks that are entirely theoretical, writing: “We must mitigate the risks of today’s AI technology too but superintelligence will require special treatment and coordination.”
It’s notable that across the dozens of blog posts OpenAI has penned over the years it hasn’t found time to write an equivalent post setting out how it thinks (actual) AI should be governed. (The closest it’s come is a pair of blog posts: one from last June, about “best practices” for deploying large language models, which talks about a “preliminary set of best practices” to “mitigate unintentional harm” and “prohibit misuse”; and one from last month, on “safety”, which was published in response to a regulatory intervention in the EU, after Italy’s data protection watchdog ordered the service suspended over a raft of suspected infringements of the General Data Protection Regulation.)
Meanwhile, asked by a US Senate committee earlier this month to take a stab at defining how AI should be regulated, Altman offered compute power or model capability as one way lawmakers might draw a line for where AI systems should be licensed. But his overall remarks suggested a view, contrary to all the headlines that duly reported him calling for regulation, that most AI systems should get a carve-out from the rules. “I think there are very different levels here,” he suggested. “And I think it’s important that any new approach, any new law does not stop the innovation from happening.”
The problem for Altman, and the other AI CEOs in the room who want the kind of general free licence that existed for social media firms when they were scaling into platform giants without prescriptive rules cramping their style, is that the EU is already far down the path of regulating AI, and doing so in a way that looks set to bring in far more specific rules for existing technologies than the tech bros are comfortable with.
Add to that, if Brussels gets its risk-based AI framework in place first, it could end up setting the de facto rulebook for the rest of the world on AI. And Altman at least looks keen to avoid the fabled “Brussels effect” setting the tone at his AI party.
So in meeting Sunak, most likely, the OpenAI CEO is seeking to drive a bigger wedge between the UK and the EU on AI regulation — by encouraging the former towards support for some less prescriptive (fuzzier) international standards which can be scoped by the AI companies themselves.
As such, the meeting may also represent a third strand of how the company is trying to apply pressure on EU lawmakers at a key point in the bloc’s co-legislative process. Since, if one major European economy (the UK) is seen to be taking a different direction and backing looser international ‘standards’ vs specific rules, it might cause EU lawmakers to have second thoughts and doubt the full-frame approach before the file is sealed. (And as they say in Brussels, nothing is decided until everything is decided. So there’s plenty still to play for.)
If this is indeed his game, Altman should be aware that EU lawmakers are a wily lot. Nor do they let the grass grow on their patch. So of course the bloc’s leaders have already involved themselves in international standards-making for AI, including announcing, earlier this week, an initiative (joined by Google) to create an “AI Pact” to act as a stop-gap before the full-fat AI rules come in.
The EU’s digital strategy chief, Margrethe Vestager, has also pressed for G7 nations to back internationally agreed guardrails, seemingly with some success — in light of the leaders agreeing last weekend to launch the “Hiroshima AI Process” and work towards devising AI guardrails, including for generative AI. They also called for the development and adoption of technical standards to keep artificial intelligence “trustworthy” (a choice of term that explicitly echoes the EU’s long-standing language on regulating AI).
So, well, expect any future meeting between Altman and the European Commission to entail Brussels urging OpenAI’s CEO to commit the company to a set of international standards the bloc is already involving itself in shaping.
*Per OpenAI’s entry in the European Transparency Register, where it has only been a registered EU lobbyist since June last year, it has allocated a tiny budget to push its positions in Brussels, disclosing an annual spend on EU lobbying activities of just €10,000-€24,999. That pales beside the multiple millions apiece routinely spent by the likes of Google and Meta on trying to bend EU lawmakers to their will, per lobby watchers’ analysis of annual spend (toptechtrends.com/2021/08/31/us-giants-top-tech-industrys-100m-a-year-lobbying-blitz-in-eu/).
OpenAI’s entry in the register also states that it has so far only lobbied on the AI Act — further stipulating:
To date, we have met with Washington-based and Brussels-based EU representatives. We have hosted one roundtable with 20 EU diplomats to date and visited 3 embassies. We have also engaged EU representatives in Brussels from time to time starting last summer.
(Note: The EU transparency register is not the place where OpenAI’s heads-of-government focused charm offensive would likely be recorded. EU lobby watchers (toptechtrends.com/2022/04/22/google-facebook-apple-eu-lobbying-report/) routinely criticize an ongoing lack of transparency vis-a-vis lobbying that’s directed at EU Member States and their delegations.)
Sam Altman’s big European tour (toptechtrends.com/2023/05/25/sam-altman-european-tour/) by Natasha Lomas (toptechtrends.com/author/natasha-lomas/), originally published on TechCrunch (toptechtrends.com/)