Technology · May 24, 2023

Google to work with Europe on stop-gap ‘AI Pact’

Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what’s being referred to as an “AI Pact” — seemingly a stop-gap set of voluntary rules or standards while formal regulations for applying AI are still being worked on.

Pichai was meeting with Thierry Breton, the European Union’s internal market commissioner, who put out a statement after today’s confab — saying: “There is no time to lose in the AI race to build a safe online environment.”

A briefing put out by his office after the meeting also said the EU wants to be “proactive” and work on an AI pact ahead of incoming EU legislation set to apply to AI.

The memo added that the bloc wants to launch an AI Pact “involving all major European and non-European AI actors on a voluntary basis”, ahead of the legal deadline of the aforementioned pan-EU AI Act.

However, at present, the only tech giant whose name has been publicly attached to the initiative is Google.

We’ve reached out to Google and the European Commission with questions about the initiative.

In further public remarks, Breton said:

We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick and choose.

I am pleased that Sundar Pichai recognises this, and that he is committed to complying with all EU rules.

The GDPR [General Data Protection Regulation] is in place. The DSA [Digital Services Act] and DMA [Digital Markets Act] are being implemented. Negotiations on the AI Act are approaching the final stage and I call on the European Parliament and Council to adopt the framework before the end of the year.

Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI Pact on a voluntary basis ahead of the legal deadline.

I also welcome Sundar’s commitment to step up the fight against disinformation ahead of elections in Europe.

While there are no details yet on what the “AI Pact” might contain, as with any self-regulatory arrangement it would lack legal bite: there would be no way to force developers to sign up, nor any consequences for failing to meet the (voluntary) commitments.

Still, it’s perhaps a step towards the kind of international cooperation on rule-making that’s been called for in recent weeks and months by a number of technologists.

The EU has past precedent when it comes to getting tech giants to ink their names to a little self-regulation: over several years it established a couple of voluntary agreements (aka Codes), which a number of tech giants (including Google) signed up to, committing to improve their responses to reports of online hate speech and the spread of harmful disinformation. And while those two Codes haven’t resolved what remain complex online speech moderation issues, they have given the EU a yardstick for whether platforms are living up to their own claims, and, at times, a stick with which to dish out a light public beating when they’re not.

More generally, the EU remains ahead of the global pack on digital rule-making and has already drafted regulations for artificial intelligence, proposing a risk-based framework for AI apps two years ago. However, even the bloc’s best efforts are lagging developments in the field, which have felt especially blistering this year after OpenAI’s generative AI chatbot, ChatGPT, was made broadly available to web users and garnered viral attention.

Currently, the draft EU AI Act, proposed back in April 2021, remains a live piece of lawmaking between the European Parliament and Council, with the former recently agreeing a raft of amendments it wants included, several of which target generative AI.

A compromise on a final text will need to be reached between the EU’s co-legislators, so it remains to be seen what shape the bloc’s AI rulebook will ultimately take.

Plus, even if the law is adopted before the end of the year, which is the most optimistic timeline, it will certainly come with an implementation period, most likely of at least a year, before it applies to AI developers. Hence EU commissioners are keenly pressing for stop-gap measures.

Earlier this week, EVP Margrethe Vestager, who heads up the bloc’s digital strategy, suggested the EU and US were set to cooperate on establishing minimum standards before legislation enters into force (via Reuters).

Despite these sudden expressions of high-level haste, it’s worth noting that the EU’s existing data protection rulebook, the GDPR, may apply and has already been applied against certain AI apps, including ChatGPT, Replika and Clearview AI, to name three. For example, a regulatory intervention on ChatGPT in Italy at the end of March briefly led to a service suspension, which was followed by OpenAI producing new disclosures and controls for users in a bid to comply with privacy rules.

Add to that, as Breton notes, the incoming DSA and DMA may also create hard requirements that AI app makers will need to abide by in the coming months and years, as those rules start to apply to digital services, platforms and tech giants.

Nonetheless, the EU remains convinced of the need for dedicated risk-based rules for AI. And, it seems, it is keen to double down on the so-called ‘Brussels effect’ its digital lawmaking can attract by announcing a stop-gap AI Pact.

In recent weeks and months, US lawmakers have also been turning their attention to the fraught question of how best to regulate AI, with a Senate committee recently holding a hearing in which it took testimony from OpenAI CEO Sam Altman, asking him for his thoughts on how to regulate the technology.

Google may be hoping to play the other side by rushing to work with the EU on voluntary standards. Let the AI regulation arms race begin!
