It’s done. It’s over. Two and a half years after it was first introduced—after months of lobbying and political arm-wrestling, plus grueling final negotiations that took nearly 40 hours—EU lawmakers have reached a deal over the AI Act. It will be the world’s first sweeping AI law.
The AI Act was conceived as a landmark bill that would mitigate harm in areas where using AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, as well as ban uses that pose an “unacceptable risk.”
“High-risk” AI systems will have to adhere to strict rules that require risk-mitigation systems, high-quality data sets, better documentation, and human oversight, for example. The vast majority of AI uses, such as recommender systems and spam filters, will get a free pass.
The AI Act is a major deal in that it will introduce important rules and enforcement mechanisms to a hugely influential sector that is currently a Wild West.
Here are MIT Technology Review’s key takeaways:
The AI Act ushers in important, binding rules on transparency and ethics
Tech companies love to talk about how committed they are to AI ethics. But when it comes to concrete measures, the conversation dries up. And anyway, actions speak louder than words. Responsible AI teams are often the first to see cuts during layoffs, and in truth, tech companies can decide to change their AI ethics policies at any time. OpenAI, for example, started off as an “open” AI research lab before closing up public access to its research to protect its competitive advantage, just like every other AI startup.
The AI Act will change that. The regulation imposes legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or being subjected to biometric categorization or emotion recognition systems. It’ll also require them to label deepfakes and AI-generated content, and to design systems so that AI-generated media can be detected. This goes a step beyond the voluntary commitments that leading AI companies made to the White House to simply develop AI provenance tools, such as watermarking.
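The Act does not spell out how detectability should be achieved, and watermarking is still an active research area. One approach from the literature biases a language model’s sampling toward a pseudorandom “green list” of tokens derived from the preceding token, so a detector that knows the scheme can flag text with an improbably high green-token rate. Here is a minimal toy sketch of that idea; the vocabulary, function names, and parameters are all illustrative, not anything the law mandates:

```python
import hashlib
import random

# Toy stand-in vocabulary; a real system would use the model's tokenizer.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudorandomly pick a 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, candidates: list, bias: float = 4.0) -> str:
    """Sample the next token, upweighting candidates that fall on the green list."""
    greens = green_list(prev_token)
    weights = [bias if tok in greens else 1.0 for tok in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def detect(tokens: list, threshold: float = 0.6) -> bool:
    """Flag text whose green-token rate is improbably high for unwatermarked text."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1) > threshold

# Generate 200 watermarked tokens and check detection.
tokens = ["tok0"]
for _ in range(200):
    tokens.append(watermarked_choice(tokens[-1], VOCAB))
print(detect(tokens))  # True with high probability
```

On ordinary text, roughly half the tokens land on a green list by chance; text generated with the biased sampler lands closer to 80%, which is the statistical signal the detector looks for.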
The bill will also require all organizations that offer essential services, such as insurance and banking, to conduct an impact assessment on how using AI systems will affect people’s fundamental rights.
AI companies still have a lot of wiggle room
When the AI Act was first introduced, in 2021, people were still talking about the metaverse. (Can you imagine!)
Fast-forward to now, and in a post-ChatGPT world, lawmakers felt they had to take so-called foundation models—powerful AI models that can be used for many different purposes—into account in the regulation. This sparked intense debate over what sorts of models should be regulated, and whether regulation would kill innovation.
The AI Act will require foundation models and AI systems built on top of them to draw up better documentation, comply with EU copyright law, and share more information about what data the model was trained on. For the most powerful models, there are extra requirements. Tech companies will have to share how secure and energy efficient their AI models are, for example.
But here’s the catch: The compromise lawmakers found was to apply a stricter set of rules only to the most powerful AI models, as categorized by the computing power needed to train them. And it will be up to companies themselves to assess whether they fall under the stricter rules.
A European Commission official would not confirm whether the current cutoff would capture powerful models such as OpenAI’s GPT-4 or Google’s Gemini, because only the companies themselves know how much computing power was used to train their models. The official did say that as the technology develops, the EU could change the way it measures how powerful AI models are.
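Companies can estimate that computing power with a standard rule of thumb: training cost in floating-point operations is roughly 6 × (number of parameters) × (number of training tokens). Reporting on the provisional deal put the cutoff for the stricter tier at around 10^25 FLOPs; treat that figure, and the example model below, as illustrative assumptions rather than final law:

```python
# Back-of-envelope self-assessment against a training-compute threshold.
# Assumptions: the widely used ~6 * N * D FLOPs estimate of training cost,
# and a 1e25-FLOP cutoff as reported during the negotiations (not final).
THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training cost: about 6 floating-point operations per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs")                       # 8.4e+23
print("stricter tier:", flops > THRESHOLD_FLOPS)  # False
```

This is why self-assessment matters: only the developer knows the parameter and token counts that go into the estimate.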
The EU will become the world’s premier AI police
The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement. It will be the first body globally to enforce binding rules on AI, and the EU hopes this will help it become the world’s go-to tech regulator. The AI Act’s governance mechanism also includes a scientific panel of independent experts to offer guidance on the systemic risks AI poses, and how to classify and test models.
The fines for noncompliance are steep: from 1.5% to 7% of a firm’s global sales turnover, depending on the severity of the offense and size of the company.
Europe will also become one of the first places in the world where citizens will be able to launch complaints about AI systems and receive explanations about how AI systems came to the conclusions that affect them.
By becoming the first to formalize rules around AI, the EU retains its first-mover advantage. Much like the GDPR, the AI Act could become a global standard. Companies elsewhere that want to do business in the world’s second-largest economy will have to comply with the law. The EU’s rules also go a step further than ones introduced by the US, such as the White House executive order, because they are binding.
National security always wins
Some AI uses are now completely banned in the EU: biometric categorization systems that use sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases like Clearview AI; emotion recognition at work or in schools; social scoring; AI systems that manipulate human behavior; and AI that is used to exploit people’s vulnerabilities.
Predictive policing is also banned, unless it is used with “clear human assessment and objective facts, which basically do not simply leave the decision of going after a certain individual in a criminal investigation only because an algorithm says so,” according to an EU Commission official.
However, the AI Act does not apply to AI systems that have been developed exclusively for military and defense uses.
One of the bloodiest fights over the AI Act has always been how to regulate police use of biometric systems in public places, which many fear could lead to mass surveillance. While the European Parliament pushed for a near-total ban on the technology, some EU countries, such as France, have resisted this fiercely. They want to use it to fight crime and terrorism.
European police forces will only be able to use biometric identification systems in public places if they get court approval first, and only for 16 specific crimes, such as terrorism, human trafficking, sexual exploitation of children, and drug trafficking. Law enforcement authorities may also use high-risk AI systems that don’t pass European standards in “exceptional circumstances relating to public security.”
What next?
It might take weeks or even months before we see the final wording of the bill. The text still needs to go through technical tinkering, and has to be approved by European countries and the EU Parliament before it officially enters into law.
Once it is in force, tech companies will have two years to implement the rules. The bans on AI uses will apply after six months, and companies developing foundation models will have to comply with the law within one year.