The European Parliament has voted to confirm its negotiating mandate for the AI Act — hitting a major milestone which unlocks the next stage of negotiations toward a pan-EU rulebook for artificial intelligence.
Parliamentarians backed an amended version of the Commission proposal that expands the rulebook in a way they say is aimed at ensuring AI that’s developed and used in Europe is “fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing”.
Among the changes MEPs have backed is a total ban on remote biometric surveillance and on predictive policing. They have also added a ban on “untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases” — so basically a hard prohibition on Clearview AI and its ilk.
The proposed ban on remote biometric surveillance would apply to both real-time and post (after the fact) applications of technologies like facial recognition, except, in the latter case, for use by law enforcement for the prosecution of serious crimes and with judicial sign-off.
MEPs also added a ban on the use of emotion recognition technology by law enforcement, border agencies, workplaces and educational institutions.
Parliamentarians also expanded the classification of high-risk AI systems to include those that pose significant risks of harm to people's health, safety, fundamental rights or the environment, as well as AI systems used to influence voters and the outcome of elections.
Larger social media platforms that use algorithms to recommend content were also added to the high-risk list by MEPs.
The plenary vote follows committee backing for the amended proposal last month after MEPs from different political groups hashed out how they wanted to tweak the Commission text, including by adding obligations on makers of so-called general purpose AI.
Responding to fast-paced developments in generative AI, MEPs have supported putting a set of obligations on foundational/general purpose AI models, such as the technology that underpins OpenAI's AI chatbot ChatGPT. Such systems would be required to identify and mitigate risks prior to placement on the market, apply transparency disclosures to AI-generated content and implement safeguards against illegal content being generated.
Makers of general purpose AIs must also publish “detailed summaries” of copyrighted information used to train their models under the MEPs’ proposal.
During a tour of European capitals to meet with lawmakers last month, OpenAI CEO Sam Altman was critical of this aspect of the EU proposal. He suggested the company might have to withdraw its service from the region if it was unable to comply, telling journalists he was hopeful the obligations would be rolled back.
In the event, today’s plenary vote shows overwhelming support among parliamentarians for the amended version of the draft legislation — including the proposed obligations for general purpose AIs — with 499 votes in favour, and just 28 against (plus 93 abstentions).
The vote passing the mandate means discussions between the parliament and EU Member States' governments can now kick off, with the first trilogue slated to take place this evening.
Commenting in a statement after the vote, co-rapporteur Brando Benifei said:
All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council.
In another supporting statement, co-rapporteur Dragos Tudorache added:
The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law.
The version of the AI Act MEPs have backed today also adds exemptions for research activities and AI components provided under open source licenses, which MEPs suggest will ensure support for innovation — along with regulatory sandboxes for testing systems set to be established under the framework.
MEPs’ proposal also adds a suite of consumer rights over AI decision making — including the ability for consumers to ask for collective redress if an AI system has caused them harm.
The European consumer organization, BEUC, welcomed these changes but was critical of the parliament for not backing a total ban on the use of emotion recognition AIs (since the proposal does not limit commercial use of such snake oil).
It also thinks MEPs have given developers too much discretion to decide whether their systems fall into the high risk category or not, which it says could undermine the efficacy of the risk-based framework.
That may prove one bone of contention during trilogue discussions, which need to find a compromise between the position of the EU Council (the body composed of Member States' governments) and that of lawmakers in the parliament in order to clinch the necessary political agreement on a final text and seal the file.
Typically, the EU Council takes a more pro-industry line while parliament tends to be more concerned with fundamental rights. So where the two sides will meet in the middle on regulating AI remains to be seen.
If they can't agree, the EU's law-making process can stall — or even fail. But there's an impetus in Brussels to get this file over the line given how much global attention is now fixed on regulating AI. (Being first to the punch with a democratic rulebook for AI presents opportunities for the bloc to exert influence beyond its borders as other jurisdictions scramble to figure out their own approaches to regulating a complex field of fast-developing technology.)
The Council adopted its position on the file back in December. At that time, Member States largely favored deferring decisions on general purpose AI to additional implementing legislation. But, given what's happened in the interim, with generative AI tools like ChatGPT shooting to center stage of discussion about the tech and generating multiple calls for regulation (including from plenty of tech industry types themselves), it will be interesting to see whether Member States agree with MEPs on the need to add obligations for this class of AI systems to the text of the AI Act.
The EU’s executive presented the original proposal for the risk-based framework for AI back in April 2021. While that first Commission draft text did not grapple so extensively with the topic of general purpose AI, it did propose transparency provisions for chatbots and deepfake technology. So even back then EU lawmakers were taking the view that consumers should be informed they’re interacting with machine generated content.
While the Commission remains hopeful that trilogue talks on the AI Act file will deliver political agreement by the end of this year there will still be an implementation period — so the legislation will likely not apply before 2026.
This is why the EU is also working on several voluntary initiatives that aim to press AI firms to self-regulate on safety in the meantime.