EU committees green-light the AI Act
Published by Thomas Tillman in Artificial Intelligence News · Tuesday 16 May 2023
Tags: asianheritagesociety.org, asianheritageawards.com, wordslingerbook.com, asian, american, san, diego, ai, technology
The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed the AI Act, a set of new transparency and risk-management rules for artificial intelligence systems.
This marks a major step in the development of AI regulation in Europe: once adopted, these would be the world's first comprehensive rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.
After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:
“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”
Co-rapporteur Dragos Tudorache (Renew, Romania) added:
“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”
The rules follow a risk-based approach, establishing obligations for providers and users according to the level of risk an AI system can generate. AI systems posing an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.
MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as:
- “real-time” remote biometric identification systems in publicly accessible spaces;
- “post” remote biometric identification systems (except for law enforcement purposes);
- biometric categorisation systems using sensitive characteristics;
- predictive policing systems;
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment, and added AI systems that influence voters in political campaigns, as well as recommender systems used by social media platforms, to the high-risk list.
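To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how a provider’s compliance team might record its systems against these risk tiers. The class names, example systems, and obligation descriptions are assumptions chosen for illustration; they are not taken from the text of the Act.

```python
# Illustrative sketch only: hypothetical modelling of the AI Act's risk-based
# approach (unacceptable risk = prohibited, high risk = strict obligations).
# Names and example entries are assumptions, not text from the regulation.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g. social scoring, manipulative techniques
    HIGH = "high-risk"            # e.g. harm to health, safety, fundamental rights
    OTHER = "lower-risk"          # catch-all for everything else in this sketch


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


# Hypothetical inventory a provider might keep to track obligations per system.
inventory = [
    AISystem("cv-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("citizen-scorer", "scores people's trustworthiness", RiskTier.UNACCEPTABLE),
]

for system in inventory:
    if system.tier is RiskTier.UNACCEPTABLE:
        action = "must not be placed on the market"
    elif system.tier is RiskTier.HIGH:
        action = "subject to strict obligations before deployment"
    else:
        action = "lighter transparency duties"
    print(f"{system.name} ({system.purpose}): {system.tier.value} -> {action}")
```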
To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.
MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:
“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate. The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”
Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.
(Photo by Denis Sebastian Tamas on Unsplash)