The European Parliament has approved the Artificial Intelligence Act, which ensures safety and compliance with fundamental rights, while boosting innovation.
Takeaway points:
- Parliament has approved the first comprehensive AI Act
- The Act seeks to regulate general-purpose artificial intelligence while safeguarding fundamental rights.
- The law will be fully applicable 24 months after it is officially endorsed by the Council.
First AI Act
The Act seeks to protect democracy, human rights, the rule of law, and environmental sustainability from high-risk AI, while fostering innovation and positioning Europe as a pioneer in the field. The regulation sets out obligations for AI based on its potential risks and level of impact.
MEPs approved the regulation with 523 votes in support, 46 against, and 49 abstentions, as agreed upon during negotiations with member states in December 2023.
Dragos Tudorache (Renew, Romania), Civil Liberties Committee co-rapporteur, commended the EU's effort on this matter; however, he emphasised that much work still lies ahead.
“The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”
Tudorache said.
Banned applications
The new regulations ban certain AI applications that threaten people's rights, such as biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from CCTV footage or the internet to build facial recognition databases.
“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”
said Brando Benifei (S&D, Italy), Internal Market Committee co-rapporteur.
Exemptions
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. "Real-time" RBI may be deployed only under strict conditions, such as a limited time and geographic scope and specific prior judicial or administrative authorisation.
Examples of such uses include searching for a missing person or preventing a terrorist attack. Using such systems after the fact ("post-remote RBI") is considered a high-risk use case and requires judicial authorisation linked to a criminal offence.
Obligations for high-risk systems
Clear obligations are also foreseen for other high-risk AI systems, owing to their significant potential for harm to health, safety, human rights, the environment, and democracy.
Some of the high-risk applications of AI are in the areas of critical infrastructure, employment, education and vocational training, vital private and public services (like banking and healthcare), specific law enforcement systems, migration and border management, justice, and democratic processes (like influencing elections).
Such systems must be accurate and transparent, maintain use logs, assess and reduce risks, and ensure human oversight. Members of the public will be able to submit complaints about AI systems and receive explanations of decisions based on high-risk AI systems that affect their rights.
General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Furthermore, artificial or manipulated images, audio, or video content ("deepfakes") must be clearly labelled as such.
Regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to SMEs and start-ups, so that they can develop and train innovative AI before it is placed on the market.
“The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development.”
Benifei added.
Road ahead
The law will enter into force 20 days after its publication in the Official Journal and will be fully applicable 24 months after entry into force, with some exceptions: bans on prohibited practices will apply six months after entry into force, codes of practice nine months after, and the rules on general-purpose AI, including governance, 12 months after.