European Parliament adopts Artificial Intelligence Act: A closer look


The European Union (EU) has taken a significant step towards regulating artificial intelligence (AI) with the approval of the EU AI Act on March 13, 2024, with 523 votes in favour, 46 against and 49 abstentions.

The Act, which was first proposed by the European Commission in April 2021, is expected to become law in May. It aims to balance innovation and safety by regulating high-risk AI systems and promoting responsible AI development.

This article will discuss the key highlights of the EU AI Act, including risk classification, general-purpose AI requirements, the innovation-friendly approach, shared accountability for ongoing monitoring and penalties for non-compliance.

The AI Act aims to establish a comprehensive legal framework for the development, marketing and use of AI in the EU, ensuring a high level of protection of health, safety and fundamental rights. The Act classifies AI systems by risk level and mandates development, deployment and use requirements based on the risk classification. It establishes the AI Office to oversee general-purpose AI models, the AI Board to advise the European Commission and member-state competent authorities, the Advisory Forum to provide technical expertise and the Scientific Panel to support implementation and enforcement.

The Act prohibits unacceptable-risk AI and introduces heightened technical and documentary requirements for high-risk AI systems, including fundamental rights impact assessments and conformity assessments. It also requires human oversight and data governance, protecting the fundamental rights to data protection, private life and confidentiality of communications through responsible data processing. The Act fosters innovation and competitiveness in the AI ecosystem while addressing key challenges posed by AI.

The Act will enter into force 20 days after publication in the Official Journal of the European Union, with a gradual approach to implementation over 24 months. Enforcement and penalties include powers for the AI Office and national market surveillance authorities, with penalties for prohibited AI violations, most other violations, and supplying incorrect information to authorities. The Commission can issue delegated acts on various aspects of the Act, and codes of practice should be ready nine months after the Act enters into force.

Risk classification: The EU AI Act bans AI uses that pose unacceptable risk, such as social scoring systems, emotion recognition systems in schools and workplaces, AI used to exploit people's vulnerabilities - such as their age or disability - and real-time remote biometric identification by police in publicly accessible spaces, except in narrowly defined cases such as the investigation of serious crimes. High-risk AI systems will require conformity assessments to ensure they meet safety and data protection requirements before being placed on the market.

General-purpose AI requirements: General-purpose AI systems, which have a wide range of possible uses, will be subject to specific requirements under the EU AI Act. These include publishing detailed summaries of the content used to train the models, labelling AI-generated deep fakes as artificially manipulated, and requiring providers of the most capable models to assess and mitigate systemic risks. Companies must also report serious incidents and disclose their energy use.

Innovation-friendly approach: To foster innovation and design for regulatory compliance, the EU AI Act introduces regulatory sandboxes. These allow for real-world research, development and testing of AI technologies under less stringent regulations. This approach encourages the responsible development of AI while supporting innovation.

Shared accountability for ongoing monitoring: Once an AI system is on the market, EU authorities will monitor whether it has been properly classified by risk. Providers of AI systems will need to maintain human oversight and conduct post-market monitoring. Deployers of AI - its buyers and subscribers - will need to report serious incidents and malfunctions.

Enforcement and penalties: The European Commission's AI Office, responsible for overseeing AI systems based on a general-purpose AI model, will function as a market surveillance authority. National market surveillance authorities will be in charge of supervising all other AI systems. The AI Office's primary objective is to coordinate governance among member countries and enforce rules related to general-purpose AI.

Member-state authorities will establish regulations concerning penalties and enforcement measures, including warnings and non-monetary penalties. Individuals can file complaints of infringement with their national competent authority, which can then initiate market surveillance activities. The Act does not provide for individual damages.

Penalties for prohibited AI violations can reach up to 7% of a company's global annual turnover or 35 million euros, whichever is higher. Most other violations are subject to penalties of up to 3% of global annual turnover or 15 million euros. Providing incorrect information to authorities may result in penalties of up to 1% of global annual turnover or 7.5 million euros.
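The tiered ceilings above can be sketched as a simple computation - a minimal, illustrative example only, not legal advice. The tier names and the "higher of the two figures" rule for companies are assumptions drawn from the figures above and the Act's administrative-fines provisions:

```python
# Illustrative sketch of the EU AI Act's penalty ceilings.
# Tier names are hypothetical labels for the three tiers described above;
# for undertakings, the ceiling is the higher of the turnover share and
# the fixed amount. This is a sketch, not an official computation.

PENALTY_TIERS = {
    "prohibited_ai": (0.07, 35_000_000),          # 7% or EUR 35m
    "other_violations": (0.03, 15_000_000),       # 3% or EUR 15m
    "incorrect_information": (0.01, 7_500_000),   # 1% or EUR 7.5m
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    turnover-based ceiling and the fixed ceiling."""
    pct, fixed = PENALTY_TIERS[tier]
    return max(pct * global_annual_turnover_eur, fixed)

# A company with EUR 2 billion turnover committing a prohibited-AI violation
# faces a ceiling of 7% of turnover (EUR 140m), since that exceeds EUR 35m.
print(max_fine("prohibited_ai", 2_000_000_000))
```

For smaller companies, the fixed amount dominates: at 100 million euros of turnover, 1% is only 1 million euros, so the ceiling for supplying incorrect information remains 7.5 million euros.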

The AI Board will play a crucial role in advising on the implementation of the Act, coordinating between national authorities, and issuing recommendations and opinions.


The EU AI Act represents a significant step towards regulating AI in the European Union. By balancing innovation and safety, the Act aims to ensure that AI systems are developed and used responsibly. As the AI landscape continues to evolve, it is crucial for governments and regulatory bodies to adapt and implement measures that protect citizens while fostering innovation and growth in the AI sector.

Sidhant Raghuvanshi is Practice Technology Solutions Coordinator at White & Case.

Bar and Bench - Indian Legal news