
The EU Artificial Intelligence (AI) Act, a landmark regulation on AI technologies, was formally signed on 13 June 2024 and will enter into force on 1 August 2024. This Act is set to transform how AI systems are developed, deployed and managed across the European Union. As this crucial date approaches, it is imperative for organisations to understand their obligations under the new legislation.
The EU AI Act applies to all providers, deployers, importers and distributors of AI systems affecting users in the EU. Initially proposed in 2021, the Act has undergone significant changes through the negotiation process. It adopts a risk-based approach, assigning AI applications to four risk categories based on the potential threat they pose: unacceptable risk, high risk, limited risk and minimal risk.
The EU AI Act is expected to become a global standard for AI regulation. It aims to establish a unified framework that incorporates the concepts of risk acceptability and the trustworthiness of AI systems as perceived by their users. By addressing these aspects within a single framework, the Act aims to provide a comprehensive approach to AI regulation that can be adopted and implemented internationally.
The EU AI Act will have extraterritorial reach across all sectors. Organisations established outside the EU should be aware that they may still be subject to the Act if:
- they place AI systems on the market or put them into service within the EU, regardless of where they are established; or
- the output produced by their AI systems is used within the EU.
According to the EU AI Act, AI systems are classified into four risk categories, and organisations are subject to different obligations depending on the category.
Category 1: Unacceptable Risk
Article 5 of the EU AI Act lists the artificial intelligence practices that are prohibited outright. Organisations must not deploy, provide, place on the market, or use these prohibited AI systems. These include the use of AI systems for:
- subliminal, manipulative or deceptive techniques that materially distort a person's behaviour;
- exploiting vulnerabilities related to age, disability or social or economic situation;
- social scoring that leads to detrimental or disproportionate treatment;
- assessing the risk of a person committing a criminal offence based solely on profiling or personality traits;
- untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases;
- emotion recognition in the workplace and educational institutions, except for medical or safety reasons;
- biometric categorisation to infer sensitive attributes such as race, political opinions or sexual orientation; and
- 'real-time' remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.
Category 2: High Risk
High-risk AI systems are listed in Annex III of the EU AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met. High-risk AI systems are not prohibited but require compliance with strict obligations. Article 26 of the EU AI Act imposes the following obligations on organisations deploying high-risk AI systems:
- use the system in accordance with the provider's instructions for use;
- assign human oversight to persons who have the necessary competence, training and authority;
- ensure that input data under the deployer's control is relevant and sufficiently representative for the system's intended purpose;
- monitor the system's operation and inform the provider, and where relevant the market surveillance authorities, of risks or serious incidents;
- retain automatically generated logs for at least six months; and
- inform workers and their representatives before putting a high-risk AI system into use in the workplace.
Category 3: Limited Risk
This category covers lower-risk AI systems, such as chatbots and deepfake generators, and carries lighter obligations than the high-risk category. Organisations must inform users that they are interacting with an AI system and label AI-generated audio, video and image content as such.
Category 4: Minimal Risk
AI systems in this category are not associated with any statutory obligations and include systems such as spam filters and recommendation systems.
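To make the tiered scheme concrete, the sketch below maps each category to its headline obligation. It is an illustrative Python summary of the scheme described above, not a classification tool; the category names and obligation wording are paraphrased from the Act.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # Article 5: prohibited practices
    HIGH = "high"                  # Annex III: strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no statutory obligations

# Illustrative, paraphrased summary of the obligations attached to each tier.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: "Prohibited: must not be placed on the market or used.",
    RiskCategory.HIGH: "Permitted, subject to strict obligations (oversight, monitoring, logging).",
    RiskCategory.LIMITED: "Permitted, with transparency duties (disclosure and labelling).",
    RiskCategory.MINIMAL: "Permitted, with no additional statutory obligations.",
}

def obligations_for(category: RiskCategory) -> str:
    """Return the high-level obligation summary for a risk tier."""
    return OBLIGATIONS[category]

print(obligations_for(RiskCategory.HIGH))
```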
The final text of the Act introduces several notable changes, summarised below.
1. Clarification on General-Purpose AI Models
The Act now includes a specific chapter on general-purpose AI (GP-AI) models, addressing their unique nature and broad applicability. GP-AI models are defined by their significant generality and their ability to competently perform a wide range of distinct tasks, regardless of how they are placed on the market.
Providers of GP-AI models must:
- draw up and maintain technical documentation covering the model's training and testing;
- make information and documentation available to downstream providers who integrate the model into their own AI systems;
- put in place a policy to comply with EU copyright law; and
- publish a sufficiently detailed summary of the content used to train the model.
GP-AI models deemed to pose 'systemic risk' face additional requirements. Models are classified as such based on their high-impact capabilities, presumed where the cumulative compute used for training exceeds 10^25 floating-point operations (FLOPs). The European Commission will maintain and publish a list of models with systemic risk.
Providers must conduct evaluations using standardised protocols and include comprehensive testing and validation in their documentation. They must also ensure cybersecurity protections are proportional to the systemic risk. Compliance can be demonstrated through codes of practice approved by the AI Office or through alternative means with Commission approval.
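The compute-based presumption lends itself to a simple check. The Python sketch below is a minimal illustration of the 10^25 FLOP threshold; the constant and function names are our own, and in practice the Commission may also designate models with systemic risk on other criteria.

```python
# The Act presumes 'high-impact capabilities' (and hence systemic risk) when
# the cumulative compute used for training exceeds 10^25 floating-point
# operations. This sketch only checks that single presumption.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GP-AI model meets the Act's compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with ~5 x 10^25 FLOPs would be presumed systemic-risk.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e24))  # False
```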
2. Deep Fakes Regulation
The Act mandates clear labelling of deep fakes, ensuring that any artificially generated or manipulated content (images, audio, video) is disclosed as such. This aligns with regulations like the Digital Services Act, emphasising transparency.
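As a rough illustration of what machine-readable disclosure could look like, the Python sketch below attaches a labelling field to content metadata. The schema is entirely hypothetical: the Act requires machine-readable marking but does not prescribe a particular format (provenance standards such as C2PA are one possible route to compliance).

```python
import json
from datetime import datetime, timezone

def label_ai_generated(metadata: dict) -> dict:
    """Attach a machine-readable AI-generation disclosure to content metadata.

    The field names here are hypothetical; the Act mandates machine-readable
    marking but leaves the concrete schema to implementers and standards.
    """
    labelled = dict(metadata)
    labelled["ai_generated"] = True
    labelled["disclosure"] = "This content was artificially generated or manipulated."
    labelled["labelled_at"] = datetime.now(timezone.utc).isoformat()
    return labelled

print(json.dumps(label_ai_generated({"title": "Synthetic interview clip"}), indent=2))
```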
3. Open-Source Licenses
The final text introduces provisions for AI models released under free and open-source licences. Models whose parameters, architecture and usage information are made publicly available are generally exempt from certain transparency requirements, unless they are classified as high-risk, constitute a prohibited practice, or pose systemic risk.
4. Banned Applications
Certain AI applications are explicitly banned under the Act due to their potential threat to citizens' rights. These include:
- biometric categorisation systems based on sensitive characteristics;
- untargeted scraping of facial images to create facial recognition databases;
- emotion recognition in the workplace and schools;
- social scoring;
- predictive policing based solely on profiling; and
- AI that manipulates human behaviour or exploits people's vulnerabilities.
5. Changes in the Penalty Regime
Penalties for non-compliance have been revised:
- breaches of the prohibited practices in Article 5: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher;
- breaches of most other obligations, including those applying to high-risk AI systems: up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher;
- supplying incorrect, incomplete or misleading information to authorities: up to EUR 7.5 million or 1% of worldwide annual turnover, whichever is higher.
For SMEs and start-ups, each fine is capped at the lower of the two amounts.
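Because each tier caps the fine at the higher of a fixed amount and a share of worldwide annual turnover (the lower of the two for SMEs), maximum exposure is easy to estimate. The Python sketch below is illustrative only; the tier names are our own, while the figures follow the revised regime summarised above.

```python
# Illustrative maximum-fine calculation per penalty tier:
# (fixed cap in EUR, share of worldwide annual turnover).
PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the maximum fine: the higher of the fixed cap and the turnover
    share, or the lower of the two for SMEs and start-ups."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    pct_cap = turnover_share * annual_turnover_eur
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

# A company with EUR 2bn turnover breaching a prohibition:
# max(35m, 0.07 * 2bn) = EUR 140m.
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```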
The EU AI Act's implementation will be phased:
- 1 August 2024: the Act enters into force;
- 2 February 2025: prohibitions on unacceptable-risk AI practices begin to apply;
- 2 August 2025: obligations for general-purpose AI models and the governance framework apply;
- 2 August 2026: most remaining provisions, including those for Annex III high-risk systems, apply;
- 2 August 2027: obligations for high-risk AI systems embedded in regulated products (Annex I) apply.
Organisations must act swiftly to ensure compliance:
- inventory AI systems in use or under development and map each to the Act's risk categories;
- perform a gap analysis against the obligations attached to each category;
- establish AI governance, documentation and human-oversight processes;
- train staff and monitor forthcoming guidance from the AI Office and national authorities.
The road to compliance is challenging, but understanding these obligations and preparing accordingly will ensure a smooth transition to the new regime under the EU AI Act.



