Legal News

July 14, 2024

The EU Artificial Intelligence Act Set to Come into Force on 1 August 2024

The EU AI Act, effective 1 August 2024, imposes risk-based obligations on AI providers, bans certain uses, and introduces penalties up to €35M or 7% of turnover.

The EU Artificial Intelligence (AI) Act, a landmark regulation on AI technologies, was formally signed on 13 June 2024 and will enter into force on 1 August 2024. This Act is set to transform how AI systems are developed, deployed and managed across the European Union. As this crucial date approaches, it is imperative for organisations to understand their obligations under the new legislation.

Scope and Applicability

The EU AI Act applies to all providers, deployers, importers and distributors of AI systems impacting EU users. Initially proposed in 2021, the Act has undergone significant changes following negotiations. The AI Act adopts a risk-based approach, assigning AI applications to four risk categories based on the potential threat they pose: unacceptable risk, high-risk, limited risk and minimal risk applications.

The EU AI Act is expected to become a global standard for AI regulation. It aims to establish a unified framework that incorporates the concepts of risk acceptability and the trustworthiness of AI systems as perceived by their users. By addressing these aspects within a single framework, the Act aims to provide a comprehensive approach to AI regulation that can be adopted and implemented internationally.

Extraterritorial Reach

The EU AI Act will have extraterritorial reach across all sectors. Organisations established outside the EU should be aware that they may still be subject to the Act if:

  1. their system is placed on the market in the EU;
  2. their providers or users are physically present in the EU; or
  3. the output of the system is used in the EU.

Risk Categories and Obligations

According to the EU AI Act, AI systems are classified into four risk categories, and organisations are subject to different obligations depending on the category.

<span class="news-text_italic-underline">Category 1: Unacceptable Risk</span>

Article 5 of the EU AI Act lists the artificial intelligence practices that are automatically prohibited. Organisations must not deploy, provide, place on the market, or use these prohibited AI systems. These include use of AI systems for:

  • predictive policing based solely on profiling;
  • real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions);
  • untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases; and
  • inferring people's emotions in the workplace or in education institutions.

<span class="news-text_italic-underline">Category 2: High Risk</span>

High-risk AI systems are listed in Annex III of the EU AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met. High-risk AI systems are not prohibited but require compliance with strict obligations. Article 29 of the EU AI Act imposes the following obligations on organisations using high-risk AI systems:

  • carrying out a fundamental rights impact assessment;
  • training and support for staff responsible for monitoring high-risk AI systems; and
  • keeping logs that are automatically generated by these systems.

<span class="news-text_italic-underline">Category 3: Limited Risk</span>

This category includes lower-risk AI systems, such as chatbots and deepfake generators, with less stringent obligations than the high-risk category. Organisations must inform users when they are interacting with an AI system and label AI-generated audio, video and image content as such.

<span class="news-text_italic-underline">Category 4: Minimal Risk</span>

AI systems in this category are not associated with any statutory obligations and include systems such as spam filters and recommendation systems.

Key Developments

<span class="news-text_italic-underline">1. Clarification on General-Purpose AI Models</span>

The Act now includes a specific chapter on general-purpose AI (GP-AI) models, addressing their unique nature and broad applicability. GP-AI models are defined by their ability to perform a wide range of distinct tasks, regardless of how they are placed on the market.

Providers of GP-AI models must:

  • maintain and update technical documentation;
  • ensure compliance with EU law on copyright and related rights;
  • publicly disclose a detailed summary of training data; and
  • label outputs in a machine-readable format as artificially generated or manipulated.

GP-AI models deemed to have ‘systemic risk’ face additional requirements. Models are classified as such based on their high-impact capabilities, which include significant computational power usage. The European Commission will maintain and publish a list of models designated as having systemic risk.

Providers must conduct evaluations using standardised protocols and include comprehensive testing and validation in their documentation. They must also ensure cybersecurity protections are proportional to the systemic risk. Compliance can be demonstrated through codes of practice approved by the AI Office or through alternative means with Commission approval.

<span class="news-text_italic-underline">2. Deep Fakes Regulation</span>

The Act mandates clear labelling of deep fakes, ensuring that any artificially generated or manipulated content (images, audio, video) is disclosed as such. This aligns with regulations like the Digital Services Act, emphasising transparency.

<span class="news-text_italic-underline">3. Open-Source Licenses</span>

The final text introduces provisions for open-source AI models. These models must publicly share their parameters, architecture and usage information, and are generally exempt from certain transparency requirements unless they are classified as high-risk or involve prohibited AI practices.

<span class="news-text_italic-underline">4. Banned Applications</span>

Certain AI applications are explicitly banned under the Act due to their potential threat to citizens’ rights. These include:

  • biometric categorisation based on sensitive characteristics;
  • untargeted scraping for facial recognition databases;
  • emotion recognition in workplaces and schools;
  • social scoring; and
  • predictive policing based solely on profiling.

<span class="news-text_italic-underline">5. Changes in the Penalty Regime</span>

Penalties for non-compliance have been revised:

  • up to EUR 35,000,000 or 7% of total worldwide annual turnover (whichever is higher) for violating the prohibited AI practices;
  • up to EUR 15,000,000 or 3% of total worldwide annual turnover (whichever is higher) for other non-compliance issues; and
  • up to EUR 7,500,000 or 1% of total worldwide annual turnover (whichever is higher) for supplying incorrect, incomplete or misleading information.
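The interaction between the fixed amounts and the turnover percentages can be sketched as follows. This is an illustration only: the tier names and the helper function are our own, and the sketch simply applies the higher of the fixed ceiling and the turnover-based ceiling, as the tiers above describe.

```python
def penalty_cap(turnover_eur: float, tier: str) -> float:
    """Illustrative upper bound of the fine for a given tier.

    The Act caps fines at a fixed amount or a percentage of total
    worldwide annual turnover, whichever is higher. Tier names here
    are our own shorthand, not the Act's terminology.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_noncompliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_pct = tiers[tier]
    return max(fixed_cap, turnover_pct * turnover_eur)
```

For a group with EUR 1 billion in worldwide annual turnover, the prohibited-practices ceiling would be 7% of turnover (EUR 70 million) rather than the fixed EUR 35 million; for a smaller business, the fixed amount governs.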

The Phased Approach

The EU AI Act's implementation will be phased:

  • 6 months after entry into force: Prohibitions on specific AI systems begin.
  • 12 months: Obligations for GP-AI providers and penalty provisions apply.
  • 18 months: Guidelines on high-risk use cases released.
  • 24 months: Regulation applies to high-risk AI systems.
  • 36 months: Obligations for high-risk AI systems under Article 6(1) commence.
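As a rough aid to compliance planning, the milestones above can be projected onto the calendar from the 1 August 2024 entry into force. This is an indicative sketch only; the Act's own transitional provisions fix the precise application dates.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # per the article above

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month a given number of calendar months later."""
    years_ahead, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years_ahead, month=month_index + 1)

# Months after entry into force, per the phased approach above
MILESTONES = {
    "prohibitions apply": 6,
    "GP-AI obligations and penalties apply": 12,
    "high-risk use-case guidelines": 18,
    "regulation applies to high-risk systems": 24,
    "Article 6(1) high-risk obligations": 36,
}

for milestone, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {milestone}")
```

Running the sketch places the prohibition milestone in early February 2025 and full application of the high-risk regime in the second half of 2026, which frames how little time organisations have for the inventory and governance work described below.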

Next Steps

Organisations must act swiftly to ensure compliance:

  1. Inventory AI Systems: Catalogue all AI systems and determine their categorisation under the Act.
  2. Governance Structures: Establish oversight mechanisms to manage compliance strategies effectively.
  3. Documentation and Training: Ensure all required documentation is prepared and maintained. Train staff on the new requirements and implications.

The road to compliance is challenging, but understanding the obligations and preparing accordingly will ensure a smooth transition under the new EU AI Act.

Address
London:
2 Eaton Gate
London SW1W 9BJ
New York:
295 Madison Avenue 12th Floor
New York City, NY 10017
Paris:
56 Avenue Kléber
75116 Paris
BELGRAVIA LAW LIMITED is registered with the Solicitors Regulation Authority with SRA number 8004056 and is a limited company registered in England & Wales with company number 14815978. The firm’s registered office is at 2 Eaton Gate, Belgravia, London SW1W 9BJ.

‘Belgravia Law’ (c) 2026. All rights reserved.