The EU AI Act: The Key Takeaways

Definition of AI Systems 

The initial definition of ‘artificial intelligence’ in the EU AI Act focused on ‘software’. However, the final version narrowed the definition, with the term now referring to systems “developed through machine learning approaches and logic and knowledge-based approaches.”  

In full, it states: 

An “AI system” is “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The AI Act also introduces harmonised rules for placing General-Purpose AI (GPAI) models on the market. GPAI models are defined in the AI Act as AI models intended by the provider to perform generally applicable functions, such as image or speech recognition, across a variety of contexts. Where GPAI models form part of high-risk AI systems (defined below), they fall under similar compliance requirements. Further guidance on the compliance obligations of GPAI providers will be set out in future implementing acts.

The AI Act recognises the following actors:
  • Deployer: A natural or legal person, public authority, agency, or other body using an AI system under its authority, excluding personal non-professional activities.
  • Importer: Any natural or legal person located or established in the EU that places on the market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
  • Distributor: Any natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.
  • Provider: A natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
  • Operator: An umbrella term covering the provider, product manufacturer, deployer, authorised representative, importer, or distributor.

Scope of Application 

The EU AI Act imposes obligations on providers, deployers, importers, distributors, and product manufacturers of AI systems connected to the EU market. The AI Act applies to: 
  • Providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the EU, irrespective of whether those providers are established or located in the EU or a third country
  • Deployers of AI systems that have their place of establishment or are located in the EU
  • Providers and deployers of AI systems that have their place of establishment or are in a third country, where the output produced by the AI system is used in the EU
  • Importers and distributors of AI systems
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their name or trademark
  • Authorised representatives of providers, which are not established in the EU
  • Affected persons who are in the EU.
The AI Act does not apply to:
  • AI systems developed or used exclusively for military, defence, or national security purposes
  • Public authorities in a third country and international organisations, where they use AI systems in the framework of international cooperation, agreements for law enforcement and judicial cooperation with the EU or with one or more EU member states. This is provided that such third countries or international organisations provide adequate safeguards concerning the protection of fundamental rights and freedoms of individuals.
  • AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
  • Any research, testing, or development activity regarding AI systems or models before they are placed on the market or put into service. Such activities must be conducted in accordance with applicable EU law. Testing in real-world conditions is not covered by this exclusion.
  • Natural persons using AI systems during purely personal non-professional activity.
It is worth mentioning that the AI Act does not prevent EU member states from adopting laws, regulations, or administrative measures that provide stronger protections for workers’ rights in relation to AI systems used by their employers. 

The AI Act will also establish a new governance structure to supervise AI models. This includes an AI Office within the European Commission, tasked with supervising advanced AI models, defining standards, and ensuring uniform application of the rules across EU member states. 

Additionally, an independent Scientific Panel will advise the AI Office on GPAI models (general-purpose AI models). For more information on GPAI models, please refer to our article. The Scientific Panel will be formed of experts selected by the European Commission on the basis of up-to-date scientific or technical expertise in the field of AI. The panel will assist in developing evaluation methodologies, offer guidance on influential models, and advise on safety considerations.

The EU AI Act uses a risk-based approach. What does this mean?

The EU Commission crafted the AI regulation using a risk-based methodology that distinguishes several risk categories: unacceptable risk, high risk, and limited risk. 

Prohibited Practices 

Unacceptable levels of risk correspond to prohibited AI practices. According to the AI Act, the following AI practices are prohibited:
  • Covert influence (manipulating individuals subconsciously or through deceptive methods);
  • Exploitative targeting (where it exploits a person's age, disability, or socioeconomic situation);
  • Social behaviour scoring;
  • Predictive policing (profiling individuals to predict criminal behaviour, with specific exceptions);
  • Real-time biometric surveillance in public spaces (with limited exceptions for law enforcement);
  • Untargeted scraping of facial images to build facial recognition databases;
  • Deducing emotions (in settings such as workplaces and educational institutions, except in limited circumstances);
  • Biometric categorisation (to infer characteristics like race, political stance, or sexual orientation, except for in limited circumstances).
According to the AI Act, the use of ‘real-time’ remote biometric identification systems in public spaces can be authorised only if the law enforcement authority has completed a fundamental rights impact assessment and registered the system in the EU database. However, in duly justified cases of urgency, such systems may be used without registration in the EU database, provided that the registration is completed without delay.

Examples of when this may be strictly necessary include:  
  • Searching for victims of abduction, trafficking, or sexual exploitation and missing persons
  • Preventing a substantial and imminent threat to life or physical safety, or a genuine threat of a terrorist attack 
  • Locating or identifying suspects for serious criminal investigations or prosecutions.

High-Risk AI Systems 

An AI system is considered high-risk if it is intended to be used as a safety component of a product, or is itself such a product, covered by EU harmonisation legislation and required to undergo a third-party conformity assessment before being placed on the market. In addition, Annex III of the AI Act lists specific areas, such as biometrics, critical infrastructure, education, employment, and law enforcement, in which AI systems are classified as high-risk.

Obligations for providers of high-risk AI systems

  • A risk management system must be established, implemented, documented, and maintained.
  • High-risk AI systems that involve training AI models with data must be developed on the basis of training, validation, and testing data sets that are subject to appropriate data governance and management practices.
  • Technical documentation for a high-risk AI system must be drawn up before the system is placed on the market or put into service and must be kept up to date. It must be retained for 10 years after the system has been placed on the market or put into service.
  • High-risk AI systems must allow for the automatic recording of events (‘logs’) over their lifetime.
  • High-risk AI systems must be designed and developed with a level of transparency that allows users to clearly understand the system's outputs, ensuring they can be applied correctly and safely.
  • High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons.
  • High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity.

Fines/Penalties

The Act requires EU member states to establish penalties and enforcement measures, including warnings and non-monetary actions for violations of this regulation. They must ensure these measures are properly enforced. 

EU member states must also lay down rules on the extent to which administrative fines may be imposed on public authorities and bodies established in that member state.

Penalties for breaches of the EU AI Act will vary based on factors such as the type of AI system, organisation size, and the seriousness of the violation. The fines fall into the following tiers:

  1. Non-compliance with the prohibition of AI practices: €35 million or 7% of the company’s total worldwide annual turnover, whichever is higher.
  2. Non-compliance of an AI system with provisions related to operators, notified bodies, or other specified obligations: €15 million or 3% of the company’s total worldwide annual turnover, whichever is higher.
  3. Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities: €7.5 million or 1.5% of the company’s total worldwide annual turnover, whichever is higher.
  4. Non-compliance related to general-purpose AI models: €15 million or 3% of the company’s total worldwide annual turnover, whichever is higher.
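The “whichever is higher” rule in each tier is simply a maximum of the fixed amount and a percentage of turnover. A minimal Python sketch of that arithmetic (the turnover figures below are hypothetical; the fixed amounts and percentages are taken from the tiers above):

```python
def max_fine(fixed_eur: int, pct: int, turnover_eur: int) -> int:
    """Return the maximum administrative fine for a tier: the higher of
    the fixed amount and pct% of total worldwide annual turnover."""
    return max(fixed_eur, turnover_eur * pct // 100)

# Prohibited-practice tier: EUR 35 million or 7% of turnover, whichever is higher.
max_fine(35_000_000, 7, 200_000_000)    # smaller firm: the fixed floor applies
max_fine(35_000_000, 7, 1_000_000_000)  # larger firm: 7% of turnover exceeds the floor
```

For a large multinational, the percentage-based figure quickly dominates the fixed amount, which is why turnover, rather than a flat cap, drives exposure for the biggest providers.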

When does the AI Act come into force?

The AI Act will enter into force twenty days after its publication in the Official Journal (expected in July 2024) and will become fully applicable 24 months after that date, with the following exceptions: 

  • bans on prohibited practices: six months after the effective date; 
  • codes of practice: nine months after the effective date; 
  • general-purpose AI rules, including governance: 12 months after the effective date; 
  • obligations for high-risk systems: 36 months after the effective date; 
  • pre-existing high-risk AI systems used by public authorities: 72 months after the effective date. 
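Because every deadline above is an offset from a single effective date, the schedule can be computed mechanically. A sketch in Python using only the standard library, assuming a hypothetical entry-into-force date of 1 August 2024 (publication plus twenty days; adjust once the actual date is known):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months.
    (No day clamping is needed here since we start on the 1st.)"""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date: publication + 20 days.
entry_into_force = date(2024, 8, 1)

milestones = {
    "prohibited practices banned": add_months(entry_into_force, 6),
    "codes of practice ready": add_months(entry_into_force, 9),
    "general-purpose AI rules apply": add_months(entry_into_force, 12),
    "act generally applicable": add_months(entry_into_force, 24),
    "high-risk obligations apply": add_months(entry_into_force, 36),
    "pre-existing public-sector high-risk systems": add_months(entry_into_force, 72),
}

for label, when in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{when.isoformat()}: {label}")  # e.g. 2025-02-01 for the six-month ban deadline
```

Under this assumed start date, the six-month ban on prohibited practices would bite in early 2025, well before the Act becomes generally applicable.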

How will the EU AI Act impact UK organisations?

The EU AI Act is poised to have a significant impact on UK businesses, as adherence will be imperative for any business trading internationally. Any UK business that offers or sells AI systems in the EU market, or deploys AI systems in the EU, will be affected. These businesses will have to establish and maintain robust AI governance programmes to ensure compliance, or face the risk of financial penalties and reputational damage.

The AI Act could also have a ripple effect on UK organisations that operate in the UK market only. This is because, like the General Data Protection Regulation (GDPR), the EU AI Act is likely to set a global standard. This could mean two things: first, UK companies that proactively embrace and comply with the EU AI Act can differentiate themselves in the UK market, attracting customers who prioritise ethical and accountable AI solutions; secondly, we could potentially see the UK’s domestic regulations evolving towards the EU AI Act.

Further, the EU AI Act is a pivotal piece of legislation that encourages voluntary compliance even by companies that do not initially fall within its scope. UK companies, particularly those that offer AI services in the EU or harness AI technologies to deliver their services there, are therefore likely to be affected by the Act. It is also worth bearing in mind that many UK businesses have a market reach extending far beyond the UK, which makes the EU AI Act particularly relevant to them. 

If you have any queries or would like further information, please visit our data protection services section or contact Christopher Beveridge.