Long Reads: The EU AI Act - What you need to know
The world's first AI Act comes into force next month
The world's first AI Act comes into force next month. As one might expect, it is a sizable document, the English-language PDF version running to 144 pages of dense legalese. It's not an easy read, but because it's the first, and since the EU is one of the world's largest trading blocs, it will likely be imitated around the world. Moreover, it has significant extraterritorial reach, potentially affecting anyone who wants to sell AI systems or use their outputs in the European market. Love it or hate it (there are those on both sides), the EU AI Act cannot be ignored. We asked three legal experts for their take.
Contents
Introduction
What is the EU AI Act's purpose?
Provisions for SMEs and startups
What are the risk categories?
Prohibited AI systems
High-risk AI systems
Limited-risk AI systems
Minimal or no-risk AI systems
Exclusions
General purpose AI (GPAI) models
The EU AI Office
How will the EU AI Act be enforced?
Comparison with other jurisdictions
Introduction
The EU's flagship AI Act will come into force on 1st August. Two years after that date, virtually all producers, suppliers and deployers of AI systems in its scope will need to be compliant.
The EU AI Act is the world's first horizontal, standalone law specifically governing AI. It covers producers, deployers and importers of AI systems used in the EU and is not restricted to suppliers based in the EU. Providers based outside the bloc will need to appoint an authorised representative in the EU before making their AI systems available there.
Providers of AI systems are caught by the Act if they place on the market or put into service AI systems in the EU, or place general purpose AI (GPAI) models on the EU market, or if the output of their AI system is used in the EU.
Therefore everyone making use of AI in the EU, as well as makers, distributors and sellers of AI products, needs to know about it.
That said, many, perhaps most, AI products and applications won't be in scope. That's because the EU AI Act only applies to "prohibited", "high-risk" and "limited-risk" AI systems. There are also some exclusions for AI systems used exclusively for military or scientific R&D purposes, and for international law enforcement.
The EU AI Act is unusual in that it is part product safety legislation, and part protector of fundamental rights.
Just as aircraft components, toys and hairdryers need to achieve appropriate safety credentials to be sold in the EU, so will risky AI systems. The Act also aims to protect individuals' privacy and notably includes the right for "affected persons" in the EU to obtain an explanation of decision-making based on certain high-risk AI outputs. In this latter respect, it resembles another fundamental rights law, the GDPR, with which it shares some common ground.
What is the EU AI Act's purpose?
The Act's stated objective is to "improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation".
In other words, it is all about creating a marketplace for trustworthy AI while upholding standards around safety and trust and preventing harm.
Provisions for SMEs and startups
While not a primary goal, the AI Act seeks to ensure a level playing field among foundation model providers, including compliance with EU copyright law, so that no provider can gain a competitive advantage in the EU market by applying copyright standards that are lower than those in the EU.
It also aims to provide for the needs of SMEs, so that compliance is not overly burdensome.
"There are specific provisions that aim at helping smaller enterprises," noted Kalliopi Spyridaki, chief privacy strategist EMEA & Asia Pacific at analytics software company SAS.
For example, SMEs and startups will be given priority access to "regulatory sandboxes". These are controlled environments providing a safe space to assess the performance, potential risks and societal impact of AI systems before they are deployed in the market. Each member state is encouraged to establish these sandboxes, which will be managed by the new EU AI Office together with national authorities.
There are other provisions to ease the regulatory burden on SMEs too, Spyridaki said, and managing these is one of the responsibilities of the new EU AI Office.
"The AI Office is not just a regulator, it's also a body that's tasked with helping smaller enterprises and promoting innovation also through funding, and there are also lower fines."
What are the risk categories?
The EU takes a tiered approach to regulating AI systems based on their risks, as perceived by lawmakers.
An "AI system" is defined as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
Prohibited AI systems
Some AI systems are prohibited altogether. These include systems used for social scoring; emotion recognition in educational or work environments; the exploitation of vulnerable groups such as children; the creation of untargeted facial recognition databases; inferring or categorising people based on biometric data; and others. Their use will be banned 6 months after the Act comes into force, i.e. from 2nd February 2025.
High-risk AI systems
The Act will apply to most high-risk AI systems after 24 months (from 2nd August 2026), although exceptions or transitional arrangements allow for 36 months or even 6 years in some cases.
"An AI system is considered high-risk if it's meant to be used as a safety component of a product, or is itself a product, of certain types, that is required to undergo third-party conformity assessments under EU law, like radio equipment" said Dr Kuan Hon, counsel at law firm Dentons.
Of more concern to most organisations, the high-risk category also includes autonomous vehicles; robots and industrial automation; AI systems used for medical diagnosis, prognosis or treatment recommendation; HR systems for automated hiring and employment decisions; biometric identification and surveillance systems; automated management of critical infrastructure; systems for financial decision-making and credit scoring; legal and judicial systems; systems for training and education and for content moderation; and others.
Providers and deployers
"High-risk AI systems aren't banned outright," Hon said. "But 'providers' (i.e. developers) of such systems must comply with a long list of obligations including on testing and conformity assessments, and must provide certain technical information to 'deployers' (i.e. users of AI systems) who put the system into use.
She continued: "Deployers of high-risk AI systems must also comply with various requirements, including, in most cases, conducting a pre-deployment fundamental rights impact assessment. Even more rules apply to deployers in particular situations."
Limited-risk AI systems
Limited-risk AI systems are those that, while not dangerous enough to fall into the high-risk category, still carry potential risks that need to be addressed and managed. They include chatbots and virtual assistants; product recommendation systems; fraud detection systems; emotion detection systems; personalised marketing; and predictive maintenance systems. Requirements for certain types of limited-risk systems will also apply from 2nd August 2026.
The key issue here is transparency, said Hon.
"For example, providers of AI systems meant to interact with humans, like chatbots, must generally make clear it's an AI system (unless that's obvious). Providers of AI systems (including general purpose AI) used to generate synthetic audio, image, video or text must generally ensure outputs are marked and detectable, machine-readably, as AI-generated or edited; while deployers of deepfake content must generally indicate it was AI-generated or edited; similarly with text published 'with the purpose of informing the public on matters of public interest'."
Minimal or no-risk AI systems
For completeness, AI systems that pose little or no risk to safety, privacy or other fundamental rights are not regulated under the AI Act, and there are no specific compliance requirements for them. Voluntary industry codes of conduct may develop over time for these systems, according to Hon.
Exclusions
The EU AI Act specifically excludes certain activities, such as those related to the security and integrity of European countries, including defence, national security and law enforcement. However, this exclusion does not extend to the use of AI for live or "near live" remote biometric identification, or for determining an individual's access to essential private or public services.
Research and development use cases are also excluded. "This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity," the Act states.
General purpose AI models (GPAIs)
General purpose AI models were a relatively late addition to the draft Act, included after public interest in ChatGPT had become apparent.
Commonly known as "foundation models", GPAI models are defined with reference to "...in particular the generality and the capability to competently perform a wide range of distinct tasks," the Act says.
GPAI models are typically trained on large amounts of data, through methods such as self-supervised, unsupervised or reinforcement learning.
A GPAI model is a component part of an AI system rather than an AI system in itself. As such, GPAI models do not come under the risk categories outlined above, but providers of GPAI models still have plenty of hoops to jump through with regard to transparency, safety and societal impact.
"These rules are intended to increase transparency, for example requiring detailed documentation from providers of GPAI models, including for ‘downstream providers' that integrate a GPAI model to provide AI systems themselves - such as to offer those AI systems to the downstream provider's own customers," said Hon.
"In addition, even more rules apply to any GPAI model that's considered to pose systemic risks to the EU, a 'significant impact' on the EU market, or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or the society as a whole."
A GPAI model is assumed to pose this risk "when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵."
This definition pulls in most of the well-known LLMs and text-to-image models, including GPT-4, Claude, LaMDA, Midjourney, DALL-E and other foundation models.
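The Act does not specify how training compute should be estimated. As a rough, purely illustrative sketch, the commonly cited heuristic of about 6 floating point operations per parameter per training token for dense transformer models can be used to gauge whether a training run is likely to cross the 10²⁵ FLOP presumption threshold (the model sizes below are hypothetical):

```python
# Illustrative sketch only: estimate whether a training run crosses the Act's
# 10^25 FLOP presumption threshold for GPAI models posing systemic risk.
# Uses the rough "6 * parameters * tokens" heuristic for dense transformers;
# the Act itself does not prescribe an estimation method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Very rough training-compute estimate for a dense transformer."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples
print(presumed_systemic_risk(70e9, 15e12))   # ~6.3e24 FLOPs -> False, below the threshold
print(presumed_systemic_risk(1e12, 20e12))   # ~1.2e26 FLOPs -> True, above the threshold
```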
However, providers of models like these are given some lead time to comply. The GPAI rules don't kick in until 2nd August 2025, and provisions allowing GPAI model providers to be fined won't be effective until 2nd August 2026.
In addition, providers of models that were placed on the EU market before 2nd August 2025 only need to comply with the Act's obligations by 2nd August 2027, 36 months after it comes into force.
Compliance with GPAI model rules will also take account of the size of the provider, and it's expected there will be simplified routes for SMEs.
As with the rest of the Act, these rules are all subject to review by the European Commission and the new AI Office.
The EU AI Office
Employing 140 individuals including technology specialists, lawyers and economists, the AI Office will play a central role in the implementation of the AI Act, from monitoring, compliance and enforcement, to developing guidance, facilitating governance cooperation, and fostering trustworthy AI innovation across the EU.
The AI Office will be advised by an independent AI Board, with one representative per member state.
How will the AI Act be enforced?
The AI Office will coordinate enforcement of the AI Act across EU member states. Each member state will need to assign National Competent Authorities (NCAs) responsible for enforcement at the national level. These NCAs will conduct investigations and audits of AI products, handle complaints, issue penalties and cooperate with other NCAs in the case of cross-border complaints. The NCA of the member state where the provider is established will typically take the lead in enforcement actions.
Penalties for non-compliance include warnings, bans on the use or supply of non-compliant AI systems - and fines. Fines can reach up to €35 million or 7% of total global annual turnover, whichever is higher, for the most serious breaches.
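For illustration only, the "whichever is higher" mechanic works out as follows (the turnover figures below are hypothetical):

```python
# Illustrative only: top-tier fine cap of EUR 35 million or 7% of global
# annual turnover, whichever is higher.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(max_fine_eur(200_000_000))      # 35,000,000 - the flat cap applies
print(max_fine_eur(10_000_000_000))   # 700,000,000 - 7% of turnover applies
```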
This arrangement is similar to that for the enforcement of the GDPR, where Data Protection Authorities in individual states field complaints and take the lead in enforcement actions.
However, a law is only as good as its enforcement, and enforcing the new rules could be more complex than under the GDPR (where cases can already take years to reach judgment), since the AI Act uniquely merges two legal approaches - product safety and fundamental rights protection - into one piece of legislation.
This complexity could prove problematic, said Jonathan Armstrong, partner at Punter Southall Law Ltd.
"Under GDPR many fines have been successfully appealed often because the enforcement process wasn't clear. The EU has not learned lessons from this so expect confusion with enforcement," he said.
Armstrong also believes that, despite the EU's best efforts to cater for the needs of SMEs, the Act could end up limiting the playing field to those with large budgets.
"The big players can afford compliance teams, they can afford fines and if necessary they can afford to appeal," he said. "They can also afford for their output to be wrong, as we've seen, whereas a startup can't."
This sentiment was echoed by Amanda Brock, CEO of non-profit OpenUK, in a recent piece for Computing: "Overly prescriptive in nature, the AI Act runs a real and present risk of creating regulatory capture as it comes into force," she said.
But in contrast to the GDPR, where enforcement tends to happen after a complaint is made (although the GDPR does require most AI applications to go through a data protection impact assessment (DPIA) before they process data), the AI Act requires a conformity assessment to be carried out before a product is placed on the market, Kalliopi Spyridaki of SAS pointed out. The AI Office will be well funded and should be capable of overseeing fair play and creating a market for trustworthy AI, she added, but conceded: "It's a new area and they will have to hire the right people to be able to understand AI, and then understand the law, and then be able to enforce it."
She continued: "But they don't want to just impose fines, they want to be able to fulfil all the objectives of the law, which are consumer protection, product security, product safety."
It's important to note that the AI Act coming into force on 1st August is just the first step, said Kuan Hon. It will be expanded and amended over time.
"This is a novel law that attempts to regulate holistically a new and evolving technology, and it can reasonably be expected that it will take years for the new regulatory framework established by the EU AI Act to mature. We expect a wave of regulatory guidance, industry initiatives and market practice to develop over the years to come. "
Nevertheless, some critics have suggested it is already overly complicated, and that the EU would have been better off building from a less prescriptive framework. Spain and Italy recently took action against OpenAI on the basis of the GDPR.
Armstrong wondered whether the EU might be simply inventing a more complex wheel: "The big question is what the new Act can do that a fully functioning, properly enforced GDPR can't do," he said.
Comparison with other jurisdictions
The EU AI Act may be the first specific AI legislation by a major bloc (arguably the EU wanted to get ahead of the game, cementing its reputation as a regulatory trailblazer), but it certainly won't be the last. Some other jurisdictions are following in its risk-based, product safety footsteps, while others favour something more lightweight and flexible, and perhaps more tech-industry-friendly.
UK
Many industry watchers expected the Labour government to announce an AI bill in the King's Speech this week, but this failed to materialise. Mention was limited to the government seeking to establish "appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."
Most legal experts expect Labour's emerging strategy to sit somewhere between the previous Conservative government's relatively laissez-faire approach and the more prescriptive EU AI Act.
The UK already has a respected AI Safety Institute, and the government will create a new Regulatory Innovation Office to manage AI and support existing regulators including the ICO and the CMA in using existing powers to regulate AI.
USA
Some US states, including California, Colorado, Connecticut and Utah, already have laws covering AI systems, some focusing on high-risk systems like the EU. However, at a federal level the US has so far taken "a softer approach" than the EU, in part because of the latter's precautionary principle, said Spyridaki.
"When the [EU] regulators can see a risk they will regulate so that the risk doesn't materialise. This is not the same as the US and the UK."
Other jurisdictions
Recent announcements from India indicate it will pursue an EU-like risk-based approach, with Canada and Brazil likely to take a similar path, according to Spyridaki. Japan looks to be adopting a similar approach to the UK, with a safety institute. Meanwhile, Australia is seemingly hovering between light-touch and prescriptive.
China, one of the largest producers of AI systems, takes a targeted approach, focusing on specific AI technologies and use cases. AI providers must register with the government, and they are also responsible for monitoring user behaviour and content moderation, including filtering and detecting illegal or harmful content.
It's also worth noting that international bodies such as UNESCO and the OECD have their own guidelines and recommendations, but these are not binding.