The EU's landmark AI Act 'rushed' out as countdown begins on compliance

'We will be hiring lawyers while the rest of the world is hiring coders' – Europe's warning about new AI legislation

Cameras and sensors on the ceiling of a cashierless Sensei Continente Labs supermarket in Lisbon, which uses artificial intelligence to track consumers' purchases
(Image credit: Jose Sarmento Matos / Bloomberg / Getty Images)

The EU's pioneering legislation to regulate AI is set to come into force next month, despite criticisms that it is incomplete, ambiguous, and stifling to the tech industry. 

The first law of its kind anywhere in the world, the EU Artificial Intelligence Act aims to protect citizens from potentially harmful uses of AI by regulating companies operating within the EU, without losing ground to AI superpowers China and the US.


What does the act do?

The act, which will be implemented in stages over the next two years, classifies different types of AI by risk. 

Minimal risk uses – AI-powered video games and spam filters, for example – will not be subject to regulation. Limited risk activities, such as chatbots and other generative AI platforms, will be subject to "light regulation", including transparency requirements to inform consumers that they are interacting with a machine.

The "high-risk" category includes AI systems used by law enforcement, such as biometric identification, as well as systems that govern access to public services or critical infrastructure.

A further "unacceptable risk" category bans all AI systems that "threaten citizens' rights", said The Verge. This includes AI used to deceive or manipulate humans, or profile them as potential criminals based on behaviour or personality traits.

What are the criticisms of the legislation?

Progress on the legislation, which has been in the works for three years, was "upended" in 2022 when OpenAI released ChatGPT, said the FT. The emergence of "generative AI" models, which create text or images based on user prompts, "reshaped the tech landscape" and had parliamentarians "rushing to rewrite the rules" to regulate the large language models that underpin such apps.

"Time pressure led to an outcome where many things remain open," a parliamentary aide involved in drafting the "rather vague" law told the FT. Regulators "couldn't agree on them and it was easier to compromise" on what the unidentified aide called "a shot in the dark".

Some critics say the text lacks clarity – especially on whether systems like ChatGPT are acting illegally when they use sources protected by copyright law. There is also confusion over who is responsible for content generated by AI, or what "fair remuneration" might look like for those who create the content it draws from. The act also does not specify who might enforce the rules in individual member states, or how, which could lead to patchy implementation across the continent.

The cost of compliance is also a problem, particularly for small companies. It could make it "very hard for deep tech entrepreneurs to find success in Europe", Andreas Cleve, chief executive of Danish healthcare start-up Corti, told the FT. Many believe that cost will hinder European companies competing with the US and China. "We will be hiring lawyers while the rest of the world is hiring coders," said Cecilia Bonefeld-Dahl, director general of DigitalEurope, which represents the bloc's technology sector.

What's still to be done?

Tech companies have until February next year to comply with the "unacceptable risk" rules or face a fine of 7% of their total global annual revenue, or €35 million (£29.5 million), whichever is higher. 

Developers of systems that fall in the "high risk" category will have until August 2027 to comply with rules around risk assessment and human oversight.

By some estimates, the EU needs between 60 and 70 pieces of secondary legislation setting out the details of how the act will be implemented and enforced, and those must be in place by May next year. "The devil will be in the details," a diplomat who took a leading role in drafting the act told the FT. "But people are tired and the timeline is tight."

Harriet Marsden is a writer for The Week, mostly covering UK and global news and politics. Before joining the site, she was a freelance journalist for seven years, specialising in social affairs, gender equality and culture. She worked for The Guardian, The Times and The Independent, and regularly contributed articles to The Sunday Times, The Telegraph, The New Statesman, Tortoise Media and Metro, as well as appearing on BBC Radio London, Times Radio and “Woman’s Hour”. She has a master’s in international journalism from City University, London, and was awarded the "journalist-at-large" fellowship by the Local Trust charity in 2021.