The Proposed EU ‘Artificial Intelligence Act’ – One Step Closer to the Regulation of AI

Artificial intelligence (AI) is an extremely powerful tool that is increasingly prominent in our technologically advancing world, as shown by the rise of ChatGPT (see our article).

We often see news stories about the use of AI in everyday society, from writing content to composing music.

However, rapid developments in AI have led to increasing calls for regulation, particularly in Europe. There is widespread concern that AI technology is advancing too quickly without any rules, and could go too far, resulting in serious risks and consequences.

On 14 June 2023, the European Parliament approved its version of a draft ‘EU Artificial Intelligence Act’ (EU AI Act), following which there will be negotiations with member states to agree the final text of the proposed new law. This is ground-breaking, landmark regulation: the new law will be the first-ever comprehensive set of AI laws.

Whilst the EU AI Act was first proposed back in 2021, there have since been several rounds of discussion around it, and considerations around the emergence of ChatGPT have shaped its progress. The aim of the law is to turn Europe into the ‘global hub for trustworthy AI’.

In a press release from the European Commission, Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, explained the importance of the new proposed law:

On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.

What is the EU AI Act about?

A key aim of the EU AI Act is to set out rules around the development and use of AI. Various new elements have been added to the proposed law, including considerations around new technologies such as ChatGPT. The key concern of lawmakers is to ensure that the use of AI is safe and transparent, and the law aims, amongst other things, to strengthen rules on data quality, transparency, human oversight and accountability when using AI.

The process for passing a law regulating AI is extremely complicated: AI is developing so quickly that it is practically difficult for lawmakers to ‘future-proof’ legal rules to cover all types of AI systems, both existing and future.

Various rules would apply under the EU AI Act, for example:

  • The law would impose a classification system determining the level of risk that AI technology could pose, categorising tiers of risk as: unacceptable, high, limited and minimal. Whilst unacceptable-risk systems would be banned outright, high-risk AI systems would be subject to extremely stringent rules.
  • The need to conduct ‘conformity assessments’ to make sure that high-risk AI systems conform with legal requirements.
  • Making sure businesses provide transparency about their AI systems, such as how the systems work and how they carry out decision making. For example, businesses may need to inform consumers about the use of AI models by using ‘pop-up’ notices, similar to cookie banners.

If adopted, what could the new EU AI Act mean for businesses?

When in force, the Act will affect a range of parties, such as AI developers and providers (who will need to comply with its legal rules), users and operators of AI systems, and consumers (whom the new law aims to protect when they interact with AI systems).

The EU AI Act will be complex and comprehensive, but at this stage we’d highlight some of the key proposals for businesses to note:

  • The law would prohibit certain uses of AI systems altogether.
  • The law may also apply to companies outside the EU that provide AI systems to EU customers; they too would need to ensure that their systems comply with the new legal rules. The law is likely to have a global impact, since so many global businesses are involved in commerce with the EU.
  • Fines for breaching the EU AI Act will be extremely high: breaching a prohibited practice under the law could result in fines of up to €40 million, or 7% of a company’s annual worldwide revenue, whichever is higher. This is even higher than the already significant fines for breaching the GDPR (see our article).
  • EU member states will need to put in place appropriate monitoring and oversight authorities to ensure compliance with the standards of the new law.

Key Takeaways

At this stage, we await further developments around this proposed law and will continue to monitor its progress. The law is not expected to come into force before 2025 at the earliest.

In the meantime, however, it is clear that the regulation of AI is a serious and high-stakes topic, and businesses that develop or use AI systems should start taking steps now to prepare for the upcoming changes in the rules and their impact on business practices.

In fact, the EU’s new legal rules may be closely followed by regulators in other countries if the EU AI Act becomes the ‘gold standard’ for AI regulation – therefore, companies using AI globally should take note. In the UK, the government’s white paper sets out its approach to AI regulation. We will continue to monitor developments in this space and report on how the UK decides to tackle the regulation of AI.

Some practical issues for businesses to consider now include:

  • Understanding what types of AI your organisation uses and why.
  • Putting in place policies around the development and use of AI, and training staff who use AI on those policies.
  • Carrying out risk assessments around the use of AI.
  • Keeping up to date with legal developments around the regulation of AI, to forward plan and prepare for the new rules.
  • Ensuring that, where the AI models you use process personal data, you always act in compliance with the GDPR.

This is a complex and fast-moving topic, and the law could have a number of implications for the use of AI. If you would like preliminary advice on these developments and how they could impact your business, please contact our team.

About our expert

Becky White

Senior Data Protection & Privacy Solicitor
Becky is a highly experienced commercial lawyer, specialising in Data Protection and Privacy Law matters. She trained at DAC Beachcroft in the City of London nearly 20 years ago, and has since spent most of her career working in-house as the senior or sole legal adviser in a variety of sectors including Construction and Engineering, Oil and Gas, Government and Recruitment.


Our offices

A national law firm

Our commercial lawyers are based in or close to major cities across the UK, providing expert legal advice to clients both locally and nationally.

We mainly work remotely, so we can work with you wherever you are. But we can arrange face-to-face meetings at our offices or a location of your choosing.

Floor 5, Cavendish House, 39-41 Waterloo Street, Birmingham, B2 5PP
Stirling House, Cambridge Innovation Park, Denny End Road, Waterbeach, Cambridge, CB25 9QE
13th Floor, Piccadilly Plaza, Manchester, M1 4BT
10 Fitzroy Square, London, W1T 5HP
Harwell Innovation Centre, 173 Curie Avenue, Harwell, Oxfordshire, OX11 0QG
Floor 2, Cubo, 38 Carver Street, Sheffield, S1 4FS

To access legal support from just £140 per hour, arrange your no-obligation initial consultation to discuss your business requirements.

Make an enquiry