
Regulation of AI in the UK: the BoE paper on AI and Machine Learning

AlphaGo, Google DeepMind's deep learning system, has been hogging the limelight for a while now, making waves by defeating the world's number one ranked Go player. By some estimates, a vast majority of tasks carried out by treasury professionals could eventually become automated.

Regulators in the UK have taken a different approach from that of the EU. In the policy paper 'Establishing a pro-innovation approach to regulating AI', laid before Parliament on 18 July 2022, the UK Government stated that it proposes to regulate AI based on its use and impact. It intends to address high-risk issues rather than hypothetical risks, and to encourage innovation by avoiding unnecessary barriers. The paper recognises that AI 'is partially regulated through a patchwork of legal and regulatory requirements built for other purposes'. It proposes to depart from the approach taken by the EU, which it believes is based on the product safety regulation of the Single Market and does not capture the full application of AI or its regulatory implications.

The UK's preferred approach is to set out the core characteristics of AI to inform the scope of the AI regulatory framework, while allowing regulators to set out and evolve more detailed definitions of AI as required. This is in line with the government's intention to regulate the use of AI rather than the technology itself, so a detailed, universally applicable definition is not needed. The UK has identified two core characteristics of AI that are relevant from a regulatory perspective: (i) the adaptiveness of the technology, i.e. AI often operates partly on the basis of instructions that have not been expressly programmed with human intent; and (ii) AI systems often demonstrate a high degree of autonomy. Both characteristics raise regulatory concerns. In the case of adaptiveness, the logic behind a system's output may not be clear. In the case of autonomy, decisions may be made without the express intent or control of a human. The UK would prefer to respond with a context-specific approach that is pro-innovation and risk-based, coherent, proportionate, and adaptable.

On 17 February 2022, the Bank of England (BoE) published the final report of the Artificial Intelligence Public-Private Forum (AIPPF), in which it committed to publishing a discussion paper in late 2022 addressing (i) how existing regulation and legislation may apply to AI; and (ii) whether AI can be managed through extensions of the existing regulatory framework. That Discussion Paper ('DP') 5/22 was published on 11 October 2022, and some of its aspects are examined in this article below.

And remember, if you require advice on how regulations and legal areas impacted by AI and machine learning may affect your business, our expert financial services solicitors can help.

Existing regulatory approach to AI


The DP affirms that one of the regulatory principles central to the discharge of oversight is that the 'burden of restrictions…should be proportionate to the benefits…which are expected to result from the imposition of that burden or restriction'.

Consumer protection - FCA

The DP gives examples of how some of the FCA's Principles for Businesses, together with its rules and guidance, may be relevant to the risks AI poses to consumer protection. This is not an exhaustive list, and each situation will need to be analysed on its own merits to determine which rules apply. The new Consumer Principle and Consumer Duty, which apply from July 2023, are potentially relevant where retail customers are involved. In addition, the principles relating to treating customers fairly, communicating with clients, and special duties where there is a relationship of trust may also be relevant, as may the FCA's Guidance on the fair treatment of vulnerable customers.

The principles underlying the Consumer Duty and the provisions of the Equality Act 2010 may also be relevant in determining risks relating to bias and discrimination.

Consent and privacy

Where financial services firms use AI to process personal data, they will have obligations under UK data protection law. The DP also states that some practices may breach the FCA Principles or the Consumer Duty, for instance where a firm did not explain how it would use customer data in a way that was clear, fair and not misleading, and used that data in ways to which customers had not consented and which were potentially to their detriment.

Governance, policies, procedures, and controls to manage model risk

Model risk management

While the DP references various guidelines and standards that should inform model risk management ('MRM'), UK regulation of MRM is quite limited. The PRA published CP6/22 on 21 June 2022, which includes a set of principles it considers important for an effective MRM framework. The DP also references the Basel Committee on Banking Supervision's 'Corporate governance principles for banks'.

Safety and Soundness, board composition, collective expertise and engagement

The UK supervisory authorities take a principles-based approach to governance. The DP notes that general rules, guidance and principles will be relevant for assessing safety and soundness risks caused by AI. These are set out in the FCA's Principles for Businesses and the PRA's Fundamental Rules. There are also provisions with specific relevance to AI, such as the UK version of Commission Delegated Regulation (EU) 2017/565 of 25 April 2016 supplementing MiFID, which requires investment firms to store data in a way that prevents it being manipulated or altered except in specific situations. Likewise, with regard to board composition, expertise and management, the PRA's expectations on how firms should act are expected to continue to apply in respect of AI. These include, among other things, an expectation that firms should take steps to address any skills gap arising from the use of advanced technologies at senior management and board level.

The Senior Managers and Certification Regime will apply when determining who should be responsible for AI-related functions and activities. The DP points out that the algorithmic trading certification function extends to persons who: (i) approve the deployment of trading algorithms; (ii) approve amendments to trading algorithms; and (iii) have significant responsibility for managing the monitoring of, or for deciding, whether trading algorithms comply with a firm's obligations. PRA SS5/18 'Algorithmic trading' sets out expectations for governance (e.g. coordination across lines of defence, SMFs, testing) with regard to the use of algorithms in the context of trading.

Humans in the Loop

In certain circumstances it is a regulatory requirement for humans to be involved in decision making. Regulations treat automated decisions differently from human decisions. The DP points to the example of Article 22 of the UK GDPR in this context, which states among other things that ‘the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ The regulatory expectation is that ‘firms deploying AI systems need to have a sufficiently strong set of oversight and governance arrangements that make effective use of humans in the decision-making loop and review the accuracy of those arrangements.’

The Bank of England has set out a series of questions in the DP and has requested responses by 10 February 2023. For a full list of these questions and the DP itself, see here.

About our expert

John Pauley

Financial Services Partner
John is a specialist solicitor with extensive expertise in financial services regulation. He advises financial institutions, service providers, and merchants on regulated activities including payments, e-money, consumer credit, data protection, anti-money laundering, and gambling operations.

What next?

Please leave us your details and we'll contact you to discuss your situation and legal requirements. There's no charge for your initial consultation and no obligation to instruct us. We aim to respond to all messages received within 24 hours.

Your data will only be used by Harper James Solicitors. We will never sell your data and promise to keep it secure. You can find further information in our Privacy Policy.

