UK: The FCA, Bank of England and PRA issue their strategic approach to regulating AI in response to the government’s AI White Paper

The UK’s financial regulators have responded to the government’s White Paper on its approach to regulating the use of artificial intelligence and machine learning (AI). Publications from the Financial Conduct Authority (FCA), the Bank of England and the Prudential Regulation Authority (PRA), each dated 22 April 2024, signal that a principles-based approach will be taken for the UK financial sector in relation to AI for the time being, but indicate that the regulators may, in the future, issue further guidance or prescriptive rules specifically governing the use of AI.

Background

The UK government launched a White Paper on its intended approach to regulating AI on 29 March 2023, which it said would ‘turbocharge growth’ by driving responsible innovation and maintaining public trust. Following a consultation period, the government issued its response on 6 February 2024, in which it made clear that it would not rush to legislate or risk implementing ‘quick-fix’ rules that could quickly become outdated or ineffective given the rapid pace of development in this area. Instead, the government stated its intention to empower existing regulators to address AI risks in a targeted way, tailored to the needs of each sector.

The government’s approach to ensuring responsible AI use is centred around five cross-sectoral principles:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

As part of its consultation response, the government called upon regulators in key sectors to publish updates by the end of April 2024 outlining their strategic approach to AI, including an explanation of their current capability to address AI and the actions they are taking to ensure they have the right structures and skills in place.

On 22 April 2024, the financial regulators responded with the following publications:

  • the FCA published an AI Update (the “FCA Update”) in response to the government’s White Paper and its further paper Implementing the UK’s AI regulatory principles: initial guidance for regulators; and
  • the Bank of England and the Prudential Regulation Authority (PRA) published a letter addressed to the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, and the Economic Secretary to the Treasury and City Minister, Bim Afolami MP, setting out an update on their approach to AI (the “BoE/PRA Update”).

Alignment with the government’s approach and no new regulation expected

Both the FCA and BoE/PRA Updates welcome the government’s principles-based, technology-agnostic and sector-led approach to regulating AI. In particular, the regulators note that the cross-sectoral principles are consistent with their existing regulatory approach and have mapped out the ways in which their respective regulatory frameworks could meet or address those principles (as summarised below). They take the view that the existing FCA and PRA regulatory frameworks are appropriate to support AI innovation in ways that will benefit the industry and the wider economy whilst also addressing the risks, in line with the five principles set out in the White Paper.

The FCA has, however, noted that its regulatory approach will have to adapt to the speed, scale and complexity of AI and that greater focus on the testing, validation and understanding of AI models may be needed, as well as strong accountability principles. 

The PRA has also noted that the continued adoption of AI in financial services could have potential financial stability implications and it will undertake deeper analysis on this during the course of 2024, the findings of which will be considered by the Financial Policy Committee (FPC) of the Bank of England.

The following summarises, principle by principle, the ways in which the FCA’s and PRA’s existing regulatory frameworks are said to address the UK government’s principles.

Safety, security and robustness

FCA’s existing regulatory framework:

FCA Principles for Business: The Principles provide a general statement of the fundamental obligations of firms and other persons to whom they apply. Under the Principles, firms must conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly, effectively and with adequate risk management systems (Principle 3).

FCA Threshold Conditions: Firms with a Part 4A permission that carry on specific regulated activities need to ensure that their business model is suitable, which includes considering whether the business model is compatible with the firm’s affairs being conducted in a sound and prudent manner, as well as the interests of consumers and the integrity of the UK financial system.

FCA Senior Management Arrangements, Systems and Controls (SYSC) sourcebook: Relevant provisions include risk controls under SYSC 7 and general organisational requirements under SYSC 4, including requirements for relevant firms to have sound security mechanisms in place relating to data, as well as requirements related to business continuity under SYSC 4.1. Additionally, under SYSC 15A, relevant firms are required to be able to respond to, recover and learn from, and prevent future operational disruptions. SYSC 8 and SYSC 13 (in respect of insurers) also contain specific rules and guidance on outsourcing, including in relation to operational risk.

PRA’s existing regulatory framework:

PRA SS2/21 (Outsourcing and third party risk management): The PRA’s policies on outsourcing and third-party risk management (TPRM) put the onus on regulated firms to manage the risks arising from third-party arrangements that support the delivery of important business services.

CP26/23 (Operational resilience: Critical third parties to the UK financial sector): The Bank of England, PRA and FCA are currently assessing their approach to Critical Third Parties (CTPs), which has included publishing a Consultation Paper. The adoption of AI may lead to the emergence of third-party providers of AI services that are critical to the financial sector. If so, these systemic AI providers could come within scope of the proposed CTP regime, if designated by HM Treasury.

Appropriate transparency and explainability

FCA’s existing regulatory framework:

FCA Consumer Duty: Firms need to act in good faith, characterised by honesty and fair, open dealing with retail customers (see PRIN 2A.2.2R). Additionally, firms need to meet the information needs of retail customers and equip them to make decisions that are effective, timely and properly informed.

FCA Principles for Business: In instances where the Consumer Duty does not apply, Principle 7 requires firms to pay due regard to the information needs of clients and communicate with them in a way that is clear, fair and not misleading.

PRA’s existing regulatory framework:

PRA SS1/23 (Model Risk Management principles for banks): The supervisory statement lists explainability and transparency as factors to be considered when assessing the complexity of a model. The principles are designed so that greater complexity requires greater oversight by banks, with validation activities and risk controls prioritised for the most complex models. These principles apply to all models used by banks, not just AI models, and to their use of third-party models.

Fairness

FCA’s existing regulatory framework:

FCA Consumer Duty: Firms are required to play a greater and more proactive role in delivering good outcomes for retail customers, including (in some circumstances) those who are not direct clients of the firm. Firms are required to act in good faith, avoid causing foreseeable harm, and enable and support retail customers to pursue their financial objectives.

FCA Principles for Business: Firms will need to consider Principle 8 on managing conflicts of interests and Principle 9 on the suitability of advice and discretionary decisions. Where firms are not conducting retail market business and the Consumer Duty does not apply, firms need to pay due regard to the interests of their customers and treat them fairly (Principle 6) and communicate information in a way that is clear, fair and not misleading (Principle 7).

Equality Act: This cross-sector legislation (applicable to both financial and non-financial sectors) prohibits discrimination on the basis of protected characteristics. 

PRA’s existing regulatory framework:

The PRA acknowledged that this principle is more relevant to consumer-facing regulators such as the FCA. However, where fairness is deemed relevant to the PRA’s remit (eg, prudential soundness), the PRA would expect firms to define fairness for themselves, with justification.

Accountability and governance

FCA’s existing regulatory framework:

FCA Principles for Business: Principle 3 requires a firm to take reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management systems.

FCA SYSC sourcebook: Under SYSC 4.1.1R, a firm must have robust governance arrangements, which include a clear organisational structure with well defined, transparent and consistent lines of responsibility, effective processes to identify, manage, monitor, and report the risks it is or might be exposed to, and internal control mechanisms, including sound administrative and accounting procedures and effective control and safeguard arrangements for information processing systems.

SM&CR: As noted in the PRA section below, the Senior Managers and Certification Regime emphasises senior management accountability and is relevant to the safe and responsible use of AI.

FCA Consumer Duty: Firms are required to ensure that their obligation to act to deliver good outcomes for retail customers is reflected in their strategies, governance and leadership.

 

PRA’s existing regulatory framework:

Senior Managers and Certification Regime (SM&CR): Regulated firms are required to ensure that one or more of their Senior Managers (ie, key decision-makers within the firm) have overall responsibility for the main activities, business areas and management functions of the firm. This means that any material use of AI in relation to an activity, business area or management function of a firm ought to be set out as falling within the scope of a Senior Manager’s responsibilities. These individuals can be held accountable if there is a regulatory breach within their area of responsibility and they failed to take reasonable steps to prevent it.

PRA Rulebook: The General Organisation Requirements and Conditions Governing Business sections provide an overview of governance rules for banks and insurers respectively. Additionally, Section 2.1A of the Risk Control part of the PRA Rulebook sets out high-level governance requirements for effective risk management procedures.

PRA SS1/23: The Model Risk Management (MRM) principles for banks establish that the PRA expects ‘strong governance oversight with a board that promotes an MRM culture from the top through setting clear model risk appetite’. Additionally, SS1/23 requires banks to maintain a comprehensive model inventory, which includes AI models.

Contestability and redress

FCA’s existing regulatory framework:

Dispute Resolution: Complaints sourcebook (DISP): Chapter 1 contains rules and guidance detailing how firms should deal with complaints.

PRA’s existing regulatory framework:

The PRA indicated that this principle sits more within the domain of consumer-facing regulators.

Data protection laws are not covered above, as the Information Commissioner’s Office (ICO) will remain responsible for enforcing compliance in this regard.

Investments to support innovation

The PRA and FCA each have, in addition to their primary objectives, a secondary objective to facilitate the international competitiveness of the UK economy and its growth in the medium to long term. Recognising this objective, the FCA and BoE/PRA Updates emphasise that the UK financial regulators are pro-innovation and reference significant investments being made by the regulators to support the safe development of AI and to adopt AI technology in support of their own functions. The FCA, for example, has established Regulatory and Digital Sandboxes to allow firms to test ideas in a controlled environment and is hosting a TechSprint in which trade surveillance specialists will have access to the FCA’s trading datasets to develop and test AI-powered surveillance solutions. The FCA has also announced the creation of a new FCA digital hub in Leeds comprising more than 75 data scientists.

The next 12 months

Over the next 12 months, the FCA intends to focus on the following in relation to AI:

  1. Continuing to further its understanding of AI deployment in UK financial markets;
  2. Building on its existing foundations to consider regulatory adaptations if needed;
  3. Collaborating closely with the Bank of England, Payment Systems Regulator (PSR) and other regulators as well as regulated firms, academia, society and international regulators and organisations to build empirical understanding and intelligence around AI;
  4. Testing for beneficial AI including through the pilot AI and Digital Hub, the FCA’s Digital Sandbox and the Regulatory Sandbox. The FCA will continue to explore the possibility of establishing an AI Sandbox; and
  5. Conducting research on deepfakes and simulated content following engagement with stakeholders as part of the Digital Regulation Cooperation Forum (DRCF) Horizon Scanning and Emerging Technologies workstream in 2024-25.

The PRA has been exploring the following potential areas for clarification on its regulatory framework:

  1. Data management: Firms have noted the fragmented nature of the current regulatory landscape around data management in the context of AI and the PRA is considering options to address those challenges;
  2. Model risk management (MRM): The model risk management principles for banks will come into effect on 17 May 2024;
  3. Governance: In DP5/22, the Bank of England and the PRA sought feedback on whether there should be a dedicated Senior Management Function (SMF) for AI under the Senior Managers and Certification Regime (SM&CR). Respondents expressed the view that existing firm governance structures are sufficient to address AI risks, including the requirement under SS1/23 for firms to identify the relevant SMF(s) to assume overall responsibility for the firm’s MRM framework. SS1/23 also requires firms to maintain a model inventory, which includes AI models; and
  4. Operational resilience and third-party risks: The Bank of England, PRA and FCA are currently assessing their approach to critical third parties (CTPs) in the financial sector following CP23/30 which was published in December 2023. 

The PRA will also continue to engage directly with stakeholders. This may include establishing a new AI Consortium as a follow-up industry consortium to the Artificial Intelligence Public-Private Forum (AIPPF) which published its final report in February 2022. 

Impact on businesses

Many businesses that operate in the financial sector – including financial services firms and technology providers – will welcome the principles-based approach favoured by the regulators and will be relieved that they do not need to prepare for a prescriptive new suite of regulations governing IT systems that rely on AI components. Some businesses, on the other hand, are already preparing to implement the EU Artificial Intelligence Act and will face decisions as to how their AI governance arrangements should apply to their UK operations. Nevertheless, having some clarity from the regulators will be helpful at a time when businesses are looking to adopt new AI capabilities rapidly as they become available.

In our experience, and as can be seen from the above, there is no shortage of regulation and legislation that would apply to AI. The challenge for the financial services industry, however, is that it is not always clear exactly how the regulators expect that regulation to apply to AI in practical, operational terms. This is exacerbated when firms negotiate with vendors of AI systems, who may take a less conservative view of how a particular regulation should apply to the systems they provide. The industry would therefore welcome further guidance, and clear practical examples in particular would be of help.

Strategic guidance from other UK regulators such as the Information Commissioner's Office (ICO) and the Competition and Markets Authority (CMA) is expected and is also likely to have an impact on financial institutions. It is also a real possibility that a change in government following the upcoming general election at the end of this year could have a material impact on the direction of travel for AI regulation.

Our specialist teams are closely following developments in relation to AI and support clients with policy advocacy through engagement with regulators, government departments and legislators. Please get in touch with us if you would like to discuss how we can help.

Authored by Melanie Johnson, Louise Crawford, and Daniel Lee.

 
