The UK financial services regulators, the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) (together, the Supervisory Authorities), jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper is to facilitate a public debate on the safe and responsible adoption of AI in UK financial services.
Principally, DP5/22 examines:
- the potential merits of providing a regulatory definition for AI;
- the benefits, risks and harms related to the use of AI and machine learning that could significantly affect or even transform how financial services and markets operate; and
- how the current regulatory framework could apply to AI.
The Supervisory Authorities have also raised discussion questions for stakeholder input, with the aim of understanding whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI and how any additional intervention may support the safe and responsible adoption of AI in UK financial services.
The Supervisory Authorities have not provided any new legal framework or their intended future approaches for regulating the use of AI and machine learning in UK financial services. However, the discussion paper provides a valuable platform for the Supervisory Authorities, experts and stakeholders to collaborate and jointly assess whether the current legal framework can adequately regulate AI technology by safeguarding each of the Supervisory Authorities’ objectives while at the same time promoting innovation in UK financial services.
This consultation runs in parallel with the UK government’s ongoing work in developing its own cross-sector approach to the regulation of AI technology, and will therefore provide a valuable contribution to this broader policy debate.
Potential merits of providing a regulatory definition for AI
Despite the challenges of defining AI, the Supervisory Authorities point out that there are benefits to establishing a precise definition of AI, which include: (i) creating a common language for firms and regulators, which may ease uncertainty; (ii) assisting in a uniform and harmonized response from regulators towards AI; and (iii) providing a basis for identifying whether specific use cases might be captured under particular rules and principles.
The Supervisory Authorities also point out the merits of distinguishing between AI and non-AI, both to provide clarity on what constitutes AI within the context of a specific regulatory regime and to manage risks and expectations.
Benefits and risks related to the use of AI in financial services
The benefits and risks of using AI have been categorized in the discussion paper based on each of the Supervisory Authorities’ objectives, namely consumer protection, competition, safety and soundness of firms, insurance policyholder protection, financial stability and market integrity.
- Consumer protection (FCA): AI can help identify consumer characteristics and preferences by processing large volumes of data, which in turn can support more tailored and personalized services, such as making financial services available to consumers with non-standard histories. However, there is a danger that AI could produce biased results and discriminate against consumers on the grounds of protected characteristics such as race, religion or sex.
- Competition (FCA): Consumer-facing AI systems, such as those used in Open Banking, can improve competition in a market by improving consumers’ ability to assess, access, and act on information. However, AI systems could facilitate collusive strategies between sellers by making price changes more easily detectable, and the high costs of entry (data, skilled workers and AI technology) may impede competition.
- Safety and soundness (PRA and FCA): AI allows financial services firms to create more accurate decision-making tools, develop new insights and safer products and services for consumers, and improve their operational efficiency. However, AI could amplify prudential risks (credit, liquidity, market, operational, reputational, etc.) and jeopardize the safety and soundness of firms.
- Insurance policyholder protection (PRA and FCA): AI can offer the automation of data collection, underwriting and claims processing, and help to provide more personalized insurance products to policyholders. However, any biased or unrepresentative input data can lead AI systems to treat certain policyholders unfairly, which may lead to inappropriate pricing and marketing.
- Financial stability and market integrity (BoE and FCA): AI can process large volumes of data and information more efficiently, particularly with respect to credit decisions, insurance contracts and customer interaction, which may contribute to a more efficient financial system overall. However, as a growing number of financial services firms adopt AI technology built on similar datasets and algorithms, and rely on the same third-party service providers, AI may amplify existing risks to financial stability and systemic risk.
Existing legal requirements for the use of AI
In the discussion paper, the Supervisory Authorities have set out current and forthcoming legal requirements and guidance relevant to mitigating the risks associated with AI, including but not limited to the FCA Consumer Duty rules, the UK General Data Protection Regulation (UK GDPR), the Equality Act 2010 and the Senior Managers and Certification Regime (SM&CR).
These and other relevant regulations and guidance will be dealt with in more detail in the next article in our forthcoming series on AI and machine learning in financial services.
The comment period for the discussion paper closes on 10 February 2023, and stakeholders can submit comments or enquiries to DP5_22@bankofengland.co.uk before the deadline. We will keep a close eye on responses to this discussion paper and on the UK government’s future approach to regulating AI.
As noted above, this article kickstarts a forthcoming series on the range of regulations and legal areas impacted by AI and machine learning.
Authored by John Salmon, Michael Thomas, Julie Patient, Dan Whitehead, Jo Broadbent, Melanie Johnson, Daniel Lee.