This article is the second in a series on the range of regulations and legal areas impacted by artificial intelligence (AI) and machine learning. We previously published an article analyzing the recent discussion paper (DP5/22) published by the Bank of England, PRA and FCA on AI and machine learning.
On 27 July 2022, the FCA published final rules and guidance (Policy Statement PS22/9 and Final non-Handbook Guidance FG22/5) for a new Consumer Duty that will set higher and clearer standards of consumer protection across financial services and require firms to put their customers’ needs first.
The Consumer Duty rules comprise:
- an overarching Consumer Principle that firms must act to deliver good outcomes for retail customers;
- three cross-cutting Rules requiring firms to act in good faith, avoid causing foreseeable harm, and enable and support customers to pursue their financial objectives; and
- four outcomes requiring firms to ensure consumers receive communications they can understand, products and services that meet their needs and offer fair value, and the support they need.
The implementation stage is now fully underway, with firms expected to “put the pedal to the metal”. For products and services that are open to sale or renewal, the new rules come into force on 31 July 2023. For closed products or services, the rules come into force on 31 July 2024.
The role of AI in producing good outcomes for consumers
As set out in the first article of the series, the UK financial services regulators recognised in their recent discussion paper on AI and machine learning that the potential benefits of AI to financial services consumers will depend on how it is used and for what purpose.
The overarching aim of the Consumer Duty is to ensure that firms provide good outcomes for their retail customers. The FCA has clarified that retail customers include those who are not direct clients of a firm, broadening firms’ responsibility over the consumer market. In this context, AI can be a useful tool for firms to provide access to financial services to consumers with non-standard histories, by using its power to harness large volumes of data to identify demographics with specific needs and produce better product matches for consumers.
At the same time, the lack of human engagement in AI-led processes has the potential to widen existing gaps and exploit characteristics of vulnerability. This is particularly concerning in the context of the Consumer Duty, as the FCA has stressed that it wants customers in vulnerable circumstances to experience outcomes as good as those for other customers and to receive consistently fair treatment. Firms tempted to increase their reliance on AI must therefore ensure that AI-led processes serve vulnerable consumers rather than disadvantage them, to avoid the scales tipping to the other extreme.
The relationship between AI and the Cross-cutting Rules
The Cross-cutting Rules serve as overarching expectations that firms must comply with across all areas of their business that directly or indirectly affect consumers. The FCA has in some cases addressed the potential benefits and risks of AI in relation to these rules:
- Acting in good faith: firms must act in good faith at all stages of the customer journey. This applies both to a firm’s conduct towards groups of customers and to its interactions with individual customers. The FCA has warned that using algorithms, including machine learning or AI, could lead to consumer harm. This might apply where algorithms embed or amplify bias and lead to outcomes that are systematically worse for some groups of customers, unless the differences in outcome can be objectively justified.
- Avoid causing foreseeable harm: firms can cause foreseeable harm to customers through their actions and omissions, not only through direct relationships with consumers but also through their role in the distribution chain, even where their actions or omissions are not the sole cause of harm. The UK financial services regulators have recognised a risk that the use of AI could be associated with discriminatory decisions, arising inadvertently during model development and leading to foreseeably discriminatory outcomes. However, it is worth noting that such bias can arise from the underlying data on which the AI model is trained – data selected by human actors – so AI may not be the root cause of discrimination but simply a reflection of the human training it receives.
- Enable and support customers to pursue their financial objectives: AI can provide a more granular understanding of individual consumers’ characteristics, allowing firms to tailor their products to consumers’ individual financial objectives. At the same time, this kind of fully data-driven method risks exploiting inertia or creating harmful price discrimination, fuelling behavioural biases that ultimately widen the gap between low- and high-risk consumers of financial services.
Potential for AI in an outcomes-based approach to the Consumer Duty
For an outcomes-based approach to AI, the UK financial services regulators raise questions that are yet to be decided: what are the most relevant metrics to measure the impact of AI on these outcomes, what evidence is required to demonstrate good outcomes for consumers, and how can this evidence be collected? This leads us to ask a further key question: how much space should firms make for AI in their interactions with consumers?
- Products and services: firms must design products and services to meet the needs of customers within their target market. AI’s ability to analyse huge amounts of data could be a significant ally in achieving this outcome, provided, as described above, that the design of the model and the training data used to develop it are first checked for bias at the human level.
- Price and value: the specific focus of this rule is on ensuring the price the customer pays for a product or service is reasonable compared to the overall benefits. Value needs to be considered overall, and low prices do not always mean fair value. Firms are not forbidden from adopting business models with different pricing by customer group, but the outcome-based approach will require them to monitor and explain how they are complying with this outcome if they use AI models that result in price and value differences.
- Consumer understanding and consumer support: these two outcomes are strongly interconnected, as both turn on understanding the needs of individual consumers and tailoring interactions with them accordingly. AI could struggle in these areas, as the technology may not yet be able to react to situations that fall entirely outside its training. This could be especially true of fully automated chatbots, which may fail to meet the needs of customers dealing with non-standard issues.
While firms will need to assess the Consumer Duty implications of their use of AI in detail, it is clear that on a global scale AI is here to stay and machine learning will continue to develop. As recognised by the UK financial services regulators, when used properly AI can improve firms’ compliance with the Consumer Duty and ultimately create more positive outcomes for consumers, though this will certainly not be a case of “one size fits all.”
Keep an eye on Engage for our next article in this series.
You can find our article series on AI and machine learning in the context of financial services below:
- UK FCA, PRA, and BoE publish discussion paper (DP5/22) on AI and machine learning
Authored by John Salmon, Michael Thomas, Julie Patient, Dan Whitehead, Jo Broadbent, Melanie Johnson, Daniel Lee, Diana Suciu.