The potential impact of artificial intelligence (“AI”) in the UK competition law arena is not new – Sarah Cardell, Chief Executive of the Competition and Markets Authority (“CMA”), stated earlier this year that AI “has been on our radar for some time”. Indeed, in the past decade the CMA has investigated algorithm-enabled practices in relation to both horizontal concerns, such as a 2015 investigation into the use of automated repricing software to coordinate pricing in the online marketplace for posters and frames, and more recently in relation to vertical agreements in the context of resale price maintenance (RPM) of musical instruments. In the latter case, the CMA identified the use of “all-seeing software” by suppliers, and also confirmed that it had “launched its own in-house price monitoring tool aimed at deterring companies” from breaching competition law.
In addition, the CMA has had AI specifically on its radar from a national security perspective since 2020, when the UK merger control thresholds were expanded to capture transactions involving target companies active in AI with revenues as low as £1m in the UK – although the CMA’s national security remit has now been shifted to the Investment Security Unit within the Cabinet Office as part of the National Security and Investment regime (for an overview see here).
What is new is the perceived speed of change and the way recent developments around generative AI have captured the public’s imagination and thrust this topic into the mainstream debate. The widespread commercial adoption of AI has shown that, as the CMA acknowledged in the ‘initial review’ launched into AI foundation models on 4 May 2023 (“Initial Review”), “we are at a pivotal moment in the development of a transformative technology”, and governments and competition authorities worldwide are swiftly escalating their regulatory responses.
The CMA has been developing its capabilities to deal with these new challenges for some time. The Data, Technology and Analytics (“DaTA”) unit was established in February 2019 to develop “data engineering, machine learning and artificial intelligence techniques” to enable the CMA to understand and address competition questions raised by emerging technologies. The DaTA unit has developed bespoke, technology-enabled enforcement tools for the CMA including the in-house automated price monitoring software mentioned above and a recently developed tool to track merger activity in an automated way with a natural language machine-learning model.
Additionally, the CMA’s Digital Markets Unit (“DMU”), currently in shadow form and to be given statutory footing by the Digital Markets, Competition and Consumers (“DMCC”) Act next year, will have new powers to impose conduct requirements and make interventions in relation to companies designated as having “Strategic Market Status” in the digital economy (see our ‘deep dive’ alert on the digital markets provisions of the DMCC Bill here).
Broadly, competition and consumer law questions will be focussed on how AI is developed and sold, its potential use cases and how to detect and prevent AI from falling foul of the laws – particularly as AI develops increasing autonomy from human input. All of these questions and developments will need to be seen in the broader regulatory context both in the UK and globally.
Specifically from a UK perspective, in March 2023, the UK Government published its AI White Paper (“White Paper”) and this set the tone and the framework of overarching principles to guide the development of regulations and approach to AI by UK regulators (see our previous alert on the White Paper here). Shortly after this, the CMA launched its Initial Review in May 2023.
The CMA’s Initial Review of AI foundation models
Prompted by the call in the White Paper for regulators to take initiative in relation to AI regulation, the CMA launched its Initial Review on 4 May 2023. Interestingly, this is not a formal market study; rather, the CMA is exercising its general review function powers, which allow it to be more flexible and nimble in its approach and to ensure it has sufficient information to take informed decisions about its future work – including its choice of the legal tools available to it.
For context, foundation models are AI systems trained on large datasets which can be adapted to a wide range of applications including large language models and ‘generative’ AI – where generative AI can create text, images, music, speech, code or video based on learning from existing available content.
The CMA’s aim is to create an “early and shared understanding” of the market for AI foundation models, which could help shape the AI regulatory landscape. In particular, the CMA is investigating:
- how the markets for foundation models and their use could evolve;
- what opportunities and risks these scenarios could present for competition and consumer protection; and
- what competition and consumer protection principles should be applied to guide the development of these markets going forward.
The review will focus on the current functioning and potential development of the market in relation to three core ‘themes’:
- Competition and barriers to entry in the development of foundation models: including how foundation models could “disrupt or reinforce the largest firms in the market”, considering barriers to entry, economies of scale and other market characteristics that would “tend towards centralisation, consolidation and integration”.
- The impact foundation models may have on competition in other markets: the CMA considers that AI foundation models are “likely to become an input to other markets” and will examine the competition concerns that may be raised if their capabilities “become necessary to compete effectively in certain markets” but could be “restricted unduly or controlled by a few large private companies facing insufficient competitive constraint”.
- Consumer protection: the CMA will examine a range of potential risks to consumers, focusing on whether “current practices and incentives in the market are leading to accurate and safe foundation models”.
The CMA’s consultation ended on 2 June 2023, and it intends to gather evidence by drawing on existing research, issuing information requests to stakeholders and meeting with interested parties.
The CMA’s response to the Government’s White Paper
On 1 June 2023, the CMA published its response to the government’s White Paper, in which it welcomed the Government’s “context-specific” approach to AI regulation as “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”, and acknowledged the need for the Government’s central coordination functions to support the implementation, monitoring and coherent development of the framework across regulators.
The CMA considered how it will interpret the five principles in line with its role within the UK regulatory landscape and apply them to current and future CMA work. The CMA made the following observations in respect of each of the principles:
- Safety, security and robustness: The CMA noted that, in properly functioning markets, firms should “face the correct incentives to determine and implement the appropriate level of security and testing to ensure that their systems function robustly”, and the CMA may need to intervene when this incentive is missing, i.e. when “AI use affects a consumer who may not be in a position to assess technical functioning or security of the product”.
- Appropriate transparency and ‘explainability’: The CMA emphasised that “making sure that AI is appropriately transparent and explainable is well aligned with our competition and consumer protection objectives”, as well as with the ‘Trust and Transparency’ objective introduced by the DMCC Bill published on 25 April 2023 (for an overview of the DMCC Bill, see our previous alert here). In relation to competition, the CMA highlighted the risk of firms with “enduring market power” operating AI systems “that have substantial influence over other firms’ access to customers and economic success”, and noted that solutions to this could include guarantees that no self-preferencing is taking place or that provided data is being used only for certain purposes.
- Fairness: The CMA noted the “considerable overlap” of this principle with its remit. In competition law terms, this includes ensuring that firms can compete without “unfair hindrances” arising from AI systems that underpin the functioning of the market, such as self-preferencing in recommender engines. The CMA added that the principle should be applied to “the context surrounding the AI system”, not just the algorithm itself, including data collection, testing and evaluation practices.
- Accountability and governance: In addition to its existing competition and consumer law tools, the CMA noted the proposed ex ante functions in the DMCC Bill as another means of holding firms “directly” accountable for the effects of AI systems which they deploy. The CMA also noted the novel challenges that may arise in relation to accountability for AI systems which learn to tacitly collude “without any explicit coordination, information sharing or intention by human operators”.
- Contestability and redress: The CMA considered that the “opacity of algorithmic systems and the lack of operational transparency” make it hard for customers to “discipline” firms, and stressed the importance for regulators to effectively monitor potential harms and to have the powers to act where necessary.
The CMA emphasised its support for cross-regulatory coordination and coherence to achieve these aims, particularly considering its involvement in the Digital Regulation Cooperation Forum (“DRCF”), which was established to allow greater cooperation between digital regulators (specifically the CMA, Information Commissioner’s Office, Ofcom and the Financial Conduct Authority) on online regulatory matters. However, the CMA noted the importance of keeping “additional layers in the existing regulatory landscape” to a minimum to maintain regulatory efficiency, and encouraged the use of existing initiatives such as the DRCF.
The CMA intends to publish a report on the outcome of its Initial Review into AI foundation models in early September 2023, which may be followed by more in-depth reviews into specific issues if the CMA considers these appropriate. Further down the line, and most likely before March 2024, the CMA – alongside and possibly in collaboration with other regulators – can be expected to begin publishing specific guidance on AI, in line with the roadmap set out in the White Paper.
While these initial documents give an indication of the direction of the CMA’s thinking towards regulating AI, and the main issues which it anticipates, we will undoubtedly see a continued evolution of the UK regulatory approach to AI – both as a result of ongoing technological shifts and international developments in the AI regulatory landscape.
Particular international developments which may influence the UK’s AI regulatory environment include the Atlantic Declaration, a US-UK economic partnership announced on 8 June 2023 which involves accelerated cooperation on AI, and the UK’s proposed global summit on AI set to take place this autumn (see our previous alert on the announcement of the summit here). Such international collaboration is likely to shape the approach of the UK Government to AI and, by extension, the role of the CMA in the UK’s emerging AI regulatory regime and how the use of AI will fit within the UK’s competition law framework.
Authored by Christopher Peacock, Angus Coulter, and Eleanor Winn.