The announcement comes only months after the European Union published its own bold and comprehensive proposals for an AI Regulation and in the same month that DCMS also outlined its suggested post-Brexit reforms of UK data protection law.
There are three ‘core pillars’ to the National AI Strategy. The paper sets out how the UK government intends to invest in the long-term needs of the AI ecosystem, how it can ensure that AI technology benefits all sectors and regions, and what steps can be taken to ensure effective AI governance.
AI governance proposals
The strategy’s section on AI governance sets out the objective of establishing a governance framework that addresses the unique challenges and opportunities of AI, while also emphasising the need to be sufficiently flexible and proportionate. This will be achieved by taking a number of steps:
- Publication of a white paper in early 2022, which will set out the key risks identified in connection with the current and future use of AI, alongside detailed proposals for regulating AI at a national level.
- Developing an ecosystem of AI assurance tools and services, through work with the UK Centre for Data Ethics and Innovation, which will help organisations demonstrate that their systems operate in a safe, fair, and trustworthy manner.
- Growing the UK’s contribution to the development of global AI technical standards, in order to support the creation of compliant solutions and address issues such as algorithmic bias and transparency. This could include the piloting of an AI standards hub to expand the UK’s engagement and thought leadership at an international level.
- Working alongside the Alan Turing Institute to build the capacity of UK regulators to use and assess AI technologies, so they can effectively supervise the compliance of new products and services when they come to market.
This more interventionist approach to regulation, and the development of technical standards, contrasts with the government’s previous sector-led strategy, which placed the emphasis on particular regulators such as the FCA, CMA, and ICO to determine the relevant rules and guidelines that should apply in their domains. However, concerns about a lack of consistency, the overlapping nature of regulatory mandates, and the move towards developing global cross-sector standards in other jurisdictions have resulted in a change in philosophy.
What remains unclear from the strategy is the precise nature of the obligations that may be imposed on organisations that develop and use AI. Nonetheless, we can deduce from the government’s pro-innovation plan for Digital Regulation, and its recent proposals on reforming data protection law, that any future legislation will likely be principle-based, seek to avoid “box-ticking exercises,” and place the emphasis on companies to determine how they seek to comply in practice. This position has also been supported by DCMS Minister Chris Philp, who has stated that the government intends to keep “regulatory intervention to a minimum” by using “existing regulatory structures” where possible.
These factors all indicate that there is the potential for significant divergence from the approach that is being taken in the EU.
While the government has not opened an official consultation on the proposals, the National AI Strategy provides a significant opportunity for organisations and individuals to engage with DCMS and seek to influence the UK’s future AI governance framework.
Our series on AI regulation
This article is part 5 of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. We will take a detailed look at key issues including algorithmic bias, privacy, consumer harms, explainability, and cybersecurity. We will also explore the specific impacts in industries such as financial services and healthcare, with consideration given to how existing policy proposals may shape the future use of AI technologies.
Authored by Dan Whitehead.