These principles are intended to be introduced initially on a non-statutory basis and to apply across all UK industries. They will then be supplemented by ‘context-specific’ regulatory guidance and voluntary standards, which are expected to be developed by UK regulators such as the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC).
The proposal represents a potentially considerable divergence from the EU’s draft AI Act, announced last year, which seeks to introduce a more prescriptive and standardised approach to AI regulation across industries. By comparison, the UK appears to be moving towards a lighter-touch, risk-based approach grounded in proportionality, with the practical requirements that organisations will be expected to implement determined by the industry and context in which the AI system is deployed.
Scope of application
Two of the most controversial elements of the EU AI Act have been the definition of what constitutes an AI system that falls within its scope and how the regulation’s requirements are divided between different actors such as developers, distributors and users.
The UK proposal seeks to develop a more flexible definition of AI, based on two key characteristics that such technologies exhibit: the ‘adaptiveness’ of the technology to new environments and situations (i.e., the software is non-deterministic) and its ‘autonomy’ in being able to make determinations and decisions once developed.
Equally, while the EU AI Act prescribes which obligations will apply to different parties, the UK government is proposing a more context-specific approach, under which regulatory standards will apply to the actor in the AI lifecycle that creates the relevant risks. However, further clarity from the government or regulators is likely to be needed in this area to give organisations sufficient legal certainty.
The six core principles
The six core principles put forward in the paper are intended to address a number of key challenges identified by the government, including a lack of clarity on how existing UK laws apply in this field and the failure of those laws to adequately address the specific risks associated with AI. The principles are:
- Product safety – proportionate measures need to be taken to address the risk of AI systems compromising the physical safety of individuals, for example in the critical infrastructure and healthcare sectors.
- Technical security and reliability – to ensure that consumers and the public have confidence in the proper functioning of AI systems, those systems must perform reliably under normal conditions and be resilient to security threats, and this performance should be tested and proven. Additionally, data used in training and deployment should be relevant, high quality, representative and contextualised.
- Transparency and explainability – AI systems need to be appropriately transparent and explainable so that their outcomes can be understood. This may involve providing information on the nature and purpose of the AI system, the data used to train it, and the logic and processes used in its development.
- Fairness – where there is the potential for a high impact on individuals, the outcomes of AI systems should be justifiable and not arbitrary. Regulators will be expected to define what fairness means in the context of their sector or domain.
- Legal responsibility – accountability and liability for the outcomes produced by AI systems must always rest with an identified or identifiable legal person.
- Rights of redress and contestability – where individual rights have been affected, organisations should ensure that decisions made by AI can be contested where proportionate and contextually appropriate.
For each of these six principles, regulators will be expected to consider its relevance to the sector or domain in which they operate and to introduce proportionate guidance and standards that support practical implementation and operationalisation by organisations.
Next steps
The publication of the proposals gives organisations some clarity on the key areas on which they should focus, particularly when implementing governance measures to address the risks associated with the AI technologies they are developing and deploying. Further detail on the six core principles and the broader regulatory framework for AI is intended to be published in a UK government white paper later this year.
In the meantime, the current draft proposals are subject to public consultation by the Office for Artificial Intelligence, which will remain open until 26 September 2022.
Authored by Dan Whitehead.