Consistent with the Commission’s White Paper from 2020, which formed the basis for these proposals, the draft regulation notably does not apply blanket rules to all forms of AI. Instead, the Commission has focused its attention on three primary categories of systems which it considers to fall within scope. In summary, these categories are AI practices which are expressly prohibited, systems that are considered ‘high-risk’ and, finally, other forms of AI which are intended to interact with humans.
Given the common complexities in AI supply chains, the Commission acknowledges that there are multiple actors who will need to be subject to the regulation’s requirements. These actors include both the developers of AI (‘providers’) and the organisations which procure and make use of these systems (‘users’). Equally, due to the extra-territorial scope of the regulation, both providers and users may be required to comply even where they are established in a third country. Other actors, such as importers and distributors of AI systems, are also subject to limited obligations in certain circumstances.
For organisations that are found to be infringing particular provisions of the regulation, GDPR-level fines are being proposed of up to 4% of annual global turnover.
Prohibited AI practices
The Commission has set out a list of practices which it deems to pose such a risk to individuals that they are strictly prohibited. These practices are particularly focused on the digital environment and, given their broad definitions, have the potential to impact a wide variety of technology companies and other organisations that operate online. They include:
- Manipulative and exploitative practices, where AI systems are used to manipulate human behaviour through choice architectures or other elements of a user interface, or to exploit information known about an individual in order to target their vulnerabilities. In each case, these practices will be considered in scope where they cause a person to behave, or take a decision, to their detriment.
- Indiscriminate surveillance through the use of AI systems, where the surveillance is undertaken in a generalised manner across a population. This may include the monitoring or tracking of individuals in a physical or virtual environment where it is performed on a large scale.
- Social scoring of individuals, where the scoring consists of the large-scale evaluation or classification of individuals’ trustworthiness based on their behaviour, leading to particular forms of detrimental treatment.
High-risk AI systems
The core component of the regulation focuses on high-risk AI systems. These systems are not prohibited, but their classification results in the application of specific obligations on providers and users.
A provisional list of high-risk AI systems is provided within the draft. Amongst others, this includes systems intended to be used for remote biometric identification in public spaces, software used to determine access to key aspects of society (including jobs, education and credit), and safety components of essential public infrastructure networks. The regulation includes an additional mechanism which allows the Commission to update this list from time to time, in order to reflect changes in technology.
Where an AI system is deemed high-risk, then providers will have an extensive range of obligations. These include requirements relating to the quality of training and testing data, documentation and record-keeping, transparency, human oversight, product safety, accuracy of outputs and security, alongside the need to register each AI system on a Commission-managed database.
Overlaying these requirements is a general obligation for providers to put in place a quality management system. This can be interpreted as a wide-ranging governance framework which ensures adherence to the regulation, including the development of a compliance strategy, appropriate controls and techniques to manage high-risk AI systems, and the determination of responsibilities amongst the provider’s management team.
While users have fewer obligations, they are still expected to use AI systems in accordance with the instructions disclosed by the provider, in order to address any residual risks. Equally, they must undertake data protection impact assessments where personal data is involved and perform ongoing monitoring of the performance of AI systems.
Where AI systems are not deemed to be high-risk, but are still intended to interact with individuals, additional requirements relating to algorithmic transparency will apply.
This will always include the need for the provider to design its software in such a way that affected individuals are notified that they are interacting with an AI system. In addition, users will need to disclose other information where the systems utilise either emotion recognition or manipulated content (e.g. ‘deep-fakes’).
Supervision and sanctions
In order to supervise the new regulation, the Commission intends to establish a European Artificial Intelligence Board, which will have the remit of ensuring the consistent application of the regulation across EU Member States and working alongside other bodies such as the EDPB.
Each Member State will also be expected to appoint one or more competent authorities to supervise compliance at a national level. These authorities will have the power to issue fines and other forms of penalty, which will generally be determined at Member State level. However, where the infringement relates to the undertaking of prohibited AI practices or the supply of incorrect or false information to notified bodies, the regulation states that fines of up to €20m or 4% of global turnover can be applied.
The formal announcement of the proposals by the Commission is expected to take place on Wednesday 21 April. Following that, it is likely that the draft regulation will be subject to a period of public consultation and trilogue negotiations with the EU Council and EU Parliament.
Our series on AI regulation
This article is part two of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. Part one can be accessed here.
Authored by Dan Whitehead.