Hogan Lovells responds to the European Commission’s consultation on the AI Regulation

On 6 August, Hogan Lovells submitted its response to the European Commission’s public consultation on its proposed AI Regulation (Draft Regulation). Our view is that the Draft Regulation is an ambitious and comprehensive framework, and we welcome the risk-based approach that has been adopted. Nonetheless, there are a number of areas where further clarity is required, as we have outlined below.

Definition of ‘AI system’

The proposed definition of what constitutes an ‘AI system’ appears to be unnecessarily broad.

Many of the stated concerns associated with AI relate to cases where the technology is designed to behave autonomously, for example where a system does not perform as expected, produces discriminatory decisions or generates inaccurate outputs, failures that may go undetected due to a lack of interpretability.

However, the proposed definition does not currently contemplate situations in which AI systems are at least partially operated and overseen by humans in a live environment, meaning they are only semi-autonomous. Where this degree of human involvement exists, many of the risks outlined above can be suitably mitigated. We therefore suggest that the Commission consider narrowing the definition of an AI system so that it takes into account the degree of autonomy that the system exercises, for example when determining whether it falls within the scope of the Draft Regulation and should be considered ‘high-risk’.

Territorial scope

Under Article 2 of the Draft Regulation, the territorial scope is defined to include providers who place on the market or put into service AI systems in the EU, as well as users of AI systems that are located within the EU. In each of these instances, the legal position appears relatively straightforward to both understand and apply.

However, part (c) of Article 2 is less clear. Its provision that providers and users of AI systems located in a third country remain subject to the Draft Regulation where the ‘output’ produced by the system is used in the EU could be interpreted in a variety of ways. For example, it could mean that providers who do not operate within the EU, and do not seek to market or sell their AI systems within the single market, are still caught if a user independently decides to make use of the output in the EU without their knowledge.

This ambiguity could be at least partially addressed by being more specific about the circumstances in which providers and users are deemed to be subject to the Draft Regulation as a result of the ‘output’ being used in the EU. Further clarification of this provision would ensure that organisations better understand when they are expected to develop or use AI systems in accordance with the Draft Regulation’s requirements.

Harmonisation of cross-border supervision

We note that the Draft Regulation does not currently provide a framework for the harmonised supervision and enforcement of its provisions where organisations subject to the law operate across multiple Member States. Instead, the current proposal appears to envisage a more fragmented approach, involving the possibility of multiple national competent authorities in each Member State, with no single point of contact. This appears to be inconsistent with the stated objective of the Draft Regulation, which is to create a harmonised approach to the regulation of AI in the EU.

A well-designed lead supervisory authority framework offers the benefits of providing organisations with greater certainty, ensuring harmonised enforcement and avoiding duplication of effort by regulators. The Commission should therefore consider introducing a one-stop-shop mechanism for cross-border activities, while also enhancing the existing cooperation and consistency measures, for example by strengthening the role of the European Artificial Intelligence Board. We suggest that the approach taken by the Commission in drafting the EU GDPR offers a useful basis for developing a similar framework here.

General-purpose AI systems

It is increasingly common for organisations (Third Party Developers) to develop and market AI tools that can then be further configured by their customers for a wide range of use cases. This means that, while the relevant AI tool may not be intended for any purpose outlined as ‘high-risk’ under Annexes II and III of the Draft Regulation, it may subsequently be used for such a purpose by the customer.

Recital 60 acknowledges the role of Third Party Developers involved in the “artificial intelligence value chain” and indicates that they should “cooperate, as appropriate” with providers and users to enable their compliance. However, this is not an explicit obligation within the Draft Regulation, meaning it is currently unclear whether Third Party Developers are intended to be subject to the law and, if they are, what specific steps they are required to take to support providers and users in practice. We therefore suggest that, taking into account the primary role of the developer of the high-risk AI system, the specific obligations that apply to Third Party Developers be clearly set out in order to ensure legal certainty.

User obligations

In accordance with Article 29(1) of the Draft Regulation, the primary obligation on users of high-risk AI systems is to use such systems in accordance with the instructions of use supplied by the provider.

By delegating to the provider the task of specifying the precise obligations with which users must comply, the Commission is potentially creating a significant degree of uncertainty for users. This is particularly so given that the content and quality of the instructions made available to users are likely to vary significantly depending on the nature and sophistication of the provider. It is also likely to create difficulties for providers, who will be forced to consider what restrictions and requirements to impose on their customers’ use of their products.

Taking these considerations into account, we believe it will be easier for all parties to understand and comply with the regulatory expectations placed upon them if all of the obligations that apply to users are explicitly set out within the Draft Regulation.

In addition, the Commission should consider which provisions of the Draft Regulation are intended to apply to organisations that both use and develop high-risk AI systems in-house. Where this scenario arises, it is currently unclear whether the organisation would be considered a user, a provider, or both.

Data governance standards

Article 10(3) of the Draft Regulation states that providers are expected to use training, validation and testing data sets that are “relevant, representative, free of errors and complete”.

While we acknowledge the importance of providers implementing appropriate data governance measures in the development of high-risk AI systems, the current standard appears to be excessively onerous and impractical to comply with.

We suggest that the standard of data governance that providers are expected to adhere to should be determined by reference to the primary objectives of this obligation: namely, that providers should take proportionate steps to ensure that training, validation and testing data sets are created in such a manner that AI systems are (i) developed to an appropriate degree of accuracy; (ii) designed to mitigate the risks of algorithmic bias; and (iii) sufficiently representative of the population or environment that the AI system is intended to model.

Interrelationship with the GDPR

As outlined in the joint opinion of the European Data Protection Board and the European Data Protection Supervisor, there is currently a lack of clarity on the relationship between the Draft Regulation and the EU’s data protection framework, particularly the GDPR.

High-risk AI systems will often rely heavily on large volumes of personal data in their training, testing and use. We ask the Commission in particular to consider how the concepts of controller and processor under the GDPR are intended to align with the roles of user and provider, and to ensure that the obligations and expectations under these two regulatory frameworks are consistent and do not conflict.

Serious incident reporting

Article 62 of the Draft Regulation imposes an obligation on providers to report any serious incidents to the market surveillance authorities of the Member States where the incident occurred.

In addition, the same Article states that any malfunctioning of high-risk AI systems “which constitutes a breach of obligations under Union law intended to protect fundamental rights” must also be reported. This second limb of the requirement appears to be excessively broad and could, for instance, result in providers being required to self-report even minor infractions of data protection and anti-discrimination laws, something that would be unlikely to be required in other contexts.

We instead suggest that malfunctioning events should only have to be reported where they are likely to result in a high risk to the safety or fundamental rights of individuals.

We hope that this contribution is helpful in highlighting possible areas for further legislative work in order to devise a framework that is proportionate, practical and effective, and look forward to the next stages of the process.

Authored by Dan Whitehead.

