New disclosure obligations relating to high-risk AI systems

In its proposal for a Directive adapting civil liability rules to artificial intelligence, the European Commission requires defendants to disclose evidence under specific circumstances. The proposal aims to provide persons seeking compensation for damage allegedly caused by high-risk AI systems with effective means to identify potentially liable persons and relevant evidence for a claim. In particular, the Commission grants claimants the right to request the disclosure of evidence both before and in the course of court proceedings. Failure to comply with an order to disclose evidence leads to a presumption of non-compliance with “a relevant duty of care” that the requested evidence was intended to prove, leaving it to the defendant to rebut this presumption.

One of the main issues addressed by the rules in the Proposal for a Directive of the Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive, “AILD Proposal”) is proportionality, described as a careful balance between the interests of industry and consumers.

According to the Commission staff working document “Executive Summary of the impact assessment report” dated 28 September 2022, consultation with the main stakeholders showed that the major difficulty with damage claims relating to AI is the burden of proof.

The Questions & Answers document on the AILD furthermore notes that “Systems can oftentimes be complex, opaque and autonomous, making it excessively difficult, if not impossible, for the victim to meet this burden of proof.”

Thus, the AILD Proposal establishes rules to be incorporated into Member States’ national fault-based liability systems, aiming to allocate the burden of proof between the potentially harmed person claiming any type of damage covered by national law (the Questions & Answers document on the AILD names life, health, property and privacy as examples) and AI manufacturers. The AILD Proposal also provides for rules applicable to the preservation and disclosure of evidence in cases involving high-risk AI systems (as defined in the proposed Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act, “AI Act Proposal”), i.e. AI systems used in critical infrastructures and certain essential private and public services, as well as safety components of products).

The AILD Proposal is intended to ease the burden of proof in a targeted and proportionate manner through the use of disclosure. It establishes, for those seeking compensation for damages, the possibility to obtain information on high-risk AI systems that must be recorded or documented pursuant to the AI Act Proposal.

The disclosure mechanism

The AILD Proposal aims to provide persons seeking compensation for damage caused by high-risk AI systems with effective means to identify (i) potentially liable persons involved in the design, development, deployment and operation of high-risk AI systems and (ii) relevant evidence for a claim. The Commission staff working document expresses concerns that, without such regulation, consumers might be less protected vis-à-vis AI systems compared to traditional products. The idea is to increase the level of trust in AI and promote its uptake.

In this respect the AILD Proposal provides that Member State courts must be able to compel the defendant (or other specified third parties) to disclose relevant evidence that is at their disposal.

The AILD Proposal limits the scope of disclosure to cases involving specific high-risk AI systems that are suspected of having caused damage.

According to the AILD Proposal, a request for the disclosure of evidence can be made either by the claimant in the course of litigation or by a potential claimant before submitting a claim for damages. That said, before applying to the court for a disclosure order, potential claimants should first request the disclosure of the documents from one of the following stakeholders:

  • provider of an AI system,
  • product manufacturer, or
  • distributor, importer, user or other third party who:
    • placed on the market or put into service a high-risk AI system under their name or trademark;
    • modified the intended purpose of a high-risk AI system already placed on the market or put into service; or
    • made a substantial modification to the high-risk AI system.

Furthermore, the AILD Proposal foresees that, to ensure a proportionate application of a disclosure measure towards third parties in claims for damages, national courts should order disclosure from third parties in the course of pending litigation only if the evidence cannot be obtained from the defendant. Requests cannot be addressed to parties other than those listed above, nor to parties who have no access to the evidence.

To ensure proportionality in disclosing evidence (i.e. to limit the disclosure to the necessary minimum and prevent blanket requests), the requests should be supported by facts and evidence sufficient to establish the plausibility of the contemplated claim for damages and the requested evidence should be at the addressees’ disposal.

Moreover, the court may order such disclosure only if the claimant has undertaken all proportionate attempts at gathering the relevant evidence from the defendant and only to the extent necessary to sustain the claim, given that the information could be critical evidence to the alleged injured person’s claim in the case of damages that involve high-risk AI systems.

In determining whether an order for the disclosure of evidence is proportionate, national courts shall consider the legitimate interests of all parties, including third parties concerned, in particular in relation to the protection of trade secrets and of confidential information, such as information related to public or national security.

The limitation of the disclosure of evidence to high-risk AI systems is consistent with the requirements of the AI Act Proposal, which provides for specific documentation, record-keeping and information obligations for operators involved in the design and deployment of high-risk AI systems. Only evidence “at disposal” can be requested, meaning that stakeholders are not expected to create specific documents in order to meet an evidence disclosure request.

Operators of AI systems posing lower or no risk would not be expected to document information to a level similar to that required for high-risk AI systems, thus they will not be obliged to disclose documents under the AILD.

In order for these judicial means to be effective, the AILD Proposal provides that a court may also order the preservation of evidence.

Failure to comply with an order to disclose or preserve evidence leads to a presumption of non-compliance

According to the AILD Proposal, where a defendant fails to comply with an order to disclose or to preserve evidence, national courts shall presume non-compliance with the corresponding duty of care, i.e. presume the breach that the disclosure request was supposed to evidence. As stated in Recital 16 of the AILD Proposal, this shall provide a strong incentive to comply with the relevant requirements, e.g. the obligation under the AI Act Proposal to document or record certain information.

The defendant would however be able to rebut that presumption by demonstrating “that these facts did in reality not occur or that other facts, for which the liable party is not responsible, occurred” (see Impact Assessment Report, p. 33). The refusal to disclose requested evidence to the (potential) claimant prior to the request to the court shall not trigger a presumption of non-compliance (Recital 17 of the AILD Proposal).

Confidentiality of trade secrets

According to the AILD Proposal, with respect to trade secrets, national courts should be empowered, either upon a “reasoned request of a party” or “on their own initiative”, to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings. Such measures should include, at a minimum, restricting access to documents containing trade secrets or alleged trade secrets, restricting access to hearings to a limited number of people, or allowing access only to redacted documents or transcripts of hearings.

When deciding on such measures, national courts should take into account:

  • the need to ensure the right to an effective remedy and to a fair trial;
  • the legitimate interests of the parties and, where appropriate, of third parties; and
  • any potential harm for either of the parties, and, where appropriate, for third parties, resulting from the granting or rejection of such measures.

Outlook and next steps

The AILD Proposal is far from final and is at an early stage of the legislative process. Both the European Parliament and the Council will review the AILD Proposal and suggest amendments. It is not possible to predict when the trilogue process will conclude, although on average such a process takes around two years. It is nevertheless possible that the AILD will be enacted before the end of the term of the current European Parliament in 2024.

Hogan Lovells is actively monitoring developments in this space - keep an eye out for our future updates. For previous articles by Hogan Lovells’ European Product Liability team, please head to: European Commission proposes new ground-breaking rules on product liability and AI civil liability to protect consumers - Hogan Lovells Engage.

 

 

Authored by Ina Brock, Christelle Coslin, Nicole Saurin, and Aleksandra Połatyńska.

 

This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.
