New presumption of causal link relating to AI systems

With the aim of bringing product liability rules into the digital age, the European Commission proposes new rules to address liability claims related to AI systems. Due to the technical features of such systems (among others, opacity, autonomous behaviour and complexity), it may be excessively difficult for injured persons to meet their burden of proof and obtain compensation for damage allegedly caused by AI systems. The proposed new directive therefore introduces specific tools which aim to make it easier for claimants to substantiate claims for damage caused by interaction with AI systems.

The new Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (“AILD Proposal”), adopted by the European Commission (“EC”) on 28 September 2022, aims to ease the burden of proof for those seeking compensation for damage caused by AI systems through a disclosure-of-evidence mechanism and rebuttable presumptions.

Rebuttable presumption of causal link in case of fault

In particular, Article 4 of the AILD Proposal provides for a rebuttable[1] presumption of a causal link between the fault of the defendant, consisting in non-compliance with a duty of care under EU or national law, and the output produced by the AI system, or the failure of the AI system to produce an output, that caused the damage. This presumption applies differently depending on the specific nature of the AI system concerned as well as on the nature of the provider/user of the AI system.

General rule (for non-high-risk AI systems)

As a general rule, the presumption of causal link applies as long as all the following conditions are met:

  • The claimant has demonstrated the fault of the defendant. Such fault could also be presumed by the court on the basis of non-compliance with a court order for disclosure or preservation of evidence under Article 3, paragraph 5 of the AILD Proposal; in this regard, please see our related article dealing with the new disclosure obligations relating to high-risk AI systems;

  • It can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output;

  • The claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.

The presumption of causal link shall only apply where the national court considers it excessively difficult for the claimant to prove the existence of the causal link.

This means that it will be up to the member states and to national courts to define and limit the scope of such presumption by interpreting these indefinite legal terms, which not only might entail a divergence in the application of the law but also might run counter to the intended intra-European legal certainty.

High-risk AI systems

In consideration of the peculiarities of high-risk AI systems (as defined in the proposed Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence - Artificial Intelligence Act, “AI Act Proposal”, i.e. AI systems used in critical infrastructures and certain essential private and public services, as well as safety components of products), the AILD Proposal provides for a different set of rules, introducing a further distinction between a) claims raised against the provider of a high-risk AI system and b) claims raised against users[2] of the AI system.

Claim against the provider of a high-risk AI system

With reference to the provider of a high-risk AI system, the above presumption of causal link applies as long as the claimant demonstrates that the provider of the AI system has failed to comply with any of the following specific requirements:

  • The AI system makes use of techniques involving the training of models with data and was not developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in the AI Act Proposal;

  • The AI system was not designed and developed in a way that meets the transparency requirements laid down by the AI Act Proposal;

  • The AI system was not designed and developed in a way that allows effective oversight by natural persons;

  • The AI system was not designed and developed so as to achieve, in light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity; or

  • The necessary corrective actions were not immediately taken to bring the AI system into conformity with the obligations laid down in the AI Act Proposal, or to withdraw or recall the AI system, as appropriate.

Claim against the user of a high-risk AI system

In the case of a claim against the user of a high-risk AI system, the fault of the defendant shall be presumed as long as the claimant proves that the user:

  • Did not comply with its obligations to use or monitor the AI system in accordance with the accompanying instructions for use or, where appropriate, to suspend or interrupt its use; or
  • Exposed the AI system to input data under its control which is not relevant in view of the AI system’s intended purpose.

In any case, concerning high-risk AI systems the presumption of causal link referred to above shall not apply where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the existence of the causal link.

This – again – means that it will be up to the member states and to national courts to define and limit the scope of such presumption by interpreting these indefinite legal terms, which not only might entail a divergence in the application of the law but also might run counter to the intended intra-European legal certainty.

Claim against non-professional user

In the case of a claim for damages against a defendant who used the AI system in the course of a personal, non-professional activity, the presumption of causal link referred to above shall apply only where the defendant materially interfered with the conditions of operation of the AI system, or where the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.

Once more, given the use of such indefinite legal terms, it will be up to the member states and to national courts to define and limit the scope of such presumption.

Next steps

The AILD Proposal relies heavily on references to the AI Act Proposal. This implies that its eventual applicability cannot yet be finally evaluated, as the AI Act Proposal continues to be revised at the level of the Council and the European Parliament and will likely face a prolonged trilogue procedure. More importantly, once the ordinary legislative procedure is concluded, member states will have to implement the final AILD at national level, deciding how to devise national legislation to reach the goals set forth by the AILD Proposal. In this respect, considering the broad definitions used by the current AILD Proposal, one might expect the rules on "rebuttable presumptions" to be implemented, or at least interpreted, in different ways across the member states.

Authored by Christian Di Mauro, Paolo Lani, and Nicole Saurin.

References

[1] The defendant thus has the right to provide evidence and demonstrate that its fault could not have caused the damage.

[2] I.e. “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”.
