Artificial Intelligence in health care

Health care is a promising playground for AI. What is the status quo for AI medical devices in terms of regulation and product liability, and where is the journey heading?

Health care is a promising playing field for artificial intelligence (AI), and the implementation of AI is widely seen as having great potential to advance health care. One of the greatest promises of AI is its ability to learn from real-world use and experience, to adapt and to improve its performance. However, the more independently an AI acts and the less influence humans have on its decision-making, the more difficult things become in terms of regulatory and liability law. Apart from ethical and technical issues (see, e.g., AI in health and medicine, Nature Medicine), continually learning and adaptive medical device software that can make autonomous decisions raises legal concerns. In this article, we briefly outline the status quo for medical device AI in terms of regulation and product liability, and where the journey might be heading.

What are the regulatory obligations in Germany and the EU?

Due to their capability to generate insights from vast amounts of data, machine learning and AI technologies are predestined to play a supporting role in medicine. On the other hand, the ability of an AI system to further develop, optimise and adapt itself is one of the biggest challenges for smart medical devices with regard to regulation and liability law, because such "learning" is currently not provided for in medical device legislation.

Although still new, the Medical Devices Regulation (EU) 2017/745 ("MDR") does not contain any special provisions on self-learning AI software. Pursuant to art. 2(1) MDR, software generally qualifies as a medical device if it is intended by the (legal) manufacturer to be used for a medical purpose. The Medical Device Coordination Group (MDCG) at the European Commission has issued guidance on the qualification of software as a medical device (MDCG 2019-11) and general guidance on the classification of medical devices (MDCG 2021-24). According to these guidance documents, software only falls within the scope of the MDR if its functions go beyond the mere storage, communication and transfer of data. The software must process, analyse, create or modify information that serves a medical purpose for the benefit of an individual patient.

What are the requirements of the MDR for AI software?

As there are currently no special provisions on self-learning AI software, the general provisions governing medical devices apply. Consequently, a CE marking under the MDR is required in order to place a medical device on the market. However, two worlds collide when self-learning, dynamic AI meets the requirements for medical device manufacturing: according to Annex I, section 17.1 MDR, software must be designed to ensure repeatability, reliability and performance in line with its intended use. For "locked" algorithms this is not a problem, as they provide the same result each time the same input is applied to them. Continuously learning and adaptive algorithms, however, especially software based on a "black box" model, are by definition not supposed to deliver repeatability. The particular benefit of AI for the health of patients, both individually and in general, is precisely its ability to learn from new data, adapt, improve its performance and generate different results.
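This tension can be made concrete in a few lines of code. The following is a minimal, purely illustrative sketch (all class and variable names are our own assumptions, not taken from any regulation or product): a "locked" model returns an identical output for an identical input, while an adaptive model that keeps learning in the field can return a different result for the very same input after an update.

```python
# Illustrative sketch of the repeatability problem under Annex I,
# section 17.1 MDR. A "locked" model is deterministic; an adaptive model
# updates its parameters with each new observation, so the same input can
# yield different outputs over time. All names here are hypothetical.

class LockedModel:
    """Parameters are frozen after certification; same input -> same output."""
    def __init__(self, weight: float, bias: float):
        self.weight = weight
        self.bias = bias

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias


class AdaptiveModel(LockedModel):
    """Continues to learn in the field via a simple gradient step."""
    def update(self, x: float, observed: float, lr: float = 0.01) -> None:
        error = self.predict(x) - observed
        self.weight -= lr * error * x
        self.bias -= lr * error


locked = LockedModel(weight=0.8, bias=0.1)
adaptive = AdaptiveModel(weight=0.8, bias=0.1)

assert locked.predict(2.0) == locked.predict(2.0)  # repeatable by design

before = adaptive.predict(2.0)
adaptive.update(x=2.0, observed=2.5)               # learns from new data
after = adaptive.predict(2.0)
print(before != after)  # True: identical input, different result after update
```

It is precisely this last line, the same input producing a different result after deployment, that the repeatability requirement was not written for.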

What are the current regulatory developments?

The European Commission recognised the need to initiate a general regulatory framework for AI across all sectors and therefore published the proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act, "AIA") in April 2021, on which the Council of the European Union commented at the end of November 2021. According to a recent study mandated by the European Commission, there is also a need for specific regulations tailored to the health care sector (with a focus on six specific categories). Only isolated national strategies of the EU member states (cf. the country factsheets of the report) exist to date. However, the general AIA would also have a regulatory effect on medical devices. AI applications in medical devices within the meaning of the MDR, or those which constitute a medical device themselves, would be considered "high-risk AI systems" under art. 6(1) of the draft AIA in conjunction with Annex II, Section A nos. 11 and 12 of the draft AIA.

Pursuant to art. 43(3) of the draft AIA, conformity assessments for AI medical devices would continue to be conducted in accordance with the MDR, while certain provisions of the AIA would apply in addition. A notified body under the MDR would be permitted to participate in the conformity assessment if that body is also notified under the AIA. Whether a new conformity assessment has to be performed would depend on whether a certain change to an algorithm and its performance has been pre-determined by the provider, assessed at the moment of the initial conformity assessment and included in the information contained in the technical documentation. If a change was "pre-determined" in this way, it would not constitute a substantial modification and would not require a new conformity assessment (art. 43(4) draft AIA), whereas any substantial change that was not "pre-determined" would require one. But what exactly does "pre-determined" mean, and how specifically does it need to be described in the technical documentation?
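To give the open question some contour, the following sketch shows one conceivable way a manufacturer might encode a "pre-determined" change envelope from its technical documentation and check a model update against it. The draft AIA does not prescribe any such format; every field, threshold and update type below is a hypothetical assumption for illustration only.

```python
# Hypothetical sketch of a "pre-determined" change envelope. The draft AIA
# prescribes no such format; all fields and thresholds are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Boundaries fixed at the initial conformity assessment."""
    min_sensitivity: float           # performance must not drop below this
    min_specificity: float
    allowed_update_types: frozenset  # e.g. {"retrain_same_architecture"}

def is_predetermined(envelope: ChangeEnvelope,
                     update_type: str,
                     sensitivity: float,
                     specificity: float) -> bool:
    """True if the update stays within the documented envelope, i.e. would
    arguably not be a 'substantial modification' under art. 43(4) draft AIA."""
    return (update_type in envelope.allowed_update_types
            and sensitivity >= envelope.min_sensitivity
            and specificity >= envelope.min_specificity)

envelope = ChangeEnvelope(
    min_sensitivity=0.92,
    min_specificity=0.90,
    allowed_update_types=frozenset({"retrain_same_architecture"}),
)

# A retraining run on new data, same architecture, performance within bounds:
print(is_predetermined(envelope, "retrain_same_architecture", 0.94, 0.91))  # True
# A change of model architecture falls outside the documented envelope:
print(is_predetermined(envelope, "new_architecture", 0.96, 0.95))           # False
```

How granular such an envelope would have to be, and whether simple performance bounds like these would satisfy a notified body, is exactly what the draft leaves open.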

Who is liable for damage caused by artificial intelligence?

In addition to opportunities and innovations, the use of AI always harbours the risk of damage caused by machine intelligence. Such cases raise the question of who can be held liable for the damage. Is it the programmer of the intelligent software? The party that made the software available? The doctor or hospital using the medical device together with the software? Or, according to a proposal of the European Parliament, even the AI system itself in the form of an "electronic person" still to be created? The principle that liability follows responsibility reaches its limits in the area of artificial intelligence because the complexity and opaqueness of the decision-making process of self-learning algorithms make it almost impossible to identify and trace an error and its harmful source. In particular, it is not uncommon for problems to arise even in determining whether the source of the error lies in the device itself or in incorrect use or application.

Manufacturers should therefore address these issues ideally as early as the device development phase and allocate as many of these potential risks as possible by contract.

Outlook

The MDCG currently plans to develop non-binding guidance documents specifically for "AI in the context of the MDR" (p. 3). At the international level outside the EU, others have already taken things a step further: the U.S. Food and Drug Administration ("FDA"), the UK Medicines and Healthcare products Regulatory Agency and Health Canada recently published Good Machine Learning Practice for Medical Device Development: Guiding Principles, a guidance document setting out ten guiding principles intended to promote safe, effective and high-quality medical devices that use artificial intelligence and machine learning. As early as April 2019, the FDA had published the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. Eventually, regulators will have to develop certification processes that can handle adaptive AI systems and approve not only an initial AI model but also a process for adjusting that model over time.

The authors would like to thank the research assistant Moira Boennen for her active and valuable cooperation in this publication.

Authored by Arne Thiermann and Nicole Böck.
