Emerging AI issues affecting EU, UK life sciences firms

At our recent Health Care AI Law and Policy Summit, Hogan Lovells attorneys Dan Whitehead, Bonella Ramsay, Louise Crawford, and Imogen Ireland convened virtually with industry leaders to discuss how the EU and UK are addressing the legal and ethical challenges arising from the creation and adoption of artificial intelligence (AI) systems in the health care industry. Below, we summarize key takeaways from their panel discussion.

How the EU is leading the way in AI regulatory development

The panel began by noting that a year has passed since the European Commission (Commission) released its proposed EU regulatory framework on AI. The proposal, released in April 2021, represents the first cross-sector regulation of its kind, creating a comprehensive framework that will address challenging ethical issues such as bias and transparency, as well as risks arising from automated decision-making. According to panel moderator Imogen Ireland, a Senior Associate in Hogan Lovells’ Intellectual Property, Media, and Technology group, the AI legal landscape demands that we step out from the vacuum of our own sectors and work across silos.

Dan Whitehead, Counsel in the Hogan Lovells Privacy and Cybersecurity practice, noted that in recent years the EU has focused significantly on digital regulation, and that the proposed AI Act will have a profound impact on AI governance in health care. Mr. Whitehead noted that the sanctions under the proposed AI Act are even greater than those under the General Data Protection Regulation (GDPR) and could run up to €30 million or 6% of a company’s annual global turnover. Mr. Whitehead pointed out that the GDPR and other existing regulations (such as anti-discrimination and product safety laws) already indirectly address some of the key risks associated with AI, such as the risk of bias, performance inaccuracy (false positives or negatives), risks to patient safety when AI is used in a health care context, and the challenges in explaining complex technologies and their impact on real-world decisions and actions. The new EU regulatory framework on AI will go further in addressing these risks specifically in the context of AI technology.

Bonella Ramsay, Senior Counsel in the firm’s Global Regulatory practice, gave an overview of AI regulations in the context of medical devices and in vitro diagnostics (IVDs). She noted that the new EU Medical Device Regulation (EU MDR) has applied since May 2021 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) since May 2022. Both are subject to transitional arrangements, but in the context of software as a medical device, AI will automatically be classified as at least a Class IIa medical device, and possibly Class IIb or even Class III, making the conformity assessment for a CE mark more complex. However, the EU regulations do not expressly address AI as a medical device, leading to questions as to how AI products will be treated under the current regulatory framework and the proposed AI Act.

Is the UK keeping pace with the EU?

Mr. Whitehead pointed out that while the EU is leading the way, the UK also published a National AI Strategy last year which contains ambitious plans in terms of investment in, and regulation of, AI across all sectors. It remains to be seen how these plans will be implemented in practice.

Ms. Ireland noted that in the context of intellectual property, the UK is keeping pace and already looking at the ways in which intellectual property laws can and/or should meet the complexities presented by AI. In 2021, the UK Intellectual Property Office (UK IPO) opened a consultation asking, amongst other questions, whether AI-devised inventions should be patentable and, if so, how. Guidance from the UK IPO is expected to be forthcoming.

Practical next steps

Louise Crawford, Senior Associate in Hogan Lovells’ Technology practice, provided an overview of how the current liability regime, which relies upon a patchwork of tort, product liability, discrimination, privacy, and contract laws, may not be sufficient to provide appropriate remedies for those who suffer losses from AI errors or defects. Important in this analysis is the need to identify a link between fault and loss, which can be particularly difficult when multiple parties have been involved in the development and operation of a complex solution. This liability regime is under review by the European Commission, and proposals for significant changes are anticipated in the near future.

Reflecting on a 2020 EC White Paper, Ms. Crawford noted that while it is still early days, the EU is likely to take a two-pronged approach of 1) expanding on the current product liability regime to encompass digital products and 2) introducing an AI-operator specific regime that distinguishes between high-risk and low-risk systems and allocates liability accordingly. When it comes to legal reform in this area, the Commission’s priorities will be 1) harmonization across Member States; and 2) ensuring the liability framework is robust enough to foster trust in AI technology and encourage continued development in this area.

Turning to their ‘top tips’ for clients in addressing AI liability risks, the panel emphasized that having the right governance framework in place will be critical. Companies using AI technology, whether homegrown or from a third-party supplier, will need a framework of policies and protocols covering cost/benefit analyses and mechanisms for assessing AI risk factors. For companies that use vendors to provide AI technology, conducting due diligence on the vendor, the technology, and the data is also critical. Vendors should be able to explain the effectiveness of their product as well as its risks, and be able to demonstrate how those risks are mitigated. Appropriate contractual obligations should also be put in place to address liability risks associated with the technology.

The panel closed their session by advising stakeholders to pay close attention to developments in this fast-moving space: significant investment in AI technology is taking place in parallel with a changing regulatory environment. Ms. Ramsay advised life sciences firms to stay on top of regulations as they apply to each stage of the design and implementation of an AI product. Mr. Whitehead said that he advises clients to take AI governance seriously and not merely slot their AI compliance regime into another policy bucket. Mr. Whitehead also advised stakeholders to engage with regulators on policy proposals while there is still opportunity to do so: the AI Act is not in place yet, and proposals could change significantly before they become law. Ms. Ireland said that humans remain central, not only to the R&D process, but also to monitoring and developing assets such as IP.


You can view video recordings and summaries of the other panels from the Health Care AI Law and Policy Summit online.
