How to Navigate AI Adoption and the Legal Landscape within the (Re)Insurance Industry in the EU and the UK

AI and machine learning technology have started to be widely adopted by (re)insurers seeking to embrace new business opportunities. However, it is challenging for them to understand the level of risk they will be exposed to when developing their own AI solutions, partnering with external parties or providing new AI-related policies to their customers, partly because of the complex legal, regulatory and commercial landscape surrounding AI and machine learning. This article aims to provide practical guidance on how (re)insurers can navigate this uncertain territory, develop their strategic plans and manage their legal and reputational risks.

The insurance and reinsurance industry has seen a rapid increase in the use of AI and machine learning technology in recent years. A survey published by the Bank of England and the Financial Conduct Authority in October 2022 found that 72% of UK financial services firms were developing or deploying machine learning applications.

(Re)Insurers have been thinking about AI strategically. They are either increasingly embedding AI in their day-to-day operations to increase efficiency, enhance decision-making, reduce costs, gain insights from data and improve customer experience, or starting to offer new insurance products or policies that protect their clients against AI-related claims. For instance, Munich Re has recently launched aiSelf, a cover for companies that implement self-developed AI solutions, designed to protect them from potential financial losses resulting from AI underperformance.

While AI and machine learning offer new opportunities to (re)insurers and help to transform the financial services sector, it is challenging for (re)insurance companies to understand and measure the actual risk of AI adoption, particularly given the complexity of the legal landscape: evolving AI regulations, existing insurance regulations and, more challenging still, the intersection between the two.

With that in mind, this article provides guidance to (re)insurance companies on assessing the risks of AI adoption, sets out some risk mitigation measures and outlines the current legal landscape in the EU and the UK, to help companies make strategic business decisions and manage their risks.

 

Ethical Considerations

As a starting point, stakeholders within (re)insurers should evaluate and monitor the extent to which ethical considerations have been taken into account when developing their own AI solutions or outsourcing to third-party providers. By taking the following ethical factors into account, (re)insurers can contribute to the responsible deployment of AI solutions. This will help to protect the interests of customers, build trust and reduce their exposure to enforcement risks.

  • Fairness: (Re)Insurers should ensure that their AI systems are fair and do not discriminate against individuals or groups of people. This includes ensuring that AI systems are not biased against certain protected characteristics or demographics, such as race, gender, age or income level (a minimal sketch of one such fairness check follows this list).
  • Transparency: (Re)Insurers should be transparent about how their AI systems work and how they make decisions. This includes providing clear explanations of the algorithms used by the systems and the data that they are trained on. (Re)Insurers should also publish how customers can contact them to make enquiries about, or request a review of, AI decisions they consider unfair.
  • Accountability: (Re)Insurers will be held accountable for the actions or decisions of their AI systems. It should be clear who is responsible for the AI’s actions, decisions and any harm caused. They should be able to explain how the systems work and why they made certain decisions. They should also be able to take steps to mitigate the risks of bias and discrimination.
  • Privacy: (Re)Insurers should protect the privacy of individuals whose data is used to train and operate AI systems. This includes ensuring that the data is collected and used in a lawful and transparent manner.
  • Security: (Re)Insurers should ensure that their AI systems are secure and that they are not vulnerable to cyberattacks. This includes taking steps to protect the systems from unauthorised access, modification, or destruction.
  • Human oversight: (Re)Insurers should ensure that human oversight is in place to monitor and control AI systems to the extent necessary. This means that there should be people who can intervene if the systems make decisions that are unfair, biased, or harmful, and to address any errors and problems.
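To illustrate the fairness point above, the short Python sketch below compares approval rates across groups sharing a protected characteristic and flags material disparities for human review before deployment. It is purely illustrative: the group labels and sample data are invented, and the 0.8 threshold (borrowed from the familiar "four-fifths" rule of thumb) is an assumption, not a legal or regulatory standard.

```python
# Minimal sketch of a pre-deployment fairness check: compare approval rates
# across groups and flag large disparities. Illustrative assumptions only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` x the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Example: group "B" is approved half as often as group "A", so it is
# flagged for human review before the model goes live.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_flags(sample))  # {'B': 0.5}
```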

 

Risks of AI Adoption

In parallel, (re)insurers will need to assess the various risks of designing or building AI systems, or of introducing new AI-related policies to their customers, and critically evaluate whether the benefits of AI outweigh those risks in their specific context. Some of the risks include:

  • AI hallucination: AI systems may generate content that is convincingly realistic but fabricated, untrue or unsupported by their training data. The risk is that hallucination can lead to AI making incorrect or harmful decisions, and (re)insurers may be subject to litigation or enforcement action if the output provided is incorrect, misleading or deceptive.
  • Bias: If AI systems are trained with historical underwriting decisions or other data that could be biased against specific gender, age, race or other protected characteristics, AI systems can generate or lead to unfair or discriminatory decisions. This could lead to potential claims or regulatory fines.
  • Data privacy: AI systems require large amounts of data to train and operate. That data may have been collected without appropriate consent or another lawful basis for processing, which could lead to legal liability and reputational damage.
  • Cybersecurity: AI systems can be vulnerable to cyberattacks. This could potentially result in data breaches, which could carry substantial consequences for (re)insurers due to heightened regulatory scrutiny and significant reputational risks.
  • Policy language: Before introducing a new AI-related policy to their customers, (re)insurers should review whether it overlaps with existing commercial insurance policies (e.g. cyber liability insurance, IP liability insurance), which may similarly cover losses arising from developing or using AI systems. They should also evaluate the indemnity, representations and warranties and exclusion wording in AI-related policies to ensure they are not overly exposed to legal and commercial risks and can avoid unanticipated risks falling within the policy's coverage.

Considering the risks outlined above, (re)insurers are encouraged to assess the potential impact on their own or their clients' business operations when developing or adopting AI systems. The assessment should also factor in the criticality of the specific operations concerned, the sensitivity of the data involved, the financial or reputational consequences of any disruption or breach, and the value of the relevant service agreements with clients.
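By way of illustration, the sketch below turns that assessment into a simple weighted score over the factors mentioned above (operational criticality, data sensitivity and financial exposure). The factors, weights and escalation thresholds are hypothetical placeholders rather than a prescribed methodology.

```python
# Illustrative impact-assessment sketch: score an AI use case on the factors
# the text mentions and map the score to a triage outcome. All weights and
# thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    criticality: int         # 1 (peripheral) .. 5 (core operations)
    data_sensitivity: int    # 1 (public data) .. 5 (special-category data)
    financial_exposure: int  # 1 (negligible) .. 5 (material to the business)

WEIGHTS = {"criticality": 0.4, "data_sensitivity": 0.35, "financial_exposure": 0.25}

def impact_score(uc: AIUseCase) -> float:
    return (WEIGHTS["criticality"] * uc.criticality
            + WEIGHTS["data_sensitivity"] * uc.data_sensitivity
            + WEIGHTS["financial_exposure"] * uc.financial_exposure)

def triage(uc: AIUseCase) -> str:
    score = impact_score(uc)
    if score >= 4.0:
        return "escalate: senior sign-off and enhanced controls"
    if score >= 2.5:
        return "review: standard risk assessment and mitigation plan"
    return "proceed: routine monitoring"

# Example: a customer-facing use case touching sensitive data is escalated.
uc = AIUseCase("claims triage chatbot", criticality=4, data_sensitivity=5, financial_exposure=3)
print(triage(uc))  # score 4.1 -> "escalate: ..."
```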

 

AI and Insurance Legal Framework

Please refer to our article “AI regulation in financial services in the EU and the UK: Governance and risk-management” for a high-level overview of the AI legal landscape in the EU and the UK. This article, however, looks at the insurance-specific elements of the EU AI Act and the existing laws and regulations that may apply to (re)insurers developing or using AI systems.

One of the biggest challenges for (re)insurers may be understanding how emerging AI regulations such as the AI Act affect their existing or planned AI strategies, and knowing which existing insurance laws and regulations apply to them, their partners or their third-party providers in the context of AI. In particular, if they use AI solutions from third-party vendors, their exposure may depend on the third party's compliance with those laws and regulations.

The latest draft of the EU AI Act, dated June 2023, indicates, for example, that “AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance” will be high risk if “they pose a significant risk of harm to the health, safety or fundamental rights of natural persons”. If a (re)insurer's AI systems satisfy these conditions, they will be subject to more stringent regulatory requirements, such as establishing a risk management system, using high-quality data, conducting a risk assessment, ensuring human oversight and an appropriate level of accuracy, robustness, safety and cybersecurity, and conducting a conformity assessment.

However, the AI Act has implications beyond health and life insurance policies. If (re)insurers use AI systems to (i) evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used to detect financial fraud; or (ii) make inferences about the personal characteristics of natural persons on the basis of biometric or biometrics-based data, with some exceptions, the obligations for high-risk AI systems may apply to them. Additionally, (re)insurers are prohibited from putting into service or using AI systems for social scoring (defined as evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics) that is detrimental or unfair to individuals.

For AI systems that do not fall within the above categories and constitute low-risk AI systems, some limited obligations, including transparency requirements, may still apply to (re)insurers.
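The classification logic discussed over the last three paragraphs can be loosely sketched as decision logic, as below. This is a simplified paraphrase of the June 2023 draft for illustration only: the statutory tests are more nuanced, the carve-outs are abbreviated, and the text may change before adoption.

```python
# Rough sketch of the draft AI Act risk tiers discussed above (June 2023
# draft). Conditions are simplified paraphrases, not the legal tests.
from dataclasses import dataclass

@dataclass
class UseCase:
    social_scoring: bool = False           # detrimental/unfair social scoring
    health_life_eligibility: bool = False  # eligibility decisions for health/life insurance
    significant_harm_risk: bool = False    # significant risk to health, safety or fundamental rights
    creditworthiness: bool = False         # credit scoring of natural persons
    fraud_detection_only: bool = False     # carve-out for financial fraud detection
    biometric_inference: bool = False      # inferring personal characteristics from biometric data

def draft_ai_act_tier(uc: UseCase) -> str:
    if uc.social_scoring:
        return "prohibited"
    if uc.health_life_eligibility and uc.significant_harm_risk:
        return "high-risk"
    if uc.creditworthiness and not uc.fraud_detection_only:
        return "high-risk"
    if uc.biometric_inference:
        return "high-risk (subject to exceptions)"
    return "low-risk (limited obligations, e.g. transparency)"

# Example: a life insurance eligibility model posing significant harm risk.
print(draft_ai_act_tier(UseCase(health_life_eligibility=True,
                                significant_harm_risk=True)))  # high-risk
```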

The UK government is not planning to introduce any AI-specific legislation or put AI principles on a statutory footing at least in the near future. It remains to be seen how the UK government and financial regulators will shape this evolving area of law for (re)insurers.

Where (re)insurers' use of AI systems or their insurance-related activities brings them within the scope of other applicable laws, they will need to comply with those existing laws and regulations. Depending on the facts of each case, potentially relevant laws include the GDPR, the Solvency II Directive, the Insurance Distribution Directive, AML and CTF regulations, outsourcing regulations, cybersecurity laws and the UK Consumer Duty.

 

Risk Mitigation Measures

If (re)insurers decide to adopt, operate or embed AI solutions in their business operations, they should consider implementing the following risk mitigation measures to better manage their risks:

  • Regulation: (Re)Insurers should keep track of evolving legal developments around AI such as the European legislative process for the AI Act, and plan beforehand to comply with such regulations.
  • Responsible and ethical AI policies: These policies should define (re)insurers’ ethical principles for the development and use of AI, and should include a process for reviewing and approving AI projects, a risk assessment for all AI projects and training for employees on AI ethics.
  • Data governance: (Re)Insurers should establish robust data governance practices to ensure that the data used to train and operate AI systems is collected and used in a lawful and transparent manner. This includes ensuring that the data is accurate, complete and up to date.
  • Algorithmic transparency: (Re)Insurers should ensure that AI systems are transparent and that they can explain how they work and why they make certain decisions. This includes providing clear explanations of the algorithms used by the systems and the data that they are trained on.
  • Human oversight: (Re)Insurers should ensure that there is an appropriate level of human oversight of AI systems, so that people can intervene if necessary. This means that there should be people who understand how the systems work and who can decide whether or not to intervene (a minimal sketch of one such escalation gate follows this list).
  • Security: (Re)Insurers should ensure that their AI systems have an appropriate level of accuracy, robustness, safety and cybersecurity to protect the systems from unauthorised access, modification, or destruction. (Re)Insurers should also establish an incident response plan to address security breaches promptly.
  • Vendor due diligence: If using AI solutions from third-party vendors, (re)insurers should conduct thorough due diligence to assess their security, compliance, and ethical practices. They should ensure that vendors align with their risk, compliance and ethical standards.
  • Training: (Re)Insurers should provide training to employees on AI ethics, security and compliance.
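As an illustration of the human oversight measure above, the sketch below routes low-confidence or high-impact AI decisions to a human reviewer rather than applying them automatically. The confidence threshold and the reviewer function are illustrative assumptions, not a prescribed control.

```python
# Minimal human-in-the-loop gate: auto-apply only confident, low-impact AI
# decisions; escalate everything else to a human reviewer. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str       # e.g. "decline cover"
    confidence: float  # model's confidence in [0, 1]
    high_impact: bool  # e.g. an adverse decision for the customer

def apply_with_oversight(decision: Decision,
                         human_review: Callable[[Decision], str],
                         min_confidence: float = 0.9) -> str:
    """Escalate low-confidence or high-impact decisions to a human."""
    if decision.confidence < min_confidence or decision.high_impact:
        return human_review(decision)  # human can uphold, amend or overturn
    return decision.outcome

# Example: an adverse underwriting decision is always escalated, however
# confident the model is.
reviewer = lambda d: f"human review of '{d.outcome}'"
print(apply_with_oversight(Decision("decline cover", 0.95, high_impact=True), reviewer))
```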

 

Next Steps

Stakeholders within the (re)insurance industry must conduct a thorough evaluation of their AI use cases, risk tolerance, business requirements, anticipated benefits and evolving regulatory landscape before embracing AI solutions or introducing new AI-related policies. Should (re)insurers opt to embrace new opportunities, they should begin formulating a plan or strategy, which may involve substantial revisions to their current practices, to align AI systems with upcoming AI regulations such as the AI Act as well as existing legal frameworks. Additionally, they should put in place risk-mitigation measures to effectively manage any potential legal, regulatory or financial risks.

 

 

Authored by Daniel Lee and John Salmon

 
