Artificial intelligence in the insurance sector: fundamental right impact assessments

The new AI Act establishes an obligation for the deployers of certain high-risk AI systems to conduct a “fundamental rights impact assessment” (“FRIA”). This will significantly affect insurance companies that use AI systems for risk assessment and pricing in relation to life and health insurance products, as well as for creditworthiness purposes.

Brief Intro to the AI Act

Recently, the EU Parliament approved the Artificial Intelligence Regulation (“AI Act”). This regulation (expected to be finally approved around April) aims to regulate the development, deployment and use of AI systems within the EU. The regulation categorizes these systems into different risk levels, imposing stricter obligations on higher-risk AI systems. The AI Act also prohibits certain uses of AI (e.g., systems designed to manipulate behaviour or exploit vulnerabilities).

Insurance companies making use of AI systems will have to comply with several obligations. For instance, they will need to have an AI governance program in place, affix the CE mark, comply with transparency obligations and, in some cases, conduct a FRIA.

What is a FRIA?

A FRIA is an assessment that deployers* must conduct before deploying certain high-risk AI systems for the first time. The aim of the FRIA is for the deployer to identify the specific risks to the fundamental rights of the people affected and the measures to be taken if those risks materialise. This assessment must include:

  • a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose and a description of the period of time during which each high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to impact the categories of persons or groups of persons identified pursuant to the previous bullet, taking into account the information given by the provider;
  • a description of the implementation of human oversight measures, according to the instructions for use, and the measures to be taken where those risks materialise, including the arrangements for internal governance and complaint mechanisms.

Who has to perform a FRIA and when?

The FRIA has to be performed by the deployers of certain high-risk AI systems, including (i) AI systems that evaluate the creditworthiness of natural persons or establish their credit score (except for systems used to detect financial fraud); and (ii) AI systems used for risk assessment and pricing in relation to natural persons for life and health insurance.

Therefore, the AI Act will have a major impact on the insurance sector, since the companies operating in this area may use these kinds of systems in their daily activities. There is no doubt that AI can be very helpful in calculating life and health insurance premiums, but these companies must also safeguard the fundamental rights of individuals. In fact, the AI Act names banking and insurance entities as examples of companies that should carry out a FRIA before implementing this kind of AI system.

Although the FRIA needs to be performed only before deploying the system for the first time, the deployer must update any element that changes or is no longer up to date. Also, in similar cases, the deployer can rely on previously conducted FRIAs or on existing impact assessments carried out by the provider of the system. In addition, the FRIA could form part of, and complement, a data protection impact assessment (“DPIA”) under Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).

Does this assessment have to be notified to any authority?

Yes, the deployer has to notify the market surveillance authority of its results (in Spain, the Statute of the Spanish Artificial Intelligence Supervisory Agency has already been approved). Along with this, a questionnaire will have to be completed through an automated tool, which will be developed by the AI authority.

How should FRIAs be carried out in practice?

Depending on how AI obligations are structured within the company, there are several options. The one that may make the most sense for insurance companies is to carry out the FRIA together with the DPIA, as there may be many synergies to leverage. This way, the data protection officer and the privacy team can also be involved.

In addition, insurance companies already have procedures in place to carry out DPIAs. Integrating FRIAs into the same process could be less disruptive and require fewer resources.

Finally, FRIAs should be aligned with the insurance company’s AI governance program. Very often, the risks for individuals (e.g., the existence of biases or discrimination) will already be covered by the AI governance program.

When should insurance companies start carrying out FRIAs?

Even though the “formal” obligation will only become applicable in a couple of years, the sooner the FRIA process is ready, the better. This way, implementation of the AI Act will be smoother and the company will be in a position to demonstrate compliance.

Next steps

  • Insurance companies should start creating the internal procedure to implement and validate FRIAs (potentially integrating it with the DPIA process).
  • The FRIA process should also be aligned with the content of the AI governance program.
  • Insurance companies should identify the uses of AI systems that would require a prior FRIA.


*A deployer is a natural or legal person using an AI system under its authority in the course of a professional activity, and may be different from the developer or distributor of the system.



Authored by Gonzalo F. Gállego, Juan Ramón Robles, and 


This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.
