Consequences of AI discrimination in Spain (even without an approved AI Regulation just yet)

The EU Artificial Intelligence Regulation will create a framework for the use of artificial intelligence systems. Discrimination and discriminatory bias will be prohibited and subject to fines, and other AI systems will be subject to strong regulatory obligations. However, the Artificial Intelligence Regulation is still a draft, and its prohibitions will not be enforceable for years. Does this mean that AI discrimination is not subject to fines or compensation obligations today? Not in the case of Spain, as highlighted in this publication, where discrimination (including when using AI systems) is already subject to specific prohibitions with high fines and strong legal presumptions for monetary compensation.

Like most countries, Spain has a general anti-discrimination provision in its Constitution in the form of a fundamental right. However, this fundamental right has recently been further developed by the Integral Law for Equal Treatment and Non-Discrimination (“Non-Discrimination Act”), which includes numerous rules, legal presumptions supporting legal actions, and sanctions against discrimination.

In this regard, the Non-Discrimination Act may apply to discrimination arising from the use of artificial intelligence (“AI”) and large-scale data processing. AI is not the main focus of the law, but it is one of the legislator’s areas of concern.

What is the aim and scope of the Non-Discrimination Act?

The Non-Discrimination Act aims to guarantee and promote the right to equal treatment and non-discrimination, to respect the equal dignity of persons.

It has both a subjective and objective scope of application:

  • Subjective scope of application: the Act recognises the right of all persons to equal treatment and non-discrimination irrespective of their nationality, whether they are minors or adults, and whether or not they are legally resident. No one may be discriminated against on the grounds of birth, racial or ethnic origin, sex, religion, conviction or opinion, age, disability, sexual orientation or identity, gender expression, disease or health condition, serological status and/or genetic predisposition to suffer pathologies and disorders, language, socioeconomic status, or any other personal or social condition or circumstance.
  • Objective scope of application: the Act applies to the specific areas / sectors listed therein, including AI and massive data management, as well as other areas of similar significance. Other affected sectors include employment, education, health, transport, housing, advertising, and social media.

Although most obligations in this Act are applicable to the public sector, some are also applicable to private natural or legal persons residing, located or acting in Spanish territory, whatever their nationality, domicile or residence.

What are the main implications / obligations under the Non-Discrimination Act?

The main impact of this Act is the general prohibition of any provision, conduct, act, criterion or practice that violates the right to equality.

Discrimination is construed very widely: it includes (i) direct or indirect discrimination, (ii) discrimination by association and by mistake (e.g., a company wrongly believes that a person has a disease), (iii) incitement, order or instruction to discriminate, (iv) retaliation, and (v) failure to comply with affirmative action measures arising from statutory or treaty obligations, inaction, neglect of duty, or failure to perform duties.

Is it possible to treat people differently without such a differentiation being caught under the discrimination prohibition?

Differentiated treatment is not forbidden. However, when a person is subject to “differentiated treatment”, the company taking the decision must be in a position to demonstrate that the criteria for differentiation are:

    1. necessary, reasonable and proportionate;
    2. objective; and
    3. aimed at a legitimate purpose.

Differentiated treatment will also be accepted when a law authorizes it or in the context of positive discrimination pursued through public policies.

Who has the burden of proof if a person alleges discrimination?

When a person alleges discrimination and provides well-founded indicia of its existence, the defendant or the party to whom the discriminatory situation is imputed must prove that there has been no discrimination by providing an objective and reasonable justification of the measures adopted and of their proportionality. In other words, the burden of proof generally lies with the “potential” discriminating entity.

This is yet another reason for companies that use AI systems to have in place a robust AI governance policy evidencing that the AI system is free of bias and was trained with accurate and representative data.

How can a company demonstrate that the AI system does not discriminate?

In line with the latest version of the draft AI Regulation, where an AI system differentiates on the basis of personal attributes, companies should consider conducting a fundamental rights impact assessment before making use of the AI system. The assessment provides a way for companies to demonstrate that the AI system does not breach the non-discrimination principle and that any possible differentiation is lawful.

Similarly, implementing a data governance policy to ensure that the training and validation of the AI system is as free of biases or errors as possible, and that the data used is accurate and sufficient, would allow companies to demonstrate that the non-discrimination principle has not been breached.

In addition to the above, under the General Data Protection Regulation (“GDPR”), if an AI system is susceptible to anti-discrimination claims, data protection obligations may apply to the processing of personal data. For instance, it could be necessary to carry out:

    1. a risk assessment (arts. 24 and 25 GDPR); and
    2. a data protection impact assessment (art. 35 GDPR).

As there is a direct interplay between the “necessity test” and the “legitimacy test” under the Non-Discrimination Act and the application of the principles of the GDPR, producing these assessments together could be an efficient way to show accountability under both laws (and would also be useful for the purposes of the AI Regulation).

 

Interplay of the Non-Discrimination Act and the AI Regulation for high-risk systems

The Non-Discrimination Act applies to any sort of discrimination in several contexts, including artificial intelligence and massive data management. However, it does not differentiate between categories of AI systems. Therefore, the rules on the burden of proof, the possibility to treat people differently and the sanctioning regime apply regardless of whether the AI system qualifies as a high-risk AI system or a foundation model under the AI Regulation.

In other words, AI systems that do not qualify as high-risk must still comply with the Non-Discrimination Act.

Another notable difference between the scope of the AI Regulation and the Non-Discrimination Act is that the Non-Discrimination Act applies to situations of actual discrimination (or incitement to it). It does not apply directly to the training, validation, or data governance rules for AI systems. However, implementing a proper governance system and conducting a fundamental rights impact assessment are suitable measures to prove that the AI system does not discriminate.

What are the consequences of non-compliance with the Non-Discrimination Act?

The Non-Discrimination Act establishes a regime of infringements and penalties for non-compliance, with fines ranging from EUR 300 to EUR 500,000 (fines for discrimination may not be lower than EUR 10,001). In very serious cases, non-compliance may result in the closure of the establishment in which the discrimination occurred or the cessation of the economic or professional activity carried out by the offending person for a maximum period of five years.

However, please note that the Act specifically foresees that this regime may be further developed and classified, within the scope of their competences, by regional legislation (in which case the regional rules shall prevail).

Additionally, the following consequences may also arise from a breach of the Non-Discrimination Act:

  • Provisions, acts or clauses of legal transactions that constitute or cause discrimination shall be null and void.
  • Persons discriminated against shall receive monetary compensation, and the discriminatory situation shall be remedied (where possible). Once discrimination has been proven, the existence of non-pecuniary damage will be presumed.
  • The court / authority can order the discriminatory practice to cease going forward, which may affect the AI system itself.

Next steps

  • As the Non-Discrimination Act is already enforceable, providers and users of AI systems should document the steps taken to demonstrate that their AI systems do not discriminate.
  • Carrying out a fundamental rights impact assessment and having in place a data governance program are suitable mechanisms under both the Non-Discrimination Act and the AI Regulation.
  • The interplay between the GDPR, the AI Regulation and the Non-Discrimination Act should be addressed in order to leverage efforts made to show accountability.

 

Authored by Gonzalo Gallego, Juan Ramon Robles, and Clara Lazaro.

 


 

This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.

Attorney advertising. Prior results do not guarantee a similar outcome.