UK ICO issues new guidance on AI and data protection

On July 30 the UK Information Commissioner's Office (ICO) published new guidance on AI and data protection. The guidance is intended to provide organisations that are using or developing artificial intelligence (AI) technologies with practical recommendations on the steps they should take to comply with data protection law.

While privacy professionals will be familiar with many of the topics addressed in the guidance, it also raises a number of challenges specific to the use of AI that may be less commonly encountered. Many of these issues, such as inherent bias, inaccuracies in model outputs and the difficulty of transparently explaining how decisions are made, stem from characteristics specific to AI and will often require positive intervention to meet regulatory expectations.

Consistent with the ICO's general approach to compliance, the guidance emphasises the importance of organisations taking a risk-based approach to AI. First, there should be an assessment of the risks to the rights and freedoms of data subjects that may arise in the circumstances. This should be followed by the identification and implementation of appropriate technical and organisational measures to mitigate those risks.

Key issues for organisations to consider

For organisations that are looking to use or develop AI (or are already doing so), some of the key issues identified by the ICO which need to be considered include:

  • Controllership

Careful consideration needs to be given to the controllership status of each party involved in the use, provision and development of AI, taking into account the particular circumstances. The role of developers and service providers may be particularly unclear, as they may hold different statuses at different stages of the product lifecycle. For instance, where personal data is used to train a model, the organisation responsible for the training is likely to be a controller. However, that same organisation may act as a processor when it makes the model available to its customers.

  • Bias

Concerns about the potential for inherent bias and discriminatory outcomes arising from decisions taken through the use of AI have been rife in recent years. The ICO emphasises that preventing bias is a key component in ensuring that processing is fair and protects individuals' rights and freedoms under the GDPR. Organisations should look to identify the risks of potential bias in their AI models and deploy technical measures, such as modifying the training data or the underlying algorithms, to mitigate those risks. In the UK, the presence of bias should be assessed against what constitutes discrimination under the Equality Act 2010.
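By way of illustration only, the short Python sketch below shows one simple technical check of this kind: comparing a model's selection rates across protected groups and flagging the largest gap. The column names, data and tooling are our own illustrative assumptions rather than anything prescribed by the ICO, and a disparity flagged in this way is a prompt for investigation, not a finding of discrimination under the Equality Act 2010.

```python
# Illustrative sketch only: a simple disparity check across protected groups,
# assuming a pandas DataFrame with hypothetical columns "group" (a protected
# characteristic recorded for testing) and "approved" (the model's output).
import pandas as pd

def selection_rates(df: pd.DataFrame) -> pd.Series:
    """Approval rate of the model for each protected group."""
    return df.groupby("group")["approved"].mean()

def demographic_parity_gap(df: pd.DataFrame) -> float:
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(df)
    return float(rates.max() - rates.min())

# Hypothetical test data for two groups.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})
print(selection_rates(df))         # per-group approval rates
print(demographic_parity_gap(df))  # ~0.33 here; a prompt to investigate
```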

  • Statistical accuracy

Where AI is being used to make predictions or decisions about a particular individual, it is important that there is a reasonable degree of confidence in the accuracy of those outputs. While this does not mean that outputs from AI models need to be 100% accurate, the ICO expects reasonable steps to be taken to ensure that potentially incorrect inferences are corrected and errors are minimised.
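As a purely illustrative sketch, one way to operationalise this expectation is to track the model's error rate against verified outcomes and trigger a review when it drifts beyond an agreed tolerance. The data and threshold below are hypothetical assumptions, not values set by the ICO.

```python
# Hypothetical sketch: monitoring statistical accuracy by comparing a model's
# predictions against outcomes verified after the fact, and flagging when the
# error rate exceeds a tolerance the organisation has judged "reasonable".
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 1]  # verified outcomes
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]  # what the model predicted at the time

acc = accuracy_score(y_true, y_pred)
print(f"accuracy: {acc:.2f}")
print("confusion matrix (rows = actual, cols = predicted):")
print(confusion_matrix(y_true, y_pred))

ERROR_TOLERANCE = 0.10  # illustrative threshold only
if 1 - acc > ERROR_TOLERANCE:
    print("Error rate above tolerance: review and correct affected inferences.")
```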

  • Explainability

Being able to explain why an AI model reached a particular inference or prediction is vital to the GDPR's principle of transparency. The ICO expects organisations to provide clear and detailed information about the basis on which automated decisions are taken about individuals. This will likely include the reasons for a decision, the data used to make it and details of the technical steps taken to ensure the AI operates in a fair and unbiased manner. The ICO has separately published extensive guidance on this topic.
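As an illustration only, the sketch below shows how, for a simple linear model, per-feature contributions to a decision can be surfaced to support such an explanation. The lending scenario, feature names and data are hypothetical, and real deployments will typically require more sophisticated explainability techniques of the kind covered in the ICO's separate guidance.

```python
# Hypothetical sketch: deriving a per-decision explanation from a simple
# linear model, where coefficient x feature value gives each feature's
# contribution to the decision score.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_10k", "years_at_address", "debt_10k"]
X = np.array([[3.0, 2, 0.5], [5.5, 8, 0.1], [2.2, 1, 0.9], [6.1, 10, 0.05]])
y = np.array([0, 1, 0, 1])  # 1 = application approved (illustrative data)

model = LogisticRegression().fit(X, y)

applicant = np.array([4.0, 3, 0.4])
print("approval probability:", model.predict_proba([applicant])[0, 1])

# Rank features by the size of their contribution to this decision,
# which can then support a plain-English explanation to the individual.
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```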

  • Use of special category data

Special category data may be used across an AI product's lifecycle. A facial recognition system, for example, may use biometric data to train the model to recognise a person's characteristics. Equally, the ICO acknowledges that special category data may be used in testing to check for potential bias between particular groups, such as different ethnicities. When doing so, careful consideration will need to be given to which condition for processing under the GDPR can be satisfied. In some circumstances that may have to be explicit consent, but it may also be possible to rely on an alternative condition under the UK Data Protection Act 2018.

  • Security

The ICO highlights a number of particular security risks arising from the use of AI which will need to be assessed, taking into account the particular circumstances. One example given is the potential for 'model inversion' attacks, where a threat actor probes a model's outputs to infer personal data about the individuals whose data was used to train it.
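Purely as an illustration (the guidance does not prescribe specific code-level measures), one commonly discussed mitigation for inversion-style attacks is to coarsen the confidence scores a model service returns, reducing the signal available to an attacker making repeated queries. The function below is a hypothetical sketch of that idea.

```python
# Illustrative sketch only: coarsening a model's raw probability output before
# returning it from a prediction API, so repeated probing reveals less about
# the underlying training data. Not a measure mandated by the ICO guidance.
def coarsen_confidence(prob: float, step: float = 0.1) -> float:
    """Round a raw model probability into a coarse bucket before release."""
    return round(round(prob / step) * step, 2)

raw_score = 0.8734          # precise internal confidence
print(coarsen_confidence(raw_score))  # 0.9 — far less signal for an attacker
```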

What organisations should do now

We advise organisations that are already using AI models, or plan to do so, to take proactive steps towards compliance, including:

  1. Enhance existing governance controls – the ICO stresses the importance of accountability in the context of either using or developing AI technologies. Organisations should therefore develop policies and procedures that address the specific risks associated with AI and ensure that relevant staff members are given appropriate training.
  2. Identify existing AI applications – where AI or similar technologies are being used to make decisions or predictions about individuals, these systems should be identified.
  3. Undertake risk-based assessments for applicable AI applications – where personal data is involved, this will probably include the use of a data protection impact assessment, which primarily considers the potential risks to the rights and freedoms of affected individuals.
  4. Agree and implement mitigating actions – this will likely involve the deployment of technical privacy by design measures at the development and testing stages. Where an organisation is procuring the technology from a third party, incorporating appropriate contractual terms and undertaking due diligence are also likely to be relevant.
  5. Perform ongoing monitoring – AI models will often be updated and change over time. It will therefore be necessary to undertake periodic monitoring of AI systems to ensure that they are performing as expected, particularly with respect to bias and statistical accuracy.

