Privacy X Generative AI: Privacy considerations for AI deployments

AI can feel like a brave new world. In many ways, it is. However, when AI tools process personal data, it is important to remember two basic lessons learned from deploying other digital technologies: data privacy considerations should be addressed throughout the lifecycle of AI tools, and existing privacy compliance frameworks are generally well-suited to address those considerations, though adaptations may be required.

Below we address some of the key privacy compliance considerations for organizations deploying AI tools.

Privacy Statements Matter

When AI tools process personal data, traditional privacy considerations and legal requirements generally apply. Organizations should therefore confirm that AI tools process personal data in ways that are consistent with their privacy statements. If AI tools collect, use, retain, or disclose personal data in ways that are inconsistent with existing privacy statements, that may expose an organization to litigation or enforcement actions. Organizations may therefore wish to foster open communication between business units and privacy legal and compliance functions. By doing so, organizations will be better able to identify and address potential issues raised by innovative AI tools that process personal data.

Additionally, many organizations may benefit from enhancing their privacy statement disclosures regarding their uses of AI. Even absent strict legal requirements for specific AI disclosures in privacy statements, enhanced language can help mitigate risk for organizations that rely on privacy statements to help establish informed consent for certain processing by AI tools.

Third Party Considerations

Like many digital tools, AI often involves third-party outsourcing. Organizations must therefore assess the roles that third parties play when providing AI tools. If AI vendors are to be considered “service providers” or “processors,” organizations must confirm that those third parties are appropriately restricted from using personal data for their own purposes. However, many AI providers insist on terms and conditions that permit them to use customer data for a range of independent purposes, such as product improvement or data enhancement. While there may be no legal prohibition on permitting third parties to use personal data for such purposes, such uses may require consent or the provision of opt-out rights. Consumer organizations should therefore carefully assess the data processing roles that providers of AI tools will take on and the compliance requirements for supporting those roles.

Additionally, in the United States, plaintiffs’ attorneys are testing the bounds of eavesdropping and wiretapping laws, alleging that the use of third-party AI tools (including chatbots powered by third-party AI systems) implicates eavesdropping and wiretapping laws if consumers do not consent to the use of the tools. The plaintiffs allege that third-party AI tools capture electronic communications between consumers and ecommerce platforms without knowledge or consent.

Though courts may eventually clarify that such claims are meritless, consumer-sector organizations are considering whether and how to mitigate litigation risk by obtaining affirmative consent to the use of third-party tools, such as through pop-up banners or check boxes. 

Sensitive Personal Data: Inputs vs. Outputs

AI systems are renowned for their ability to connect the dots and analyze data faster than existing methods. This analytical power can deliver great benefits to consumer-facing organizations, such as identifying consumer trends or opportunities to enhance engagement and loyalty. But AI-driven analytics can create surprising challenges. 

Many consumer sector organizations choose to minimize the collection of, or even avoid collecting, sensitive personal data, such as race, health, or immigration status. The benefits of collecting sensitive personal data, if any, may be outweighed by the compliance requirements, such as obtaining express consent to the processing of such data. 

Consumer organizations should recognize, though, that AI tools may generate insights regarding sensitive personal data characteristics even when processing innocuous personal data. For example, a person’s name, postal code, and transaction history may, when subject to AI-driven analytics, reveal or suggest a person’s race. As many privacy laws regulate the processing of inferences that reveal sensitive personal data, as well as the processing of sensitive personal data itself, AI tools may make inferences that create new compliance obligations for consumer-sector organizations. So, when deploying AI tools, organizations should monitor and analyze the outputs to assess whether they contain insights or inferences regarding sensitive personal data.

Concluding Thoughts

AI has the potential to deliver substantial insights and efficiencies. It is hard to imagine a large retail brand succeeding in the current market without availing itself of the benefits that AI has to offer. However, outsourcing business operations to AI should be undertaken thoughtfully. Retailers are well advised to assess whether AI tools are fit for purpose, confirm whether appropriate controls are in place to address legal risk, and monitor the performance of AI tools to assess whether they continue to operate as desired.


Authored by James Denvil and Sophie Baum.


