Privacy Statements Matter
When AI tools process personal data, traditional privacy considerations and legal requirements generally apply. Organizations should therefore confirm that AI tools process personal data in ways that are consistent with their privacy statements. If AI tools collect, use, retain, or disclose personal data in ways that are not consistent with existing privacy statements, that inconsistency may expose an organization to litigation or enforcement actions. So, organizations may wish to foster open communication between business units and privacy, legal, and compliance functions. By doing so, organizations will be better able to identify and address potential issues raised by innovative AI tools that process personal data.
Additionally, many organizations may benefit from enhancing their privacy statement disclosures regarding their uses of AI. Even absent strict legal requirements for specific AI disclosures in privacy statements, enhanced language can help mitigate risk for organizations that rely on privacy statements to establish informed consent to the use of AI tools for certain processing activities.
Third-Party Considerations
Like many digital tools, AI often involves third-party outsourcing. Organizations must therefore assess the roles that third parties play when providing AI tools. If AI vendors are to be treated as “service providers” or “processors,” organizations must assess whether those third parties are appropriately restricted from using personal data for their own purposes. However, many AI providers insist on terms and conditions that permit them to use customer data for a range of independent purposes, such as product improvement or data enhancement. While there may not be a legal prohibition on permitting third parties to use personal data for such purposes, such uses may require consent or the provision of opt-out rights. Consumer-sector organizations should therefore carefully assess the data processing roles that providers of AI tools will take on and the compliance requirements that supporting those roles entails.
Additionally, in the United States, plaintiffs’ attorneys are testing the bounds of eavesdropping and wiretapping laws, alleging that the use of third-party AI tools (including chatbots powered by third-party AI systems) implicates eavesdropping and wiretapping laws if consumers do not consent to the use of the tools. The plaintiffs allege that third-party AI tools capture electronic communications between consumers and ecommerce platforms without knowledge or consent.
Though courts may eventually clarify that such claims are meritless, consumer-sector organizations are considering whether and how to mitigate litigation risk by obtaining affirmative consent to the use of third-party tools, such as through pop-up banners or check boxes.
Sensitive Personal Data: Inputs vs. Outputs
AI systems are renowned for their ability to connect the dots and analyze data faster than existing methods. This analytical power can deliver great benefits to consumer-facing organizations, such as identifying consumer trends or opportunities to enhance engagement and loyalty. But AI-driven analytics can create surprising challenges.
Many consumer-sector organizations choose to minimize the collection of, or even avoid collecting, sensitive personal data, such as race, health, or immigration status. The benefits of collecting sensitive personal data, if any, may be outweighed by the compliance requirements, such as obtaining express consent to the processing of such data.
Consumer-sector organizations should recognize, though, that AI tools may generate insights regarding sensitive personal data characteristics even when processing innocuous personal data. For example, a person’s name, postal code, and transaction history may, when subjected to AI-driven analytics, reveal or suggest a person’s race. Because many privacy laws regulate the processing of inferences that reveal sensitive personal data, as well as the processing of sensitive personal data itself, AI tools may make inferences that create new compliance obligations for consumer-sector organizations. So, when deploying AI tools, organizations should monitor and analyze the outputs to assess whether they contain insights or inferences regarding sensitive personal data.
Concluding Thoughts
AI has the potential to deliver substantial insights and efficiencies. It is hard to imagine a large retail brand succeeding in the current market without availing itself of the benefits that AI has to offer. However, outsourcing business operations to AI should be undertaken thoughtfully. Retailers are well advised to assess whether AI tools are fit for purpose, confirm whether appropriate controls are in place to address legal risk, and monitor the performance of AI tools to assess whether they continue to operate as desired.
Authored by James Denvil and Sophie Baum.