AI & Algorithms (Part 4): The FTC’s Guidance on AI

Although the U.S. has no federal law that specifically regulates artificial intelligence (AI), the Federal Trade Commission (FTC) has indicated that it may be preparing to exercise its consumer protection authority with respect to AI deployment. In April 2021, the FTC issued new guidance on the use of AI, building upon its 2020 AI guidance and its 2016 report on big data. And FTC Acting Chairwoman Rebecca Kelly Slaughter has stated in public remarks that the Commission will be exploring concerns relating to algorithmic harms, including bias and discrimination. Organizations deploying AI systems in the U.S. should familiarize themselves with the FTC guidance to ensure that their uses of AI comply with U.S. consumer protection requirements.

FTC Authority to Regulate Artificial Intelligence 

The FTC has long exercised its authority to regulate private sector uses of personal information and algorithms that impact consumers. As discussed below, that authority stems from Section 5 of the FTC Act (Section 5), the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA).

Section 5 prohibits unfair or deceptive acts or practices in or affecting commerce. An act or practice is considered deceptive if a material statement, omission, or other practice is likely to mislead a consumer acting reasonably under the circumstances. An act or practice is considered unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition. The FTC’s most recent guidance offers examples of how AI deployments could be deemed deceptive (e.g., if organizations overpromise regarding AI performance or fairness) or unfair (e.g., if algorithms disproportionately harm certain racial or ethnic groups).

FCRA regulates consumer reporting agencies and the use of consumer reports. The FTC’s AI guidance and enforcement actions make clear that the FTC considers certain algorithmic or AI-based collection and use of data to be subject to the FCRA. For example, if an organization purchases from a background check company a report or score about a consumer that was generated using AI tools, and uses that report or score to deny the consumer housing, the organization must provide an adverse action notice to the consumer as required by the FCRA. The FTC has also noted that organizations that supply data that may be used for AI-based insurance, credit, employment, or similar eligibility decisions may have FCRA obligations as “information furnishers.”

The ECOA prohibits discrimination in access to credit based on protected characteristics such as race, color, religion, national origin, sex, marital status, and age. The FTC notes in both its 2020 and 2021 guidance that if, for example, a company used an algorithm that, either directly or through disparate impact, discriminated against a protected class with respect to credit decisions, the FTC could challenge that practice under the ECOA.

Recent FTC Guidance

The FTC’s updated guidance provides insight into its expectations for organizations using AI.

  • Start with the right foundation: The FTC states that the key to addressing disparate treatment of protected groups is to assess, from the beginning, whether training data sets have gaps. Organizations should consider how they can improve their data sets or establish controls to address any gaps, including limiting how and where the algorithm is used (depending on the potential data shortcomings). This builds on the FTC’s 2020 guidance, which recommended that companies validate and revalidate data sets not only to ensure accuracy but also to avoid unlawful discrimination, as well as the FTC’s 2016 big data report, which details the importance of relying on representative data sets and vetting data sets for bias. The FTC has previously noted that when evaluating the legality of AI, it will consider inputs to the model, “such as whether the model includes ethnically-based factors, or proxies for such factors, such as census tract.”
  • Watch out for discriminatory outcomes: The FTC recommends testing algorithms before use and regularly thereafter to “make sure that [organizations do not] discriminate on the basis of race, gender, or other protected class.” Again, this builds on the 2020 and 2016 recommendations designed to make AI outcomes fair and ethical (a minimal illustration of such outcome testing appears after this list). Additionally, the FTC’s 2020 guidance notes that organizations should consider the potential for disparate impact in an AI system’s outcomes. Some questions the FTC suggests to assess the fairness of algorithms are: 
  1. How representative is the data set?
  2. Does the data model account for biases?
  3. How accurate are the predictions based on big data?
  4. Does the particular reliance on big data raise ethical or fairness concerns?
  • Embrace transparency and independence: To reduce the potential for discriminatory outcomes, the FTC suggests embracing transparency and independent review by, for example, conducting and publishing independent audits and publishing source code for outside inspection. The 2020 guidance further notes the importance of being transparent with consumers regarding the use of automated tools, including the factors used to generate any automated decisions. 
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results: The FTC reminds organizations not to exaggerate what their algorithms can do, as exaggerations may run afoul of the deception provisions of Section 5. This is one of the more straightforward areas for the FTC to enforce, and when the FTC issues guidance about a particular technology, it is typically vigilant about misrepresentations related to that technology.
  • Tell the truth about how you use data: The FTC emphasizes in the 2021 and 2020 guidance that organizations should notify consumers about how and when consumer personal information will be used by AI or used to develop AI, especially if the information is sensitive. The FTC notes that failure to properly explain how consumers can control the use of personal information to develop algorithms may lead to enforcement under Section 5. 
  • Do more good than harm: The FTC advises organizations to ask themselves whether their AI models cause more harm than good. If so, the algorithms could be considered “unfair” under Section 5 and therefore subject to enforcement. Algorithms operating in areas like housing, credit, or other circumstances in which inaccuracies could have significant negative effects on consumers should be assessed especially carefully. 
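By way of illustration only, the sketch below shows one simple form the outcome testing described above might take, using the “four-fifths rule” heuristic long used in U.S. employment-discrimination analysis: flag any group whose favorable-outcome rate falls below 80 percent of the most favored group’s rate. The function names, column names, sample data, and 0.8 threshold are hypothetical assumptions, not drawn from the FTC guidance.

```python
# Minimal sketch of a disparate-impact check using the "four-fifths rule"
# heuristic. All names, data, and the 0.8 threshold are illustrative
# assumptions; they are not taken from the FTC guidance.

from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the favorable-outcome rate for each protected group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        favorable[group] += 1 if row[outcome_key] else 0
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_flags(records, group_key, outcome_key, threshold=0.8):
    """Return groups whose rate falls below `threshold` times the best rate."""
    rates = selection_rates(records, group_key, outcome_key)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical model outputs: 1 = approved, 0 = denied.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

print(disparate_impact_flags(decisions, "group", "approved"))
# {'B': 0.333...} -- group B's approval rate is one third of group A's,
# well under the 0.8 heuristic, so this model would warrant closer review.
```

In practice, a heuristic like this would be only a starting point: the FTC’s recommendation is for testing before deployment and regularly thereafter, and real-world fairness assessments pair such checks with statistical significance testing, documentation, and legal review.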

Organizations deploying AI are well-advised to consider whether they are doing so in alignment with the FTC’s recommendations, and how best to demonstrate that such use is truthful, fair, and equitable in the eyes of the FTC.

Our series on AI regulation

If the early part of the 21st century comes to be known as the age of big data, then what we have now entered is the age of algorithms.

Across industries, organizations are increasingly relying on artificial intelligence and machine learning technologies to automate processes, introduce innovative products into consumer markets, and enhance research and development.

This article is part of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. We will take a detailed look at key issues including algorithmic bias, privacy, consumer harms, explainability, and cybersecurity. We will also explore the specific impacts in industries such as financial services and healthcare, with consideration given to how existing policy proposals may shape the future use of AI technologies.

 

Authored by Bret Cohen, James Denvil, and Filippo Raso.

Brittney Griffin, a Senior Paralegal in our New York office, contributed to this entry.

 
