How European and U.S. governments are looking to regulate AI technologies

The ethical and legal risks associated with AI technologies have long been discussed in academic and policymaking circles. Yet, 2021 was the year that legislators and supervisory authorities started to take tangible steps to regulate their use across Europe and the U.S., through the introduction of new legislation, strategic plans and enforcement actions.

European Perspective

The most significant of these developments came in April 2021, when the European Commission published its long-awaited draft AI Regulation. The draft regulation is a bold and comprehensive proposal that will have a significant impact on the financial services industry, given firms’ growing reliance on artificial intelligence technologies in recent years and the GDPR-style fines that are envisaged. It also places the EU at the forefront of ongoing policy discussions across major economies about how best to ensure that future AI-driven technologies are developed in an ethical and human-centric manner.

The UK also published its own National AI Strategy in the autumn, which outlined an intention to introduce future regulation in the field. This was supplemented by new post-Brexit proposals to reform UK data protection law, including the potential introduction of new AI-specific obligations.

Meanwhile, across Europe, data protection authorities in countries including Italy and the Netherlands have taken high-profile enforcement actions against technology companies in relation to algorithmic decision-making, often resulting in multi-million euro fines.

Impact of the EU’s proposed AI Regulation on financial services

The European Commission states that the aim of the AI Regulation is to introduce proportionate and flexible rules that will help to address the specific risks posed by companies’ use of AI systems. The proposal sets out three primary risk categories: (i) unacceptable risk, which applies to a limited set of use-cases where the AI systems will be banned entirely; (ii) high risk, where substantial new obligations are imposed; and (iii) limited risk, which covers AI systems that interact with individuals and attracts lighter transparency obligations.

Of particular relevance to financial services firms will be the Commission’s decision to place AI systems used to evaluate credit scores or creditworthiness, together with certain forms of biometric identification software (e.g. facial recognition), in the ‘high-risk’ category. In the Commission’s assessment, a high-risk categorisation is warranted for credit scoring because the use of AI in this context may have the result of denying citizens access to financial resources such as credit.

Where an AI system is deemed high-risk, the new rules will apply not only to financial services firms based in the EU but also to those based in a third country (such as the UK or U.S.) where the outputs from the AI system are ‘used’ in the EU. The obligations that apply depend on whether the firm has developed the AI system internally or procured it from a third-party supplier. In the former case, the firm will be deemed a ‘provider’ and required to fulfil an extensive set of new obligations before the system can be put on the market. Examples of these obligations include:

  • Implementing adequate risk assessment measures.
  • Maintaining substantial data governance measures with respect to training and testing data.
  • Introducing technical measures to facilitate algorithmic transparency, human oversight and mitigation of bias.
  • Drafting detailed technical documentation relating to how the AI system operates.

Where a firm procures an AI system from a third party, it will instead be classified as a ‘user’. Users are subject to an alternative set of obligations, including adhering to the provider’s mandatory technical documentation and monitoring that the system complies with the relevant requirements. It remains unclear to what extent users will be liable for ensuring that their providers are adhering to their own obligations under the regulation.

From a practical perspective, the Commission recognises that some AI systems are already indirectly regulated under EU financial services legislation. To ensure consistency with these existing provisions, the Commission proposes that the EU financial services supervisory bodies should be designated as ‘competent authorities’ in relation to credit institutions. Depending on how this is implemented, it may give such authorities the power to supervise the requirements in the AI Regulation that relate to AI systems provided or used by regulated credit institutions.

The UK AI Strategy

The UK currently takes a fragmented approach to the regulation of AI, with different aspects of the development and use of AI being indirectly regulated through sector-specific regulations, technology-neutral legislation (such as data protection and consumer law) and other regulatory guidance. In the UK AI Strategy published in September 2021, the government indicated its intention to establish a more effective regulatory approach to AI in the UK, one that is pro-innovation and promotes trust in AI systems.

As a result, the government is now considering whether to retain the sector-based approach or to introduce additional cross-sector principles and rules to enable more consistency across regimes. The strategy acknowledges that greater harmonisation could be achieved in part by minimising cross-sector regulatory overlap on matters that are currently dealt with by multiple regulators. In particular, the government pointed to the concept of fairness, which is addressed under the Equality Act, data protection laws and the Financial Conduct Authority’s (FCA) approach to the fair treatment of customers. The government is expected to publish a white paper in early 2022 setting out its specific plans for future regulation.

In the meantime, the UK Department for Digital, Culture, Media and Sport (DCMS) has also published proposals for post-Brexit reforms to the existing data protection regime. These include the potential relaxation of certain provisions that directly affect the development and use of AI (e.g. the use of personal data for training algorithms), as well as the possible introduction of more substantial obligations governing the fairness of outcomes derived from AI where they impact consumers and other individuals.

Next steps

Further progress on the EU’s proposed AI Regulation and the UK’s own proposals is expected during the course of 2022. In the meantime, it is increasingly important for financial services firms that use AI to ensure they have dedicated governance frameworks in place, along with appropriate technical measures, to manage the risks associated with these technologies.

U.S. Perspective

The United States lacks a federal law that specifically regulates artificial intelligence. In its absence, the current U.S. regulatory landscape includes federal laws in other areas that reach issues related to algorithms and automated decision-making, recently enacted state laws that impose new legal obligations in this area, and regulatory activity by federal agencies using their existing authority to develop AI policies. The appetite for additional, and potentially broader, AI regulation is apparent in the number of bills being introduced in state legislatures and Congress. In addition, as the Biden Administration seeks to enact its regulatory agenda across government, we expect to see increased attention to AI and louder calls for more regulation.

Existing federal laws in the United States generally approach AI regulation through the lens of fairness and anti-discrimination principles. For example, the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) both take this approach. Under the FCRA, which protects consumers from inaccuracy, unfairness, or misuse of their personal data by consumer reporting agencies, the use of third-party algorithms to make eligibility decisions about employment, housing, credit, insurance, or other financial benefits may require organizations to provide adverse action notices and allow individuals to contest those decisions. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, or age, or because a person receives public assistance.

In addition, AI is receiving increased attention as a consumer protection issue across the federal government. As the lead U.S. consumer protection agency, the Federal Trade Commission (FTC) has long exercised its authority to regulate private-sector uses of personal information and algorithms under Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The FTC recently issued guidance highlighting the importance of truth, equity, and fairness in AI use, signaling that the agency may be preparing to expand its regulation of AI deployments and algorithmic harms such as bias and discrimination. The Consumer Financial Protection Bureau, under new leadership, has also expressed an interest in the use of AI and algorithms to distribute financial services, including algorithm-driven underwriting and the use of detailed behavioral data in financial decision-making. Congress has shown interest as well, with several AI-related bills introduced this year.

State legislatures are also tackling AI, with legislation introduced in more than 17 states and AI rules included in new privacy laws in California, Colorado, and Virginia. For example, the California Privacy Rights Act includes “access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling” as well as transparency obligations related to consumer access requests. The Colorado Privacy Act and the Virginia Consumer Data Protection Act both include rights to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. It remains to be seen how the states will implement these requirements through rulemaking – expected to take place over the course of 2022 – and how they will be enforced.

Next steps

The White House has also recently indicated that it plans to play a leading role in driving U.S. AI policy development. The Office of Science and Technology Policy (OSTP) plans to create an AI Bill of Rights in collaboration with others in the federal government, as well as academia, civil society, the private sector, and communities across the country. With this spotlight from the White House, it seems likely that AI will remain at the top of policymakers’ minds for the foreseeable future. Financial institutions deploying AI are well advised to consider whether their practices are aligned with the principles of trustworthiness, fairness, and equity that are driving regulation at the federal and state level across the United States.

 

 

Authored by Bret Cohen, Ambia Harper, Dan Whitehead and Nikki Ogun.

Contacts
Bret Cohen, Partner, Washington, D.C.
Ambia Harper, Knowledge Lawyer, Washington, D.C.
Dan Whitehead, Counsel, London
Nikki Ogun, Senior Associate, London

 
