AI and product safety: New report by the UK’s Office for Product Safety and Standards

Early last year, the Office for Product Safety and Standards (the “OPSS”) commissioned the Centre for Strategy and Evaluation Services to carry out a study on the impact of artificial intelligence (“AI”) on product safety. The scope of the study was large, encompassing all manufactured consumer products (except for vehicles, pharmaceuticals and food) and involved the consultation of a number of different stakeholders. The results of the study are contained in a comprehensive report published by the OPSS on 23 May 2022 (the “Report”).

We have outlined the key points below, focusing in particular on the benefits and challenges that AI in consumer products can bring, as well as on whether the current regulatory framework for product safety and liability is sufficient for these types of products.

AI: What’s the big deal?

“Smart” products, “connected” products, and consumer Internet of Things (“IoT”) products – these are all related terms that are used interchangeably with AI, but what does AI really mean? According to the Report, AI is a broad, constantly evolving term which generally refers to “machines using statistics to find patterns in large amounts of data” and “the ability to perform repetitive tasks with data without the need for constant human guidance”. Some examples of AI include voice recognition, facial and image recognition, machine learning and natural language processing.

The Report goes on to identify the key characteristics of AI applications relevant to product safety:

  • Data needs: AI consumer products require significant amounts of good quality data to function effectively;

  • Opacity: It is not always clear to a consumer when an AI system is in use and the workings of certain AI consumer products can be opaque; and

  • Mutability and autonomy: AI systems have the ability to learn and develop over time, instead of relying on explicit instructions, and so they can display autonomy in actions and decision-making.

While there is no doubt that the use of AI in consumer products is on the rise, the study identifies notable differences between the way in which it is used across various product groups. For example, whilst smart speakers commonly use AI (offering features such as speech recognition and voice assistant systems to understand and respond to user requests), domestic appliances are not as advanced in adopting AI into their design – likely due to cost, privacy and awareness barriers.

Despite this, there is little doubt that the use of AI in consumer products will continue to increase over the coming years – particularly as investment and innovation (partly spurred on by the reliance on technology during the COVID-19 pandemic) lead to improvements in both hardware and AI solutions. But what will this mean for product safety and liability?

The Report: The key details

Objectives

The Report focused on three specific objectives:

  1. Analysing the current and likely future applications of AI in the home, highlighting the advantages and disadvantages for consumers and the product safety implications and risks;

  2. Assessing whether the current product safety framework is sufficient for a new generation of products that incorporate AI; and

  3. Examining what factors regulators should consider when responding to the new challenges posed by AI to ensure consumer safety and foster product innovation.

Findings

In relation to product safety, the Report outlines a number of opportunities and challenges when incorporating AI systems into manufactured consumer products. These include:

  • Opportunities:
      ◦ Safer product design: AI can assist engineers and other professionals at the product design stage by enabling them to work with an algorithm that generates only safe design solutions;
      ◦ Enhanced consumer safety and satisfaction: Data collected with the support of AI can allow manufacturers to incorporate a consumer’s personal characteristics and preferences into the design process, helping to anticipate how a product will be used and to ensure it is designed accordingly;
      ◦ Safer product assembly: AI tools such as visual recognition can assist with quality inspections along the supply chain, ensuring all of the parts and components being assembled are safe and leaving little room for human error;
      ◦ Prevention of mass product recalls: Enhanced data collection via AI during industrial assembly can detect problems that are not easy to identify through manual inspections, allowing issues to be caught before products are sold;
      ◦ Predictive maintenance: AI can provide manufacturers with critical information that allows them to plan ahead and forecast when equipment may fail, so that repairs can be scheduled in good time;
      ◦ Safer consumer use: AI in customer services can also contribute to product safety, with virtual assistants answering consumer queries and providing recommendations on safe product usage; and
      ◦ Protection against cyber-attacks: AI can be leveraged to detect, analyse and prevent cyber-attacks that may affect consumer safety or privacy.
  • Challenges:
      ◦ Products may not perform as intended: Product safety challenges may result from poor decisions or errors made in the design and development phase. A lack of “good” data can also produce discriminatory results, particularly affecting vulnerable groups;
      ◦ AI systems lack transparency and explainability: A consumer may not know or understand when an AI system is in use and taking decisions, or how such decisions are being taken. This lack of understanding can in turn affect the ability of those who have suffered harm to claim compensation, given the difficulty of proving how the harm came about; and
      ◦ Cyber security vulnerabilities can be exploited: AI systems can be hacked and/or lose connectivity, which may result in safety risks; for example, if a connected fire alarm loses connectivity, the consumer may not be warned if a fire occurs.

AI: What’s next?

The Report found that for many existing AI consumer products, the current regulatory framework for product safety and liability and the mechanisms in place to monitor product safety are applicable and sufficient. Having said this, the development of more complex AI systems is likely to mean that gaps in the current UK legislative and regulatory regime become more apparent over the coming years. Some examples include:

  • It is unclear whether AI software is covered by current UK law (including, for example, the General Product Safety Regulations 2005), and there is a need to explore what the concept of a ‘product’ really includes;

  • The introduction of AI in consumer products has resulted in complex supply chains with a number of different economic operators (including software developers) which in turn requires deeper consideration about where responsibility and liability for harms should lie;

  • The definitions of ‘damage’ and ‘defect’ may also need further consideration, as notions of harm may increasingly include risks with ‘non-physical’ effects, such as damage to personal data or the mental health impacts of products (and not just the physical health and safety effects currently covered);

  • Focusing on ensuring compliance at the point at which a product is placed on the market may not be sufficient where a product can change autonomously over time once in the hands of a consumer; and

  • Product standards do not currently address the use of AI in consumer products, creating significant challenges for manufacturers, conformity assessment bodies and authorities in understanding what product compliance looks like for these types of products.

While the Report does not explicitly recommend that the UK introduce regulation to fill the gaps identified above, we expect a growing consensus among stakeholders in favour of such regulation in the near future (particularly given the influence that European movements in this area are likely to have on the UK Government). Hogan Lovells is actively monitoring developments in this space – keep an eye out for our future updates.

 

 

Authored by Valerie Kenyon, Eshana Subherwal, Vicki Kooner, and Daniel Lee.