Some of the most widely discussed challenges include the potential for algorithmic bias, undetected performance issues that inadvertently harm individuals, and privacy risks arising from the heavy reliance on personal data.
Because of the characteristics commonly associated with AI and algorithmic decision-making, these risks will often need to be addressed and mitigated through dedicated solutions and additional compliance measures.
The existing regulatory framework
While most jurisdictions currently lack laws aimed specifically at AI, this does not prevent existing regulatory frameworks from applying.
Firstly, data protection laws will, in many cases, be highly relevant. Personal data is often fundamental to how AI is both developed and deployed. Large volumes of ‘training data’ are generally required to configure machine learning models so that they can respond appropriately to the variety of scenarios they may be confronted with. Similarly, data is needed to test a model’s performance and will generally form the basis of the model’s inputs in a live environment. This close nexus between AI and privacy has led certain authorities, such as the UK Information Commissioner’s Office (ICO) and Spain’s AEPD, to develop detailed guidance on the topic.
Similarly, where AI is used to influence or fully automate decisions about individuals, it can give rise to the risk of algorithmic bias against particular groups, including minorities. This may occur because historic prejudices are reflected in the training data set, or because of wider configuration issues with the model. Where bias affects particular individuals, it could result in infringements of anti-discrimination laws.
Issues associated with misuse of personal data and bias are also examples of harms which may fall within the scope of broader legislation such as consumer and competition laws, alongside sector-specific requirements associated with treating customers fairly (e.g., in financial services).
Organisations involved in the development of AI also need to consider a variety of other issues, including the protection of their intellectual property and safety concerns that may arise from performance failures or malicious use of their models by unauthorised third parties. Safety and cybersecurity are particularly relevant to companies using AI as part of their critical infrastructure, such as in the automotive, aerospace, defence and energy industries.
The future of AI regulation
Yet, while it is apparent that many existing laws are directly relevant to the use of AI, regulators and policymakers across the world are showing a growing interest in going further.
One of the most ambitious proposals for legislative reform comes from the European Union. In 2020 the EU set out its plan to regulate what it considers to be ‘high-risk’ applications of AI that may be deployed in particular sectors or involve the use of certain technologies, such as facial recognition. The European Commission is expected to unveil further details of its proposals in April 2021.
Meanwhile, various initiatives are underway in the UK. These include the formation of the Digital Regulation Cooperation Forum, whose members are the ICO, the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA) and Ofcom. These four regulators have come together to establish a coordinated approach to building trust in the digital economy, including by addressing algorithmic processing and AI.
The United States Government is also showing a growing interest in this area. In late 2020, the White House (under the previous administration) published a memorandum setting out the principles that should guide federal agencies’ regulation of AI.
These developments highlight the growing regulatory risks for organisations that develop or make use of artificial intelligence, along with the increasing importance of having a dedicated compliance governance framework in place to address these risks.
Our series on AI regulation
If the early part of the 21st century comes to be known as the age of big data, then what we have now entered is the age of the algorithm.
Across industries, organisations are increasingly relying on artificial intelligence and machine learning technologies to automate processes, introduce innovative new products into consumer markets and enhance research and development.
This article is part one of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. We will take a detailed look at key issues including algorithmic bias, privacy, consumer harms, explainability and cybersecurity. We will also explore the specific impacts in industries such as financial services and healthcare, and consider how existing policy proposals may shape the future use of AI technologies.
Authored by Dan Whitehead.