Background
The Convention is primarily a product of the Council of Europe, a body of 46 member states whose mission is to “promote democracy, human rights and the rule of law across Europe and beyond”.1 The Council of Europe is an independent international organisation, separate from the EU, and counts the UK among its member states.
The Convention is open to signature by the Council of Europe’s member states, the EU and non-member states “which have participated in its elaboration”2 – which includes Japan and Australia.3 Current signatories include the UK, the EU and the US, in addition to smaller states.4
The Convention follows a series of political declarations on international AI regulation. The Convention’s explanatory report acknowledges that negotiations took inspiration from:5
- The Reykjavik summit of 16-17 May 2023
- The G7 Leaders’ statements on the Hiroshima process of 30 October and 6 December 2023 (see further detail in the Engage article here)
- The Bletchley Declaration from the AI Safety Summit hosted by the UK on 1-2 November 2023 (see further analysis related to the summit in the Engage videos here)
Elements of these political declarations now have a legal foundation in the Convention.
The Convention’s principles and obligations
Who the Convention applies to
The Convention requires states to implement measures for three categories of actors:6
- Public authorities. According to the explanatory report, a public authority means “any entity of public law of any kind or any level”, including regional and municipal authorities.
- Private actors acting on behalf of public authorities. The explanatory report gives the example of public procurement.
- Private actors not acting on behalf of public authorities. The Convention’s treatment of this category is more complex. Parties to the Convention can take one of two approaches to “address risks and impacts” arising from these private actors’ activities: they can either apply the Convention’s requirements to them or take “other appropriate measures”. According to the explanatory report, “other appropriate measures” could include “administrative and voluntary measures”. The Convention’s requirements could therefore reach private actors through, for example, voluntary codes of practice rather than new legislative requirements.
Types of AI and AI uses covered by the Convention
The Council of Europe states that the Convention takes a technology-neutral approach. This is “to stand the test of time” as technology evolves.7 Article 2 defines “artificial intelligence system”, but the explanatory report envisages that domestic legal systems may add “further legal certainty and precision” to it.
The Convention requires states to implement measures in relation to the entire “AI lifecycle”. The explanatory report indicates this is meant to cover “any and all activities from the design of an artificial intelligence system to its retirement”. The Convention could therefore cover any of the activities within the scope of the EU’s AI Act, such as product manufacturing and deployment.
However, three types of AI lifecycle activities are outside the Convention’s scope:8
- Activities related to the protection of national security interests, provided they are conducted consistently with international law and with respect for democratic institutions. However, the explanatory report indicates the Convention still covers “dual use” AI, i.e. AI with multiple uses, including non-national security uses.
- R&D for AI systems not yet made available for use, provided it does not interfere with human rights, democracy and the rule of law.
- Matters relating to national defence, which are not within the Council of Europe’s scope in any event.9
The Convention’s requirements
The explanatory report acknowledges that the Convention focuses on human rights, democracy and the rule of law rather than “the economic and market aspects” of AI.
The “principles and obligations” for states to implement fall into three areas:10
- Fundamental principles: These are high-level principles such as “transparency and oversight”, “equality and non-discrimination” and “privacy and personal data protection”.
- Remedies and procedural safeguards: This includes, for example, an “effective possibility for persons concerned to lodge a complaint to competent authorities” in relation to violations of human rights arising from “activities within the lifecycle of” AI systems. Persons interacting with AI should also be notified that they are interacting with an AI system “as appropriate for the context”.
- Risk and impact management: This includes assessing the need for a “moratorium or ban” on certain uses of AI that are incompatible with human rights, democracy or the rule of law. Unlike the EU’s AI Act, the Convention does not specify what would constitute a prohibited use of AI.11 However, it does encourage parties to consider bans (although a ban would not apply to the three categories of AI lifecycle activities outside the Convention’s scope, described above).
How states implement the Convention
The Convention does not apply directly; legislators in each jurisdiction have to implement it in domestic law.
The Convention requires states to “adopt or maintain appropriate legislative, administrative or other measures”. The “adopt or maintain” language is meant to give states flexibility to adopt new measures or apply existing ones, according to the explanatory report. Implementation of the Convention does not necessarily require new legislation. The explanatory report indicates that improving enforcement or making remedies more accessible may be sufficient. The explanatory report also notes states may take into account “compliance mechanisms and standards” and “industry agreements to facilitate self-regulation”.12
Impact on the public and private sectors
Impact on the public sector
Governments and legislative bodies will have to consider how to implement the Convention in domestic law. Although this does not necessarily require new legislation, the Convention does demand some accountability for implementation. States must provide a report to a conference of the parties to the Convention within two years of becoming a party, and periodically thereafter.13 States must also establish or designate an independent oversight mechanism, which can be based on existing human rights oversight bodies.14
Government departments and public authorities may also need to take certain practical measures concerning their own day-to-day operations. On the remedies and procedural safeguards, for example, the explanatory report states that “AI-enabled chatbots on government websites would likely trigger the notification obligation”. Public authorities may also need to consider their criteria for public procurement of AI. Measures could include model documents or clauses, such as the model AI clauses for public procurement in the EU available on the European Commission’s website here.
Impact on the private sector
Private actors operating on behalf of public authorities will also need to consider how governments may reflect the Convention in their procurement processes. The private sector more broadly could be affected by measures states take to implement the Convention in domestic law. For private actors not acting on behalf of a public authority, this may take the form of voluntary measures rather than new legislation.15 Private sector entities could also be proactive in developing standards or “industry agreements to facilitate self-regulation”, which may help states comply with the Convention.16
Impact on the UK’s AI regulation
Unlike the EU with its AI Act, the UK does not yet have legislation focused specifically on regulating AI. The previous government favoured a “sector-based approach”, allowing each regulator to develop AI regulation for its own sector based on five general “cross-sectoral principles”.17 However, the current government, now in its second month, indicated in the King’s Speech that it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” (see our Engage update here).18 Details have yet to emerge, but the new legislation is expected to put the UK’s independent AI Safety Institute on a statutory footing, as an arm’s length body empowered to compel data from AI developers and to test and assess their models prior to deployment. In the meantime, each regulator will continue to develop its own position on AI. For example, the Information Commissioner’s Office is currently consulting on data protection compliance in the AI supply chain.19
The Convention is unlikely to divert the UK from its current direction of travel. The Convention’s “fundamental principles” overlap with the UK’s existing five “cross-sectoral principles”. The Convention’s “technology-neutral” spirit reflects the approach some UK regulators are already taking, such as the Financial Conduct Authority.20 And as the Convention does not require new legislation, there is no urgency for the government to pass its proposed legislation. Nor does it necessarily need to expand the scope beyond “the most powerful” AI models.
However, the Convention does raise some questions on the direction of UK AI policy.
- How to define AI: The new UK legislation will most likely need to define “the most powerful” models falling within its scope. If the legislation’s scope touches on human rights, democracy and the rule of law, the definition may adopt or resemble the Convention’s definition.
- Whether to prohibit certain AI uses: The Convention requires states to assess the need for this. The 2024 election manifesto of the Labour party – now in government – did promise “binding regulation” to ban “the creation of sexually explicit deepfakes”.21 However, this is likely to be introduced through separate criminal justice legislation, and it remains to be seen whether the UK will follow the EU’s lead in prohibiting a wider range of AI uses in new legislation.
- What form the oversight mechanism will take: The UK will have to “designate or establish” an oversight mechanism to comply with the Convention. It may determine that existing human rights oversight bodies, such as the Equality and Human Rights Commission, can fulfil this obligation.
Next steps
As the Convention has to be implemented in domestic law, its true impact is yet to be seen. The Council of Europe is conscious that the Convention should be able to “stand the test of time” – its effect is unlikely to be immediate in any jurisdiction.
However, the Convention is still likely to impact the public and private sectors in some form, even if through influencing legislation already in train. In the UK, that impact is likely to complement the existing approach.
Authored by Telha Arshad and Alex Nicol.
References
3 The Framework Convention on Artificial Intelligence - Artificial Intelligence (coe.int): “The Framework Convention was drafted by the 46 member states of the Council of Europe, with the participation of all observer states: Canada, Japan, Mexico, the Holy See and the United States of America, as well as the European Union, and a significant number of non-member states: Australia, Argentina, Costa Rica, Israel, Peru and Uruguay”.
9 Explanatory Report, para 36: “For the exemption of ‘matters relating to national defence’ from the scope of the Framework Convention, the Drafters decided to use language taken from Article 1, d, of the Statute of the Council of Europe (ETS No. 1) which states that ‘[m]atters relating to national defence do not fall within the scope of the Council of Europe’.”
11 AI Act, Art 5 (Prohibited AI Practices)