AI patch-up: Bridging the gap in global AI regulation
Council of Europe Framework Convention on AI is the start of a coordinated approach
The Convention represents the first legally binding international agreement on AI, signalling the start of a coordinated effort to govern the ethical development and deployment of AI systems. Nonetheless, limitations remain, and the Convention depends heavily on the commitment of signatories.
In an important move toward international AI governance, the United States, European Union, United Kingdom, and other countries have signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”). This treaty represents the first legally binding international agreement on AI, signalling the start of a coordinated effort to govern the ethical development and deployment of AI systems globally.
A New Era of International AI Standards
The Convention establishes a blueprint for consistent AI regulatory standards across regions, including Europe and North America. As AI technology races forward, lawmakers have at times struggled to set clear ethical boundaries that also allow for the innovation needed to harness AI’s benefits. By committing to the Convention’s shared standards, signatory countries have given companies some regulatory predictability, at least with respect to the key principles that will underpin regulation. These principles include:
- Human dignity and individual autonomy
- Equality and non-discrimination
- Respect for privacy and personal data protection
- Transparency and oversight
- Accountability and responsibility
- Reliability
- Safe innovation
These commitments, though high-level, form a foundation for signatory countries to implement the Convention’s principles domestically and allow for regionally tailored applications that fit distinct legal and cultural contexts.
Signatories also agree to take steps to establish, as appropriate, controlled environments (or “sandboxes”) for developing and testing AI, and to encourage and promote adequate digital literacy and digital skills for all segments of the population.
The Convention’s Limitations
Despite its ambitions, the Convention has faced criticism for its lack of enforceability. The European Data Protection Supervisor (among others) has called this a “missed opportunity” to lay down a strong and effective AI legal framework with clear and strong safeguards for persons affected by AI systems, calling the Convention “largely declarative [in] nature” with a very high level of generality that could lead to divergent application.
Without direct enforcement tools or penalties for non-compliance, the Convention depends heavily on each country’s commitment to implementation. This reliance on individual countries has led some to see the Convention as largely symbolic rather than a strictly regulatory tool, raising concerns about consistent application and effectiveness.
Navigating a Patchwork of Global Regulations
The Convention joins recent international efforts to address the complexities of AI governance, including the 2023 Bletchley Declaration and the G7’s commitment to fair AI standards. The Convention’s risk-based approach enables countries to focus resources on higher-risk AI applications while setting universal principles. However, its influence may be limited by significant regulatory differences worldwide. For example, the EU’s AI Act will impose stringent requirements on high-risk AI systems, while the UK maintains a more flexible, innovation-oriented approach. Thus, while the Convention provides broad alignment on ethical principles, it doesn’t eliminate the complexity of navigating a patchwork of laws in practice.
Practical Oversight: Reporting and Innovation Controls
Signatories are required to report periodically on their adherence to the Convention’s principles, a transparency measure that could help address inconsistencies in its application. Additionally, establishing “regulatory sandboxes” enables safe experimentation with AI technology under controlled conditions. However, the effectiveness of these provisions relies on each country’s commitment and resources, as political agendas and regulatory structures vary significantly.
As AI technology evolves, so too must regulatory frameworks. The Convention is a meaningful step in promoting responsible AI governance on a global scale. For IT leaders and industry decision-makers, it brings a level of predictability by establishing clear ethical principles. However, without enforceable measures and with heavy reliance on national implementation, the Convention currently functions more as an aspirational guide than a strict regulatory instrument.
A Milestone with Room for Growth
The Convention represents meaningful progress toward unified AI governance, but it remains foundational. Much will depend on how effectively countries turn these principles into actionable regulations. For companies operating globally, understanding the nuances of this Convention and adapting to local implementations will be essential as the regulatory landscape continues to develop.