Multi-Stakeholder AI Governance: The International Institutions Shaping Tomorrow’s AI Regulatory Frameworks

The rapid deployment of AI-empowered technologies, businesses, and products, and their increasing ubiquity across global society, has sparked a variety of public debates over who gets to shape AI’s role in society. This post explores how multinational institutions, initiatives, and organizations are paving the way for multi-stakeholder governance of artificial intelligence. It first explains why international institutions are well equipped to adapt to the rapid technological change introduced by artificial intelligence (AI), then highlights several institutions and initiatives, drawn from a variety of sectors and regions, that employ multi-stakeholder governance models.
In the context of this document, ‘AI governance’ refers to frameworks, mechanisms, or methodologies intended to provide guidelines, oversight, or regulation for the development, deployment, and use of AI, with the aim of protecting users, systems, and society from the tangible and intangible harms of poorly managed or poorly designed AI. While there is not yet a globally standardized definition of artificial intelligence, ‘AI’ here refers to general-purpose artificial intelligence unless noted otherwise.
International Institutions: A Lesson in Flexibility 
From the perspective of the 2022 Government AI Readiness Index, produced by Oxford Insights, the “pace of change in AI capabilities has not been matched by the response of governments,” which are not typically equipped to “react quickly” to the potential or observed risks and harms presented by emergent technologies. The ubiquity of AI is far outpacing government response; in this gap, however, lies the opportunity for multi-stakeholder approaches (decision-making models involving input from civil society, government, academia, and industry, in this case at a transnational or international scale) to provide stop-gap, and eventually long-term, solutions to AI governance while the world untangles AI’s role in our communities.
Thanks to recent media scrutiny and public intrigue about AI and its (often unnoticed but present) role in our lives, ‘regulating AI’ enjoys the kind of political will that places it at the top of many government stakeholders’ agendas, each attempting to answer the ever-growing call for global leadership to ensure that human interactions with AI are safer, more inclusive, and more transparent. Yet although several groundbreaking regulatory regimes have been launched or are in development around the world, formal laws on artificial intelligence have not yet been enacted (though that will soon change with the ratification of the upcoming EU AI Act, which will likely be adopted in early 2024).
According to the same 2022 Government AI Readiness Index produced by Oxford Insights, 70% of countries that have published national AI strategies are high-income countries, leaving many of the technology priorities of the Global South unaddressed and, at times, undiscussed. While governments are taking necessary steps to address the need for context-specific, localized AI governance, international institutions are perhaps the most agile mechanism available to global society today for governing AI. Given their ability to share resources, information, and dialogue among member states, and their lower barriers to entry, international institutions may be best primed to address this ‘AI governance gap’. International knowledge exchange of regulatory and non-regulatory measures for AI may even accelerate, rather than limit, innovation for the benefit of humanity.
International institutions have demonstrated their ability to adapt to new societal challenges, like the proliferation of artificial intelligence, in the past. The International Telecommunication Union (ITU) was founded in 1865 as the International Telegraph Union (taking its current name in 1932) to facilitate intergovernmental cooperation on international telegraph, and later telephone, communications standards and quality. The ITU has served the global community by evolving existing governance infrastructure, and creating new infrastructure, for several emergent technologies, including the internet. Recognizing the internet’s widespread use, and understanding that effective regulation would require a multi-stakeholder approach, the ITU leveraged its long-standing institutional legitimacy to coordinate intergovernmental discourse, agreements, and initiatives aimed at designing the global governance of the internet. Today, internet governance has matured into an ever more complex landscape of data privacy, cybersecurity, e-commerce, and content moderation policies.
Despite these strides, there are valid historical and present-day criticisms of the way international institutions govern. Global inequality, geopolitical conflict, bias, and discrimination, as well as the organizational complexity of implementing speedy, effective decision-making among ideologically diverse stakeholders, may limit international institutions’ capacity to govern inclusively. Historically, women and marginalized communities have been left out of key decision-making forums about the very technologies that disproportionately affect them, excluding critical voices from the debate over AI’s role in society. Today’s public discourse surrounding the global governance of AI has, rightly, highlighted the need for a multi-stakeholder, multinational approach, given the many ways AI-empowered technologies feature in our day-to-day lives. This presents an excellent opportunity for international institutions to seek out globally representative stakeholders who are well equipped to lead the charge on global AI alignment.
Example Cases: Multi-Stakeholder Initiatives and Organizations
As the patchwork of national AI legislation slowly forms around the world, international efforts are well underway to confront the challenges, known and unknown, presented by the proliferation of AI. Seven example cases, each with a brief overview of its mission and scope, are detailed below.
African Observatory On Responsible Artificial Intelligence
Established in 2022, the African Observatory on Responsible Artificial Intelligence promotes the ethical development and use of AI technologies throughout the African continent. The Observatory network includes leaders from academia, civil society, industry, and policymakers who collectively amplify ‘African Voices’ in the global AI governance debate. The Observatory’s policy recommendations, oversight initiatives, and thought leadership are informed by Africa’s pre- and post-colonial context. In their own words: “Scholars in the social sciences and humanities are emphasizing that an ‘African’ view on AI and AI ethics is critical for ensuring that the development and adoption of these new technologies supports, and is not harmful to, African societies and ways of living.”
AI4People’s Ethical Framework for A Good AI Society
A “crowdsourced global treatise,” the Ethical Framework provides four ethical principles, adapted from bioethics, for organizations, developers, governments, and businesses to consider: beneficence, non-maleficence, autonomy, and justice, each mapped to opportunities and risks for society in the age of AI. (The framework also introduces a fifth, AI-specific principle: explicability.) Although its recommendations are largely written for the European context, the framework relied heavily on a multi-stakeholder model.
Bletchley Declaration
At the November 2023 UK AI Safety Summit, 28 countries convened to discuss and align on next steps for understanding, identifying, and mitigating the risks of AI. Each delegation signed the Bletchley Declaration, signaling its intention to coordinate efforts to build “respective risk-based policies across countries to ensure the safety” of frontier AI models. The joint agreement marks a first-of-its-kind decision on AI regulation among global leaders, with each signatory agreeing to share collective responsibility for the “risks, opportunities, and a forward process for international collaboration on frontier AI safety and research.” Encouragingly, the Declaration asserts that international collaboration, especially via shared scientific research, is the best path forward for global AI alignment.
G7 Guiding Principles and Code of Conduct
An annual intergovernmental forum for ‘leading countries,’ the G7 made global alignment on the regulation of AI a high priority at its 2023 summit. The G7 membership, currently comprising Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, along with the European Union, has launched the Guiding Principles and Code of Conduct, a multilateral agreement that includes guidance for both public and private entities on implementing accountability measures in the design, development, and use of AI. Suggested measures in the document include transparency reporting, data privacy best practices, considerations for the protection of intellectual property rights, and the development of a global standard for technical AI safety. The Code of Conduct grew out of the G7’s 2023 Hiroshima AI Process, which likewise underscores the importance of international collaboration in successfully regulating AI.
UNESCO ‘Recommendation on the Ethics of Artificial Intelligence’
Perhaps the most recognizable international institution, the United Nations, with its 193 member states, has been issuing AI guidance since 2017. In 2021, UNESCO published the world’s first global guidance on AI ethics, entitled the ‘Recommendation on the Ethics of Artificial Intelligence,’ which all UNESCO member states agreed to adopt. The Recommendation specifically encourages human oversight of AI technologies, suggesting key ‘policy areas’ to help policymakers implement its values, such as the promotion of fundamental rights, diversity, and inclusiveness, via localized legislation or enterprise regulation. The document is an excellent example of knowledge-sharing between Global South and high-income nations, providing a unique opportunity to harmonize global regulation of AI.
Smart Africa Alliance
The Smart Africa Alliance is a multinational, multi-disciplinary organization that seeks to identify the opportunities and challenges that widespread use of AI will bring to Africa. It aims to make evidence-based AI policy recommendations and to promote the sustainable development and use of AI-empowered technologies on the continent. One member state, South Africa, is leading the development of an updated iteration of the Artificial Intelligence Blueprint, a regional playbook on AI alignment first published in 2021.
Organization for Economic Cooperation and Development’s (OECD) AI Principles
An effort to encourage and standardize the development of human-centered, trustworthy AI, the 2019 OECD AI Principles reflect the commitment of 44 member states and partner countries to adhere to their recommendations. The Principles outline design values that help nation states, enterprises, and developers create responsible AI: transparency, fairness, explainability, sustainable development, accountability, and robustness, security, and safety. Designed to adapt to each country’s unique needs and context, the OECD AI Principles have directly influenced AI policy guidelines in several countries, including Japan’s Governance Guidelines for Implementation of AI Principles.
Stepping into roles traditionally filled by governments, international institutions are adopting multi-stakeholder approaches to establish agile governance frameworks that are inclusive, collaborative, and quick to respond to the challenges posed by AI. While none of the example initiatives above are legally binding, soft law, one of the main levers international institutions use to influence and inform global policymaking, remains a powerful mechanism for addressing AI concerns in the short term. Add to this international institutions’ unique ability to convene global forums where members co-create context-specific policy solutions, especially as AI proliferation continues in the Global South, and it seems plausible that the work being done today will set the stage for a new international institution to develop a global framework providing regulatory guidance and oversight of AI-empowered technologies. Until then, today’s initiatives, many concentrated in the West, are directly shaping the motivations, voices, and priorities that will continue to drive global alignment efforts, a dynamic that this blog series will continue to explore in future posts.

Amari Cowan is a 2023–2024 Fellow at the Portulans Institute and a Policy Performance Manager at TikTok (ByteDance). Her research focuses on the international governance of AI and global alignment. All views in this document are her own and are not associated with her professional role at TikTok.
