Big Frameworks Won’t Fix AI’s Global Governance Gaps; Small Steps Will Do Better
Commercial applications of artificial intelligence (AI) technologies took the world by storm and left governments scrambling to develop regulations in time. But national efforts usually target simpler faults in AI applications and are therefore insufficient to tackle the risks of more advanced models and their potential to threaten international security. This has led to urgent calls to develop global governance frameworks that provide safeguards against AI's harmful potential, fill policy gaps, and harmonize the individual standards of nation-states.
Looking for Reference: The Unprecedented Challenges of an AI Governance Framework
However, opinions vary on how to build that framework. Some prefer a single international agency responsible for AI safety, while others argue for a more polycentric approach, since AI is not a single policy problem but involves various issues across several domains. A December 2023 report by the United Nations (UN) AI Advisory Body—a 38-member group of international experts convened by the UN Secretary-General to advise on AI governance—rightly reflects this diversity of thought with caution. It adopts a "form-follows-function" approach that prioritizes the content of the framework as it continues to explore the best options for providing tangible benefits and safeguards before its upcoming report in September 2024.
Source: UN AI Advisory Body Interim Report
There is also a growing sense of urgency for an effective and implementable global governance framework for AI, especially considering the prospect of Artificial General Intelligence (AGI) in the near future. This urgency must be balanced with prudence, however, not least because it is next to impossible to reform international frameworks to keep up with changing times once they are established. The UN Security Council, the body primarily responsible for international security, and the World Trade Organization offer cautionary examples of how even major frameworks underpinning today's world order can become paralyzed by their structural dysfunctions amid growing tensions among superpowers.
Learning from history can do much to help avoid such outcomes. Recent studies have drawn lessons from existing governance frameworks in many domains, such as civil aviation, financial transactions, and nuclear research. This article seeks to contribute to these efforts by drawing lessons from the international community's decades-long efforts in cyberspace: the challenges of regulating a rapidly evolving domain, the increasing difficulty of international cooperation, and some hints for making practical advances despite these obstacles.
The Technological Hare and the Regulatory Tortoise
The most evident lesson from the international community's efforts on cybersecurity governance is that new institutional frameworks and regulations usually develop far more slowly than the technological environment they intend to govern, and the ensuing game of catch-up often leads to gaps or overreach with unintended consequences.
Before the advent of AI, the 20th century saw cyberattacks emerge as a technological threat to the stability of increasingly computer-dependent societies. The creation of ARPANET in 1969—a precursor to the internet—ushered in new notions of security in the cyber realm and introduced unprecedented challenges for protecting digital information and systems. The 1970s and 1980s saw the advent of software designed to infect computer systems, such as viruses and worms.
Domestic legislation emerged to deal with malicious activities in cyberspace, such as the Computer Fraud and Abuse Act (CFAA) in the United States. However, all-encompassing rules have often proven inadequate against violations that evolve as fast as computer technology itself. Broad and vague definitions were prone to loopholes and overreach, and the CFAA has repeatedly been accused of criminalizing innocuous activities and stifling research and innovation.
Beyond national borders, the inadequacies of comprehensive frameworks became even more pronounced. The UN began discussing cybersecurity in an international security context in 1998, with the goal of establishing a set of norms for responsible state behavior in cyberspace. However, it took more than a decade and several iterations of year-long Groups of Governmental Experts (GGE) discussions, until 2015, to establish that international law, including the principles governing wartime conduct, applies to cyberspace. During those years, major cyber operations affected power grids, nuclear facilities, and election systems worldwide, causing significant damage and largely going unpunished despite their grave consequences.
Governance frameworks form through slow political negotiations between parties and nations, while technology evolves rapidly under scientific and market dynamics. This mismatch highlights some of the inherent inadequacies of relying on comprehensive frameworks to govern new technologies and to provide effective and efficient safeguards against their potentially harmful effects.
Geopolitical Deadlock and the End of Global Cooperation
Especially in the international realm, the return of intense geopolitical tensions in recent years has significantly added to the challenges of governing technologies through comprehensive frameworks. Around the time the UN started discussing cybersecurity in 1998, strategic cooperation between Russia and the US seemed possible, with the two countries participating in international meetings on European security and strategic disarmament discussions, among other efforts to address shared international security concerns. However, the growing US-China rivalry, the 2014 conflict in Ukraine, and especially the 2022 war in Ukraine have changed this landscape drastically, to a point where even a semblance of such goodwill is now distant.
Unsurprisingly, this shift has hindered progress in UN cybersecurity discussions that had been underway for more than two decades. After the fifth GGE failed to reach an agreement in 2017, the differing preferences of the two rival powers—with the US favoring continued discussion among a smaller group of nations at the GGE to build on its long-standing work, and Russia advocating for the involvement of all 190+ UN members through the establishment of a new Open-Ended Working Group (OEWG)—led to the discussion officially splitting into two tracks. Ever since, the UN cyber norms discussion has substantially slowed, caught between factional disagreements over the content and nature of the meetings' outcomes.
The 2022 outbreak of war in Ukraine dealt a critical blow to the already declining dialogues. As South Korea's alternate representative at these meetings in 2021-22, I witnessed a lack of willingness to compromise from both sides of the geopolitical divide, where many delegates focused more on opposing their rivals' proposals than on finding common ground. Mistrust and tension between the West, Russia, and their respective like-minded groups were palpable, and it was difficult to achieve consensus even on administrative annual reports.
Even without geopolitical tensions, getting sovereign nations to agree on specific rules regarding technology is immensely difficult due to differing laws and cultural values. In cybersecurity, some countries prefer narrow regulations, while others want comprehensive rules. Privacy and personal freedom are priorities for some, while others focus on sovereign control of cyberspace.
In AI, I suspect nations will be even less willing to compromise, because many view AI as critical to their national competitiveness and the first-mover advantage in this field is immense. All these factors point toward pessimism about the prospect of an effective global governance framework for AI. If the road is full of so many obstacles, can there be a way forward?
No Need to Reinvent the Wheel to Move Forward: Lessons from Vehicle Cybersecurity
Recent developments in international vehicle cybersecurity regulation offer a glimpse of hope: incremental updates to existing regulatory frameworks in a specific domain have, without much fanfare, made significant progress in safeguarding an industry against the risks of an emerging technology.
While progress on general norms largely stalled at the all-member UN Open-Ended Working Group, a more focused and technical UN sub-group has been quietly making giant strides in automotive cybersecurity. In 2018, the UN Economic Commission for Europe (UNECE), credited with a long history of harmonizing vehicle safety standards such as seat belts, began discussing cybersecurity standards for vehicles.
In 2020, only a couple of years later, the UNECE's World Forum for Harmonization of Vehicle Regulations (WP.29) approved UN Regulation No. 155 (UNR-155), the first international regulatory framework governing the cybersecurity standards of vehicles. This landmark regulation created a single standard for the performance and audit of vehicles' cybersecurity features, providing greater harmony and confidence for automakers and consumers alike.
Another remarkable aspect of UNR-155 is its prompt expansion into a de facto global cybersecurity standard for transportation. After the regulation came into effect, the value of these standards caught the attention of major auto-manufacturing nations such as Japan and South Korea, which quickly adopted it into their vehicle regulations despite not being parties to the 56-member European regional commission. Other non-participant countries also seek to extract key aspects of the standard and apply them to their local regulations. This year, the scope of UNR-155 expanded to cover scooters and motorcycles. As the trend continues, both in the breadth of industry coverage and the number of countries following it, one can expect UNR-155 and related efforts to pay positive dividends in advancing international cybersecurity norms.
The OEWG's and the UNECE's missions do differ in scope and nature. The former aims at developing general rules, norms, and principles of state behavior in cyberspace, while the latter creates industry standards for mainly European markets. Nevertheless, the contrast between the OEWG's struggle to create comprehensive rules and the UNECE's clear albeit confined progress in advancing cybersecurity governance offers lessons for emerging-tech governance.
Shooting for the “MVP” of Global Governance on AI
The most striking takeaway is the utility of specific, incremental governance building over a comprehensive universal approach. When a new technology comes around, regulators often reach for new governance frameworks devoted entirely to it. However, while a single international framework offers many benefits, it is extremely difficult to come by while nations bicker.
Given the immense stakes and the urgent need for effective global governance in AI, we must explore alternative—or at least complementary—approaches. In the face of unprecedented challenges that demand innovative solutions, regulators could draw valuable insights from an unlikely wellspring: the agile world of tech startups, their apparent antithesis. Many successful companies begin not with a comprehensive suite of services, but with a small, focused prototype called a Minimum Viable Product (MVP). This approach, centered on quick tests and iteration, is credited with remarkable growth in many startups.
More energy and attention should be directed toward making "minimum viable progress" that renders existing regulations more "AI-ready" than toward building new "AI frameworks" from scratch. In other words, specific, AI-conscious improvements in individual regulatory domains will bring more benefits than a single comprehensive AI governance framework. For one, the former will undoubtedly yield more context-aware, agile regulations that can be effectively implemented and easily revised when needed.
In envisioning governance for AI, we should focus less on AI itself and more on the domains it affects. A domain-specific approach is essential fundamentally because AI is more an enabler of technological advancement than a stand-alone technology. As we navigate the uncharted waters of this potentially world-transforming technology, our regulatory strategy must prioritize adaptability and agility over comprehensive generality. The governance framework should, as the saying goes, "be like water": fluid and responsive, molding itself to the ever-evolving landscape of AI rather than forming the rigid container that holds it. By embracing a domain-specific mindset and an incremental approach, we can harness AI's potential more effectively, creating a governance structure that grows and evolves alongside the technology it oversees.
The views expressed in this article are those of the author and do not represent those of any previous or current employers, the editorial body of SIPR, the Freeman Spogli Institute, or Stanford University.