Lethal Autonomous Weapons: The Next Frontier in International Security and Arms Control

The rapid advancement of artificial intelligence (AI) has revolutionized military technology, leading to the development of Lethal Autonomous Weapon Systems (LAWS), or “killer robots,” which can independently identify and engage targets without human intervention. This article explores how discussions about LAWS parallel the nuclear arms control negotiations of the Cold War, highlighting the urgent need for a global arms control regime for LAWS. By analyzing the current state of LAWS development and deployment, as well as ongoing international efforts toward regulation, it aims to provide a foundation for understanding why robust safeguards are needed to ensure the responsible use of AI in military operations.
Background
The term “Lethal Autonomous Weapon Systems” (LAWS) refers to weapons, often called “killer robots,” that can identify and kill human targets autonomously using AI, without human intervention. According to the United States (U.S.) Department of Defense, once activated, LAWS can select and engage military targets without further input from human operators. The development and deployment of AI-based LAWS represent a significant shift in military technology, with profound implications for the future of warfare, marking a major evolution from traditional, human-controlled weapons. The integration of AI into military systems is not just about enhancing the effectiveness of combat operations; it also transforms the broader military decision-making cycle known as the OODA loop (Observe, Orient, Decide, Act), in which AI can be applied across all military decision-making processes, including intelligence, surveillance, and reconnaissance (ISR). This signifies a shift from traditional computing support (such as ballistic calculations) to autonomous AI decision-making and operation throughout all phases of military action. LAWS could potentially operate across various platforms, including unmanned aerial vehicles, naval vessels, and ground robots, providing a multidimensional capability that extends from conventional battlefields to the cyber and space domains.
Loitering munition (suicide drone) on display at Berlin Airshow 2024. Source: Wikimedia Commons
As the use of LAWS in real-world scenarios becomes more prevalent, beginning with their first combat deployment in the Ukraine War, international interest in arms control for LAWS has begun to grow. The Ukraine War, described as a “Silicon Valley” of offensive AI, has become a testing ground for future warfare technologies and has been dubbed a “drone war” because of the many drones being introduced and actively used, significantly transforming traditional combat methods. The international community has begun to fear the unchecked use and proliferation of these weapons, which could ultimately disrupt the international order, and has accordingly begun debating the legal, institutional, and ethical norms of this form of warfare. Some experts advocate establishing a “next Oppenheimer” paradigm, drawing on historical lessons from nuclear weapons development.
Ongoing Developments and Global Discussion on LAWS
Major technologically advanced nations like the U.S., China, and Russia have already accelerated the development of military AI technologies in preparation for future warfare and incorporated them into their defense strategies. For example, in 2018, the U.S. published the “Department of Defense AI Strategy,” which emphasized the role of AI in the military domain. Similarly, China and Russia have declared the adoption of AI technologies across all sectors of defense and national security.
(Left) DARPA ACE project AI-piloted F-16. (Right) Russia’s Marker unmanned ground robot. Source: Wikimedia Commons
- United States: The Defense Advanced Research Projects Agency (DARPA) has initiated the Air Combat Evolution (ACE) research project. Following the completion and operational deployment of stealth fighters like the F-22 and F-35, the project focuses on converting existing F-16 fighters, now surplus to peacetime needs, into AI-piloted aircraft. Defense agencies are also strengthening their ties with tech leaders to use AI tools to rapidly process complex data.
- China: China has completed development of the “Liaowangzhe II,” a fast unmanned patrol boat equipped with AI-driven automatic navigation and optimal route-finding capabilities, making China the second country in the world to do so. The country is also developing “swarm technology” for unmanned missile craft, a tactic known as the “shark swarm,” intended to deter U.S. aircraft carrier groups from intervening in potential conflicts in the Taiwan Strait. Additionally, in cooperation with Russia, China is developing AI-powered autonomous weaponry, showcasing a gun-mounted robot dog manufactured by the Chinese company Unitree Robotics.
- Russia: Russia has developed “Marker,” a tank-like unmanned ground robot equipped with autonomous driving capabilities and an AI system that analyzes images of enemy vehicles. The system identifies Western tanks and ground forces, lets the AI determine attack priorities, and can even decide when to carry out an attack.
Despite the escalating tension of an AI arms race among major powers and other key nations, discussions on the need for arms control have been insufficient. Efforts have focused mainly on creating guidelines or norms, and there is little likelihood of new treaties or agreements on AI arms control being established in the short term. The debate on the legality of using LAWS first began at the 2013 meeting of the Convention on Certain Conventional Weapons (CCW). At the 2016 CCW meeting, an open-ended Group of Governmental Experts (GGE) on LAWS was established. Discussions at the GGE have primarily focused on the ethics and accountability of LAWS, exploring their compatibility with existing international law, particularly International Humanitarian Law (IHL). The GGE’s July 26, 2024 document on LAWS, grounded in IHL, urges states to take the following actions:
- Ensure appropriate training and instructions for human operators of LAWS
- Ensure that LAWS operate with appropriate control and human judgement across the entire life cycle of the weapon systems
- Limit the types of targets that the system can engage
- Limit the number of engagements that LAWS can undertake
However, as the document itself states, none of this has reached consensus, and while the framework provides a platform for discussing the issue, it is not rigorous enough to keep pace with rapidly changing trends. In fact, breakthroughs like GPT in language processing and RT-2 in vision-based robotic control have made it much easier to rapidly handle large amounts of complex data. This is blurring the line between active human decision-making and passively following data-driven outputs that appear correct, keeping humans in the loop in name only.
Experts warn against the spread of military AI and call for initiating discussions on AI arms reduction. In their article “The Path to AI Arms Control,” Henry Kissinger and Graham Allison highlight the “Oppenheimer moment” of the AI era, expressing concern about the potential misuse of the technology. For instance, military AI could make it easier to target non-combatant groups using advanced image recognition, or misinterpretations of data could result in the bombing of civilian facilities. Kissinger and Allison particularly stress the need to control the use of AI in military applications. History, however, shows how challenging arms control is to achieve.
We might see history repeating itself in the governance of LAWS. The recognition of LAWS as a threat comparable to nuclear weapons marks the beginning of their consideration for arms control, but the road to negotiation is tough. The U.S. and China perceive LAWS as strategic assets, crucial for achieving strategic superiority. At their 2023 bilateral meeting in San Francisco, the two countries initially agreed to hold their first AI arms control talks in 2024, but China later announced the suspension of the talks. The 2024 Vienna Arms Control Conference marked the first discussions on the need to regulate LAWS, focusing on human control and accountability as well as the ethics of algorithms. Following the Vienna conference, global media outlets like Bloomberg ran articles under keywords such as “AI arms reduction” and “Oppenheimer moment,” framing military AI as a threat to humanity on par with the emergence of nuclear weapons and underscoring the need for global arms control.
Future of Arms Control for Lethal Autonomous Weapon Systems
Since military AI is still in the early stages of development, most international discussions have focused on maintaining human control, in keeping with the taboo against humans being killed by autonomous systems. Considering the rapid growth of the LAWS market amid geopolitical competition for advantage, AI companies’ growing involvement in the defense sector, and the real-world application of increasingly lethal technology in the Ukraine War, the next “Oppenheimer moment” could arrive sooner than anticipated. Greater autonomy could also amplify the possibility of remote war, with humans becoming further alienated from the use of force and proxy wars fought with LAWS becoming a more cost-efficient and tempting choice. This may lower the threshold for war, making military action more politically acceptable domestically and conflict easier to enter.
First day cover commemorating the UN Non-Proliferation Treaty (NPT). Source: Wikimedia Commons
An instructive empirical example is, again, nuclear arms control. After World War II, the nuclear-armed nations (the P5) secured permanent seats on the UN Security Council and established the Nuclear Non-Proliferation Treaty (NPT) regime, which prohibited other countries from acquiring nuclear weapons, thereby maintaining their exclusive status. As a result, nations that possessed nuclear weapons (the “haves”) became participants in arms control negotiations, while those without nuclear capabilities at the time (the “have-nots”) mostly remained subjects of the non-proliferation regime.
Given these historical precedents, and recognizing the strategic disadvantages of lacking advanced military technologies, nations will likely intensify their research and development of LAWS, further accelerating the arms race and the potential misuse of these technologies, especially by exploiting regulatory gray areas where no clear international standards exist. This motivation is driven not only by national security and sovereignty concerns but also by a desire to shape the frameworks and operational norms, such as deciding the permissible scope of LAWS and establishing data standards for their use.
LAWS regulation may unfold against the backdrop of strategic alliances around AI. As technological advancements reshape global dynamics, nations will likely prioritize staying ahead in an era where AI could redefine state power by driving economic growth, enhancing military efficiency, and reshaping global influence. In addition, the development of LAWS involves complex challenges that require robust algorithms, rigorous military standards, and the comprehensive collection of operational data. This may lead to the formation of strategic alliances focused on technology and AI, which could pool resources, share critical technological know-how, and negotiate as a bloc in international forums. These alliances might also work towards establishing common standards for AI implementation, including technical standards for interoperability, ethical guidelines, and regulatory frameworks to manage AI’s societal impact.
The formation of such blocs will reinforce the broader trend towards multipolarity, with regional leaders emerging as rallying points for like-minded countries that unite to align their standards and collectively advance technological development. At the same time, this shift is likely to exacerbate the existing complexities of establishing governance on LAWS, as the struggle between blocs to make their technology the standard, or “key technology,” will further impede the development of a coherent governance structure and intensify the arms race for physical dominance.
Conclusion
In summary, the rapid development and deployment of Lethal Autonomous Weapon Systems (LAWS) represent a significant shift in military capabilities and strategies, prompting global discussions on arms control similar to historical nuclear weapons negotiations. As nations like the U.S., China, and Russia integrate AI into their military frameworks, the need for comprehensive international legal and ethical guidelines becomes urgent. The use of LAWS in conflicts, particularly noted in the Ukraine War, highlights the potential for these systems to alter traditional warfare and international security dynamics. Effective management and regulation of LAWS are critical to prevent a new arms race and ensure global stability, making the development of international standards and agreements on the use of military AI essential for future security architectures.
The views expressed in this article are those of the author and do not represent those of any previous or current employers, the editorial body of SIPR, the Freeman Spogli Institute, or Stanford University.