From Flash Crash to the Battlefield: Understanding Algorithmic Risks in Modern Warfare
An XQ-58A Valkyrie unmanned aerial vehicle flies in formation with an F-22 Raptor and F-35A Lightning. 9 Dec 2020 (US Air Force).

On May 28, 2031, Commander Alex Reed checked the screen in the control room. The time was 4:37 p.m., and everything seemed normal. His autonomous drone swarm was patrolling a volatile conflict zone, and the live feeds showed the usual landscape of dusty roads and scattered settlements. The drones were programmed with advanced algorithms to identify and neutralize threats swiftly, reducing the need for human intervention in high-risk situations.

Suddenly, an alert flashed on Alex's screen. A drone had identified a potential hostile target: a convoy of vehicles approaching a secured area. The algorithm flagged the vehicles as high threat based on their speed and direction. Trusting the system, Alex authorized the engagement, and the drones swooped in.

Seconds later, explosions rocked the convoy. However, as the smoke cleared, Alex's heart sank. The live feed showed civilians scrambling from the wreckage—families and children. The convoy was not an enemy force but a group of refugees fleeing the conflict. The autonomous system had misidentified them due to a simple oversight in the threat identification algorithm.

Before Alex could react, the situation spiraled out of control. The autonomous drones, operating at blinding speeds, interpreted the chaos as continued hostility. They engaged further, compounding the error. Nearby allied forces, seeing the drone strikes, believed they were under attack by a significant threat and called for reinforcements.

In the enemy camp, commanders saw the drone activity and the sudden increase in allied forces as the beginning of an offensive. They launched a counter-attack, sending their own autonomous systems into the fray. Within minutes, the entire region was engulfed in conflict, escalating far beyond the initial incident.

A Real-World Parallel: The 2010 Flash Crash

This scenario is not just speculation; it has a close parallel in the 2010 flash crash. The crash was triggered by a massive sell order from the Waddell & Reed mutual fund, which executed a sell program of 75,000 E-mini S&P 500 futures contracts worth about $4.1 billion. The order overwhelmed the market, and high-frequency trading algorithms exacerbated the situation: the automated traders initially absorbed the sell order but then began selling aggressively to manage their own risk. The algorithms fed off each other, amplifying the downward spiral. The cascading effect temporarily wiped out nearly a trillion dollars of market value in a matter of minutes.
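The dynamic can be sketched as a toy simulation: a large initial sell order pushes the price down, and each automated trader's risk rule responds by selling more, which pushes the price down further and trips the next trader's rule. Every number below (thresholds, order sizes, price impact) is an illustrative assumption, not a reconstruction of the actual 2010 trading algorithms.

```python
# Toy model of a cascading sell-off among automated traders.
# All parameters are illustrative assumptions, not empirical values.

def simulate_cascade(initial_sell=75_000, impact_per_contract=5e-7):
    price = 1.0                                   # normalized starting price
    # Ten bots with staggered risk limits: each dumps its inventory
    # once the price has fallen past its personal threshold.
    bots = [{"threshold": 0.01 * (i + 1), "inventory": 20_000, "sold": False}
            for i in range(10)]
    pending_sell = initial_sell                   # the initial human sell order
    while pending_sell > 0:
        price = max(price - pending_sell * impact_per_contract, 0.0)
        drop = 1.0 - price
        pending_sell = 0
        for bot in bots:
            if not bot["sold"] and drop >= bot["threshold"]:
                pending_sell += bot["inventory"]  # risk rule: cut exposure now
                bot["sold"] = True
        print(f"price={price:.3f}  next automated sell wave={pending_sell:,} contracts")

simulate_cascade()
```

Stacked this way, a single human order triggers several successive waves of automated selling before the cascade exhausts itself, which is the essential shape of the flash crash.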

Understanding Algorithmic Logic

Algorithms run on programmed logic that is, in isolation, more predictable than human decision-making. Yet when those individually predictable rules interact with one another and with certain human actions, the result can be escalation patterns that no one controls. In the case of the 2010 flash crash, the initial human action, a large sell order, triggered a series of automated responses from trading algorithms. These algorithms, operating at high speed on predefined logic, interacted in ways that produced a rapid and severe market decline.

Similarly, autonomous weapons systems rely on algorithms to make split-second decisions based on predefined criteria. While this can increase efficiency and reduce human error, it also means that the systems can react in ways that humans might not anticipate. When these automated responses are triggered by human actions, the results can quickly spiral out of control. The inherent speed and complexity of these interactions can make it difficult for human operators to intervene effectively once the process has begun.
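The fictional convoy misidentification in the opening scenario is exactly this kind of predefined criterion at work. A minimal sketch, with entirely hypothetical fields and thresholds, shows why a rule built on speed and direction cannot tell an assault convoy from refugees fleeing along the same road:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected group of vehicles. Fields and thresholds are hypothetical."""
    speed_kph: float
    heading_to_perimeter: bool   # moving toward the secured area?
    vehicle_count: int

def classify_threat(track: Track) -> str:
    # Predefined criteria: speed and direction, as in the opening scenario.
    # Nothing here distinguishes an assault convoy from refugees fleeing
    # along the same road at the same speed.
    if track.heading_to_perimeter and track.speed_kph > 60 and track.vehicle_count >= 3:
        return "HIGH THREAT"
    return "low threat"

hostile_convoy = Track(speed_kph=75, heading_to_perimeter=True, vehicle_count=5)
refugee_convoy = Track(speed_kph=70, heading_to_perimeter=True, vehicle_count=6)

print(classify_threat(hostile_convoy))   # HIGH THREAT
print(classify_threat(refugee_convoy))   # HIGH THREAT -- same rule, wrong target
```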

The Underlying Drive: The Security Dilemma

To understand the increasing drive towards giving weapons greater autonomy, it's important to consider the concept of the security dilemma. This dilemma arises when states, in their efforts to enhance their security, develop new military capabilities that inadvertently make their adversaries feel less secure. In response, those adversaries build up their own capabilities, leading to a continuous cycle of escalation. Autonomous weapons systems are a prime example of this dynamic. As one state enhances its autonomous capabilities, others feel compelled to follow suit, escalating tensions and increasing the risk of conflict.

The drive towards more autonomous weapons stems from the belief that these systems can offer significant advantages in terms of speed, precision, and reduced casualties among military personnel. However, the same features that make these weapons attractive also contribute to the risks of escalation. The need to stay ahead of potential adversaries in terms of technological capabilities creates a feedback loop, where advancements by one state prompt similar developments by others, perpetuating the arms race.
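One way to see why this feedback loop is so hard to escape is a toy version of Lewis Fry Richardson's classic arms-race model: each state increases its capability in proportion to its rival's, minus a restraint term. The coefficients below are arbitrary assumptions chosen only to show the runaway dynamic, not estimates of any real arms race.

```python
# Toy Richardson-style arms race: each side's investment in autonomous
# capability grows in response to the other's. Coefficients are arbitrary
# assumptions chosen only to illustrate the feedback loop.

def arms_race(steps=10, reaction=0.6, restraint=0.2, grievance=1.0):
    x, y = 1.0, 1.0   # each state's autonomous-weapons capability (arbitrary units)
    for t in range(steps):
        dx = reaction * y - restraint * x + grievance   # respond to the rival, pay a cost
        dy = reaction * x - restraint * y + grievance
        x, y = x + dx, y + dy
        print(f"year {t + 1}: state A = {x:6.1f}, state B = {y:6.1f}")

arms_race()
```

Whenever the reaction coefficient outweighs the restraint term, the race diverges; the arms-control measures and signaling discussed below are, in effect, attempts to raise that restraint term.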

Challenges in Arms Control

Historically, arms control treaties have acted as a brake on the security dilemma, and in some cases have frozen the feedback loop altogether. Examples include the Limited Test Ban Treaty of 1963 and the nuclear arms treaties that followed, which limited the buildup of nuclear weapons by the United States and the Soviet Union and reduced the risks of escalation during the Cold War. These treaties worked because they addressed a limited number of actors with clear stakes in maintaining stability.

However, this approach is unlikely to succeed in today's multilateral world, characterized by rising tensions and the greater accessibility of AI technology. The rapid pace of technological advancement and the diverse range of actors involved complicate traditional arms control measures. Unlike the relatively stable bipolar world of the Cold War, today's international landscape includes numerous state and non-state actors with varying interests and capabilities. Additionally, the widespread availability of AI technology means that even small states and non-state actors can potentially develop and deploy autonomous weapons, making comprehensive arms control agreements more difficult to achieve.

The Three Pillars of Strategic Signaling

Given the challenges of relying on treaties to address the security dilemma posed by autonomous weapons, a new approach is needed: strategic signaling, resting on three pillars.

Strategic signaling refers to the actions and communications by which states convey intentions, capabilities, and resolve to other actors in order to influence their behavior. The first pillar is communicating vulnerability: by making clear that adversaries' autonomous systems can be hacked or otherwise disrupted, states create a deterrent effect. This signaling acts as a braking force, discouraging the unchecked advancement and deployment of autonomous weapons by highlighting the risks involved.

NATO, with its robust framework for cooperation and intelligence sharing, is particularly well-positioned to leverage strategic signaling. By playing to its strengths in these areas, NATO can effectively limit the proliferation of highly autonomous weapons systems. The alliance's ability to coordinate and share critical information can enhance collective security and provide a unified stance against the risks posed by autonomous weapons.

The second pillar is communicating the fundamental escalation risk that code can never be completely trusted, on either side. That mutual fragility is precisely what produced the 2010 flash crash, and it can happen again. When adversaries understand that autonomous systems might not interact in predictable ways and could trigger unintended escalation, it adds a layer of caution. This mutual understanding helps slow the race to hand control over to weapons systems and reinforces the necessity of maintaining human oversight.

The third pillar is moving discussions of autonomous weapons from forums like the United Nations Convention on Certain Conventional Weapons (CCW) to a new forum dedicated specifically to these issues. Doing so would signal clearly the threat these weapons pose to humanity. A dedicated forum highlights the global recognition of the dangers posed by autonomous systems and fosters more focused, effective dialogue on mitigating those risks. It also emphasizes the need for collaborative international efforts to better understand these weapons, so that their development can be managed and controlled from an informed, international perspective.

Strategy for Moving Forward

While strategic signaling can put brakes on the security dilemma, it cannot stop it completely. The objective should be to slow the pace at which weapons are handed greater control. That pause buys time to work out how to deploy these systems in ways that lower the probability of escalation relative to our current trajectory. By maintaining human oversight, rigorously testing and validating algorithms, and engaging in continuous dialogue with adversaries, we can work towards a more stable and secure world in which the risks of automated warfare are carefully managed.

However, it is important to recognize that solving the security dilemma entirely is highly unlikely. The inherent nature of international relations, where states act to maximize their security, makes it almost impossible to completely eliminate the arms race dynamic. As long as states perceive threats from one another, they will continue to develop advanced weapons systems, including autonomous ones. The goal, therefore, should not be to eliminate this dynamic but to manage it in a way that reduces the risks of unintended escalation and conflict.
