Summer Essay 9 -- AI WAR
Winding down the summer with daily reviews of essays from the current edition of Foreign Affairs as the United States barrels towards the most consequential election in decades, with US global leadership very much on the line. Looking forward to the conversations this may spark.
Tony Stark has arrived at Foreign Affairs. The chilling essay by Mark Milley and Eric Schmidt (“Losing the Wars of the Future”) could easily have been written by the Marvel Comics anti-hero. It could also easily inspire a movie. But it is not fiction.
Anyone reading the news from the war in Ukraine during August would know that Milley and Schmidt are right. Technological prowess, ingenuity, and the necessity for survival have been reinventing the mechanisms of war as the front pushes east into Russian territory. The triumph of the drone paired with impressive data feeds makes for a sobering read. Milley and Schmidt also provide an excellent summary of how technology and warfare have been inextricably linked throughout recorded history.
But this is not an essay about battle tactics. It is not an essay about drones, robots, and physical warfare. It is not an essay about defense contracting reform. Although all of these topics are covered, they concern today’s warfare, mostly as it is being waged in Ukraine and Gaza.
The future of war referenced in the article’s title is all about AI. The most important part of the essay occurs in the last two pages. The authors start with a scenario that sounds familiar, because it is the plot of the old movie WarGames… without the happy ending.
The WarGames movie featured a computer as the central character that behaved just like the models referenced by Milley and Schmidt: “War games conducted with AI models from OpenAI, Meta, and Anthropic have found that AI models tend to suddenly escalate to kinetic war, including nuclear war, compared with games conducted by humans.” But unlike the movie, the essay suggests that training the computers on the game of tic-tac-toe will not teach them that war, like tic-tac-toe, is a game no one can win. The fictional computer concludes that “the only winning move is not to play.”
It seems that our most advanced computers have not yet reached this conclusion. Or they have not been provided the correct training data to help them reach it. If suggesting that military and strategic AI be trained on tic-tac-toe seems too pedestrian, it would at least be reassuring to know that the systems were trained on maxims from Sun Tzu and other military strategists who counseled both restraint and the intelligent, minimal use of force, rather than just on the manuals for weapons systems.
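For readers who find the tic-tac-toe reference quaint: the game carries the WarGames lesson because perfect play always ends in a draw, something a few lines of exhaustive search can verify. The sketch below is purely illustrative and mine, not anything proposed in the essay:

```python
# Minimal minimax over tic-tac-toe: the game's value under perfect
# play is 0 (a draw) -- the "no one wins" lesson from WarGames.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X can force a win, -1 if O can, 0 if best play draws."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    outcomes = [value(board[:i] + player + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell == " "]
    return max(outcomes) if player == "X" else min(outcomes)

print(value(" " * 9, "X"))  # prints 0: neither side can force a win
```

Running it prints 0: neither player can force a win, which is exactly the futility the movie’s computer generalizes from the game to nuclear war.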
The last page of the essay should worry every reader. Milley and Schmidt do not wrestle with the training data question. They do not question the outcome. Instead, they pivot to risk management mechanisms.
Let that sink in for a minute. It is one of the more surprising features of the tech industry that I have seen in the few years I have been in this arena: those closest to the most innovative tech tend not to question the outputs. Instead, they go out of their way to find reasons to blame user error for problematic outputs.
The recommendations in the essay are responsible, appropriate, and necessary. They include the military version of ensuring that there is a “human in the loop” before an AI output becomes an operational reality that endangers human lives. They include restrictions that rule out inappropriate choices (e.g., requiring systems to “distinguish between military and civilian targets”). Some of those instructions could be difficult to implement in urban war zones or where combatants use human shields.
We should worry about a world in which it is necessary to envision countries refusing to implement these common-sense risk controls. We should worry about a world where the recommended mechanism to incentivize compliance is to “use economic restrictions to limit their access to military AI.” That raises the question: why aren’t those access limits already in place? This is probably a good place to note that economic restrictions and sanctions are notoriously ineffective at changing objectionable sovereign behavior.
It is difficult to argue with the last recommendation: “The next generation of autonomous weapons must be built in accordance with liberal values and a universal respect for human rights – and that requires aggressive U.S. leadership.” It is the corollary to the Hiroshima AI Process principles for AI policy agreed under Japan’s 2023 leadership of the G7.
It would be a cheap shot to observe that weapons designed to extinguish human life might not be capable of respecting human rights. Since the 17th century, the concept of a “just war” has provided the paradigm for military engagement, culminating in a large number of formal treaties that set out the Laws of War, which attempt to place guardrails on military behavior as well as on the treatment of prisoners of war. Milley and Schmidt are effectively suggesting that the Laws of War must be updated for 21st-century mechanisms.
Now consider the logical next scenario: what happens if Washington and its allies deliver better training data, implement all the recommended risk management processes, find themselves in a war, and prosecute that war following the 21st-century equivalent of the laws of war… but end up in combat against decision systems that were not trained in the same manner?
AI systems, at their core, are pattern-matching machines. The unsupervised learning aspect approximates the human capacity to connect seemingly unrelated items, giving many the impression that the machine is thinking. But the machine is actually just calculating correlated proximities subject to a set of rules and, increasingly, restrictions crafted by humans. So what happens when a machine trained in this manner is given an input that is beyond the realm of what is permissible?
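As a loose illustration of that framing (my sketch, with entirely hypothetical patterns, actions, and numbers, not any real system): a machine that matches each input to its nearest learned pattern, with a human-written restriction layered on top, must still do something when an input falls outside everything it has seen.

```python
# Illustrative sketch only: pattern matching as nearest-neighbor
# lookup, with a human-crafted restriction bolted on top. All the
# patterns, actions, and numbers here are invented for illustration.
import math

def cosine(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical "learned" patterns mapped to recommended actions.
PATTERNS = {
    (1.0, 0.0, 0.2): "monitor",
    (0.1, 1.0, 0.0): "intercept",
    (0.0, 0.2, 1.0): "strike_civilian_area",
}

FORBIDDEN = {"strike_civilian_area"}  # the human-written restriction

def decide(observation, min_similarity=0.8):
    # The "thinking": match the input to the closest known pattern.
    best_action, best_score = None, -1.0
    for pattern, action in PATTERNS.items():
        score = cosine(observation, pattern)
        if score > best_score:
            best_action, best_score = action, score
    # The rules: restrictions, plus a fallback for unfamiliar inputs.
    if best_action in FORBIDDEN:
        return "blocked"
    if best_score < min_similarity:
        return "defer_to_human"  # one possible fallback; nothing guarantees it
    return best_action

print(decide((0.9, 0.1, 0.3)))  # near a known pattern     -> monitor
print(decide((0.0, 0.1, 0.9)))  # matches a forbidden plan -> blocked
print(decide((0.5, 0.5, 0.5)))  # matches nothing well     -> defer_to_human
```

The interesting branch is the last one: whether the machine defers to a human, refuses, or improvises when confronted with the unfamiliar is decided by whoever wrote the fallback rule, not by the machine.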
Movies tell us that the machines lock the human out of the loop. Let’s hope that this is just fiction.
Barbara C. Matthews is a globally recognized public policy and quantitative finance leader. Her track record of successful innovation and leadership spans five continents in both the private and public sectors, including service as the first US Treasury Attaché to the EU with the Senate-confirmed diplomatic rank of Minister-Counselor. She has consistently been the first executive to forge new paths that add lasting value with durable, high-performing teams. She is currently the Founder and CEO of BCMstrategy, Inc., a company that delivers ML/AI training data and predictive analytics that provide ground-breaking transparency and metrics about government policy globally. The company uses award-winning, patented technology to measure public policy risks and anticipate related reaction functions. Ms. Matthews is the author of the patent.