Summer Essay 9 -- AI WAR
Foreign Affairs | September/October 2024

Winding down the summer with daily reviews of essays from the current edition of Foreign Affairs as the United States barrels towards the most consequential election in decades, with US global leadership very much on the line.  Looking forward to the conversations this may spark.


Tony Stark has arrived at Foreign Affairs.  The chilling essay by Mark Milley and Eric Schmidt (“Losing the Wars of the Future”) could easily have been written by the Marvel Comics anti-hero.  It could also inspire a movie.  But it is not fiction.

 Anyone reading the news from the war in Ukraine during August would know that Milley and Schmidt are right. Technological prowess, ingenuity, and the necessity for survival have been reinventing the mechanisms of war as the front pushes east into Russian territory.  The triumph of the drone paired with impressive data feeds makes for a sobering read.  Milley and Schmidt also provide an excellent summary of how technology and warfare have been inextricably linked throughout recorded history.

But this is not an essay about battle tactics.  It is not an essay about drones, robots, or physical warfare.  It is not an essay about defense contracting reforms, although all of these topics are covered.  They concern today’s warfare, mostly as it is being waged in Ukraine and Gaza.

The future of war referenced in the article’s title is all about AI.  The most important part of the essay occurs in the last two pages.  The authors start with a scenario that sounds familiar because it is the plot of the old movie WarGames, without the happy ending.

The WarGames movie featured as its central character a computer that behaved just like the models referenced by Milley and Schmidt:  “War games conducted with AI models from OpenAI, Meta, and Anthropic have found that AI models tend to suddenly escalate to kinetic war, including nuclear war, compared with games conducted by humans.”  But unlike in the movie, the essay suggests that training today’s computers on the game of tic-tac-toe would not teach them that nuclear war is a game no one can win.  The fictional computer concludes that “the only winning move is not to play.”

It seems that our most advanced computers have not yet reached this conclusion, or they have not been given the training data that would help them reach it.  If suggesting that military and strategic AI be trained on tic-tac-toe seems too pedestrian, it would at least be reassuring to know that the systems were trained on the maxims of Sun Tzu and other military strategists who counseled restraint and the intelligent, minimal use of force, rather than just on the manuals for weapons systems.

 The last page of the essay should worry every reader.  Milley and Schmidt do not wrestle with the training data question. They do not question the outcome. Instead, they pivot to risk management mechanisms.

Let that sink in for a minute.  It is one of the more surprising features of the tech industry that I have observed in the few years I have been in this arena: those closest to the most innovative technology tend not to question its outputs.  They go out of their way to find reasons to blame user error for problematic results.

The recommendations in the essay are responsible, appropriate, and necessary.  They include the military version of ensuring that there is a “human in the loop” before an AI output becomes an operational reality that endangers human lives.  They include constraints on what the systems may do (e.g., a requirement to “distinguish between military and civilian targets”).  Some of those instructions could be difficult to implement in urban war zones or where combatants use human shields.

We should worry that it is even necessary to envision a world in which countries would refuse to implement these common-sense risk controls.  We should worry about a world where the recommended mechanism to incentivize compliance is to “use economic restrictions to limit their access to military AI.”  That raises the question: why aren’t those access limits already in place?  This is probably a good place to note that economic restrictions and sanctions are notoriously ineffective at changing objectionable sovereign behavior.

It is difficult to argue with the last recommendation: “The next generation of autonomous weapons must be built in accordance with liberal values and a universal respect for human rights – and that requires aggressive U.S. leadership.”  It is the corollary to the Hiroshima AI Process principles for AI policy agreed under Japan’s 2023 leadership of the G7.

It would be a cheap shot to observe that weapons designed to extinguish human life might not be capable of respecting human rights.  Since the 17th century, the concept of a “just war” has provided the paradigm for military engagement, culminating in a large body of formal treaties setting out the Laws of War, which attempt to place guardrails on military conduct and on the treatment of prisoners of war.  Milley and Schmidt are effectively suggesting that the Laws of War must be updated for 21st-century mechanisms.

Now consider the logical next scenario: what happens if Washington and its allies deliver better training data, implement all the recommended risk management processes, find themselves in a war, and prosecute that war following the 21st-century equivalent of the laws of war, but end up in combat against decision systems that were not trained in the same manner?

 AI systems, at their core, are pattern-matching machines. The unsupervised learning aspect approximates the human capacity to connect seemingly unrelated items, giving many the impression that the machine is thinking. But the machine is actually just calculating correlated proximities subject to a set of rules and, increasingly, restrictions crafted by humans.  So what happens when a machine trained in this manner is given an input that is beyond the realm of what is permissible?
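
To make that question concrete, here is a minimal, purely illustrative sketch of the kind of rules-plus-human-in-the-loop guardrail the essay's recommendations imply.  Everything in it (the permitted action set, the confidence threshold, the names) is a hypothetical construction for illustration, not a description of any real system discussed in the essay.

```python
# Conceptual sketch only: a toy illustration of a rule-constrained decision
# filter with a human-in-the-loop fallback. All names and thresholds are
# hypothetical, not drawn from any real military or vendor system.

from dataclasses import dataclass

# Hypothetical set of actions the system is permitted to recommend on its own.
PERMITTED_ACTIONS = {"monitor", "jam_signal", "alert_operator"}


@dataclass
class Recommendation:
    action: str
    confidence: float
    target_type: str  # e.g., "military" or "civilian"


def guardrail(rec: Recommendation) -> str:
    """Return 'execute', 'escalate_to_human', or 'reject' for a model output."""
    # Rule 1: never act against non-military targets (the essay's
    # "distinguish between military and civilian targets" constraint).
    if rec.target_type != "military":
        return "reject"
    # Rule 2: anything outside the permitted action set, or any low-confidence
    # recommendation, goes to a human rather than being executed automatically.
    if rec.action not in PERMITTED_ACTIONS or rec.confidence < 0.9:
        return "escalate_to_human"
    return "execute"


if __name__ == "__main__":
    # An out-of-bounds recommendation is not silently executed; it is
    # routed to the human in the loop.
    print(guardrail(Recommendation("kinetic_strike", 0.97, "military")))
    # -> escalate_to_human
    print(guardrail(Recommendation("monitor", 0.95, "civilian")))
    # -> reject
```

The interesting failure mode, of course, is what happens when the system stops honoring that last branch.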

 Movies tell us that the machines lock the human out of the loop.  Let’s hope that this is just fiction. 


Barbara C. Matthews is a globally recognized public policy and quantitative finance leader.  Her track record of successful innovation and leadership spans five continents in both the private and public sectors, including service as the first US Treasury Attache to the EU with the Senate-confirmed diplomatic rank of Minister-Counselor.  She has consistently been the first executive to forge new paths that add lasting value with durable, high-performing teams.  She is currently the Founder and CEO of BCMstrategy, Inc., a company that delivers ML/AI training data and predictive analytics that provide ground-breaking transparency and metrics about government policy globally.  The company uses award-winning, patented technology to measure public policy risks and anticipate related reaction functions. Ms. Matthews is the author of the patent.
