National Security AI and the Hurdles to International Regulation
Small-group cooperation and unilateral efforts to develop settled expectations around the use of national security AI are far more likely than an international regime analogous to nuclear arms control.
States are increasingly turning to artificial intelligence systems to enhance their national security decision-making. The real risks that states will deploy unlawful or unreliable national security AI (NSAI) make international regulation seem appealing, but approaches built on nuclear analogies are deeply flawed. Instead, as I argue in this paper, regulation of NSAI is more likely to follow the path of hostile cyber operations (HCOs).
Efforts to develop new cyber norms teach us that, absent an international crisis, reaching global agreement about what types and uses of NSAI are acceptable will be very difficult. Modest transnational work can still be done in other settings, including discussions among close allies, but much of the work of establishing norms for the use of NSAI will, at least in the near term, take place domestically. In fact, for both HCOs and NSAI, there is likely to be a reduced emphasis on securing binding agreement on legal norms; instead, small groups of like-minded states will simply focus on developing their tools in ways that comport with their own values, while using levers such as espionage, covert action, sanctions, and criminal prosecution to slow and contest their adversaries’ perceived misuse of those tools.