Why Big Tech’s push into military AI matters

Jun 25, 2025

Big Tech’s push into military AI is troubling

Silicon Valley firms are beefing up their national security teams, but scrutiny is sorely needed

By Jonathan Guyer, Program Director

This article appeared in The Financial Times on June 20, 2025


When OpenAI and Mattel announced a partnership earlier this month, there was an implicit recognition of the risks. The first toys powered by artificial intelligence would not be for children under 13.

Another partnership last week came with seemingly fewer caveats. OpenAI separately revealed that it had won its first Pentagon contract. It would pilot a $200mn program to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” according to the US Department of Defense.

That a major tech company could launch military work with so little public scrutiny epitomizes a shift. The national security application of everyday apps has in effect become a given. Armed with narratives about how they’ve supercharged Israel and Ukraine in their wars, some tech companies have framed this as the new patriotism, without having a conversation about whether it should be happening in the first place, let alone how to ensure that ethics and safety are prioritized.

Read more of Jonathan’s article in The Financial Times


Written by Jonathan Guyer

Jonathan is the Program Director of Independent America at Eurasia Group’s Institute for Global Affairs.

This post is part of Independent America, a research program led by Jonathan Guyer, which explores how US foreign policy could be better tailored to new global realities and to the preferences of American voters.
