Artificial intelligence is no longer creeping quietly into military systems. According to recent analyses, major technology firms are increasingly normalizing the use of AI in defense and national security, making explicit what was previously framed in cautious or purely defensive terms: advanced AI systems are becoming accepted tools of military power.
One of the clearest signals comes from Palantir, the data‑analytics company with deep ties to government and defense agencies. The company’s CEO, Alex Karp, recently published a 22-point manifesto arguing that AI should be embraced as a core instrument of state capacity, including in military contexts. The document frames the use of AI in warfare not as a regrettable necessity, but as a legitimate and even essential component of modern national security strategy, declaring that the atomic age is ending and a new era of deterrence built on AI is beginning. Analysts say this marks a rhetorical shift, moving the debate away from whether AI should be used militarily toward how it should be deployed and governed.
At the same time, companies that once positioned themselves as cautious counterweights to militarization are adjusting their stance, though not always smoothly. Anthropic, an AI firm known for emphasizing safety and restraint, is currently embroiled in an unprecedented legal and political conflict with the Department of Defense. After the Pentagon demanded unrestricted access to AI tools for "all lawful uses," Anthropic refused to lift restrictions preventing its large language models from being used for mass domestic surveillance or fully autonomous weapons. In response, the DoD designated Anthropic a "supply-chain risk," leading federal agencies to cancel contracts and prompting Anthropic to sue the government.
Separately, Google is reported to be in active discussions with the Pentagon about frameworks that would allow its Gemini AI systems and custom Tensor Processing Units (TPUs) to operate in classified or secure environments. Reports indicate Google is proposing specific contract language to prevent the technology from being used for mass surveillance or autonomous weapons without human oversight, mirroring terms previously agreed upon by OpenAI. Together, these developments suggest a complex convergence—and sometimes a direct collision—between frontier AI research and defense infrastructure.
The normalization of military AI comes with significant ethical and governance implications. As AI systems move from logistical support and intelligence analysis into roles that may influence targeting, escalation, or strategic decision‑making, questions of accountability grow sharper. Unlike traditional weapons, AI systems can adapt, learn, and operate at speeds that challenge existing human oversight mechanisms.
Critics warn that this shift risks turning AI competition into an arms race driven by speed rather than safety. Supporters counter that refusing to engage would leave democratic states at a disadvantage as rival powers move forward regardless. What is increasingly clear is that the era of quiet experimentation has ended. Military AI is no longer hidden behind ambiguous language or limited pilots. It is being debated—and increasingly defended—in public, signaling a profound change in how societies understand the role of artificial intelligence in conflict and power.