The AI hypothesis behind the Iran raids: how US Central Command may have used Anthropic’s Claude in the attack


US Central Command may have turned to artificial intelligence tools to support the planning and execution of the recent war in Iran, which began with the Israel-US attacks on February 28. According to reconstructions by the Wall Street Journal and Axios, the US military reportedly used Claude, Anthropic’s language model, during the joint military operation with Israel. AI is said to have been used for intelligence work, target selection, and the construction of operational simulations.

All this despite an order from Donald Trump requiring (at least until a few hours before the attack) the termination of all collaboration with the company. For the record, the US president had even declared on his social network Truth that Anthropic is «a radical left-wing AI company run by people who have no idea what the real world is». The friction with Anthropic arose after the raid carried out last January against the president of Venezuela, Nicolás Maduro. On that occasion, news emerged that the Army had used Claude in carrying out the mission, and Anthropic expressed its disappointment, noting that the service’s terms of use do not allow Claude to be used for violent or surveillance purposes.

How US Central Command reportedly used Anthropic’s AI against Iran

According to the concurring reconstructions by the Wall Street Journal and Axios, during the joint Israeli-US bombing US Central Command reportedly continued to use Claude to develop intelligence assessments, i.e. structured analyses of information from different sources, and to conduct battlefield simulations, digital tools that allow the outcomes of different tactical options to be tested virtually. AI was reportedly also used for target identification.

This does not mean that an algorithm “decides” what or whom to hit, but that it may have helped human analysts identify correlations, priorities, and the possible consequences of attacks, speeding up processes and decisions that would otherwise have taken much longer.

On an operational level, the attack involved Tomahawk cruise missiles, stealth fighters, and one-way attack drones, i.e. unmanned aircraft designed to strike a single target without returning to base. It is important to clarify that AI does not directly control these weapons: its contribution concerns analysis and planning, that is, the strategic side of the US strikes on Iran, not their execution.

Trump’s break with Anthropic and the arrival of OpenAI

The breaking point between Anthropic and the Pentagon came a few hours before the start of operations against Iran, when President Donald Trump ordered the immediate termination of all relations with Anthropic, publicly accusing the company of an ideological orientation incompatible with the needs of national defense. The presidential decision included a six-month period to phase out the company’s products from government systems. However, as several technical sources have pointed out, when an AI model is already integrated into classified networks (i.e. IT infrastructures that handle secret data), its removal is not a simple administrative act: it requires new security certifications, staff training, parallel testing, and costly alternative integrations. Midhun Krishna M, co-founder and CEO of TknOps.io, a cost-tracking service for LLMs, explained:

When AI tools are already integrated into simulation and real-time intelligence systems, decisions made at the top do not immediately translate into concrete changes. (…) When a model is incorporated into classified simulation and intelligence systems, there are sunk costs of integration, redevelopment, new security certifications and parallel testing, so a six-month phase-out may seem decisive, but the real financial and operational burden is much deeper.

The relationship between Anthropic and the United States, as noted above, has its roots in an earlier episode: the use of AI in an operation against the president of Venezuela, Nicolás Maduro, which prompted Anthropic to stress that the use of Claude in that case was a clear violation of the service’s terms of use. These explicitly prohibit violent applications, the development of autonomous weapons, and mass surveillance, i.e. the systematic, large-scale monitoring of civilian populations. Relations between the Pentagon and the company have deteriorated since then. Defense Secretary Pete Hegseth accused Anthropic of representing a «national security risk in the supply chain».

After the break with Anthropic, OpenAI announced an agreement with the Pentagon for the use of its tools, including ChatGPT, on classified military networks, as the company itself confirmed in a post on X.