Claude is an artificial intelligence (AI) assistant developed by Anthropic, designed to be helpful, honest, and harmless. It can handle tasks like summarisation, search, writing, question answering, and coding with strong reliability and predictability. Anthropic is a prominent US AI safety and research company, founded in 2021 by former OpenAI employees, whose research focuses on training helpful, honest, and harmless AI systems. Today, the company is in the news because it has been blacklisted by the Pentagon over ethics concerns.

Earlier, on Truth Social, US President Donald Trump had stated that the US would not allow a ‘radical left, woke’ company to dictate how the military operates. He criticised Anthropic for allegedly attempting to force the Department of War to follow its terms of service instead of adhering to constitutional principles. Trump further said he wants all federal agencies to immediately cease using Anthropic’s technology, with a six-month phase-out period. He also warned that the company would face consequences if it failed to comply with the demands of the Department of War.

Interestingly, in the ongoing Iran conflict (Operation Epic Fury), US forces are allegedly still depending on Anthropic’s Claude system, even after Trump’s directive to stop using the company’s AI tools. It has been reported that Claude is in use during the ongoing air operation against Iran. The US Central Command (Centcom) has been using Claude in operational environments for intelligence assessments, target identification, and simulated battle planning. This highlights a clear contradiction between Trump’s orders and what the Pentagon is actually doing on the ground, and it shows how deeply embedded AI systems have become in military planning and operations.

In relative terms, AI is still in its infancy. Yet the tech-savvy US and Israeli militaries have already begun relying heavily on large language models (LLMs). AI is increasingly making inroads into operational and even combat support roles, whereas earlier its use was largely confined to intelligence gathering, logistics, and training. AI systems are processing thousands of data signals per hour and helping to accelerate analytic timelines; possibly, they are contributing directly to real-time operational planning. Based on the historical use of AI in the military, it is reasonable to assume that US and Israeli forces would in all likelihood have employed AI tools for logistics optimisation, simulation training, and strategic forecasting before the actual combat phase of the operation began. The use of AI in the combat phase itself indicates that military AI systems are fast emerging as crucial force multipliers.

The tussle between the US military and Anthropic has been simmering for some time, centred on how far AI safeguards should go in defence applications. During negotiations with the Pentagon, Anthropic’s CEO Dario Amodei refused demands to lift restrictions on the use of its Claude model for autonomous targeting and surveillance. The company argued that current AI is not reliable enough for such roles and that mass domestic surveillance would violate fundamental rights. In response, defence secretary Pete Hegseth moved to designate Anthropic as a supply chain risk, a label normally reserved for foreign adversaries.
Today, Anthropic faces the potential cancellation of its $200 million contracts with the Pentagon, highlighting the high stakes of its ongoing dispute with US authorities over the ethics and operational use of AI. While Anthropic is willing to prioritise ethical considerations over immediate profits, insisting on the responsible and lawful use of its AI tools, companies like OpenAI are ready to meet the US military’s demands. Sam Altman, OpenAI’s CEO, struck a deal with the federal government just hours after negotiations between the Pentagon and Anthropic fell through. Altman claims that the military would not use ChatGPT for autonomous killing systems or mass surveillance. Interestingly, these are the same safeguards that Anthropic had insisted on. It is therefore hard to understand why the US government would abandon its partnership with Anthropic only to strike a deal with OpenAI on the same terms; there may well be undisclosed arrangements with OpenAI in this regard.

The debate over the potential military use of AI and its possible adverse impacts is not new. In 2015, more than 1,000 tech experts, scientists, and researchers, including Stephen Hawking, Elon Musk, Steve Wozniak, MIT professor Noam Chomsky, Google AI chief Demis Hassabis, and Daniel Dennett, signed a letter warning about the dangers of autonomous weapons and killer robots, calling a military AI arms race a bad idea. They urged a ban on AI-managed weapons that would operate ‘beyond meaningful human control’, arguing that just as most chemists and biologists avoid building chemical or biological weapons, AI researchers do not want to develop lethal AI-based weapons.

On January 9, 2026, President Trump stressed in an executive order that the US aims to maintain global AI dominance to advance human flourishing, economic competitiveness, and national security. He noted that AI-aided warfare will redefine military affairs in the next decade, driven by rapid commercial AI innovation. Through this order, he directed the Department of War to accelerate the pursuit of military AI dominance and make the US an ‘AI-first’ force across all components, enhancing lethality and efficiency. Obviously, the US establishment has no stomach for the views of scientists, or for moral and ethical considerations, when it comes to the use of AI in the military.

Recent US military actions, from the operations against Venezuela to the ongoing offensive against Iran, clearly indicate that the era of the Just War is over. The rise of Lethal Autonomous Weapon Systems (LAWS), which can select and engage targets without human intervention, fundamentally threatens the traditional framework of Just War Theory. It is heartening to see that while states like the US and Israel unilaterally undertake military operations by layering AI onto their defence systems without due diligence, with opaque legal justifications and scant adherence to international norms, a private company like Anthropic is willing to take a stand for the ethical use of technology.

Today, the ongoing conflict in Iran highlights the increasing reliance of modern defence forces on AI, underlining its growing strategic importance in operational planning, intelligence analysis, and combat support. It also reveals the ethical dilemmas inherent in such reliance, showing how militaries could deploy AI in ways that blur the line between lawful and morally questionable actions.
Companies like Anthropic have responded cautiously, insisting on ethical guardrails and restricting their AI tools to lawful military applications, even under pressure from the US to relax these standards. These are still early days in the integration of AI into military operations; militaries must therefore exercise caution and avoid over-reliance on untested systems. Watchful oversight, rigorous testing, and ethical safeguards are vital to ensure that AI is used in the military domain without compromising legal, ethical, and moral standards in combat.

Ajey Lele is a researcher and the author of the book Institutions That Shaped Modern India: ISRO.