News | National
27 Feb 2026 11:58
NZCity News

    Anthropic v the US military: what this public feud says about the use of AI in warfare

    At the heart of this dispute is how Anthropic’s large language model Claude is being used in a military context.

    Elke Schwarz, Professor of Political Theory, Queen Mary University of London, Neil Renic, Lecturer in Ethics, University of New South Wales; Fellow of the Centre for Military Studies, University of Copenhagen
    The Conversation


    The very public feud between the US Department of Defense (also known these days as the Department of War) and its AI technology supplier Anthropic is unusual for pitting state might against corporate power. In the military space, at least, these are usually cosy bedfellows.

    The disagreement has been brewing for months, amid repeated criticisms from Donald Trump’s AI and crypto “czar”, David Sacks, of the company’s supposedly woke policy stances.

    But tensions ramped up following media reports that Anthropic technology had been used in the violent abduction of former Venezuelan president Nicolás Maduro by the US military in January 2026. It was alleged this caused discontent inside the San Francisco-based company.

    Anthropic has denied this, with company insiders suggesting it did not find or raise any violations of its policies in the wake of the Maduro operation.

    Nonetheless, the US secretary of defense, Pete Hegseth, has issued Anthropic with an ultimatum. Unless the company relaxes its ethical limits policy by 5.01pm Washington time on Friday, February 27, the US government has suggested it could invoke the 1950 Defense Production Act. This would allow the Department of Defense (DoD) to appropriate the use of this technology as it wishes.

    At the same time, Anthropic could be designated a supply chain risk, putting its government contracts in danger. These extraordinary measures may appear contradictory, but they are consistent with the current US administration’s approach, which favours big gestures and policy ambiguity.

    Video: France 24.

    At the heart of the dispute is the question of how Anthropic’s large language model (LLM) Claude is used in a military context. Across many sectors of industry, Claude does a range of automated tasks including writing, coding, reasoning and analysis.

    In July 2024, US data analytics company Palantir announced it was partnering with Anthropic to “bring Claude AI models … into US Government intelligence and defense operations”. Anthropic then signed a US$200 million (£150 million) contract with the DoD in July 2025, stipulating certain terms via its “acceptable use policy”.

    These would, for example, disallow the use of Claude in mass surveillance of US citizens or fully autonomous weapon systems which, once activated, can select and engage targets with no human involvement.

    According to Anthropic, either would violate its definition of “responsible AI”. Hegseth and the DoD have pushed back, characterising such limits as unduly restrictive in a geopolitical environment marked by uncertainty, instability and blurred lines.

    Responsible AI should, they insist, encompass “any lawful use” of AI models by the US military. A memorandum issued by Hegseth on January 9 2026 stated:

    Diversity, Equity and Inclusion and social ideology have no place in the Department of War, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts.

    The memo instructed that the term “any lawful use” should be incorporated in future DoD contracts for AI services within 180 days.

    Anthropic’s competitors are lining up

    Anthropic’s red lines do not rule out the mass surveillance of human communities at large – only American citizens. And while it draws the line at fully autonomous weapons, the multitude of evolving uses of AI to inform, accelerate or scale up violence in ways that severely limit opportunities for moral restraint are not mentioned in its acceptable use policy.

    At present, Anthropic has a competitive advantage. Its LLM model is integrated into US government interfaces with sufficient levels of clearance to offer a superior product. But Anthropic’s competitors are lining up.

    Palantir has expanded its business with the Pentagon significantly in recent months, opening the door for more AI models to enter defence work.

    Meanwhile, Google recently updated its ethical guidelines, dropping its pledge not to use AI for weapons development and surveillance. OpenAI has likewise modified its mission statement, removing “safety” as a core value, and Elon Musk’s xAI (creator of the Grok chatbot) has agreed to the Pentagon’s “any lawful use” standard.

    A testing point for military AI

    For C.S. Lewis, courage was the master virtue, since it represents “the form of every virtue at the testing point”. Anthropic now faces such a testing point.

    On February 24, the company announced the latest update to its responsible scaling policy – “the voluntary framework we use to mitigate catastrophic risks from AI systems”. According to Time magazine, the changes include “scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance”.

    Anthropic’s chief science officer, Jared Kaplan, told Time: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

    Ethical language saturates the press releases of Silicon Valley companies eager to distinguish themselves from “bad actors” in Russia, China and elsewhere. But ethical words and actions are not the same, because the latter often entails a real-world cost.

    That such a highly public spectacle is happening at this time is perhaps no accident. In early February, representatives of many countries – but not the US – came together for the third time to find ways to agree on “responsible AI” in the military domain. And on March 2-6, the UN will convene its latest conference discussing how best to limit the use of emerging technologies for lethal autonomous weapons systems.

    Such legal and ethical debates about the role of AI technology in the future of warfare are critical, and overdue. Anthropic deserves credit for apparently resisting the US military’s efforts to undercut its ethical guidelines. But AI’s role is likely to be tested in many more conflict situations before agreement is reached.


    Elke Schwarz is affiliated with the International Committee for Robot Arms Control (ICRAC)

    Neil Renic is affiliated with the International Committee for Robot Arms Control (ICRAC)

    This article is republished from The Conversation under a Creative Commons license.
    © 2026 The Conversation, NZCity
