A new report has claimed that an artificial intelligence model developed by Anthropic was used during a United States military operation that led to the capture of former Venezuelan president Nicolas Maduro. The development has sparked fresh debate over the growing role of commercial AI tools in military missions.
Okay News reports that The Wall Street Journal, citing people familiar with the matter, said Anthropic’s AI model, Claude, was deployed in the operation in Venezuela. According to the report, the mission involved coordinated strikes in Caracas before Maduro was apprehended in an early-January raid and later transported to New York to face drug trafficking charges. Reuters said it could not independently verify the claims, and the US Defense Department and the White House did not immediately comment.
The report said Claude’s use in the operation occurred through Anthropic’s partnership with Palantir Technologies, a data analytics company whose platforms are widely used by the Pentagon and federal law enforcement agencies. Anthropic declined to confirm whether its model was used in the specific mission, stating that it cannot comment on operational details. The company added that any deployment of Claude must comply with its usage policies, which prohibit supporting violence, weapons development or surveillance.
The Wall Street Journal further reported that Anthropic is currently the only major AI developer whose system is accessible in classified government settings through third parties, though users remain bound by the company’s policies. The Pentagon has reportedly been encouraging AI firms to make their tools available on classified networks with fewer restrictions. The episode underscores intensifying debate over the ethical boundaries, oversight and regulatory frameworks governing the use of artificial intelligence in military and national security operations.
Claude is an advanced artificial intelligence model designed for text generation, reasoning, coding and data analysis. Developed by the US-based AI company Anthropic, it competes with systems such as OpenAI’s ChatGPT and Google’s Gemini.
Founded in 2021 by former OpenAI executives, including its chief executive Dario Amodei, Anthropic has positioned itself as a safety-focused AI company. Claude is available to enterprise and government clients and can operate on certain classified networks through approved partnerships, subject to compliance with its usage policies.