
Anthropic Set A ‘Red Line,’ It Won’t Be The Only AI Company To Do So

WASHINGTON, D.C. (Forbes) — Artificial intelligence developer Anthropic set a “red line” that governs how the United States Department of Defense could use its technology. Those restrictions prohibit the use of its Claude AI model for mass domestic surveillance or in fully autonomous weapons.

Last week, the Pentagon demanded that the company remove such restrictions and allow for “all lawful use” of its AI in defense systems. It then threatened to blacklist Anthropic as a “supply chain risk” by invoking the Defense Production Act.

After the company refused to comply, Anthropic’s technology began to be phased out across federal agencies, including the intelligence community.

“It’s about the principle of standing up for what’s right,” said Dario Amodei, CEO of Anthropic, even as that decision resulted in the company being banned from across the federal government.

A National Security Risk

On Friday, President Donald Trump ordered every government agency to “immediately cease” using Claude or any technology from Anthropic. In a post on social media, the president claimed the terms of service imposed by Anthropic would somehow put American lives at risk, put “Troops in danger,” and even be a national security threat.

Yet, despite the president’s order, the U.S. military still relied on Anthropic’s Claude to support the Operation Epic Fury attacks on Iran over the weekend. According to a report from The Wall Street Journal, Claude was used to assess intelligence, identify targets and simulate battle scenarios.

It is difficult to reconcile the claim that Claude poses such a threat with its use in the recent strikes against the Islamic Republic, which the administration claimed were carried out flawlessly.

“Here is an administration that shoots down its own drones because its agencies can’t work and play well with one another. It isn’t a great look,” suggested Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.

“An administration that blithely distorts statute to conform to what it wants to do argues that they should be allowed to use Anthropic products for anything lawful, while the company – among the most open about working with the Pentagon – expresses concern about tech being used for pervasive domestic surveillance or autonomous, agentic weapons systems,” said Purtilo, adding that the company knows the limits of its technology and of safety. “It all looks like Anthropic knows more about the intended application of these things and justly drew the line.”

As previously reported by Paulo Carvão for Forbes.com, Anthropic signed a contract with the Pentagon last summer worth up to $200 million. The standoff has exposed tensions between the tech sector and the U.S. government and their “competing visions of national security and safety.” Anthropic has so far been the one company to resist Pentagon demands to drop such restrictions.

“This is an example of extreme abuse of power, the DoD wants Anthropic to remove all of its ethical rules with regards to the use of their tool and, since no one in their right mind would do that, the DoD is threatening them with destruction claiming their ethical rules somehow makes them unsecure,” said technology industry analyst Rob Enderle of the Enderle Group.

In an email, Enderle said no part of the Pentagon’s ban makes sense.

“If you don’t want an ethical tool, then build or buy one that isn’t ethical, but claiming that adhering to ethics somehow makes a product unsafe is like telling a car company to get rid of brakes because they make their cars unsafe and, if they don’t do that, their cars won’t be allowed on the roads,” Enderle added.

Banned In The Nation

The way the Pentagon has gone about the “ban” also raises some serious questions, including whether it was handled legally.

“Absolute bans need to be done correctly, through the debarment process, which eventually goes to generally a U.S. District Court,” Dan Meyer, managing partner of the Washington, D.C., office of Tully Rinckey, PLLC, wrote in an email.

“The Administrative Procedure Act controls, and the standard is whether the decision was ‘arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,’” added Meyer.

The Dangers May Be Overstated

Whatever the rationale for the ban, Anthropic isn’t alone in setting conditions on how the government, and specifically the military, uses emerging AI technology.

Just this weekend, OpenAI, the developer of the popular ChatGPT platform, announced its three red lines, which also include a ban on using its technology for mass domestic surveillance, in fully autonomous weapons, and in “high-stakes automated decisions.”

The Pentagon accepted those terms, even as it refused to accept Anthropic’s in order to continue using Claude. Other companies are likely to follow suit in setting terms, raising questions about how the DoD will respond.

“If all major AI developers took Anthropic’s position here and maintain the same ‘red line’ approach, the implications for the DoD and U.S. defense would be significant,” said Northern California-based venture finance attorney Lindsey Mignano.

In an email, she said the DoD could first find itself unable to deploy cutting-edge AI models for certain high-impact tasks, ones that go far beyond surveillance and autonomous weapons. The DoD could also invoke emergency powers, such as the Defense Production Act, to compel access, which would raise legal and constitutional challenges.

“The U.S. military’s competitive edge — especially against countries that don’t adopt such ethical limits — could diminish unless alternate defense strategies or regulations are developed.”

This may result in political pressure for new federal AI safeguards, transparency laws, or firmer ethical standards to govern defense contracts, and companies could push for statutory protections for red lines so the DoD can’t override them via contract language, Mignano further suggested.

It remains unclear how this will play out.

“On one hand, if the White House wins this standoff, they could have unlimited access to dangerous equipment that could be used to take the lives of countless human beings, which they may argue is necessary to protect the lives of the American soldiers whose lives would be jeopardized if they were to be deployed to those locations to fight the same enemy that unmanned robots could have destroyed,” said Anthony Kuhn, managing partner at Tully Rinckey PLLC.

Should Anthropic win the standoff, it raises the question of where the line is drawn when corporate leadership imposes its morals on a government tasked with protecting the American people and its warfighters.

“The two sides will likely come together and strike a deal that benefits both sides,” added Kuhn. “But it will be interesting to see who will control the power to decide how far AI can go and who gets to make that decision with this type of initiative in the future.”
