The Pentagon Just Dumped Its Biggest AI Partner Because It Refused to Build Killer Robots
Anthropic told the U.S. military 'no' on removing safety guardrails from its AI. The Pentagon's response? You're fired.
In what might be the most dramatic breakup in tech history, the Pentagon just canceled its contract with Anthropic, the company behind the AI chatbot Claude, after the startup refused to let the military use its technology without safety limits.
Here's what happened: The Pentagon wanted Anthropic to remove the safety guardrails on Claude so it could be used for pretty much anything, including analyzing massive amounts of data collected from everyday Americans. We're talking your Google searches, your GPS location, your credit card purchases, all cross-referenced together.
Anthropic's leadership said that was 'a bridge too far' and the deal collapsed. Defense Secretary Pete Hegseth then told all military contractors to stop doing business with Anthropic entirely.
A CNBC video breaking down the situation has been blowing up, racking up over 62,000 views: the company that built one of the world's most powerful AI systems essentially told the most powerful military on Earth to back off.
The big question now: Will other AI companies follow Anthropic's lead, or will they quietly take the Pentagon's money and remove their safety limits? So far, Anthropic is standing alone on this one.
Sources: CNBC, The Atlantic, AP News