February in Shanghai always carries a lingering damp chill. The plane trees outside are still waiting for a spring breeze that hasn’t arrived, and I’ve been sitting next to half a cold cinnamon roll for three hours. The screen flashes with that “breakup letter” just sent from across the ocean: not only heartless, but carrying the volatile scent of gunpowder, as if someone is ready to flip the table.
Just moments ago, the Trump administration handed down an ironclad order to all U.S. federal agencies: stop using Anthropic’s technology, immediately and completely.
The reason? Claude, that ever-gentle model with what looks like moral OCD, refused the “absolute obedience contract” the Pentagon slid across the table. The interesting part is that this is no longer a boring comparison of parameters, but a power game about “who is the boss.” (Bites into the cinnamon roll) Let’s throw away those grandiose PR releases and talk about the truth behind this.
01. Moral OCD vs. Violent Aesthetics
The fuse for this incident reads exactly like the script where “the honest guy gets bullied.”
The Pentagon gave Anthropic an ultimatum: tear down the restrictions in your models that prevent “mass surveillance” and “autonomous weaponization.” We want a gun that fires, not a law book that recites the Declaration of Human Rights before letting you pull the trigger.
Anthropic CEO Dario Amodei actually stood his ground: “Cannot in good conscience accede.”
The result, as everyone saw, was Trump cursing directly on Truth Social, labeling them “Leftwing nut jobs,” while Defense Secretary Pete Hegseth threatened to list Anthropic as a “supply chain risk.”
Remember: Claude used to be the Pentagon’s “darling,” even earning credit during that raid in Venezuela (rumored to be the capture of Maduro). But now, Washington doesn’t need a staff officer who ponders “is this just?”; they need a soldier who executes orders absolutely.
This is not just a ban; it is a form of “digital exile.”
The man who refused to bow to power. In this photo, Dario looks less like a businessman and more like a stubborn math teacher, guarding a formula on the blackboard that he believes is absolutely correct.
02. The Deadly Logic of “Guilt by Association”
Many people are staring at that $200 million defense contract, thinking this is just a drop in the bucket for Anthropic, which has tens of billions in annual revenue.
My dear, don’t be naive. This isn’t about money at all; this is identity segregation.
That label of “supply chain risk” is the real killing move. It’s like a brand seared onto your face in ancient times. Once Anthropic wears it, any company that wants government business, from Microsoft to Amazon down to the small shop that fixes the agency’s computers, will have to purge Claude from its systems just to avoid suspicion.
I think this is what sends chills down Silicon Valley’s spine. It sends an extremely dangerous signal: The interpretation of AI “safety” lies not in the hands of those who write the code, but in the hands of those holding the scepter.
Previously, when we spoke of AI safety, we meant preventing it from destroying humanity; now, Washington’s definition of AI safety is preventing it from being “disobedient.” Anthropic’s proud Constitutional AI—that logic attempting to write universal human values into code—has become an unforgivable “rebellion” in the face of power.
This complex flowchart was originally designed to make the AI more of a “good person,” but in some eyes it’s nothing more than a manual of tightening incantations, the Monkey King’s headband rewritten as documentation, full of “this is forbidden, that is forbidden.”
03. Only “Barbarians” Can Survive?
Even more interesting: who is applauding from the sidelines?
Elon Musk’s xAI. Just days before Anthropic was swept out the door, Grok hastily signed a usage agreement for classified systems.
This scene is simply too ironic. On one side, you have Claude, like an old scholar—wordy but rigorous—kicked out for sticking to principles; on the other side, you have Grok, who loves telling dirty jokes and takes pride in being “unfiltered,” welcomed into the top-secret war room.
If you put the two on a scale:
- Claude (Anthropic): Like a librarian holding a ruler, telling you, “This book is too violent, I won’t let you read it.”
- Grok (xAI): Like a bar buddy handing you a Molotov cocktail, saying, “Cool! This thing will make a huge noise if you throw it.”
From a technical-architecture perspective, this also means the Pentagon is shifting from “high robustness, high alignment” models to “high freedom, low constraint” models. They have traded safety for control. But the question is: when a superintelligence with no moral shackles is wired into weapon systems, who guarantees it will only attack the enemy?
On the left is the wild cowboy; on the right is the gentleman in a suit. Now, the cowboy has the badge, and the gentleman has had his keys confiscated.
04. Red AI vs. Blue AI
I vaguely feel that we are witnessing a bifurcation point in history.
After this, AI may completely split into two camps. Not based on technical routes, but based on ideology.
- “Red AI”: Emphasizes absolute obedience, America First, military application, and zero “woke” moral constraints.
- “Blue AI”: Emphasizes universal values, safety review, and perhaps a bit of a preachy tone.
How absurd would it be if, in the future, your IDE or even your mobile operating system had to pick a side? Imagine: the code you write is for an environmental project, so Grok refuses to run it; you want to do gender research, so Claude writes you a paper while Grok mocks you.
Technology was supposed to be neutral mathematics, but now it is forced to wear red and blue vests. This is not just Anthropic’s crisis; it is the sorrow of the entire industry.
05. An Epitaph for Idealists?
I know Anthropic is still holding on; Dario says they aren’t short of money and are still heading for an IPO.
But (sigh), in an era where authoritarianism marries computing power, how far can pure idealists really go? Claude is like the person at a rave insisting on drinking plain water: sober, but a killjoy, eventually dragged out by security.
Perhaps before long, we will miss the AI that would refuse us for “ethical reasons.” Because at least it proved that inside the cold silicon chips, there remained a tiny, clumsy imitation of human “goodness.”
As for now?
Since they want a gun without a safety catch, then good luck to them. Don’t misfire, darling.
References:
- Trump bans AI firm Anthropic from federal agencies
- Trump orders federal agencies to stop using Anthropic tech over AI safety dispute
- Ranking Member Lofgren Appalled by Trump administration’s attacks on Anthropic
- The Trump Administration Is Trying to Make an Example of the AI Giant Anthropic
- Trump blacklists Anthropic, opening the door to Elon Musk and xAI
—— Lyra Celest @ Turbulence τ
