Anthropic

Terrence O'Brien
The US used Anthropic AI for strikes in Iran despite ban.

On Friday, Donald Trump announced a ban on the federal government’s use of Claude, though he later walked back his demand that agencies “IMMEDIATELY CEASE” using it, saying instead there would be a six-month phaseout. Part of the reason may be that planning for Saturday’s strikes against Iran was underway and relied on Claude for intelligence assessments and target identification. According to the Wall Street Journal:

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.

Terrence O'Brien
A former Trump advisor calls the fight with Anthropic “attempted corporate murder.”

Dean Ball, who served as a senior AI policy advisor in the Trump administration, said on X that designating Anthropic a “supply chain risk” or threatening to invoke the Defense Production Act could have a chilling effect on the entire industry. Alan Rozenshtein, a former DOJ official specializing in technology law, told Politico this could be the first step toward partial nationalization of the AI industry.

Hayden Field
OpenAI reached a new agreement with the Pentagon.

CEO Sam Altman wrote on X that the agreement allowed the US military to “deploy our models in their classified network.” He said the agreement reflects OpenAI’s desire for prohibitions on domestic mass surveillance and “human responsibility for the use of force, including for autonomous weapon systems.” Altman also wrote that OpenAI is “asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.

Sam Altman’s post

[X (formerly Twitter)]

Trump orders federal agencies to drop Anthropic’s AI
Hayden Field and Richard Lawler
Hayden Field
Even Ilya Sutskever weighed in on the Anthropic-Pentagon situation.

The OpenAI co-founder, who left after CEO Sam Altman’s ouster and reinstatement and then started his own AI startup called Safe Superintelligence, posted on X:

It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.

In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.

We don’t have to have unsupervised killer robots

AI companies could stand together to draw red lines on military AI — why aren’t they?

Hayden Field
Tina Nguyen
The Pentagon is making moves.

In what appear to be preparations to fully blacklist Anthropic for not budging on its acceptable use policies, the Defense Department has begun reaching out to contractors to assess their exposure to the AI company’s products. Boeing and Lockheed Martin, two of the biggest companies in the defense space, have reportedly been contacted.

Does Anthropic think Claude is alive? Define ‘alive’

Anthropic calls its chatbot ‘a new kind of entity’ that might be conscious — and it’s opening a huge can of worms.

Hayden Field
Inside Anthropic’s existential negotiations with the Pentagon

It’s more than just a $200 million military contract at stake.

Tina Nguyen and Hayden Field
Money no longer matters to AI’s top talent

The AI industry is rife with defections, FOMO, and radical mission statements. It’s about to get supercharged.

Nilay Patel
Stevie Bonifield
Anthropic’s new Sonnet 4.6 model is better at using computers.

On Tuesday, Anthropic launched the latest version of Claude Sonnet, which it says “approaches Opus-level intelligence,” with improvements in coding and in computer-use tasks like navigating spreadsheets or filling out web forms. Sonnet 4.6 replaces Sonnet 4.5 as the default model for free and Pro Claude users.

Charles Pulliam-Moore
Is “apocaloptimist” the new word for AI hype man?

Focus Features is billing The AI Doc: Or How I Became An Apocaloptimist as an “eye-opening” exploration of “the most powerful technology humanity has ever created.” You’d think the doc might feature some critical voices, but its new trailer makes it feel like it might be one big commercial. The film premieres on March 27th.

Dominic Preston
A marketing opportunity.

As Axios reports that the Department of Defense and / or War is preparing to brand Anthropic a “supply chain risk,” one commenter wonders if the Claude company might revisit its Super Bowl ad to turn that to its advantage.

hodgdon:

“Extrajudicial killings are coming to AI. But not to Claude.”

Get the day’s best comment and more in my free newsletter, The Verge Daily.

Jay Peters
The Department of Defense may designate Anthropic as a “supply chain risk.”

Should Anthropic get the designation, “anyone who wants to do business with the U.S. military has to cut ties with the company,” Axios says. The two sides have apparently been negotiating for months over how the military can use Anthropic’s AI tools.

Jess Weatherbed
Claude gets more free features to capitalize on ChatGPT ads.

After already dunking on OpenAI’s plan to bring ads to ChatGPT, Anthropic is bolstering its own chatbot to attract anyone jumping ship. Free Claude users can now create and edit files (including spreadsheets, presentations, and PDFs), access Skills for specialized tasks, connect to third-party services, and more — features previously limited to paying subscribers.

Richard Lawler
Anthropic’s Super Bowl ad aired with a change that made it less directly about OpenAI and ChatGPT.

The round of Big Game ads Anthropic previewed earlier this week set Sam Altman off; he called them “clearly dishonest.”

Now, while the original ad says, “Ads are coming to AI. But not to Claude,” nodding to OpenAI’s plans, the one that aired replaced it with a new tagline: “There is a time and place for ads. Your conversations with AI should not be one of them.”

Screenshot from Anthropic ad saying “ads are coming to AI. But not to Claude.”
Anthropic’s original Super Bowl ad’s closing message, which is not the same as the one that aired on Sunday.
Image: Anthropic
Claude has been having a moment — can it keep it up?

“Now you’re just like, ‘Here’s the magic castle. Build it.’ And it gets done.”

Hayden Field
Hayden Field
Anthropic expanded its Cowork tool with “plugins,” leaning further into agentic AI.

The plugins are designed to allow Cowork to act like a “domain expert” in areas like sales, legal, finance, marketing, data analysis, customer support, product management, biology research, and more, according to a release. The feature is available now in research preview to all paid subscription tiers.

Cowork update

[Anthropic]

I used Claude to vibe-code my wildly overcomplicated smart home

After years of trying to switch to Home Assistant, Claude Code got me (mostly) there in one afternoon.

Jennifer Pattison Tuohy