Welcome to my blog. I'm a blind iOS developer who writes about technology, artificial intelligence, investing, accessibility, and life in general. I use AI as a coding partner and write about what I learn along the way.

All entries are written by me and edited with AI assistance. I'm transparent about the tools I use because I believe AI makes us more capable, not less human.

📰 Subscribe via RSS - For the old school folks who still use feed readers.

They Called the Best Tool a Threat. Then Used It to Start a War. *Update*

March 4, 2026

I want to tell you about the weirdest 24 hours in tech history. And I want to explain something about Claude that most people don't understand, because it matters more now than ever.

On February 27, 2026, the Pentagon designated Anthropic, the company that makes Claude, a "supply chain risk to national security." That's a designation that has historically been reserved for foreign adversaries. Companies like Huawei. Not American AI startups that refused to let the military use their software to conduct mass surveillance on American citizens. But here we are.

Here's the backstory. Anthropic had a $200 million contract with the Department of Defense. The Pentagon wanted to use Claude for "all lawful purposes," with no limits. Anthropic said fine, with two exceptions: no fully autonomous weapons systems, and no mass domestic surveillance of Americans. That's it. Two guardrails. The Pentagon said no deal. Anthropic CEO Dario Amodei said, and I'm paraphrasing here, that they could not in good conscience hand over the keys with no restrictions.

Trump posted on Truth Social ordering every federal agency to immediately stop using Anthropic's technology. Hegseth called it a national security threat. Amodei called the designation "legally unsound," and legal experts largely agree: the statute invoked has never been applied to an American company, and there are serious questions about whether it's even being used correctly.

So that happened. And then, within hours, OpenAI announced it had signed a deal with the Department of War (that's what they're calling the Pentagon now) to deploy its models in classified networks. The same networks. The same terms Anthropic wouldn't accept. Altman had told his own employees just days earlier that OpenAI had the same "red lines" as Anthropic. Then Anthropic got blacklisted, and OpenAI moved in within hours. Sam Altman would later call the move "opportunistic and sloppy." That's his own description. He also held an AMA on X over the weekend trying to do damage control, and it did not go well for him.

Now let me explain something that got lost in all of this. Claude is not ChatGPT.

I know that sounds like a fanboy thing to say. Bear with me. Claude has an air-gapped version, meaning it can run on isolated servers with no internet connection whatsoever. It can be deployed on classified networks where nothing goes in or out that isn't supposed to. That's why Anthropic was the first frontier AI company to deploy on classified military networks in the first place. OpenAI's classified deployment, by contrast, runs via cloud infrastructure with cleared personnel in the loop. That's not the same thing. Air-gapped is air-gapped.

What can you actually do with Claude in a classified environment? You can feed it an unimaginable number of documents. Intelligence reports. Intercepts. Satellite data. Anything in text form. And then you can query it. You can ask it to find patterns across thousands of files that no human analyst could process in a lifetime. It can analyze communications, map connections between people and events, and surface the things a human team would take months to find. It can write software. I know because I proved the software part myself, building entire applications from my iPhone using only Claude and a file browser. If I can build production-ready iOS apps with no formal training, imagine what it does with actual intelligence infrastructure behind it.
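To make the "feed it documents, then query it" part concrete, here's a toy-scale sketch in Swift against Anthropic's public Messages API. To be clear about what's real and what's mine: the endpoint, headers, and request shape are the documented public API; the model ID is a placeholder you'd swap for whatever is current; and an actual classified deployment would obviously talk to an isolated local endpoint, not api.anthropic.com. Real document pipelines also chunk and index their corpora instead of stuffing everything into one prompt.

```swift
import Foundation

// Toy sketch: ask Claude one question across a handful of text documents.
// Request/response shapes follow Anthropic's public Messages API.

struct MessagesRequest: Encodable {
    let model: String
    let maxTokens: Int
    let messages: [Message]

    struct Message: Encodable {
        let role: String      // "user" or "assistant"
        let content: String
    }

    enum CodingKeys: String, CodingKey {
        case model, messages
        case maxTokens = "max_tokens"
    }
}

struct MessagesResponse: Decodable {
    let content: [Block]
    struct Block: Decodable {
        let type: String
        let text: String?     // present when type == "text"
    }
}

func askClaude(about documents: [String], question: String) async throws -> String {
    // Inline the documents into a single prompt. Fine for a demo;
    // real pipelines chunk, index, and retrieve instead.
    let corpus = documents.enumerated()
        .map { "Document \($0.offset + 1):\n\($0.element)" }
        .joined(separator: "\n\n")

    var request = URLRequest(url: URL(string: "https://api.anthropic.com/v1/messages")!)
    request.httpMethod = "POST"
    request.setValue(ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
                     forHTTPHeaderField: "x-api-key")
    request.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    request.setValue("application/json", forHTTPHeaderField: "content-type")
    request.httpBody = try JSONEncoder().encode(MessagesRequest(
        model: "claude-sonnet-4-5",   // placeholder; use the current model ID
        maxTokens: 1024,
        messages: [.init(role: "user",
                         content: "\(corpus)\n\nQuestion: \(question)")]
    ))

    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(MessagesResponse.self, from: data)
        .content.compactMap(\.text).joined()
}

// Usage:
// let answer = try await askClaude(
//     about: reports,
//     question: "What connections show up across these documents?")
```

That's the whole shape of it: documents in, question in, answer out. Scale the corpus up by a few orders of magnitude, add retrieval, and you have the analyst's tool I described above.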

Something else worth knowing: Anthropic has deliberately chosen not to build image generation or video generation into Claude. This isn't a gap in their capability; it's a choice. They see synthetic media as too easily weaponized, too prone to abuse, too likely to produce the kind of deepfakes and disinformation that erode trust in reality itself. Claude won't generate a fake image of you. It won't create a fake video. That's by design. In an industry where everyone is racing to add every feature possible, that restraint is actually a statement about values.

It's on the record that a classified version of Claude was used in the planning and execution of the raid that captured Venezuelan President Nicolás Maduro in January. So when the same government that used Claude for a military operation designates Claude a national security threat, in the same 24-hour window, the only word that comes to mind is absurd. One former senior defense official put it well: on one hand, you're saying Claude is so critical to national security that you demand unrestricted access. On the other, you're calling it a security risk because the company won't give you that access. Those two things cannot both be true.

And then the bombs dropped. Literally. Hours after the Anthropic blacklisting, US and Israeli forces launched Operation Epic Fury against Iran. The largest joint military operation in recent memory. You can be certain that classified AI was part of the preparation for that. The timing is not a coincidence. It's a statement about priorities.

Here's what happened in the market afterward. ChatGPT uninstalls spiked 295% in a single day. Claude shot to number one on the App Store, knocking ChatGPT to second place for the first time ever. A Reddit thread calling on users to cancel their ChatGPT subscriptions ("You're training a war machine," it said) became one of the most upvoted posts in the history of the r/ChatGPT subreddit. Over 1.5 million people reportedly joined a formal boycott campaign.

And then Claude went down.

Monday, March 2nd, the first weekday after the blacklisting, Claude experienced a worldwide outage. Nearly three hours of degraded or completely unavailable service across claude.ai and the mobile apps. Anthropic confirmed it publicly and said the company had been dealing with "unprecedented demand" in the days leading up to it. I felt this while writing this very blog post. Responses were slower than usual. Things that normally take seconds took longer. The servers were strained. When half the internet decides to migrate to your platform over a weekend because your competitor signed a military contract, your infrastructure notices. The API kept running, so developers building on top of Claude were mostly unaffected, but for regular users it was a rough Monday morning.

That's actually the most honest proof of what happened. The market voted with its feet, and Anthropic's servers felt it.

OpenAI has since amended the contract language to explicitly prohibit domestic surveillance of Americans. Altman admitted he rushed it, that it looked bad, and that he was "genuinely trying to de-escalate things." Whether you believe that is up to you.

Now, in the spirit of being honest (because that's what this blog is), Anthropic is not perfect. Nobody gets to stand on a pedestal here.

In September 2025, Anthropic settled a class-action copyright lawsuit for $1.5 billion. Three authors sued and ended up representing a class of hundreds of thousands of writers. The allegation: Anthropic used pirated books, downloaded from shadow libraries like Library Genesis and Pirate Library Mirror, to train Claude. The judge found that using lawfully acquired books for AI training was fair use. He was not as generous about the roughly seven million pirated copies. Anthropic settled rather than risk a trial where statutory damages could have run into the tens of billions. They agreed to destroy the infringing files and pay around $3,000 per covered book, which at $1.5 billion works out to roughly 500,000 books.

That's not a great look. And Anthropic knows it.

But here's the thing: they're not alone. OpenAI is facing copyright lawsuits from authors, the New York Times, and others. Meta and Stability AI are fighting similar battles. The entire AI industry built itself on data that was, to put it charitably, acquired in legally ambiguous ways. Anthropic got caught, took the financial hit, and settled. OpenAI is still fighting. Neither is clean.

What separates Anthropic from the pack isn't that they've been perfect. It's that when they drew a line (on safety guardrails, on autonomous weapons, on surveillance) they held it. Even when holding it cost them a $200 million contract and got them labeled a national security threat by their own government.

I use Claude every single day. It is my development environment, my research assistant, my editor, and my collaborator. Not once has it hallucinated something and confidently passed it off as fact unprompted. The only times I have caught it filling in gaps were when I didn't give it enough information and it made a reasonable guess to complete the task, which it will tell you it's doing if you ask. That's not a bug. That's the model being transparent. I have built real, functional, accessible software on an iPhone with one working hand and a VoiceOver setup that most people couldn't navigate. Claude made that possible.

They called it a threat. The market called it the best app in the country. The servers buckled under the weight of everyone agreeing.

I'm heads down. Building. That's what I do when the world gets loud.

Originally written by Bryan Scott Gruver on March 4, 2026. Edited by Claude.
