Welcome to my blog. I'm a blind iOS developer who writes about technology, artificial intelligence, investing, accessibility, and life in general. I use AI as a coding partner and write about what I learn along the way.
All entries are written by me and edited with AI assistance. I'm transparent about the tools I use because I believe AI makes us more capable, not less human.
📰 Subscribe via RSS - For the old-school folks who still use feed readers.
The AI Company That Actually Cares About Safety
February 24, 2026
About Anthropic...
OpenAI was supposed to be a nonprofit. It was founded in 2015 with a mission to develop artificial intelligence safely and ensure its benefits were shared broadly. But in 2019, it restructured around a "capped-profit" arm to attract outside investment. Some of its senior researchers weren't comfortable with that direction.
Dario Amodei was OpenAI's VP of Research. He and his sister Daniela Amodei, along with several other senior researchers, left OpenAI and founded Anthropic in 2021. Their goal: build AI that's actually safe, interpretable, and aligned with human values, not just profitable.
Dario is now Anthropic's CEO. Daniela is President. They're committed to their mission in a way that sets them apart from the rest of the AI industry. They move slower. They prioritize safety over speed. They don't chase hype.
Here's what makes Anthropic different: they employ philosophers, ethicists, and researchers specifically focused on AI alignment and interpretability. They're not just building a language model—they're trying to understand how it works, why it makes the decisions it makes, and how to ensure it behaves responsibly.
They're working on giving Claude something close to a conscience. Not consciousness in the human sense, but a framework of values that guides behavior. Claude is trained to be helpful, harmless, and honest. Those aren't marketing buzzwords—they're core design principles baked into the training process.
Anthropic published research on "Constitutional AI," a method where Claude is trained to critique and revise its own responses based on a set of ethical principles. It's like building guardrails directly into how the AI thinks, rather than just filtering outputs after the fact.
This is why Claude won't help you with certain requests even if you ask nicely. It's not corporate censorship—it's alignment. The AI has been trained to recognize harmful requests and decline them, not because a human moderator stepped in, but because the system itself has internalized those boundaries.
And Anthropic just ran Super Bowl ads with the tagline "Claude will never have ads." That's a public commitment. Meanwhile, OpenAI just announced it's bringing ads to ChatGPT. Different companies. Different priorities. Different approaches to what AI should be.
Dario and Daniela left OpenAI because they believed there was a better way to build AI. Slower. Safer. More transparent. More cautious about the consequences.
That's Anthropic. That's Claude. And that's why some of us trust it more than the alternatives.
Originally written by Bryan Scott Gruver on February 24, 2026. Edited by Claude.