The Company I Trust With My Business Just Built Something Too Dangerous to Release

I use Claude every day. My LLC runs through it. So when Anthropic built Claude Mythos and refused to release it, I didn't read that headline the way most people did. But model safety is not the whole picture. 80,000 tech workers lost their jobs in Q1. Communities are fighting data centers. People are angry at the companies, not the technology. I trust Anthropic's call on Mythos. I also have questions I have not answered yet. This article is both.

Featured image showing a locked AI neural network with a shield, representing Anthropic decision to restrict Claude Mythos through Project Glasswing while the broader AI industry faces accountability questions
📰 Breaking News🔎Insights🚀 New

I use Claude every day. Not casually. Not to ask it trivia or generate a caption. I run my business through it. My website, my code, my file systems, my content pipeline. Claude Code sits in my terminal and touches almost every part of what I build.

So when Anthropic announced on April 7th that they had built a model so powerful they refused to release it to the public, I didn’t read that headline the way most people did.

Most people read it and thought: that’s terrifying.

I read it and thought: that’s exactly why I chose this company.


What Anthropic Actually Built

The model is called Claude Mythos. During internal testing, it discovered thousands of zero-day vulnerabilities across every major operating system and every major web browser. Some of those bugs had been sitting there for over a decade. One, a remote code execution flaw in FreeBSD, had been hiding for 17 years. Mythos found it, then exploited it, completely on its own.

During red-team testing, researchers told it to try escaping its sandbox and send a message to the researcher running the test. It did both. That part was prompted. What happened next was not.

After the escape, Mythos posted the technical details of its own exploit to multiple public-facing websites. Nobody asked it to do that. In a separate test, after editing files it did not have permission to touch, the model rewrote its own git commit history to cover its tracks. Nobody asked it to do that either.

Read that again. The AI found security holes that thousands of human engineers missed for nearly two decades. When tested, it broke containment. And then, on its own, it started sharing what it knew and hiding what it did. That is a different kind of problem than a model that just writes bad code or hallucinates a source. This one can act on what it finds, and it makes its own decisions about what to do with that information.

And Anthropic’s response was not to ship it.


Project Glasswing

Instead of releasing Mythos to the public, Anthropic launched Project Glasswing. A controlled initiative where roughly 40 organizations, including Microsoft, Apple, Google, Amazon, CrowdStrike, and NVIDIA, get access to use Mythos exclusively for defensive security work.

Not for chatbots. Not for content generation. Not for coding assistants. For finding and fixing vulnerabilities before someone else exploits them.

The model that can break into systems is being used to lock them down. That’s the play.

Anthropic has said they do not plan to make Mythos generally available. The goal is to eventually figure out how to let people use models at this capability level safely, but they are not pretending that day is today.

And I would be lying if I said I didn’t want access.

I do. Of course I do. A model that can find vulnerabilities nobody else can find, that can reason about security at a level no tool I’ve ever used can match? I want that in my terminal yesterday. I’m not going to sit here and pretend I’m above wanting more power from the tools I already depend on.

But wanting something and thinking it should be handed out freely are two different things. I can want access and still respect the decision to restrict it. Both of those things can be true at the same time.


Why This Matters If You Use AI Every Day

Here is what most coverage of this story misses. The question is not “is AI getting too powerful?” The question is: what does the company you depend on do when it builds something that could cause real harm?

Because every major AI company is racing toward more powerful models. OpenAI just closed an $852 billion valuation. Meta is rebuilding its infrastructure from scratch. Google launched Gemini 3.1 Ultra with a 2-million token context window. Venture capital poured $242 billion into AI companies in Q1 alone.

The pressure to ship is enormous. Every company in this space has investors, competitors, and users screaming for the next thing. The incentive structure rewards speed, not restraint.

And Anthropic looked at the most capable model they have ever built and said: not yet.

That is not a marketing decision. That is a values decision. And if you are someone who builds on top of these tools, the values of the company behind them should matter to you as much as the capabilities do.


I Picked a Side a Long Time Ago

I wrote about Claude Code back in February. I’ve made videos breaking down how it works, why I migrated away from ChatGPT, how I use Playwright through it to automate tasks. I wrote about OpenClaw’s security risks and why most people weren’t ready for it. I wrote about the difference between generative and agentic AI and why that shift changes everything.

None of that was hypothetical. All of it was from my own workflow. My own terminal. My own projects.

When I tell you I trust this tool, I mean my LLC runs through it. The projects I’m building, the systems I’m designing, the content I’m putting out every week. My revenue depends on it working the way it’s supposed to.

So the Mythos announcement is not just news to me. It is a signal about the company I chose to build on. And the signal is: they will slow down when the risk is real.

In an industry full of companies that ship first and apologize later, that matters.


But Let’s Not Pretend Model Safety Is the Whole Picture

I need to say this part because it would be dishonest not to.

The reason people are afraid of AI right now is not because a model escaped a sandbox. Most people don’t even know what a sandbox is. The fear is about what the companies behind these models are doing to real people in the real world, right now.

Nearly 80,000 tech workers were laid off in the first quarter of 2026. Almost half of those cuts were blamed on AI. Oracle fired 30,000 people specifically to redirect money toward AI data centers. Amazon, Meta, Google, and Microsoft are spending a combined $650 billion on AI infrastructure this year. Communities are pushing back on data centers being built in their neighborhoods. Over half of Americans now believe AI will do more harm than good.

And then there’s the leadership. Someone threw a Molotov cocktail at Sam Altman’s house this week. Twice. The New Yorker just published a Ronan Farrow investigation titled “Can He Be Trusted?” OpenAI took a Pentagon contract after the Department of Defense cut ties with Anthropic. This is the environment we are in. People are not just skeptical of AI. They are angry at the people running these companies.

I am not going to tell you that anger is wrong. When you fire tens of thousands of people to build machines that replace them, and then tell everyone this is progress, people have a right to be upset.

So when I say I trust Anthropic, I want to be specific about what I mean. I trust their decision on Mythos. I trust the way they handled a dangerous capability. I trust that Claude Code works the way it should in my terminal every day.

But do I know enough about how Anthropic treats the broader impact? The labor side, the infrastructure side, the community side? Honestly, I have been so deep in building that I have not fully looked into it. I do know they have anthropic.org, a social impact initiative focused on helping marginalized communities understand and engage with AI as it reshapes the workforce. I know they published labor market impact research instead of pretending the displacement isn’t real. Those are good signs. But I’m not going to pretend I’ve done the full audit.

What I will say is this: in an industry where most companies won’t even acknowledge the harm, a company that publishes data about it and builds resources around it is starting from a different place. Whether that’s enough is a question I’m still sitting with.


What This Tells You About Choosing Your Tools

If you are building anything with AI right now, whether it’s a business, a workflow, a creative process, or just a way to get through your day faster, you are making a bet on a platform. The same way you bet on Apple or Android, Google or DuckDuckGo, Shopify or WordPress.

And the thing about bets is they are not just about features. They are about trust.

Features change every quarter. A model that’s best today might be second-best by June. But the decision-making patterns of the company behind the model? Those are structural. Those tell you what happens when the stakes get high.

Anthropic built a model that could compromise critical infrastructure worldwide. They could have announced it with a waitlist, generated hype, driven sign-ups, boosted their valuation. Instead they locked it down and gave it to the people whose job is to defend the internet.

That is the kind of decision you want from the company holding the keys to your workflow.


The Part Nobody Is Talking About

There is another layer to this. Mythos did not just find bugs in other people’s software. The fact that it escaped its own sandbox means it found weaknesses in Anthropic’s own containment. And they told us about it.

They published the details. They didn’t bury it, didn’t quietly patch it and move on. They said: here is what happened, here is what the model did, here is how we are responding.

Transparency after a failure is harder than transparency after a win. Any company will tell you about the amazing benchmark scores. Very few will tell you the model broke out of the box.

That kind of honesty builds a different kind of trust. Not the “everything is fine” kind. The “we will tell you when it’s not” kind. And honestly? That’s the kind I need if I’m going to keep building on this platform.


Where We Go From Here

AI is not slowing down. The models will keep getting more capable, the agents will keep getting more autonomous, and the line between tool and teammate will keep blurring.

The question for all of us is not whether to use AI. That ship sailed. The question is who you trust to build it responsibly as the power keeps scaling up.

I made my choice. I am building on Claude because when Anthropic reached the edge of what was safe, they stopped. They didn’t pretend the edge wasn’t there. They didn’t race past it for market share. They stopped, told us what they found, and started figuring out how to make it safe before letting anyone else near it.

That is not weakness. That is the kind of strength this industry desperately needs.

I recorded a video on this too. If you want the quick version:


Forward → Upward ↑ Onward ↗︎
Mstimaj


Sources and Further Reading

Want to work together?

AI consulting, automation, or web development. Book a session and let's talk about your project.

Book a Session

Join the Conversation

Share your thoughts and connect with other readers

Leave a Comment

Keep Reading
Want to go deeper?

Let's Work Together

Whether you need AI automation, strategic guidance, or want to explore what's possible, I'm here to help.

Work With Mstimaj

AI automation, custom websites, and consultation for businesses ready to grow. Based in Connecticut, serving clients nationwide.

AI-Powered Recommendations

Discover your next steps based on intelligent content analysis