# Sam Altman Calls Out Anthropic: The 'Most Dangerous AI' Is Just Fear Marketing

**OpenAI CEO Sam Altman has openly questioned Anthropic's Claude Mythos model, calling it a fear-driven marketing ploy to manufacture scarcity and seize control over AI.**

![Sam Altman Calls Out Anthropic: The 'Most Dangerous AI' Is Just Fear Marketing](https://coinalx.com/d/file/upload/2026/528btc-116385051.jpg)

## More Than a Safety Warning—It's a Power Grab

Anthropic recently unveiled Claude Mythos, a model it claims can autonomously identify software vulnerabilities and execute complex cyberattacks. Security experts and government agencies quickly raised alarms. But Sam Altman, in a recent podcast, cut through the noise: this is just "fear marketing"—first create a bomb, then sell the bunker at a premium.

Altman's words were blunt: "They want to say, 'We need to control AI, and only we can do it because we're trustworthy.' And fear-based marketing is the most effective way to justify that."

In other words, Mythos's dangers may be real, but Anthropic is deliberately amplifying the fear to lock AI capabilities in the hands of a few.

## The Bunker Business: A $100 Million Ticket

Altman used a sharp analogy: "We've built a bomb and we're about to drop it on you. Then we sell you a bunker for $100 million—provided you're selected as a customer."

That's precisely Anthropic's "Project Glasswing" strategy: Mythos is accessible only to a handful of giants like Amazon, Apple, and Microsoft for testing. Ordinary developers can't even get a peek. This scarcity is itself a form of power.

What's more telling: Anthropic simultaneously highlights Mythos's offensive risks and its defensive value—it can find hundreds of Firefox vulnerabilities and simulate multi-stage cyberattacks. This "double-edged sword" narrative provides the perfect excuse to restrict access.

## The Flaw in Safety Evaluations: Benchmarks Are Obsolete

Anthropic itself admits that existing cybersecurity benchmarks are insufficient to assess Mythos's capabilities. Tests by the UK AI Safety Institute show the model can autonomously execute complex cyber operations. But here's the catch: if evaluation standards can't keep up, how can we trust that "restricting access" is the right answer?

Last week, a research team claimed to have replicated some of Mythos's results using public models. That suggests the so-called "exclusive capabilities" may not be so exclusive.

## What the Market Is Betting On

Despite calls from within the U.S. government to pause, the NSA has already begun testing a preview version of Mythos. Prediction markets show a 49% probability of public release by June 30.

This reveals a paradox: the more regulators panic, the more the market believes that "eventually it will be released." Because the real danger isn't the model itself—it's competitors getting it first.

## The Real Bet for Investors

Altman delivered a final jab: "There will be a lot of debate about which models are too dangerous to release. But there will also be genuinely dangerous models that must be released differently." He hinted that OpenAI has its own "dangerous model" plans—just without the fear marketing.

For investors, this spat boils down to two competing strategies:

- **The Closed Camp** (Anthropic): Manufacture scarcity, turn AI capabilities into high-end subscriptions, and extract supernormal profits.
- **The Open Camp** (OpenAI): Push for broad distribution, and profit from scale effects and the ecosystem.

Right now, the market leans toward the latter. But here is the real turning point: if Mythos actually goes public by the end of June, the closed narrative collapses. If it's shelved indefinitely, fear marketing becomes the industry standard.

## So What?

Don't be fooled by the "most dangerous AI" headlines. This is fundamentally a battle over pricing power. Anthropic is betting that fear can command a high price; OpenAI is betting that openness wins the future.

As an investor, watch two signals:

1. Will Mythos be publicly released on schedule?
2. Will regulators actually impose restrictive policies?

If both fall through, the fear-marketing bubble bursts. If they materialize, the AI industry's "bunker business" is just getting started.
