Anthropic and OpenAI have experienced security incidents in quick succession, drawing market attention to the security of AI models themselves. Anthropic is currently investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was reported to have accidentally opened access to several unreleased models within its Codex application. Analysts believe such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges: while AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points. Industry insiders point out that these incidents have intensified scrutiny of AI companies' security governance and reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)
According to Decrypt, OpenAI CEO Sam Altman stated that Anthropic is promoting its AI model Claude Mythos through “fear-based marketing,” using security-risk narratives to justify its limited-access release strategy. Claude Mythos has recently drawn attention for its ability to autonomously discover software vulnerabilities and perform complex cybersecurity operations. The report notes that Mozilla previously disclosed that the model identified 271 vulnerabilities in the Firefox browser during testing. Meanwhile, discussion of the model’s potential for offensive cyber use continues to intensify. Altman also emphasized that OpenAI will not scale back its infrastructure investments and will continue expanding its computational capabilities.
23pds, Chief Information Security Officer of SlowMist Technology, retweeted a post from the dark web intelligence account Dark Web Intelligence (@DailyDarkWeb), stating that the hacker group ShinyHunters claims to have breached internal systems related to Anthropic’s Mythos model and has shared screenshots—including those of the user management panel, AI experiment dashboard, and model performance and cost analysis. As of now, Anthropic has not officially confirmed the authenticity of this claim. Given that numerous enterprises have already applied for trial access to the relevant models, if this report proves true, it could pose indirect security risks to leading technology firms and crypto-related businesses.
According to Decrypt, Mozilla recently revealed that Anthropic’s latest AI model, Claude Mythos, identified 271 security vulnerabilities during internal testing of the Firefox browser; all related vulnerabilities were patched this week. For comparison, a previous Anthropic model had detected only 22 security-sensitive vulnerabilities. Mozilla stated that all discovered vulnerabilities fell within the scope of what top human researchers could identify. Claude Mythos was officially launched in March 2026 and is Anthropic’s most powerful model to date for reasoning, coding, and cybersecurity. It is currently available exclusively to vetted partners—including Amazon, Apple, and Microsoft—under Anthropic’s “Project Glasswing” initiative.
According to Cointelegraph, the Monetary Authority of Singapore (MAS) has urged banks to strengthen their cybersecurity defenses amid heightened regulatory attention triggered by the spread of Anthropic’s Mythos AI model across Asia.
Odaily News Financial officials have warned that Anthropic's Claude Mythos Preview model poses a serious threat to the cybersecurity defenses of the global banking system. (Financial Times)
Odaily News Anthropic has decided to restrict the public release of its Mythos model due to its highly automated cyberattack capabilities. Reports indicate that during internal testing, the model was already capable of independently completing vulnerability discovery and exploitation and of generating multi-step attack plans. Informed sources said that in early testing, Mythos could autonomously build intrusion tools targeting Linux systems and, with guidance, execute complex vulnerability-chain attacks; these capabilities were assessed as potentially posing risks to global infrastructure. Anthropic's management ultimately positioned Mythos as a cyber defense tool and opened it to select institutions for testing in a restricted manner. Industry insiders pointed out that similar models could significantly enhance the efficiency of both cyber offense and defense, while also introducing new security challenges. (Bloomberg)
According to The Information, Coinbase and Binance are seeking access to Anthropic’s restricted AI model, Mythos, as cryptocurrency exchanges and custodians prepare for a new wave of AI-driven cyber threats. The report states that Coinbase is in close communication with Anthropic regarding Mythos; Philip Martin, Coinbase’s Chief Security Officer, said the model will accelerate digital threat detection and defense. Meanwhile, Binance and Fireblocks are also taking steps to assess how significantly it could reshape cyber offense and defense tools.
According to Caixin Global, Wall Street banks including Goldman Sachs and Morgan Stanley are testing Anthropic’s Mythos large language model.