Mythos aims to democratize the gaming world and allow players and creators to participate in the value chain. It is grounded in support for multi-chain ecosystems, unified marketplaces, decentralized financial systems, decentralized governance mechanisms, and multi-token game economies.
Anthropic and OpenAI have experienced security incidents in succession, drawing market attention to the security of AI models themselves. Currently, Anthropic is investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was also reported to have accidentally opened access to several unreleased models within its Codex application. Analysts believe that such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points. Industry insiders point out that these vulnerability incidents have intensified scrutiny over the security governance capabilities of AI companies, and also reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)
According to Cointelegraph, the Monetary Authority of Singapore (MAS) has urged banks to strengthen their cybersecurity defenses amid heightened regulatory attention triggered by the spread of Anthropic’s Mythos AI model across Asia.
According to Decrypt, OpenAI CEO Sam Altman stated that Anthropic is promoting its AI model Claude Mythos through “fear-based marketing,” using narratives about security risks to justify its limited-open strategy. Claude Mythos has recently drawn attention for its ability to autonomously discover software vulnerabilities and perform complex cybersecurity operations. The report notes that Mozilla previously disclosed that the model identified 271 vulnerabilities in the Firefox browser during testing. Meanwhile, discussions surrounding the model’s potential offensive cybersecurity risks continue to intensify. Altman also emphasized that OpenAI will not scale back its infrastructure investments and will continue expanding its computational capabilities.
23pds, Chief Information Security Officer of SlowMist Technology, retweeted a post from the dark web intelligence account Dark Web Intelligence (@DailyDarkWeb), stating that the hacker group ShinyHunters claims to have breached internal systems related to Anthropic’s Mythos model and has shared screenshots—including those of the user management panel, AI experiment dashboard, and model performance and cost analysis. As of now, Anthropic has not officially confirmed the authenticity of this claim. Given that numerous enterprises have already applied for trial access to the relevant models, if this report proves true, it could pose indirect security risks to leading technology firms and crypto-related businesses.
According to Decrypt, Mozilla recently revealed that Anthropic’s latest AI model, Claude Mythos, identified 271 security vulnerabilities during internal testing of the Firefox browser; all related vulnerabilities were patched this week. For comparison, a previous Anthropic model had detected only 22 security-sensitive vulnerabilities. Mozilla stated that all discovered vulnerabilities fell within the scope of what top human researchers could identify. Claude Mythos was officially launched in March 2026 and is Anthropic’s most powerful model to date for reasoning, coding, and cybersecurity. It is currently available exclusively to vetted partners—including Amazon, Apple, and Microsoft—under Anthropic’s “Project Glasswing” initiative.
Odaily News: Anthropic has decided to restrict the public release of its Mythos model due to its highly automated cyber attack capabilities. Reports indicate that during internal testing, the model was already capable of independently completing vulnerability discovery and exploitation processes, and generating multi-step attack plans. Informed sources stated that in early testing, Mythos could autonomously build intrusion tools targeting Linux systems and, with guidance, execute complex vulnerability chain attacks. These capabilities were assessed as potentially posing risks to global infrastructure. Anthropic's management ultimately positioned Mythos as a cyber defense tool and opened it for testing to select institutions in a restricted manner. Industry insiders pointed out that similar models could significantly enhance the efficiency of cyber offense and defense, while also potentially introducing new security challenges. (Bloomberg)
Odaily News: Financial officials have warned that Anthropic's Claude Mythos Preview model poses a serious threat to the cybersecurity defenses of the global banking system. (Financial Times)