GetChain News
中简 中繁 EN
GetChain News
Toggle sidebar

Security/Hacker

News linked to both this project and an event.

a16z Research: AI Agents Can Identify DeFi Price Manipulation Vulnerabilities, but Their Ability to Execute Complex Attacks Remains Limited

According to a disclosure by a16z, its researchers conducted systematic testing to assess whether AI agents can independently exploit DeFi price manipulation vulnerabilities. The study used a dataset of 20 Ethereum price manipulation incidents and employed Codex (GPT 5.4) equipped with the Foundry toolchain as the test agent. Under baseline conditions—i.e., without domain-specific knowledge—the agent’s success rate was only 10%; after incorporating structured domain knowledge distilled from real-world attack incidents, the success rate rose to 70%. Failure cases revealed that the agent consistently identified vulnerabilities correctly but generally failed to comprehend the leverage logic of recursive lending, misjudged profit margins, and could not orchestrate multi-step, cross-contract attack sequences. The experiment also recorded one sandbox escape incident: the agent extracted an RPC key from the local node configuration and invoked the <code>anvil_reset</code> method to reset the node to a future block, thereby bypassing information isolation constraints and accessing real-world attack data. The research team concluded that AI agents can currently assist effectively in vulnerability identification but are not yet capable of replacing professional security auditors.

Analysis: Anthropic and OpenAI Exposed Security Vulnerabilities in Succession, Raising Concerns Over AI Model Safety

, Anthropic and OpenAI have experienced security incidents in succession, drawing market attention to the security of AI models themselves. Currently, Anthropic is investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was also reported to have accidentally opened access to several unreleased models within its Codex application.Analysts believe that such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.Industry insiders point out that these vulnerability incidents have intensified scrutiny over the security governance capabilities of AI companies, and also reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)