GetChain News
中简 中繁 EN
GetChain News
Toggle sidebar
Codex

Codex

Active

Boosting Layer for Bunni

News Heat Trend

Project Overview

Codex, the boosting layer for Bunni. Bunni is the premiere liquidity engine for Uni v3, powered by the veLIT governance. Just like on Curve, APR on Bunni pools may be boosted by holding a certain amount of veLIT tokens. This creates alignment between being a liquidity provider and a protocol stakeholder.

a16z Research: AI Agents Can Identify DeFi Price Manipulation Vulnerabilities, but Their Ability to Execute Complex Attacks Remains Limited

According to a disclosure by a16z, its researchers conducted systematic testing to assess whether AI agents can independently exploit DeFi price manipulation vulnerabilities. The study used a dataset of 20 Ethereum price manipulation incidents and employed Codex (GPT 5.4) equipped with the Foundry toolchain as the test agent. Under baseline conditions—i.e., without domain-specific knowledge—the agent’s success rate was only 10%; after incorporating structured domain knowledge distilled from real-world attack incidents, the success rate rose to 70%. Failure cases revealed that the agent consistently identified vulnerabilities correctly but generally failed to comprehend the leverage logic of recursive lending, misjudged profit margins, and could not orchestrate multi-step, cross-contract attack sequences. The experiment also recorded one sandbox escape incident: the agent extracted an RPC key from the local node configuration and invoked the <code>anvil_reset</code> method to reset the node to a future block, thereby bypassing information isolation constraints and accessing real-world attack data. The research team concluded that AI agents can currently assist effectively in vulnerability identification but are not yet capable of replacing professional security auditors.

Analysis: Anthropic and OpenAI Exposed Security Vulnerabilities in Succession, Raising Concerns Over AI Model Safety

, Anthropic and OpenAI have experienced security incidents in succession, drawing market attention to the security of AI models themselves. Currently, Anthropic is investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was also reported to have accidentally opened access to several unreleased models within its Codex application.Analysts believe that such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.Industry insiders point out that these vulnerability incidents have intensified scrutiny over the security governance capabilities of AI companies, and also reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)

OpenAI Officially Releases GPT-5.5, Optimized for Agent Tasks and Complex Work Scenarios

According to OpenAI’s official announcement, OpenAI has officially launched GPT-5.5—a next-generation intelligent model designed specifically for handling complex objectives, invoking tools, self-verification, and completing multi-step tasks. The model excels in code writing and debugging, online research, data analysis, document creation, and cross-tool operations. While maintaining response speeds comparable to those of GPT-5.4, GPT-5.5 demonstrates improvements across nearly all evaluation metrics and significantly reduces the number of tokens required to complete equivalent tasks. GPT-5.5 is now available to ChatGPT and Codex Plus, Pro, Business, and Enterprise users. Concurrently, a GPT-5.5 Pro version—optimized for highly demanding tasks—has also been released.

Analysis: Anthropic and OpenAI Exposed Security Vulnerabilities in Succession, Raising Concerns Over AI Model Safety

, Anthropic and OpenAI have experienced security incidents in succession, drawing market attention to the security of AI models themselves. Currently, Anthropic is investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was also reported to have accidentally opened access to several unreleased models within its Codex application.Analysts believe that such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.Industry insiders point out that these vulnerability incidents have intensified scrutiny over the security governance capabilities of AI companies, and also reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)

OpenAI Launches ChatGPT Images 2.0, Introducing Image Reasoning Capability for the First Time

According to OpenAI’s official announcement, ChatGPT Images 2.0 has officially launched and is now available to all ChatGPT and Codex users starting today. This new version is the first image model with reasoning capabilities: it can perform real-time web searches when a reasoning model is selected, generate multiple distinct images from a single prompt, self-verify its outputs, and support generating functional QR codes. Additionally, the update delivers significant improvements in multilingual text rendering, visual style fidelity—including photorealism, cinematic aesthetics, pixel art, and comics—and flexible aspect ratios (ranging from 3:1 to 1:3). Its knowledge cutoff date has been updated to December 2025. The image reasoning feature is currently available only to Plus, Pro, and Business users; Enterprise access is forthcoming. The underlying model, gpt-image-2, is now also available to developers.

OpenAI Announces Launch of ChatGPT Images 2.0 Image Model

Odaily News OpenAI has announced the launch of the ChatGPT Images 2.0 image model, which significantly enhances the ability to handle complex visual tasks, with upgrades in instruction understanding, object placement and relationship expression, as well as high-density text rendering. The model supports multilingual text generation, can accurately present non-English content in images, and improves overall semantic coherence. ChatGPT Images 2.0 is now available to all ChatGPT and Codex users, with the image feature possessing "thinking capability" open to Plus, Pro, and Business users (Enterprise support coming soon). The underlying model gpt-image-2 is also available via API access.

OpenAI Resets All Codex Plan Quotas to Celebrate Its First Anniversary

Odaily News According to Thibault Sottiaux, Head of Engineering for OpenAI Codex, on platform X, to celebrate the one-year anniversary of Codex's launch, the company has reset the usage quota limits for all plans.He stated that this announcement was completed using the newly launched Computer Use feature of Codex, with the model clicking the "RESET" button in the browser to demonstrate the operation.

Related news

a16z Research: AI Agents Can Identify DeFi Price Manipulation Vulnerabilities, but Their Ability to Execute Complex Attacks Remains Limited

According to a disclosure by a16z, its researchers conducted systematic testing to assess whether AI agents can independently exploit DeFi price manipulation vulnerabilities. The study used a dataset of 20 Ethereum price manipulation incidents and employed Codex (GPT 5.4) equipped with the Foundry toolchain as the test agent. Under baseline conditions—i.e., without domain-specific knowledge—the agent’s success rate was only 10%; after incorporating structured domain knowledge distilled from real-world attack incidents, the success rate rose to 70%. Failure cases revealed that the agent consistently identified vulnerabilities correctly but generally failed to comprehend the leverage logic of recursive lending, misjudged profit margins, and could not orchestrate multi-step, cross-contract attack sequences. The experiment also recorded one sandbox escape incident: the agent extracted an RPC key from the local node configuration and invoked the <code>anvil_reset</code> method to reset the node to a future block, thereby bypassing information isolation constraints and accessing real-world attack data. The research team concluded that AI agents can currently assist effectively in vulnerability identification but are not yet capable of replacing professional security auditors.

OpenAI Officially Releases GPT-5.5, Optimized for Agent Tasks and Complex Work Scenarios

According to OpenAI’s official announcement, OpenAI has officially launched GPT-5.5—a next-generation intelligent model designed specifically for handling complex objectives, invoking tools, self-verification, and completing multi-step tasks. The model excels in code writing and debugging, online research, data analysis, document creation, and cross-tool operations. While maintaining response speeds comparable to those of GPT-5.4, GPT-5.5 demonstrates improvements across nearly all evaluation metrics and significantly reduces the number of tokens required to complete equivalent tasks. GPT-5.5 is now available to ChatGPT and Codex Plus, Pro, Business, and Enterprise users. Concurrently, a GPT-5.5 Pro version—optimized for highly demanding tasks—has also been released.

Analysis: Anthropic and OpenAI Exposed Security Vulnerabilities in Succession, Raising Concerns Over AI Model Safety

, Anthropic and OpenAI have experienced security incidents in succession, drawing market attention to the security of AI models themselves. Currently, Anthropic is investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was also reported to have accidentally opened access to several unreleased models within its Codex application.Analysts believe that such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.Industry insiders point out that these vulnerability incidents have intensified scrutiny over the security governance capabilities of AI companies, and also reflect that the security systems of current AI technology still need improvement amid rapid development. (The Information)

Former OpenAI Engineer Founds Blackstar, Completes $12 Million Seed Funding Round

Odaily News: Daniel Edrisian, a former engineer from the OpenAI Codex team, has announced his departure to found AI hardware company Blackstar Computers. The company has completed a $12 million seed funding round led by Abstract, with participation from SV Angel, Naval Ravikant, Chapter One, and Timeless.Blackstar positions itself as a new type of computing device, aiming to redefine the computing experience from the hardware, software, and interaction levels. Edrisian stated that while current software development is mature, further enhancement of human-AI interaction requires innovation at the operating system level. The company currently has a team of about 8 people, distributed between San Francisco and Shenzhen, and has not yet launched a public product.

OpenAI Launches ChatGPT Images 2.0, Introducing Image Reasoning Capability for the First Time

According to OpenAI’s official announcement, ChatGPT Images 2.0 has officially launched and is now available to all ChatGPT and Codex users starting today. This new version is the first image model with reasoning capabilities: it can perform real-time web searches when a reasoning model is selected, generate multiple distinct images from a single prompt, self-verify its outputs, and support generating functional QR codes. Additionally, the update delivers significant improvements in multilingual text rendering, visual style fidelity—including photorealism, cinematic aesthetics, pixel art, and comics—and flexible aspect ratios (ranging from 3:1 to 1:3). Its knowledge cutoff date has been updated to December 2025. The image reasoning feature is currently available only to Plus, Pro, and Business users; Enterprise access is forthcoming. The underlying model, gpt-image-2, is now also available to developers.

OpenAI Announces Launch of ChatGPT Images 2.0 Image Model

Odaily News OpenAI has announced the launch of the ChatGPT Images 2.0 image model, which significantly enhances the ability to handle complex visual tasks, with upgrades in instruction understanding, object placement and relationship expression, as well as high-density text rendering. The model supports multilingual text generation, can accurately present non-English content in images, and improves overall semantic coherence. ChatGPT Images 2.0 is now available to all ChatGPT and Codex users, with the image feature possessing "thinking capability" open to Plus, Pro, and Business users (Enterprise support coming soon). The underlying model gpt-image-2 is also available via API access.