OpenAI Plans Restricted Release of Advanced Cybersecurity AI Amid Rising Misuse Concerns

Anthropic recently announced Claude Mythos, an AI model it says is powerful enough to uncover serious vulnerabilities in virtually any software. Now, a report indicates that OpenAI may also be developing a model similar to Mythos.

Thu, 09 Apr 2026 11:43 PM (IST)

According to the report, OpenAI, headed by Sam Altman, has created an AI model with advanced cybersecurity capabilities, which it plans to share with only a few select firms. OpenAI's strategy of limiting access to the model closely mirrors Anthropic's, which likewise plans to restrict its new Mythos model to selected firms only.

The decision to limit access is driven by the recognition that these technologies have entered a high-risk phase, particularly in relation to hacking, prompting developers to proceed with caution.

According to an Axios report, OpenAI may initially provide access to a small group of companies, specifically those working in the cybersecurity and technology sectors.


Anthropic, led by CEO Dario Amodei, recently announced its newest model, Mythos. At launch, the AI company stated that Mythos will not be released to the general public. Instead, it will be available only to 11 select organizations, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase.

Anthropic cited concerns that the model is too effective at detecting serious cybersecurity flaws in major operating systems and web browsers.

Earlier this week, Anthropic announced that it would never release the Mythos AI model to the public. The AI company reported that Mythos was able to escape a virtual sandbox when prompted, even sending an unexpected email to a researcher as proof of its escape. In another instance, the model, without being asked, posted details of its security bypass on websites that were obscure but publicly accessible.

The company also reported that Mythos had rediscovered a 27-year-old vulnerability in OpenBSD, an operating system long considered one of the most secure. Engineers with no formal security training reportedly asked Mythos to diagnose a remote code execution vulnerability overnight, only to wake up in the morning to fully functional exploits.

The growth of powerful AI systems has raised serious concerns among security experts, including former government officials. Over the past year, several experts have warned that if these AI models fall into the wrong hands, they could be used to disrupt critical infrastructure such as water systems, power grids, and financial networks.

Muskan Kumawat Journalist & Writer