A small group of users bypassed access restrictions to Anthropic’s Mythos AI model, designed to identify software vulnerabilities. The incident, which involved a third-party provider, is raising fresh concerns about tools capable of detecting and exploiting flaws at high speed.

Anthropic has launched an investigation after a small group of users gained unauthorized access to its advanced AI model, Mythos, via Discord. The information, first reported by Bloomberg, concerns a test version rolled out in late February to a limited number of companies.

The access did not result from a direct breach of the company’s systems. It reportedly occurred through a third-party vendor environment, involving an employee and the use of online investigation tools. The group involved is said to operate within a private forum focused on gathering information about unreleased models.

Designed to identify software vulnerabilities, the Mythos model can also facilitate their exploitation. Internal documents mention thousands of vulnerabilities detected, including zero-days, across several operating systems and web browsers.

The project, known as Glasswing, involves around forty technology players, including Amazon, Google, Microsoft, Apple, and Cisco, tasked with testing the model’s vulnerability detection and automated patching capabilities.

The incident comes as regulators in Australia, South Korea, and across Europe assess the risks associated with such tools, which can identify and exploit vulnerabilities within hours.
