Business

Anthropic Investigates Possible Security Breach of Mythos AI Model

Anthropic, the artificial intelligence company behind the chatbot Claude, is investigating a possible security breach involving its recently released Mythos AI model. The company rolled out Mythos earlier this month to a limited number of major companies, including Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia, as a tool to detect software vulnerabilities.

According to an Anthropic spokesperson, the possible unauthorized access occurred within one of the company's third-party vendor environments. Anthropic confirmed the investigation on April 22, following a Bloomberg report that cited a source familiar with the matter; the report said a small group of unauthorized users had gained access to the tool.

Anthropic emphasized that so far, no breaches or compromises have been detected outside the vendor environment or within its core systems. The company works with a small number of third-party vendors to help develop its AI models, including Mythos.

Mythos was introduced under Anthropic’s initiative called Project Glasswing, designed to help large corporations enhance their cybersecurity by identifying vulnerabilities more effectively than existing AI solutions. The limited release reflected concerns about potential exploitation of the tool by hackers, as Mythos could be used both for defensive purposes and potentially to identify weaknesses in sensitive IT infrastructure.

Why it matters

Security experts and federal officials have expressed concern about the dual-use risk of AI models like Mythos. While the tool aims to bolster defenses against cyber threats, there is apprehension that malicious actors could leverage it to accelerate attacks on critical systems, including those in banking, healthcare, and government sectors. The speed and capability of AI-driven hacking present challenges that traditional cybersecurity methods may struggle to match.

Alissa Valentina Knight, CEO of the cybersecurity AI company Assail, has previously noted that defending networks against human hackers was already difficult, and that the introduction of AI-powered attacks could significantly escalate the risk.

Background

Anthropic unveiled Mythos as part of an effort to provide advanced vulnerability detection tools to a small group of trusted companies. This cautious deployment was intended to prevent broad exposure while enabling these companies to strengthen their defenses. The model’s introduction highlighted ongoing efforts in the AI industry to balance innovation with security considerations amid rising concerns about AI misuse.


About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first coverage grounded in credible reporting, official statements, publicly available information, and relevant source material.
