SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

Anthropic launches Project Glasswing for cyber defence

Thu, 9th Apr 2026

Anthropic has launched Project Glasswing with a group of technology, finance and security organisations. The initiative focuses on using Anthropic's Claude Mythos Preview model for defensive cybersecurity work.

Anthropic named Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks as launch partners. More than 40 additional organisations that build or maintain critical software infrastructure have also been given access.

The project is intended to help organisations responsible for widely used software and infrastructure identify and address security weaknesses. Anthropic also plans to share lessons from the work with the wider industry.

Anthropic has committed up to USD $100 million in usage credits and USD $4 million in donations to open-source security organisations connected to the effort. Participants will use Claude Mythos Preview through the Claude API, Amazon Bedrock, Google Cloud's Vertex AI and Microsoft Foundry.

Claude Mythos Preview is a general-purpose frontier model focused on coding and agentic tasks, according to Anthropic. The company says the model has already identified thousands of zero-day vulnerabilities across critical infrastructure and is being made available as a gated research preview.

Security Focus

The project comes as technology groups seek to show that advanced AI systems can be used not only to generate software and automate tasks, but also to strengthen cyber defence. It also reflects a broader push by AI developers and large software providers to work more closely with security vendors and maintainers of open-source systems.

Melissa Ruzzi, Director of AI at AppOmni, said the use of AI in cybersecurity still depends heavily on specialist knowledge and careful handling of data.

"Efficiently applying AI in complex topics with high volumes of data such as cybersecurity is no simple task. Simply feeding untreated data directly into an LLM will most likely not provide the expected added value, even with the most sophisticated model, due to the intrinsic limitations of LLMs that are inherently non-deterministic and focused on language handling," said Ruzzi.

She said expertise across multiple areas of security remains a central challenge for companies building AI systems for cyber work.

"Domain expertise combined with AI expertise is key for any AI application in security. The big challenge here is having expertise within each of the different security domains involved, such as identity security, endpoint security and cloud security," said Ruzzi.

Her comments highlight one of the main constraints facing AI-led vulnerability discovery: the practical difficulty of applying large models to fragmented, highly technical security environments. Security teams often work across identity systems, cloud environments, endpoint tools and software supply chains, each with different data and operational requirements.

Ruzzi also said broader collaboration across the industry could help improve safety around AI use.

"Adding more security coverage to AI is in general always a good move, especially when different security companies come together to join forces on a common mission of making AI use safer," said Ruzzi.

The growth of software-as-a-service products with AI features is adding another layer of risk for security teams. It creates a wider set of systems, integrations and workflows that may need to be reviewed for vulnerabilities.

"It's also important to keep in mind the sheer volume of AI used as part of SaaS - both tools that are used within SaaS apps, and those that expand further into other functionalities. This can create unexpected risks that require careful attention and consideration," said Ruzzi.