SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

OpenAI launches Trusted Access for Cyber with major names

Fri, 17th Apr 2026

OpenAI has launched Trusted Access for Cyber and named the first organisations taking part. The programme includes security researchers, software security groups and large companies.

The initiative uses a tiered access model for advanced cyber tools, with access tied to trust, validation and safeguards. The aim is to broaden access to cyber defence tools while tightening controls as systems become more capable.

Alongside the launch, OpenAI committed USD $10 million in API credits through a Cybersecurity Grant Program. The funding is intended to support groups working on open-source software security, vulnerability research and critical infrastructure protection.

Initial recipients include Socket and Semgrep, which focus on software supply chain security, as well as Calif and Trail of Bits, which combine frontier AI models with vulnerability research. OpenAI said it wants to work with more teams that have a record of finding and fixing vulnerabilities in open-source software and critical systems.

Broad group

The first organisations named as supporters of the wider effort span banking, technology and cyber security. They include Bank of America, BlackRock, BNY, Citi, Cisco, Cloudflare, CrowdStrike, Goldman Sachs, iVerify, JPMorgan Chase, Morgan Stanley, NVIDIA, Oracle, Palo Alto Networks, SpecterOps and Zscaler.

The list suggests a dual focus: one part of the programme is aimed at specialist defenders and researchers, while another is designed to test how advanced cyber tools work in some of the most complex corporate computing environments.

According to OpenAI, the participating organisations protect digital infrastructure used widely across the economy and will provide feedback from real-world use. That feedback is expected to inform the company's safety systems and shape how defensive tools are deployed more broadly.

OpenAI has also provided access to GPT-5.4-Cyber to the U.S. Center for AI Standards and Innovation and the UK AI Security Institute. Those bodies are expected to evaluate the model's cyber-related performance and safeguards.

Defensive focus

The launch comes as AI developers face growing pressure to show that increasingly capable models can be used in security work without making offensive misuse easier. Cyber security has become one of the clearest test cases for that balancing act, because the same systems that help defenders analyse code, detect weaknesses and speed up incident response can also be relevant to attackers.

Trusted Access for Cyber appears to be OpenAI's answer to that tension. Rather than making advanced cyber features broadly available at once, it is tying access to a combination of identity, track record and operational safeguards.

The emphasis on smaller teams and open-source maintainers is also notable. Many organisations responsible for widely used software components operate with limited security resources, despite the downstream importance of their code to governments, companies and consumers. OpenAI's reference to teams without round-the-clock incident response staff suggests it sees a gap between the cyber resources available to major corporations and those available to developers whose software underpins large parts of the digital economy.

That gap has become harder to ignore after a series of software supply chain incidents in recent years showed how weaknesses in a single product or component can spread quickly across sectors. By directing credits to groups focused on supply chain security and vulnerability discovery, OpenAI is placing early emphasis on areas where small improvements can have broad effects.

Oversight tests

The inclusion of public-sector evaluators in the US and UK adds an oversight element to the programme. Independent or semi-independent testing of cyber-related safeguards has become more important as governments and standards bodies try to understand where advanced AI tools may help defenders and where they may introduce new risks.

For large financial institutions and security vendors, participation may also offer a way to shape how AI cyber tools are governed before they become more common in day-to-day security operations. Banks and infrastructure-heavy companies have strong incentives to improve detection and response, but they also tend to face strict internal controls and regulatory scrutiny when adopting new technology in sensitive environments.

OpenAI framed cyber defence as a shared problem that depends on many types of organisations, including public institutions, nonprofits, maintainers, researchers and businesses. It said the programme is intended to reflect that range and build the trust, verification and accountability needed to expand access to advanced defensive tools.

OpenAI plans to keep expanding Trusted Access for Cyber as it learns from participants, with safeguards that increase alongside model capability.