SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers
Digital shield ai brain patterns secure locks network asia pacific cybersecurity

AI, zero trust & new work patterns set 2026 security agenda

Thu, 4th Dec 2025

Security experts are urging organisations to reevaluate their approaches to technology adoption in 2026, as advances in artificial intelligence, quantum computing, automation and work patterns raise complex new risks and challenges for digital trust and resilience.

AI both sides

Artificial intelligence is shaping cybersecurity defences and attacks alike. Automated threat detection, incident response and analytics are improving, but so are the techniques of criminal groups empowered by AI. Organisations find themselves defending against increasingly sophisticated adversaries able to launch targeted operations with speed and precision.

"These 'AI-powered' threats highlight the importance of identity and access management within AI environments. Implementing least-privileged access, continuous session monitoring and role-based permissions ensures that only authorised users - human or machine - can interact with sensitive datasets and training models. In 2026, success will belong to those who treat AI security not as an afterthought but as a prerequisite for innovation," said Takanori Nishiyama, SVP APAC and Japan Country Manager, Keeper Security.

The dual use of AI for defence and exploitation has shortened reaction time and increased the margin for error. Attacks now leverage prompt injection and data poisoning, raising the stakes for protecting both datasets and AI models from malicious manipulation.

Zero trust priority

Across the Asia-Pacific region, rising digital adoption is matched by growing attack sophistication. The adoption of a zero-trust security model is emerging as a fundamental requirement. This approach verifies every access request, restricts privileges, and treats no device or identity as trusted by default. The practice is critical in environments increasingly reliant on both machine-to-machine and human interactions.

"A zero-trust security model where every access request is verified, and every privilege is temporary, provides that adaptability. In a world of autonomous systems and machine-to-machine communication, zero trust ensures that no identity, device or process is trusted by default. When paired with Privileged Access Management (PAM), zero trust enforces strict oversight of high-level accounts, reduces lateral movement after compromise and strengthens defences against both human and AI-driven attacks. This layered approach aligns directly with evolving global directives that emphasises identity-first security, secure software development and least-privilege access as foundational cybersecurity principles," said Nishiyama.

Non-human identities

The expansion of AI and automation is forcing organisations to contend with a growing population of non-human identities (NHIs), such as bots, service accounts and AI agents. These automated digital entities can autonomously access data, APIs and applications, but if left unmonitored can create critical security blind spots and vulnerabilities.

"Applying zero-trust and least-privilege principles to machine identities must be considered essential. Every Non-Human Identity (NHI) should be uniquely identifiable, auditable and subject to the same access policies as human users. Extending identity and access management frameworks to include these automated entities ensures accountability and prevents credential misuse in increasingly autonomous environments," added Nishiyama.

Industry forecasts suggest the number of AI agents could soon outnumber people online, multiplying the scale of oversight required. Prakash Mana, CEO of Cloudbrink, said, "2026 will be the year that AI agents outnumber people. By the end of the year expect to see at least one agent per connected person. In 3 years, it will be up to 10 AI agents per connected person. This is a huge security issue that security teams should be planning for now. Most AI agent developers are focused on efficiency, not security. If you don't have an AI policy already, you need to create one now. Step one to ensuring users comply with the policy is to create visibility. Figure out how to monitor AI to see what it's accessing, which users are using it, and what they're using it for."

Secure-by-design

Experts are underscoring the importance of secure-by-design software development. Integrating security mechanisms, such as multi-factor authentication and comprehensive logging, from the outset reduces the likelihood of reactive fixes later. AI tools themselves need protection from data poisoning, bias and unauthorised modification.

"AI itself must be protected from model bias, data poisoning and unauthorised manipulation, reinforcing the need for identity controls, PAM and zero-trust architectures as the foundation of secure software ecosystems," said Nishiyama.

Quantum concerns

The advent of quantum computing brings both new possibilities and significant risks, particularly for encryption. Cybercriminals are already harvesting data with the intention of decrypting it in the future once quantum capabilities are mature enough to break current algorithms.

"Preparing for the post-quantum era requires organisations to begin adopting quantum-resistant encryption now. The 'store-now, decrypt-later' threat where adversaries harvest encrypted data today for decryption once quantum capabilities mature, demands proactive mitigation through cryptographic agility and long-term data protection strategies. At the same time, regulatory frameworks across APAC are tightening around privacy, data residency and AI governance. Organisations that embed compliance into their security architecture - rather than treating it as a box-checking exercise - will be better positioned to adapt to new standards while maintaining innovation and speed," said Nishiyama.

Work patterns shift

Changes in workplace habits are adding to the complexity. Analysis of workforce access data indicates that 'work from anywhere' employees increasingly blend office and remote work, often working outside traditional hours. Increased device connectivity through smart wearables, translation earbuds and personal robots further strains network and security infrastructure.

Mana said, "Work from anywhere will become work anytime. Back-to-office mandates have pulled many workers back to the office, but WFH habits die hard. Many tech workers are used to logging in at times convenient for their schedule or work habits. Our usage data early this year showed heavy transfer of data on Fridays, an indication that 'work from anywhere' employees actually put in longer hours than their '9 to 5' counterparts - with heavy usage starting at 7:00 am and continuing to 7:00 pm. In 2026 we expect to see more workers logging in both at the office and at home in their off-hours, which may temporarily increase productivity, but burn workers out more quickly. Companies will need to focus on worker experience as well as productivity."

AI threat landscape

Corporates face threats from deepfake attacks and AI-driven scams. Advancements in real-time cloning technology enable attackers to impersonate executives or alter digital communications, making phishing and business email compromise schemes harder to detect and mitigate.

"Deepfake-driven attacks will become the norm in the corporate world as cybercriminals embrace AI. Imagine attacks that use real-time voice and video cloning to impersonate executives, or fake 'live' Zoom/Teams scams, or AI-written business email compromise (BEC) attacks that adapt mid-conversation. If you can imagine it, cybercriminals can do it. Not only are these attacks more difficult to detect, they are cheaper and easier for criminals who can now focus on compromising people to get at a company. Add these individual AI attacks to employees that work from anywhere and it becomes critical for corporate security controls to move away from protecting just the office or the organisation with perimeter or network security. Every user, and every device, should be verified every time, regardless of location," said Mana.

AI and infrastructure

The rise in AI adoption comes with significant infrastructure demands. As companies deploy more AI-powered applications, the need for robust networking and tailored hardware, including GPU-shared infrastructure, will grow. IT leaders must plan for increased data throughput and maintain seamless user experiences in distributed work environments.

"Training AI is about to give corporate networks a workout. With more companies adopting agents creating AI apps, the onus will be on IT and netops to condition their networks for the big lift in training AI. When AI apps are in learning mode they can access terabytes or petabytes of data very quickly, and they need high speeds to do it. Companies may need to alter their architecture to leverage the GPU on user machines, and create a time-sharing GPU infrastructure that distributes the AI processing towards users of AI rather than centralised data centres. With AI-capable devices and laptops taking some of the load, all users will get a better experience." said Mana.

"Cybersecurity can no longer lag behind transformation cycles; it must define them. Enterprises that combine AI-enhanced defences with zero-trust principles, enforce PAM to govern both human and non-human identities and integrate secure-by-design practices will strengthen both resilience and reputation," concluded Nishiyama.
Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X