SecurityBrief Asia - Technology news for CISOs & cybersecurity decision-makers

Singapore executives confident in AI, but lag in risk controls

Wed, 9th Jul 2025

Senior business leaders in Singapore remain confident in their artificial intelligence (AI) systems, despite many organisations reporting a lack of robust governance to adequately manage potential risks.

According to the 2025 EY Responsible AI Pulse survey, all respondents in Singapore—reflecting a global trend—reported that their organisations have either already integrated AI into most or all of their business initiatives or are in the process of doing so as part of a broader strategic plan. However, only 53% of executives in Singapore, compared to 71% globally, indicated their firms possess moderate to strong controls to protect AI systems from dangers such as unauthorised access, corruption or theft.

The survey gathered responses from 975 senior leaders spanning 21 countries, including 30 from Singapore, between March and April 2025. It aimed to gauge how C-suite executives are incorporating responsible AI practices into business strategies, decision-making, and innovation.

These findings contrast with those from an earlier EY study, Reimagining Industry Futures, conducted in November 2024, which found that just 30% of Singapore respondents had fully integrated or selectively rolled out AI into core business and IT workflows, while the majority were still at pilot or proof-of-concept stages. The earlier study also canvassed C-suite leaders and senior decision makers, such as operations directors and product heads. These differing figures may suggest a lack of clarity or consistency in how executives define and interpret AI integration within their organisations.

Ambition and operational reality

"There is clear ambition to scale AI across organizations, but ambition must be grounded in operational reality. True integration requires reengineering core business processes and redesigning functional roles. With agentic AI, there may be a complete rewiring of workflows. This, in turn, reshapes workforce structures. It is essential to embed systemic measurements and compliance checks to ensure that human-centered, AI-powered services remain robust and adaptable as transformation unfolds.
"In Singapore, the government has taken proactive steps to support enterprises on their AI journey through programs like the Enterprise Compute Initiative. Even with this strong ecosystem, organizations need alignment between business and technology leaders. Otherwise, many may risk overestimating their progress."

This was the perspective shared by Manik Bhandari, Asean Artificial Intelligence and Data Leader at EY.

Governance challenges

The EY Responsible AI Pulse survey highlighted ongoing challenges with the development of governance frameworks. Nearly half of senior executives surveyed—47% in Singapore (51% globally)—agreed it remains difficult to create such frameworks for current AI technologies.

Looking ahead to next-generation AI, the difficulty appears even more pronounced. Sixty-seven percent of Singapore respondents, compared to 49% globally, agreed that their current approach to technology risk management would not be sufficient to face new challenges on the horizon.

Despite increased AI adoption, fewer than half of Singapore executives—43%, compared to 50% globally—are investing in robust governance structures designed to manage the risks associated with emerging AI technologies.

Adoption outpaces risk awareness

The majority of C-suite leaders expect to work with new AI technologies over the coming two years. In Singapore, 90% of executives (global 94%) confirmed they are already using or planning to use agentic AI, which refers to next-wave, more autonomous AI systems. Yet only 55% of this group said they are familiar with the risks these technologies pose.

Knowledge gaps widen further with multi-modal AI, which can process multiple types of data inputs, such as text, images, and audio. Of Singapore's decision makers, 86% plan to use such technologies within two years (global 94%), but only 43% said they are aware of the associated risks.

"As organizations push to scale AI, governance must evolve in tandem. Without the right oversight, even well-intentioned AI deployments can lead to unintended consequences, from ethical risks to reputational damage. Embedding clear accountability and control mechanisms will be essential to sustain AI's long-term value."

Bhandari stressed the importance of ensuring governance frameworks keep pace with the rapid expansion of AI implementation across enterprises.
