With no end in sight for major security breaches, it seems assured that security spending will continue to rise.
In 2019, cyber attacks will continue to have a significant impact, raising the urgency of an approach to security that goes beyond “more of the same”.
The internet was designed with the objective of making it easy for computers across the world to communicate with each other.
Indeed, it has proven extraordinarily successful in achieving connectivity at scale.
Unfortunately, as its designers acknowledge, security was not part of the design.
Hence, as enterprises accumulate more data and become more connected, there is increasing motivation to consider architectures in which security is built in from the outset.
Enterprises across the region can achieve fundamentally better security by adopting one of the foundational concepts of computer science, the principle of least privilege, combined with newer technologies like network virtualisation, to achieve an intrinsically secure architecture.
For example, in a well-documented hack of a retailer, credentials provided to a heating and cooling contractor were used to ultimately gain access to the payments network.
This is a clear demonstration of what happens when least privilege is not applied – the contractor's credentials granted far more privilege than the job required.
Such wide-open network access is commonplace, in large part because technologies that apply least privilege to networking – such as network virtualisation and microsegmentation – have only become available relatively recently and have yet to achieve widespread adoption.
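To make the idea concrete, microsegmentation boils down to a default-deny policy: no workload can talk to another unless the flow is explicitly permitted. The sketch below is a minimal illustration of that evaluation logic; the workload names, ports, and policy structure are hypothetical examples, not any vendor's actual policy format.

```python
# Minimal sketch of a default-deny microsegmentation check.
# All workload names and flows below are hypothetical.

ALLOWED_FLOWS = {
    # (source workload, destination workload, destination port)
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
    ("hvac-vendor", "building-mgmt", 443),  # vendor gets one narrow path
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow is permitted only if explicitly listed."""
    return (src, dst, port) in ALLOWED_FLOWS

# The vendor can reach the system it services...
print(is_allowed("hvac-vendor", "building-mgmt", 443))  # True
# ...but not the payments network, even with valid credentials.
print(is_allowed("hvac-vendor", "payments", 443))       # False
```

Under this model, the retailer breach described above fails at the network layer: stolen contractor credentials are worthless against segments the contractor was never granted.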
In a related development, security needs to move away from the traditional approach of chasing after arbitrary forms of malware.
There are many millions of different strains of malware designed with the explicit goal of escaping detection.
Chasing after malware is analogous to looking for a needle in a haystack.
A better approach is to focus on “known good” – ensuring that the code running on enterprise systems is the correct code that was provisioned to run, and nothing more.
We can move from chasing bad to ensuring good.
Again, the concept is not new, but new technologies are making this feasible.
For example, modern data centers use automation tools to provision software, giving us access to a manifest of the expected good behaviour.
Virtualisation gives us an enforcement point from which to observe the behaviour and ensure it conforms to what is expected.
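One simple way to realise "known good" is to record a cryptographic fingerprint of each binary at provisioning time and later verify that only matching code runs. The sketch below, assuming a hypothetical manifest keyed by install path, shows the core comparison; real enforcement points would hook process launch rather than take bytes as an argument.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as the identity of a provisioned binary."""
    return hashlib.sha256(data).hexdigest()

# Manifest built by the provisioning tool (contents are illustrative).
provisioned = b"expected application code"
manifest = {"/opt/app/server": fingerprint(provisioned)}

def conforms(path: str, running_code: bytes) -> bool:
    """Known-good check: only code recorded in the manifest may run."""
    return manifest.get(path) == fingerprint(running_code)

print(conforms("/opt/app/server", provisioned))       # True
print(conforms("/opt/app/server", b"tampered code"))  # False
```

Note the inversion: nothing here tries to recognise malware. Anything that is not the expected code – old, new, or never seen before – simply fails the check.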
Machine learning algorithms can also play a role.
Machine learning systems are poor at extrapolation – they recognise what they have seen before, whether they are used for image classification or to observe the software running in a data center.
Thus, machine learning is unlikely to recognise new forms of malware that were not part of the training dataset.
That same property becomes a strength when these algorithms are trained on reference datasets of how non-compromised applications and processes behave.
They can be trained to monitor “known good” behaviour and alert or take other pre-emptive actions when unexpected behaviour, indicative of a breach, is observed.
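A toy version of this idea can be sketched with nothing more than a frequency table: learn the set of behaviours observed during a clean training window, then flag anything outside it. The event format and class below are illustrative assumptions, not a production detector, which would use richer features and statistical thresholds.

```python
from collections import Counter

class KnownGoodMonitor:
    """Toy model of 'train on known good, alert on deviation'.

    Events are (process, action) pairs; anything never seen during
    the clean training window is flagged as anomalous.
    """

    def __init__(self):
        self.baseline = Counter()

    def train(self, events):
        """Record behaviours observed while the system is known clean."""
        self.baseline.update(events)

    def is_anomalous(self, event) -> bool:
        """Alert on any behaviour absent from the baseline."""
        return self.baseline[event] == 0

monitor = KnownGoodMonitor()
monitor.train([("nginx", "connect:app-tier:8443"),
               ("nginx", "read:/etc/nginx/nginx.conf")])

print(monitor.is_anomalous(("nginx", "connect:app-tier:8443")))  # False
print(monitor.is_anomalous(("nginx", "spawn:/bin/sh")))          # True
```

The design choice matters: because the model only has to characterise expected behaviour, a breach does not need to resemble any previously catalogued malware to trigger an alert.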
With IDC predicting that more than 50% of security alerts will be handled by AI-powered automation by 2022, machine learning is ready for primetime, but we must be acutely aware of its strengths and limitations.
Finally, while least privilege and ensuring good are key principles, enterprises in the Asia-Pacific region cannot ignore other basic cyber hygiene practices like patching, encryption of data at rest and in motion, and multi-factor authentication.
One of the most serious compromises of corporate data that was widely reported in 2017 happened because the company failed to patch for known vulnerabilities.
In fact, the Online Trust Alliance reported earlier this year that 93% of breaches are preventable through good cyber hygiene.