Hardly a day goes by without headlines of yet another cyber-attack, data breach or identity theft. Even boardrooms have become familiar with terms like "ransomware", once an alien concept that belonged to the "geeks in IT".
Businesses, educational institutions, and indeed all levels of government have become more aware of the damage such attacks can inflict.
Criminals continue to use old-school methods such as phishing scams to lure their victims, but they also keep a keen eye on the next technological step, employing more sophisticated tools and techniques to dupe people.
We've now seen the advent of artificial intelligence being used for cybercrime, with a recent, widely reported case giving a flavour of what's to come.
In March, the CEO of a UK-based energy firm received a phone call from his "boss" - or so he thought - who runs its German parent company.
His "boss" instructed him to transfer €220,000 to a Hungarian supplier, saying it was urgent and needed to be done in an hour.
The UK chief executive had no reason to question the authenticity of the call, as he recognised his boss's slight German accent and tone of voice, the company's insurer, Euler Hermes Group SA, revealed.
He had no reason to suspect it was all a hoax, yet it was, and the incident could well be one of the first voice-spoofing cybercrimes to use AI-based software.
Does this sound familiar?
Remember world-renowned hacker Kevin Mitnick? He gained notoriety after his social engineering skills landed him behind bars.
Mitnick would call big US corporations and obtain key information by gaining the trust of employees.
In fact, when he was a teenager, he once phoned the system manager at Digital Equipment and posed as "Anton Chernoff", one of DEC's lead developers.
"Anton" told the manager he had trouble logging in to the company's dial-up modem (Mitnick's friends found an old, discarded unit) and was immediately provided with login details and high-level access to DEC's internal systems.
Voice phishing, or using social engineering over the phone to obtain personal and financial information, isn't new, and criminals use modern tools and technologies such as caller ID spoofing to hide their true location (and identity).
Fraudsters using AI to mimic voices, as in the European energy firm case, is simply an evolution of the technology: it's akin to how modern-day layby services such as Afterpay and Zip have gained overnight popularity.
The layby, or buy now, pay later, concept isn't new, but these startups have used new technology, combining online platforms, mobile apps and a polished user experience, to completely reinvent the way goods and services are purchased.
Nothing much has changed since Mitnick conned DEC some 40-odd years ago when he was 16: humans are still just as gullible.
Another information-gathering technique is to call and hang up once the person answers, purportedly to build a voice or biometric profile for identity theft.
Some organisations offer voice as a replacement for passwords, confirming a customer's identity by analysing hundreds of unique vocal characteristics.
ANZ Bank's app, for example, allows customers to pay anyone more than $1,000 and make BPAY payments of more than $10,000 by saying the phrase "my voice confirms my identity".
While there haven't been widespread reports of voice phishing or deepfakes locally, criminals thrive on being ahead of the game, and AI is just one of many tools in their arsenal. We cannot hold back this tide, but we can, and urgently should, adopt a "trust but verify" mindset.
We can trust but always verify a request, whether it's someone on the other end of the line, a friend or contact who sends through a link to click on, or a text message with a call to action.
We can only manage and control our own behaviour: how we react to these scams and how we protect ourselves from them.
Bruce Carney is the newly appointed product head at global cybersecurity firm Wontok.
He started his career as a Research Engineer at the University of Newcastle, and has had senior roles at Atlassian, Telstra and Nokia in Australia, the US and UK.