Interview: Inside the cybercriminal gig economy for bots
Tue, 5th Feb 2019

Cybercriminals are moving towards a gig economy model as the internet makes it increasingly convenient for hackers to buy and sell services among themselves.

This ‘cybercriminal gig economy' is driving specialisation and marketisation across different attack verticals.

TechDay spoke to Akamai Asia Pacific security technology and strategy head Fernando Serto about how organisations are being impacted by the rise of the cybercriminal gig economy and specialised bot attacks.

What changes in trends have enabled the development of a cybercriminal gig economy?

The shift to a gig economy has been enabled by the launch of task-oriented platforms, where specialisation is rewarded, and finding skills has become as easy as opening an app and making a request.

We've seen similar behaviour on marketplaces in the dark corners of the web, powering the ‘cybercriminal gig economy'.

These marketplaces operate much like legitimate apps: specific jobs are posted, and attackers are ranked by a rating system that evaluates them on the accuracy of the data or the efficacy of the tools they are selling.

One example is the marketplace for validated credentials, where sellers provide lists of credentials they have already gone through the effort of validating.

The accuracy of that data is therefore extremely important to the person acquiring it with the intent of launching account takeover attacks, and eventually fraud.

In addition, anonymous cryptocurrencies have also contributed to a shift in behaviour.

How can businesses distinguish between bots that benefit their sites vs bots that negatively impact their business?

For a business to be able to answer this question, it's paramount that they have visibility into which bots are hitting their applications and, once they do, into what exactly those bots are accessing and how often.

Even bots that benefit a business, such as search engine crawlers, site monitoring services or content aggregators, can have a negative impact on applications.

For example, if an application is getting too many hits from known ‘good bots', there can still be a negative impact on the business from an application performance perspective at peak times.
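As a rough illustration of what that visibility can look like in practice (a generic sketch, not Akamai's tooling, and "access.log" is a placeholder path), an organisation can start by tallying hits per User Agent from its web server access logs:

```python
# Minimal sketch: count hits per User-Agent in a combined-format access log.
# In the combined log format, the User-Agent is the last quoted field on each line.
import re
from collections import Counter

UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def hits_per_user_agent(path):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = UA_PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for user_agent, hits in hits_per_user_agent("access.log").most_common(10):
        print(f"{hits:8d}  {user_agent}")
```

A report like this only answers the "which bots, and how often" question; it says nothing yet about whether a given User Agent is telling the truth.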

It's a lot easier for an organisation to identify good bots, as they typically identify themselves with a static User Agent as well as a URL pointing to their company.
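One widely documented way to confirm that a self-identified good bot is genuine (Google publishes this procedure for verifying Googlebot) is a reverse DNS lookup on the client IP followed by a confirming forward lookup. The sketch below is illustrative; the domain list is an example, not a complete inventory of good bots:

```python
# Sketch of good-bot verification via reverse DNS plus a confirming forward lookup.
import socket

GOOD_BOT_DOMAINS = (".googlebot.com", ".google.com", ".search.msn.com")

def is_verified_good_bot(client_ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)     # reverse DNS
        if not hostname.endswith(GOOD_BOT_DOMAINS):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward DNS
        return client_ip in forward_ips                      # must round-trip
    except OSError:
        return False
```

A bot that merely copies Googlebot's User Agent string fails this check, because its IP address does not resolve back to a Google-owned hostname.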

On the other hand, identifying bad bots is far more challenging, as they tend to come from highly distributed IP addresses and use User Agents and behaviour that mimic real browsers.

What are some of the evasion tactics that hackers who use bots are utilising?

Bot operators are extremely creative and continuously come up with new ways to evade security defences.

There are several techniques that range in effort and complexity.

A very simple technique is to change certain characteristics, such as the User Agent or other HTTP header values, in an attempt to impersonate a real user.
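To see why header checks alone are weak, consider how little effort that spoofing takes. The sketch below is a generic example (example.com is a placeholder target) of a scripted client presenting itself as a desktop Chrome browser:

```python
# Illustration only: a scripted client sending browser-like headers.
import requests

browser_like_headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/71.0.3578.98 Safari/537.36"),
    "Accept-Language": "en-US,en;q=0.9",
}

response = requests.get("https://example.com/", headers=browser_like_headers)
print(response.status_code)
```

From the server's point of view, nothing in these headers distinguishes the script from a real visitor.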

Operators will also use multiple IP addresses to avoid IP address-based security controls.

This technique is also used to launch “low and slow” attacks, which are a lot harder to detect because the application owners don't see any spike in traffic or anything else that suggests they are under attack.
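A simple per-IP rate limiter makes the problem concrete: with the illustrative threshold below, a botnet that spreads its requests across thousands of addresses, each sending only a few requests per minute, never trips the limit even though the aggregate traffic is an attack. This is a generic sketch, not a description of any vendor's control:

```python
# Naive fixed-window, per-IP rate limiter; blind to distributed low-and-slow traffic.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100          # illustrative threshold

_recent_requests = defaultdict(list)   # client_ip -> request timestamps

def allow_request(client_ip):
    now = time.monotonic()
    timestamps = [t for t in _recent_requests[client_ip] if now - t < WINDOW_SECONDS]
    if len(timestamps) >= MAX_REQUESTS_PER_WINDOW:
        _recent_requests[client_ip] = timestamps
        return False
    timestamps.append(now)
    _recent_requests[client_ip] = timestamps
    return True
```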

Other techniques include the use of VPNs and Tor in an attempt to bypass any geo-fencing controls customers may have in place.

How can organisations mitigate this threat?

When we're talking about the simplest techniques for evasion, an organisation can block the IP addresses of known bad bots.
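In its simplest form that control is just a membership check against a blocklist. The sketch below uses placeholder documentation ranges, not real threat data:

```python
# Sketch of an IP/CIDR blocklist check.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # placeholder range
    ipaddress.ip_network("198.51.100.17/32"),  # placeholder single address
]

def is_blocked(client_ip):
    address = ipaddress.ip_address(client_ip)
    return any(address in network for network in BLOCKED_NETWORKS)

# is_blocked("203.0.113.5")  -> True
# is_blocked("192.0.2.1")    -> False
```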

However, as soon as an organisation starts to get targeted by more complex bots, the effort and difficulty required to mitigate them go up significantly.

We also see several of our customers getting targeted by multiple bots, and some of those bots are capable of utilising multiple evasion techniques.

For example, bots that leverage thousands of IP addresses, randomise User Agents, impersonate browsers and replay user sessions.

These evasion tactics add a high level of complexity and significantly increase the effort required to mitigate. Once bots are this sophisticated, it's no longer feasible to apply the same simple security controls.

Bots can also change their behaviour if they think they've been detected.

Therefore it's important to accurately differentiate a bot from a real user.