AI impersonation scam targets ChatGPT subscription users
Barracuda Networks has revealed a significant phishing campaign in which attackers impersonated OpenAI, attempting to deceive businesses worldwide into providing updated payment information for ChatGPT subscriptions.
The phishing campaign was identified through emails that appeared to be from OpenAI, the creators of ChatGPT, but instead originated from a suspicious sender domain. These emails contained urgent requests for recipients to update their payment details, a common tactic used by cybercriminals to elicit immediate responses from victims.
Since the introduction of ChatGPT, there has been substantial interest from both legitimate businesses and cybercriminals. Many companies are worried about whether their existing cyber defences are sufficient to protect against threats enabled by generative AI tools like ChatGPT. Cybercriminals have been notably proactive in exploiting these tools to intensify phishing campaigns, advanced credential harvesting, and malware deployment.
"Cybercriminals are using AI to target end users and capitalize on potential vulnerabilities," noted Barracuda threat researchers. They found that the phishing attack impersonating OpenAI had reached more than a thousand recipients, all sent from a single domain. Despite its scale, the attack's sophistication was surprisingly low. To evade detection, the attackers varied the hyperlinks within the email bodies.
One hallmark of the attack was the sender address, info@mta.topmarinelogistics.com, which differs markedly from OpenAI's official domain. Although the emails passed DKIM and SPF checks, the language of the content and its sense of urgency were indicative of a phishing attempt. Recipients were urged to act immediately, a tactic uncommon in genuine corporate communications.
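This is why a passing DKIM or SPF check alone is not proof of legitimacy: those checks only confirm the mail genuinely came from the domain in the header, which may be an attacker-controlled domain. A minimal sketch of the complementary domain check, using only the Python standard library (the allow-list below is an assumption for illustration; OpenAI's actual sending domains may differ):

```python
from email.utils import parseaddr

# Assumed allow-list of the brand's legitimate sending domains.
# In practice this would come from published sender guidance.
OPENAI_DOMAINS = {"openai.com", "email.openai.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain portion of a From: header value."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def is_suspicious(from_header: str, allowed_domains: set) -> bool:
    """Flag mail whose sender domain is not on the brand's allow-list.
    DKIM/SPF passing only proves the mail came from that domain, so
    this domain comparison is still required."""
    return sender_domain(from_header) not in allowed_domains

# The sender address cited in the article is flagged:
print(is_suspicious("info@mta.topmarinelogistics.com", OPENAI_DOMAINS))
```

A real filter would combine this with content signals (urgency, payment requests) rather than rely on the domain check alone.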
Reports from leading security entities like Barracuda and Forrester have noted an uptick in email phishing attempts since the launch of generative AI products such as ChatGPT. While generative AI aids in crafting more realistic phishing emails, its direct role in changing the core nature of cyber attacks seems limited thus far.
The 2024 Data Breach Investigations Report by Verizon stated, "We did keep an eye out for any indications of the use of the emerging field of generative artificial intelligence (GenAI) in attacks and the potential effects of those technologies, but nothing materialized in the incident data we collected globally."
This sentiment is echoed by Forrester analysts, who noted: "GenAI's ability to create compelling text and images will considerably improve the quality of phishing emails and websites; it can also help fraudsters compose their attacks on a greater scale."
Despite the current limitations, security experts believe that more sophisticated threats involving GenAI are on the horizon.
Consequently, organisations are advised to maintain vigilance against traditional phishing indicators and bolster fundamental cyber defences to mitigate these evolving risks.
Barracuda proposes several strategies to protect against such phishing attempts, which include deploying advanced email security solutions, continuous security awareness training, and automating incident response. Leveraging AI-powered tools can help in identifying sophisticated phishing threats by analysing email content, sender behaviour, and intent to prevent damaging intrusions.