
Cybercriminals use GenAI tool v0.dev to launch advanced phishing attacks
Research from Okta Threat Intelligence has found that cybercriminals are leveraging Generative Artificial Intelligence (GenAI), specifically the v0.dev tool from Vercel, to manufacture sophisticated phishing websites swiftly and at scale.
Okta's researchers have observed threat actors utilising the v0.dev platform to create convincing replicas of sign-in pages for a range of prominent brands. According to the team's findings, attackers can build a functional phishing site from a short text prompt, substantially lowering the technical barrier to launching attacks.
New methods
The research revealed that v0.dev, which is intended to help developers create web interfaces through natural language instructions, is also allowing adversaries to quickly reproduce the design and branding of authentic login sites. In one case, Okta noted that the login page of one of its own customers had been imitated using this AI-powered software.
Phishing sites created with v0.dev often also hosted visual assets, such as company logos, on Vercel's own infrastructure. Okta Threat Intelligence explained that consolidating these resources on a trusted platform is a deliberate technique: by doing so, attackers aim to evade typical detection methods that monitor for assets served from known malicious or unrelated infrastructure.
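To make that evasion concrete, the sketch below shows a simplified, hypothetical detection heuristic of the kind described: it flags brand assets served from known-malicious or otherwise unrelated hosts. All host names, lists and function names here are invented for illustration and are not Okta's detection logic; the point is that a cloned logo uploaded alongside the fake page on the same trusted platform matches neither rule, so the check stays silent.

```typescript
// Illustrative only: a naive asset-origin heuristic. Host lists and names are hypothetical.
const KNOWN_BAD_HOSTS = new Set(["evil-cdn.example", "bulletproof-host.example"]);
const BRAND_OWNED_HOSTS = new Set(["www.okta.com", "ok12static.oktacdn.com"]);

function flagSuspiciousAssets(pageUrl: string, assetUrls: string[]): string[] {
  const pageHost = new URL(pageUrl).hostname;
  return assetUrls.filter((assetUrl) => {
    const assetHost = new URL(assetUrl).hostname;
    if (KNOWN_BAD_HOSTS.has(assetHost)) return true; // rule 1: known-malicious infrastructure
    if (assetHost === pageHost) return false;        // served from the page's own (trusted) platform
    return !BRAND_OWNED_HOSTS.has(assetHost);        // rule 2: unrelated third-party infrastructure
  });
}

// A cloned logo hosted alongside the fake page on the same trusted platform
// triggers neither rule, so the heuristic reports nothing:
flagSuspiciousAssets(
  "https://fake-login.vercel.app/signin",
  ["https://fake-login.vercel.app/assets/logo.svg"],
); // => []
```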
Vercel responded to these findings by restricting access to the suspect sites and working with Okta to improve reporting processes for additional phishing-related infrastructure.
The observed activity confirms that today's threat actors are actively experimenting with and weaponising leading GenAI tools to streamline and enhance their phishing capabilities. The use of a platform like Vercel's v0.dev allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.
Wider proliferation
The report also noted the existence of several public GitHub repositories that replicate the v0.dev application, along with DIY guides enabling others to build their own generative phishing tools. According to Okta, this widespread availability is making advanced phishing tactics accessible to a broader cohort of cybercriminals, effectively democratising the creation of fraudulent web infrastructure.
Further monitoring revealed that attackers have used the Vercel platform to host phishing sites imitating not just Okta customers, but also brands like Microsoft 365 and various cryptocurrency companies. Security advisories related to these findings have been made available to Okta's customers.
Implications for security
Okta Threat Intelligence underlined that this represents a significant change in the phishing threat landscape, given the increasingly realistic appearance of sites generated by artificial intelligence. The group stressed that relying on traditional tell-tale signs, such as poor quality or imperfect design, is no longer a sufficient defence.
Organisations can no longer rely on teaching users how to identify suspicious phishing sites based on imperfect imitation of legitimate services. The only reliable defence is to cryptographically bind a user's authenticator to the legitimate site they enrolled in. This is the technique that powers Okta FastPass, the passwordless method built into Okta Verify. When phishing resistance is enforced in policy, the authenticator will not allow the user to sign in to any resource other than the origin (domain) established during enrolment. Put simply, the user cannot be tricked into handing over their credentials to a phishing site.
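Okta has not published the internals of FastPass, but the origin-binding principle it relies on is the same one used by open standards such as WebAuthn/FIDO2. The browser-side sketch below illustrates that principle only, with placeholder identifiers and challenges rather than real server-issued values: the credential created at enrolment is scoped to a relying-party ID, and the browser will not produce an assertion for a page served from any other origin.

```typescript
// Simplified WebAuthn-style illustration of origin binding (not Okta FastPass code).
// Challenges and IDs are placeholders; in practice they come from the server.

// Enrolment on the legitimate site: the credential is scoped to rp.id.
const credential = await navigator.credentials.create({
  publicKey: {
    rp: { id: "acme.okta.com", name: "Acme" },                               // origin binding
    user: { id: new Uint8Array(16), name: "user@acme.com", displayName: "User" },
    challenge: new Uint8Array(32),                                            // placeholder
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],                      // ES256
  },
});

// Sign-in: the browser only honours this request when the calling page's origin
// matches the rpId the credential was enrolled for. A pixel-perfect clone on
// fake-acme.vercel.app cannot request an assertion for acme.okta.com.
const assertion = await navigator.credentials.get({
  publicKey: {
    rpId: "acme.okta.com",
    challenge: new Uint8Array(32),                                            // placeholder
  },
});
```

Because the origin check is enforced by the browser and the authenticator rather than by the user's judgement, a convincing clone hosted on another domain simply has no credential to collect.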
To address these risks, Okta Threat Intelligence has recommended several mitigation strategies. These include enforcing phishing-resistant authentication policies and prioritising the deactivation of less secure factors, restricting access to trusted devices, requiring secondary authentication if anomalous user behaviour is detected, and updating security awareness training to account for AI-driven threats.
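For illustration, the sketch below shows how those recommendations might combine into a single access decision. The types, field names and logic are assumptions made for this example and do not reflect Okta's actual policy engine or APIs.

```typescript
// Hypothetical policy-evaluation sketch of the recommended controls.
interface SignInContext {
  authenticatorIsPhishingResistant: boolean; // e.g. a FastPass- or FIDO2-class factor
  deviceIsTrusted: boolean;                  // managed / registered device
  behaviourIsAnomalous: boolean;             // e.g. new location or impossible travel
}

type Decision = "ALLOW" | "DENY" | "STEP_UP";

function evaluateSignIn(ctx: SignInContext): Decision {
  // Enforce phishing-resistant authentication; weaker factors are not accepted.
  if (!ctx.authenticatorIsPhishingResistant) return "DENY";

  // Restrict access to trusted devices.
  if (!ctx.deviceIsTrusted) return "DENY";

  // Require secondary verification when user behaviour looks anomalous.
  if (ctx.behaviourIsAnomalous) return "STEP_UP";

  return "ALLOW";
}
```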
The research reflects the rapid operationalisation of generative AI tools in malicious campaigns and highlights the need for continuous adaptation by organisations and their cybersecurity teams in response to evolving threats.