AI girlfriend apps exposed private chats in security audit
Researchers have identified critical security flaws in 17 AI companion and "AI girlfriend" apps on Google Play, potentially exposing private chat histories across services used by more than 150 million people.
An audit by mobile app security firm Oversecured found 14 critical and 311 high-severity issues across the apps. In 10 of the 17, attackers could find a route to users' stored conversations, while six contained critical vulnerabilities that could provide direct access to chat data.
The findings focus on a corner of the chatbot market where users often disclose sexual content, relationship problems and highly personal emotional information. Unlike general-purpose assistants, many of the affected services are marketed as virtual romantic partners, dating simulators or roleplay apps, and store conversations on remote servers linked to user accounts.
Private Chats
The risks go beyond ordinary account compromise. Exposed data may include explicit exchanges, discussions of extramarital affairs, suicidal thoughts, disclosures about sexual orientation and accounts of domestic conflict. Some services also cache chats, photos, voice messages and authentication data on devices, creating additional points of exposure when an app is poorly secured.
Among the most serious issues was the discovery of hardcoded cloud credentials in one app with more than 10 million installs. The credentials included an OpenAI token and a Google Cloud private key embedded in the Android application package, allowing anyone with basic reverse-engineering skills to extract them.
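The report does not publish the extraction method, but finding secrets like these is routine: an Android APK is a ZIP archive, and embedded tokens can be located by scanning its contents for characteristic strings. A minimal illustrative sketch (the token formats and file paths below are assumptions for demonstration, not details from the audit):

```python
import io
import re
import zipfile

# Patterns for the two credential types named in the audit. The exact
# formats are illustrative assumptions, not taken from the report.
SECRET_PATTERNS = {
    "openai_token": re.compile(rb"sk-[A-Za-z0-9_\-]{20,}"),
    "gcp_private_key": re.compile(rb"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_apk_for_secrets(apk_bytes: bytes) -> dict:
    """Scan every file inside an APK (a ZIP archive) for secret-like strings.

    Returns a mapping from pattern name to the archive entries that matched.
    """
    hits = {name: [] for name in SECRET_PATTERNS}
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for entry in apk.namelist():
            data = apk.read(entry)
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(data):
                    hits[name].append(entry)
    return hits
```

Anyone who downloads the APK can run this kind of scan offline, which is why credentials shipped inside the package, rather than held server-side, are treated as already compromised.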
Another app with more than 10 million downloads had a cross-site scripting flaw in its chat interface. That weakness could allow malicious code to be injected into the chat window, enabling an attacker to read messages displayed on screen, steal session tokens and insert false messages into what appears to be a private conversation.
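The underlying mistake in this class of flaw is rendering untrusted message text into a web view without escaping it. A hedged sketch of the standard mitigation, in Python for illustration (the function and markup are hypothetical, not from the affected app):

```python
import html

def render_chat_message(author: str, text: str) -> str:
    """Build the HTML for one chat bubble, escaping all untrusted fields.

    Interpolating user- or model-supplied text into a chat web view without
    escaping is the flaw class described above: a crafted message containing
    <script> would execute inside the chat window.
    """
    return (
        f'<div class="msg"><b>{html.escape(author)}</b>: '
        f"{html.escape(text)}</div>"
    )
```

With escaping in place, a payload such as `<script>…</script>` is displayed as inert text rather than executed, which closes off the message-reading and token-theft route the researchers describe.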
A separate app known for adult content contained a file theft vulnerability. In that case, any file held in the app's internal storage could potentially be extracted, including local chat databases, cached media and login tokens.
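File theft bugs of this kind typically hinge on an app serving a file path supplied by another component without checking that it stays inside the app's own storage. A minimal sketch of the usual guard, with hypothetical names and shown in Python rather than the app's actual code:

```python
import os

def resolve_internal_file(storage_root: str, requested: str) -> str:
    """Resolve a requested filename against the app's internal storage.

    Rejects path-traversal attempts such as '../databases/chat.db', the
    kind of request that lets an attacker pull chat databases, cached
    media and login tokens out of a vulnerable app.
    """
    root = os.path.realpath(storage_root)
    candidate = os.path.realpath(os.path.join(root, requested))
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"path escapes internal storage: {requested}")
    return candidate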
One app with more than 50 million installs was found to have a weakness in its advertising software development kit. A malicious advert could exploit the flaw to launch internal components and query database tables containing conversations, creating a supply-chain-style risk through ad delivery.
Another case involved arbitrary component launch combined with a hardcoded token in an app with more than 10 million downloads. That issue could expose authentication and session-handling functions and might be used to redirect users to attacker-controlled servers from inside the app.
Regulatory Gap
The research also highlights what security specialists describe as a regulatory blind spot. AI companion apps are not treated as healthcare products, despite often collecting disclosures that resemble those made in therapy settings. In several jurisdictions, existing rules have focused largely on child safety, suicide prevention measures and transparency over whether users are speaking to a machine, rather than on how these apps secure stored conversations.
That distinction matters because the records held by these services can form long-term archives of intensely personal exchanges. If attackers obtain cloud keys, session tokens or access to local files, they may be able to retrieve far more than a single conversation thread.
Some of the apps identified in the audit have already faced scrutiny over other issues, including lawsuits over harm to minors, privacy fines and a case in which chatbot interactions were linked to a user's death. According to the researchers, the vulnerabilities described in the report remained unpatched at the time of the findings.
The problems are not without precedent. Security incidents involving AI companion platforms have previously exposed tens of millions of messages and large numbers of user photos through misconfigured servers and cloud databases. The latest audit suggests that common application-layer flaws, such as embedded credentials, insecure web views and poor file protections, may create similar risks even when a backend is not openly exposed.
Sergey Toshin, founder of Oversecured, said the category has expanded quickly while basic safeguards have lagged behind.
"One app includes both its OpenAI token and its Google Cloud private key in the code - the Cloud key belongs to the developer's invoicing system. With those two credentials, you can reach the AI backend and the billing infrastructure," Toshin said.
He said the sensitivity of the information handled by companion apps puts them in a similar risk bracket to other services that process deeply personal records.
"The AI companion category handles a different but equally sensitive type of data as therapy apps - personal confessions, relationship details, sexual content. These apps grew so fast that basic security was never part of the process," he said.
The findings are likely to increase pressure on developers of AI companion services to review how mobile apps store credentials, handle web content, isolate advertising software and protect local files, particularly as regulators continue to examine the wider social risks tied to the sector.