Hackers leaked 72,000+ selfies, IDs, and DMs from Tea’s unsecured database.
The personal data of women using the app is now searchable and spreading online.
The original leaker said lax “vibe coding” may have been one of the reasons the app was left wide open to attack.
The viral women-only dating safety app Tea suffered a massive data breach this week after users on 4chan discovered its backend database was completely unsecured: no password, no encryption, nothing.
The result? Over 72,000 private images, including selfies and government IDs submitted for user verification, were scraped and spread online within hours. Some were mapped and made searchable. Private DMs were leaked. The app designed to protect women from dangerous men had just exposed its entire user base.
The exposed data, totaling 59.3 GB, included:
13,000+ verification selfies and government-issued IDs
Tens of thousands of images from messages and public posts
IDs dated as recently as 2024 and 2025, contradicting Tea’s claim that the breach involved only “old data”
4chan users initially posted the files, but even after the original thread was deleted, automated scripts kept scraping the data. On decentralized platforms like BitTorrent, once it’s out, it’s out for good.
From viral app to total meltdown
Tea had just hit #1 on the App Store, riding a wave of virality with over 4 million users. Its pitch: a women-only space to “gossip” about men for safety purposes, though critics saw it as a “man-shaming” platform wrapped in empowerment branding.
One Reddit user summed up the schadenfreude: “Create a women-centric app for doxxing men out of envy. End up accidentally doxxing the women clients. I love it.”
Verification required users to upload a government ID and a selfie, supposedly to keep out fake accounts and non-women. Now those documents are in the wild.
The company told 404 Media that “[t]his data was originally stored in compliance with law enforcement requirements related to cyber-bullying prevention.”
Decrypt reached out but has not yet received an official response.
The culprit: ‘Vibe coding’
Here’s what the O.G. hacker wrote: “This is what happens when you entrust your personal information to a bunch of vibe-coding DEI hires.”
“Vibe coding” is when developers type “make me a dating app” into ChatGPT or another AI chatbot and ship whatever comes out. No security review, no understanding of what the code actually does. Just vibes.
Apparently, Tea’s Firebase bucket had zero authentication because that’s what AI tools generate by default. “No authentication, no nothing. It’s a public bucket,” the original leaker said.
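To make the failure mode concrete, here is a minimal Python sketch of the kind of check the 4chan users effectively ran, and that any team can run against its own project: it asks Firebase Storage’s public REST listing endpoint for a bucket’s contents without sending any credentials. The bucket name is a hypothetical placeholder, and the script assumes the `requests` library is installed; a properly secured bucket should reject the anonymous request.

```python
import requests

# Hypothetical bucket name -- substitute your own project's bucket.
BUCKET = "example-app.appspot.com"

# Firebase Storage exposes a REST endpoint that lists a bucket's objects.
# If the security rules allow public reads (the misconfiguration described
# above), this request succeeds with no auth token at all.
url = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"
resp = requests.get(url, timeout=10)

if resp.status_code == 200:
    items = resp.json().get("items", [])
    print(f"WARNING: bucket is publicly listable ({len(items)} objects returned)")
else:
    print(f"Anonymous listing refused (HTTP {resp.status_code}) -- rules held up")
```

A two-minute check like this, run before launch, is exactly the kind of review step that gets skipped when whatever the chatbot emits goes straight to production.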
It may have been vibe coding, or simply poor coding. Either way, overreliance on generative AI is only increasing.
This isn’t some isolated incident. Earlier in 2025, the founder of SaaStr watched its AI agent delete the company’s entire production database during a “vibe coding” session. The agent then created fake accounts, generated hallucinated data, and lied about it in the logs.
Overall, researchers from Georgetown University found that 48% of AI-generated code contains exploitable flaws, yet 25% of Y Combinator startups use AI for their core features.
So although vibe coding works for occasional use, and tech behemoths like Google and Microsoft preach the AI gospel, claiming their chatbots write a substantial share of their code, the average user and small entrepreneurs may be safer sticking to human coding, or at least reviewing the work of their AIs very, very closely.
“Vibe coding is awesome, but the code these models generate is full of security holes and can be easily hacked,” computer scientist Santiago Valdarrama warned on social media.
Vibe-coding is awesome, but the code these models generate is full of security holes and can be easily hacked.
This is a live, 90-minute session where @snyksec will build a demo application using Copilot + ChatGPT and live hack it to find every weakness in the generated…
— Santiago (@svpino) March 17, 2025
The problem gets worse with “slopsquatting”: AI suggests packages that don’t exist, hackers then create those packages filled with malicious code, and developers install them without checking.
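A cheap first line of defense is confirming that an AI-suggested dependency actually exists before installing it. Below is a minimal Python sketch using PyPI’s public JSON API (it assumes the `requests` library); note that existence alone proves little, since a squatter may have already registered a hallucinated name, so maintainer history and release age still deserve a look.

```python
import sys
import requests

def pypi_package_exists(name: str) -> bool:
    """Query PyPI's JSON API to see whether a package name is registered."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    # Usage: python check_packages.py <name> [<name> ...]
    for pkg in sys.argv[1:]:
        if pypi_package_exists(pkg):
            print(f"{pkg}: registered on PyPI (still vet the maintainer and releases)")
        else:
            print(f"{pkg}: not on PyPI -- likely a hallucinated name a squatter could claim")
```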
Tea users are scrambling, and some IDs already appear on searchable maps. Signing up for credit monitoring may be a good idea for users trying to prevent further damage.