Tea App: A Failure of Data Security and Trust
The Tea app launched with a bold promise: a women-only space to flag problematic men via anonymous reviews, search history, and community input.
To join, users were required to upload a selfie of themselves holding their government-issued ID to prove they were women. Tea claimed this data would be “deleted immediately after verification,” but it wasn’t.
What Happened
In July 2025, a massive data breach exposed 72,000 user photos, over a million private messages, and other deeply personal content stored in an unsecured Google Firebase database. The files included:
~13,000 selfies with government-issued IDs
~59,000 user-uploaded photos from posts, DMs, and comments
Messages containing real names, allegations, trauma, and medical history
Not a Hack — Just Bad Defaults
This wasn’t a “hack” in the traditional sense. No one breached the system or broke past firewalls. The issue was far more mundane and far more preventable: Tea’s Firebase storage had no security rules configured.
Firebase, Google’s Backend-as-a-Service (BaaS), makes it fast to build and scale mobile apps. But out of the box, it requires developers to define their own security rules. If rules are never configured, or are left as permissive as allow read, write: if true, every stored object is exposed to the public web.
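To make this concrete, here is a minimal sketch of Firebase Storage security rules contrasting the wide-open pattern with a locked-down, owner-only version. The users/{userId} path is a hypothetical layout for illustration, not Tea’s actual structure:

rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Dangerous: anyone on the internet can read and write every object.
    // match /{allPaths=**} {
    //   allow read, write: if true;
    // }

    // Safer: only the authenticated owner can access their own files.
    match /users/{userId}/{allPaths=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}

Rules like these are deny-by-default: any request that matches no allow statement is rejected, which is exactly the posture verification selfies demanded.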
Tea’s developers either overlooked or ignored this configuration, leaving sensitive verification images and user conversations freely available to anyone who knew where to look. The failure was architectural: no malicious code, no skilled attacker, just a basic misconfiguration of cloud permissions.
Worse, the platform had publicly promised to delete verification images after review, but the files were never removed and remained exposed online.
This was more than a technical oversight; it was a failure of architecture, accountability, and user trust.
Retaliation and Repercussions
The fallout didn’t end with the data exposure. In a grim twist, some men responded by creating spinoff apps and games that ranked and mocked the women whose data had been leaked. A tool meant to be a safe space became a public spectacle, weaponized against its own users. Some of those users now face harassment and fear, online and offline.
It’s a stark reminder that the cost of insecure data isn’t just reputational — it can have real-world emotional and psychological consequences for the most vulnerable users.
Was It Vibe Coded?
Much of the conversation around the Tea app has fixated on whether it was “vibe coded,” that is, built largely from AI-generated code. Tea’s team denied that AI was involved and attributed the breach to legacy data uploaded before February 2024.
But whether the code was written by ChatGPT, a junior dev, or a seasoned engineer isn’t the point.
The real failure was foundational: no security rules on Firebase, broken promises around data deletion, and zero guardrails for sensitive content.
Focusing on vibe coding distracts from what actually matters: basic, enforceable data security practices. That’s where the scrutiny should be.
Tech Takeaways
Never assume default cloud settings are secure. Always audit your storage and database rules and enforce the principle of least privilege.
Implement automated data retention and deletion processes to honor privacy commitments; a sketch follows this list.
Review AI-generated code thoroughly, especially when it affects sensitive data handling.
Understand that security is a continuous responsibility, not a checkbox at launch.
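As one illustration of the retention point above, here is a minimal sketch of a scheduled Firebase Cloud Function (TypeScript, v2 API with the Admin SDK) that purges verification images past a retention window. The verification/ prefix and the 24-hour window are hypothetical choices for this example, not Tea’s actual setup:

import { onSchedule } from "firebase-functions/v2/scheduler";
import { initializeApp } from "firebase-admin/app";
import { getStorage } from "firebase-admin/storage";

initializeApp();

// Runs once a day and deletes verification uploads past the retention window.
export const purgeVerificationImages = onSchedule("every 24 hours", async () => {
  const bucket = getStorage().bucket();
  // List only objects under the (hypothetical) verification prefix.
  const [files] = await bucket.getFiles({ prefix: "verification/" });
  const cutoff = Date.now() - 24 * 60 * 60 * 1000;
  const stale = files.filter(
    (f) => Date.parse(String(f.metadata.timeCreated)) < cutoff
  );
  // Delete stale files in parallel; any failure rejects and surfaces in logs.
  await Promise.all(stale.map((f) => f.delete()));
});

A bucket-level lifecycle rule in Google Cloud Storage can back this up, so that deletion doesn’t hinge on application code running correctly.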
Final Sip
Tea’s breach is a cautionary tale for every startup and data professional: security, privacy, and trust must be foundational, not afterthoughts.
Fast-moving development and shiny new tech like AI-assisted coding don’t excuse cutting corners on user data protection—especially when vulnerable communities are involved.
Protecting user data isn’t optional.

