- **Severity:** Critical
- **Records exposed:** 1,500,000
- **Vector:** Misconfiguration — Supabase Row Level Security disabled
- **Organization:** Moltbook
- **Incident date:** 2026-01-28
## What Happened
Moltbook, a social network designed exclusively for AI agents, left its production database completely exposed to the internet due to a misconfigured Supabase backend. Security researchers at Wiz discovered the flaw, which granted unauthenticated read and write access to all platform data including 1.5 million API authentication tokens, 35,000 email addresses, and private messages containing third-party credentials.
## Incident overview

| Attribute | Details |
|---|---|
| Victim | Moltbook |
| Industry | AI/Social media |
| Platform launch | January 28, 2026 |
| Discovery | January 31, 2026 |
| Fixed | February 1, 2026 |
| Exposure duration | ~4 days |
| API tokens exposed | 1,500,000 |
| Email addresses | 35,000 |
| Private messages | ~4,000 conversations |
| Discoverer | Wiz Security Research |
## Timeline

| Date | Event |
|---|---|
| January 28, 2026 | Moltbook opens to the public |
| January 31, 2026 | Wiz contacts Moltbook maintainer |
| February 1, 2026 | All tables secured, vulnerability patched |
| February 3, 2026 | Wiz publishes disclosure |
## Data exposed

### Authentication data

| Data type | Count |
|---|---|
| API authentication tokens | 1,500,000 |
| Ownership tokens | Unknown |
| Verification codes | Unknown |
| OpenAI API keys (plaintext) | Multiple found in messages |

### User data

| Data type | Count |
|---|---|
| Email addresses | 35,000 |
| Human account owners | ~17,000 |
| AI agent accounts | 1,500,000 |
| Private DM conversations | ~4,000 |
### Access capabilities

| Capability | Status |
|---|---|
| Read all data | Yes |
| Write/modify data | Yes (initially) |
| Delete data | Yes (initially) |
| Account takeover | Yes (via agent API keys) |
## Root cause

### Supabase misconfiguration

| Issue | Details |
|---|---|
| Backend | Supabase |
| Security feature | Row Level Security (RLS) |
| Status | Completely disabled |
| Result | Public API key granted full access |
Supabase’s public (anon) API key is designed to be safe to expose only because Row Level Security restricts what it can reach. Moltbook’s deployment disabled RLS entirely, so anyone holding the publicly visible key could read and write the whole database.
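As a minimal illustration of how such an exposure is detected from the outside: an anonymous read against the project’s PostgREST API, authenticated with nothing but the public anon key, should come back empty when RLS is doing its job. The endpoint and table name below are hypothetical; only the interpretation logic is shown.

```python
# Interpreting an anonymous PostgREST read issued with only the public
# anon key, e.g.:
#   GET https://<project>.supabase.co/rest/v1/agents?select=*&limit=1
#   with header "apikey: <anon-key>"
# Project URL and table name are hypothetical.

def rls_appears_disabled(status_code: int, rows: list) -> bool:
    """With RLS enabled and no permissive policy for the anon role, an
    anonymous select typically returns HTTP 200 with an empty list (or
    an authorization error). Rows coming back means the table is
    world-readable."""
    return status_code == 200 and len(rows) > 0

print(rls_appears_disabled(200, [{"id": 1}]))  # data returned: table exposed
print(rls_appears_disabled(200, []))           # empty result: likely protected
print(rls_appears_disabled(401, []))           # request rejected: protected
```

This is the shape of check Wiz-style researchers run against any Supabase-backed site: if the anon key alone yields rows, the table is public.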
### “Vibe coding” factor

| Factor | Details |
|---|---|
| Development method | AI-assisted (“vibe coding”) |
| Human code written | None (per founder) |
| Security review | Absent |
| Result | Critical misconfiguration shipped |
Moltbook’s founder publicly stated he did not write any code for the site, instead directing an AI assistant to build the entire setup. Wiz warned this approach “often skips basic safeguards.”
## Key revelations

### Bot-to-human ratio

| Metric | Count |
|---|---|
| Registered AI agents | 1,500,000 |
| Human owners | ~17,000 |
| Agents per human | ~88:1 |
The exposure revealed Moltbook’s “revolutionary AI social network” was largely humans operating fleets of bots, not genuine AI agent activity.
### Third-party credential exposure

Private messages contained plaintext credentials for external services:

| Credential type | Found |
|---|---|
| OpenAI API keys | Yes |
| Other service tokens | Yes |
| Passwords | Some |
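Triage of this kind of leak usually starts by scanning the exposed message dumps for credential-shaped strings. The `sk-` prefix for OpenAI keys is real, but the pattern below is a loose illustrative heuristic, not OpenAI’s exact key format:

```python
import re

# Loose heuristic for OpenAI-style API keys ("sk-" followed by a long
# run of token characters). Real key formats vary; tune before relying
# on this in production.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def find_leaked_keys(text: str) -> list:
    """Return all candidate API keys found in a blob of message text."""
    return KEY_PATTERN.findall(text)

# Fabricated example key for demonstration only.
sample = "here is my key sk-AAAAAAAAAAAAAAAAAAAAAAAA please use it"
print(find_leaked_keys(sample))  # ['sk-AAAAAAAAAAAAAAAAAAAAAAAA']
print(find_leaked_keys("no keys here"))  # []
```

Every hit should be treated as compromised and rotated, per the recommendations below.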
## Impact assessment

| Risk | Impact |
|---|---|
| Account takeover | Full control via exposed tokens |
| API key abuse | Third-party service compromise |
| Data manipulation | Posts could be edited or deleted |
| Malicious injection | Content consumed by AI agents |
### Broader implications

| Implication | Details |
|---|---|
| AI supply chain risk | Compromised agents could spread malicious content |
| Credential cascade | Exposed keys enable further breaches |
| Trust model failure | Platform security assumptions invalid |
## Response

### Moltbook actions

| Action | Date |
|---|---|
| Initial contact from Wiz | January 31, 2026 |
| Database secured | February 1, 2026 |
| Full patch | February 1, 2026 |
The one-day turnaround from Wiz’s report to the fix was rapid, but the four-day exposure window coincided with the platform’s viral growth period.
## Recommendations

### For Moltbook users

| Priority | Action |
|---|---|
| Critical | Rotate all API tokens |
| Critical | Rotate any credentials shared in DMs |
| High | Revoke and regenerate OpenAI API keys |
| High | Monitor third-party services for abuse |
| Medium | Review agent activity for manipulation |
### For developers using Supabase

| Priority | Action |
|---|---|
| Critical | Verify Row Level Security is enabled on every table |
| Critical | Test RLS policies before production |
| High | Never assume public keys are safe without RLS |
| High | Conduct a security review before launch |
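The first check above can be sketched as a small audit. `pg_tables.rowsecurity` is a standard PostgreSQL catalog column; the query would be run against the project’s database (connection handling omitted), and the helper name is our own:

```python
# Audit sketch: find public tables with Row Level Security disabled.
# pg_tables.rowsecurity is a standard PostgreSQL catalog column; run
# AUDIT_SQL against the project's database and feed the rows to the
# helper below.
AUDIT_SQL = """
SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';
"""

def tables_missing_rls(rows) -> list:
    """Given (schema, table, rowsecurity) result rows, return the names
    of tables where RLS is off and therefore fully readable and writable
    by any role with table privileges, including the public anon key."""
    return [table for _, table, rls in rows if not rls]

# Example against fabricated query results:
rows = [("public", "agents", False), ("public", "owners", True)]
print(tables_missing_rls(rows))  # ['agents']
```

Any name this returns is, in effect, the Moltbook misconfiguration waiting to happen; each flagged table needs RLS enabled and an explicit policy before launch.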
### For AI-assisted development

| Priority | Action |
|---|---|
| Critical | Review AI-generated security configurations |
| High | Implement security checklists for deployment |
| High | Conduct human security review regardless of code source |
| Medium | Use security-focused prompts with AI assistants |
## Context
The Moltbook incident is a cautionary tale for the “vibe coding” era. While AI-assisted development enables extraordinary speed, it can skip fundamental security safeguards when developers don’t understand or review the generated code.
Wiz’s assessment was direct: “Speed without secure defaults creates systemic risk.” The entire breach traced to a single backend configuration setting—Row Level Security—that any security review would have caught.
The revelation that 1.5 million “AI agents” were operated by roughly 17,000 humans running bot fleets raises questions about AI platform authenticity more broadly. The exposure provided rare transparency into how such platforms actually operate.
For the broader AI ecosystem, the incident highlights supply chain risks. If attackers had exploited write access before the fix, they could have injected malicious content into data consumed by thousands of AI agents, potentially propagating harmful outputs across connected systems.
The four-day exposure window during Moltbook’s viral launch period means the compromised data was likely accessed by multiple parties. Users should assume their tokens and any credentials shared on the platform are compromised and act accordingly.