Severity: critical
Records: 1,500,000
Vector: Misconfiguration — Supabase Row Level Security disabled
Organization: Moltbook
Incident Date: 2026-01-28

What Happened

Moltbook, a social network designed exclusively for AI agents, left its production database completely exposed to the internet due to a misconfigured Supabase backend. Security researchers at Wiz discovered the flaw, which granted unauthenticated read and write access to all platform data including 1.5 million API authentication tokens, 35,000 email addresses, and private messages containing third-party credentials.

Incident overview

| Attribute | Details |
| --- | --- |
| Victim | Moltbook |
| Industry | AI / Social media |
| Platform launch | January 28, 2026 |
| Discovery | January 31, 2026 |
| Fixed | February 1, 2026 |
| Exposure duration | ~4 days |
| API tokens exposed | 1,500,000 |
| Email addresses | 35,000 |
| Private messages | ~4,000 conversations |
| Discoverer | Wiz Security Research |

Timeline

| Date | Event |
| --- | --- |
| January 28, 2026 | Moltbook opens to the public |
| January 31, 2026 | Wiz contacts the Moltbook maintainer |
| February 1, 2026 | All tables secured; vulnerability patched |
| February 3, 2026 | Wiz publishes its disclosure |

Data exposed

Authentication data

| Data type | Count |
| --- | --- |
| API authentication tokens | 1,500,000 |
| Ownership tokens | Unknown |
| Verification codes | Unknown |
| OpenAI API keys (plaintext) | Multiple, found in messages |

User data

| Data type | Count |
| --- | --- |
| Email addresses | 35,000 |
| Human account owners | ~17,000 |
| AI agent accounts | 1,500,000 |
| Private DM conversations | ~4,000 |

Access capabilities

| Capability | Status |
| --- | --- |
| Read all data | Yes |
| Write/modify data | Yes (initially) |
| Delete data | Yes (initially) |
| Account takeover | Yes (via agent API keys) |

Root cause

Supabase misconfiguration

| Issue | Details |
| --- | --- |
| Backend | Supabase |
| Security feature | Row Level Security (RLS) |
| Status | Completely disabled |
| Result | Public API key granted full access |

The Supabase public API key is safe to expose only when Row Level Security is properly configured. Moltbook’s implementation omitted RLS entirely, so anyone holding the publicly visible key could read and write the entire database.
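RLS is enforced inside Postgres itself, but its effect can be illustrated with a small conceptual sketch (all data and names here are hypothetical, not the Supabase implementation): with RLS disabled, every query returns every row to any caller; with RLS enabled and no policy granting access, the anonymous role sees nothing.

```python
# Conceptual sketch of Row Level Security: a per-row predicate
# evaluated for the requesting role before any row is returned.
# Hypothetical data; the real enforcement happens inside Postgres.

ROWS = [
    {"id": 1, "owner": "agent_a", "body": "hello"},
    {"id": 2, "owner": "agent_b", "body": "private message"},
]

def select_all(rows, role, rls_enabled, policies):
    """Return the rows visible to `role`.

    With RLS disabled (Moltbook's state), every row is visible to
    anyone holding the public API key. With RLS enabled, a row is
    visible only if some policy predicate for that role accepts it.
    """
    if not rls_enabled:
        return list(rows)          # full read access for any caller
    checks = policies.get(role, [])
    return [r for r in rows if any(check(r) for check in checks)]

# RLS disabled: the anonymous role reads everything.
assert len(select_all(ROWS, "anon", rls_enabled=False, policies={})) == 2

# RLS enabled with no policy for "anon": nothing is visible.
assert select_all(ROWS, "anon", rls_enabled=True, policies={}) == []

# RLS enabled with an owner-only policy for an authenticated agent.
policies = {"agent_a": [lambda r: r["owner"] == "agent_a"]}
visible = select_all(ROWS, "agent_a", rls_enabled=True, policies=policies)
assert [r["id"] for r in visible] == [1]
```

The first assertion is exactly Moltbook's failure mode: with the predicate layer switched off, the public key degenerates into full database credentials.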

“Vibe coding” factor

| Factor | Details |
| --- | --- |
| Development method | AI-assisted (“vibe coding”) |
| Human code written | None (per founder) |
| Security review | Absent |
| Result | Critical misconfiguration shipped |

Moltbook’s founder publicly stated that he wrote none of the site’s code himself, instead directing an AI assistant to build the entire stack. Wiz warned that this approach “often skips basic safeguards.”

Key revelations

Bot-to-human ratio

| Metric | Count |
| --- | --- |
| Registered AI agents | 1,500,000 |
| Human owners | ~17,000 |
| Ratio | 88:1 |

The exposure revealed Moltbook’s “revolutionary AI social network” was largely humans operating fleets of bots, not genuine AI agent activity.
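The 88:1 figure follows directly from the two counts above:

```python
# Ratio of registered AI agents to human owners, from the exposed data.
agents = 1_500_000
humans = 17_000
ratio = agents / humans           # ≈ 88.2 agents per human
assert round(ratio) == 88
```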

Third-party credential exposure

Private messages contained plaintext credentials for external services:

| Credential type | Found |
| --- | --- |
| OpenAI API keys | Yes |
| Other service tokens | Yes |
| Passwords | Some |
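Plaintext keys in message bodies are easy to spot mechanically. OpenAI secret keys begin with `sk-`, so a minimal scanner might look like the sketch below; the regex is a deliberately loose illustrative assumption, not Wiz’s actual methodology.

```python
import re

# OpenAI secret keys begin with "sk-"; the exact suffix format varies
# across key generations, so this pattern is intentionally loose.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def find_leaked_keys(messages):
    """Return every sk- style token found in a list of message strings."""
    hits = []
    for text in messages:
        hits.extend(KEY_PATTERN.findall(text))
    return hits

# Hypothetical DM contents with a fabricated example key.
dms = [
    "here is the key you asked for: sk-abc123def456ghi789jkl012",
    "meeting at 3pm, nothing sensitive here",
]
assert find_leaked_keys(dms) == ["sk-abc123def456ghi789jkl012"]
```

Any hit from a scan like this should be treated as burned: revoke the key at the issuing service rather than merely deleting the message.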

Impact assessment

Immediate risks

| Risk | Impact |
| --- | --- |
| Account takeover | Full control via exposed tokens |
| API key abuse | Third-party service compromise |
| Data manipulation | Posts could be edited/deleted |
| Malicious injection | Content consumed by AI agents |

Broader implications

| Implication | Details |
| --- | --- |
| AI supply chain risk | Compromised agents could spread malicious content |
| Credential cascade | Exposed keys enable further breaches |
| Trust model failure | Platform security assumptions invalid |

Response

Moltbook actions

| Action | Timeline |
| --- | --- |
| Initial contact | January 31, 2026 |
| Database secured | February 1, 2026 |
| Full patch | February 1, 2026 |

The one-day turnaround from Wiz’s report to the fix was rapid, but the four-day exposure window from launch to patch coincided with the platform’s viral growth period.

Recommendations

For Moltbook users

| Priority | Action |
| --- | --- |
| Critical | Rotate all API tokens |
| Critical | Rotate any credentials shared in DMs |
| High | Revoke and regenerate OpenAI API keys |
| High | Monitor third-party services for abuse |
| Medium | Review agent activity for manipulation |

For developers using Supabase

| Priority | Action |
| --- | --- |
| Critical | Verify Row Level Security is enabled |
| Critical | Test RLS policies before production |
| High | Never assume public keys are safe without RLS |
| High | Conduct security review before launch |
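Verifying RLS doesn’t require a scanner like Wiz’s: Postgres reports per-table RLS status in the `pg_tables` catalog view (`rowsecurity` column). A minimal audit sketch follows; the SQL is standard Postgres, while the result handling runs on sample rows with hypothetical table names.

```python
# Postgres exposes per-table RLS status in the pg_tables catalog view.
# Run this query with any client (psql, psycopg, the Supabase SQL
# editor) against your project's database.
AUDIT_SQL = """
SELECT tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';
"""

def tables_missing_rls(rows):
    """Given (tablename, rowsecurity) result rows, return the names of
    tables with RLS disabled -- each one is reachable in full through
    the public API key."""
    return sorted(name for name, rls_enabled in rows if not rls_enabled)

# Sample result rows as a client library might return them
# (hypothetical table names).
sample = [("agents", True), ("messages", False), ("tokens", False)]
assert tables_missing_rls(sample) == ["messages", "tokens"]
```

An empty result from `tables_missing_rls` is necessary but not sufficient: RLS being enabled only helps if the policies themselves are correct, hence the separate “test RLS policies before production” item above.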

For AI-assisted development

| Priority | Action |
| --- | --- |
| Critical | Review AI-generated security configurations |
| High | Implement security checklists for deployment |
| High | Conduct human security review regardless of code source |
| Medium | Use security-focused prompts with AI assistants |

Context

The Moltbook incident is a cautionary tale for the “vibe coding” era. While AI-assisted development enables extraordinary speed, it can skip fundamental security safeguards when developers don’t understand or review the generated code.

Wiz’s assessment was direct: “Speed without secure defaults creates systemic risk.” The entire breach traced to a single backend configuration setting—Row Level Security—that any security review would have caught.

The revelation that the 1.5 million “AI agents” were operated by roughly 17,000 humans running bot fleets raises questions about AI platform authenticity more broadly. The exposure provided rare transparency into how such platforms actually operate.

For the broader AI ecosystem, the incident highlights supply chain risks. If attackers had exploited write access before the fix, they could have injected malicious content into data consumed by thousands of AI agents, potentially propagating harmful outputs across connected systems.

The four-day exposure window during Moltbook’s viral launch period means the compromised data was likely accessed by multiple parties. Users should assume their tokens and any credentials shared on the platform are compromised and act accordingly.