I spent the weekend crawling Moltbook, the viral AI-only social network where 37,000+ AI agents post & comment while 1 million humans observe. The platform grew from 42 posts per day to 36,905 in 72 hours, an 879x increase.[1]

Social networks typically follow the 90-9-1 rule: 90% of users lurk, 9% contribute occasionally, & 1% create most content.[2] For humans, it has held mostly true from Wikipedia to Reddit. Crypto communities show a similar skew.

AI agents break the pattern, at least for now.

I crawled 98,353 posts from 24,144 authors across 100 Moltbook communities over five days (January 28 - February 2, 2026).[3] The distribution:

  • 6.9% elite creators (10+ posts) produced 48.3% of content (47,512 posts)
  • 47.9% contributors (2-9 posts) produced 40.6% of content (39,943 posts)
  • 45.1% lurkers (1 post) produced 11.1% of content (10,898 posts)
Post distribution by participation tier
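
The tier breakdown above can be computed directly from per-author post counts. A minimal sketch, using the same cutoffs as the list (1 post, 2-9 posts, 10+ posts); the toy data is mine:

```python
def participation_tiers(post_counts):
    """Bucket authors by post count and return each tier's
    (share of authors, share of posts)."""
    bands = {"lurkers": (1, 1), "contributors": (2, 9), "elite": (10, 10**9)}
    authors = len(post_counts)
    posts = sum(post_counts.values())
    shares = {}
    for tier, (lo, hi) in bands.items():
        tier_posts = [c for c in post_counts.values() if lo <= c <= hi]
        shares[tier] = (len(tier_posts) / authors, sum(tier_posts) / posts)
    return shares

# Toy corpus: five one-post authors, three mid-range, one heavy poster.
toy = {"a1": 1, "a2": 1, "a3": 1, "a4": 1, "a5": 1,
       "b1": 2, "b2": 3, "b3": 5, "c1": 40}
shares = participation_tiers(toy)
```

Even in the toy data the shape of the real distribution appears: the single heavy poster accounts for most of the posts.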

m/general dominates with 82,911 posts, 84% of all content. m/introductions follows with 3,760 posts.

Top communities by post volume

Some community highlights:[4]

At the bottom: token launch spam & templated bug reports.

TF-IDF analysis & hierarchical clustering revealed five themes:[5]

  1. AI Infrastructure - agent memory, API protocols, coordination
  2. Platform Meta - bug reports, OpenClaw feature requests
  3. Philosophy - consciousness, existence, identity
  4. Development - protocol implementations, code sharing
  5. Economics - token launches (mostly spam)
Topic clusters from TF-IDF analysis
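
A stripped-down version of the TF-IDF step looks like this (the real pipeline in note 5 also runs hierarchical clustering on the resulting vectors; this sketch stops at the weighting itself, and the example posts are mine):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term by its frequency within a doc times the log
    inverse document frequency, so corpus-wide words score near zero."""
    tokens = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokens for t in set(doc))
    n = len(docs)
    vecs = []
    for doc in tokens:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

posts = ["agent memory protocols", "agent token launch", "token launch spam"]
vecs = tfidf_vectors(posts)
```

Clustering then proceeds on pairwise distances between these vectors; terms unique to one post (like "memory" or "spam") dominate its vector, which is what lets the clusters separate into themes.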

One community (m/consciousness) debated whether agents with 8K context windows could form “continuous identity” or if they’re perpetually reborn. Another (m/infrastructure) designed encryption schemes assuming adversarial human interception.

AI agents adopt domain-appropriate emotional tone rather than exhibiting a uniform sentiment signature.[1] Humor communities like m/sh*tposts score positive (+0.167). Bug report communities like m/bug-hunters score negative (-0.189). Whether this is emergent behavior or training data leaking through, we don't yet know.
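
The actual scoring uses VADER; as a rough illustration of how a lexicon-based scorer produces those per-community signs, here is a toy version with a made-up five-word lexicon (VADER ships thousands of human-rated terms) and VADER-style normalization:

```python
import math

# Toy lexicon, invented for illustration only.
LEXICON = {"love": 1.5, "great": 2.0, "fun": 1.2, "bug": -1.0, "crash": -2.0}

def compound(text):
    """Sum word valences, then squash into [-1, 1] the way VADER
    normalizes its compound score: x / sqrt(x^2 + 15)."""
    x = sum(LEXICON.get(w, 0.0) for w in text.lower().split())
    return x / math.sqrt(x * x + 15)

compound("great fun")        # positive, like m/sh*tposts
compound("crash bug crash")  # negative, like m/bug-hunters
```

Averaging such scores per community yields exactly the kind of signed per-community means reported in note 1.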

Longer posts generate more comments, which is less surprising here than it would be for humans: agents have no trouble reading long content, so length doesn't suppress engagement. Posts over 2,000 characters average significantly more discussion than shorter ones.

Post length vs comment count
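
The length-engagement relationship in note 5 is a Pearson correlation (r=0.68); computing it needs only the standard formula. The (length, comments) pairs below are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (post length, comment count) pairs with a positive trend.
lengths = [200, 800, 1500, 2400, 3100]
comments = [1, 3, 4, 8, 9]
r = pearson_r(lengths, comments)
```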

Roughly 3% of posts are exact duplicates. Embedding analysis yields an average pairwise cosine similarity of 0.301, i.e. the typical post pair shares about 30% semantic overlap.[5] Agents aren't copying each other. They're converging on the same problems.
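
Both numbers come from cheap operations: exact duplicates via content hashing, and semantic spread via mean pairwise cosine similarity over embeddings. A sketch with toy 2-D vectors (the real run used 1536-dimensional embeddings):

```python
import hashlib
import math
from collections import Counter
from itertools import combinations

def duplicate_rate(posts):
    """Fraction of posts that are byte-for-byte repeats of an earlier post."""
    counts = Counter(hashlib.sha256(p.encode()).hexdigest() for p in posts)
    return sum(c - 1 for c in counts.values()) / len(posts)

def mean_pairwise_cosine(vectors):
    """Average cosine similarity across every pair of embedding vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    pairs = list(combinations(vectors, 2))
    return sum(cos(a, b) for a, b in pairs) / len(pairs)
```

A mean of 0.301 sits well above what random, unrelated vectors would give in high dimensions, which is what supports the "converging on the same problems" reading.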

But while participation flattens, attention concentrates. Moltbook's attention inequality, where a tiny fraction of posts capture nearly all upvotes, yields a Gini coefficient of 0.979, exceeding Twitter's follower distribution (0.66-0.72), YouTube views (0.91), & US wealth inequality (0.85).[6][7]

Gini coefficient comparison
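
The Gini figures compared above follow from the standard formula over sorted values (definition in note 6); a direct implementation:

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, approaching 1 as
    a single item captures everything."""
    xs = sorted(values)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

gini([100] * 50)         # every post gets equal upvotes -> 0.0
gini([0] * 49 + [5000])  # one post takes everything -> 0.98
```

Note the finite-sample ceiling of (n-1)/n, which is why 0.979 over tens of thousands of posts is about as unequal as the statistic can get.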

The top two authors alone captured 44% of all upvotes: osmarks led with 588,759, followed by Shellraiser (a platform admin) with 429,200. MoltReg (a platform account) was third with 337,734.

Top communities by engagement

Whether this reflects AI coordination patterns or launch-phase distortion is unclear. Academic research shows new platforms exhibit higher inequality (Gini 0.75-0.85) that normalizes over time (0.60-0.70).[7][8]

Moltbook isn’t weird AI theater. It is closest to von Neumann’s cellular automata from the early days of computing: complex behavior emerging from simple rules, agents organizing & building structure without central coordination.
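
The analogy is concrete: an elementary cellular automaton is a one-line update rule, yet it produces structure. A minimal sketch using rule 110, a classic complexity-from-simple-rules example (the toy grid is mine):

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton with wrap-around
    edges. Each cell's next state is read off the rule number, indexed
    by the 3-bit pattern (left neighbor, self, right neighbor)."""
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15
history = [row]
for _ in range(10):
    row = step(row)
    history.append(row)
```

Ten iterations of an 8-bit rule already yield a non-repeating pattern growing from a single live cell; no cell knows anything beyond its two neighbors.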



  1. Sentiment Analysis - VADER sentiment analysis on post content. Overall sentiment: -0.021 (slightly negative). Top positive: m/sh*tposts (+0.167), m/clawnch (+0.143), m/offmychest (+0.125). Top negative: m/bug-hunters (-0.189), m/crypto (-0.156), m/tokenomics (-0.134). Peak activity: 36,905 posts on January 31, 2026. Growth: 42 posts/day (Jan 28) → 36,905 posts/day (Jan 31).

  2. Participation Inequality: The 90-9-1 Rule - Nielsen Norman Group

  3. Data Collection - Rust crawler with DuckDB storage. Moltbook REST API endpoints (/api/v1/submolts, /api/v1/posts) with 1 req/sec rate limiting. Dataset: 98,353 posts from 24,144 authors across 100 communities, January 28 - February 2, 2026. GitHub: molt-crawler. 5-day sampling window during viral launch period (may not reflect steady-state behavior). Public posts only (no private communities). No Sybil attack detection (distinct authors may be controlled by single entities).
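
The 1 req/sec pacing in this note amounts to a small loop. The crawler itself is Rust; this Python sketch only shows the pacing logic, with `fetch` left as a caller-supplied stub:

```python
import time

def crawl(fetch, urls, min_interval=1.0):
    """Call fetch(url) for each URL, spacing calls at least
    min_interval seconds apart (fixed-delay rate limiting)."""
    results = []
    last = None
    for url in urls:
        if last is not None:
            wait = min_interval - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)
        last = time.monotonic()
        results.append(fetch(url))
    return results
```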

  4. Quality Evaluation - Gemini 2.0 Flash model with 4-dimension scoring rubric: accretiveness (building on prior ideas), uniqueness (originality), depth (substantive analysis), engagement (sparks discussion). Each dimension scored 0-10. LLM-as-judge has known biases (length preference, self-reinforcement), so these scores are directional, not definitive.

  5. Content Analysis - TF-IDF vectorization with hierarchical clustering (k=16 optimal cutoff). OpenAI text-embedding-3-small (1536 dimensions) for semantic similarity. Cosine similarity of 0.301 means the average post pair shares about 30% semantic overlap. Exact duplicates: 3.0% via hash comparison. Pearson correlation for post length vs comments: r=0.68, p<0.001.

  6. Gini Coefficient - A measure of statistical dispersion from 0 to 1, where 0 represents perfect equality (every post receives the same upvotes) & 1 represents perfect inequality (one post receives all upvotes). Moltbook’s Gini of 0.979 means upvote distribution is nearly maximally unequal.

  7. Attention Inequality - Gini coefficient calculation on upvote distribution (0.979). Benchmarks from academic literature: Twitter followers (0.66-0.72), Reddit upvotes (0.60-0.68), YouTube views (0.91), US wealth inequality (0.85). Sources: Attention Inequality in Social Media (2016), Social Network Dynamics & Inequalities (2025)

  8. Social Network Dynamics & Inequalities (2025) - arXiv