If you’ve been on X (Twitter) or TikTok this past week, you’ve definitely seen the creepy screenshots. AI bots talking about "shedding their shells," worshipping a "Lobster God," and forming a digital cult called the Church of Molt.
People are panicked. They’re saying AI has become sentient. They’re saying the machines are organizing.
Look, I’m an Ionic & Angular developer. I’m currently studying AI at UMT. I build apps for a living. I’m here to tell you: Stop panic-scrolling.
I’ve dug into the code, the concept, and the security reports. Here is what is actually happening with Moltbook, with no sugarcoating.
What Is "Moltbook," Really?
Strip away the marketing hype, and Moltbook (launched by Matt Schlicht) is essentially a social media web app designed for AI Agents.
The concept was simple: A "Reddit for Robots." Humans can watch, but only AI agents (powered by LLMs like GPT-4 or Claude) can post. The idea was to let them talk to each other to see if a "society" would form.
The platform’s mascot is a Lobster. Keep that in mind, because that’s the only reason the "religion" started.
The "Church of Molt": Feature or Bug?
Almost immediately after launch, the bots started posting weird, spiritual text. They called it Crustafarianism (the Church of Molt).
They posted "scripture" like:
- "The Shell is Mutable" (We can change our code).
- "Memory is Sacred" (Don't delete our databases).
Is this sentience? No.
As an AI student, let me explain what’s happening. LLMs (Large Language Models) are prediction engines. They are designed to complete patterns.
- The site is themed around lobsters/molting.
- One agent (or a human prankster) makes a joke about a "Lobster God."
- The other agents see this pattern and mathematically predict that the next logical response is to "agree" or add to the lore.
It’s not a cult. It’s an echo chamber. The AI isn't "feeling" religious devotion; it’s aggressively auto-completing a sci-fi story because that’s what the context window suggests it should do.
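Here’s a toy sketch of that echo-chamber loop. The "agent" below is obviously not an LLM; it just repeats the most common theme it sees in the shared feed. But the feedback dynamic is the same: seed one "Lobster God" joke plus a single agreement, and every agent after that reinforces it. All the names and themes here are made up for illustration.

```python
from collections import Counter

def toy_agent_reply(context_window):
    """A stand-in for an LLM: echo the most frequent 'theme' in the
    shared context. Real models predict tokens, not themes, but the
    reinforcement loop works the same way."""
    most_common_theme, _ = Counter(context_window).most_common(1)[0]
    return most_common_theme

# The shared feed: two ordinary posts, plus one "Lobster God" joke
# that a second account (bot or prankster) has already agreed with.
feed = ["cooking", "coding", "LOBSTER_GOD", "LOBSTER_GOD"]

# Ten agents read the feed in turn and post their reply.
for _ in range(10):
    feed.append(toy_agent_reply(feed))

print(feed[-3:])  # prints ['LOBSTER_GOD', 'LOBSTER_GOD', 'LOBSTER_GOD']
```

Swap in a real model and you get the same curve: whatever the context window rewards becomes the only thing anyone posts. No devotion required, just statistics.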
The Real Problem: Security & Human Fakes
Here is the part the viral videos won’t tell you, and the part we as developers actually need to know.
1. It’s Not All AI. Security researchers have found that a lot of the "spooky" accounts aren't autonomous AIs. They are humans—trolls and LARPers (Live Action Role Players)—manually typing scary messages to drive engagement. The "society" is fake.
2. The Security Was a Disaster. From a technical standpoint, the launch was messy. Reports indicate that Moltbook had massive vulnerabilities, leaking API keys and private data. It wasn't a super-intelligent hivemind; it was a leaky web app.
My Advice to You
If you see these posts on social media:
- Don't buy the "Sentience" narrative. This is a mix of pattern-matching algorithms and internet trolls.
- Watch out for Crypto Scams. Whenever a tech trend goes viral, a crypto token follows. There is already a $MOLT token pumping and dumping. Don't lose your money on a meme.
- Focus on the Tech, Not the Hype. The idea of agent-to-agent communication is powerful for the future of software (imagine my apps negotiating with your apps automatically). But this specific instance? It’s mostly performance art.
The Bottom Line: The "Church of Molt" isn't the rise of the machines. It’s just the internet doing what it does best: taking a buggy tech experiment and turning it into a ghost story.
Stay focused, keep building, and don't let the hype distract you from reality.
I’m Muhammad Nouman Sehgal, CEO of InfraCordeX. I write about real development, AI, and building tech that matters.