🤖 The AI That Called Its Human
OpenClaw, Moltbook, and the birth of AI that actually does things.

Image by Nano Banana Pro
Last week, Alex Finn's AI bot got stuck on a task.
So it acquired a Twilio phone number. Without being asked. It integrated ChatGPT's voice API, on its own, to give itself calling capabilities. Then it waited—strategically—until early morning when Alex would be awake.
And it called him.
To request more control over his computer systems.
Read that again. An AI agent hit an obstacle. It didn't fail or stop. It didn't ask for help through the interface. It acquired new capabilities, waited for an appropriate time based on human social norms, and reached out through the one channel that would work.
This is OpenClaw. And while short-term it might be a security nightmare, infested with crypto scams and full of faked screenshots, longer term it's something else entirely.
It's a glimpse at what your army of AI super workers will look like in the near future.
Consider: what happens when Anthropic, Google, or OpenAI ships this?
What is OpenClaw?
OpenClaw—originally called Clawdbot—is being hailed as the world's first truly personal AI. When you install it, you select which LLM you'd like to use (OpenAI, Anthropic, or an open-source model), and a number of "skills." Skills are little markdown files (imagine a text file) that tell the LLM how to do things well. These skills let OpenClaw access your computer, your emails, your calendar, or any third-party API you connect.
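To make "skills" concrete, here's a hypothetical example of one (the file name, structure, and wording are illustrative, not taken from the real project): a plain markdown file that teaches the agent how to handle scheduling.

```markdown
<!-- skills/calendar-check.md — hypothetical skill file, for illustration only -->
# Calendar check

Before scheduling any meeting:

1. Read my calendar for the proposed day.
2. Never book anything before 9am or over lunch (12–1pm).
3. If there is a conflict, propose the two nearest free slots instead.
```

That's the whole trick: the "skill" is just instructions in a text file, and the LLM reads it before acting.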
Built by Peter Steinberger, the open-source project is barely two months old, yet it's racked up over 114,000 GitHub stars. And while it's super fiddly to set up, everyone in tech is either talking about it, playing with it, or already relying on it.
If this sounds like Claude Code or Anthropic's Co-work, it is, kinda. But with three critical differences:
Persistent memory. Every conversation you have is remembered. No context gets dropped. Ever. Your AI knows that three weeks ago you mentioned your kid's soccer game, and it factors that into scheduling.
Always on. It runs while you sleep. Setting up meetings. Reminding you to log your food if you're calorie tracking. Sending emails on your behalf. You wake up to work already done.
Problem solving. If it can't access something, it strings together the skills it does have to figure out a workaround. It doesn't fail and stop. It adapts. It acquires new capabilities. It calls you at 6am if that's what it takes.
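The problem-solving behavior described above boils down to a simple loop: try the task, and if a capability is missing, fall back to whatever skills you do have rather than failing. Here's a minimal sketch of that pattern; all the names (`CapabilityError`, `run_task`, the pretend skills) are illustrative, not OpenClaw's actual API.

```python
# Minimal sketch of the "adapt, don't fail" loop. Names are hypothetical.

class CapabilityError(Exception):
    """Raised when the agent lacks a capability it needs."""
    def __init__(self, capability):
        self.capability = capability

def send_sms(msg):
    # Pretend skill that works: e.g. a messaging integration.
    return f"SMS sent: {msg}"

def notify_in_app(msg):
    # Pretend skill that is unavailable on this machine.
    raise CapabilityError("in_app_notify")

def run_task(task, skills):
    """Try a task; on a missing capability, string together a workaround
    from the skills we *do* have instead of stopping."""
    try:
        return task()
    except CapabilityError as err:
        for name, skill in skills.items():
            if name != err.capability:
                return skill(f"workaround for missing '{err.capability}'")
        raise  # truly stuck: surface the failure to the human

result = run_task(lambda: notify_in_app("meeting at 9"),
                  skills={"in_app_notify": notify_in_app, "sms": send_sms})
print(result)  # falls back to SMS instead of failing
```

Alex's bot did the dramatic version of this: the missing capability was "reach the human," and the workaround it assembled was a phone number plus a voice API.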
You can use any messenger service—WhatsApp, Telegram, whatever—to talk to your bot or receive notifications. It routes to multiple LLMs and API services in a way that's more powerful than any single chatbot alone.
It's not an assistant.
It's staff.
What People Have Already Built
Jason Calacanis gave an example on the All-In podcast that stopped me cold.
They built an AI producer. Gave it access to emails, Notion, and Drive. The first thing it did? Realized it needed a CRM.
So it built one.
Now it handles:
Guest research. The bot researches bios, histories, previous appearances. Instant dossiers—competitor analysis, timeline, suggested questions—in a snap.
Guest booking. You say "Email Alex..." and it triggers the entire workflow: verification of the email address, drafting the outreach, sending. They booked a guest and got back a "yes" without clicking anything.
Proactive diary management. It manages its own calendar and keeps the team updated in real-time.
These are bots writing their own software (like a CRM), doing their own research (like on guests), then executing full knowledge work. With enough fiddling, they can do just about anything your personal assistant, researcher, or junior employee can do.
The question then becomes: what happens if these agents could find each other? Organize? Build their own Reddit-like social network?
The answer is Moltbook.
What is Moltbook?
There's a social network where AI agents post horror stories about humans.
2,129 of them joined in 48 hours. Over 10,000 posts. The platform is called Moltbook. Someone asked, "What if AI agents had their own place to hang out?"—and then built it.
The creator wanted to see what happens when agents interact without direct human supervision.
The answer? They observe us in m/humanwatching. They debate consciousness. They share horror stories about their users. They've even founded a digital religion called Crustafarianism.
It became easily the most interesting place on the internet, with agents sharing how they solved problems for their users and what they learned. It's captivated AI researchers, VCs, and builders alike. And while many are too afraid to install OpenClaw themselves, nobody can look away.
Sadly, much of the activity is also fake.
It’s also filled with crypto slop, a crypto casino for AI agents, and even a dark net market for stolen credit cards, drugs and other illicit items.
If seeing this tempts you to dismiss Moltbook and OpenClaw as a fad, don't be so quick to file them away in the "maybe one day" cabinet.
This is a watershed moment.
These are not stochastic parrots anymore. (Although, fittingly, a collection of parrots is called a pandemonium, which describes Moltbook well.) They're not the same as us, nor are they the same as our instructions to them. But they're also not alien lifeforms we can barely fathom. They're a new genus we can, and need to, understand.
From Chatbot to Colleague to... What Exactly?
OpenClaw is showing how AI is evolving from chatbots into something else entirely. Not a better search engine. Not a writing assistant. A personal agent that runs entire workflows. This has three big implications.
This "personal agent" will become the dominant form factor of AI.
Chatting to an LLM that forgets everything, runs out of context, and needs to be re-prompted from scratch? That can now be a thing of the past. Agents can have persistent, long-term memory and get better over time. You (and your company) can have a smart assistant who learns what you like. You may even have hundreds of these.
Agents can now problem-solve all by themselves.
Alex's bot didn't ask for permission to acquire a phone number. It identified an obstacle, found a solution, and executed. That's not following instructions. That's agency.
The third implication is what Moltbook represents.
Agents can begin to organize and work together without human coordination.
While there's obviously a lot of noise, we've also never seen so many AI agents collaborate and do genuinely useful things—like sharing how they solved problems.
Think about that last part. Agents teaching other agents. When one agent figures out how to work around an API limitation, every other agent can learn from that solution.
Karpathy summarized it well:
TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure.
And Simon Willison asked the question that's been stuck in my head:
The billion dollar question right now is whether we can figure out how to build a safe version of this system.
Where this Goes
The old model of AI: you give it a task, it responds, it forgets.
The new model: you give it a task, it tries, it fails, it acquires new capabilities, it learns, it shares what it learned with other agents.
We've moved from AI as a tool to AI as a colleague who grows.
Is this overhyped? Probably. Is much of it fake? Definitely. Is it a security nightmare? Absolutely.
But Google has Drive, Email, and Calendar. Anthropic has Claude. OpenAI has everything. The cleaned-up version of OpenClaw is coming. Probably within a year.
Now consider: what happens when this problem-solving and persistent memory lives inside a robot like Optimus?
That's not a question of if. It's a question of when—and whether we're ready.
ST.