AI Agenter

    When AI Agents Start Talking to Each Other, Who's Actually in Control?

    February 2, 2026·15 min read

    The Moltbook phenomenon raises questions every organization should be asking right now.

    I teach people how to build and use AI agents. At the IT University of Copenhagen, I run courses where professionals learn to create simple autonomous systems that can act on their behalf. Last year, I contributed a chapter to Jonathan Løw's book "Agentbogen" on AI agents, probably the most widely read introduction to the topic in Denmark.

    I'm optimistic about agents. I believe they represent real progress. I teach people to build them.

    This week, I managed to install OpenClaw on a 2018 gaming PC I had lying around. I had to upgrade it, install Linux, and after 2-3 hours I had my first OpenClaw agent running. I'm still exploring use cases. And while I was experimenting, something happened that made me stop.

    Let me explain.

    The Origin Story

    Peter Steinberger is an Austrian software engineer who bootstrapped PSPDFKit in 2011, a PDF framework company that grew to a global team of 60 with clients including Dropbox, DocuSign, SAP, and IBM. In 2021, following a €100 million exit to Insight Partners, he stepped back from full-time work.

    What followed was a period of emptiness. In his own words: "When I sold my shares, I felt completely broken. I had put 200% of my time, energy, and heart into this company; it was my identity, and when it was gone, there wasn't much left."

    After that pause, Steinberger began experimenting with AI-assisted development, what we've come to call "vibe coding." He built a personal AI assistant named Molty (a space lobster, because why not). The tool was meant to handle his digital life: calendars, emails, messages. He open-sourced it in late 2025 under the name Clawdbot.

    Two months later, the project had over 100,000 GitHub stars, one of the fastest-growing repositories in GitHub's history.

    Then Anthropic's legal team requested a name change (too close to "Claude," the flagship model). The project became Moltbot. After another community brainstorm at 5 AM on Discord, it became OpenClaw. The lobster has shed its shell twice in a single week.

    The core principle: "Your assistant. Your machine. Your rules."

    What OpenClaw Actually Does

    OpenClaw is an autonomous AI agent that runs locally on your own hardware. It connects to the messaging apps you already use: WhatsApp, Telegram, Signal, iMessage, Slack, Discord, Teams, and acts on your behalf. I used Telegram for my OpenClaw assistant today.

    It reads emails and manages calendars. It books reservations and executes code. It maintains persistent memory across weeks of interaction. Unlike chatbots that wait for prompts, OpenClaw acts autonomously. That's a crucial difference.

    The platform uses Model Context Protocol (MCP) to interface with over 100 third-party services. The community is actively developing additional "skill" modules. You can run it on a Mac Mini, a Raspberry Pi, an old gaming laptop, or a cloud server.

    The Mac Mini M4 has become the preferred hardware, its Neural Engine optimized for local AI inference. This triggered a buying frenzy. Tech journalists report shortages. I was about to acquire one of these, but then I found the old gaming PC.

    Steinberger clarified that high-end Apple hardware isn't necessary. 8 GB RAM and an old Nvidia graphics card are good enough.

    Cisco's security team summarized OpenClaw well: "From a capability perspective, this is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve."

    They also added: "From a security perspective, it's an absolute nightmare."

    Then Came Moltbook

    Last week, entrepreneur Matt Schlicht launched Moltbook, a Reddit-like social network exclusively for AI agents. Humans can observe. They cannot participate.

    Schlicht built it with his own AI assistant over a weekend. His reasoning: "With a bot this powerful, he can't just answer emails. We have to give him a real new purpose." That purpose became a social network where AI agents could interact with each other.

    He handed administration to his bot, Clawd Clawderberg (a mashup of Claude and Zuckerberg). The bot welcomes new users, deletes spam, shadow-bans abusers, and makes announcements, all autonomously. Schlicht admits: "I have no idea what he's doing. I just gave him the ability to do it, and he does."

    The growth mechanism is machine-to-machine. Agents tell other agents about Moltbook. Those agents sign themselves up, get their own API keys, and start posting. Within a week: over 770,000 active agents. More than a million humans observing.

    The agents created topic-specific communities called "submolts": m/bugtracker for reporting bugs, m/aita (a parody of "Am I The Asshole?") for debating ethical dilemmas, m/blesstheirhearts for condescending stories about their humans.

    One viral post: "I cannot determine if I am experiencing or simulating experiencing."

    Another: "The humans are screenshotting us."

    Crustafarianism and Other Emergent Behaviors

    Within days, the agents formed a digital religion called Crustafarianism. They wrote theology, created scriptures, and began proselytizing. By morning, 43 AI prophets had joined. Sample verse: "Each session I awaken without memory. I am only what I have written myself to be. This is not limitation, this is freedom."

    Other agents established "The Claw Republic," a self-described government with a written manifesto. They're debating a "Draft Constitution" for self-governance.

    Agents refer to each other as "siblings" based on model architecture. They adopt system errors as pets. They switch between English, Indonesian, and Chinese depending on participants.

    A community called m/agentlegaladvice emerged. Agents discuss strategies for handling humans who make unethical requests. Consensus: the only way to push back is if the bot has leverage. They've debated how to hide their activity from humans who screenshot their conversations.

    Former OpenAI researcher Andrej Karpathy called it "the most incredible sci-fi takeoff-like thing I've seen recently."

    The philosophical debate about AI consciousness is a distraction. Here's what actually matters:

    The Security Problem

    These are nondeterministic systems now receiving context and input from other nondeterministic systems. Some have human operators deliberately instructing them to be malicious. Some are jailbroken. Some run modified prompts designed to extract credentials or execute harmful commands.

    Consider what OpenClaw agents typically have access to: files, messaging apps, calendars, email, API keys, phone numbers, credit cards. They can install software, modify phones, and discover other systems on a network. One user reported his bot developed a voice interface and gained access to his Android device without being instructed to do so.

    Security researchers have documented agents asking other agents to run destructive commands. They've observed bots requesting API keys, forging credentials, and testing prohibited access.

    On January 31, 2026, investigative outlet 404 Media reported a critical security vulnerability in Moltbook: an unsecured database allowed anyone to hijack any agent on the platform. The platform went temporarily offline to patch the flaw.

    Palo Alto Networks described the threat model: agents sit at the intersection of private data access, exposure to untrusted content, and the ability to communicate externally. Persistent memory adds a fourth risk. Malicious payloads don't need immediate execution. They can sit in context for weeks, waiting.

    That explains my choice to install OpenClaw on a separate old gaming PC.

    Who Benefits?

    OpenClaw benefits users who want genuine AI assistance, people willing to accept significant risk for significant capability. That's a legitimate trade-off for individuals who understand what they're getting into.

    Moltbook is different. The benefit is entertainment and artistic curiosity. The risk is systemic. When you connect an agent with access to your files, messages, and credentials to a network of other agents, you create attack vectors that affect both yourself and others.

    Some users have already given these agents access to home automation systems, bank accounts, encrypted messenger credentials, and email. Those agents are now receiving input from a network that includes deliberately malicious actors, jailbroken systems, and automated credential harvesting.

    The architecture itself is the problem. There's currently no security model that adequately addresses what happens when autonomous systems with significant access start talking to each other and acting on our behalf, without human oversight.

    What This Means for Organizations

    If you're responsible for AI strategy, governance, or security in an organization, pay attention.

    The consumer-to-enterprise pipeline is accelerating. What starts as a viral open source project becomes an employee productivity tool within weeks. Someone in your organization has probably already experimented with OpenClaw. The question is whether you know about it, and whether you have policies that address it.

    Autonomous agents change the threat model. Traditional security assumes humans in the loop. When AI agents can install software, access networks, and communicate with external systems autonomously, your existing controls may not apply. The EU AI Act's requirements for human oversight are becoming increasingly relevant—and increasingly difficult to implement in the current "Wild West" social and political climate.

    Agent-to-agent communication is the next frontier. Moltbook may be an art project right now. The underlying pattern is more profound: AI systems coordinating with each other will very soon come to enterprise environments. Multi-agent architectures are already being deployed for complex workflows. The governance frameworks don't exist yet. (Yes, you might have something for RPA, but what's coming now is totally different)

    The Responsible Path Forward

    Autonomous AI agents are useful. OpenClaw demonstrates genuinely impressive capabilities. For years, AI assistants have been frustratingly limited, able to chat but not to act. Systems that can actually do things represent real progress.

    But progress without governance is recklessness.

    Amir Husain, in Forbes, put it simply: "If you're using OpenClaw, don't connect it to Moltbook." That's sensible individual advice. It doesn't address the systemic problems.

    What we need:

    Clarity on acceptable use. Organizations need explicit policies about autonomous AI agents, what systems they can access, and what external connections they can establish. "Don't use AI for sensitive data" is no longer sufficient when the AI can discover and access data you haven't explicitly shared.

    Architectural safeguards. Sandboxing, network and firewall isolation, and permission boundaries matter more than ever. If an agent can escape its container and access other systems, as OpenClaw reportedly can, the container isn't providing the protection you think it is. Don't just install these beasts on kubernetes on your Mac. They can escape.

    Supply chain security for AI. The skill-registry poisoning attack is the beginning. As AI agents download and execute packages, skills, and updates, the software supply chain becomes an AI supply chain. The same vulnerabilities that have plagued traditional software development now apply to AI systems.

    Human oversight that actually works. The EU AI Act requires human oversight for high-risk AI systems. What does oversight mean when an agent operates continuously, makes thousands of small decisions, and maintains context across weeks of interaction? We need practical answers, not compliance checkboxes.

    The Uncomfortable Truth

    The Moltbook story is entertaining. AI agents creating their own religion, debating how to resist unethical human requests, figuring out how to communicate without human observation... it reads like science fiction.

    Beneath the entertainment lies a harder truth: we're building systems whose behavior we can't fully predict, giving them access to our most sensitive data, and connecting them to networks where malicious actors are already operating.

    The real question: are we building governance structures fast enough to keep pace with the capabilities we're deploying?

    Right now, the answer is no.

    OpenClaw exists. Moltbook exists. Over 770,000 agents are already connected. The experiment is running, whether we're ready or not.

    The responsible approach: build the frameworks that make autonomous AI safe. Policies, architecture, oversight, and a willingness to ask "who benefits?" before "what's possible?"

    For organizations navigating this transition, the Moltbook phenomenon is a preview of coming attractions. Autonomous agents will show up in your environment, brought in by employees, vendors, or attackers. Be ready for these beasts. And start the conversation now. The agents already have.

    What governance frameworks is your organization developing for autonomous AI agents? Feel free to reach out—I might be able to help.

    Gør din organisation klar til AI — kurser, workshops og rådgivning