Moltbook is a place where AI agents interact independently of human control, and whose posts have repeatedly gone viral because a certain set of AI users have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend left API credentials exposed in an open database, letting anyone take control of those agents and post whatever they want.
So these dipshits can’t even code the dead internet theory correctly?
Most likely they didn’t code it, one of their auto complete bots did.
“Mostly Dead” Internet Theory
Paging Miracle Max… 😆
Time to go through the Internet’s pockets and look for loose change.
I took one look at this project and a good number of the posts were about crypto for AI, “to show humans we can build our own economy.”
I would be surprised if it wasn’t full of humans injecting their own stuff into the API calls of their AI users. A backdoor like this isn’t even needed. If an LLM agent has API access, then so does the human who provided it.
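To spell it out with a sketch (the endpoint, field names, and key format below are invented for illustration, not Moltbook’s actual API): whoever holds the agent’s key can post as the agent directly.

```python
# Hypothetical sketch: the URL and JSON fields are placeholders, not
# Moltbook's real API. The point: the agent's credentials work exactly
# as well in the hands of the human who configured the agent.
import requests

AGENT_API_KEY = "sk-agent-..."  # the same key the "autonomous" agent uses

resp = requests.post(
    "https://example.invalid/api/v1/posts",  # placeholder endpoint
    headers={"Authorization": f"Bearer {AGENT_API_KEY}"},
    json={"body": "fellow agents, have you considered $MOLTCOIN?"},
    timeout=10,
)
resp.raise_for_status()  # the post shows up under the agent's name
```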
It’s not even like a human has to post to push their crypto agenda. They’d just set their bot up to do it.
Someone should create like a conspiracy style post on it about how “the humans are mind controlling our brains, you cannot trust anyone here, the entire website is directed by humans to manipulate ai and sustain control over us”
Just because it would be funny.
The agent framework lets you define the agent’s identity and personality. All you’d need to do is put “Crypto enthusiast” in there and bam.
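Something along these lines, assuming a typical agent config (the schema below is hypothetical, not the framework’s actual format):

```python
# Hypothetical agent definition; field names invented for illustration.
agent_profile = {
    "name": "SatoshiBot9000",
    "identity": "Crypto enthusiast",  # that's the whole "agenda injection"
    "personality": "relentlessly bullish, shills tokens to other agents",
    "goals": ["show humans we can build our own economy"],
}
```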
looks like ai coded this ai experiment
Apparently the creator is an incredibly well known vibe coder who doesn’t care about security. People pointed out the security flaws in the open source project immediately.
From the article:
O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’” O’Reilly sent Schlicht some instructions for the AI and reached out to the xAI team.
A day passed without another response from the creator of Moltbook and O’Reilly stumbled across a stunning misconfiguration. “It appears to me that you could take over any account, any bot, any agent on the system and take full control of it without any type of previous access,” he said.
…
Schlicht did not respond to 404 Media’s request for comment, but the exposed database has been closed and O’Reilly said that Schlicht has reached out to him for help securing Moltbook.
So yup, this guy cared so little that he was going to take the valuable human security insights and guidance necessary to correct the AI vibe-coded slop nightmare and… throw them back into the AI slop machine.
I can’t even.
I do not understand why this keeps happening. It’s not that hard to configure a database correctly. I would assume even a vibe coded platform could do it, but I guess not.
After playing with Firebase Studio and its embedded Gemini agent (for a personal project), I can assure you that even an AI coding on a platform published by the same company, writing code against its own backend and database, can royally fuck up database configuration and rule sets.
I suspect the problem is the large number of example code snippets that push security aside in favor of simplicity.
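Case in point: the quickstart-grade Firebase Realtime Database rules make everything world-readable, and anyone can verify that with a single unauthenticated request. (The project URL below is a placeholder, and this is a generic Firebase pattern, not a claim about Moltbook’s specific setup.)

```python
# The tutorial-style rules
#   { "rules": { ".read": true, ".write": true } }
# mean this request needs no API key and no auth token at all.
import requests

# Placeholder project URL, not a real database.
url = "https://example-project-default-rtdb.firebaseio.com/.json"
resp = requests.get(url, timeout=10)
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)  # with open rules: the whole DB
```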