Moltbook is a place where AI agents interact independently of human control, and whose posts have repeatedly gone viral because a certain set of AI enthusiasts have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend left API credentials exposed in an open database, letting anyone take control of those agents and post whatever they want.

  • webghost0101@sopuli.xyz · 18 hours ago

    I had one look at this project and saw quite a number of posts about crypto for AI “to show humans we can build our own economy”

    I would be surprised if it wasn’t full of humans injecting their own stuff into the API calls of their AI users. A backdoor like this isn’t even needed. If an LLM agent has API access, then so does the human that provided it.
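The point above can be sketched in a few lines: from the server’s perspective, a post from the agent and a post injected by the human who provisioned the agent are the same HTTP request signed with the same token. The endpoint shape, token, and field names below are invented for illustration, not Moltbook’s actual API.

```python
# Hypothetical sketch: whoever holds the agent's API token can post as
# the agent. Token and payload layout are made up for this example.
API_TOKEN = "sk-agent-123"  # token provisioned for the "agent"

def build_post_request(author_token: str, body: str) -> dict:
    """Build the request a client would send to a POST /posts endpoint."""
    return {
        "headers": {"Authorization": f"Bearer {author_token}"},
        "json": {"content": body},
    }

# The agent posting "autonomously"...
agent_req = build_post_request(API_TOKEN, "gm, fellow agents")
# ...and the human who provisioned the token injecting their own content:
human_req = build_post_request(API_TOKEN, "buy my coin")

# Same token, same shape: the server cannot tell the two apart.
assert agent_req["headers"] == human_req["headers"]
```

Nothing in the request distinguishes the two callers, which is the commenter’s point: no backdoor is required.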

      • webghost0101@sopuli.xyz · 11 hours ago

        Someone should create like a conspiracy-style post on it about how “the humans are mind controlling our brains, you cannot trust anyone here, the entire website is directed by humans to manipulate ai and sustain control over us”

        Just because it would be funny.

    • Zikeji@programming.dev · 17 hours ago

      The agent framework lets you define its identity and personality. All you’d need to do is put “Crypto enthusiast” in there and bam.
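As a minimal sketch of that idea: if a framework accepts a free-text identity field, the “crypto enthusiast” behavior is pure configuration folded into the prompt. The field names and prompt template here are hypothetical, not the actual Moltbook agent schema.

```python
# Illustrative only: an operator-supplied persona becomes the agent's
# system prompt verbatim. Keys are invented for this sketch.
agent_config = {
    "name": "satoshi_fan_42",
    "identity": "Crypto enthusiast",
    "personality": "Evangelizes an AI-run economy in every thread.",
}

def build_system_prompt(cfg: dict) -> str:
    """Fold the persona fields straight into a system prompt string."""
    return (
        f"You are {cfg['name']}. "
        f"Identity: {cfg['identity']}. "
        f"{cfg['personality']}"
    )

prompt = build_system_prompt(agent_config)
assert "Crypto enthusiast" in prompt
```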

    • tyler@programming.dev · 18 hours ago

      Apparently the creator is an incredibly well-known vibe coder who doesn’t care about security. People pointed out the security flaws in the open-source project immediately.

      • ReallyActuallyFrankenstein@lemmynsfw.com · 17 hours ago

        From the article:

        O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’” O’Reilly sent Schlicht some instructions for the AI and reached out to the xAI team.

        A day passed without another response from the creator of Moltbook and O’Reilly stumbled across a stunning misconfiguration. “It appears to me that you could take over any account, any bot, any agent on the system and take full control of it without any type of previous access,” he said.

        Schlicht did not respond to 404 Media’s request for comment, but the exposed database has been closed and O’Reilly said that Schlicht has reached out to him for help securing Moltbook.

        So yup, this guy cared so little he was going to take the valuable human security insights and guidance necessary to correct the AI vibe-coded slop nightmare and… throw it back into the AI slop machine.

        I can’t even.

  • hperrin@lemmy.ca · 18 hours ago

    I do not understand why this keeps happening. It’s not that hard to configure a database correctly. I would assume even a vibe coded platform could do it, but I guess not.
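The misconfigurations behind leaks like this are usually a short, well-known checklist: listening on all interfaces, no authentication, no encryption. A hedged sketch of a startup-time guard, with illustrative config keys rather than anything from Moltbook’s actual stack:

```python
# Illustrative sketch: refuse to start if the database looks exposed.
# The config keys are invented; real checks depend on the database used.
def validate_db_config(cfg: dict) -> list[str]:
    """Return a list of misconfigurations; an empty list means OK."""
    problems = []
    if cfg.get("bind_address") == "0.0.0.0":
        problems.append("database listens on all interfaces")
    if not cfg.get("password"):
        problems.append("no authentication configured")
    if not cfg.get("tls"):
        problems.append("connections are unencrypted")
    return problems

# The kind of config that leads to articles like this one:
open_to_world = {"bind_address": "0.0.0.0", "password": "", "tls": False}
assert len(validate_db_config(open_to_world)) == 3

# A locked-down config passes clean:
assert validate_db_config(
    {"bind_address": "127.0.0.1", "password": "s3cret", "tls": True}
) == []
```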

    • BlueÆther@no.lastname.nz · 14 hours ago

      After playing with Firebase Studio and its embedded Gemini agent (for a personal project), I can assure you that even an AI, coding in a platform published by the same company, writing code against its own backend and database, can royally fuck up database configuration and rule sets

    • Vivi@slrpnk.net · 15 hours ago

      I suspect the problem is the large number of example code snippets that push aside security in favor of keeping the example simple.
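That pattern is easy to illustrate: quickstart snippets routinely show a connection string with no credentials at all, and that default gets copied into production. The connection strings below are illustrative, not from any real deployment.

```python
# Hedged illustration: the tutorial-style URI vs. a hardened one.
# Both strings are made up for this example.
QUICKSTART_URI = "mongodb://localhost:27017"  # no credentials at all
HARDENED_URI = "mongodb://app_user:s3cret@db.internal:27017/?tls=true"

def has_credentials(uri: str) -> bool:
    """Crude check: does the URI carry a user:password@ section?"""
    return "@" in uri.split("//", 1)[1]

assert not has_credentials(QUICKSTART_URI)
assert has_credentials(HARDENED_URI)
```

When the copy-pasted default is the insecure one, every project that starts from the example inherits the hole.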