I made a robot moderator. It models trust flow through a network built from voting patterns, and detects people, posts, and comments that are accumulating a large amount of “negative trust,” so to speak.
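
Roughly speaking, the idea looks something like this simplified sketch (not the exact algorithm, and all names here are illustrative):

```python
# Simplified sketch of trust propagation over a vote graph; illustrative
# only, not the exact algorithm. Votes are (voter, target, value) triples,
# with value = +1 for an upvote, -1 for a downvote.
from collections import defaultdict

def trust_scores(votes, iterations=20, damping=0.85):
    """Iteratively propagate trust: each vote carries weight proportional
    to the voter's current trust, so downvotes from trusted users pile up
    as "negative trust" on the target."""
    users = {u for vote in votes for u in vote[:2]}
    incoming = defaultdict(list)
    for voter, target, value in votes:
        incoming[target].append((voter, value))
    trust = {u: 1.0 for u in users}
    for _ in range(iterations):
        trust = {
            u: (1 - damping) + damping * sum(
                max(trust[voter], 0.0) * value  # distrusted voters carry no weight
                for voter, value in incoming[u]
            )
            for u in users
        }
    return trust
```

The key property is that a downvote only counts for as much as the trust of the person casting it, which is what makes the “negative trust” accumulate on the right targets.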

In its current form, it’s supposed to run autonomously. In practice, I have to step in and fix its boo-boos when it makes them, which happens sometimes but not very often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about having it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident at this point that it can ease moderation load without causing many problems.
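
To give a sense of what that mode switch could look like, here’s a rough sketch (the names are illustrative, not the bot’s real configuration):

```python
# Rough sketch of an assistant-mode switch; names are illustrative,
# not the bot's real configuration.
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"  # act directly: delete flagged comments
    ASSISTANT = "assistant"    # defer to humans: file a report instead

def handle_flagged(comment_id, mode, client):
    """`client` stands in for whatever moderation client is in use."""
    if mode is Mode.ASSISTANT:
        client.report(comment_id, reason="flagged by trust model; please review")
    else:
        client.delete(comment_id)
```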

!santabot@slrpnk.net

      • hendrik@palaver.p3x.de · 25 days ago
        Sure, no need to explain. I think it was appropriate to point it out.

        And wow, quite a lot of comments you got. I’m not sure I agree with the negative ones. We’ve been requesting better moderation tools for a long time now, so I wouldn’t immediately do away with your effort. I share some of the concern about privacy and about introducing “algorithms” and bots into the platform instead of making it more human… But nonetheless, we need good moderation. A lot of the issues are just technical in nature and can be solved, and you seem pretty aware of them. And there’s always a balance, and a potential for abuse that comes with power…

        I think we should experiment and try a few things. A bot is a very good idea, since we won’t get that into the Lemmy core software, I think mostly for personal reasons that relate to the lemmy.ml situation. I’ll have a look at the code. But I’m using PieFed instead of Lemmy, which already attributes reputation scores to users. So this might be aligned with PieFed’s project goals, and maybe we can take some inspiration from your ideas.

        • auk@slrpnk.net (OP) · 24 days ago
          A tool that detects unreasonable people and is effective at combating them is exactly the kind of thing a whole lot of unreasonable people really don’t like, and they’re being really unreasonable in how they approach the conversation. Go figure.

          It wouldn’t be hard to make it work on PieFed. A first step, having it load up the voting flow patterns and make its judgements, would be very easy: it just needs a PieFed version of db.py, which would take 10-20 minutes to write. Is that something you’re interested in me working up? If I did that, it would be pretty simple for someone to get it working on PieFed: just fill in .env and run the script. Then you’d have to fire up the interpreter, unpickle user_ranks.pkl, and start poking around in there, but I could give you some guidance.
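
          For the “fire up the interpreter” step, a session might look something like the sketch below. It assumes user_ranks.pkl holds a plain dict mapping user identifiers to numeric rank scores; the actual structure may differ.

          ```python
          # Interpreter-session sketch. Assumes user_ranks.pkl is a dict mapping
          # user identifiers to numeric rank scores; the real structure may differ.
          import pickle

          with open("user_ranks.pkl", "rb") as f:
              user_ranks = pickle.load(f)

          # Poke around: who has accumulated the most negative trust?
          for user, rank in sorted(user_ranks.items(), key=lambda kv: kv[1])[:10]:
              print(f"{rank:8.3f}  {user}")
          ```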

          That’s where I would start with it. Getting it to speak to the PieFed API to enact its judgements would be a separate thing, but checking it out first, to see what it thinks of your users and how easy it is to work with, is very simple.
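
          Enacting judgements through the API might eventually look something like the sketch below. The endpoint, payload, and environment variables are placeholders, not PieFed’s documented API; this only illustrates the shape of the integration.

          ```python
          # Placeholder sketch only: the endpoint, payload, and environment
          # variables are hypothetical, not PieFed's documented API.
          import os
          import requests

          def report_comment(comment_id: int, reason: str) -> None:
              resp = requests.post(
                  f"{os.environ['PIEFED_URL']}/api/comment/report",  # hypothetical path
                  headers={"Authorization": f"Bearer {os.environ['PIEFED_TOKEN']}"},
                  json={"comment_id": comment_id, "reason": reason},
                  timeout=10,
              )
              resp.raise_for_status()
          ```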

          I had this vague vision of augmenting Lemmy with a user-configurable jerk filter, which could screen the avowed jerks out of your view of the Lemmyverse regardless of whether the moderators are getting the job done. I think putting the control in the hands of the users instead of the mods and admins would be a nice thing. If you want to talk about that for PieFed, that sounds grand to me.
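
          The client-side logic for that could be as small as this sketch, assuming rank scores like the ones in user_ranks.pkl are available; the comment structure, field names, and default threshold are hypothetical.

          ```python
          # Client-side sketch of a user-configurable "jerk filter"; the comment
          # structure and field names are hypothetical.
          def filter_feed(comments, user_ranks, threshold=-0.5):
              """Hide comments whose author's rank falls below the reader's own
              threshold. Unranked authors are kept (benefit of the doubt)."""
              return [c for c in comments
                      if user_ranks.get(c["author"], 0.0) >= threshold]
          ```

          The point of the design is that the threshold belongs to the reader, not the mods: everyone sets their own tolerance, and nothing is deleted for anyone else.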