• sugar_in_your_tea@sh.itjust.works · 11 months ago

    feel bad

    That’s not the point at all though. The point is that it hides good content that a motivated group wants to silence. We had precisely this problem earlier in Lemmy’s history, where posts critical of China were heavily downvoted, not because of quality, but because the group didn’t like the message.

    Requiring a comment gives context to the negative reaction. It’s not a silver bullet, but it should increase the barrier to hiding content, hopefully enough that good, controversial content stays visible.

    I’m actually working on a Lemmy alternative that uses a web of trust instead of votes to prioritize and moderate content. Reddit has shown the limitations of voting, and I’m more interested in interesting content than content the majority likes.
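
    To make that concrete, here’s a minimal sketch of how trust-weighted ranking could work; the trust values, decay factor, and names are all hypothetical illustrations, not the actual design:

    ```python
    # Hypothetical web-of-trust scoring: my effective trust in an author
    # is my direct trust, or my trust in a friend times their trust in
    # the author (one hop, decayed). All values are in [0, 1].
    direct_trust = {
        ("me", "alice"): 0.9,
        ("alice", "bob"): 0.8,
    }

    DECAY = 0.5  # assumed penalty per hop of indirection

    def effective_trust(viewer: str, author: str) -> float:
        if (viewer, author) in direct_trust:
            return direct_trust[(viewer, author)]
        # otherwise, take the best one-hop path through someone
        # the viewer trusts directly
        return max(
            (t * direct_trust[(friend, author)] * DECAY
             for (v, friend), t in direct_trust.items()
             if v == viewer and (friend, author) in direct_trust),
            default=0.0,
        )

    # Content would then be ranked by the viewer's trust in its author,
    # instead of by global vote totals.
    print(effective_trust("me", "bob"))  # 0.9 * 0.8 * 0.5 = 0.36
    ```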

    • wildginger@lemmy.myserv.one · 11 months ago

      Comments do the same thing by drowning the one opinion in a sea of alternate opinions, and they directly incentivize interaction only from people with the time to type up a comment. You aren’t preventing brigades, you’re just reducing the number of users capable of attempting one at all.

      Especially since your version of brigading is literally how communities work. If the group doesn’t agree with an opinion, even an opinion you do agree with, that opinion is going to be drowned out. You cannot police “opinion quality,” because such a subjective thing is good when it agrees with you and bad when it doesn’t.

      Good luck, and I look forward to seeing it, but to be frank it sounds like you want to build a personal group chat, not a social media site. And like any web of trust, it relies on the integrity of the central member, which isn’t a defense against brigading, just a defense against brigading that doesn’t come from the central member or their points of trust.

      E: mind, not that there’s anything wrong with crafting your own supported super chat. Just that it’s less social media and more a hyper-evolved chat among friends and friends-of-friends.

      • sugar_in_your_tea@sh.itjust.works · 11 months ago

        Comments do the same thing

        Maybe at a very high level, but comments have the very obvious advantage that they give moderators something concrete to act on. Lemmy does have open voting logs, but I highly doubt any decent moderator would feel comfortable banning people based purely on how they vote, and they’d only actually look if there was an obvious problem (e.g. deciding whether to block an entire instance).

        directly incentivize interaction only from people with the time to type up a comment

        This only applies to negative interactions; you would always be able to upvote a post.

        I think there’s an argument for putting the voting buttons inside the comment thread, so users can’t just drive-by vote without at least opening the comments, much less the linked content, but that’s not what I’m arguing for.

        You cannot police opinion quality

        You’re absolutely right, but you can increase the effort needed to downvote something. A downvote tends to have more weight than an upvote, so it should require more effort as well (e.g. a post with 8 upvotes and 0 downvotes would probably be ranked higher than one with 20 upvotes and 12 downvotes).
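
        As a sketch of that weighting (the 2× downvote weight is just an assumption for illustration, not a fixed design choice):

        ```python
        # Hypothetical ranking where a downvote costs more than an upvote earns.
        DOWNVOTE_WEIGHT = 2  # assumed multiplier, purely illustrative

        def score(upvotes: int, downvotes: int) -> int:
            return upvotes - DOWNVOTE_WEIGHT * downvotes

        print(score(8, 0))    # 8
        print(score(20, 12))  # -4, so the 8/0 post ranks higher
        ```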

        it sounds like you want to build a personal group chat, not a social media site

        No, I definitely want a social media site; I just want everything distributed, including moderation.

        Basically, I want something like BitTorrent, but for social media instead of files. That way there’s no central authority for much of anything, so moderation pretty much has to be opt-in (otherwise you’d pick a different client with different moderation).

        Ideally, you’d select a moderation team that would filter out bad stuff like CSAM, but not filter out high quality content that you simply disagree with. So you’d pick a diverse set of content moderators to trust, and content would only get filtered out if a certain number of them flagged it, as in the sketch below. You could use the tools to create an echo chamber for yourself, or you could use them to expose yourself to diverse, high quality content that may challenge your beliefs (my personal preference).
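
        A minimal sketch of that opt-in quorum idea; the moderator names and the threshold are hypothetical, not the actual protocol:

        ```python
        # Hypothetical client-side filtering: content is hidden only when
        # enough of the moderators *this user* chose have flagged it.
        my_moderators = {"mod_a", "mod_b", "mod_c"}  # chosen by the user
        QUORUM = 2  # assumed threshold, purely illustrative

        # flags observed on the network: content_id -> accounts that flagged it
        flags = {
            "post-1": {"mod_a"},                    # one trusted flag: visible
            "post-2": {"mod_a", "mod_c", "rando"},  # two trusted flags: hidden
        }

        def is_hidden(content_id: str) -> bool:
            trusted_flags = flags.get(content_id, set()) & my_moderators
            return len(trusted_flags) >= QUORUM

        print(is_hidden("post-1"))  # False
        print(is_hidden("post-2"))  # True
        ```

        Since the quorum is computed per user against their own moderator set, no single flagger can hide content network-wide.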

        That said, things tend to work differently in practice. At the very least, I’m not going to release it until I have a way for users to review the quality of the moderators they pick.