Getting a bot to spam out 12 posts in a minute is not the way to make me want to engage.

  • breadsmasher@lemmy.world · +51/-1 · 10 months ago

    But when anyone can run an instance, you can’t control it. Someone can run an instance that allows them to make as many posts as they want, and all that content is then federated out to connected servers.

    • Azzu@lemm.ee · +15 · 10 months ago

      Really though? You can implement the same limits for federated posts and just drop the ones exceeding the rate limit. It might be frustrating for normal users who genuinely exceed the limits, because their posts won’t be seen by everyone and they get no notice, but if the limits are sane that case should be rare.
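The drop-on-excess idea above can be sketched as a per-actor sliding-window limiter. This is a minimal illustration, not code from any real Lemmy/ActivityPub implementation; all names (`RateLimiter`, `allow`) are invented:

```python
# Hypothetical sketch: drop incoming federated posts from an actor once
# they exceed a per-actor rate limit within a sliding time window.
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_posts=3, window_seconds=60):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = defaultdict(deque)  # actor -> timestamps of recent posts

    def allow(self, actor, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[actor]
        # evict timestamps that have fallen out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_posts:
            return False  # over the limit: drop (or defer) this post
        q.append(now)
        return True
```

A receiving server would call `allow(actor)` for each inbound federated post and silently discard the ones that return `False`, which is exactly the "no notice" downside the comment mentions.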

      The notice could still be implemented though. I don’t know exactly how federation works, but when a federated post is sent/retrieved, the receiving server could also communicate that it has been rejected. The user’s local server could then inform them that their content was rejected by other servers.
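ActivityStreams 2.0 does define a `Reject` activity type, so one speculative way to send the notice described above would be a `Reject` wrapping the dropped activity. Whether any implementation would actually deliver one for a rate-limited post is an assumption, not established behaviour:

```python
# Speculative sketch: a rejection notice shaped as an ActivityStreams
# "Reject" activity. AS2 defines the Reject type, but using it for
# rate-limited posts is an assumption, not real Lemmy behaviour.
def build_reject(rejecting_actor, original_activity, reason):
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Reject",
        "actor": rejecting_actor,      # the rejecting instance's actor URI
        "object": original_activity,   # the activity being refused
        "summary": reason,             # e.g. "rate limit exceeded"
    }
```

The user's home server, on receiving such a `Reject` addressed to it, could surface the `summary` to the author.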

      There are solutions for a lot of things; it just takes time to think about and implement them, and that time is incredibly limited.

      • nicetriangle@kbin.social · +8 · 10 months ago

        Even a “normal” user needs to chill out a bit when they start reliably hitting a (for example) 3-post-a-minute threshold.

      • breadsmasher@lemmy.world · +1 · 10 months ago

        Not to suggest it isn’t a problem that needs to be solved. But from my understanding of the ActivityPub protocol, there is no way to control content federation on a per-message basis, only by allowing or blocking instances as a whole.
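The coarse, instance-level control described above amounts to a single lever: accept or reject everything from a host. A toy illustration (names and structure are invented for this sketch, not taken from any real server):

```python
# Illustrative only: federation control at instance granularity.
# The only decision available is per-host, never per-message.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"spam.example"}

def accept_activity(activity):
    actor = activity.get("actor", "")
    host = urlparse(actor).hostname
    # accept or drop *everything* from this host; no finer option
    return host not in BLOCKED_INSTANCES
```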

    • HeartyBeast@kbin.social · +5/-1 · 10 months ago

      It’s an interesting problem to be sure. It feels like it should be possible for servers to automagically detect spam on incoming federated feeds and decline to accept spam posts.

      Maybe an _actual_ useful application of LLMs.
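Even without an LLM, the simplest automated screen on incoming federated posts is a duplicate/burst heuristic, since bot spam tends to repeat near-identical bodies. A crude stand-in sketch (all names hypothetical, not from any real moderation tool):

```python
# Illustrative stand-in for automated spam screening: flag a post when
# near-identical bodies keep arriving. Not an LLM, just a toy heuristic.
import hashlib
from collections import Counter

class SpamScreen:
    def __init__(self, dup_threshold=3):
        self.dup_threshold = dup_threshold
        self.seen = Counter()  # normalized content-hash -> occurrences

    def is_spam(self, body):
        # normalize whitespace and case so trivial variations still match
        digest = hashlib.sha256(body.strip().lower().encode()).hexdigest()
        self.seen[digest] += 1
        return self.seen[digest] >= self.dup_threshold
```

A real deployment would combine something like this with account age, posting rate, and link reputation, and, as the reply below notes, an appeal path for false positives.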

      • Azzu@lemm.ee · +4 · edited · 10 months ago

        There are already plenty of tools that do this automatically; sadly they’re very often proprietary, paid-for services. You just have to have a way to appeal false positives, because there will always be some, and, depending on how aggressive the filter is, sometimes a lot.

      • originalucifer@moist.catsweat.com · +2/-1 · 10 months ago

        i look forward to an automated mechanism, like with image checking…

        that said, the existing tools aren’t all that terrible, even if it’s after the fact.

        ‘purge content’ does a pretty good job of dumping data from known bad actors, and then there’s being able to block users/instances.

        if everything were rate limited to some degree, we would catch these manually earlier and block before the rest of the content made its way over… maybe.

    • keefshape@lemmy.ca · +3/-2 · 10 months ago

      Perhaps there’s a case to be made for a federated minimum config. If servers don’t adhere to a minimum viable contract (say, meeting rate-limiting requirements, requiring 2FA, or other config-level things), they become defederated.

      A way of enforcing adherence to an agreed-upon minimum standard of behaviour, of sorts.
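The "minimum viable contract" above could be sketched as a baseline check against an instance's self-reported configuration. Every field name here is invented for illustration, and, as the reply below points out, self-reported values can be spoofed:

```python
# Hypothetical sketch of a federation "minimum viable contract":
# compare an instance's *self-reported* config against a baseline.
# Field names are invented; self-reports are trivially spoofable.
REQUIRED = {
    "rate_limit_posts_per_minute": lambda v: v is not None and v <= 3,
    "requires_2fa": lambda v: v is True,
}

def contract_violations(reported_config):
    """Return the requirements the instance fails (empty list = compliant)."""
    return [key for key, check in REQUIRED.items()
            if not check(reported_config.get(key))]
```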

      • Dran@lemmy.world · +4/-1 · 10 months ago

        It would be very easy to spoof those values in a handshake though, unless you’re proposing that in the initial data exchange a remote server gets a dump of every post and computationally verifies compliance.

        Federated trust is an unsolved problem in computer science because of how complex a problem it is.

        • keefshape@lemmy.ca · +4 · 10 months ago

          Spoofing that handshake would be a bad-faith action, one that would not go unnoticed longer term. Instances racking up bad-faith actions would make the case for defederating from them themselves.

          • Dran@lemmy.world · +1 · 10 months ago

            It just has to go unnoticed long enough to spam for a few days, get defederated, delete itself, and start over.