I’m sure this is a common topic, but the pace of change is pretty fast these days.

With bots looking more human than ever, I’m wondering what’s going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real means scrolling through all their comments and reading them carefully one by one.

  • shagie@programming.dev · 1 year ago

    On Usenet, spam (bots weren’t so much a thing, but spammers were), once found, found its way into cancel messages rather promptly. The Breidbart Index was created to measure the severity of spam, and news hosts relied on trusted organizations to issue cancels that removed the spam messages from their feeds. This is still widely used today; you can see it if you compare the offered vs. accepted article counts on current Usenet feeds.
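(For the curious, the Breidbart Index is simple enough to sketch in a few lines of Python; the function name and threshold comment are mine, but the formula is the standard one: sum, over each copy of a spam, the square root of the number of newsgroups that copy was crossposted to.)

```python
import math

def breidbart_index(crosspost_counts):
    """Breidbart Index (BI) for a set of substantively identical posts.

    crosspost_counts: one entry per copy of the spam, each entry being
    the number of newsgroups that copy was posted to.
    BI = sum of sqrt(n) over all copies; a common cancel threshold was BI >= 20.
    """
    return sum(math.sqrt(n) for n in crosspost_counts)

# Example: two copies, crossposted to 9 and 16 groups -> BI = 3 + 4 = 7
print(breidbart_index([9, 16]))  # 7.0
```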

    Lemmy was designed with an anti-censorship goal, which makes identifying and deleting spam from others more difficult. To the best of my understanding of how Lemmy implements ActivityPub (and ActivityPub itself has a bit of this too), there is no way to delete a message except by individual action of the moderators of a /c/ or the server admins. That is, if someone were to set up a dropship-spam-finder which federated with Lemmy servers and then published Delete messages… they would fail.

    https://www.w3.org/wiki/ActivityPub/Primer/Delete_activity

    Here are some important checks that implementers should make when they receive a Delete activity:

    Check that the object’s creator is the same as the actor for the Delete activity. This could be stored in a number of ways; the attributedTo property may be used for this check.
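That check is why the third-party spam-finder fails. A minimal sketch of it in Python (names and data shapes are illustrative, not Lemmy’s actual code): a Delete is honored only when its actor matches the creator recorded on the locally stored object via attributedTo.

```python
def should_honor_delete(delete_activity, stored_objects):
    """Honor a Delete only if its actor is the object's creator.

    stored_objects: dict mapping object id -> locally stored object
    (each with an "attributedTo" field), a stand-in for the database.
    """
    obj = delete_activity.get("object")
    obj_id = obj["id"] if isinstance(obj, dict) else obj
    stored = stored_objects.get(obj_id)
    if stored is None:
        return False  # nothing stored locally to delete
    return stored.get("attributedTo") == delete_activity.get("actor")

objects = {"https://example.org/note/1":
           {"attributedTo": "https://example.org/users/alice"}}
delete_by_author = {"type": "Delete",
                    "actor": "https://example.org/users/alice",
                    "object": "https://example.org/note/1"}
delete_by_spam_finder = {"type": "Delete",
                         "actor": "https://spamfinder.example/bot",
                         "object": "https://example.org/note/1"}
print(should_honor_delete(delete_by_author, objects))       # True
print(should_honor_delete(delete_by_spam_finder, objects))  # False
```

A well-behaved implementation drops the second Delete on the floor, no matter how good the spam-finder’s judgment is.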

    This puts the burden of dealing with spam on the moderators of a /c/ and the server admins: deleting posts individually, blocking users, and possibly defederating from sites. It may be useful in time to have some additional functionality that one could federate with for trusted Delete activity messages, which would identify spammers and delete their messages from your instance… but that’s not something available today.
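Such a mechanism might look something like the following sketch. This is purely hypothetical, not part of Lemmy or ActivityPub today, and the service URL and names are invented: the instance keeps an allow-list of trusted cancel services whose Deletes it honors in addition to the creator’s own.

```python
# Hypothetical extension: besides the creator check required by the
# ActivityPub primer, honor Deletes from an explicit allow-list of
# trusted cancel services (this URL is invented for illustration).
TRUSTED_CANCEL_ACTORS = {"https://cancel.example/actor"}

def should_honor_delete_extended(delete_activity, stored_objects,
                                 trusted=TRUSTED_CANCEL_ACTORS):
    obj = delete_activity.get("object")
    obj_id = obj["id"] if isinstance(obj, dict) else obj
    stored = stored_objects.get(obj_id)
    if stored is None:
        return False  # nothing stored locally to delete
    actor = delete_activity.get("actor")
    # The creator may always delete; a trusted cancel service may too.
    return actor == stored.get("attributedTo") or actor in trusted

objects = {"https://example.org/note/1":
           {"attributedTo": "https://example.org/users/alice"}}
cancel = {"type": "Delete", "actor": "https://cancel.example/actor",
          "object": "https://example.org/note/1"}
print(should_honor_delete_extended(cancel, objects))  # True
```

Each instance would opt in by choosing which cancel actors to trust, much like news hosts chose which cancel issuers to honor on Usenet.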

    • zer0@thelemmy.club (OP) · 1 year ago

      Could something like this be implemented as a filter you can turn on and off, like the NSFW one?

      • shagie@programming.dev · 1 year ago

        I’m not going to say “no”, but NSFW filtering is done by a user-supplied flag on an item.

        There is work being done to add an automod (https://github.com/LemmyNet/lemmy/issues/3281), but that’s different from the cancel-bot approach that Usenet uses.

        Not saying it’s impossible, just that the structure seems to be replicating Reddit’s functionality (which isn’t federated) rather than Usenet’s (which is federated)… and trying to replicate the solution that works for Reddit may work at the individual-sub level, but it wouldn’t work at the network level (compare: when spammers are identified on Reddit, their posts are removed across the entire system).

        The Usenet cancel system is federated spam blocking (and according to spammers of old, Lumber Cartel censorship).

        • zer0@thelemmy.club (OP) · 1 year ago

          Ahaha, the Lumber Cartel thing is pretty funny. Anyway, let me ask you, shagie: from Usenet, what do you think went wrong that led us to the centralized services we have now? How do we avoid making the same mistake again?