• frog 🐸@beehaw.org
    1 year ago

    It is true that removing and demonetising Nazi content wouldn’t make the problem of Nazis go away. It would just be moved to dark corners of the internet where the majority of people would never find it, and its presence on dodgy-looking websites combined with its absence on major platforms would contribute to a general sense that being a Nazi isn’t something that’s accepted in wider society. Even without entirely making the problem go away, the problem is substantially reduced when it isn’t normalised.

    • alyaza [they/she]@beehaw.orgM
      1 year ago

      the weirdest thing to me is these guys always ignore that banning the freaks worked on Reddit (which is stereotypically the most cringe techno-libertarian platform of the lot) without ruining the right to say goofy shit on the platform. they banned a bunch of the reactionary subs and, spoiler, issues with those communities have been much lessened since then, while still allowing people to say patently wild, unpopular shit

      • frog 🐸@beehaw.org
        1 year ago

        Yep! Reddit is still pretty awful in many respects (and I only even bother with it for specific communities for which I haven’t found a suitable active equivalent on Lemmy - more frogs and bugs on Lemmy please), but it did get notably less unpleasant when the majority of the truly terrible subs were banned. So it does make a difference.

        I feel like “don’t let perfect be the enemy of good” is apt when it comes to reactionaries and fascists. Completely eliminating hateful ideologies would be perfect, but limiting their reach is still good, and saying “removing their content doesn’t make the problem go away” makes it sound like any effort to limit the harm they do is rendered meaningless because the outcome is merely good rather than perfect.

      • Auzy@beehaw.org
        1 year ago

        They took way too long unfortunately, but totally agree. thedonald, femaledatingstrategy, and fatpeoplehate should have been banned a lot quicker.

        It feels like they’ve let it degrade again now, too. The last time I was on it, lots of subs had gone really toxic and weird.

      • jarfil@beehaw.org
        1 year ago

        I’d argue that it still broke Reddit.

        Back in the day, I might say something out of tone in some subreddit, get the comment flagged, discuss it with a mod, and either agree to edit it or get it removed. No problem.

        Then Reddit started banning reactionary subs; subs started using bots to ban people merely for commenting on other blacklisted subs; subs started abusing automod to ban people left and right; even quoting someone in order to criticize them started counting as using the same “forbidden words”; conversations with mods to clear things up pretty much disappeared; and retroactive application of the modern ToS to 10-year-old content became a thing… until I got permabanned from the whole site after trying to appeal a ban, with zero human interaction. Some months later, while already banned sitewide, I was also banned from some more subs.

        Recently Reddit revealed a “hidden karma” feature to let automod pre-moderate potentially disruptive users.

        Issues with the communities may have lessened, but there is definitely no longer the ability to say goofy, wild, or unpopular stuff… or in some cases, even to criticize them. There have also been an unknown number of “collateral damage” bans, which Reddit doesn’t care about anymore.

        • alyaza [they/she]@beehaw.orgM
          1 year ago

          imo if reddit couldn’t survive “purging literally its worst elements, which included some of the most vehement bigotry and abhorrent content outside of 4chan” it probably doesn’t deserve to survive

          • jarfil@beehaw.org
            1 year ago

            I see it as a cautionary tale about relying too much on automated mod tools to deal with an overwhelming userbase. People make mistakes, simple tools make more.

          • jarfil@beehaw.org
            1 year ago

            The only time I got banned for bigoted stuff was precisely for quoting someone’s n-word and calling them out on it. Automod didn’t care about the context, and no human did either. I also got banned for getting carried away and making a joke in a “no jokes” (zero tolerance) sub; several years of following the rules didn’t earn me even a second chance. Then there was the funny time when someone made me a mod of a something-CCP sub, and several other subs automatically banned me.

            There is a lot more going on on Reddit than meets the eye, and they like to keep it out of sight.

            • Vodulas [they/them]@beehaw.org
              1 year ago

              The only time I got banned for bigoted stuff, was precisely for quoting someone’s n-word and calling them out on it. Automod didn’t care about the context, no human did either.

              It sounds like the right call was made (as long as both you and the OP were banned). As a white person, there is no reason for you to use the n-word. In that situation, simply changing it to “n-word” is the very least that could have been done.

              I’m not really sure how that provides an example of stuff going on in the background that someone wants to keep out of sight.

              • jarfil@beehaw.org
                1 year ago

                The thing is, I did not “use” it, just quoted their whole message. In hindsight, maybe I should have changed it, but I still think it’s a flaw not to take context into account.

                It provides an example of context-less rules blindly applied by a machine, with no public accountability of what happened, much less of the now gone context.

                There are many better ways of handling those cases, like flagging the comment with a content warning, maybe replacing the offensive words, or locking it for moderation, instead of making everything disappear. I didn’t have half a chance to fix things; I had to use Reveddit just to guess what I might’ve done wrong.

                • Vodulas [they/them]@beehaw.org
                  1 year ago

                  The thing is, no context would have made it OK. You may have just been quoting someone, but you still used the word in the quote. Quotes are not some uneditable thing, so it was your choice to leave it in. Zero tolerance for hate means repeating the hateful thing is also not tolerated, and that, IMO, is a good thing and the perfect use of an auto-mod.

                  The other examples are a bit nebulous, and I have no doubt that communities on reddit have esoteric moderation guidelines, but this particular example seems pretty cut and dry.

                  • jarfil@beehaw.org
                    1 year ago

                    Quotes are not uneditable… but neither are comments.

                    Wouldn’t be the first time the parent comment got edited to make a reply look like nonsense, so I got used to quoting as a countermeasure. Then they unlocked comment editing even in 10-year-old “archived” posts 🤦 (BTW, the same applies to Lemmy: should I quote you? will you edit what you said?.. tomorrow, or in 10 years?.. maybe I’ll risk it, this time)

                    “Zero tolerance” becomes a problem when the system requires you to quote, but then some months or years later decides to change the rules and applies them retroactively. I still wouldn’t mind if they just flagged, hid, or removed the comment, it’s the “go on a treasure hunt to find out why you got banned” that I find insulting (kind of like the “wrong login”… /jk, you got banned. Wonder if it’s been fixed in Lemmy already, I know of some sites that haven’t for the last 15 years).

      • jasory@programming.dev
        1 year ago

        You’re literally on a platform that was created to harbor extremist groups. Look at who Dessalines is (aka u/parentis-shotgun) and their self-proclaimed motivation for writing LemmyNet. When you ban people from a website, they just move to another place; they are not stupid, and it’s pretty easy to create websites. It’s purely optical: you’re not saving civilisation from harmful ideas, just preventing yourself from seeing them.

        • alyaza [they/she]@beehaw.orgM
          1 year ago

          When you ban people from a website, they just move to another place; they are not stupid, and it’s pretty easy to create websites. It’s purely optical:

          you are literally describing an event that induces the sort of entropy we’re talking about here. necessarily, when you ban a community of Nazis or something and they have to go somewhere else, not everybody moves to the next place (and those people diffuse back into the general population), which has a deradicalizing effect on them overall, because they’re not just stewing in a cauldron of other people who reinforce their beliefs

          • jasory@programming.dev
            1 year ago

            “A deradicalising effect”

            I’m sorry what? The idea that smaller communities are somehow less radical is absurd.

            I think you are unaware (or, much more likely, willfully ignoring the fact) that communities are primarily dominated by a few active users, and simply viewed with varying degrees of support by non-engaging users.

            If they never valued communities enough to stay with them, then they never really cared about the cause to begin with. These aren’t the radicals you need to be concerned about.

            “And those people diffuse back into the general population”

            Because that doesn’t happen to a greater degree when exposed to the “general population” on the same website?

            • alyaza [they/she]@beehaw.orgM
              1 year ago

              I’m sorry what? The idea that smaller communities are somehow less radical is absurd.

              i’d like you to quote where i said this–and i’m just going to ignore everything else you say here until you do, because it’s not useful to have a discussion in which you completely misunderstand what i’m saying from the first sentence.

            • t3rmit3@beehaw.org
              1 year ago

              The deradicalizing effect occurs in the people who do not follow the fringe group to a new platform. Many people lurk on Reddit who will see extremist content there and be influenced by it, but who do not align with the group posting it directly, and will not seek them out after their subreddit is banned.

              • jasory@programming.dev
                1 year ago

                Sure but what degree of influence is actually “radicalising” or a point of concern?

                We like to pretend that by banning extreme communities we are saving civilisation from them. But the fact is that extreme groups are already rejected by society. If your ideas are not actually somewhat adjacent to already held beliefs, you can’t just force people to accept them.

                I think a good example of this was the “fall” of Richard Spencer. All the leftist communities (in which I was semi-active at the time) credited his decline to the punch he received, apparently assuming that the act of punching itself caused his decline, and used it to justify more violent actions. The reality is that Spencer just had a clique of friends that the left (and Spencer himself) interpreted as wide support, and when he was punched the greater public didn’t care, because they had never cared about him.

          • jarfil@beehaw.org
            1 year ago

            deradicalizing effect on them overall because they’re not just stewing in a cauldron of other people who reinforce their beliefs

            Whom are we talking about here, the ones who get kicked out and seek each other in a more concentrated form, or the ones who are left behind without the radicalizing agents?

            I don’t want to have to deal with Nazis, or several other sects, but I don’t think forcing them into a smaller echo chamber is helping either.

            Ideally, I think a social platform should lure radicalizing agents, then expose them to de-radicalizing ones, without exposing everyone else. Might be a hard task to achieve, but worth it.

            • Zworf@beehaw.org
              1 year ago

              Ideally, I think a social platform should lure radicalizing agents, then expose them to de-radicalizing ones, without exposing everyone else. Might be a hard task to achieve, but worth it.

              You really think this works? I don’t. I just see them souring the atmosphere for everyone and attracting more mainstream users to their views.

              We’ve seen in Holland how this worked out. The nazi party leader (who chanted “Less Moroccans”) won the elections by a landslide a month ago. There is a real danger of disenchanted mainstreamers being attracted to nazi propaganda in droves. We’re stuck with them now for 4 years (unless they manage to collapse on their own, which I do hope).

              • jarfil@beehaw.org
                1 year ago

                No, that’s why I said “Ideally”, meaning it as a goal.

                I don’t think we have the means to do it yet, or at least I don’t know of any platform working like that, but I have some ideas of how some of it could be done. Back in the days of Digg, some of us spitballed ideas for social networks, among them a movie-ranking one (which turned out to be a flop, because different people would categorize films differently) and a kind of PageRank for social networks, which back then was computationally impractical.

                But with modern LLMs running trillions of parameters, and further hardware advances, even O(n²) with n in the millions becomes feasible in real time, and in practice it wouldn’t need to do nearly that much work. With the right tuning and dynamic message visibility, I think something like that could create the exact echo chambers that would attract X people, let in de-X people, and keep everyone else out and unbothered.
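                The naive version of that PageRank idea really is only a few lines; here is a hypothetical toy sketch (the graph and the names are made up for illustration, not anything a real platform ships):

```python
# Hypothetical sketch of a "PageRank for social networks": plain power
# iteration over a follow graph. Names and toy data are illustrative only.

def pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping each node to the list of nodes it links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a baseline share, then receives a damped
        # share of the rank of each node that links to it.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # Dangling node: redistribute its rank evenly.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy follow graph: a, b, and d all point at c, so c ranks highest.
follows = {"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = pagerank(follows)
print(max(scores, key=scores.get))  # prints "c"
```

                Each iteration of this naive version touches every edge, so a fully dense graph is where the O(n²)-per-pass cost mentioned above comes from; real follow graphs are sparse, which is why in practice it wouldn’t need nearly that much work.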

                Of course there is a dark side, in that a platform could use the same strategy to mold the opinion of any group… and I wouldn’t be surprised to learn that Meta had been doing exactly that.