• Katana314@lemmy.world · +34 / -2 · 1 year ago

    As long as it’s just flagging voice clips for review by a moderator of some kind, that sounds fine to me. I’ve been wanting more games to find new ways of enforcing moderation - maybe clean up the communities a bit so that whole demographics aren’t afraid to engage.

    • Sabata11792@kbin.social · +12 / -1 · 1 year ago

      I’m sure they will at first. Knowing Activision and Blizzard’s history with moderation, once it reaches slightly passable, they’re going to have an unmonitored ban-hammer machine they treat as the final verdict.

      • Katana314@lemmy.world · +2 / -4 · 1 year ago

        I have a dozen reasons to hate Activision, but I hadn’t heard anything in their history about indiscriminate banning. Care to share?

        Honestly, given their corporate culture, I thought they would’ve leaned towards being permissive of toxic gamer culture.

        • Sabata11792@kbin.social · +4 · 1 year ago

          Warcraft has a few stories of stupid or no-context bans. Can’t say for sure if they are all justified, but it’s clearly a system of guilty until proven innocent, and getting human intervention is difficult. I’ve seen stories on Reddit where someone has screenshots proving their innocence and can’t get past copy-pasted or bot replies without raising a shitstorm on Twitter.

          I’m not as in tune with the Activision side since I haven’t played CoD recently. They are 100% going to half-ass the system and then fire off a bunch of human mods.

    • Carighan Maconar@lemmy.world · +4 / -1 · 1 year ago

      And it’s a really interesting use case, in fact: pre-filter a lot of input somewhat better than other tools could, and much faster than the human reviewers who then do the actual review of the pre-filtered samples.
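      Roughly what that “AI pre-filters, humans review” flow could look like, as a toy Python sketch (the clip fields, the threshold and the scoring are all made up for illustration; this is not ToxMod’s or Activision’s actual system):

      ```python
      # Toy sketch of "AI pre-filters, humans review". All names and values
      # here are invented for illustration, not Activision's real pipeline.
      from dataclasses import dataclass

      @dataclass
      class VoiceClip:
          player_id: str
          transcript: str        # assume speech-to-text already ran
          toxicity_score: float  # 0.0 (clean) .. 1.0 (very likely toxic)

      REVIEW_THRESHOLD = 0.8     # arbitrary example value

      def prefilter(clips: list[VoiceClip]) -> list[VoiceClip]:
          """Keep only the clips the model thinks are worth a human's time."""
          return [c for c in clips if c.toxicity_score >= REVIEW_THRESHOLD]

      clips = [
          VoiceClip("player_1", "nice shot", 0.05),
          VoiceClip("player_2", "slur-laden rant", 0.97),
      ]
      for clip in prefilter(clips):
          # In this flow the flagged clip would land in a moderator's review
          # queue rather than trigger an automatic ban.
          print(f"flag for human review: {clip.player_id} ({clip.toxicity_score:.2f})")
      ```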

    • sirfancy@lemmy.world · +2 / -2 · edited · 1 year ago

      Yeah, this is what it does; it’s essentially just another player sitting in a game, listening, and reporting players. More games are adding ToxMod and I’m here for it. It’s funny when people get mad and review-bomb games for adding it, calling it “spyware” because they’re mad they can’t say the n-word anymore.

    • JadenSmith@sh.itjust.works · +1 / -2 · 1 year ago

      Yee, the moderation thing seems good to me, since a lot of the time when I get killed I just instinctively shout “AHH, dickhead!!!”.

  • colonial@lemmy.world · +20 · 1 year ago

    I’m sure that any flagged snippets will be submitted to a human for final review. They definitely won’t just auto-ban-hammer innocent people because the AI misinterpreted something they said!

    Sigh.

  • Yepthatsme@kbin.social · +15 / -4 · 1 year ago

    CoD has a Nazi problem. They literally set up honeypots and groups in the game.

    For example, 2-3 people who pretend not to know each other but somehow play a bit too well together. Then you get a random invite from someone you’ve never played with who has 88 in their name, followed by a group invite with a bunch of people whose names make it clear they’re Nazi lovers.

    It’s not because of the shit talking. Every game has that.

    They’re trying to clean up the trash.

  • BURN@lemmy.world · +7 / -1 · 1 year ago

    Personally, I’m of the opinion that chat abuse is a core part of CoD, and without people yelling questionable things a lot of the fun goes away.

    That being said, the unfunny stuff tends to be racist and should be moderated; I just don’t know if that’s possible without killing CoD trash talk.

    • Th4tGuyII@kbin.social · +8 / -1 · 1 year ago

      CoD wouldn’t be CoD without some squeaker with a crappy mic telling me they fucked my Mum last night, or that their Dad works at Microsoft and will get me banned

      • Disgusted_Tadpole@lemmy.ml · +3 · 1 year ago

        Man, those were the times. And then 5 minutes later that same child wakes up his mother at 1:30 AM and gets his ass whooped to another dimension through voice chat.

      • BURN@lemmy.world · +2 · 1 year ago

        Exactly. As long as that kind of shit doesn’t get caught up in this we’re fine, but knowing these big companies they’re going to overtune it so much that a curse word will set it off

    • echo64@lemmy.world · +9 / -3 · 1 year ago

      i’m 100% okay with the cod trash talk era being over if it stops racists from being racist in video games.

  • Cloudless ☼ · +4 · 1 year ago

    So players are going to find loopholes to trick the AI and bypass the moderation.

    • higgs@lemmy.world · +7 / -2 · 1 year ago

      There will always be loopholes. The nice thing with AI is that it’s constantly learning and adapts to new situations very fast.

      • BURN@lemmy.world · +5 · 1 year ago

        So that’s not inherently true. AI (at least in this sense, where it’s actually machine learning) does not learn on the fly. It learns from base data and applies those findings until it’s retrained again.
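        For anyone curious, that train-once, apply-until-retrained cycle looks something like this generic scikit-learn toy example (the data and labels are invented; it has nothing to do with Activision’s actual model):

        ```python
        # Generic illustration of "train offline, then only apply what was learned".
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # 1. Offline training on a fixed, labelled dataset (toy data here).
        train_texts = ["nice shot", "good game", "kill yourself", "you worthless trash"]
        train_labels = [0, 0, 1, 1]  # 0 = fine, 1 = toxic

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(train_texts, train_labels)

        # 2. Deployment: the model only applies what it learned above. New slang
        # or novel dog-whistles won't be picked up until someone collects fresh
        # labelled data and retrains.
        print(model.predict(["good game", "kill yourself"]))
        ```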

        • higgs@lemmy.world · +1 / -1 · 1 year ago

          You’re correct, and that’s way more efficient than teaching dozens of people what to ban. People make mistakes; tech doesn’t (as long as it’s coded correctly).

          • BURN@lemmy.world · +2 · 1 year ago

            I reject that pretty majorly. Tech makes mistakes at a much higher rate than humans, even when built correctly. Tech just makes consistent mistakes instead.

            I don’t trust AI moderation of anything.

            • higgs@lemmy.world · +1 · 1 year ago

              Do you have an example of correctly built tech that makes constant mistakes?

              • BURN@lemmy.world · +1 · 1 year ago

                Pretty much any AI system.

                Photo AIs still have issues distinguishing between dogs and cats. Cancer-detection AIs analyzing X-rays were basing their decisions on the doctor who signed them.

                Boeing built a properly working autopilot system for the 737 MAX, but didn’t train pilots correctly, so pilots expected functionality similar to older versions, and that caused plane crashes. The software was 100% right, but it made mistakes because the human input was different than expected.

    • Katana314@lemmy.world · +2 · 1 year ago

      Whenever a law is invented to apply protections, someone always points out that a criminal mastermind can circumvent that protection.

      That often doesn’t matter, because intelligent people have no motivation to breach the protection, and less intelligent people fall into the trap. Even with some circumvention, it can catch a large number of bad actors.

      It’s like saying “Fishing won’t work because fish will just learn to swim around nets”.

    • whiskers@lemmings.world · +7 / -2 · 1 year ago

      This sounds quite good to me. A lot of online games have chat abuse that goes unchecked. If their AI is any good, it should help.