Canadian Sikh Facebook users receive notifications that their posts are being taken down because they’re in violation of Indian law

  • bobman@unilem.org · 1 year ago

    ‘Guilty until proven innocent.’

    Glad corporations get the power to make these decisions.

    • Steeve@lemmy.ca · 1 year ago

      Well, they don’t, which is why they’re taking down posts as required by the countries they operate in and are willing to accept a noticeable false positive rate to do it.

      • bobman@unilem.org · 1 year ago

        What are you talking about?

        What requirement is there in India for Facebook to ban Canadians?

        • Steeve@lemmy.ca · 1 year ago

          and are willing to accept a noticeable false positive rate to do it.

          It’d probably help if you fully read the comments you’re replying to lol

          • bobman@unilem.org · 1 year ago

            So… guilty until proven innocent.

            Like I said. From the very beginning.

            • Steeve@lemmy.ca · 1 year ago (edited)

              Your first comment was incredibly vague… I was responding to this part:

              Glad corporations get the power to make these decisions.

              However, a high false positive rate is different from assuming every post is “guilty until proven innocent”, and the two aren’t mutually exclusive either. A current example would be the automated removal of CSAM on Lemmy: a model was built to remove CSAM, and it has a high rate of false positives. Does that mean it assumes everything is CSAM until it can confirm it isn’t? No. It could work that way, but that’s an implementation detail I don’t know the specifics of, and a high false positive rate doesn’t necessarily mean it does.
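
              To make the distinction concrete, here’s a minimal sketch of the two designs (in Python, with made-up names and a made-up threshold; this isn’t Lemmy’s or Facebook’s actual pipeline): default-allow with after-the-fact removal versus default-deny, where a post stays hidden until something clears it.

              ```python
              from dataclasses import dataclass
              from typing import Callable

              @dataclass
              class Post:
                  text: str
                  visible: bool = False

              # Hypothetical cutoff: act when the classifier is this confident.
              REMOVE_AT = 0.95

              def publish_then_moderate(post: Post, violation_score: Callable[[Post], float]) -> None:
                  """Default-allow: the post goes live immediately and is only
                  taken down afterwards if the model scores it as a likely
                  violation. False positives remove some innocent posts, but
                  nothing is presumed guilty up front."""
                  post.visible = True
                  if violation_score(post) >= REMOVE_AT:
                      post.visible = False  # removed after the fact

              def hold_until_cleared(post: Post, violation_score: Callable[[Post], float]) -> None:
                  """Default-deny ("guilty until proven innocent"): the post is
                  hidden from the moment it's submitted and only appears once
                  the model (or a human reviewer) clears it."""
                  if violation_score(post) < REMOVE_AT:
                      post.visible = True  # cleared and published
              ```

              Same classifier and same false positive rate in both cases; what differs is the default state of a post before anything has looked at it.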

              But really, who cares? The false positive rate matters for site usability, sure, but the rest is an implementation detail in an AI model; it isn’t a court of law. Nobody’s putting you in Facebook prison because they accidentally mistook your post for rule-breaking.