The organization documented and reviewed more than a thousand reported instances of Meta removing content and suspending or permanently banning accounts on Facebook and Instagram.
Does 1,000 seem small for an intentional, global censorship campaign? That seems very small to me, like a rounding error on a day's worth of reported posts.
What percentage of Facebook users would document their content and report its removal to HRW? 1,000 people reporting to HRW because their comments got removed from Facebook seems funny. I certainly wouldn't think to report technology@lemmy.world's mods to a human rights organization if they removed this comment or banned me for posting something pro-Palestine in another community.
Most of this entire report is patently ridiculous. They asked people who follow HRW's social media to send in instances of censorship on social media, got about 1,500 random examples from a self-selecting population, then published a big exposé about it.
There's no rigorous comparative analysis (statistical or otherwise) against other topics, other viewpoints, or other periods in the past. They allege, for example, that some people didn't have the option to request a review of the takedown: is that standard policy? Does it happen in other cases? Is it a bug? They don't seem to want to look into it further; they just allude to some nebulous sense of wrongdoing and move on to the next assertion. Rinse and repeat.
The one part of the report actually grounded in reality (and a discussion that should be had) is how to handle content that runs afoul of standards against positive portrayal of terrorist organizations with political wings, like the PFLP and Hamas. It's an interesting challenge to decide where to draw the line on what to allow, but cherry-picking a couple thousand taken-down posts doesn't make that discussion any more productive.
Indeed. It would be interesting to run the same analysis on censorship of pro-Israel content and compare the differences between the two, though the data would likely still be noisy and inconclusive.
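For what it's worth, the shape of that comparison isn't complicated; the problem is getting the data. Here's a minimal sketch of the kind of test you could run if you somehow had comparable random samples of posts on each topic and the takedown counts for each. Every number and function name below is invented for illustration, since nobody outside Meta has those figures:

```python
# Two-proportion z-test: given samples of posts on two topics and the number
# taken down in each sample, is the gap in takedown rates bigger than chance?
# All counts below are invented placeholders, not real moderation data.
from math import sqrt, erf

def takedown_rate_gap(removed_a, total_a, removed_b, total_b):
    """Return (z statistic, two-sided p-value) for the difference in removal rates."""
    rate_a = removed_a / total_a
    rate_b = removed_b / total_b
    # Pooled rate under the null hypothesis that both topics are moderated alike.
    pooled = (removed_a + removed_b) / (total_a + total_b)
    stderr = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_a - rate_b) / stderr
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return z, 2 * (1 - normal_cdf(abs(z)))

# Hypothetical: 120 of 4,000 sampled posts on topic A removed vs. 85 of 4,200 on topic B.
z, p = takedown_rate_gap(120, 4_000, 85, 4_200)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Even then you'd only learn that the rates differ, not why, and a self-selected pool of reports like HRW's can't feed a test like this at all, which is exactly the gap in the report.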
It’s not enough to prove a pattern of behavior, but it’s enough to call out as a disturbing trend.
Is it? We’d need to know a lot more about how often this happens to other random groups to determine that.
Those are just the documented ones. They don't exactly have access to Meta's modlogs.