Here’s a brief summary, although you miss something if you don’t read the study (trigger warning: stats):
- The researchers suggest a novel incentive structure that significantly reduced the spread of misinformation and provide insights into the cognitive mechanisms that make it work. This structure can be adopted by social media platforms at no cost.
- The key was to offer reaction buttons that participants were likely to use in a way that discerned between true and false information. Users who found themselves in such an environment shared more true than false posts.
- In particular, the researchers used ‘trust’ and ‘distrust’ reaction buttons, which, in contrast to ‘likes’ and ‘dislikes’, are by definition associated with veracity. For example, the study authors say, a person may dislike a post about Joe Biden winning the US presidential election, but this does not necessarily mean that they think it is untrue.
- Study participants used ‘distrust’ and ‘trust’ reaction buttons in a more discerning manner than ‘dislike’ and ‘like’ buttons. This created an environment in which the number of social rewards and punishments, in the form of clicks, was strongly associated with the veracity of the information shared.
- The findings also held across a wide range of topics (e.g., politics, health, science) and a diverse sample of participants, suggesting that the intervention is not limited to a specific set of topics or users but instead relies more broadly on the underlying mechanism of associating veracity with social rewards.
- The researchers conclude that the new structure reduces the spread of misinformation and may help correct false beliefs. It does so without drastically diverging from the existing incentive structure of social media networks, since it still relies on user engagement. This intervention may therefore be a powerful addition to existing interventions such as educating users on how to detect misinformation.
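For concreteness, here is a minimal sketch of what such a reaction model might look like on a platform. The type and field names are my own illustration, not something taken from the study:

```typescript
// Hypothetical reaction model: 'trust'/'distrust' live alongside 'like'/'dislike',
// so ordinary engagement clicks double as a veracity signal.
type Reaction = "like" | "dislike" | "trust" | "distrust";

interface Post {
  id: string;
  counts: Record<Reaction, number>;
}

// Share of veracity-related clicks that were 'trust'. Whether and how a platform
// would surface or rank on this number is a design choice, not the study's claim.
function trustRatio(post: Post): number | null {
  const { trust, distrust } = post.counts;
  const total = trust + distrust;
  return total === 0 ? null : trust / total;
}

const example: Post = {
  id: "p1",
  counts: { like: 120, dislike: 30, trust: 80, distrust: 10 },
};
console.log(trustRatio(example)); // ~0.89
```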
Slashdot had this covered years ago, literally decades.
- Upvotes limited to +5.
- Votes categorized: funny, informative, insightful, etc.
- Number of votes limited per time frame and by user karma.
- Meta-moderation: your votes (both up and down) were themselves subject to voting (correct/incorrect); a good score == more upvotes to spend.
It’s a pity that Reddit and other sites didn’t follow this model.
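For anyone who never used Slashdot, here is a rough sketch of that model; the numbers and names are my own approximation, not Slashdot's actual implementation:

```typescript
// Slashdot-style moderation, approximated: capped scores, categorized votes,
// rationed mod points, and meta-moderation feeding back into future points.
type ModCategory = "funny" | "informative" | "insightful" | "offtopic" | "troll";

interface Comment {
  id: string;
  score: number;        // clamped to [-1, +5]
  tags: ModCategory[];
}

interface Moderator {
  karma: number;
  modPoints: number;    // handed out in small batches per time frame
}

const MIN_SCORE = -1;
const MAX_SCORE = 5;

function moderate(c: Comment, m: Moderator, cat: ModCategory, delta: 1 | -1): void {
  if (m.modPoints <= 0) return;               // votes are rationed
  m.modPoints -= 1;
  c.score = Math.max(MIN_SCORE, Math.min(MAX_SCORE, c.score + delta));
  c.tags.push(cat);
}

// Meta-moderation: other users rate past moderations as fair or unfair;
// a good track record (and positive karma) earns more mod points next round.
// The threshold and batch size are illustrative, not Slashdot's real values.
function nextModPoints(fair: number, unfair: number, karma: number): number {
  const fairness = fair / Math.max(1, fair + unfair);
  return fairness >= 0.7 && karma > 0 ? 5 : 0;
}
```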
I’m really skeptical about this. I feel like it could make misinformation even worse. By letting users democratically label things as “true” or “false”, you’re encouraging users to rely on groupthink to decide what’s true, rather than encouraging users to think critically about everything they see. For example, if a user comes across a post that’s been voted as 90% true, they’ll probably be like “I don’t need to think critically about this because the community says it’s true, which means it must be true.”
I feel like the downvote button in particular should/could be multidimensional. People downvote content for multiple reasons: “this is incorrect”, “this is really dumb”, “this is off-topic”, “the poster is a jerk”, and so on.
IMO this would combo really well with the experimental study in the OP.
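A sketch of what a multidimensional downvote could look like, combined with the trust/distrust idea from the study; all of these names are hypothetical:

```typescript
// Separate *why* someone downvotes from *whether* they believe the post.
type DownvoteReason = "incorrect" | "low-effort" | "off-topic" | "abusive";

interface Feedback {
  postId: string;
  vote: "up" | "down";
  reason?: DownvoteReason;          // only set for downvotes
  veracity?: "trust" | "distrust";  // orthogonal to liking or disliking
}

// Only "this is incorrect" downvotes and explicit 'distrust' clicks would feed a
// misinformation signal; "really dumb" or "off-topic" downvotes would not.
function countsAsVeracitySignal(f: Feedback): boolean {
  return f.veracity === "distrust" || (f.vote === "down" && f.reason === "incorrect");
}
```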