[Poynter’s IFCN] What makes misinformation persuasive on WhatsApp
We presented research on what makes misinformation persuasive on WhatsApp at Poynter’s Global Fact 7 conference in Oslo.
- Ritvvij Parrikh is a Knight Fellow at the International Center for Journalists and a Partner at PROTO.
- Nidhi Nair is a specialist at PROTO.
- Shubham Dwivedi is a doctoral candidate at the South Asian University and a Research Associate at PROTO.
In recent years, the world has seen a rapid escalation in the consequences of widespread misinformation, from its effects on individuals’ perceptions of medical and environmental issues to its aggregate effects on national election results. Misinformation now spreads at a scale beyond the ability of regulatory bodies to control it, because its channels range from public social media pages to private interpersonal communications. In response, solutions such as fact-checking, media literacy programs, and bots that identify malicious content and accounts have emerged, alongside studies based on network analysis, deception analysis, epidemic modeling, and process tracing.
While these solutions have been applied at scale in open networks, there has been little to no progress in addressing misinformation within closed networks built on person-to-person communication. Such networks are closed to evaluation because the spread of misinformation within them cannot be tracked. Given these constraints, it is important to break down how misinformation works at the individual level. A structured understanding of this kind would tell fact-checkers and other professionals which elements of misinformation are most effective and should therefore be tackled first.
This research is a step towards building that structure by shedding light on the composition of misinformation. We carried out the study in the context of the 2019 Indian general election, using data collected by operating a tip line within the private messaging application WhatsApp.
In this work, we present a model that lists the possible attributes of any piece of misinformation. It draws on David Marr’s work on the stages of perception and on Lewandowsky’s aggregation of the factors that shape a receiver’s version of the truth. The hypothesis is that the variables in the model act as layered gates, and that a piece of misinformation opens each gate to a varying degree. By cracking open the right combination of gates, a message becomes more likely to be forwarded by its receiver. We test this hypothesis by applying the model to data collected directly from the WhatsApp tip line during the Indian general election, as the sketch below illustrates.
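To make the layered-gates intuition concrete, here is a minimal, hypothetical sketch in Python. The gate names, weights, and threshold below are illustrative assumptions, not the attributes or coefficients of the actual model; the sketch only shows the general idea that a message opening several perceptual gates at once crosses a threshold and becomes more likely to be forwarded.

```python
from dataclasses import dataclass

# Hypothetical perceptual "gates" a message may open to some degree.
# The real model's attribute list (drawn from Marr and Lewandowsky) differs.
@dataclass
class Message:
    emotional_appeal: float    # 0..1, how strongly it provokes emotion
    source_credibility: float  # 0..1, perceived trust in the sender
    worldview_fit: float       # 0..1, agreement with the receiver's beliefs
    coherence: float           # 0..1, how plausible the story feels

# Illustrative weights for how much each open gate contributes to the
# decision to forward. These numbers are assumptions, not findings.
WEIGHTS = {
    "emotional_appeal": 0.35,
    "source_credibility": 0.20,
    "worldview_fit": 0.30,
    "coherence": 0.15,
}

def forwarding_score(msg: Message) -> float:
    """Weighted sum over gates, each opened to a degree in [0, 1]."""
    return sum(w * getattr(msg, gate) for gate, w in WEIGHTS.items())

def likely_forwarded(msg: Message, threshold: float = 0.6) -> bool:
    """A message that cracks open enough gates crosses the threshold."""
    return forwarding_score(msg) >= threshold

# Example: a message strong on emotion and worldview fit can clear the
# threshold even when its source credibility is weak.
rumor = Message(emotional_appeal=0.9, source_credibility=0.3,
                worldview_fit=0.8, coherence=0.6)
print(forwarding_score(rumor), likely_forwarded(rumor))  # 0.705 True
```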
This research is part of Checkpoint, a project commissioned and assisted by WhatsApp that began in 2019 as a four-month verification tip line meant to collect the data for this study.