Divided, We Scroll: Social Cohesion in a Misinformation Age
Misinformation and Trust | Chanel Lai
It is damaging to society. It propagates itself, feeding on the information that is supplied by today’s digital age. It survives and thrives, eating away at the glue that maintains social cohesion, evolving as more people turn to social media to communicate. Misinformation and disinformation are the next viruses.
Introduction
Social cohesion is vital for communities to achieve their goals, whether the goal is economic growth or a high vaccination rate. While political, social, linguistic, and economic differences all challenge social cohesion, two increasingly important threats are disinformation and misinformation. Together, they erode trust within relationships and the togetherness of a population [1].
Social Media, Technology and Its Impact on Cohesion
Social cohesion has been around since the earliest records of social order and hierarchy [2]. It rests on two pillars: an objective component and a subjective one. The objective component consists of individuals’ actions towards a specific goal, while the subjective component is an individual’s willingness to trust another entity, that is, their perception of whether others can be relied on or trusted [1]. Trust depends on whether we believe someone will carry out a specific task or has good intentions. It can be categorised as interpersonal, social or institutional: interpersonal trust exists between you and someone you know, social trust between you and a stranger, and institutional trust between you and an institution such as a government [2].
Figure 1: Trust is a critical factor that influences social cohesion. Source: Unsplash
Both social media and technology can be beneficial for spreading news and research, but when exploited, they can breed distrust and destroy social cohesion. Social media is any digital platform that allows people to connect and share information, typically through share, like and repost buttons. The time we spend on social media shapes our relationships because of the interpersonal connections we feel behind a screen. With social media, networks can be enlarged or rewired regardless of geographical distance: networking lets us meet like-minded people and find groups we identify or affiliate with from kilometres away. This lack of geographical restriction allows all kinds of information to proliferate among different individuals. That can have positive outcomes, creating a sense of unity and oneness. However, these networks can also encourage extreme self-selection and polarisation, leading to differing information realities for different people [3].
Artificial intelligence has also accelerated the spread of information, greatly reducing the cost of generating and dispersing disinformation. AI can create personalised, well-crafted text, visual and audio material that targets specific populations, manipulating individuals into believing sources that are not necessarily factual [4]. Social media allows for maximum diffusion of information [3]. For some subjects, such as politics, this can be beneficial. But what happens when the information being spread is misinformation or disinformation?
Misinformation, Disinformation and the Dual Problem
Misinformation is the spread of incorrect information regardless of intent, whereas disinformation is wrong information spread deliberately to cause harm [4]. When we read something on a digital platform, our reliance on other people for information creates two learning problems: learning about the issue itself, and judging how trustworthy and reliable the source is. Failing to solve both can lead us to fall for misinformation or disinformation and, equally importantly, leave us uninformed about important issues. This is already evident in society for issues such as climate change and vaccination rates [5].
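This dual problem can be illustrated with a toy Bayesian calculation in the spirit of the account in [5] (the sketch below is not their model, and all numbers are invented): how much an assertion should shift our belief in a claim depends jointly on our prior about the claim and our prior about the source’s reliability.

```python
# Toy Bayesian sketch of the dual learning problem (inspired by the framing
# in [5], not their model). All numbers are invented for illustration.
p_claim = 0.5      # prior belief that the claim is true
p_reliable = 0.7   # prior belief that the source is reliable

# Assumption: a reliable source asserts a true claim 90% of the time and a
# false one 10% of the time; an unreliable source asserts either at chance.
p_assert_if_true = p_reliable * 0.9 + (1 - p_reliable) * 0.5
p_assert_if_false = p_reliable * 0.1 + (1 - p_reliable) * 0.5

# Bayes' rule, applied after the source asserts the claim:
posterior = (p_assert_if_true * p_claim) / (
    p_assert_if_true * p_claim + p_assert_if_false * (1 - p_claim)
)
print(f"belief in claim: {p_claim:.2f} -> {posterior:.2f}")  # 0.50 -> 0.78
```

Run the same calculation with a source we trust only 20% of the time and the belief rises to just 0.58: the less we can vouch for the source, the less any single post should move us.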
Several factors drive the spread of misinformation and disinformation, such as emotional appeal and confirmation bias. Algorithms play a significant role in what you see in your feed and can reinforce existing beliefs, producing what are coined “echo chambers” that allow false information to spread within groups sharing the same opinions. Emotional appeal also influences the spread: content that evokes strong emotional responses is more likely to be shared, increasing the chances that incorrect information travels further. And once people have shaped their beliefs by viewing information that confirms their bias, a tendency known as “confirmation bias,” false information that fits those beliefs spreads even more rapidly [6], [7].
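One way researchers capture these dynamics is to treat false information like a pathogen, as in the epidemic-style modelling of [6]. The sketch below is a minimal, hypothetical illustration of that analogy rather than the model from [6]: users move from “susceptible” to “sharing” to “corrected,” and a tight echo chamber behaves like a higher contact rate.

```python
# Minimal SIR-style sketch of misinformation spread (illustrative only,
# not the social-intelligence model of [6]).
# S: susceptible users, I: users actively sharing the false claim,
# R: users who have seen a correction and stopped sharing.
def simulate(beta, gamma, n_users=10_000, n_sharing=10, days=60):
    S, I, R = n_users - n_sharing, n_sharing, 0
    history = []
    for _ in range(days):
        new_infections = beta * S * I / n_users  # exposure via feeds and shares
        new_recoveries = gamma * I               # corrections and fact-checks
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        history.append(I)
    return history

# A tight echo chamber acts like a higher contact rate (larger beta):
open_network = simulate(beta=0.3, gamma=0.1)
echo_chamber = simulate(beta=0.9, gamma=0.1)
print(f"peak sharers, open network: {max(open_network):.0f}")
print(f"peak sharers, echo chamber: {max(echo_chamber):.0f}")
```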
Is There a Way to Model Trust?
Lee et al. (2013) developed a computational model to predict how much one person trusts another. The model was trained with machine learning on data the authors had obtained from previous human-subject experiments, and its predictions were based on the non-verbal behaviours people display during social interaction. Specifically, the authors used supervised learning, in which the model learns to map input patterns (observed behaviours) to output patterns (trust-related outcomes) in order to make accurate predictions [8].
From their previous studies, they identified non-verbal cues associated with untrustworthy behaviour and used these results to train the computational model to predict trust-related outcomes. A key part of the study was a humanoid robot used to evaluate how much an individual trusts the robot. The authors classified certain actions as ‘low trust indicators’, such as face touching, arm crossing, leaning backwards and hand touching, and they also programmed neutral conversational gestures. Each participant had a conversation with the robot, whose mannerisms were controlled across participants, and an increased frequency of these non-verbal cues was defined as ‘less trusting’. The authors predicted that when the robot expressed these cues, participants would view it as less trustworthy [8].
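Lee et al.’s model is richer than this, but the supervised-learning idea can be sketched in a few lines: represent each interaction by how often the four low-trust cues occurred, pair those inputs with an observed trust outcome, and fit a classifier. The cue counts and labels below are invented for illustration, not the authors’ data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: counts of [face touching, arm crossing, leaning back, hand touching]
# observed during one conversation (hypothetical data, not Lee et al.'s).
X = np.array([
    [0, 1, 0, 0],
    [4, 3, 2, 3],
    [1, 0, 1, 0],
    [3, 4, 3, 2],
    [0, 0, 0, 1],
    [5, 2, 4, 4],
])
# Label: 1 if the partner later behaved untrustworthily, else 0.
y = np.array([0, 1, 0, 1, 0, 1])

# Supervised learning: fit a mapping from cue counts (inputs) to outcomes.
model = LogisticRegression().fit(X, y)

# Predict for a new interaction that shows frequent low-trust cues.
print(model.predict_proba([[4, 3, 3, 2]])[0, 1])  # P(untrustworthy)
```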
Figure 2: The role of robots is still uncertain, but they might provide insights that help us model complex phenomena such as the degree of trust in an entity. Source: Unsplash
The Role of Computational Modelling
Manually fact-checking sources becomes tedious when many sources must be checked in a limited amount of time [9]. Recently, Khalil et al. explored quantum machine learning methods for detecting fake news accurately. How a computational model classifies fake news depends on the learning approach. Knowledge-based classification fact-checks the content and the author’s authenticity; style-based classification analyses the tone of the writing to judge whether it could mislead the reader; and propagation-based classification analyses how the information spreads, for example via reposts or share buttons. The authors of this study focused on propagation-based classification, training their computational model on features from social media posts such as likes, comments, shares and discussions [10].
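Propagation-based classification is appealing because it needs only engagement metadata, not the article text. A minimal sketch of turning one post’s spread into a feature vector might look like this (the field names and ratios are hypothetical, not those used in [10]):

```python
# Hypothetical engagement record for one post; a real pipeline would pull
# these values from a platform API or a dataset like the one used in [10].
post = {
    "likes": 1200,
    "comments": 340,
    "shares": 860,
    "followers_of_author": 5000,
    "hours_since_posted": 6,
}

def propagation_features(p):
    """Turn raw engagement counts into a feature vector for a classifier."""
    reach = max(p["followers_of_author"], 1)
    age = max(p["hours_since_posted"], 1)
    return [
        p["shares"] / age,                    # share velocity
        p["comments"] / max(p["likes"], 1),   # discussion-to-approval ratio
        p["shares"] / reach,                  # spread beyond the author's audience
        p["likes"] / reach,
    ]

print(propagation_features(post))
```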
In brief, the authors used a BuzzFeed dataset of 2282 news records grouped as mostly true, mostly fake, mixed and non-factual. After processing the data, they applied a feature map: a transformation that highlights features of the input data, which is important for neural networks because it allows the model to recognise patterns in the dataset [10]. When they compared their model’s accuracy with fake news models from other studies, their model, PegasosQSVM, achieved the highest accuracy, suggesting it can successfully filter out fake information [10].
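PegasosQSVM takes half of its name from Pegasos, a well-known stochastic sub-gradient solver for support vector machines; the quantum part concerns how the feature map is computed. As a rough illustration of the classical half only, here is a minimal linear Pegasos trainer (a generic sketch, not the authors’ implementation):

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=20, seed=0):
    """Linear SVM trained with the Pegasos stochastic sub-gradient method.

    X: (n_samples, n_features) matrix, e.g. propagation features per post.
    y: labels in {-1, +1}, e.g. -1 = mostly fake, +1 = mostly true.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decreasing step size
            if y[i] * X[i].dot(w) < 1:     # margin violated: hinge-loss gradient step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # margin satisfied: shrink weights only
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Classify each row of X by the sign of the decision function."""
    return np.sign(X @ w)
```

In a quantum SVM, the role of the feature map is played by a quantum circuit that embeds the data before similarities between samples are computed.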
How text is characterised as vectors for machine learning significantly affects how effectively a model predicts and classifies fake news. A study by Katsaros et al. (2019) explored and compared different machine learning paradigms, including regression, support vector machines, Bayesian methods, decision tree methods and neural networks. The paper compared the speed and effectiveness of these paradigms at detecting fake news when provided with a URL, and it also offered insight into which text-to-vector transformation technique was most effective: the authors concluded that term frequency-inverse document frequency (TF-IDF) achieved the highest accuracy of the techniques compared [11]. Understanding the efficiency of these paradigms can help improve computational modelling systems and enable a better understanding of complex phenomena.
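As a concrete picture of the text-to-vector step, here is a minimal TF-IDF pipeline using scikit-learn. The headlines and labels are invented; TF-IDF weights each word by how often it appears in a document, discounted by how common the word is across the whole corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical headlines with labels: 1 = credible, 0 = fake.
headlines = [
    "Health officials publish annual vaccination statistics",
    "Scientists SHOCKED by this one weird miracle cure",
    "University releases peer-reviewed climate study",
    "Secret document PROVES the election was stolen",
]
labels = [1, 0, 1, 0]

# TF-IDF: weight terms by in-document frequency, discounted by corpus frequency.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(headlines)

# Any downstream classifier can consume the vectors; logistic regression here.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["Miracle cure SHOCKS doctors"])))
```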
Challenges to Misinformation Studies and Modelling
While computational models of trust and of misinformation and disinformation look promising, computational modelling itself is not without controversy. It is a valuable way to explore complex systems, but its ability to explain them is judged by how well the model performs under the conditions specified by the modeller [12].
There are also issues surrounding the variability of frameworks, models and simulations. A model that is highly accurate on datasets about one topic may perform poorly on another. In a review comparing studies of computational modelling for fake news detection, the authors found that many models focus on subjects such as politics. Other subjects should be considered when training models, as the digitalised world contains diverse topics. Going forward, studies of computational methods and misinformation/disinformation should aim to be context-independent, so that models can adapt to a digitalised world [13].
Ethics also plays a significant role in the future of computational modelling, and it is a particular concern in misinformation and disinformation studies. Because these models filter out misinformation and disinformation, imposing overly strict criteria can unwittingly censor legitimate kinds of information, which runs counter to the idea of free speech. On the other hand, criteria that are too relaxed let information pass unchecked [13].
It is also challenging to determine universal truths. Truth has a subjective dimension, and trying to establish a single determinate truth implies that there is only one narrative or way of seeing things. Research into how misinformation and disinformation spread and evolve to target different communities is ongoing. Analysing which sources are credible is a difficult task; mitigating and limiting the spread of misinformation and disinformation is harder still [13]. Even with fact-checking and credibility tools, misinformation and disinformation persist in the digital world [14].
Conclusion
Misinformation and disinformation are tools of bad actors, and they diminish social cohesion and togetherness. Enabled by technology and digital platforms, they provoke emotions that make us question our beliefs and our reality, eroding our relationships with each other and with institutions. Even with promising computational models as partial solutions, it is uncertain how well such models will work across all sources and all people. It is up to us to scrutinise, be critical of, and take responsibility for what we share and believe.
[1] G. Grimalda and N. Tänzer, “Understanding and fostering social cohesion,” G20 Insights, Argentina, 2018. Accessed: August 8, 2025. https://www.global-solutions-initiative.org/wp-content/uploads/2025/03/TF8-8.1-Social-cohesion-Policy-Brief-Version-II.pdf
[2] K. Reibold, M. Bachvarova, and P. T. Lenard, “Introduction: trust, social cohesion, and integration,” Crit. Rev. Int. Soc. Political Philos., pp. 1–18, Jul. 2025. https://doi.org/10.1080/13698230.2025.2528379
[3] S. González-Bailón and Y. Lelkes, “Do social media undermine social cohesion? A critical review,” Soc. Iss. Policy Rev., vol. 17, no. 1, pp. 155–180, Dec. 2022. https://doi.org/10.1111/sipr.12091
[4] E. K. Clemons, M. Schreieck, and V. R. Waran, “Managing disinformation on social media platforms,” Electron. Markets, vol. 35, no. 1, p. 52, 2025. https://doi.org/10.1007/s12525-025-00796-6
[5] L. Schulz, Y. Streicher, E. Schulz, R. Bhui, and P. Dayan, “Mechanisms of mistrust: A Bayesian account of misinformation learning,” PLoS Comput. Biol., vol. 21, no. 5, p. e1012814, May 2025. https://doi.org/10.1371/journal.pcbi.1012814
[6] S. Govindankutty and S. P. Gopalan, “Epidemic modeling for misinformation spread in digital networks through a social intelligence approach,” Sci. Rep., vol. 14, no. 1, p. 19100, Aug. 2024. https://doi.org/10.1038/s41598-024-69657-0
[7] D. Caled and M. J. Silva, “Digital media and misinformation: An outlook on multidisciplinary strategies against manipulation,” J. Comput. Soc. Sci., vol. 5, no. 1, pp. 123–159, Mar. 2021. https://doi.org/10.1007/s42001-021-00118-8
[8] J. J. Lee, W. B. Knox, J. B. Wormwood, C. Breazeal, and D. DeSteno, “Computationally modeling interpersonal trust,” Front. Psychol., vol. 4, p. 893, Dec. 2013. https://doi.org/10.3389/fpsyg.2013.00893
[9] N. Capuano, G. Fenza, V. Loia, and F. D. Nota, “Content-based fake news detection with machine and deep learning: A systematic review,” Neurocomputing, vol. 530, pp. 91–103, Apr. 2023. https://doi.org/10.1016/j.neucom.2023.02.005
[10] M. Khalil, C. Zhang, Z. Ye, and P. Zhang, “PegasosQSVM: A quantum machine learning approach for accurate fake news detection,” Appl. Artif. Intell., vol. 39, no. 1, p. 2457207, 2025. https://doi.org/10.1080/08839514.2025.2457207
[11] D. Katsaros, G. Stavropoulos, and D. Papakostas, “Which machine learning paradigm for fake news detection?,” in Proc. 2019 IEEE/WIC/ACM Int. Conf. on Web Intelligence (WI), Oct. 2019, pp. 383–387. https://doi.org/10.1145/3350546.3352552
[12] J. Symons and F. Boschetti, “How computational models predict the behavior of complex systems,” Found. Sci., vol. 18, no. 4, pp. 809–821, Oct. 2012. https://doi.org/10.1007/s10699-012-9307-6
[13] A. B. López, J. Pastor-Galindo, and J. A. Ruipérez-Valiente, “Frameworks, modeling and simulations of misinformation and disinformation: A systematic literature review,” arXiv preprint, Jun. 2024. https://doi.org/10.48550/arXiv.2406.09343
[14] N. L. Bragazzi and S. Garbarino, “Understanding and combating misinformation: An evolutionary perspective,” JMIR Infodemiol., vol. 4, p. e65521, Dec. 2024. https://doi.org/10.2196/65521
Chanel is in her final year of her Bachelor of Science degree, majoring in biological sciences and chemistry. She is passionate about microbes as well as organic and analytical chemistry. She plans to pursue her masters in chemistry next year, specialising in forensic science.
Chanel Lai - BSc, Biological Sciences and Chemistry
Siouxsie is an award-winning microbiologist, science communicator, and lover of bioluminescence and Lego. She was made a member of the New Zealand Order of Merit in 2019 for services to microbiology and science communication, and was named the 2021 Kiwibank New Zealander of the Year for her COVID-19 communication work.