Social media acts as an accelerant for misinformation. False narratives can spread rapidly and go viral thanks to ranking algorithms, emotional intensity, and echo chambers, often reaching far more people than factual information. This article explores how these mechanisms contribute to the rapid spread of misinformation on social media.
1. Algorithms
Social media platforms prioritise posts that drive engagement, such as likes, shares, and comments, because engagement drives advertising revenue: their algorithms are designed to keep users on the platform for longer. Unfortunately, false information spreads faster and wider than factual content because it is often more shocking, emotionally charged, or novel.
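To make that incentive concrete, here is a minimal sketch of engagement-based ranking. It is illustrative only, not any platform's actual code, and the weights and posts are made up; the point is simply that accuracy never enters the score, so a sensational rumour can outrank a sober fact-check.

```python
# Toy sketch of engagement-based feed ranking (hypothetical weights and data).
# Accuracy is not part of the score; only predicted interaction is.

def engagement_score(post):
    # Assumed weighting: shares and comments count more than likes,
    # since they keep users interacting for longer.
    return post["likes"] * 1.0 + post["comments"] * 3.0 + post["shares"] * 5.0

def rank_feed(posts):
    # Sort the feed purely by engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "fact-check", "likes": 120, "comments": 10, "shares": 15},
    {"id": "shocking-rumour", "likes": 90, "comments": 60, "shares": 80},
]

for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
# The shocking rumour outranks the fact-check because it drives more interaction.
```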
A 2018 study by MIT researchers Vosoughi, Roy, and Aral found that false news was 70% more likely to be retweeted than true news on Twitter. Analysing over 126,000 rumour cascades spread by around 3 million people, they found that false stories reached 1,500 users roughly six times faster than true ones. This rapid spread was attributed to novelty and to the strong emotional reactions misinformation provoked, such as surprise and disgust (Vosoughi et al., 2018). Importantly, the study determined that humans, not bots, were primarily responsible for the spread of false information on the platform.
Interestingly, in 2021, journalists mistakenly claimed that this study had been debunked by newer research (Juul & Ugander, 2021). In fact, the newer study confirmed the original findings rather than contradicting them. The journalists later retracted their claims, but the false "debunking" received far more attention than the correction, ironically demonstrating how misinformation thrives. You can read study author Sinan Aral's account of the episode here.
2. Emotional type and intensity
Beyond novelty, misinformation spreads because it elicits stronger emotional reactions. Research shows that false information evokes more surprise and disgust, whereas true information is more likely to evoke sadness, anticipation, joy, and trust (Vosoughi, Roy, & Aral, 2018).
People are more inclined to share content that triggers a strong emotional response (Berger, 2011; Berger & Milkman, 2012). This is known as the emotional contagion effect, where emotionally charged content spreads rapidly through social networks. Since misinformation is often sensationalised or fear-inducing, it gains traction much faster than factual reporting (Berger, 2011; Berger & Milkman, 2012; Vosoughi et al., 2018).
3. Echo chambers
Social media algorithms personalise content based on user behaviour, creating echo chambers: digital spaces where individuals are predominantly exposed to viewpoints that align with their existing beliefs. Within these closed groups, misinformation is validated and reinforced, and users fall prey to confirmation bias, the tendency to seek out, interpret, and remember information in a way that reinforces existing beliefs and expectations. When accurate information and new ideas are rarely presented and biases go unchallenged, attitude polarisation and belief in misinformation can strengthen, which is why misinformation in echo chambers can be so difficult to correct.
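A toy sketch can show how personalisation hardens into an echo chamber. This is illustrative only, with invented viewpoints and posts rather than any real recommender system: the feed learns which viewpoint a user has clicked on before and then serves more of the same, so opposing views gradually disappear.

```python
# Toy sketch of personalisation creating an echo chamber (hypothetical data).
from collections import Counter

def recommend(history, candidate_posts, top_n=3):
    # Count which viewpoints the user has engaged with in the past...
    preference = Counter(post["viewpoint"] for post in history)
    # ...then rank new posts by how well they match those past preferences.
    return sorted(
        candidate_posts,
        key=lambda post: preference[post["viewpoint"]],
        reverse=True,
    )[:top_n]

history = [{"viewpoint": "A"}, {"viewpoint": "A"}, {"viewpoint": "B"}]
candidates = [
    {"title": "Post 1", "viewpoint": "A"},
    {"title": "Post 2", "viewpoint": "A"},
    {"title": "Post 3", "viewpoint": "B"},
    {"title": "Post 4", "viewpoint": "C"},
]

for post in recommend(history, candidates):
    print(post["title"], post["viewpoint"])
# Viewpoint A dominates the feed; each new click reinforces the preference further.
```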
In one study, participants were asked to deactivate Facebook for four weeks in the run-up to the 2018 US midterm elections (Allcott, Braghieri, Eichmeyer, & Gentzkow, 2020). Compared with participants who kept using Facebook, those who deactivated showed less knowledge of and attention to news, and less polarisation on policy issues. Deactivation may have lowered issue polarisation by reducing the extent to which individuals were exposed to echo chambers in their feeds. However, the study did not find that deactivation reduced affective polarisation: participants still reported negative feelings towards the other political party (Allcott et al., 2020). This suggests that while social media fosters political echo chambers, simply reducing exposure may not be enough on its own to change deeply ingrained biases.
Social media’s ability to spread misinformation faster than facts presents a serious challenge to public discourse. Some mis- and disinformation is deliberately amplified by bots and trolls to manipulate public opinion, and the speed and scale of viral misinformation make it hard to counteract in real time. This raises a critical question: what would a healthier platform ecosystem look like?
A responsible digital space should provide room for diverse opinions and promote democratic debate. However, the current landscape often falls short, raising difficult questions about how platforms should be designed and governed.
Without intervention, mis- and disinformation will continue to pose significant risks to public health, public policy, and the environment.
References