
The Manipulation of Belief in the Age of Online Misinformation.

Thanks to the internet’s promise of widespread information sharing, people now have greater access to knowledge than ever before. Yet the digital revolution has also produced a more dangerous phenomenon: misinformation. Misinformation, false or misleading content spread without a deliberate intent to deceive, thrives online because social media platforms reward speed, interaction, and emotional reaction. In a world where billions of people get their news from algorithmically curated feeds, misinformation is no longer a side issue; it has become a fundamental feature of digital communication.

The consequences are significant. Misinformation has changed how people form opinions and make decisions, from swaying elections and public health choices to eroding trust in science and journalism. During the COVID-19 pandemic, viral posts claiming that vaccines were dangerous or part of a conspiracy undermined the work of health officials. During the Brexit referendum and the 2016 U.S. election, false information flourished on Facebook and Twitter, shaping opinions about the candidates and issues at stake. Ongoing disinformation campaigns around climate change and the war in Ukraine continue to distort public perceptions of reality.

Let’s examine online misinformation’s growing impact in more detail: how it spreads, why it is so convincing, and what can practically be done about it. How much the misinformation we consume online shapes our opinions is a question that affects everyone. More importantly, can society protect itself from it?

Misinformation influences public opinion in ways people rarely notice. It is effective not only because of its content but because it exploits our cognitive biases and the way social media platforms are designed to capture and hold attention. That combination makes false narratives extremely powerful. Several approaches show promise, but no single solution can fully address the problem. Media literacy education, for example, helps people recognise false information when they see it. Another option is to demand greater accountability from the platforms that spread misinformation. Perhaps the most effective strategy is a comprehensive, multi-layered one that combines education, technology, and policy to attack the problem from several angles. Together these responses offer a meaningful path forward, even though none of them is sufficient on its own.

The Impact of Misinformation

The spread of online misinformation is systemic rather than accidental. According to a seminal study in Science, false information on Twitter is 70% more likely to be retweeted than accurate information and spreads “significantly farther, faster, deeper, and more broadly”. This pattern holds across domains, including politics, health, and celebrity rumours. The dynamics of virality (speed, novelty, and emotional appeal) favour untrue statements over verified facts.

The COVID-19 pandemic was a striking example of the human cost of misinformation. According to a study from the Reuters Institute at Oxford, much of the false information about COVID-19 came not from niche conspiracy websites but from prominent public figures whose statements were amplified online. False claims about masks, vaccines, and miracle cures did not just go viral; they changed behaviour, contributing to vaccine hesitancy and non-compliance with health precautions. The World Health Organisation called the situation an “infodemic”: an overabundance of information, much of it false, that spreads as dangerously as the virus itself.

Democracy is affected as well. During the 2016 U.S. election, Russian disinformation operations weaponised social media to divide voters, amplifying divisive narratives through fake accounts. Similarly, during the Brexit campaign, false claims such as the notorious “£350 million per week to the EU” went viral online, distorting public perceptions of the costs of EU membership. A Council of Europe report describes this kind of “information disorder” as undermining the foundations of democratic societies by manipulating their shared reality.

In conflict zones, misinformation has evolved into a geopolitical weapon. Russia’s disinformation campaigns about its war in Ukraine, which promote narratives that deny atrocities or justify the invasion, have muddied the international information environment. Beyond the battlefield, such misinformation shapes public opinion and policy debates by distorting perceptions of what is actually happening on the ground.

False information is more than simply a digital inconvenience; it has real consequences that influence public trust, political choices, and health outcomes.

Causes of the Spread

Why does false information spread so easily? Researchers identify two primary causes: the design of social media platforms and users’ cognitive biases.

On the platform side, algorithms designed to maximise engagement play a central role. Social media feeds prioritise content that attracts clicks, shares, and comments, regardless of its accuracy. This means that dramatic or emotionally charged claims are algorithmically amplified, creating “virality loops.” Tandoc, Lim, and Ling (2018) argue that “fake news” thrives on digital platforms’ attention-driven advertising business model (link). In other words, platforms profit from engagement, which is why misinformation grows.
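
To make this mechanism concrete, here is a minimal, purely illustrative sketch of an engagement-only ranking rule. It is not any platform’s actual algorithm; the post fields, weights, and example numbers are assumptions invented for the illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int
    emotional_charge: float  # 0..1: how provocative the wording is (illustrative)
    accurate: bool           # known to fact-checkers, but never used below

def engagement_score(post: Post) -> float:
    """Toy engagement-only ranking: rewards clicks, shares, comments and
    emotionally charged wording; accuracy never enters the formula."""
    return (post.clicks + 3 * post.shares + 2 * post.comments) * (1 + post.emotional_charge)

posts = [
    Post("Calm, sourced explainer on vaccine safety", 120, 10, 5, 0.1, True),
    Post("SHOCKING 'cure' doctors don't want you to see", 90, 60, 40, 0.9, False),
]

# The false but provocative post wins the top feed slot under this scoring.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(p), 1), "-", p.text)
```

The point of the sketch is simply that when accuracy is absent from the objective, emotionally charged falsehoods can outrank sober corrections.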

Human psychology plays its part as well. People are not impartial processors of information; we are prone to biases such as confirmation bias (favouring information that supports our existing beliefs) and motivated reasoning (accepting information that supports our identity or worldview). Research suggests that people often share false information not because they want to deceive but because they fail to stop and consider its accuracy. When interacting with online content, people tend to rely on quick, intuitive responses rather than critical analysis, a tendency Pennycook and Rand describe as “lazy, not biased”.

False information also exploits our mental reflexes. Lewandowsky, Ecker, and Cook describe the “illusory truth effect”: the more often a false claim is repeated, the more trustworthy it appears. Social media amplifies this effect through constant memes and reposts, and repeated exposure can leave lasting traces even when people know a claim is untrue.

Some academics argue that false information has less impact than is often assumed, mostly confirming beliefs people already hold rather than dramatically changing their minds. This perspective, however, overlooks the power of repeated exposure to shift public opinion and deepen divisions. Even if it rarely produces outright conversions, misinformation polarises societies by reinforcing echo chambers and eroding trust in institutions.

Combating Misinformation

If disinformation is rooted so deeply in digital culture, both structurally and psychologically, can society protect itself? There is no silver bullet, but a combination of tactics offers hope: media literacy, platform accountability, and multi-level interventions.

A. Media Literacy

One of the most widely endorsed solutions is teaching people to evaluate the information they see online. Media literacy programmes train users to verify claims, question sources, and recognise manipulative content. Experimental research by Guess, Lerner, Lyons, Montgomery, and Nyhan found that digital literacy interventions significantly improve people’s ability to distinguish between mainstream and false news.

UNESCO recommends that “media and information literacy for all” be incorporated into educational curricula. Prebunking tactics, which teach people the tricks of misinformation before they encounter it, have also proved successful. In a 2022 Google Jigsaw experiment, short videos explaining common manipulation strategies (such as scapegoating or emotionally charged language) reduced participants’ susceptibility to false information they encountered later.

Literacy alone, however, cannot solve the problem. The effects of interventions often fade over time, and not everyone has equal access to education. Without structural change, users remain exposed to the systemic incentives of the platforms themselves.

B. Platform Responsibility

Because platforms play a central role in the circulation of false information, they must be part of the solution. Measures such as algorithmic transparency, fact-checking labels, and downranking of demonstrably false content can reduce virality. A study by Clayton et al. found that exposure to fact-checking labels considerably decreased the perceived accuracy of deceptive headlines.
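
As a rough, hypothetical sketch of what downranking could mean in practice (the penalty factors below are invented for illustration and do not describe any platform’s real policy), a moderation step might simply scale a post’s ranking score once fact-checkers flag it or a warning label is attached:

```python
def moderated_score(base_score: float, flagged_false: bool, has_label: bool,
                    downrank_factor: float = 0.2, label_factor: float = 0.7) -> float:
    """Toy moderation step: posts flagged as false by fact-checkers are heavily
    downranked; posts carrying a warning label are mildly demoted."""
    if flagged_false:
        return base_score * downrank_factor
    if has_label:
        return base_score * label_factor
    return base_score

# A viral but debunked post (raw score 665) drops below an accurate one (raw score 176).
print(moderated_score(665.0, flagged_false=True, has_label=True))    # 133.0
print(moderated_score(176.0, flagged_false=False, has_label=False))  # 176.0
```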

Some platforms have experimented with community-driven fact-checking. Twitter’s (now X) “Community Notes” feature lets users collaboratively add context to misleading posts. There is preliminary evidence that, because corrections come from peers rather than from the platform itself, this approach slows the spread of false information more effectively than top-down moderation.

Transparency is also essential. Because users rarely understand how algorithms decide what they see, manipulation is harder to spot. Alongside media literacy, scholars argue for “algorithmic literacy”: helping users understand not only the content itself but also how platforms decide what appears in their feeds.

C. Multi-Level Solutions

Ultimately, fighting false information requires action from platforms, governments, and individuals. The European Union’s Digital Services Act, which requires large platforms to mitigate the risks of disinformation, shows how governments can mandate accountability and transparency (link). NGOs and fact-checking organisations, such as Full Fact in the UK, play an essential role in monitoring misinformation ecosystems. Journalists, too, must adapt by making their reporting more transparent and rebuilding trust with their audiences.

Researchers stress the importance of layered interventions. A recent framework by Ecker et al. recommends combining “prebunking” to build resilience, “debunking” to correct falsehoods, and systemic reforms to curb amplification.

Conclusion

Online misinformation is far more than background noise in the digital world; it has a significant influence on how people think, decide, and behave. It thrives on the structural incentives of social media platforms and the psychological weaknesses of human cognition, and its effects, from polarised politics to vaccine hesitancy, are clear and significant.

However, society is not helpless. Media literacy initiatives can equip individuals with critical thinking skills, prebunking techniques can inoculate against manipulation, and platforms can design systems that prioritise accuracy over virality. Governments and civil society can enforce accountability and transparency. None of these tactics is sufficient on its own, but together they form a multilayered defence.

The goal of the fight against misinformation is not to eliminate lies entirely, an impossible task, but to make societies less susceptible to them. Ultimately, protecting democracy and public trust in the digital era depends as much on the institutions, beliefs, and mechanisms that construct our shared reality as on the information we consume.

By Leti
