Trapped in the tunnel: How internet algorithms are turning the web into a polarised maze

The internet, once hailed as a great equaliser, now faces a growing enemy: algorithmic polarisation.

These invisible mechanisms, designed to personalise our online experiences, are increasingly accused of trapping us in echo chambers, amplifying extreme voices, and fuelling societal divisions. But is this just doomsday talk, or does data back up the concerns?

Echo chambers: A growing reality
A 2020 study by MIT found that on platforms like Facebook, users’ exposure to opposing viewpoints had decreased by 68% in just five years. The result? We’re increasingly interacting with information that confirms our existing beliefs, creating “echo chambers” where dissenting voices are muted. This isn’t just anecdotal; a 2022 Pew Research Center study revealed that Americans with opposing political views get only 10% of their news from the same sources.

Numbers tell the story:
• A 2019 study by Oxford University found that in the lead-up to the 2016 US election, fake news stories shared on Facebook were 70% more likely to be liked, commented on, and shared than factual news stories.
• A 2023 report by the World Economic Forum revealed that 80% of respondents believe misinformation is a major threat to democracy.

The algorithmic vortex
So, how exactly do these algorithms work? Platforms like Facebook and Twitter use complex algorithms that analyse our data, including likes, shares, clicks, and comments, to build a profile of our interests and preferences. This profile is then used to curate our newsfeeds and suggest content the platform believes is most likely to keep us engaged. While this personalisation can be convenient, it can also lead to a filter bubble effect, where we’re only exposed to information that confirms our existing beliefs.
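
To make this feedback loop concrete, here is a deliberately simplified Python sketch. It is not any platform’s actual ranking code; the topic labels, weights, and functions are invented for illustration. It shows how ranking candidate posts by their match to an engagement-derived profile naturally favours content the user already agrees with.

```python
from collections import Counter

def build_profile(interactions):
    """Aggregate a user's likes, shares, and clicks into per-topic weights."""
    profile = Counter()
    for topic, weight in interactions:       # e.g. ("viewpoint-A", 1.0)
        profile[topic] += weight
    total = sum(profile.values()) or 1.0
    return {topic: w / total for topic, w in profile.items()}

def rank_feed(profile, candidate_posts, top_n=5):
    """Score posts by how closely they match the profile and keep the top N."""
    scored = [(profile.get(post["topic"], 0.0), post) for post in candidate_posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in scored[:top_n]]

# Toy example: a user who has mostly engaged with one viewpoint.
interactions = [("viewpoint-A", 1.0)] * 8 + [("viewpoint-B", 1.0)] * 2
profile = build_profile(interactions)

candidates = [{"id": i, "topic": "viewpoint-A"} for i in range(10)] + \
             [{"id": i + 10, "topic": "viewpoint-B"} for i in range(10)]

feed = rank_feed(profile, candidates)
# The feed is dominated by viewpoint-A; every new interaction with it skews
# the profile further, which is the feedback loop behind the filter bubble.
print([post["topic"] for post in feed])
```

In this toy model, each click on the top-ranked content feeds back into the profile, so the narrowing compounds over time; real ranking systems are vastly more sophisticated, but the reinforcing dynamic the article describes is the same.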

The consequences of a fractured web
The implications of this algorithmic polarisation are far-reaching. It can:

• Fuel social and political division: By amplifying extreme voices and suppressing opposing viewpoints, algorithms can contribute to a sense of “us vs. them” and make it harder to find common ground.
• Spread misinformation and disinformation: Algorithmic filters can create echo chambers where false information spreads unchecked, leading to a decline in trust in legitimate sources and a rise in conspiracy theories.
• Hinder critical thinking and civic engagement: When we’re only exposed to information that reinforces our existing beliefs, it becomes harder to critically evaluate different viewpoints and engage in constructive dialogue.

The road ahead
Breaking free from the algorithmic echo chamber requires a conscious effort. We can:

• Seek out diverse viewpoints: Actively search for information from sources that challenge our existing beliefs.
• Critically evaluate information: Don’t take everything you read online at face value. Check sources, verify facts, and be sceptical of claims that seem too good to be true.
• Engage with people who hold different views: Have respectful conversations with people who don’t agree with you. This can help you see issues from different perspectives and expand your understanding.
• Demand transparency from platforms: Hold platforms accountable for the algorithms they use and advocate for greater transparency in how they curate content.

The internet has the potential to be a powerful tool for connecting people and fostering understanding. However, the current algorithmic landscape is creating a fragmented and polarised online world. By being aware of the filter bubble effect and taking steps to break free from it, we can help to create a more inclusive and informed online experience for everyone.

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of ET Edge Insights, its management, or its members.