Echo Chambers & Political Discourse: Essential Reads on Media Manipulation and Algorithms

Discover crucial insights into digital media and privacy with Digiseg’s latest reading list. These recommended books delve into how algorithms create echo chambers and influence political discourse, offering an in-depth look at their impact on society.

Algorithms shape our world in ways we never imagined, influencing everything from the news we consume to our political beliefs. In this article, Søren H. Dinesen, Co-founder and CEO of Digiseg, explores critical trends in digital media through a curated list of must-read books, examining the intricate relationship between algorithms, media, and privacy. These works dissect the creation of echo chambers, the rise of filter bubbles, and the profound impact these phenomena have on political discourse and societal polarization.

The Algorithmic Influence: Shaping Our News and Beliefs

Dinesen reviews key titles that unravel how algorithms narrow our perspectives and distort our cultural and democratic frameworks. By examining these influential books, he provides a comprehensive overview of the challenges and implications of living in an algorithm-driven world, urging us to rethink how digital media shapes our reality.

Let’s dive in.

The Media Manipulation Machine: A Closer Look

In their book The United States of Distraction: Media Manipulation in Post-Truth America, Dr. Nolan Higdon and Mickey Huff tackle urgent questions facing America. Key among them: How did we get to the point where citizens decide what’s true based on the number of people who believe it, rather than on facts or reality?

Consider some of the outlandish things Americans believe:

  • Three in ten believe the 2020 presidential election was stolen from Donald Trump
  • About 25% say there is at least some truth to the claim that COVID was planned
  • 22% believed that the “storm” predicted by QAnon would occur
  • 7% believe chocolate milk comes from brown cows

One would think it’s easy enough to dispel such ridiculousness … isn’t that what the media is supposed to do? Provide reality checks for a reading public? Sadly, vetted journalism isn’t convincing anyone because distrust of the media is at an all-time high.

Echo Chambers: The Rise of Filter Bubbles

How did we arrive at such a sad state of affairs? Over the past 10+ years, numerous scholars have concluded that the root of our troubles lies in the deployment of algorithms. Algorithms are designed to present us with the information we have shown a propensity to consume, which, in turn, narrows both the range of information we see and, ultimately, what we believe.

The Impact of Digital Isolation on Democracy

In 2011, Eli Pariser coined the term “filter bubble” in his book The Filter Bubble: What the Internet Is Hiding from You. Two years earlier, Google had begun personalizing its search results, narrowing sources to just those its algorithms predict will interest the user.

To Pariser, it was the dawn of the great dumbing down, as the range of information people were exposed to steadily shrank. We thought we had access to the full spectrum of global information, that we were traveling along a great information superhighway, but with algorithms deciding what we see, that is no longer the case. An important onramp to that highway, Google Search, had quietly become an offramp, steering us toward ever-narrower destinations, unbeknownst to us.

Radicalization Through Algorithms: How Big Tech Went Wrong

But that was only one of the challenges. The other was how algorithms group us into clusters of people who think like us, trapping us in “filter bubbles.” Today we call those bubbles echo chambers.

All of this narrowing and grouping is fueled by AI: algorithms that track our online activity and preferences, then selectively show us content that aligns with our existing beliefs and interests. With enough positive reinforcement from others on the internet, it comes as no surprise that people fall for conspiracy theories.
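To see how little machinery this takes, here is a minimal, hypothetical sketch of an engagement-driven recommender. The catalog, topics, and scoring rule are invented for illustration; this is not any platform’s actual system. Showing more of whatever was clicked is enough to make a feed converge on a single topic:

```python
from collections import Counter
import random

# Toy catalog: item id -> topic (purely illustrative).
CATALOG = {"a1": "politics", "a2": "politics", "b1": "science", "c1": "sports"}

def recommend(history, k=3):
    """Rank items by how much the user has engaged with their topic.

    This one rule -- show more of what was clicked -- is the core of
    the filter-bubble feedback loop.
    """
    def score(item_id):
        # Unseen topics score near zero; a tiny random term breaks ties.
        return history[CATALOG[item_id]] + random.random() * 0.1
    return sorted(CATALOG, key=score, reverse=True)[:k]

history = Counter()
for session in range(5):
    feed = recommend(history)
    clicked = feed[0]                  # the user clicks the top recommendation...
    history[CATALOG[clicked]] += 1     # ...which reinforces that topic...
    print(session, feed)               # ...and the feed converges on it
```

Nothing in the loop “decides” to isolate anyone; the narrowing is an emergent property of one innocuous rule applied over and over.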

It’s one thing to believe that chocolate milk comes from brown cows (when I was young I thought trees made wind by bending back and forth). But when AI-curated content leads someone to ransack a Target outlet or bring a gun to a pizza joint to free kids held as slaves, it’s time to admit we have a problem.

This theme is echoed by Stanford University professors Rob Reich, Mehran Sahami, and Jeremy M. Weinstein in their book, System Error: Where Big Tech Went Wrong and How We Can Reboot. They argue that when Big Tech hyper-focuses on a single metric, say YouTube’s decision to prioritize time spent consuming videos, bad things happen. They’re not wrong: it’s now understood that tuning an algorithm to prompt binge-watching leads to the radicalization of users and the spread of conspiracy theories.
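The single-metric failure mode is easy to demonstrate in miniature. In this illustrative sketch, the videos, metric scores, and weights are all invented; this is not YouTube’s actual objective. Ranking purely by predicted watch time surfaces the most compulsive content, while a blended objective at least lets other values count:

```python
# Hypothetical videos with made-up predicted scores on a 0-to-1 scale.
VIDEOS = {
    "calm_explainer":      {"watch_time": 0.4, "accuracy": 0.9, "wellbeing": 0.8},
    "outrage_rabbit_hole": {"watch_time": 0.9, "accuracy": 0.2, "wellbeing": 0.1},
}

def rank(weights):
    """Order videos by a weighted sum of their predicted metrics."""
    def score(name):
        return sum(w * VIDEOS[name][metric] for metric, w in weights.items())
    return sorted(VIDEOS, key=score, reverse=True)

# Optimizing for watch time alone puts the rabbit hole on top.
print(rank({"watch_time": 1.0}))
# Blending in other signals flips the ranking.
print(rank({"watch_time": 0.4, "accuracy": 0.3, "wellbeing": 0.3}))
```

The point Reich, Sahami, and Weinstein press is that the choice of objective is itself a value judgment, whether or not anyone treats it as one.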

Memes as Propaganda: The Online Battle for Truth

In the book Meme Wars: The Untold Story of the Online Battles Upending Democracy in America, Harvard’s Joan Donovan and her co-authors argue that memes serve as the bedrock of conspiracy theories, helping to make the unbelievable believable. Clever and built to go viral, memes like “Stop the Steal” can move entire cohorts of people to act violently and in anti-democratic ways.

Conspiracy Theories and Algorithmic Fuel: A Dangerous Mix

Memes are particularly effective at swaying people and recruiting them to extremism (the book Accountable: The True Story of a Racist Social Media Account and the Teenagers Whose Lives It Changed describes how Googling “black on black crime” will lead people down a white supremacist rabbit hole). 

One of the reasons memes are so effective at converting people is that they break down offensive and outlandish beliefs into bite-sized, memorable riffs. They serve as the starting point for a process that slowly eases people into horrific belief systems. Extremist recruiters understand this, and they’ve honed their skills accordingly. This, by the way, isn’t just an American problem; it’s global.

Cultural Flattening: Algorithms Blunting Creativity

In his book, The Next Civil War, Stephen Marche also warns that algorithms make us more extreme, but he says they add another wrinkle: they make it easy for racist and violent people to find one another. In the days before the internet, a disaffected youth would have had trouble joining a white supremacist or neo-Nazi group because such groups operated underground. Today, social media algorithms will recommend them as something users may be interested in.

But it’s not just the worst parts of society that algorithms distort. Algorithms are also blunting the best parts of cultural life, as Kyle Chayka’s recent book Filterworld: How Algorithms Flatten Culture makes clear.

He warns that algorithms have effectively constricted our access to information, serving the lowest common denominator of content because such blandness appeals to the largest number of people. The result is books, music, and even physical spaces such as cafes that all read, sound, and look alike, because algorithms have taught us to expect no better. Groundbreaking ideas are penalized because they lack the virality of more market-tested ones. Put another way, algorithms are flattening global culture.

A Broken Promise: The Internet’s Failed Information Superhighway

The internet was supposed to be an information superhighway, providing unrestricted access to ideas to anyone with an interest in learning. But instead of providing easy access to ideas and information, the algorithms that now rule the Web are shrinking what we’re exposed to, and in many ways, our free will. Instead of being presented with new ideas, we are grouped into cohorts of people who amplify our beliefs and prejudices, setting the stage for outlandish, post-truth beliefs.

The Path Forward: Reclaiming Our Shared Reality

Shared truths are essential to functioning societies: this candidate won an election, the Earth is round, not flat, and climate change is an urgent issue that needs addressing. Without a basic set of facts we can all agree on, humanity will only get more mired in the deception and misinformation propagated by AI, with people of differing opinions retreating further into their corners.

It’s our duty to prevent that from occurring.
___

This is the third article in Digiseg’s Privacy Series. The first, Privacy Signals, AI in Advertising & the Democratic Dilemma, takes a broad view of the issues of private signals and one-to-one signals as we see them. The second, Surveillance Capitalism 2.0, examines how emerging privacy-centric solutions track user behavior just as much as cookies ever did.