Bad actors are often blamed for sowing ideological division. But what about the incentive structures that underlie our media landscape?
In the lead-up to the 2020 election, 87% of Democratic Party voters expected a Biden victory, while 84% of Republicans expected Donald Trump to win (Coibion, Gorodnichenko, and Weber 2020). In fact, as many as 15% of Democrats and 23% of Republicans rated their candidate’s chances of winning at 100%. While it’s not wholly unusual for supporters of a particular candidate to overestimate their likelihood of success, this partisan division effectively amounts to two groups experiencing entirely different realities. These disparities make it easier for baseless conspiracies about election fraud to gain traction over the truth. This growing ideological rift among the American populace is now rightly seen as a catalyst for political violence, eroding trust in democratic institutions and threatening the very fabric of democratic governance.
When an event like January 6th occurs, there’s an immediate temptation to individualize the blame. We blame the rioters for their failure to distinguish basic facts from outrageous lies and we blame Donald Trump and his allies for spreading those lies. However, events like this are only made possible by the substantial epistemic divide in Americans’ beliefs. While Trump may have armed the bomb and the rioters flipped the switch, it was the distributors of pervasive misinformation and deep-seated polarization that built the bomb itself. That can only lead us to wonder, who are the architects of these ideological divisions?
In tracing the root causes of polarized beliefs, attention often falls on the increasingly partisan coverage of mainstream news outlets (Kim, Lelkes, and McCrain 2022). This raises a chicken-or-egg question: are news networks responding to consumer demand by adjusting their offerings to match the beliefs of their prospective audiences? Or, is the divide in the audience’s beliefs being driven by the increasingly partisan coverage offered by news networks?
To understand the possible reasons behind the shift towards more partisan coverage, we need to break down the media landscape and the incentive structures that led us to this point. As Charlie Munger put it: “Show me the incentives, and I’ll show you the outcome.”
If everyone was after land in the 19th century and oil in the 20th, it seems that Beanie Babies / NFTs / attention is the new hottest commodity on the market.
This shift towards capturing attention has brought with it a slew of new challenges. Where tangible resources like oil allow us to leverage our extensive knowledge of fluid dynamics and geology to maximize their extraction, the mechanisms underlying information demand are far more elusive. The complexity of these dynamics leaves us without an optimized method for attracting attention, making approximation and innovation key to winning eyeballs.
Picture yourself as the CEO of a media conglomerate in the early 2000s. Your profits are dependent on the share of the market you hold, but that market share is constantly being siphoned off by competitors with similar offerings. The functional features that your network once prided itself on—being first on the scene or having exclusive coverage—begin to dry up as the internet democratizes the information space. You’re left asking yourself:
It’s no industry secret that emotive content gets views. Emotion plays a central role in how we select which content to consume, and in turn, how that content is curated. In psychology, we think of emotions as varying along two dimensions: valence (how positive or negative the emotion is) and arousal (how intense the emotion is). Excitement, for example, is high in both valence and arousal, while a feeling of calm is high in valence but low in arousal.
In an analysis of online news headlines between 2000 and 2019, Rozado, Hughes, and Halberstadt (2022) reported a gradual but sustained trend towards more negative emotional valence. While they didn’t explicitly analyze trends in the emotional arousal of content, the frequency of headlines with a neutral emotional payload dropped from around 70% to below 50%, while those featuring anger and fear increased by 100% and 150%, respectively1.
If we combine the sentiment analysis of Rozado, Hughes, and Halberstadt (2022) with a metric of partisan bias (we’ll use the scores from the August 2024 [v12.0] Ad Fontes media bias report), we find that it is more biased news outlets that tend to produce the most negatively-valenced content2.
From these data, it seems that if you want to produce more emotive content, it might be necessary to adopt a more strongly partisan stance3. It’s plausible to suggest, therefore, that the shift towards more biased content may simply be a side-effect of the pursuit of emotionally-driven eyeballs.
Similar to emotive content, it’s a well-established feature of human information demand that we tend to favor information consistent with our existing beliefs or values (commonly termed confirmation bias). Picture someone whose conspiratorial anti-establishment beliefs are an important part of their identity. This person is more likely to seek out anti-vax information, for example, than someone who doesn’t share those beliefs. This behavior is commonly cast as a defense mechanism for protecting our chosen identity: we avoid information that we think might challenge our beliefs because the dissonance between the two is unpleasant. However, this view of confirmation bias is reductive and inconsistent with our understanding of how we select information to sample and how we use that information to inform ourselves.
To an extent, behavior consistent with confirmation bias is not biased at all, but a logical, Bayesian process of integrating one’s priors into the evaluation of new evidence. To someone with a deep mistrust of institutions, anti-vax conspiracies are far more plausible than they are to the average person. Extend this to a less extreme example and we can understand that someone with right-leaning beliefs might choose to watch Fox News over a more balanced alternative simply because they believe it to be more accurate than a center-aligned network. All of this is to make the obvious point that our beliefs and our decisions about where we choose to get information from are not independent.
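This prior-dependence falls out of a single application of Bayes’ rule. The sketch below uses made-up likelihoods to show how two readers who see the same evidence, and weigh that evidence identically, can end up with very different posteriors purely because their priors differ.

```python
def posterior(prior, likelihood_true, likelihood_false):
    """Bayes' rule for a binary claim:
    P(claim | evidence) = P(evidence | claim) P(claim) / P(evidence)."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))

# Two readers encounter the same anti-establishment claim and assign the
# same likelihoods to the evidence; only their priors differ.
# All numbers here are illustrative, not estimates from any study.
trusting = posterior(prior=0.02, likelihood_true=0.8, likelihood_false=0.3)
distrusting = posterior(prior=0.40, likelihood_true=0.8, likelihood_false=0.3)

print(f"{trusting:.2f}")     # ≈ 0.05: still implausible
print(f"{distrusting:.2f}")  # = 0.64: now more likely true than not
```

Neither reader is reasoning incorrectly; the same evidence simply lands on different priors.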
The evolution of one’s beliefs and one’s choice of media content can be thought of as a spiral (Slater, Shehata, and Strömbäck 2020). By integrating our existing beliefs and attempting to avoid cognitive dissonance, we sample media that is consistent with our existing values and beliefs. The sampled media content in turn reinforces those pre-existing beliefs and values, resulting in deeper entrenchment and resilience to change. This produces a feedback loop, as the strengthened beliefs then influence media selection which then continues reinforcing the increasingly entrenched beliefs…and so on…
So, how might you—early 2000s media mogul—leverage these dynamics to build a dedicated following for your network? Let’s imagine that your news network currently operates as a center-right news outlet – considered the most right-wing of the mainstream outlets. You might consider two main options:
Option 1: Converge. Shifting your coverage toward the center might expand your pool of likely viewers by broadening your appeal to an audience with more center-aligned beliefs.
Option 2: Diverge. Shifting your coverage further to the right might increase the ideological distance between your existing audience and the other networks, allowing you to create and corner a less competitive share of the market.
To get a sense of how these options might work, we can conduct a small-scale simulation. In our simulation, individual agents will repeatedly sample from five media outlets, obtaining some information and updating their beliefs and their trust in each of the five outlets accordingly. This simulation is adapted from the work of Perfors and Navarro (2019) – I’ll leave more details in a footnote here4.
To test how changing the partisan identity of your outlet’s coverage might affect audience metrics, I simulated the choices of 1200 Bayesian, rational agents across 400 iterations. In the first and last 100 iterations, the media outlets did not change in the bias of their coverage. In the middle 200 iterations, Outlet E steadily shifted the partisan identity of its coverage towards/away from the others. Below are example simulations of 30 agents and 5 media outlets. The axes here are fairly arbitrary—we would need more dimensions to capture real ideological positions—we’re just demonstrating here that it’s possible to simulate the basic dynamics.
So, which option will give your outlet better long-term outcomes? Let’s consider two important metrics: (1) What proportion of all views does your outlet receive, and (2) How many dedicated viewers does your outlet have?
To test these, we’ll focus on the final 100 iterations of the simulations. I’m defining a dedicated viewer as any agent who samples from a single outlet on at least 80 of the last 100 iterations.
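That dedicated-viewer metric is straightforward to compute from the simulated choice histories. A minimal sketch follows; the data layout (one sequence of outlet labels per agent) is an assumption for illustration, not the actual simulation code.

```python
from collections import Counter

def dedicated_viewers(choices, threshold=80):
    """Count agents who sampled a single outlet on at least `threshold`
    of the final 100 iterations.

    `choices`: one history per agent, each a sequence of outlet labels,
    one label per iteration.
    """
    count = 0
    for history in choices:
        last_100 = history[-100:]
        _, n = Counter(last_100).most_common(1)[0]
        if n >= threshold:
            count += 1
    return count

# Toy example: 3 agents, 100 iterations each.
loyal = ["E"] * 100                       # dedicated
mostly = ["E"] * 85 + ["A"] * 15          # dedicated (85 >= 80)
switcher = ["A", "B", "C", "D", "E"] * 20 # not dedicated (max 20)
print(dedicated_viewers([loyal, mostly, switcher]))  # 2
```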
Overall, it looks like diverging away from the ideological center produces the best results for Outlet E. Specifically, while diverging didn’t increase the total proportion of views Outlet E received, it did increase its number of regular, dedicated viewers by 24% relative to the converge option.
In this article, I deliberately chose not to address certain incentives that could motivate media outlets to propagate fringe or extremist ideologies. Specifically, I overlooked the potential for these outlets to divert attention from class disparities by focusing on divisions related to race, gender, or sexuality—a strategy that could disproportionately benefit the wealthy owners and financiers of these media entities. Despite taking this charitable approach, we still found that an attention-based economy alone contains potential incentives that promote divisive ideological shifts. Specifically, we found that, in pursuit of a dedicated audience, steadily shifting towards the production of content that is emotive and ideologically extreme could be an effective strategy.
Ultimately, there is no panacea to address the wide-ranging impact our attention-based economy has had on media divisions. A collective approach that combines regulatory oversight, education, ethical journalism, and technological refinement offers the best chance for reducing media-driven ideological polarization. Only through sustained efforts across these domains can we hope to rebuild a media landscape that fosters informed dialogue and promotes societal unity rather than division.
A plausible explanation for the change in mean valence could be the advent of more partisan outlets with strong, negative emotional content in the middle of the 2000–2019 period (e.g., The Daily Wire). However, the downward trend in mean valence is only slightly lessened when we exclude the 27 outlets that don’t have data for the full 20-year period.↩︎
Note that all the content in these analyses is based on written (online) news media. This means that the scores for outlets with both written and broadcast content (e.g., Fox News, MSNBC, etc.) are based solely on their written content. This is relevant because the broadcast content for some of these outlets is far more focused on opinion pieces, which tend to have a considerably stronger partisan bias and lower reliability (e.g., primetime shows from the likes of Laura Ingraham, Sean Hannity, Jesse Watters, or Lawrence O’Donnell).↩︎
This analysis is both cross-sectional and correlational, so take it with a grain of salt.↩︎
In brief, each agent in the simulation is represented by their current belief state and their level of trust in each of the five available media outlets. In each iteration, the agents each choose one of the media outlets to tune in to. The more they trust an outlet (a function of how closely the outlet’s views align with their own), the more likely the agent is to sample from it. The media outlet produces an information sample drawn from a normal distribution around the mean of the outlet’s coverage. Using a probabilistic acceptance rule (the Metropolis-Hastings algorithm), the agent may accept the new information and update their beliefs, allowing for exploration of different belief states. The agent’s trust in each outlet is then re-calculated using the likelihood of the recent samples they received from each source given their current belief state.↩︎
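The loop just described can be sketched for a single agent as follows. This is a heavily simplified, one-dimensional toy version of the dynamics, with invented outlet positions and noise parameters; it is not the actual Perfors and Navarro (2019) model or the simulation code used above.

```python
import math
import random

random.seed(1)

# Hypothetical outlet positions on a one-dimensional ideology axis.
OUTLET_POSITIONS = {"A": -2.0, "B": -1.0, "C": 0.0, "D": 1.0, "E": 2.0}
NOISE_SD = 0.5  # spread of each outlet's coverage around its mean

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def choose_outlet(trust):
    """Sample an outlet with probability proportional to trust."""
    outlets = list(trust)
    return random.choices(outlets, weights=[trust[o] for o in outlets], k=1)[0]

def step(belief, trust):
    # 1. Pick an outlet (more trusted -> more likely to be sampled).
    outlet = choose_outlet(trust)
    # 2. The outlet emits a sample around the mean of its coverage.
    sample = random.gauss(OUTLET_POSITIONS[outlet], NOISE_SD)
    # 3. Metropolis-Hastings-style acceptance: samples close to the current
    #    belief are more likely to be accepted as the new belief.
    accept_p = min(1.0, norm_pdf(sample, belief, 1.0) / norm_pdf(belief, belief, 1.0))
    if random.random() < accept_p:
        belief = sample
    # 4. Re-score trust: likelihood of each outlet's typical output
    #    under the agent's (possibly updated) belief.
    trust = {o: norm_pdf(OUTLET_POSITIONS[o], belief, 1.0) + 1e-6
             for o in OUTLET_POSITIONS}
    return belief, trust

belief = 0.5
trust = {o: 1.0 for o in OUTLET_POSITIONS}
for _ in range(400):
    belief, trust = step(belief, trust)
print(max(trust, key=trust.get))  # the outlet this agent now trusts most
```

Even this stripped-down version exhibits the spiral: accepted samples pull the belief toward an outlet, which raises trust in that outlet, which makes it more likely to be sampled again.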