Algorithms and filter bubbles: can we escape informational confinement?
Popularized in 2011 by Eli Pariser, an American activist for a democratic internet, the concept of a filter bubble refers to the phenomenon in which the algorithms of social networks and search engines personalize the content offered to users based on their browsing and search history, their digital community, and their positive or negative interactions with content.
The initial intention is rather laudable: to guide internet users through the overwhelming abundance of information available on the web and help them find the readings, videos, or podcasts most aligned with their interests. Major platforms such as Facebook, X (formerly Twitter), LinkedIn, Instagram, TikTok, and YouTube have all developed highly effective algorithms to create an experience in which users are primarily shown information likely to please them, entertain them, enrich their knowledge, or validate their worldview, beliefs, and dislikes.
This is precisely where the filter bubble comes into play. By observing our digital behaviors, algorithms discern ever more precisely what aligns with our preferences and ideas. They then filter out what is deemed irrelevant and deliver only what is likely to meet our expectations. In other words, a user identified by the algorithm as an advocate of a specific cause will predominantly be shown content supporting that cause and, conversely, be less exposed to critical or opposing viewpoints. Ultimately, what we see is largely what we already like.
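To make the mechanism concrete, here is a deliberately minimal sketch of engagement-based filtering. It is not any platform's actual algorithm: the topics, the scoring rule, and the two-item cutoff are invented for illustration, and real systems rely on far richer signals (watch time, social graph, content embeddings). The narrowing effect, however, is the same.

```python
from collections import Counter

# Hypothetical history: topics of posts this user has already liked.
liked_topics = ["climate_skepticism", "climate_skepticism", "hunting", "cars"]

# Candidate posts the platform could show next, each tagged with a topic.
candidates = [
    {"id": 1, "topic": "climate_skepticism"},
    {"id": 2, "topic": "climate_science"},  # the opposing viewpoint
    {"id": 3, "topic": "hunting"},
    {"id": 4, "topic": "poetry"},           # an interest the user never showed
]

def score(post, history):
    """Score a post by how often the user engaged with its topic before."""
    return Counter(history)[post["topic"]]

# Rank candidates by predicted affinity and keep only the top results:
# content matching past behavior rises, everything else disappears.
feed = sorted(candidates, key=lambda p: score(p, liked_topics), reverse=True)[:2]

for post in feed:
    print(post["id"], post["topic"])
# Prints posts 1 and 3 - the opposing viewpoint (post 2) is never shown.
```

Even this crude rule, applied day after day, reproduces the confinement Pariser described: what we engaged with yesterday determines almost everything we are offered tomorrow.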
By fostering personalized informational universes, filter bubbles lead to a form of cognitive confinement: a reality almost entirely aligned with what we want to hear, read, and see. This reduced exposure to diverse and contradictory perspectives immerses individuals in a sphere where doubt and critical thinking are gradually replaced by the brain’s confirmation biases. Numerous psychological and sociological studies have demonstrated humanity’s natural tendency to prioritize information that supports its preconceived notions, received ideas, and hypotheses.
While some scientific research has qualified the extent of the confinement Eli Pariser described, filter bubbles undoubtedly exist and are shaped by search engine and social platform algorithms. In increasingly polarized and distrustful Western societies, they add another harmful layer. The clearest example is perhaps the evolution of X (formerly Twitter) since Elon Musk took over: Musk recently admitted to manipulating the platform’s algorithm to promote his own tweets and those of Donald Trump.
In October, Wall Street Journal journalists conducted an experiment to better understand X’s algorithmic recommendation system. They created 14 active profiles in the so-called “swing states” ahead of the presidential election that would decide the 47th president of the United States. The content posted by these profiles was unrelated to politics, covering diverse topics such as running, cooking recipes, and crafts.
The journalists’ findings are eye-opening. Most of the posts recommended by X’s algorithms to these 14 profiles had political content, often related to the election. Moreover, the bias was apparent: ten of the profiles were primarily shown content favoring the Republican candidate, and pro-Trump messages appeared twice as often as pro-Harris posts (1).
The central role of major tech companies like Meta, Google, Twitter, and TikTok in creating and maintaining these filter bubbles through their recommendation and personalization algorithms is undeniable. Only they can decide whether to curb or amplify the effects of algorithmic confinement. So far, however, these actors have been slow to introduce fixes despite repeated controversies, and it seems unlikely they will be more inclined in the future to make content recommendation more open and diverse. This model lies at the core of their commercial power and of their ability to target users at a granular level with advertisements and other paid content.
Should we then resign ourselves to being trapped in informational silos? The answer is a resounding no, but the solution is not technological: it lies in taking control of one’s own informational spaces. Rather than treating social media as a primary news source, as 36% of adults aged 18–29 in the United States do (2), it is crucial to expand the range of sources consulted. Turning to traditional media outlets, which remain reliable actors offering diverse perspectives, is one way to diversify.
Additionally, leaving certain platforms like X, whose informational toxicity continues to grow with fake news propagated by Musk himself and the dominance of extremist content, is another option.
This informational hygiene is essential and deserves extensive public awareness campaigns as well as inclusion in educational systems. Some notable initiatives, like those of CLEMI (the French Center for Media and Information Literacy Education), regularly engage students in understanding and deciphering the media landscape, verifying sources and information, developing an interest in current events, and building their identity as citizens. Similar approaches should be developed for social networks, particularly for younger generations, who are avid users.
(1) https://www.wsj.com/politics/elections/x-twitter-political-content-election-2024-28f2dadd?st=ASpaEA&reflink=mobilewebshare_permalink
(2) https://ajis.aaisnet.org/index.php/ajis/article/view/2867/1023