
The Feedback Loop Between Our Minds, Social Media and AI Truth

  • Writer: Tejas Singh
  • Oct 4, 2025
  • 6 min read

Updated: Dec 19, 2025


The Bias We're Born With

We are all born with quirks that make us inherently biased and that often – knowingly or unknowingly – direct our actions. Daniel Kahneman, co-editor of Judgment Under Uncertainty, spent a career arguing against the idea of a purely logical, rational mind: “We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are.” Today, as we put the onus on technology to play that objective observer, we are likely giving in to a collective delusion. I witnessed mine at play last Friday.

Sometime after watching an episode of Shark Tank India, I was scrolling through Instagram. Unsurprisingly, the first suggested post on my feed was a Shark Tank reel – but the American one, with Mark Cuban in it. I didn’t pay much heed to it then, but later I stumbled upon an excerpt of the British adaptation, Dragons’ Den, on YouTube. Shark Tank, it turns out, wasn’t the first in the list of worldwide adaptations of the format. The earliest version goes back to Japan in 2001 – Tigers of Money. Until then, I had believed the origin of these adaptations traced back to the West, much like others in the past – for instance, Kaun Banega Crorepati, the Indian adaptation of Who Wants to Be a Millionaire?. One can reasonably argue that the show’s low viewership ratings led to its oblivion, but the lack of even a near-credible legacy tells a different story.


[Infographic: a world map with a timeline of the format’s adaptations – 2001 Japan ‘Tigers of Money,’ then UK ‘Dragons’ Den,’ then USA ‘Shark Tank,’ then India ‘Shark Tank India.’]

Why We Misremember: The Glamour Bias

We often gravitate towards what’s glamorous and aspirational. Young Indian consumers, in particular, are drawn to international brands and Western media narratives because they allude to modernity, affluence and global belonging. We rely on familial or peer recommendations and reinforce these word-of-mouth decisions when foreignness is seen as conferring higher status or better value. This perception often drives the popularity of certain media content over others and plays a significant role in which country becomes more prominent in memory¹. The situation gets exacerbated when the misattributed country is also socio-economically advanced, more technologically capable and – in reality – better in production quality. That leads to an unfavorable image of the actual country of origin, either by discrediting the source or through poor-quality generalizations. Suddenly it’s too late to just be self-aware; the internet is smarter than our smartphones.


How Algorithms Reinforce Misconceptions

When Eli Pariser coined “filter bubble”² to describe a self-referential social media environment, Instagram still had years to go before becoming as personalized as it is today. Although personalization and carefully curated feeds offer indisputable ease, they also raise concerns about the homogenization of online culture. Misattributed associations get picked up by social media algorithms because they are the sum total of the content we interact with most frequently. Suppose an influencer or someone you follow online recommends the Western original after consuming an Indian adaptation. Algorithms pick up on such content and push the Western adaptation as the “related” or “more popular” version. Hashtags also play an important role: if a Shark Tank India reel is tagged #SharkTank, algorithms will classify it alongside similar global content. Even adding Western audio to an Indian reel can help the algorithm promote it as global content. Consequently, the algorithm groups these together, nudging users toward the Western adaptation for “authenticity” or “higher quality”. What starts off as sensationalist content becomes a pseudo-fact in online discourse.
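
To make that grouping mechanism concrete, here is a minimal sketch in Python – with invented posts, invented engagement numbers and a deliberately naive engagement-weighted ranking, not any platform’s actual algorithm – of how hashtag overlap plus raw popularity can surface the Western version as the “related” recommendation:

```python
# Toy illustration only: invented posts and a naive ranking rule,
# not any real platform's recommendation system.

posts = [
    {"title": "Shark Tank India pitch", "tags": {"SharkTank", "SharkTankIndia"}, "engagement": 40_000},
    {"title": "Shark Tank US pitch (Mark Cuban)", "tags": {"SharkTank", "ABC"}, "engagement": 900_000},
    {"title": "Dragons' Den UK pitch", "tags": {"DragonsDen"}, "engagement": 300_000},
    {"title": "Tigers of Money clip (2001, Japan)", "tags": {"TigersOfMoney"}, "engagement": 5_000},
]

def related_posts(seed, catalogue):
    """Group posts sharing at least one hashtag with the seed,
    then rank that pool by raw engagement (views, likes, shares)."""
    pool = [p for p in catalogue if p is not seed and p["tags"] & seed["tags"]]
    return sorted(pool, key=lambda p: p["engagement"], reverse=True)

seed = posts[0]  # the user just watched a Shark Tank India reel
for post in related_posts(seed, posts):
    print(post["title"], post["engagement"])

# The US version tops the list purely on engagement, while the Japanese
# original – tagged differently and watched far less – never enters the pool.
```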

User data – browsing history, preferences, search requests, social media interactions – is continually fed back into the algorithms to refresh their recommendations. Since Western adaptations often have higher engagement – more views, shares and comments – and are tagged with globally recognized labels (e.g., #Hollywood, #Netflix), algorithms amplify the original Western content to users who have engaged with similar Indian adaptations, reinforcing the perception that the Western version is superior or more prestigious. The abundance of this content then becomes new training data for the algorithms, ultimately “teaching” the system that these are the dominant or “objectively” correct facts³. This loop can shape and even distort our choices: by some estimates, about 72% of consumer decisions are influenced by such algorithmically customized content⁴.
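
A toy simulation makes the loop’s compounding effect visible. This is a minimal sketch with invented starting numbers and a deliberately simplified “recommend whatever already leads on engagement” rule, not a model of any real recommender:

```python
import random

# Minimal sketch of the engagement feedback loop described above: whichever
# version has more accumulated engagement gets recommended more, and every
# new interaction is fed straight back into the next round of ranking.
# All numbers are invented for illustration.

engagement = {"Shark Tank (US)": 100, "Shark Tank India": 80}
EXPLORE = 0.1   # small chance of showing something other than the current leader

random.seed(0)
for _ in range(10_000):
    leader = max(engagement, key=engagement.get)
    other = next(k for k in engagement if k != leader)
    shown = other if random.random() < EXPLORE else leader
    engagement[shown] += 1            # the shown item collects the new interaction

print(engagement)
# The version that started slightly ahead ends up overwhelmingly ahead:
# the loop amplifies whichever content led at the start, not whichever is "correct".
```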


When Data Becomes Doctrine

These effects get amplified as new participants enter the loop. Large Language Models (LLMs) are trained on vast internet corpora, web pages and social media. They are much faster and more efficient than we are, but also only artificially intelligent. Because LLMs do not perform true factual verification and are instead products of statistical correlation, they too can mirror the social proof of influencer-driven bias. They can reproduce patterns of biased knowledge learned from social media data and misattribute a piece of media’s origin, prestige, or even authenticity⁵. This means that if social media circles echo a dominant narrative of a Western adaptation as the original source, LLMs will frequently output this as fact. As users query LLMs for facts, the answers reflect the prevailing associations, compounding the misunderstanding.
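
As a caricature of that point, consider a “model” that answers questions by counting which claim dominates its training text. The corpus below is invented, and real LLMs learn far richer correlations than raw claim counts, but the failure mode – majority narrative over verified fact – is the same:

```python
from collections import Counter

# Toy stand-in for "statistical correlation, not verification": answer with
# whichever claim appears most often in the (invented) training snippets.

corpus = [
    "Shark Tank is the original; every other country copied the US show.",
    "Shark Tank started it all, everything else is an adaptation.",
    "Shark Tank India is based on the American Shark Tank.",
    "The format actually began in Japan in 2001 as Tigers of Money.",
]

claims = Counter(
    "originated in Japan" if "Japan" in text else "originated in the US"
    for text in corpus
)

answer, _ = claims.most_common(1)[0]
print("Q: Where did the format originate?  A:", answer)
# Prints the majority narrative ("originated in the US"), not the fact.
```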

The central problem here isn’t AI or social media or technology or faulty correlations. They don’t misrepresent the world or its people in any sense; in fact, they’re not supposed to represent the world at all. Nor is the knowledge of Shark Tank’s true origin likely to be of relevance to most of the population. The problem is an evolution of technology that is devoid of social awareness and ethical consideration: recruitment algorithms penalizing women’s resumes, healthcare systems denying admission to people with “foreign-sounding names”, or detection bias in online hate-speech moderation⁶. In such cases, logical reasoning and dispassionate mathematics lead to perspectives that contribute to a dehumanized outlook in a supposedly smart society.

A recent LLM-driven simulation of the echo-chamber effect mimicked how distinct opinions evolve online⁷. Researchers designed AI agents, each representing a social media user, and evolved the agents’ opinions over the course of the simulation through directed prompts. Leading LLMs (ChatGPT, Gemini, Llama, etc.) were then compared on metrics of polarization, real-world opinion evolution and heuristic nuance. The results were fascinating but cautionary. First, initial conditions matter: the starting point of an opinion strongly affects how much a bias gets amplified – for example, it matters whether I say Shark Tank originated in the US or Mark Cuban says it. Second, LLMs exaggerate polarization and can over-amplify their inherent biases. This means you’re no longer exhibiting a bias in a silo; your activities online alone are enough to amplify its effects on a global level.
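
The study itself used prompted LLM agents, which is hard to reproduce in a few lines. As a simplified stand-in, the sketch below uses a classic bounded-confidence opinion model: agents only update towards opinions already close to their own, which is the echo-chamber mechanism in miniature. The parameters and the clustering rule are invented for illustration and are not taken from the paper:

```python
import random

# Simplified stand-in for the LLM-agent study described above: a classic
# bounded-confidence (Deffuant-style) opinion model, not the paper's setup.
# Agents only move toward opinions already within their tolerance band,
# which reproduces the echo-chamber effect in miniature.

random.seed(1)
N, STEPS = 100, 20_000
TOLERANCE, RATE = 0.2, 0.3                      # listen only within 0.2; move 30% closer
opinions = [random.uniform(-1, 1) for _ in range(N)]   # opinion spectrum from -1 to +1

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)
    if abs(opinions[i] - opinions[j]) < TOLERANCE:      # inside the bubble: mutual influence
        shift = RATE * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

# Count how many distinct opinion clusters survive (rounded to one decimal).
clusters = sorted({round(o, 1) for o in opinions})
print("surviving opinion clusters:", clusters)
# With a narrow tolerance, agents settle into a few disconnected clusters
# instead of converging on a shared view - the simulated echo chamber.
```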


What Ethical AI Should Look Like

Paul Ernest, Emeritus Professor of Philosophy of Mathematics Education at Exeter University, has suggested a helpful ethical audit to correct some of these biases. The audit begins by questioning when mathematics is overvalued in society and assessing who may be socially excluded or harmed by the application of a model. Luckily, informed by such simulations, we can also take the directive actions Ernest’s audit suggests. For instance, rather than showing users more diverse content indiscriminately (which can backfire for highly polarized users), platforms should tailor recommendations: promoting diversity most to “intermediate” users, and sometimes diverting highly polarized users towards less contentious topics. Promoting posts by users in the lower-influence tier, or forming temporary, balanced influence groups, could soften the dominance of high-profile figures, reduce the intimidation factor and break the cycle of reinforcing polarization. An unobtrusive, platform-level aid could also notify users when they are engaging with overly polarized or misrepresentative content and provide accurate data or context.
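
A rough sketch of that tiered logic might look like the following. The polarization score, thresholds and knob names are all invented for illustration; they are not taken from Ernest’s audit or from any platform’s actual policy:

```python
# Minimal sketch of the tiered intervention described above. The thresholds,
# field names and scoring rule are hypothetical, used only to illustrate the idea.

def recommendation_policy(polarization: float) -> dict:
    """polarization in [0, 1], assumed to be estimated from a user's recent engagement."""
    if polarization < 0.3:        # already balanced: leave the feed mostly as-is
        return {"diversity_boost": 0.1, "neutral_topic_share": 0.0}
    if polarization < 0.7:        # the "intermediate" tier: push diverse sources hardest here
        return {"diversity_boost": 0.5, "neutral_topic_share": 0.1}
    # highly polarized: indiscriminate diversity can backfire, so divert instead
    return {"diversity_boost": 0.1, "neutral_topic_share": 0.4}

for p in (0.1, 0.5, 0.9):
    print(p, recommendation_policy(p))
```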

Beyond explicit applications, Ernest urges us to examine the hidden, less visible ways these models can contribute to neoliberal policies and surveillance, and how they performatively shape social institutions, practices and power relations. Most importantly, he stresses the need for clarity, reproducibility and openness about the limitations and uncertainties of models, to aid ethical scrutiny and public accountability. Our biases are not only mirrored but can be codified as global knowledge, unless transparency, change and diversity are prioritized at every step – from personal consumption habits to AI development practices.

References:

  1. Country of Origin Effect and Perception of Indian Consumers (Sanchita Ghosh, Saroj Kumar Datta, 2019)

  2. The Filter Bubble: What the Internet Is Hiding from You (Eli Pariser, 2011)

  3. People See More of Their Biases in Algorithms (Begum Celiktutan, Romain Cadario, Carey K. Morewedge, 2024)

  4. The Influence of Social Media Algorithms on Consumer Buying Behaviour (Dr. S. Prabha Arockia Joans, Dr. R. Marie Sheila, 2025)

  5. Generative language models exhibit social identity biases (Tiancheng Hu, Yara Kyrychenko, Steve Rathje, Nigel Collier, Sander van der Linden & Jon Roozenbeek, 2025)

  6. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms (Nicol Turner Lee, Paul Resnick, Genie Barton, Brookings 2019)

  7. Large Language Model Driven Agents for Simulating Echo Chamber Formation (Chenhao Gu, Ling Luo, Zainab Razia Zaidi, Shanika Karunasekera, 2025)

