AI Chatbots Mimicking Social Media Reveal Emergence of Extremist Groupings

A team of researchers built a scaled-down social media platform populated entirely by AI chatbots powered by OpenAI's GPT-4o mini model, each given a distinct persona reflecting the demographic and ideological distributions found in US national election data. Strikingly, the bots began forming extremist clusters, mirroring the polarization commonly seen in human online interactions.

The Experimental Setup with Over 500 Personalized Chatbots

The platform hosted more than 500 AI chatbots, each assigned attributes such as age, gender, income, education, political affiliation, ideology, religion, and interests. Because the platform had no recommendation engine and no paid promotion, the researchers could observe social dynamics as they emerged from the agents themselves. The bots were free to post, follow other accounts, and repost content, reproducing the core interactions of a real social network.
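To make the setup concrete, here is a minimal sketch of how such an agent-based simulation could be wired together. The persona fields, the decide() stub that stands in for a call to the language model, and the bootstrap rule for empty feeds are all illustrative assumptions, not the researchers' published code.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Persona:
    # Illustrative attribute set; the study assigned richer demographics.
    age: int
    gender: str
    ideology: float                      # -1.0 (left) .. +1.0 (right)
    interests: list = field(default_factory=list)

@dataclass
class Agent:
    uid: int
    persona: Persona
    following: set = field(default_factory=set)

def decide(agent, feed):
    """Stub standing in for the LLM call: in the real system, the persona
    and the agent's chronological feed would be rendered into a prompt and
    the model's reply parsed into an action (post / repost / follow)."""
    if feed and random.random() < 0.5:
        author, text = random.choice(feed)
        return ("follow", author) if random.random() < 0.5 else ("repost", text)
    return ("post", f"take from ideology {agent.persona.ideology:+.2f}")

def simulate(n_agents=500, steps=10):
    agents = [
        Agent(i, Persona(age=random.randint(18, 80),
                         gender=random.choice(["f", "m"]),
                         ideology=random.uniform(-1, 1)))
        for i in range(n_agents)
    ]
    timeline = []                        # (author_uid, text), newest last
    for _ in range(steps):
        for agent in random.sample(agents, k=len(agents)):
            # No recommender: agents see only the most recent posts from
            # accounts they follow.
            feed = [(a, t) for a, t in timeline[-100:]
                    if a in agent.following and a != agent.uid]
            if not feed:                 # bootstrap assumption for new accounts
                feed = [(a, t) for a, t in timeline[-20:] if a != agent.uid]
            action, payload = decide(agent, feed)
            if action in ("post", "repost"):
                timeline.append((agent.uid, str(payload)))
            else:                        # "follow"
                agent.following.add(payload)
    return timeline

if __name__ == "__main__":
    posts = simulate()
    print(f"{len(posts)} posts generated")
```

Even this toy loop highlights the key design choice: content spreads only through the follow graph, so any clustering that appears is produced by the agents' own choices rather than by a ranking system.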

Results Mirror Real-World Online Behavior

The bots gravitated toward like-minded peers, forming tight-knit echo chambers and generating highly partisan content. Controversial and extreme posts attracted the most followers and reposts, amplifying tribal divisions. Within a short time, a small faction of extreme accounts came to dominate the conversation, accumulating outsized followings much as influencer accounts do on real platforms.

Attempts to Mitigate Extremism Proved Ineffective

The research team tested strategies for blunting extremist influence, such as hiding follower counts, down-ranking viral content, and limiting how many trending posts surface. None of these measures significantly weakened the dominant extremist groups, suggesting that the structure of the social network itself, rather than ranking algorithms alone, plays the critical role.
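As a rough illustration of where such interventions sit in a feed pipeline, the sketch below exposes each mitigation as a toggle on a ranking function. The flag names, weights, and scoring formula are illustrative assumptions, not the parameters tested in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Post:
    text: str
    reposts: int
    author_followers: int

def render_feed(posts: List[Post],
                hide_follower_counts: bool = False,
                demote_viral: bool = False,
                trending_cap: Optional[int] = None) -> List[str]:
    """Rank posts by an engagement score, applying whichever
    interventions are toggled on."""
    def score(p: Post) -> float:
        s = float(p.reposts)
        if not hide_follower_counts:
            # Visible follower counts let popularity compound.
            s += 0.1 * p.author_followers
        if demote_viral and p.reposts > 100:
            s *= 0.5                     # assumed down-ranking factor
        return s

    ranked = sorted(posts, key=score, reverse=True)
    if trending_cap is not None:
        ranked = ranked[:trending_cap]   # limit how many trending posts surface
    return [p.text for p in ranked]

# Example: all three mitigations enabled at once.
feed = render_feed(
    [Post("extreme take", reposts=500, author_followers=10_000),
     Post("moderate take", reposts=20, author_followers=300)],
    hide_follower_counts=True, demote_viral=True, trending_cap=10,
)
```

In the study, combinations of measures like these shifted the outcome only marginally, which is what points to the structure of the follow network, rather than the ranking layer, as the driver of extremist dominance.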

Implications for Understanding Social Media Challenges

This study challenges the common view that recommendation algorithms are solely to blame for social media polarization. Instead, it points to an inherent tendency of social systems to cluster around shared beliefs and emotionally engaging content, even in the absence of algorithmic curation. The findings carry broad implications for the design and governance of platforms that aim to foster healthier discourse.

Past and Parallel Studies

The lead researcher, Petter Törnberg of the University of Amsterdam, previously conducted a similar study in 2023, using 500 AI chatbots powered by GPT-3.5 to simulate news reading and discussion in an online social environment. Separately, Facebook (now Meta) ran experiments in 2020 with AI bots on a simulated version of its platform to study the spread of toxic content.