Unlocking the Secrets of Social Bots: Research Sheds Light on AI's Role in Spreading Disinformation

Social bots are not just benign entities; they have the power to influence public opinion and even manipulate markets, according to research from Dr Mina Tajvidi, an academic at Queen Mary University of London

In a world where social media influences opinions and shapes narratives, the rise of Artificial Intelligence (AI) is both a boon and a challenge. “Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence”, published in the British Journal of Management, delves into the realm of AI-powered social bots, revealing their potential to spread misinformation and the urgent need for organisations to detect and mitigate their harmful effects.

Led by a team of esteemed researchers, the study utilises cutting-edge text mining and machine learning techniques to dissect the behaviour of social bots on X (formerly Twitter), one of the most prominent social media platforms. By analysing a dataset of 30,000 English-language tweets, the researchers uncover the intricate web of interactions between human and non-human actors, shedding light on the propagation of disinformation in the digital sphere.
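
The paper does not come with sample code, but for readers curious about what a supervised text-mining step of this kind can look like in practice, the short Python sketch below trains a toy classifier to flag bot-like tweet text. It is a hypothetical illustration using scikit-learn with invented example tweets and labels, not the authors' actual pipeline.

# A hypothetical, minimal bot-detection baseline using scikit-learn.
# The toy tweets and labels below are invented for illustration only
# (1 = bot-like, 0 = human); the published study worked with 30,000
# real English-language tweets and far richer methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "BREAKING!!! Click here to claim your prize http://spam.example",
    "Huge market crash incoming, retweet to warn everyone!!!",
    "Had a lovely walk along the canal this morning.",
    "Anyone read a good book on behavioural economics lately?",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a simple linear classifier; real systems
# would also draw on account metadata, posting cadence and network
# structure rather than tweet text alone.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Retweet NOW to expose the truth!!! http://spam.example"]))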

"Social bots are not just benign entities; they have the power to influence public opinion and even manipulate markets," says Dr Mina Tajvidi, Co-Director of MSc Marketing Programme; Lecturer in Marketing in the School of Business and Management, Queen Mary University of London. "Our research underscores the importance of understanding their intentions and detecting their presence early on to prevent the spread of false information."

The study draws on actor-network theory (ANT), which provides a theoretical framework for examining the dynamics between humans, bots, and the digital landscape. By integrating ANT with deep learning models, the researchers unveil the symbiotic relationship between actors and the language they use, offering new insights into the spread of disinformation.

"Our findings highlight the need for enhanced detection techniques and greater awareness of the role social bots play in shaping online discourse," adds Dr Mina Tajvidi. "While our research focuses on X (formally Twitter), the implications extend to all social media platforms, where the influence of AI is increasingly prevalent."

However, the study also acknowledges its limitations, including the lack of metadata and the focus on English-language tweets. The researchers emphasise the need for future studies to explore additional languages and communication modalities to provide a comprehensive understanding of social bot behaviour.

As the digital landscape continues to evolve, the research serves as a clarion call for vigilance and proactive measures to combat the spread of disinformation. By harnessing the power of AI for good and equipping organisations with the tools to detect and mitigate harmful social bots, we can pave the way for a more informed and resilient society.

To view the full research paper, “Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence”, visit: https://onlinelibrary.wiley.com/doi/full/10.1111/1467-8551.12554
