The impact of artificial intelligence on individual development and the functioning of democratic societies

AI's potential can "only be realised through a considered and evidence-based approach, which prioritises human rights and safeguards against unintended consequences," writes Daragh Murray, Senior Lecturer at Queen Mary School of Law and Fellow of the Institute for the Humanities and Social Sciences

Artificial intelligence (AI) has long moved beyond being an abstract concept. Increasingly, it’s being incorporated into decision-making processes across government and society. This includes everything from determining eligibility for social welfare and mortgage applications, to granting prisoners parole or bail, to deciding whether those in need of care are prioritised for medical intervention.

My current research aims to examine the impact of artificial intelligence on individuals’ ability to freely develop their personality and the effective functioning of a democratic society. It’s a four-year project that places particular emphasis on law enforcement, intelligence agencies, and the use of military AI applications.

How is AI shaping how we enforce our laws?

In recent years, the use of AI applications in law enforcement has grown exponentially. This includes facial recognition technology, which represents a step change in police surveillance capability and introduces the real possibility of 24/7 pervasive surveillance. Facial recognition carries significant potential for human rights harm, and before we push ahead, I believe we need more rigorous research into its potential ‘chilling effects’.

Research by the i newspaper and Liberty Investigates, based on UK Home Office data obtained under freedom of information rules, revealed a 330 per cent increase last year in the number of facial recognition searches run against the Police National Database (PND) to match suspects to crimes.

The PND holds around 20 million images, including those of large numbers of individuals who have never been charged with a crime or have had only passing contact with the police.
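
To make the mechanics of such searches concrete, the sketch below shows, in general terms, how a one-to-many facial recognition search can work: a probe image is converted to a numerical embedding and compared against every stored embedding, with anything above a similarity threshold returned as a candidate match. This is a minimal illustration of the general technique; the embedding size, threshold, and record names are assumptions, not a description of the PND's actual system.

```python
# Illustrative sketch only: a generic one-to-many face search.
# The embedding model, threshold, and database layout are assumptions,
# not a description of the Police National Database's actual software.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.6):
    """Return gallery records whose embeddings exceed the match threshold,
    ranked by similarity. A lower threshold returns more candidate matches,
    which is one reason false positives are a central policy concern."""
    scores = [(record_id, cosine_similarity(probe, emb))
              for record_id, emb in gallery.items()]
    matches = [s for s in scores if s[1] >= threshold]
    return sorted(matches, key=lambda s: s[1], reverse=True)

# Example run with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["record_42"] + 0.1 * rng.normal(size=128)  # noisy repeat image
print(search_gallery(probe, gallery)[:3])  # expect record_42 as top candidate
```

The choice of threshold matters: lowering it returns more candidate ‘matches’ and so more false positives, and error rates that vary across demographic groups are one source of the bias concerns discussed below.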

Beyond the use of the PND, facial recognition gives rise to two key concerns. First, in the UK, there is no legal framework regulating how police facial recognition is used, leaving discretion entirely in the police’s hands. This means that the police decide where and how this pervasive surveillance is used, without any public debate or parliamentary scrutiny.

The combination of live and retrospective facial recognition makes it possible to monitor, track and profile large sections of the population, with significant implications for private life. This is something that we, as a society, need to discuss.

Second, we know that facial recognition technology is often biased. Academics from Trinity College Dublin have already raised concerns about the use of facial recognition technology, warning that it tends to be biased against minority groups.

What does this mean for public liberty?

The increased surveillance power of AI can lead to so-called ‘chilling effects’: changes in behaviour and increased self-censorship caused by the fact, or fear, of surveillance.

In other words, chilling occurs when an individual alters their behaviour in response to frameworks of power – for example, out of concern that they may be subject to increased police surveillance or to the disclosure of personal information. AI’s acceleration of mass surveillance means that chilling effects can occur far more frequently, yet there is little detailed academic analysis of their distinctive impact.

We need to fill this gap for the benefit of personal, societal, and legal outcomes. It is particularly important that we study this through the lens of human rights law, and then examine how human rights protections can be balanced against any need to protect public order or public safety.

The surveillance made possible by AI can erode public trust and impair people’s ability to organise and mobilise collective action. Through my research, I have interviewed 150 people who have been subject to surveillance, and I aim to speak to a further 150. My goal is to discover whether there is a commonality of experience among these individuals.

So far, the interviews have focused on environmental protesters in the UK, as well as people affected by the Pegasus spyware, which will form a standalone case study. Interviews have been conducted in the UK, US, Uganda, and Zimbabwe, involving a range of human rights defenders and people from marginalised or otherwise impacted communities.

A key reference point for a study such as this is China and its use of AI and surveillance. The Chinese state’s facial recognition systems, fed by a network of cameras across the nation, log nearly every citizen in the country. In 2019, a database leak gave a glimpse of how pervasive China's surveillance tools are, recording more than 6.8 million entries in a single day. These were taken from cameras positioned in public locations, such as hotels, parks, tourist spots and mosques, and were found to be logging details on people as young as nine days old.

In China, facial recognition is used for deliberately repressive purposes, particularly concerning the Uyghur population. The danger is that, with no guidance or protocol in place, the UK risks reaching the point where AI restricts societal freedoms despite initial good intentions.

Looking at the bigger picture

All of this is timely, considering that the European Parliament and the Council of the EU reached a provisional agreement on the AI Act in December 2023.

The AI Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

A key component of the Act is a classification system that determines the level of risk an AI application could pose to the health and safety, or fundamental rights, of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal. AI systems classed as posing an ‘unacceptable risk’, meaning those considered a threat to people, will be banned.
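
As a simple illustration of how such a tiered framework can be represented, the sketch below encodes the four risk tiers as a small data structure. The tier names come from the Act itself; the example use cases and the one-line obligation summaries are illustrative assumptions, not the Act's legal text.

```python
# Illustrative sketch of a four-tier risk classification, mirroring the
# AI Act's tiers. The example use cases and obligation summaries are
# simplified assumptions for illustration, not the legal text of the Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of example AI uses to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name.lower()} risk ({tier.value})")
```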

By examining these issues more closely, with evidence from academic research, we can equip policymakers with a greater understanding of how AI could potentially impact people and law enforcement measures, now and in the future.

Finding the balance

AI holds immense potential to address societal challenges. However, this potential can only be realised through a considered and evidence-based approach, which prioritises human rights and safeguards against unintended consequences.

There is a delicate balance between technological advancement and societal wellbeing. Rushing forward with AI implementation, without considering the potential pitfalls, risks profound societal harm. This can trigger a backlash against the positive application of AI systems, and a breakdown in public trust.

Once mistrust develops in society, it can undermine the opportunities that AI offers us. We must assess the potential for harm and put in place an appropriate regulatory framework to make sure that AI helps to advance, rather than undermine, societal progress.
