
Artificial Intelligence and Ethics

In 1955, the term “artificial intelligence” was coined in a proposal for a study submitted by a team of researchers from Dartmouth College, Harvard University, IBM and Bell Telephone Laboratories. In the United States, the 1956 workshop that grew out of that proposal is understood to mark the birth of the field of AI, although in the UK it is accepted that the seed was planted by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”.


Sixty-six years later, Artificial Intelligence is ubiquitous, and many aspects of people’s lives are influenced by decisions made by, or informed by analysis from, AI systems. Because the term AI is so widely used, continuing to use it ensures a shared understanding when referring to such systems; yet in a fast-paced informational reality, where many readers do not venture beyond the title or the surface of an article, the term can become misleading.

It is therefore necessary to clarify a definition of Artificial Intelligence. Here it refers to systems that have at their core algorithms able to improve their performance and predictive characteristics with data, which makes “machine learning systems” a more accurate label than AI. With this caveat, “AI” can be used, while acknowledging the suggestion that such systems are neither particularly intelligent nor especially artificial.
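To make this definition concrete, the following is a minimal illustrative sketch (not part of the original article; Python with NumPy is assumed): a simple least-squares model whose error on held-out data typically shrinks as it is given more data, which is the defining property of the machine learning systems described above.

```python
# Minimal sketch: a system that improves its predictive performance with data.
# A straight line is fitted to noisy observations by ordinary least squares;
# its error on held-out data generally falls as the training set grows.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 10, n)
    y = 2.0 * x + 1.0 + rng.normal(0, 1.0, n)  # true relation plus noise
    return x, y

x_test, y_test = make_data(200)  # held-out data for evaluation

for n_train in (5, 50, 500):
    x, y = make_data(n_train)
    a, b = np.polyfit(x, y, deg=1)  # fit y ≈ a*x + b
    mse = np.mean((a * x_test + b - y_test) ** 2)
    print(f"trained on {n_train:4d} points -> held-out error {mse:.3f}")
```

Nothing “intelligent” happens here, which is part of the point: the system simply adjusts two numbers to fit the data, yet this is the core mechanism behind much of what is labelled AI.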

Algorithmic systems are currently used for a vast array of activities and decisions, from the simple virtual assistants found on websites, to life-changing decisions on prisoners’ parole and sentencing, through social security determinations, surveillance and farming. The alleged predictive capabilities of AI mean that many tasks and decisions are left to the algorithm, creating situations where the resulting decisions are difficult to challenge, for several reasons.

From a socio-psychological point of view, decision makers, such as the human in the loop, tend to find it problematic to depart from the choice the AI proposes. From a technological point of view, even those who program these systems often do not fully understand how or why a particular choice is made.

The characteristics of algorithmic systems, and the continuous growth of their use in activities that affect human lives, raise the question of how to regulate them properly. Because those creating them often cannot explain the internal workings of an algorithmic system, some argue that it is better to leave them alone, since regulation may preclude their development. This argument is problematic, as it implies that the lives of those who suffer due to the use of AI are somehow a means to an end.

To take an oversimplified example: should an untested system be allowed to make decisions on an aircraft, with the risk of bringing down a plane full of passengers, for the sake of technological development? This may sound purely theoretical, but it is not: in the case of Boeing’s 737 MAX aircraft, it actually happened.

Some believe AI development should be paused until a proper set of rules, legal or otherwise, is in place to clarify the limits that algorithmic systems may reach, and the responsibility of those who, within or beyond those limits, cause harm to others. However, this proposition could halt the development of systems that represent genuine advances, fostering human development and protecting people’s health. As the planet goes through a pandemic that has cost trillions of pounds and millions of lives, there is a strong argument that both the economic and the human cost would have been far greater without the help of algorithmic systems in social, economic and policy decision-making, alongside virus analysis and vaccine development.

One of the problems in the discussion about AI lies in the conceptual difference between development and use. It should be clear that, from a scientific point of view, the development of AI systems needs to be encouraged and, if needed, legal safeguards put in place to protect those developing them. However, the use of AI in real-life scenarios, which understandably fosters development through corporate funding and continuous testing, needs to be closely monitored and controlled, which takes us back to the fatal crashes involving Boeing’s 737 MAX aircraft.

The key lies in finding the balance between developing the technological tools that allow genuine human development in a sustainable and inclusive manner, and protecting the rights of those affected by those technologies. In the process, it is necessary to leave aside the techno-solutionism that preaches that technology is the solution to any problem, even to problems we do not know we have, and that technological development is therefore always good in itself.

It is equally essential to put aside the technophobic belief that technological interventions tend to create more problems than they solve, a belief that fails to acknowledge the growth the world economy has seen in the last few centuries, the increase in life expectancy across the globe, and the possibility that solutions to the world’s severe environmental problems may be found in technologies currently under development, such as AI.

To add a few more layers of complexity to the conundrum: even if humans decided that AI needs to be regulated, how would that be done? Based on risk or on principles? Who would regulate it: a multi-stakeholder entity, national authorities or international organisations? Are we talking about soft or hard law? Internationally harmonised or uniform?

Understanding that the current pandemic and the impending global environmental catastrophe are the most urgent matters for humanity to consider, in our global seminar we will try to answer some of the questions that AI presents. Artificial Intelligence is an imperative global policy issue, as it may hold the key to enhancing the resilience of individuals and communities and the sustainability of the planet.

About the author

Fernando Barrio, SCL Trustee and Senior Lecturer in Business Law, School of Business and Management, Queen Mary University of London and Academic Lead for Resilience and Sustainability, Queen Mary Global Policy Institute

 
