"The public domain and its citizens need to play a major role in determining the framework within which AI technology continues to develop," argues Lappin, Professor of Natural Language Processing at the School of Electronic Engineering and Computer Science.
He sets aside speculative fears about superintelligent machines as not rooted in the actual engineering realities of current AI systems. Instead, he focuses on the immediate challenges requiring policy intervention.
Professor Lappin identifies tech monopolisation as a critical concern. Large companies now dominate AI development, with tech companies creating 32 major machine learning models in 2022 while universities produced only three. This concentration of power, he argues, allows corporations to shape research priorities according to commercial interests rather than public benefit.
Environmental damage presents another urgent challenge. Training GPT-4 reportedly consumed approximately 50 gigawatt-hours of electricity, equivalent to the annual usage of thousands of American households. The manufacturing of microchips for AI systems involves toxic chemicals, vast amounts of water, and enormous quantities of electricity, with chip fabrication plants drawing up to 100 megawatts of power.
To address these challenges, Professor Lappin outlines several key policy priorities. First, comprehensive international regulation of tech companies is essential, as individual countries lack sufficient resources and enforcement powers to address these global issues. International trade agreements could provide mechanisms for imposing effective regulations.
Second, intellectual property rights must be reformed to ensure rights holders are compensated when their work is used to train AI systems. "At a minimum, these companies should be required to receive the consent of the copyright holders for the protected data that they use. In the interests of transparency, they should also be obliged to list the materials on which their systems are trained," notes Lappin.
Lappin also addresses widespread bias in AI decision-making systems across health care, hiring, and financial services. Because self-regulation by tech companies has proven ineffective, he argues that measures to combat disinformation and hate speech online must be policy-led, balancing free expression with protection from harmful content.
He argues that disinformation and deep fakes represent a tangible threat. As generative AI becomes increasingly sophisticated, distinguishing fact from fiction grows more difficult.
"We could soon find ourselves living in an environment where separating fact from malicious fiction becomes increasingly difficult. At that point, the shared beliefs needed to sustain cohesion within the public domain begin to give way to doubt, recrimination, and chaos," Lappin warns.
Finally, governments must prepare for potential widespread job displacement as AI automation extends across various sectors. Significant public investment in services and alternative forms of employment will be necessary to prevent major social disruption.
"These are not matters that we can afford to leave solely to the vicissitudes of the market, and to the tech companies that play such a dominant role in shaping that market," Lappin concludes.
"Understanding the Artificial Intelligence Revolution" also provides an accessible introduction to the history of AI, as well as a clear scientific overview of its current systems.
The original article was published on TechXplore.
The book will be launched on 16 June at Queen Mary University of London, alongside a book by Marcus Pearce.