Data-Centric Engineering

Electronic Engineering & Computer Science

Below you will find Data-Centric Engineering projects offered by supervisors within the School of Electronic Engineering & Computer Science.

This is not an exhaustive list. If you have your own research idea, or if you are a prospective PhD candidate, please return to the main DCE Research webpage for further guidance, or contact us at dce-cdt@qmul.ac.uk.

Data-Driven Intelligent Audio Technologies

Many audio technologies require expert users in order to be properly applied. Dynamic range compression, room correction, acoustic feedback prevention and similar tasks all rely on established signal processing techniques with known limitations: they either need manual fine-tuning or are usable only in limited situations. Modern machine learning techniques may hold the key to applying parameters automatically and intelligently; neural networks may be trained to optimise settings for different conditions.
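To make this concrete, here is a minimal sketch (in PyTorch, on synthetic data) of the kind of model involved: a small network trained to map simple loudness features of a signal to dynamic range compressor settings. The feature set, the targets and the labelling rule are all invented for illustration; a real system would learn from expert-labelled material.

```python
# A minimal sketch: learn a mapping from loudness features to compressor settings.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data: 512 examples of [RMS level (dB), crest factor (dB)].
features = torch.rand(512, 2) * torch.tensor([40.0, 20.0]) - torch.tensor([50.0, 0.0])

# Hypothetical "expert" targets: quieter, peakier material gets a lower
# threshold and a gentler ratio (a made-up rule, just to have labels).
threshold = features[:, 0] - 0.5 * features[:, 1]
ratio = 2.0 + 0.1 * features[:, 1]
targets = torch.stack([threshold, ratio], dim=1)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimiser.step()

print("final loss:", loss.item())
```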

The project is concerned with investigating, implementing and evaluating intelligent systems that render audio content towards a target. It will explore new approaches to challenging problems, with wide application to industry and potential end users.

The first stage of work involves a preparatory literature study, defining use cases and developing an initial prototype. The next stage will involve working with, and learning from, existing data sets and experimental measurements. The final stage is concerned with objective and subjective evaluation, as well as the development of metrics and the automated correction of system parameters based on subjective evaluation.

The project can be shaped to the researcher’s interests and expertise. It is expected that the research could yield high impact publications and have application in wider machine learning contexts.

Supervisor: Prof Josh Reiss 

Data-Centric Engineering for prediction and analysis of software-controlled radio access networks

New technologies such as Open Radio Access Networks (RAN) have brought new possibilities to the management of cellular networks. BT has considerable investment in this area and access to unique data sets, and is interested in the management and prediction of traffic from the RAN to its main networks. BT is currently beginning a large-scale AI-based project (AIMM), which the student would work alongside for the first years of their study.
https://www.celticnext.eu/project-aimm/

This is an excellent opportunity to work with state-of-the-art technology and large-scale, real-life data sets on an important engineering problem in telecommunications. In this project the student would use traditional statistical techniques to analyse the network data, learning both classical time series methods and newer AI-based techniques and comparing them for prediction and management in networks.
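As an illustration of the comparison involved, the sketch below (on synthetic data, not BT's) pits a classical ARIMA forecast of hourly cell traffic against a seasonal-naive baseline; the AI-based models studied in the project would be benchmarked in the same way.

```python
# A minimal sketch: classical time series forecast vs. a naive seasonal baseline.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)  # two weeks of hourly samples

# Diurnal traffic pattern plus noise, as a stand-in for real RAN load.
traffic = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

train, test = traffic[:-24], traffic[-24:]

arima = ARIMA(train, order=(2, 0, 1)).fit()
arima_pred = arima.forecast(steps=24)
seasonal_naive = train[-24:]  # "same hour yesterday" baseline

print("ARIMA MAE:", np.mean(np.abs(arima_pred - test)))
print("Naive MAE:", np.mean(np.abs(seasonal_naive - test)))
```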

Supervisors: Dr Richard Clegg, Dr Mona Jaber

Composition-aware music recommendation systems for music production

Music recommender systems (MRSs) have to date primarily focused on song or playlist recommendation for listening purposes, but much less work has been done on the recommendation of audio content for music production. Contemporary music production tools commonly include large digital audio libraries of loops (repeatable musical parts typically used in grid-based music), virtual instrument samples (based on synthesis or recordings) and sound packs (collections of sounds). Such richness of content can, however, impede creativity: it is daunting for musicians to navigate tens of thousands of sounds to find items matching the style and intent of their production.

This project will research, develop and assess composition-aware music recommendation systems for music production, enabling musicians to get the best musical value out of their creative digital audio libraries.

Computer music makers often produce music by combining different instrumental parts (e.g. drums, bass, lead, vocals). One of the challenges will be to provide new audio items that can be meaningfully mixed with the other audio elements in a composition. The project will investigate methods to assess the musical compatibility between a set of audio items given constraints such as a composer’s musical taste and creative style.

Different music recommendation paradigms and AI techniques will be researched in order to minimise the impact of cold-start and sparsity issues, and to maximise the interpretability of the results. The project will compare content-based filtering (CBF) using audio or metadata (e.g. “analog dirty bass”, “crisp bright lead”, etc.), collaborative filtering (CF) based on user-item interactions, and hybrid models. Graph-based solutions for composition-aware recommendation will be investigated, e.g. by developing multi-layer structures that take into account user preferences and the musical compatibility of audio items. The project will also research deep learning techniques for (i) automatic feature learning from audio (e.g. to model perceptual distances between sounds to find sound-alikes), (ii) modelling the musical compatibility of audio items, and (iii) extracting latent factors from user-item interactions.
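As a minimal sketch of the content-based filtering ingredient (with an invented catalogue: the feature embeddings, tempo and key metadata below are all synthetic stand-ins), the snippet ranks loops by audio-feature similarity to a query loop while filtering on a crude tempo/key compatibility rule.

```python
# A minimal sketch: content-based loop retrieval with a compatibility filter.
import numpy as np

rng = np.random.default_rng(0)
n_items = 1000
embeddings = rng.normal(size=(n_items, 32))  # assumed learned audio features
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
tempos = rng.integers(80, 160, size=n_items)  # BPM metadata
keys = rng.integers(0, 12, size=n_items)      # pitch class of the loop's key

def recommend(query: int, k: int = 5) -> np.ndarray:
    """Rank items by cosine similarity, keeping only tempo/key-compatible ones."""
    compatible = (np.abs(tempos - tempos[query]) <= 5) & (keys == keys[query])
    compatible[query] = False  # do not recommend the query itself
    scores = embeddings @ embeddings[query]
    scores[~compatible] = -np.inf
    return np.argsort(scores)[::-1][:k]

print(recommend(query=0))
```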

Evaluations of the models will be conducted taking into account how they support creative agency and provide interpretable recommendations to the user.

The scholar will have access to data and software provided by Focusrite, including a catalogue of over 10k WAV loops with metadata, a collection of about 30k synth patches (with about 200 parameters), and musical compatibility indicators. The recommendation models will be tested on Ampify iOS music apps such as Launchpad (for making and remixing music) and Blocs Wave (for making and recording music) (https://ampifymusic.com/).

Supervisor: Dr Mathieu Barthet

Data-Centric Engineering methodology to enable emerging wireless technologies

This project aims to develop novel data-centric methodology and traceable techniques, informed by machine learning and real data capture, to enable emerging wireless technologies such as the Internet of Things (IoT) and fifth-generation (5G) and sixth-generation (6G) mobile networks. The aim is to improve measurement efficiency and uncertainties across systems, environments and exposures, and to provide metrological support for activities related to their standardisation.

The rollout of 5G/6G networks and large-scale deployments of cellular IoT will lead to fundamental changes in our society, impacting not only consumer services but also industries embarking on digital transformations. While their standardisation and definition processes are ongoing, a key challenge is the lack of accurate, fast, low-cost and traceable methods for the verification of new radio (NR) high-volume products. This is mainly due to the limited adoption of data science and machine learning in this field, where automation could be applied through data-centric processes and engineering able to adapt to changing environments and requirements.

The main objective will be to conduct extensive data gathering on intelligent antennas, radio channels and over-the-air test beds, and finally to create accurate and adaptive radio link models informed by traceable, data-centric learning algorithms.
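As a small illustration of the modelling step (on synthetic measurements), the sketch below fits the classical log-distance path-loss model PL(d) = PL0 + 10·n·log10(d/d0) to noisy link data; the project would replace this simple parametric form with adaptive, data-centric models learned from real captures.

```python
# A minimal sketch: fit a log-distance path-loss model to noisy measurements.
import numpy as np

rng = np.random.default_rng(0)
d0 = 1.0                       # reference distance (m)
true_pl0, true_n = 40.0, 3.2   # assumed reference loss (dB) and exponent

distances = rng.uniform(1, 200, size=500)
shadowing = rng.normal(0, 4, size=500)  # shadow fading (dB)
path_loss = true_pl0 + 10 * true_n * np.log10(distances / d0) + shadowing

# Least-squares estimate of [PL0, n] from the measurements.
X = np.column_stack([np.ones_like(distances), 10 * np.log10(distances / d0)])
pl0_hat, n_hat = np.linalg.lstsq(X, path_loss, rcond=None)[0]
print(f"estimated PL0 = {pl0_hat:.1f} dB, exponent n = {n_hat:.2f}")
```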

Supervisor: Dr Akram Alomainy

Virtual reality-based human-robot remote control interfaces

Remote sensing and robot control are beneficial in many industrial and societal applications. A significant amount of data is collected during remote robotic inspection tasks. These data must be analysed either by a human operator during the inspection itself or offline by stakeholders after the task is complete. In this project, novel data-centric methodologies for sensor fusion, visualisation and analysis will be developed and validated for virtual reality-based human-robot remote control interfaces. The control interface will be designed to operate a remotely located robotic system equipped with various on-board sensors (cameras, tactile elements, laser scanners, microphones, etc.). Efficient real-time visualisation and analysis of the collected sensor data, with the help of machine learning, will provide the human operator with a safe and performant robot operation interface. The project will be based in the Centre for Advanced Robotics at QMUL, and experiments will be performed in the qArena interactive robotics environment (funded by EPSRC), which comprises a stereo VR projection, a bimanual haptic interface and a motion tracking system.
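As a toy illustration of the sensor fusion ingredient (with invented noise levels), the sketch below fuses two noisy range estimates, e.g. from a laser scanner and a camera, by inverse-variance weighting, the simplest precursor to the Kalman-style fusion such an interface might use.

```python
# A minimal sketch: inverse-variance fusion of two noisy range sensors.
import numpy as np

rng = np.random.default_rng(0)
true_distance = 2.5  # metres to an obstacle (toy ground truth)

laser = true_distance + rng.normal(0, 0.02, 100)   # low-noise sensor
camera = true_distance + rng.normal(0, 0.15, 100)  # high-noise sensor

# Weight each sensor by the inverse of its estimated variance.
var_l, var_c = laser.var(), camera.var()
w_l = (1 / var_l) / (1 / var_l + 1 / var_c)
fused = w_l * laser + (1 - w_l) * camera

for name, est in [("laser", laser), ("camera", camera), ("fused", fused)]:
    print(f"{name:6s} mean={est.mean():.3f} std={est.std():.3f}")
```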

Supervisor: Dr Ildar Farkhatdinov

Smart data approach to improved product safety assessment

Every year there are products available in the UK market that pose a serious risk to the health and safety of consumers. Product risk assessment is the overall process of determining whether a product, which could be anything from a washing machine to a teddy bear, is judged safe for consumers to use.

There are several methods of product risk assessment, including RAPEX, the primary method used by product safety regulators in the UK and EU. However, despite its widespread use, the Office for Product Safety and Standards (OPSS), the UK product safety regulator, has identified several limitations of RAPEX that it feels could be addressed by exploiting state-of-the-art ‘smart data’ methods. In this project OPSS will provide in-kind contributions, including at least access to data sets and to policy agendas, that will help address the identified limitations of RAPEX. These include societal/population risk, hazard exposure and risk tolerability, since RAPEX does not consider these critical factors when estimating product risk.

It is proposed to use Bayesian networks (combining data and causal knowledge) to produce a new method of product risk assessment that addresses these factors. This project will combine expertise in computer science, statistics, psychology and risk assessment at QMUL (Risk and Information Management Group) and OPSS, and will use research methods such as case studies. It will contribute to the limited literature on quantitative product risk assessment and provide a causal, systematic method and tool for product risk assessment that effectively estimates and communicates product risk and uncertainty. Finally, since this project is built around the needs of OPSS, we believe its output will directly impact and improve the risk assessment process for regulators and manufacturers, ensuring that the products we use are acceptably safe.
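As a toy illustration of the proposed approach (the variables, probabilities and the choice of the pgmpy library are all assumptions for illustration), the sketch below builds a three-node Bayesian network relating hazard exposure and harm severity to overall product risk, then queries it.

```python
# A minimal sketch: a toy Bayesian network for product risk, using pgmpy.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Exposure", "Risk"), ("Severity", "Risk")])

cpd_exposure = TabularCPD("Exposure", 2, [[0.7], [0.3]])  # P(low), P(high)
cpd_severity = TabularCPD("Severity", 2, [[0.8], [0.2]])
# P(Risk | Exposure, Severity); columns enumerate (Exposure, Severity) states.
cpd_risk = TabularCPD(
    "Risk", 2,
    [[0.99, 0.90, 0.80, 0.30],   # P(Risk = low  | ...)
     [0.01, 0.10, 0.20, 0.70]],  # P(Risk = high | ...)
    evidence=["Exposure", "Severity"], evidence_card=[2, 2],
)
model.add_cpds(cpd_exposure, cpd_severity, cpd_risk)
assert model.check_model()

# Posterior risk given observed high exposure.
posterior = VariableElimination(model).query(["Risk"], evidence={"Exposure": 1})
print(posterior)
```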

Supervisor: Prof Norman Fenton

Parametric controls from data analytics

Many production technologies have been developed with experts in mind, so their controls require specialist knowledge. Yet it is now possible to gather a wealth of information on how a technology is used to achieve a task, along with descriptions of its use (‘make the image brighter’, ‘show me just where to place the fixtures’, ‘remove the excess noise’, …). This information may be harnessed to map inputs and control settings to meaningful targets. Such data analytics allows the construction of high-level, parametric controls which make the tools more accessible and more intuitive. The goal of this research project is to explore, implement and evaluate different approaches to this challenge. Though the focus will be on real-world data sets and practical applications, it will lead to core results that generalise to other data sets and domains.
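As a minimal sketch of the idea (on synthetic usage logs), the snippet below learns a single high-level ‘brightness’ control that drives three low-level EQ band gains, by regressing logged settings against the descriptor values users reported; the descriptor, bands and logging scheme are invented for illustration.

```python
# A minimal sketch: learn one semantic knob that drives several low-level controls.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical logs: a reported brightness value and the expert's EQ gains (dB).
brightness = rng.uniform(0, 1, size=200)
eq_gains = np.column_stack([
    -4 * brightness + rng.normal(0, 0.5, 200),  # low shelf cut as it brightens
    1 + rng.normal(0, 0.5, 200),                # mids roughly untouched
    6 * brightness + rng.normal(0, 0.5, 200),   # high shelf boost
])

mapping = LinearRegression().fit(brightness.reshape(-1, 1), eq_gains)

# The learned parametric control: one knob in, three band gains out.
print(mapping.predict([[0.2], [0.9]]))
```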

Supervisor: Prof Josh Reiss

Deep audio inpainting 

Real-life audio signals often suffer from local degradation and lost information. Examples include short audio intervals corrupted by impulse noise and clicks, or a clip of audio wiped out due to damaged digital media or packet loss in audio transmission. Audio inpainting is a class of techniques that aim to restore the lost information with newly generated samples without introducing audible artifacts. In addition to digital restoration, audio inpainting also finds wide application in audio editing (e.g. removing audience noise from live music recordings) and music enhancement (e.g. audio bandwidth extension and super-resolution). Approaches to audio inpainting can be classified by the length of the lost information, i.e. the gap. In declicking and declipping, for example, corruption may occur frequently but is mostly confined to durations of a few milliseconds or less. On the other hand, gaps on a scale of hundreds of milliseconds or even seconds may occur due to digital media damage, transmission loss and audio editing. While intensive work has been done on inpainting short gaps, long audio inpainting remains a challenging problem due to the high-dimensional, complex and non-correlated nature of audio features.

This project intends to investigate the possibility of adapting deep learning frameworks from various domains, including audio synthesis and image inpainting, to audio inpainting. A particular focus will be given to recovering musical signals with long gaps of missing information, and to reconstructing super-resolution audio signals through bandwidth extension, both of which remain challenging tasks in the state of the art. The research will be conducted by combining one or several of the following methodologies (a minimal sketch of the first appears after the list):
1) Traditional musical signal processing approaches, e.g. exemplar-based methods.
2) Deep learning approaches, e.g. convolutional neural networks and generative adversarial networks.
3) Audio-visual approaches exploiting additional visual context, e.g. video recordings of instrument performances.
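The sketch below illustrates the exemplar-based idea in (1) on a toy signal: fill a gap by searching the rest of the signal for the segment whose left-hand context best matches the samples just before the gap, then copying what follows that segment. Signal, gap position and context length are all invented for illustration.

```python
# A minimal sketch: exemplar-based gap filling by context matching.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000
signal = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.normal(size=t.size)

gap_start, gap_len, ctx = 4000, 400, 200
corrupted = signal.copy()
corrupted[gap_start:gap_start + gap_len] = 0.0  # simulate the lost region

query = corrupted[gap_start - ctx:gap_start]  # context just before the gap
best_err, best_pos = np.inf, None
for pos in range(ctx, gap_start - gap_len - ctx):  # search left of the gap
    err = np.sum((corrupted[pos - ctx:pos] - query) ** 2)
    if err < best_err:
        best_err, best_pos = err, pos

restored = corrupted.copy()
restored[gap_start:gap_start + gap_len] = corrupted[best_pos:best_pos + gap_len]
print("gap MSE:", np.mean((restored[gap_start:gap_start + gap_len]
                           - signal[gap_start:gap_start + gap_len]) ** 2))
```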

Supervisor: Dr Lin Wang