School of Electronic Engineering and Computer Science

Dr Emmanouil Benetos

Reader in Machine Listening & Director of Research

Email: emmanouil.benetos@qmul.ac.uk
Telephone: +44 20 7882 6206
Room Number: Engineering, Eng 403
Website: http://www.eecs.qmul.ac.uk/~emmanouilb
Twitter: @emmanouilb
Office Hours: Wednesday 15:00-16:00

Profile

Emmanouil Benetos is Reader in Machine Listening and Director of Research at the School of Electronic Engineering and Computer Science of Queen Mary University of London. Within Queen Mary, he is a member of the Centre for Digital Music, the Centre for Intelligent Sensing, and the Digital Environment Research Institute, and co-leads the School's Machine Listening Lab.

His main research topic is computational audio analysis, also referred to as machine listening or computer audition, applied to music, urban, everyday and nature sounds. He has been a Royal Academy of Engineering / Leverhulme Trust Research Fellow in resource-efficient machine listening, a Turing Fellow at the Alan Turing Institute, and a Royal Academy of Engineering Research Fellow, and has been principal and co-investigator for several funded research projects at the intersection of machine learning and audio. He is also Deputy Director of the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM).

On academic service, he is currently Secretary of the International Society for Music Information Retrieval (ISMIR), member and chair of the education subcommittee of the IEEE Technical Committee on Audio and Acoustic Signal Processing (AASP TC), member of the EURASIP Acoustic, Speech and Music Signal Processing Technical Area Committee (ASMSP TAC), associate editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and associate editor for the EURASIP Journal on Audio, Speech, and Music Processing.

Teaching

Data Mining (Postgraduate)

Data that has relevance for decision-making is accumulating at an incredible rate due to a host of technological advances. Electronic data capture has become inexpensive and ubiquitous as a by-product of innovations such as the Internet, e-commerce, electronic banking, point-of-sale devices, bar-code readers, and electronic patient records. Data mining is a rapidly growing field concerned with developing techniques that help decision-makers make intelligent use of these repositories. The field has evolved from the disciplines of statistics and artificial intelligence. This module combines practical exploration of data mining techniques with an examination of the underlying algorithms, including their limitations. Students taking this module should have an elementary understanding of probability concepts and some experience of programming.
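
As a minimal illustration of the kind of technique the module explores (a sketch only, not module material; the dataset and hyperparameters below are illustrative assumptions), the following Python code trains a decision-tree classifier, a classic data mining method, using scikit-learn:

    # Minimal sketch: decision-tree classification on a toy dataset.
    # Dataset choice and hyperparameters are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Load a small benchmark dataset and hold out a test set
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # A shallow tree keeps the model interpretable and limits overfitting
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)

    # Evaluate on the held-out data
    print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Holding out a test set, as above, is the simplest guard against the kind of algorithmic limitation (overfitting) the module discusses.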

Music Informatics (Postgraduate/Undergraduate)

This module introduces students to state-of-the-art methods for the analysis of music data, with a focus on music audio. It presents in-depth studies of general approaches to the low-level analysis of audio signals, followed by specialised methods for the high-level analysis of music signals, including the extraction of information related to the rhythm, melody, harmony, form and instrumentation of recorded music. The module concludes with an examination of key methods for extracting high-level musical content, for sound source separation, and for analysing multimodal music data.
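
As a minimal illustration of the low-level audio analysis the module begins with (a sketch only, not module material; "example.wav" is a placeholder file name), the following Python code estimates tempo and computes chroma features with the librosa library:

    # Minimal sketch: low-level music audio analysis with librosa.
    # "example.wav" is a placeholder for any audio file.
    import librosa

    y, sr = librosa.load("example.wav")  # resamples to 22.05 kHz mono by default

    # Rhythm: estimate tempo (BPM) and beat positions from onset strength
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

    # Harmony: chroma features give per-frame energy in 12 pitch classes,
    # a common front end for chord and key analysis
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)

    print("Estimated tempo (BPM):", tempo)
    print("Chroma shape (pitch classes x frames):", chroma.shape)

Features like these form the low-level representations on which the higher-level rhythm, melody and harmony analyses described above are built.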

Research

Research Interests:

  • Machine listening
  • Audio signal processing
  • Machine learning
  • Music information retrieval
  • Multimodal AI

Research Funding

For a list of funded research projects see: http://www.eecs.qmul.ac.uk/~emmanouilb/research.html

Publications

For a complete list of publications see: http://www.eecs.qmul.ac.uk/~emmanouilb/publications.html

Supervision

PhD Students (primary and joint supervisees)

  • Shahar Elisha Topic: Style classification of podcasts using audio
  • Christos Plachouras (co-supervised with Johan Pauwels). Topic: Deep learning for low-resource music
  • Antonella Torrisi Topic: Computational analysis of chick vocalisations: from categorisation to live feedback
  • Aditya Bhattacharjee Topic: Self-supervised learning in audio fingerprinting
  • Yinghao Ma Topic: Self-supervision in machine listening
  • Jinhua Liang Topic: Everyday sound recognition with limited annotations
  • Jiawen Huang Topic: Lyrics alignment and transcription for polyphonic music
  • Inês Nolasco (co-supervised with Huy Phan and Dan Stowell). Topic: Automatic acoustic identification of individual animals in the wild
  • Shubhr Singh (co-supervised with Huy Phan and Dan Stowell). Topic: Novel mathematical methods for audio based deep learning
  • Ilaria Manco (co-supervised with George Fazekas). Topic: Multimodal deep learning for music information retrieval
  • Lele Liu Topic: Automatic music score transcription with deep neural networks

PhD Students (second supervisees)

  • Ivan Shanin Topic: Modeling melodic jazz improvisation
  • Yu Cao Topic: Generative modeling with few-shot learning
  • Julien Guinot Topic: Improved self-supervised learning and human-in-the-loop for musical audio: towards expert, navigable, and interpretable representations of music
  • Peiling Yi Topic: Youth cyberbullying detection across different social media platforms
  • Chin-Yun Yu Topic: Analysing and controlling extreme vocal expression using differentiable DSP and neural networks
  • Christopher Mitcheltree Topic: Representation learning for audio effect and synthesizer modulations
  • Yisu Zong Topic: Machine learning for physical models of sound synthesis
  • Huan Zhang Topic: Computational modelling of expressive piano performance
  • Andrew Edwards Topic: Computational models for jazz piano: transcription, analysis, and generative modeling
  • Xiaowan Yi Topic: Composition-aware music recommendation system for music production
  • Dimitrios Stoidis Topic: Protecting voice biometrics with disentangled representations of speech
  • Yukun Li Topic: Computational comparison between different genres of music in terms of the singing voice

Research Assistants

  • Shubhr Singh (Sept. 2024 - March 2025). Project: Online speech enhancement in scenarios with low direct-to-reverberant ratio
  • Omar Ahmed (July - Sept. 2024). Project: Automatic transcription of guitar tabs

Visiting Researchers
