DMRN+15 is sponsored by
The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme in Music/Audio Technology and the Creative Industries, based at Queen Mary University of London.
A new AIM CDT call for PhD positions will open in December 2020. There will be at least 12 fully funded PhD positions on AI + music topics.
Keynote 1 - by Prof. Philippe Esling, Associate Professor and head of the Artificial Creative Intelligence and Data Science (ACIDS) research group at IRCAM
Title: Creativity at the era of artificial intelligence. Video of the talk.
Abstract: Creativity is a deeply debated topic, as this concept is arguably quintessential to our humanity. Across different epochs, it has been infused with an extensive variety of meanings relevant to each era. Alongside these, the evolution of technology has provided a plurality of novel tools for creative purposes. Recently, the advent of Artificial Intelligence (AI), through deep learning approaches, has seen proficient successes across various applications. The use of such technologies for creativity appears as a natural continuation of the artistic trends of this century. However, the aura of a technological artefact labeled as intelligent has unleashed passionate and somewhat unhinged debates on its implications for creative endeavors. In this talk, we aim to provide a new perspective on the question of creativity in the era of AI by blurring the frontier between social and computational sciences. To do so, we draw on reflections from social science studies of creativity to see how current AI would be considered through this lens. As creativity is a highly context-prone concept, we underline the limits and deficiencies of current AI, which require a move towards artificial creativity. We exemplify our argument with several very recent research works from our team at IRCAM, called Artificial Creative Intelligence and Data Science (ACIDS).
Keynote 2 - by Prof. Dorien Herremans, Assistant Professor at Singapore University of Technology and Design (SUTD), where she leads the AMAAI lab and is Director of SUTD Game Lab.
Title: Controllable music generation: from MorpheuS to deep networks. Video of the talk.
Abstract: In their more than 60-year history, music generation systems have never been more popular than they are today. In this talk, I will discuss a number of co-creative music generation systems that have been developed over the last few years. These include MorpheuS, a music generation system guided by tonal tension and long-term structure, and MusicFaderNets, a variational autoencoder model that allows for controllable arousal and rhythmic density. Finally, I will present some more recent models from our AMAAI lab, including architectures such as controllable transformers and hierarchical RNNs.
Keynote 3 - by Dr. Mariana Lopez, Senior Lecturer in Sound Production and Post Production at the Department of Theatre, Film, Television and Interactive Media at the University of York
Title: Accessibility through sound design and spatialisation: towards more creative and inclusive practices in film and television. Video of the talk.
Abstract: Studies on sound design and spatialisation in the creative arts seldom engage with their potential to create accessible experiences. Yet these strategies can do much more than entertain and immerse audiences: they can be put to the service of creating more accessible and inclusive experiences. This talk will explore research on the use of creative sound design for the development of accessible film and television experiences for visually impaired audiences. It will do so by exploring the Enhancing Audio Description Methods (EAD Methods) as an alternative to traditional Audio Description practices. The talk will examine the potential of these methods for accessibility practices, as well as the creative advantages they hold for sound designers and for film and television creators. Attendees will be introduced to notions of integrated access, accessible filmmaking, and universal design, and to how these are key to the creation of creative and accessible film and television productions in which innovation in sound design and spatialisation is focused on its contribution to social inclusion.
The Digital Music Research Network (DMRN) aims to promote research in the area of Digital Music, by bringing together researchers from universities and industry in electronic engineering, computer science, and music.
DMRN will be holding its next 1-day workshop on Tuesday 15th December 2020. The workshop will include invited and contributed talks, and posters will be on display during the day, including during the lunch and coffee breaks.
The workshop will be an ideal opportunity for networking with other people working in the area.
Call for papers is closed.
TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. Due to the online format, we plan to keep talks to about 10 minutes each, depending on the number of submissions. Short announcements about other items of interest (e.g. future events or other networks) are also welcome.
POSTERS can be on any research topic of interest to the members of the network. Posters will be displayed virtually (probably as slides rather than a poster format) in an asynchronous poster session (probably on Slack, as at ISMIR 2020).
Submit your talk or poster proposal in the form of an extended abstract (1 page of A4; 1page-dmrn15-template-word [DOC 105KB], DMRN+15 template latex [408KB]) by email, giving in the body of the email the following information about your presentation:
The event will be online; registration is mandatory.
Registration: Eventbrite link.
Details with the links to join the workshop will be sent to the email address used for registration.
Alvaro Bort (firstname.lastname@example.org)
EECS, Queen Mary University of London Mile End Road, London E1 4NS, UK.
DMRN+15 will be fully online.
“Creativity at the Era of Artificial Intelligence”, Prof. Philippe Esling (Institut de Recherche et Coordination Acoustique Musique)
“Joint Piano-roll and Score Transcription for Polyphonic Piano Music”, Lele Liu, Veronica Morfi and Emmanouil Benetos (Queen Mary University of London)
“A Modular System for Harmonic Structure Analysis of Music”, Andrew McLeod and Martin Rohrmeier (École Polytechnique Fédérale de Lausanne)
“Choral Music Separation using Time-domain Neural Networks”, Saurjya Sarkar, Emmanouil Benetos, and Mark Sandler (Queen Mary University of London)
“Generating Audio Mosaics with Particle Smoothing”, Graham Coleman (Oldenburg, Germany)
“How to Automatically Calculate Tonal Tension Using AuToTen”, Germán Ruiz-Marcos, Robin Laney and Alistair Willis (Open University)
“Perceptual Similarities in Neural Timbre Embeddings”, Ben Hayes, Luke Brosnahan, Charalampos Saitis, and George Fazekas (Queen Mary University of London)
“Creating and Evaluating an Annotated Corpus Using the Library ms3”, Johannes Hentschel and Martin Rohrmeier (École Polytechnique Fédérale de Lausanne)
“Temporal Classes of User Behaviours on Music Streaming Platforms”, Dougal Shakespeare and Camille Roth (Centre Marc Bloch)
“Controllable Music Generation: from MorpheuS to Deep Networks”, Prof. Dorien Herremans (Singapore University of Technology and Design)
Open poster session where participants will be able to view the posters and chat with the authors
“Accessibility through sound design and spatialisation: towards more creative and inclusive practices in film and television”, Dr. Mariana Lopez (University of York)
“Prosociality and Collaborative Playlisting: A Preliminary Study”, Ilana Harris (Freie Universität Berlin) and Ian Cross (University of Cambridge)
“auraloss: Audio-focused loss functions in PyTorch”, Christian J. Steinmetz and Joshua D. Reiss (Queen Mary University of London)
“Development of an Audio Quality Dataset Under Uncontrolled Conditions”, Alessandro Ragano (University College Dublin), Emmanouil Benetos (Queen Mary University of London) and Andrew Hines (University College Dublin)
“Analysis of Chord Progression Networks”, Lidija Jovanovska and Bojan Evkoski (International Postgraduate School Jozef Stefan)
“Fusion of Hilbert-Huang Transform and Deep Convolutional Neural Network for Predominant Musical Instrument Recognition”, Xiaoquan Li and Jinchang Ren (University of Strathclyde)
“Supporting Child Composers' Creativity with AI”, Corey Ford and Nick Bryan-Kinns (Queen Mary University of London)
“PERFORM-AI (Provide Extended Realities for Musical Performance using AI)”, Max Graf and Mathieu Barthet (Queen Mary University of London)
“Improving AI-generated Music with Pleasure Models”, Madeline Hamilton and Marcus Pearce (Queen Mary University of London)
“Lyrics Alignment for Polyphonic Music”, Jiawen Huang and Emmanouil Benetos (Queen Mary University of London)
“Informed source separation for multi-mic production”, Harnick Khera and Mark Sandler (Queen Mary University of London)
“Unsupervised Disentangled Representation Learning for Music and Audio", Yin-Jyun Luo and Simon Dixon (Queen Mary University of London)
“Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising”, Luca Marinelli and Charalampos Saitis (Queen Mary University of London)
“Digging Deeper - expanding the “Dig That Lick” corpus with new sources and techniques”, Xavier Riley and Simon Dixon (Queen Mary University of London)
“Automatic micro-composition for professional/novice composers using generative models as creativity support tools”, Eleanor Row and Simon Colton (Queen Mary University of London)
“Audio Applications of Novel Mathematical Methods in Deep Learning”, Shubhr Singh and Dan Stowell (Queen Mary University of London)
“End-to-End System Design for Music Style Transfer with Neural Networks”, Jingjing Tang and George Fazekas (Queen Mary University of London)
“Real-time instrument transformation and augmentation with deep learning”, Lewis Wolstanholme and Andrew McPherson (Queen Mary University of London)
“Machine Learning Methods for Artificial Musicality”, Yixiao Zhang and Simon Dixon (Queen Mary University of London)