Digital Music Research Network

DMRN+16: Digital Music Research Network One-day Workshop 2021

 

Queen Mary University of London

Tuesday 21st December 2021

 

News

 

Keynote speakers

We have confirmed two keynote speakers.

  • Prof Sophie Scott - (UCL)

Title: "Sound on the brain - insights from functional neuroimaging and neuroanatomy"

Link to video at C4DM YouTube channel

  • Prof Gus Xia - (NYU Shanghai)

Title: "Learning interpretable music representations: from human stupidity to artificial intelligence"

Link to video at C4DM YouTube channel


DMRN+16 is sponsored by

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme in Music/Audio Technology and the Creative Industries, based at Queen Mary University of London.

A new AIM CDT call for PhD positions will open in December 2021. There will be at least 12 fully funded PhD positions on AI + music topics.

 

Keynote talks

 

Keynote 1: Prof. Sophie Scott - Director, Institute of Cognitive Neuroscience, UCL

Title: "Sound on the brain - insights from functional neuroimaging and neuroanatomy"

Abstract

In this talk I will use functional imaging and models of primate neuroanatomy to explore how sound is processed in the human brain. I will demonstrate that sound is represented cortically in different parallel streams. I will expand on this to show how it bears on the concept of auditory perception, which arguably incorporates multiple kinds of distinct perceptual processes. I will address the roles that subcortical processes play in this, and also the contributions from hemispheric asymmetries.


Keynote 2: Prof. Gus Xia - Assistant Professor, NYU Shanghai

Title: "Learning interpretable music representations: from human stupidity to artificial intelligence"

Abstract

Gus has been leading the Music X Lab in developing intelligent systems that help people better compose and learn music. In this talk, he will show the importance of music representation for both humans and machines, and how better music representations can be learned through the design of inductive biases. Once we have interpretable music representations, the potential applications are limitless.

 

Call for Contributions

The Digital Music Research Network (DMRN) aims to promote research in the area of digital music, by bringing together researchers from UK and overseas universities, as well as industry, for its annual workshop. The workshop will include invited and contributed talks and posters. The workshop will be an ideal opportunity for networking with other people working in the area. 

 

* Call for Contributions

You are invited to submit a proposal for a "talk" and/or a "poster" to be presented at this event.

TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. Due to the online format, we plan to keep talks to about 10 minutes each, depending on the number of submissions. Short announcements about other items of interest (e.g. future events or other networks) are also welcome.

POSTERS can be on any research topic of interest to the members of the network. Posters will be displayed virtually via the online platform gather.town.

The abstracts of presentations will be collated into a digest and distributed on the day.

 

* Submission

Please prepare your talk or poster proposal in the form of an abstract (1 page A4, using the template: PDF-DMRN+16-Template [PDF 155KB], Word-DMRN+16-Template [DOC 94KB], One-page LaTeX Template [403KB]). Submit it via email to dmrn@lists.eecs.qmul.ac.uk, giving the following information about your presentation:

  • Authors
  • Title
  • Abstract
  • Preference for talk or poster (or "no preference").

 

* Deadlines

26 Nov 2021: Abstract submission deadline (extended)

29 Nov 2021: Notification of acceptance

17 Dec 2021: Registration deadline

21 Dec 2021: DMRN+16 Workshop

 

Registration

The event will be online; registration is mandatory (so we can give you access to gather.town).

Register here

The Zoom and gather.town links will be sent by email to those who register.

 

 

Programme

 

10:00

Welcome, Simon Dixon

10:10

KEYNOTE

 

"Sound on the brain - insights from functional neuroimaging and neuroanatomy", Prof Sophie Scott - (Institute of Cognitive Neuroscience - UCL)

Link to video


10:50

Building style-aware neural MIDI synthesizers using simplified differentiable DSP approach, Sergey Grechin and Ryan Groves (Infinite Album) - Link to video

11:00

Completing Audio Drum Loops with Transformer Neural Networks, Teresa Pelinski (Queen Mary University of London), Behzad Haki and Sergi Jordà (Pompeu Fabra University) - Link to video - TP-DRM+16_slides [PDF 457KB]

11:10

Evaluation of GPT-2-based Symbolic Music Generation, Berker Banar and Simon Colton (Queen Mary University of London) (a new version of this talk will be uploaded shortly; contact Berker with any queries).

11:20

NASH: the Neural Audio Synthesis Hackathon, Ben Hayes, Cyrus Vahidi and Charalampos Saitis (Queen Mary University of London) - Link to video

11:30

5 min break

11:35

Designing a synthesiser to elicit a feeling of perceived tension, Connor Welham, Bruno Fazenda, and Duncan Williams (Acoustic Department, University of Salford) - Link to video

11:45

Is Automatically Transcribed Data Reliable Enough for Expressive Piano Performance Research?, Huan Zhang, Simon Dixon (Queen Mary University of London) - Link to video

11:55

CAMAT: Computer Assisted Music Analysis Toolkit, Egor Poliakov (IHMT Leipzig) and Christon R. Nadar (Semantic Music Technologies, Fraunhofer IDMT) - Link to video

12:05

An Investigation on Pitch-Based Features on Selected Music Generation Systems, Yuqiang Li, Shengchen Li (Xi’an Jiaotong-Liverpool University) and George Fazekas (Queen Mary University of London) - Link to video

12:15

 

Lunch break

 

13:15

KEYNOTE

 

"Learning interpretable music representations: from human stupidity to artificial intelligence". Assistant Prof Gus Xia - (NYU Shanghai)

Link to video

 

13:55

Announcements and Intro to Gather Town

14:00

POSTER SESSION

 

Open poster session where participants can view the posters and chat with the authors.

 

16:00

Close

 

Note: Keynote talks and contributed talks may be recorded. If you do not want to appear in the recording, please switch off your camera when joining. We will only record the talks, not the Q&A.

 

Poster session

1

Sketching Sounds: Using sound-shape associations to build a sketch-based sound synthesizer, Sebastian Löbbers and George Fazekas (Queen Mary University of London)

2

Everyday Sound Recognition with Limited Annotations, Jinhua Liang, Huy Phan and Emmanouil Benetos (Queen Mary University of London)

3

Generating Comments from Music and Lyrics, Yixiao Zhang and Simon Dixon (Queen Mary University of London)

4

AI-Assisted FM Synthesis, Franco Caspe, Mark Sandler and Andrew McPherson (Queen Mary University of London)

5

Algorithmic Music Composition for The Environment, Rosa Park (San Francisco State University)

6

The Vienna Philharmonic's New Year's Concert Series: A Corpus for Digital Musicology and Performance Science, David M. Weigl and Werner Goebl (University of Music and Performing Arts Vienna)

7

An Interactive Tool for Visualising Musical Performance Subtleties, Yucong Jiang (University of Richmond)

8

A Benchmark Dataset to Study Microphone Mismatch Conditions for Piano Multipitch Estimation on Mobile Devices, Jakob Abeßer, Franca Bittner, Maike Richter, Marcel Gonzalez and Hanna Lukashevich (Fraunhofer IDMT)

9

Looking at the Future of Data-Driven Procedural Audio, Adrián Barahona-Ríos (University of York)

10

Making graphical scores accessible to visually impaired people: A haptic interactive installation, Christina Karpodini

11

Acoustic Representations for Perceptual Timbre Similarity, Cyrus Vahidi, Ben Hayes, Charalampos Saitis and George Fazekas (Queen Mary University of London)

12

Investigating a computational methodology for quantitative analysis of singing performance style, Yukun Li, Polina Proutskova, Zhaoxin Yu and Simon Dixon (Queen Mary University of London)

13

Variational Auto Encoding and Cycle-Consistent Adversarial Networks for Timbre Transfer, Russell Sammut Bonnici, Martin Benning and Charalampos Saitis (Queen Mary University of London)

14

Characterizing Texture for Symbolic Piano Music, Louis Couturier (Université de Picardie Jules Verne), Louis Bigo (Université de Lille) and Florence Levé (Université de Picardie Jules Verne and Université de Lille)

15

Beat-Based Audio-to-Score Transcription for Monophonic Instruments, Jingyan Xu (Music X Lab, NYU Shanghai)

16

Predicting hit songs: multimodal and data-driven approach, Katarzyna Adamska, Joshua Reiss (Queen Mary University of London)

17

Character-based adaptive generative music for film and video games, Sara Cardinale and Simon Colton (Queen Mary University of London)

18

Physically-inspired Modelling with Neural Networks, Carlos De La Vega Martin and Mark Sandler (Queen Mary University of London)

19

Hearing a Volumetric Drum, Rodrigo Diaz and Mark Sandler (Queen Mary University of London)

20

Computational Modelling of Jazz Piano via Large-Scale Automatic Transcription, Drew Edwards and Simon Dixon (Queen Mary University of London)

21

Music Emotion Mood Modelling using Graph and Neural Nets, Maryam Torshizi, George Fazekas, and Charalampos Saitis (Queen Mary University of London)

22

Virtual Placement of Objects in Acoustic Scenes, Yazhou Li, Lin Wang and Joshua Reiss (Queen Mary University of London)

23

Real Time Timbre Transfer with a Smart Acoustic Guitar, Jack Loth and Mathieu Barthet (Queen Mary University of London)

24

Music Interestingness in the Brain, Chris Winnard (Queen Mary University of London), Preben Kidmose (Aarhus University), Kaare Mikkelsen (Aarhus University) and Huy Phan (Queen Mary University of London)

25

Intelligent music production, Soumya Vanka (Queen Mary University of London), Jean Baptiste Roland (Steinberg) and George Fazekas (Queen Mary University of London)

26

Composition-aware music recommendation for music production, Xiaowan Yi and Mathieu Barthet (Queen Mary University of London)

27

Dynamic mood recognition in film music, Ruby Crocker and George Fazekas (Queen Mary University of London)

28

The Sound of Care: researching the use of deep learning and sonification for the daily support of people with Chronic Pain, Bleiz Del Sette and Charalampos Saitis (Queen Mary University of London)

29

Embodiment in Intelligent Musical Systems, Oluremi Falowo and Charalampos Saitis (Queen Mary University of London)

 

 

 

Contact information

Alvaro Bort (a.bort@qmul.ac.uk)

EECS, Queen Mary University of London
Mile End Road, London E1 4NS, UK.
