Digital Music Research Network

DMRN+17: Digital Music Research Network One-day Workshop 2022

 

Queen Mary University of London

Tuesday 20th December 2022

 

News

 

Keynote speaker: Sander Dieleman

Title: TBC (on generative modelling and iterative refinement).

Bio: Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He obtained his PhD from Ghent University in 2016, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. His current research interests include representation learning and generative modelling of perceptual signals such as speech, music and visual data.

 

DMRN+17 is sponsored by

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme in Music/Audio Technology and the Creative Industries, based at Queen Mary University of London.

A new AIM CDT call for PhD positions will open in December 2022. There will be at least 12 fully funded PhD positions on AI + music topics.

 

Location:      Arts 2 Theatre – QMUL Mile End Campus (in person)

                     Zoom: https://qmul-ac-uk.zoom.us/j/89668766939 (online)

 

Call for Contributions - closed

The Digital Music Research Network (DMRN) aims to promote research in the area of digital music, by bringing together researchers from UK and overseas universities, as well as industry, for its annual workshop. The workshop will include invited and contributed talks and posters. The workshop will be an ideal opportunity for networking with other people working in the area. 

 

* Call for Contributions

You are invited to submit a proposal for a talk and/or a poster to be presented at this event.

TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. We plan to keep talks to about 15 minutes each, depending on the number of submissions. Short announcements about other items of interest (e.g. future events or other networks) are also welcome.

POSTERS can be on any research topic of interest to the members of the network.

The abstracts of presentations will be collated into a digest and distributed on the day.

 

* Submission

Please prepare your talk or poster proposal in the form of an abstract (1 page A4), using either the 1-page Word template DMRN+17 [DOC 95KB] or the 1-page LaTeX template DMRN+17 [397KB]. Submit it via email to dmrn@lists.eecs.qmul.ac.uk, giving the following information about your presentation:

  • Authors
  • Title
  • Abstract
  • Preference for talk or poster (or "no preference").

 

* Deadlines

 

20 Nov 2022: Abstract submission deadline 

25 Nov 2022: Notification of acceptance 

17 Dec 2022: Registration deadline 

20 Dec 2022: DMRN+17 Workshop

 

Registration

The event will be held in person, but talks can also be followed online. Registration is mandatory for those coming in person (the registration cost is £25, which covers catering, coffee and lunch).

Register here

 

Programme (TBC by 9th December)

 

10:00

Welcome - Andrew McPherson

10:10

KEYNOTE

 

"TBC (on generative modelling and iterative refinement)", Sander Dieleman (Research Scientist at DeepMind)

 

 

11:10

Coffee break

11:30

 “Improving Chord Sequence Graphs with Transcription Resiliency and a Chord Similarity Metric”, Jeff Miller, Vincenzo Nicosia and Mark Sandler (Queen Mary University of London, UK)

11:45

“Bringing the concert hall into the living room: digital scholarship of small-scale arrangements of large-scale musical works”, David Lewis and Kevin R. Page (University of Oxford e-Research Centre, UK)

12:00

“Leveraging Music Domain Knowledge in Symbolic Music Modeling”, Zixun Guo and Dorien Herremans (ISTD, Singapore University of Technology and Design, Singapore)

12:15

“Large-Scale Pretrained Model for Self-Supervised Music Audio Representation Learning”, Yizhi Li (University of Sheffield, UK), Ruibin Yuan (Beijing Academy of Artificial Intelligence, China, and Carnegie Mellon University, PA, USA), Ge Zhang (Beijing Academy of Artificial Intelligence, China, and University of Michigan Ann Arbor, USA), Yinghao Ma (Queen Mary University of London, UK), Chenghua Lin (University of Sheffield, UK), Xingran Chen (University of Michigan Ann Arbor, USA), Anton Ragni (University of Sheffield, UK), Hanzhi Yin (Carnegie Mellon University, PA, USA), Zhijie Hu (HSBC Business School, Peking University, China), Haoyu He (University of Tübingen & MPI-IS, Germany), Emmanouil Benetos (Queen Mary University of London, UK), Norbert Gyenge (University of Sheffield, UK), Ruibo Liu (Dartmouth College, NH, USA) and Jie Fu (Beijing Academy of Artificial Intelligence, China)

12:30

Announcements

 

12:45

Lunch - Poster Session

 

14:15

“Time-Frequency Scattering in Kymatio”, Cyrus Vahidi (Queen Mary University of London), Vincent Lostanlen, Han Han, Changhong Wang and György Fazekas (Queen Mary University of London)

14:30

“Working for the AI Man: Algorithmic Rents, Accumulation by Dispossession and Alien Power”, Hussein Boon (University of Westminster, UK)

14:45

“Remarks on a Cultural Investigation of Abstract Percussion Instruments”, Lewis Wolstanholme and Andrew McPherson (Queen Mary University of London, UK)

15:00

“Beat Byte Bot: a bot-based system architecture for audio cataloguing and proliferation with neural networks and Linked Data”, J. M. Gil Panal (E.T.S.I. Informática, University of Málaga, Spain) and Luís Arandas (INESC-TEC, University of Porto, Portugal)

15:15

Break

15:30

“Symmetries and Minima in Differentiable Sinusoidal Models”, Ben Hayes, Charalampos Saitis and György Fazekas (Queen Mary University of London)

15:45

“Affordances of Generative Models of Raw Audio to Instrumental Practice and Improvisation”, Mark Hanslip (School of Arts and Creative Technologies, University of York)

16:00

“Practical Text-Conditioned Music Sample Generation”, Scott H. Hawley (Belmont University, USA, and Harmonai), Zach Evans, C.J. Carr (Harmonai) and Flavio Schneider (ETH Zürich and Harmonai).

16:15

Close - Andrew McPherson

 

 

* There will be an opportunity to continue discussions after the workshop at a nearby pub/restaurant for those in London.

 

Posters

 

1

“Which car is moving? A listening approach using distributed acoustic sensor systems”, Chia-Yen Chiang and Mona Jaber (Queen Mary University of London)

2

"YourMT3: a toolkit for training multi-task and multi-track music transcription model for everyone", Sungkyun Chang, Simon Dixon and Emmanouil Benetos (Queen Mary University of London)

3

“Supervised Contrastive Learning for Musical Onset Detection”, James Bolt and György Fazekas (Queen Mary University of London)

4

“Computational Modelling of Expectancy-Based Music Cognition of Timbre Structures”, Adam Garrow and Marcus Pearce (Queen Mary University of London)

5

“Self-supervised Learning for Music Information Retrieval”, Yinghao Ma and Emmanouil Benetos (Queen Mary University of London)

6

"Performance Rendering for Automatic Music Generation Pipelines", Tyler McIntosh and Simon Dixon (Queen Mary University of London)

7

“Explainability in End-User Creative Artificial Intelligence”, Ashley Noel-Hirst and Nick Bryan-Kinns (Queen Mary University of London)

8

"Real-time timbre mapping for synthesized percussive performance", Jordan Shier (Queen Mary University of London), Andrew Robertson (Ableton), Andrew McPherson and Charalampos Saitis (Queen Mary University of London)

9

“Machine Learning of Physical Models for Voice Synthesis”, David Südholt and Joshua Reiss (Queen Mary University of London)

10

"Using Signal-informed Source Separation (SISS) principles to improve instrument separation from legacy recordings", Louise Thorpe, Emmanouil Benetos and Mark Sandler (Queen Mary University of London)

11

“Personalised music descriptors: valuing user perspective”, Yannis Vasilakis (Queen Mary University of London), Rachel M. Bittner (Spotify) and Johan Pauwels (Queen Mary University of London)

12

"Learning Music Representations using Coordinate-based Neural Networks", Ningzhi Wang and Simon Dixon (Queen Mary University of London)

13

“User-Driven Music Generation in Digital Audio Workstations”, Alexander Williams (Queen Mary University of London), Stefan Lattner (Sony SCL) and Mathieu Barthet (Queen Mary University of London)

14

"Conditioning in Variational Diffusion Models for Audio Super-Resolution", Chin-Yun Yu (Queen Mary University of London), Sung-Lin Yeh (University of Edinburgh), György Fazekas (Queen Mary University of London) and Hao Tang (University of Edinburgh)
