Cynthia Liem, Assistant Professor at the Multimedia Computing Group - Delft University of Technology
Title: Data and human interpretation: music and beyond
Music is a prime example of human-interpreted information: it needs human creators, performers and audiences in order to be meaningful, and it manages to do this at scale, engaging broad and diverse audiences. In the academic domain, too, interest in music has been very broad, with multiple disciplines rooted in very different schools of thought actively studying it.
Now that we live in a digital information society, information about music and its consumption has largely taken the form of multimodal data. At the same time, we increasingly rely on digital platforms for music access and discovery. This poses several interesting technological challenges that call for new perspectives on digital information (re)presentation and interpretation. In this talk, I will present several examples of this, drawing on my dual background as a computer science researcher and a performing musician. In addition, I will argue that while these new perspectives are natural in the music domain, they are essential in broader current discussions on the responsible use of AI in our increasingly digitized societies.
DMRN+14 is sponsored by
The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme for the Music/Audio Technology and Creative Industries, based at Queen Mary University of London.
The Digital Music Research Network (DMRN) aims to promote research in the area of Digital Music, by bringing together researchers from universities and industry in electronic engineering, computer science, and music.
DMRN will be holding its next 1-day workshop on Tuesday 17th December 2019. The workshop will include invited and contributed talks, and posters will be on display during the day, including during the lunch and coffee breaks.
The workshop will be an ideal opportunity for networking with other people working in the area. There will also be an opportunity to continue discussions after the Workshop in a nearby Pub/Restaurant.
The call for papers is now closed.
TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. Short announcements about other items of interest (e.g., future events or other networks) are also welcome.
POSTERS can be on any research topic of interest to the members of the network. Posters (A0 portrait) will be on display through the day, including lunch break and coffee breaks.
Each poster must fit on a poster board that is 3 feet (91.4 cm) wide and 6 feet (182.9 cm) tall. However, posters should not reach down to the floor as this makes them hard to read. Posters should therefore be no more than 85 cm (33.5 in) wide and no more than 119 cm (46.9 in) tall (i.e., no larger than A0 portrait or A1 landscape).
IMPORTANT: Posters wider than the stated dimensions will not fit on the poster boards. A0 landscape is TOO WIDE.
Submit your talk or poster proposal in the form of an abstract (1 page of A4 in MS Word format; see the DMRN+14 template [DOC 93KB]) by email to firstname.lastname@example.org, giving the following information about your presentation:
A registration fee is payable to cover room hire and refreshments.
How to Register
Please register online (closed)
The event will take place at the Arts Two Lecture Theatre, Queen Mary University of London, Mile End Road, London E1 4NS.
The venue is easily accessible by public transport. It is within a five-minute walk of both Mile End Underground station (Central, District, and Hammersmith & City lines) and Stepney Green Underground station (District, and Hammersmith & City lines).
For travel information, see:
Suggested hotels for staying before or after the workshop:
Registration opens (tea/coffee)
Welcome and opening remarks
Simon Dixon (Centre for Digital Music, Queen Mary University of London)
“Data and Human Interpretation: Music and Beyond”, Dr Cynthia Liem (Delft University of Technology)
“Software for Analysis of Harmony as a Computer Tool for Search of Tonal Cadencies in Various Midi Files”, Eva Ferková and Michal Šukola (Academy of Performing Arts, Bratislava, Slovakia)
“Human vs. Automated Judgements of Similarity in a Global Music Sample”, Hideo Daikoku and Ding Shenghao (Keio University, Japan), Ujwal Sriharsha Sanne (Queen Mary University of London), Marino Kinoshita, Rei Konno, Yoichi Kitayama, Shinya Fujii and Patrick E. Savage (Keio University, Japan)
“Modelling Keys and Modulation with Scales and Harmonic Progressions”, Laurent Feisthauer, Louis Bigo, Mathieu Giraud (Université de Lille, France) and Florence Levé (Université de Picardie Jules Verne & Université de Lille, France)
Lunch/Coffee (posters will be on display)
“Exploring the Aesthetics and Utility of Sonification with Isomorph– Interactive Sonification for Molecular Physics”, Joseph Hyde (Bath Spa University), Thomas J. Mitchell (University of the West of England), Helen M. Deeks, David R. Glowacki and Alex J. Jones (University of Bristol)
“Alter: An Ensemble Work Composed with and about AI”, David De Roure (University of Oxford & The Alan Turing Institute), Emily Howard, Robert Laidlow (Royal Northern College of Music) and Pip Willcox (University of Oxford & The National Archives)
“Unmixer: An Interface for Extracting and Remixing Loops”, Jordan B. L. Smith, Yuta Kawasaki and Masataka Goto (National Institute of Advanced Industrial Science and Technology, Japan)
“The Impact of Dataset Modifications on Music Similarity Measures”, Roberto Piassi Passos Bodo (University of São Paulo, Brazil), Emmanouil Benetos (Queen Mary University of London) and Marcelo Queiroz (University of São Paulo, Brazil)
Tea/Coffee (posters will be on display)
“Dig That Lick: Exploring Patterns in Jazz Solos”, Simon Dixon, Polina Proutskova (Queen Mary University of London), Tillman Weyde, Daniel Wolff (City University of London), Martin Pfleiderer, Klaus Frieler, Frank Höger (University of Music Weimar, Germany), Hélène-Camille Crayencour, Jordan B. L. Smith (National Center for Scientific Research, France), Geoffroy Peeters (Telecom Paris, France), Doğaç Başaran (IRCAM, France), Gabriel Solis, Lucas Henry (University of Illinois, USA), Krin Gabbard, Andrew Vogel (Columbia University, USA)
“Modulation Spectra for Musical Dynamics Perception and Retrieval”, Luca Marinelli, Athanasios Lykartsis (Technische Universität Berlin) and Charalampos Saitis (Queen Mary University of London)
“Searching for Efficient Processing Pipelines Applied to MIR in Embedded Systems”, Filipe Lins, Marcelo Johann and Rodrigo Schramm (UFRGS, Brazil)
“Homepage and Search Personalization at Spotify”, Mi Tian, Rishabh Mehrotra, Lucas Maystre and Mounia Lalmas (Spotify, UK)
“Automatic Music Accompaniment with a Chroma-based Music Data Representation”, Lele Liu and Emmanouil Benetos (Queen Mary University of London)
“A MELD TimeMachine for Wagner’s Lohengrin”, David Lewis, Kevin Page and Laurence Dreyfus (University of Oxford)
“RadioMe: Artificially Intelligent Radio for People with Dementia”, Satvik Venkatesh, David Moffat, and Eduardo Reck Miranda (University of Plymouth)
“A Decentralized MELD Agent Framework for Computational Analysis of the Live Music Archive”, Graham Klyne (University of Oxford), Thomas Wilmering (Queen Mary University of London), John Pybus and Kevin Page (University of Oxford)
“AMT for Musicians: Performed-MIDI-to-Score Transcription”, Francesco Foscarin (CNAM, Paris), Florent Jaquemard (CNAM and INRIA, Paris), Philippe Rigaux and Raphaël Fournier-S'niehotta (CNAM, Paris)
“Using Different Feature Selection Methods for Mood Prediction”, Cornelia Metzig and Mark Sandler (Queen Mary University of London)
“The Algorithmically Enhanced Stylophone”, David De Roure (University of Oxford & The Alan Turing Institute), Alan Chamberlain (University of Nottingham), Iain Emsley (University of Sussex) and John Pybus (University of Oxford)
“Context-Aware Audio QoE: A Case Study on the Apollo 11 Audio Archive”, Alessandro Ragano (University College Dublin, Ireland & Queen Mary University of London & The Alan Turing Institute), Emmanouil Benetos (Queen Mary University of London & The Alan Turing Institute) and Andrew Hines (University College Dublin)
“Retro in Digital: Understanding the Semantics of Audio Effects”, Gary Bromham (Queen Mary University of London), David Moffat (University of Plymouth), Mathieu Barthet and György Fazekas (Queen Mary University of London)
“Computational Comparison Between Different Styles of Singing Voice in Terms of the Pitch”, Yukun Li and Simon Dixon (Queen Mary University of London)