School of Electronic Engineering and Computer Science

Mr Luca Marinelli


Room Number: Engineering, Eng 104


Project Title:

Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising


Recent studies in computational social sciences have shown that, when training data reflect human biases, machine learning models can acquire, reproduce and eventually reveal these biases. Taking into account that music is found in more than 90% of television adverts, and interpreting advertising as an inherently multimodal discourse, this study aims to implement deep learning models for a critical analysis of gendered markers in two large corpora of television adverts, with the purpose of deconstructing the dominant discourse on gender in music and advertising, and of informing further investigations into a commercial, contemporary musical semiotics of gender. By means of approaches from music information retrieval and multimodal discourse analysis, the proposed study aims to uncover and visualise underlying patterns of intersubjectivity emerging from the influence of gender-based segmentation strategies on the selection and composition of music for advertising. Situated at the intersection of artificial intelligence, computational social sciences, computational musicology and critical discourse analysis, this study could benefit not only scholars but also advertisers, broadcasters and the music industry at large.

According to Sut Jhally, "advertising seems to be obsessed with gender and sexuality", as these are arguably the most heavily used social resources in the industry. There are several reasons for this: gender is easily identifiable and accessible; gender segments are easily measurable; and, more importantly, they are large and profitable. Gender-based market segmentation strategies mostly leverage the dominant ideology and, in doing so, help to reproduce and perpetuate the binary gender discourse and its power relations. Advertisers use "colour, shape, texture, packaging, logos, verbiage, graphics, sound, and names to define the gender of a brand".

Whereas stereotyped language in text may be easier to interpret, gender-coding in music ensues from the historical sedimentation, in musical practice, of cross-modal associations between gendered meanings in texts and other visual images, and basic structures in music. For example, in 1989 Tagg found widespread agreement on the gendered meanings associated with television theme tunes, where a slower tempo was often assigned to female characters, while males tended to be accompanied by more active bass lines and greater rhythmic irregularity. More recently, Sergeant and Himonides, and Wang et al., explained the processes of gendered-meaning ascription to music as a result of predominant gender schemata, and suggested that mapping such associations to musical structures might not be a trivial matter. However, we can expect the phenomenon to be more explicit, and thus more easily quantifiable, when (much like in advertising, pop music videos and commercial movies) creative production is influenced at all modal levels by gender stereotypes.
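Markers such as tempo and rhythmic irregularity, as discussed above, lend themselves to straightforward quantification. A minimal sketch, assuming note onset times have already been extracted from the audio by some onset detector (the function name and measures below are illustrative, not the study's actual method):

```python
import numpy as np

def tempo_and_irregularity(onset_times):
    """Estimate tempo (BPM) and rhythmic irregularity from note onset times.

    onset_times: sorted 1-D array of onset times in seconds.
    Returns (tempo_bpm, irregularity), where irregularity is the
    coefficient of variation of inter-onset intervals (0 for a
    perfectly steady pulse, larger for more irregular rhythms).
    """
    iois = np.diff(onset_times)              # inter-onset intervals (s)
    tempo_bpm = 60.0 / iois.mean()           # mean IOI converted to BPM
    irregularity = iois.std() / iois.mean()  # coefficient of variation
    return float(tempo_bpm), float(irregularity)

# A perfectly steady 120 BPM pulse: one onset every 0.5 s
steady = np.arange(0.0, 4.0, 0.5)
tempo, irr = tempo_and_irregularity(steady)
```

In practice such descriptors would be computed at scale with a music-information-retrieval toolkit rather than hand-rolled, but the sketch shows how a perceptual claim ("slower tempo", "greater rhythmic irregularity") can be turned into a measurable feature.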

C4DM theme affiliation:

Music Cognition, Machine Listening


Professional and Research Practice (Undergraduate)

This module provides you with the opportunity to examine the role of engineering in society and the expectations of society for a professional engineer. During the module, you should develop and achieve a level of written and spoken communication expected of a professional engineer. You will also construct a personal development plan (PDP) and an ongoing employability skills folder. The assessment of the module is 100 per cent coursework, broken down as follows: oral presentation: 25 per cent; in-class essay: 25 per cent; PDP folder: 25 per cent; employability folder: 25 per cent. Not open to Associate Students or students from other departments.


Research Interests:

machine learning, machine listening, music information retrieval, sound and music computing
