New Frontiers in Music Information Processing (MIP-Frontiers)

(Web page under construction.)

MIP-Frontiers is a European Training Network funded by the European Commission, starting on 1 April 2018. We will be recruiting 15 PhD students to start in 2018, across a range of exciting projects in collaboration with our industry and cultural partners (listed below). PhD students will be enrolled at one of four institutions: Queen Mary University of London, Universitat Pompeu Fabra, Telecom ParisTech, and Johannes Kepler University Linz. Applicants may be of any nationality, but must satisfy the EU mobility rule: they must not have lived in the country of their host institution for more than 12 months in the 3 years before recruitment.

To give an overview of the whole project, the abstract from the proposal is included below, followed by the list of members of the MIP-Frontiers consortium and a table of the PhD projects on offer (including the hosting organisation, where most of the work will be done, and the secondary host for the secondment).

Abstract

Music Information Processing (also known as Music Information Research; MIR) involves the use of information processing methodologies to understand and model music, and to develop products and services for creation, distribution and interaction with music and music-related information. MIR has reached a state of maturity where there are standard methods for most music information processing tasks, but as these have been developed and tested on small datasets, the methods tend to be neither robust to different musical styles and use contexts, nor scalable to industrial-scale datasets. To address this need, and to train a new generation of researchers who are aware of, and can tackle, these challenges, we bring together leading MIR groups and a wide range of industrial and cultural stakeholders to create a multidisciplinary, transnational and cross-sectoral European Training Network for MIR researchers, in order to contribute to Europe's leading role in this field of scientific innovation and to accelerate the impact of innovation on European products and industry. The researchers will develop breadth in the fields that make up MIR and in transferable skills, whilst gaining deep knowledge and skills in their own area of speciality. They will learn to perform collaborative research, and to think entrepreneurially and exploit their research in new ways that benefit European industry and society. The proposed work is structured along three research frontiers identified as requiring intensive attention and integration (data-driven, knowledge-driven, and user-driven approaches), and will be guided by and grounded in real application needs by a unique set of industrial and cultural stakeholders in the consortium, which range from consumer electronics companies and big players in media entertainment to innovative SMEs, cultural institutions, and even a famous opera house, thus encompassing a very wide spectrum of the digital music world.

Consortium

Partners

PhD Projects

PhD Topic | Supervisor | Host | Secondment
Representations and models for singing voice transcription | Simon Dixon | QMUL | DRM
Instrument modelling to aid polyphonic transcription | Simon Dixon | DRM | QMUL
Leveraging user interaction to learn performance tracking | Simon Dixon | QMUL | TIDO
Fine grain time resolution audio features for MIR | Mark Sandler | ROLI | QMUL
Note level audio features for understanding and visualising musical performance | Mark Sandler | QMUL | ROLI
Tag propagation from structured to unstructured audio collections | Xavier Serra | UPF | JAM
Extending audio collections by combining audio descriptions and audio transformations | Xavier Serra | UPF | NI
Audio content description of broadcast recordings | Emilia Gómez | UPF | BMAT
Behavioural music data analytics | Gael Richard | TPT | DZ
Voice models for lead vocal extraction and lyrics alignment | Gael Richard | TPT | AN
Multimodal movie music track remastering | Gael Richard | TPT | TC
Context-driven music transformation | Gael Richard | TPT | TC
Defining, extracting and recreating studio production style from audio recordings | Gael Richard | TPT | SONY
Large-scale multi-modal music search and retrieval without symbolic representations | Gerhard Widmer | JKU | KI
Live tracking and synchronisation of complex musical works via multi-modal analysis | Gerhard Widmer | JKU | VSO

How to Apply

We will advertise the positions soon, and plan to accept applications from January 2018.

Contact

Email the relevant supervisor, or the project coordinator, Simon Dixon (s.e.dixon@qmul.ac.uk).