IRCAM is looking for candidates for a three-year PhD position within HAIKUS, an interdisciplinary research project funded by the ANR (French National Research Agency).
The objective of the project is to provide deep learning and audio signal processing methods for the seamless integration of computer-generated immersive audio content into Audio Augmented Reality (AAR) systems. This objective calls for automatically adapting the rendering of virtual auditory objects to the continually changing acoustics of the user's real environment (e.g. when the sources move or when the user walks through one or several rooms of a venue). Three main challenges can be identified:
1/ the blind estimation of room acoustic parameters or room geometry from observed reverberant audio signals, together with the analysis of live sounds occurring in the room;
2/ the participant's sense of immersion in the augmented scene, which requires realistic and congruent acoustic feedback as the user navigates through the sound scene;
3/ interactive augmented 3D audio scenes are typically rendered as binaural signals played over headphones. It is well known that convincing binaural reproduction requires individual Head Related Transfer Functions (HRTFs), which ideally should be measured in an anechoic chamber using calibrated signals. In this project, we aim to blindly estimate the listener's individual HRTFs from binaural signals captured by in-ear or ear-through earphones in real, unsupervised acoustic environments (i.e. without knowing the source signals or their positions).
The thesis will concentrate on the latter challenge, which has clear links to the two previous ones.
The project brings together three public research laboratories, IRCAM, LORIA (INRIA/Univ. Lorraine) and MPIA (Sorbonne Univ.), with skills and expertise in the fields of audio signal processing, machine learning, acoustic imaging, room acoustics modelling and binaural technologies.
The PhD will be supervised by two senior researchers active in 3D audio signal processing and computer science. The thesis will be hosted at IRCAM, located in the centre of Paris near the Georges Pompidou Center, at 1, Place Igor Stravinsky 75004 Paris.
IRCAM (Institut de Recherche et Coordination Acoustique/Musique) is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, scientific research and education in sound and music technologies. The Acoustic and Cognitive Spaces team is part of the research unit STMS (Science and Technology for Music and Sound), supported by IRCAM, CNRS and Sorbonne Université.
The candidate must hold a Master's degree in computer science with strong knowledge of and experience in artificial intelligence and audio signal processing, as well as excellent programming skills in Matlab and Python.
SALARY: gross monthly salary of 1900€
Deadline for application: 24 August 2020
Expected starting date: 30 November 2020 at the latest
Please send an application letter with the reference 2020_HAIKUS, together with your resume, any relevant information addressing the above challenges, and letters of recommendation, preferably by email, to: warusfel at ircam dot fr, with cc to giavitto at ircam dot fr.