Research Projects

OpenTuning

Creative individuation through interaction design with generative music systems

Dates : March 2026 to December 2029

Inside Artificial Improvisation

Inside the black box of artificial improvisation

Dates : January 2026 to December 2029

INTIM

INteractive analysis/synthesis of musical TIMbre

Dates : September 2024 to March 2026

PostGenAI@Paris

A project that aims to strengthen France's artificial-intelligence strategy by creating an international center of excellence specialized in post-generative AI

Dates : January 2025 to December 2029

Koral

Playing music collectively using smartphones

Dates : December 2023 to December 2024

EVA

Explicit Voice Attributes

Dates : October 2023 to January 2028

ReNAR

Reducing Noise with Augmented Reality

Dates : October 2023 to March 2028

Axe Son-Musique-Santé

This cross-cutting Sound–Music & Health strand brings together research related to wellbeing and health conducted within the STMS laboratory.

DeTOX

Combating deepfake videos of French public figures

Dates : January 2023 to December 2025

REVOLT

REvealing human bias with real Time VOcal deep fakes

Dates : January 2021 to December 2023

Synthesis of Directionality by Corpus

Aaron Einbond's artistic residency explores the cohabitation of instrumental and synthetic sounds in a diffusion space. The playing of an instrumentalist on stage is captured and analyzed in real time: timbral audio descriptors are computed and used to drive electronics produced by corpus-based concatenative synthesis (implemented with CataRT-MuBu).

This raises the question of how the samples (grains) of the corpus are diffused. For this purpose, the project uses the IKO, a compact spherical loudspeaker array that can simulate radiation patterns (represented in third-order spherical harmonics). The radiation patterns are drawn from a directivity database of historical and modern acoustic instruments measured and made available by TU Berlin: the audio descriptors extracted from the player's performance are used to select one or more instruments from the TU database, whose directivity patterns are then applied to the grains.

The underlying idea is not to reproduce the spatial radiation of instruments faithfully, but to give the synthesized sounds a "natural, plausible" spatiality, so that the electronics merge harmoniously with the acoustic instruments on stage.
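The descriptor-driven selection described above can be sketched as a minimal nearest-neighbour match. This is a simplified illustration, not the residency's actual implementation: the `DIRECTIVITY_DB` mapping instrument names to a single reference spectral centroid is a hypothetical stand-in for the TU Berlin measurements, and a real system would match several descriptors at once.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid (Hz) of one audio frame, a basic timbral descriptor."""
    windowed = frame * np.hanning(len(frame))  # Hann window to limit leakage
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def select_directivity(centroid, database):
    """Pick the instrument whose reference centroid is closest to the live one."""
    return min(database, key=lambda name: abs(database[name] - centroid))

# Hypothetical toy database: instrument name -> reference spectral centroid (Hz).
DIRECTIVITY_DB = {"cello": 800.0, "clarinet": 1500.0, "trumpet": 2500.0}

sr = 44100
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 1400.0 * t)  # stand-in for the captured instrument
c = spectral_centroid(frame, sr)
chosen = select_directivity(c, DIRECTIVITY_DB)  # the pattern applied to grains
```

In the actual pipeline the selection runs continuously on the live input, so the directivity pattern applied to the grains evolves with the player's timbre.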

RAMHO

Musical Research and Acoustics in France after 1945: An Oral History


With the supervision of:

Ircam, Sorbonne University, CNRS, Ministry of Culture
