
Synthesis of Directionality by Corpus

Aaron Einbond's artistic residency focuses on the cohabitation of instrumental and synthetic sounds in a diffusion space. The playing of an instrumentalist on stage is captured and analyzed in real time: timbral audio descriptors are computed and used to drive corpus-based concatenative synthesis (implemented with CataRT-MuBu). This raises the question of how to diffuse the samples (grains) of the corpus. For this purpose, we use the IKO, a compact spherical loudspeaker array that can simulate radiation patterns described by their third-order spherical-harmonic representation. The radiation patterns are drawn from a directivity database of historical and modern acoustic instruments measured and made available by TU Berlin: the audio descriptors extracted from the player's performance are used to select one or more instruments from the TU database, whose directivity patterns are then applied to the grains. The underlying idea is not to reproduce the spatial radiation of instruments faithfully, but to give the synthesized sounds a "natural, plausible" spatiality, so that the electronics merge harmoniously with the acoustic instruments on stage.
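The descriptor-driven selection can be illustrated with a minimal sketch: each instrument in the directivity database is tagged with a vector of timbral descriptors, and the grain's analyzed descriptors pick the nearest instrument. The descriptor names, values, and instrument set below are purely illustrative assumptions, not the actual CataRT-MuBu pipeline or the TU Berlin database.

```python
import numpy as np

# Hypothetical descriptor space: each instrument is tagged with
# normalized timbral descriptors, e.g. (spectral centroid, flatness).
# Values are illustrative placeholders, not measured data.
instruments = {
    "violin":   np.array([0.72, 0.30]),
    "clarinet": np.array([0.41, 0.12]),
    "trumpet":  np.array([0.85, 0.25]),
    "cello":    np.array([0.35, 0.20]),
}

def select_instrument(grain_descriptors):
    """Return the instrument whose descriptor vector is closest
    (Euclidean distance) to the analyzed grain's descriptors."""
    names = list(instruments)
    dists = [np.linalg.norm(instruments[n] - grain_descriptors)
             for n in names]
    return names[int(np.argmin(dists))]

# A bright grain maps to the brightest instrument in this toy set:
print(select_instrument(np.array([0.80, 0.28])))  # trumpet
```

In a real-time setting this lookup would run per grain, so the selected directivity pattern can change as fast as the player's timbre does.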

In addition, we have organized the database of radiation patterns by analyzing the similarities and dissimilarities of the spatial patterns with a spherical correlation operator. This makes it possible to "navigate" the directivity space coherently and to interpolate between the radiation patterns of several instruments. On this basis, different creative strategies have been implemented to generate dynamic 3D radiation patterns and to produce a spatial polyphony.
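For patterns expanded on an orthonormal spherical-harmonic basis, a correlation over the sphere reduces, by Parseval's theorem, to a normalized dot product of the coefficient vectors, and interpolation can be sketched as a linear blend of coefficients. This is only a minimal sketch under those assumptions; the project's actual spherical correlation operator and interpolation strategy may differ (e.g. rotation-invariant or per-degree measures). The random coefficients stand in for measured directivities.

```python
import numpy as np

# A third-order spherical-harmonic expansion has (3 + 1)**2 = 16
# coefficients. Random placeholders stand in for measured patterns.
rng = np.random.default_rng(0)
pattern_a = rng.standard_normal(16)
pattern_b = rng.standard_normal(16)

def spherical_correlation(f, g):
    """Normalized correlation of two patterns on an orthonormal SH
    basis: the integral over the sphere reduces to the dot product
    of the coefficient vectors (Parseval's theorem)."""
    return float(np.dot(f, g) /
                 (np.linalg.norm(f) * np.linalg.norm(g)))

def interpolate(f, g, t):
    """Linear blend of SH coefficients: t=0 gives f, t=1 gives g."""
    return (1.0 - t) * f + t * g

# A pattern correlates perfectly with itself, and the midpoint
# pattern lies between the two endpoints in correlation terms.
mid = interpolate(pattern_a, pattern_b, 0.5)
print(spherical_correlation(pattern_a, pattern_a))  # 1.0
```

Sorting the database by such pairwise correlations is one simple way to obtain the coherent "navigation" of the directivity space mentioned above.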

With the supervision of:

Ircam, Sorbonne University, CNRS, Ministry of Culture
