Corpus-Based Concatenative Synthesis

Database of recorded sounds and a unit selection algorithm

Corpus-based concatenative synthesis uses a database of recorded sounds and a unit selection algorithm that chooses the segments from the database that best suit the musical sequence to be synthesized by concatenation. The selection is based on characteristics of the recordings obtained through signal analysis, matching, for example, the pitch, energy, or spectrum.
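The selection step above can be sketched in a few lines. This is a minimal illustration, not the actual CataRT algorithm: it assumes each database unit carries analysis descriptors (pitch, energy, spectral centroid) and picks the unit closest to a target by weighted Euclidean distance in descriptor space. All names and values are hypothetical.

```python
import math

# Hypothetical unit-selection sketch: each unit is a dict of analysis
# descriptors; the unit minimizing the weighted Euclidean distance to
# the target descriptors is chosen for concatenation.
def select_unit(target, corpus, weights):
    def distance(unit):
        return math.sqrt(sum(
            weights[k] * (unit[k] - target[k]) ** 2
            for k in weights))
    return min(corpus, key=distance)

corpus = [
    {"id": 0, "pitch": 220.0, "energy": 0.4, "centroid": 900.0},
    {"id": 1, "pitch": 440.0, "energy": 0.7, "centroid": 1800.0},
    {"id": 2, "pitch": 445.0, "energy": 0.6, "centroid": 1750.0},
]
target = {"pitch": 444.0, "energy": 0.65, "centroid": 1780.0}
weights = {"pitch": 1.0, "energy": 100.0, "centroid": 0.01}

best = select_unit(target, corpus, weights)
print(best["id"])  # → 2, the unit nearest to the target descriptors
```

The weights compensate for the very different scales of the descriptors (Hz for pitch, normalized energy), so that no single dimension dominates the distance.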

The usual methods for musical synthesis are based on a model of the sound signal, but it is very difficult to establish a model that preserves all of the detail and nuance of a sound. Concatenative synthesis, which uses real recordings, preserves these details. Implementing corpus-based concatenative synthesis in real time enables interactive exploration of a sound database and granular composition that targets specific sound characteristics. It also makes it possible for composers and musicians to reach new sounds. This principle is carried out in the CataRT system.

This system displays a 2D projection of the descriptor space that can be browsed with a mouse or external controllers. Grains are then selected in the original recording and played back by geometric proximity, triggered by a metronome, looped, or played continuously. It is also possible to define a perimeter around one's current position that selects a subset of grains, which are then played at random. CataRT is used for musical composition, performance, and in various sound installations. As this field of research is fairly young, several interesting research questions have been raised (or will be raised in the future) concerning the analysis and exploitation of the information found in the data of a corpus, its visualization, and real-time interaction.
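The perimeter-based selection described above can be sketched as follows. This is a hypothetical illustration of the idea, not CataRT's implementation: grains are assumed to be plotted at (x, y) positions given by two descriptors, and the grains falling within a radius of the current cursor position form a subset from which one is drawn at random for playback.

```python
import math
import random

# Hypothetical sketch of 2D descriptor-space browsing: keep only the
# grains within `radius` of the cursor, then pick one at random.
def grains_in_radius(grains, cursor, radius):
    cx, cy = cursor
    return [g for g in grains
            if math.hypot(g["x"] - cx, g["y"] - cy) <= radius]

grains = [
    {"id": 0, "x": 0.10, "y": 0.20},
    {"id": 1, "x": 0.50, "y": 0.50},
    {"id": 2, "x": 0.55, "y": 0.45},
    {"id": 3, "x": 0.90, "y": 0.90},
]
nearby = grains_in_radius(grains, cursor=(0.5, 0.5), radius=0.1)
print(sorted(g["id"] for g in nearby))  # → [1, 2]

chosen = random.choice(nearby)  # one nearby grain, played at random
```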

With the supervision of:

Ircam, Sorbonne University, CNRS, Ministry of Culture
