Thomas Hélie: "We model instruments, musicians, and the signals they produce"

Publish date
Dec. 12, 2017

Physical modelling, digital analysis of sounds, identification of signals: the Sound Systems and Signals: Audio/Acoustic, Instruments team at IRCAM unravels the mechanisms of sound production in all types of instruments, from brass and string instruments to the voice and electronic systems.

Your team, "Sound Systems and Signals: Audio/Acoustic, Instruments", was created at IRCAM in January 2017. What are its guiding principles?

Scientists who study musical instruments and the sounds they make, whether to understand their mechanics or to create sound synthesis, generally do so using two distinct approaches. Some are interested in the cause: they describe and model the physical phenomena that govern the behavior of the instruments and their coupling with a musician. This is the "systems" approach. Others are more interested in the structure of the sounds themselves, without being concerned about how they are produced. This is the "signals" approach. In reality, there are interactions, and even a continuity, between these two approaches, and this was already a subject of work at IRCAM. The laboratory decided to recognize and develop it by creating our team.

What are the objectives of your work?

In the field of audio and music, an important motivation is to create the most realistic sound synthesis possible using physical modelling. Once we have a model that we feel is sufficiently realistic, we can offer tools to assist instrument-making: tools that try to predict the consequences of physical modifications to a traditional instrument. We can also create instruments that don't exist physically, but which respect all the laws of physics, on a computer using "virtual instrument-making". And we can make hybrid instruments by adding sensors, digital processing, or actuators to traditional instruments to modify their sounds. More generally, we work on the control and correction of the sound production of instruments. The results of our work can be used for artistic creation or for the conservation and reproduction of ancient instruments. We also carry out educational projects and health-related projects on the voice.

How do you study sound systems?

Our team carries out experimental studies. For this, we use techniques to measure vibrations using precise sensors (lasers, accelerometers, pressure sensors, etc.).

We also develop robotic systems that allow us to vary the playing parameters continuously, and in a reproducible way, while the instrument is being played. For example, we built an artificial mouth for brass instruments, controlled by actuators and equipped with lips, that is capable of playing notes, but also transients and more complex sound regimes.

Thomas Hélie, Sound Systems and Signals: Audio/Acoustic, Instruments team © Deborah Lopatin

Can you build models based on the results of these experiments?

They let us validate or improve them. One of the greatest difficulties with modelling musical instruments is that these are non-linear systems. If you blow twice as hard into a clarinet or pluck a guitar string twice as hard, you don't get a sound that is twice as loud, you get a different sound. In physics, when we take this non-linearity into account in an equation, we must pay attention to the quality of the simulations and avoid certain predictable pitfalls that can lead to non-physical behaviors. This is why we have introduced fairly general theoretical frameworks into our work. One of them is differential geometry; another is called "port-Hamiltonian systems" (Hamiltonian systems with ports). These make it possible for us to model subsystems separately, even when they obey different laws of physics, while being sure that we can connect them into a realistic, complete model in the end. In particular, the models and simulations developed in this way respect the fact that our systems are passive: within them, there is no spontaneous creation of energy, only transfers.
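The passivity idea can be illustrated with a small numerical sketch. This is not the team's actual code, and the nonlinear oscillator below (a toy stand-in for a vibrating string with a stiffening spring) is an assumption for illustration; the point is that a discrete-gradient scheme makes the simulated energy balance exact, so the numerical model, like the physical one, can only lose energy through its damper, never create it.

```python
# Hedged sketch: passive simulation of a nonlinear oscillator with a
# discrete-gradient scheme. Energy can only leave through the damper.
import numpy as np

m, c = 1.0, 0.05           # mass, damping (the only energy sink)
k, k3 = 1.0, 4.0           # linear and cubic spring stiffness (nonlinear!)

def V(q):                  # potential energy of the nonlinear spring
    return 0.5 * k * q**2 + 0.25 * k3 * q**4

def H(q, p):               # Hamiltonian = total stored energy
    return p**2 / (2 * m) + V(q)

def step(q, p, dt, iters=30):
    """One implicit discrete-gradient step (fixed-point iteration).
    The scheme satisfies H_new - H_old = -c * dt * (p_mid/m)**2 <= 0."""
    qn, pn = q, p
    for _ in range(iters):
        p_mid = 0.5 * (p + pn)
        dq = qn - q
        # discrete gradient of V: exact energy difference quotient
        dVdq = (V(qn) - V(q)) / dq if abs(dq) > 1e-14 else k * q + k3 * q**3
        qn = q + dt * p_mid / m
        pn = p + dt * (-dVdq - c * p_mid / m)
    return qn, pn

q, p, dt = 1.0, 0.0, 0.01
energies = [H(q, p)]
for _ in range(2000):
    q, p = step(q, p, dt)
    energies.append(H(q, p))

# Passivity check: the stored energy never increases between steps.
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
```

The same guarantee is what allows separately modelled subsystems (lips, air column, soundboard) to be interconnected safely: each connection only exchanges energy, so the assembled model cannot blow up for numerical reasons.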

Can you give us some examples?

If we think about a trombone player with her instrument, we can model them by distinguishing several parts: the mouth, using fluid mechanics; the lips, using the solid mechanics of deformable materials; the air in the mouthpiece, again using fluid mechanics; and the air that vibrates in the instrument's tube, using acoustics.

We have also modelled an electro-mechanical Fender Rhodes piano: the key, the hammer, and the felt all follow the laws of solid mechanics; the metallic bar that is struck vibrates according to the laws of continuum mechanics; the vibration of the bar modifies the magnetic field of a magnet-coil system, which is electromagnetism; and finally, we use the laws of electronics for the loudspeakers, and even acoustics if we consider their sound radiation.

You also try to reproduce certain physical systems without knowing the laws: on what basis?

At the junction of the "systems" approach and the "signals" approach, we are developing "double-blind" methods to model systems without knowing their internal structure. For example, how can we completely copy an electronic guitar effect pedal, or an amplifier, simply by feeding it input signals and observing the corresponding outputs? What type of input signals should we use to be as efficient as possible and obtain as faithful a copy as possible? These questions become difficult when you have to take non-linearity into account. We have therefore developed mathematical methods that not only analyze sound frequencies at a given point, but also capture regular distortions together with a memory effect. These are signal-analysis tools that let us take the underlying physics into account. We can thus produce a purely "signal" representation of a physical system; synthesis then avoids constantly re-computing sounds from the equations that describe the physics of the system. On a fundamental level, we are also interested in working in the opposite direction: recovering the equations of the physics from an analysis of the signal. This problem is still unresolved.
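The copying-by-input/output idea can be sketched as follows. Everything here is an illustrative assumption rather than the team's actual method: the "pedal" is a toy model (a short filter followed by a soft clipper), the probe is white noise, and the identified model is a truncated polynomial-with-memory (Volterra-style) regression fitted by least squares.

```python
# Hedged sketch: identify a nonlinear effect with memory from
# input/output signals alone, without opening the "black box".
import numpy as np

rng = np.random.default_rng(0)

def pedal(x):
    """'Unknown' effect we only observe through its output: a short
    linear filter (memory) followed by a soft clipper (distortion)."""
    h = np.array([0.6, 0.3, 0.1])            # 3-sample memory
    return np.tanh(np.convolve(x, h)[:len(x)])

# Probe the black box with a broadband signal: white noise excites
# all frequencies and a wide range of amplitudes at once.
x = rng.normal(size=5000)
y = pedal(x)

# Regressors: lagged inputs up to memory M, polynomial orders 1..3.
M, order = 3, 3
lags = np.stack([np.roll(x, k) for k in range(M)], axis=1)
lags[:M] = 0.0                               # discard wrap-around samples
features = np.hstack([lags**d for d in range(1, order + 1)])

theta, *_ = np.linalg.lstsq(features, y, rcond=None)
y_hat = features @ theta

err = np.sqrt(np.mean((y - y_hat)**2)) / np.std(y)
print(f"relative error: {err:.3f}")          # small => faithful "signal" copy
```

Once `theta` is estimated, reproducing the effect is just the cheap `features @ theta` product: a purely "signal" representation that avoids re-solving the physical equations at every sample.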

You are interested in the human voice. What makes this such a particular instrument?

It’s the only instrument that everyone knows how to play naturally. But it’s also the most complex instrument in terms of its constitution and its possible behaviors.

For example, the exciter is the larynx, more precisely the vocal folds, commonly referred to as the "vocal cords". To better understand how they work, we collaborate with a laboratory in Grenoble that studies the materials and structure of these folds. From a control point of view, there are several registers: a normal voice, a husky voice, a whistle voice, a very low creaky voice. The transitions from one register to another are complex phenomena. There is still a lot of research to be done before we completely understand the voice: its normal functioning, but also what we hear when pathologies are present.

You mentioned educational applications for your work. What are they?

Our team is part of the European project iMuSciCA, which began in January 2017 with French, Swiss, Spanish, Belgian, and Greek partners. The goal is to create new learning methods through innovative didactic tools for certain science subjects: an approach based on discovery and research. An adapted version of Modalys, the physical-model sound synthesis software developed several years ago at IRCAM and maintained by our team, makes it possible for middle school and high school students to create virtual musical instruments, modify them, and test different elements in real time, such as their geometry or the nature of their materials.

By Luc Allemand