Giulia Lorusso and Alessandro Rudi



Giulia Lorusso was born in Rome in 1990. She studied Piano and Composition at the Conservatory “Giuseppe Verdi” of Milan and in Paris, where she attended the Cursus at IRCAM and obtained a Master’s degree in Composition at the CNSMDP. Between 2016 and 2018 she received commissions from the Spinola-Banna for the Arts Foundation (Turin, Italy) in co-production with IRCAM-Centre Pompidou, the Bludenzer Tage zeitgemäßer Musik (Bludenz, Austria), Radio France, and ProQuartet. Her music has been performed in Italy and abroad (Festival Milano Musica, Festival ManiFeste at IRCAM, Tzlil Meudcan Festival in Tel Aviv, Bludenzer Tage zeitgemäßer Musik, the Forum Tactus for young composers in Brussels) by ensembles such as Distractfold Ensemble, Quartetto Prometeo, Divertimento Ensemble, Ensemble Nikel, Ensemble KNM, mdi Ensemble, the Brussels Philharmonic Orchestra, and the Ensemble Intercontemporain.

Alessandro Rudi is a researcher at INRIA and the École Normale Supérieure in Paris. He received his PhD from the University of Genova in 2014, after being a visiting student at the Center for Biological and Computational Learning at MIT. Between 2014 and 2017 he was a postdoctoral fellow at the Laboratory of Computational and Statistical Learning of the Italian Institute of Technology and the University of Genova.

2019–20 Artistic Research Residency

Between interaction and generation: new perspectives on generative sound environments via localized structured prediction.
In collaboration with the Musical Representations IRCAM-STMS Team and ZKM.

By leveraging state-of-the-art audio signal descriptors and recent developments in generative models for structured prediction and deep learning, this project aims to address questions of computational creativity, with the goal of exploring some of the countless applications of technology in the field of generative sound art.

In particular, the project investigates the integration of environmental sound analysis and recognition techniques with the latest generative machine learning models, to provide a system that, when trained on a large corpus of suitably selected samples, can discover emerging patterns in a given audio input and transfigure them into something unexpected. This opens new artistic perspectives on the interaction between computer-generated sound systems and the surrounding environment, with the potential for “creative” and yet coherent positive feedback loops.
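As a rough illustration of how such an analysis-to-generation loop might be wired together, the sketch below frames an input signal, describes each frame by its log-magnitude spectrum, clusters the frames to expose "emerging patterns", and then re-reads the input through a corpus by substituting frames cluster by cluster. All names here are hypothetical, and plain k-means stands in for the project's actual (localized structured prediction) machinery; this is a minimal sketch, not the residency's implementation.

```python
import numpy as np

def frame_features(signal, frame_len=1024, hop=512):
    """Slice a mono signal into overlapping frames and describe each frame
    by its log-magnitude spectrum (a crude stand-in for richer descriptors)."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(n)])
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    return np.log1p(spectra)

def discover_patterns(features, k=4, iters=20, seed=0):
    """Plain k-means over frame descriptors: each centroid is one
    'emerging pattern' in the material."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((features[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

def transfigure(input_feats, corpus_feats, centroids):
    """Replace each input frame by a corpus frame assigned to the same
    cluster, yielding a corpus-driven re-reading of the input."""
    def assign(F):
        return np.argmin(((F[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    in_labels, corpus_labels = assign(input_feats), assign(corpus_feats)
    out = []
    for lbl in in_labels:
        candidates = np.flatnonzero(corpus_labels == lbl)
        # fall back to the whole corpus if the cluster is empty there
        pool = candidates if len(candidates) else np.arange(len(corpus_feats))
        out.append(int(pool[0]))
    return out  # indices of corpus frames to splice into the output
```

A real system would of course work with perceptually informed descriptors and a learned generative model rather than raw frame substitution, but the loop structure (analyze the environment, match against learned structure, emit transformed material) is the same.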

The project will deal with the following core aspects:
- environmental sound analysis and recognition
- audio classification applied to complex sound scenes
- audio features and modeling for environmental sounds
- machine learning for large scale and structured data
- a generative framework via structured prediction techniques exploiting (multilevel) local structure in the data