France Atlanta 2020: Machines of Loving Grace

October 12 – November 10, 2020,
10 a.m.

All Watched Over by Machines of Loving Grace is a 1967 poem by Richard Brautigan that describes the peaceful and harmonious cohabitation of humans and computers. Fifty years later – following the technological revolutions brought about by the internet, big data, cloud computing and deep learning – Brautigan’s vision takes on a singular resonance. It compels us to rethink our interactions with machines, even with regard to the intimate act of artistic creation. In collaboration with its American partners, France’s Institute for Research and Coordination in Acoustics/Music (IRCAM) invites participants to discover remarkable examples of possible artistic relationships between humans and computers during two seminars featuring researchers and artists who are shaping this new relationship.

October 14-15

"Working Creatively with Machines"
October 14, 11 am (Atlanta) / 5 pm (Paris)
Introduction by Yves Berthelot (Vice-Provost for International Initiatives, Georgia Institute of Technology) and Frank Madlener (director of IRCAM).
With Carmine Emanuele Cella (CNMAT – UC Berkeley), Rémi Mignot (IRCAM), Nicolas Obin (Sorbonne Université – IRCAM), Alex Ruthmann (NYU), and Jason Freeman (Georgia Tech).

"Performances with Machines"

October 15, 11 am (Atlanta) / 5 pm (Paris)
With Jérôme Nika (IRCAM), Benjamin Lévy (IRCAM), Grace Leslie (Georgia Tech), Daniele Ghisi (composer), and Elaine Chew (CNRS).

Virtual exhibition
October 12 - November 10

From October 12 to November 10, a virtual exhibition will present excerpts of works and tools that rely heavily on artificial intelligence techniques and that question the relationship between humans and machines. The exhibition includes works by artists and researchers such as Daniele Ghisi (composer), Jérôme Nika and Gérard Assayag (researchers, IRCAM), Jason Freeman (Georgia Tech), Carmine Emanuele Cella (professor and composer, CNMAT – UC Berkeley), and Alex Ruthmann (professor, New York University).

This event is organized by IRCAM – the Institute for Research and Coordination in Acoustics/Music – and the STMS lab (IRCAM, The French National Centre for Scientific Research, the Ministry of Culture and Sorbonne University), in collaboration with Georgia Tech’s School of Music, NYU Steinhardt – Music Education, the UC Berkeley Center for New Music and Audio Technologies, and the Atlanta Office of the Cultural Services of the Embassy of France in the United States.
  • 'Atlas' Concert, Centre Pompidou, 2019. With Carmine Emanuele Cella © Hervé Véronèse
  • An Experiment With Time by Daniele Ghisi © Daniele Ghisi
  • Lullaby Experience by Pascal Dusapin, Jérôme Nika (IRCAM computer music design) © Quentin Chevrier
  • Ircam Live, Centre Pompidou, 2018. With Benjamin Lévy & Raphaël Imbert © Sébastien Calvet


  • Daniele Ghisi: Born in Italy in 1984, Daniele Ghisi studied music composition at the Bergamo Conservatory with S. Gervasoni and continued his studies in IRCAM’s Cursus program. In 2009-2010 he was composer in residence at the Akademie der Künste (Berlin), and in 2011-2012 composer in residence in Spain as a member of the Académie de France in Madrid – Casa de Velázquez. In 2015 he was in residence in Milan with the Divertimento Ensemble, which recorded his first monographic CD (Geografie). Since 2010, he has developed, with composer Andrea Agostini, “bach: automated composer’s helper”, a library for computer-assisted composition. He is the co-founder of a blog in which he writes regularly. His music is published by Ricordi. Between 2017 and 2020, he taught electroacoustic composition at the Genoa Conservatory. He is currently a composer-researcher at the University of California, Berkeley (CNMAT).

  • Jérôme Nika: Jérôme Nika is a researcher in human-machine musical interaction in the Music Representations team at IRCAM. He graduated from the French Grandes Écoles Télécom ParisTech and ENSTA ParisTech, and also studied acoustics, signal processing, computer science applied to music, and composition. He specialized in the applications of computer science and signal processing to digital creation and music through a PhD (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), and then as a researcher at IRCAM. His research focuses on introducing authoring, composition, and control into human-computer music co-improvisation. This work has led to numerous collaborations and musical productions, particularly in improvised music (Steve Lehman, Bernard Lubat, Benoît Delbecq, Rémi Fox) and contemporary music (Pascal Dusapin, Marta Gentilucci). In 2019-2020, his work featured in three ambitious productions: Lullaby Experience, an evolving project by composer Pascal Dusapin, and two improvised music projects, Silver Lake Studies, in duo with Steve Lehman, and C’est Pour ça, in duo with Rémi Fox. In 2020 he is in residence at Le Fresnoy – Studio National des Arts Contemporains.

  • Gérard Assayag: Gérard Assayag, a research director at IRCAM, heads the IRCAM Music Representations team, which he founded in 1992. He was head of the IRCAM research lab (STMS, a joint lab between IRCAM, CNRS, and Sorbonne University) from 2011 to 2017. His research interests center on music representation issues, including programming languages, machine learning, constraint and visual programming, computational musicology, music modeling, and computer-assisted composition and interaction. With his collaborators, he designed OpenMusic and OMax, two music research software environments that have gained international reputations and are used in many places for computer-assisted composition, analysis, and improvisation.

  • Jason Freeman: Jason Freeman is a Professor of Music at Georgia Tech and Chair of the School of Music. His artistic practice and scholarly research focus on using technology to engage diverse audiences in collaborative, experimental, and accessible musical experiences. He also develops educational interventions in K-12, university, and MOOC environments that broaden and increase engagement in STEM disciplines through authentic integrations of music and computing. His music has been performed at Carnegie Hall, exhibited at ACM SIGGRAPH, published by Universal Edition, broadcast on public radio’s Performance Today, and commissioned through support from the National Endowment for the Arts. Freeman’s wide-ranging work has attracted over $10 million in funding from sources such as the National Science Foundation, Amazon, and Turbulence. It has been disseminated through over 80 refereed book chapters, journal articles, and conference publications. Freeman received his B.A. in music from Yale University and his M.A. and D.M.A. in composition from Columbia University.

  • Carmine Emanuele Cella: Carmine Emanuele Cella is an internationally renowned composer with advanced studies in applied mathematics. He studied at the G. Rossini Conservatory of Music in Italy, earning master’s degrees in piano, computer music, and composition, and received a PhD in musical composition from the Accademia di S. Cecilia in Rome. He also studied philosophy and mathematics, earning a PhD in mathematical logic from the University of Bologna with the thesis On Symbolic Representations of Music (2011). In 2007-2008, he worked as a researcher in IRCAM’s Analysis/Synthesis team in Paris on audio indexing, and since January 2019 he has been Assistant Professor of Music and Technology at the University of California, Berkeley.

  • Alex Ruthmann: S. Alex Ruthmann is Associate Professor of Music Education & Music Technology, and the Director of the NYU Music Experience Design Lab (MusEDLab) at NYU Steinhardt. He holds affiliate appointments with the NYU Digital Media Design for Learning program, and the Program on Creativity and Innovation at NYU Shanghai. He currently serves as Chair of the Music in Schools and Teacher Education Commission for the International Society for Music Education. Ruthmann is co-author of Scratch Music Projects, a new book published by Oxford University Press bringing creative music and coding projects to students and educators. He is co-editor of the Oxford Handbook of Technology and Music Education, and the Routledge Companion to Music, Technology and Education. He also serves as Associate Editor of the Journal of Music, Technology, and Education. Ruthmann’s research focuses on the design of new technologies and experiences for music making, learning and engagement. Partners include the New York Philharmonic, Shanghai Symphony, Peter Gabriel, Herbie Hancock, Yungu and Portfolio Schools, Tinkamo, UNESCO, and the Rock and Roll Forever Foundation. The MusEDLab creative learning and software projects are in active use by over 900,000 people in over 150 countries across the world.

  • Rémi Mignot: Rémi Mignot is a researcher in the Analysis/Synthesis team at IRCAM. In 2009, he obtained his PhD from the EDITE doctoral school for work on the modeling and simulation of acoustic waves in wind instruments, supervised by Thomas Hélie (IRCAM) and Denis Matignon (Supaero). From 2010 to 2012, he was a postdoctoral researcher at the Institut Langevin (Paris), working on the sampling of room impulse responses using compressed sensing with Laurent Daudet (Paris Diderot) and François Ollivier (UPMC). From 2012 to 2014, he worked in the Department of Signal Processing and Acoustics at Aalto University in Espoo, Finland, with Vesa Välimäki on the extended subtractive synthesis of musical instruments. He returned to IRCAM in 2014 to conduct research on audio indexing and classification with Geoffroy Peeters. Since 2018, he has led research on music information retrieval in the Analysis/Synthesis team.

  • Nicolas Obin: Nicolas Obin is a researcher in audio signal processing, machine learning, and statistical modeling of sound signals, with a specialization in speech processing. His main area of research is the generative modeling of expressivity in spoken and singing voices, with applications to fields such as speech synthesis, conversational agents, and computational musicology. He is actively involved in promoting digital science and technology for the arts, culture, and heritage. In particular, he has collaborated with renowned artists (Georges Aperghis, Philippe Manoury, Roman Polanski, Philippe Parreno, Eric Rohmer, André Dussollier) and has helped reconstruct the digital voices of public figures, including the artificial cloning of André Dussollier’s voice (2011), the short film Marilyn (P. Parreno, 2012), and the documentary Juger Pétain (R. Saada, 2014). He regularly gives guest lectures at renowned institutions (Collège de France, École Normale Supérieure, Sciences Po) and organizations (CNIL, AIPPI), and appears in the press and media (Le Monde, Télérama, TF1, France 5, Arte, Pour la Science).

  • Grace Leslie: Grace Leslie is a flutist, electronic musician, and scientist. She develops brain-music interfaces and other physiological sensor systems that reveal to an audience aspects of her internal cognitive and affective state left unexpressed by sound or gesture. As an electronic music composer and improviser, she maintains a brain-body performance practice, striving to integrate the conventional modes of emotional and musical expression she learned as a flutist with the new forms available to her as an electronic musician. In recent years she has performed this music in academic and popular music venues, conferences, and residencies in the United States, UK, Australia, Germany, Singapore, South Korea, China, and Japan, and has released three records of this mind-body music. During her PhD studies (Music and Cognitive Science, UCSD), she completed a yearlong position at IRCAM in Paris, where she collaborated on an interactive sound installation and conducted experiments studying the effect of active involvement on music listening. She completed her undergraduate and master’s work in Music, Science, and Technology at CCRMA, Stanford University.

  • Benjamin Lévy: A computer music designer at IRCAM, Benjamin Lévy studied both science (primarily computer science, with a PhD in engineering) and music. Since 2008, he has collaborated on scientific and musical projects with several teams at IRCAM, in particular around the OMax improvisation software. As an R&D engineer and developer, he has also worked in the private sector for companies specializing in creative audio technologies. He has taken part in artistic projects at IRCAM and elsewhere as a computer musician for contemporary music works as well as jazz, free improvisation, theater, and dance. He has collaborated with choreographers such as Aurélien Richard, worked on musical theater with Benjamin Lazar, and performs with the jazz saxophonist Raphaël Imbert.

  • Philippe Esling: Philippe Esling received an MSc in acoustics, signal processing, and computer science in 2009 and a PhD on multiobjective time series matching in 2012. He was a postdoctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012, and has been a tenured associate professor at IRCAM and Sorbonne Université (Paris 6) since 2013. In this short time span, he has authored or co-authored over 15 peer-reviewed papers in prestigious journals such as ACM Computing Surveys, Proceedings of the National Academy of Sciences, IEEE TASLP, and Nucleic Acids Research. He received a young researcher award for his work on audio querying in 2011 and a PhD award for his work on multiobjective time series data mining in 2013. In applied research, he developed and released Orchids, the first computer-aided orchestration software, commercialized in fall 2014 and already used by a wide community of composers. He has supervised six master’s interns and a C++ developer for a full year, and currently directs two PhD students. He is the lead investigator in time series mining at IRCAM, the main collaborator in the international France-Canada SSHRC partnership, and the supervisor of an international working group on orchestration.