The training programs below have not been scheduled yet. If you are interested, please contact us at 01 44 78 47 70 | info-pedagogie(at)ircam.fr.
Learn about Live Video in the Max Environment.
Upon completion of this course, participants will gain a global understanding of the possibilities of Jitter (OpenGL programming, Java, shaders) and acquire the foundations necessary to manipulate 2D and 3D images. They will be able to build a complete chain of image acquisition, analysis, processing, and display using basic modules.
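The course itself works in Max/Jitter, but the acquisition → analysis → processing → display chain it describes can be sketched in any language. Below is a minimal, illustrative Python stand-in (not a Jitter patch): a NumPy array plays the role of a video frame, and each hypothetical function represents one stage of the chain.

```python
import numpy as np

# Illustrative stand-in for a Jitter-style chain:
# acquisition -> analysis -> treatment -> display.
# All function names here are hypothetical, for explanation only.

def acquire_frame(height=4, width=4, seed=0):
    """Simulate grabbing one grayscale frame (values 0.0-1.0)."""
    rng = np.random.default_rng(seed)
    return rng.random((height, width))

def analyze(frame):
    """Analysis step: measure the mean brightness of the frame."""
    return float(frame.mean())

def treat(frame, gain=1.5):
    """Treatment step: boost brightness, clipped to the valid range."""
    return np.clip(frame * gain, 0.0, 1.0)

def display(frame):
    """Stand-in for display: render the frame as ASCII shades."""
    chars = " .:-=+*#%@"
    idx = (frame * (len(chars) - 1)).astype(int)
    return "\n".join("".join(chars[i] for i in row) for row in idx)

frame = acquire_frame()
brightness = analyze(frame)
processed = treat(frame)
rendered = display(processed)
```

In Jitter the same stages would be handled by matrix and OpenGL objects; the point here is only the modular structure of the chain.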
Designing Audio Experiences on the Web
This course offers developers, musicians, composers, and teachers the opportunity to discover how to create interactive performances and installations using mobile and web technologies, with or without Max.
Upon completion of this training, participants will have mastered the theories and techniques needed to develop audio content on the web, drawing on the potential of dedicated programming languages and the Web Audio API.
Sensors, Interfaces, and Interactive Machine Learning for Music
Mastering real-time interaction tools lets users imagine a wide range of interactions between machines and musicians, dancers, actors, or the audience, in staged performances and installations that bring together sound, visual, multimedia, and virtual-reality environments.
This course introduces participants (composers, musicians, performers, teachers, designers...) to sonification by programming interfaces connected to motion and physiological sensors. Upon completion of this training program, participants will have acquired the theoretical and practical knowledge needed to design and build interactive sound and music systems in Max, using a range of motion and physiological sensors as well as pattern-recognition systems. Programming interfaces such as the Arduino or R-IoT will also be addressed.
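The core idea behind sonification with motion sensors is mapping a sensor stream onto sound parameters. The course does this in Max with hardware such as the Arduino or R-IoT; the hedged Python sketch below illustrates only the mapping step, using fake accelerometer samples and hypothetical function names (this is not the course's patch or any board's API).

```python
import math

# Minimal sonification sketch: map 3-axis accelerometer samples
# to oscillator frequencies. Names and ranges are illustrative.

def magnitude(ax, ay, az):
    """Magnitude of a 3-axis accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def sensor_to_freq(value, in_min=0.0, in_max=2.0,
                   f_min=110.0, f_max=880.0):
    """Linearly map a sensor reading onto a frequency range (Hz),
    clamping out-of-range input."""
    value = min(max(value, in_min), in_max)
    norm = (value - in_min) / (in_max - in_min)
    return f_min + norm * (f_max - f_min)

# Example: a stream of fake samples -> frequencies for a synth voice.
samples = [(0.0, 0.0, 1.0), (0.5, 0.5, 1.0), (1.0, 1.0, 1.0)]
freqs = [sensor_to_freq(magnitude(*s)) for s in samples]
```

Stronger motion yields higher frequencies; in Max the same mapping would typically be done with `scale`-style objects feeding an oscillator.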
Using software for musical creation, multimedia sound design, music for film, or the creation of hybrid voices, the user works at the very heart of the sound material. Participants (composers, musicians, sound engineers, teachers...) will use AudioSculpt, a program that lets users sculpt and process sound visually. AudioSculpt is the fruit of several years of research in the Sound Analysis Synthesis team at IRCAM. It is both a powerful software program for analysis and an innovative tool for sound creation.
During this training, all of these functions will be addressed and explained so that you can use the full range of possibilities AudioSculpt offers. Take advantage of some of the best transposition and temporal compression/dilation algorithms available. Apply targeted filtering directly on the frequency representation. Create sound morphings through cross synthesis. Upon completion of this course, participants will be able to read a sonogram, optimize analysis parameters, use the filtering functions on the frequency representation of a sound, and correctly configure and carry out basic processing techniques (transposition, temporal dilation, cross synthesis, breath reduction, additive analysis and re-synthesis).
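To give a flavor of what cross synthesis means, here is a deliberately simplified Python sketch: one signal contributes its magnitude spectrum, the other its phases, and the two are recombined. This is a toy illustration, not AudioSculpt's algorithm, which operates frame by frame with far more refined analysis.

```python
import numpy as np

# Toy cross-synthesis: keep the spectral magnitudes of one signal
# and the phases of another. Illustrative only; AudioSculpt's own
# cross synthesis is considerably more sophisticated.

def cross_synthesize(sig_a, sig_b):
    """Return a signal with the magnitude spectrum of sig_a
    and the phase spectrum of sig_b (same length assumed)."""
    spec_a = np.fft.rfft(sig_a)
    spec_b = np.fft.rfft(sig_b)
    hybrid_spec = np.abs(spec_a) * np.exp(1j * np.angle(spec_b))
    return np.fft.irfft(hybrid_spec, n=len(sig_a))

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                    # magnitude source
noise = np.random.default_rng(1).standard_normal(sr)  # phase source
hybrid = cross_synthesize(tone, noise)
```

The result keeps the tone's energy distribution but inherits the noise's temporal character; real tools apply this per analysis window rather than over the whole file.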
This class introduces sound designers, sound artists, and students in sound-related fields to the steps necessary to create the sound of a future object, based on the object itself, sketches, or a video presenting its principal uses.
Upon completion of this training, participants will have acquired knowledge of sound design and the associated scientific and technological processes. They will be able to carry out the steps of a new sound design approach: analysis, development of a usage scenario, illustration through a sound sketch, and the proposal of a solution.