Musical orchestration is the subtle art of writing musical pieces for orchestra by combining instruments to achieve a particular sonic goal. Orchestration has been transmitted empirically, and a true scientific theory of orchestration has never emerged.
This project aims to create the first partnership working towards the long-term goal of a true scientific theory of orchestration by coalescing the domains of computer science, artificial intelligence, experimental psychology, digital signal processing, computational audition, and music analysis. To achieve this aim, the project will exploit a large corpus of orchestral pieces in digital form, comprising both symbolic scores and multi-track acoustic renderings with independent instrumental tracks. Orchestral excerpts are currently being annotated by panels of experts in terms of the occurrence of given perceptual orchestral effects. This library of orchestral knowledge will thus be readily available in both symbolic and signal formats for data mining and deep learning approaches. Our objective is to evaluate the optimal representations of symbolic scores and audio recordings of orchestration by assessing their predictive capabilities for given perceptual effects.
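As a rough illustration of this evaluation strategy, the sketch below compares candidate representations of orchestral excerpts by how well each predicts an expert-annotated perceptual effect. All feature matrices, labels, and dimensionalities here are hypothetical placeholders, not data or models from the project.

```python
# Minimal sketch (assumption: feature matrices and labels are random
# placeholders, not actual project data). Compares candidate representations
# of orchestral excerpts by their ability to predict an annotated effect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_excerpts = 200

# Hypothetical representations of the same excerpts:
# symbolic-score features (e.g. pitch/duration statistics) and
# audio features (e.g. spectral descriptors of the mixed recording).
representations = {
    "symbolic": rng.normal(size=(n_excerpts, 64)),
    "audio": rng.normal(size=(n_excerpts, 128)),
}

# Binary expert annotation: does a given perceptual effect (e.g. blend) occur?
effect_labels = rng.integers(0, 2, size=n_excerpts)

# Rank representations by cross-validated predictive accuracy.
for name, features in representations.items():
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, features, effect_labels, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

With real annotations, the representation achieving the higher cross-validated score would be the better predictor of that perceptual effect.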
Then, we will develop novel models of learning and knowledge extraction capable of linking musical signals, symbolic scores, and perceptual analyses by targeting multimodal embedding systems (which transform multiple sources of information into a unified coordinate system). These spaces provide metric relationships between modalities that can be exploited for both automatic generation and knowledge extraction. The results from the models will then feed back into, and be validated through, extensive perceptual studies. By closing the loop between perceptual effects and learning, while validating the higher-level knowledge that will be extracted, this project will revolutionize creative approaches to orchestration and its pedagogy. The predicted outputs include technological tools for the automatic analysis of musical scores and for predicting the perceptual results of combining multiple musical sources, as well as digital media environments for orchestration pedagogy, computer-aided orchestration, and instrumental performance in simulated ensembles.
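One minimal way to picture such a unified coordinate system is shown below, using canonical correlation analysis as a stand-in for the learned embedding models; the paired score and audio feature matrices are random placeholders, and none of the names or dimensions come from the project itself.

```python
# Minimal sketch of a multimodal embedding (assumption: CCA stands in for the
# project's learned embedding models; the features below are placeholders).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_excerpts = 300

# Paired views of the same excerpts: symbolic-score and audio features.
score_feats = rng.normal(size=(n_excerpts, 40))
audio_feats = rng.normal(size=(n_excerpts, 60))

# Project both modalities into a shared 8-dimensional coordinate system.
cca = CCA(n_components=8)
score_emb, audio_emb = cca.fit_transform(score_feats, audio_feats)

# Metric relationships across modalities: retrieve the audio excerpt whose
# embedding lies closest to a given score's embedding.
query = score_emb[0]
dists = np.linalg.norm(audio_emb - query, axis=1)
print("nearest audio excerpt to score 0:", int(dists.argmin()))
```

The cross-modal nearest-neighbour query at the end is the kind of metric relationship the text refers to: once both modalities live in the same space, distances between a score and candidate recordings (or generated combinations) become directly comparable.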
This project involves an international partnership with the Canadian NSERC (CRSNG).