Projects


ACTOR Project Funding aimed to support innovative research and pilot projects led by ACTOR members across various Workgroups. These projects were designed to foster external funding opportunities or serve as a foundation for independent research.

Funding was allocated to three main categories:

  1. Strategic Projects: Research-focused initiatives.

  2. Research-Creation Projects: Combining creative practice with scholarly research.

  3. Student Collaborative Projects: Interdisciplinary efforts involving students from two different institutions.

Key outcomes included joint publications, new modules for the Timbre and Orchestration Resource (TOR), public presentations of ACTOR research, and the creation, premiere, or recording of musical compositions. Priority was given to interdisciplinary projects aligned with the ACTOR project’s mandate.

Below is a list of all funded projects, each linking to its abstract and detailing the researchers involved. Every project was led by a Principal Investigator (PI) and could include external collaborators. Additionally, we have included Partner Projects, which were related initiatives supported by external funding.

By Type

Student Collaborative Projects

Strategic Projects

Research-Creation Projects

All Projects

Past Partner Projects

  • The Orchestration and Perception Project seeks to create a psychological foundation for a theory of orchestration practice based on perceptual principles associated with musical timbre. It involves an international collaboration between McGill University, Ircam-Centre Pompidou, and the Haute école de musique de Genève. The four thematic research axes are:

    1. the role of timbre in instrumental fusion and in the differentiation of musical voices,

    2. its role in the creation of musical structures,

    3. the perception of orchestral gestures as meaningful units in a musical discourse, and

    4. the historical evolution of orchestration techniques across epochs.

    Each theme will be addressed by analyzing orchestration treatises; analyzing musical scores and cataloguing and classifying orchestral effects; automatically mining symbolic digital representations of scores; creating sonic renderings of scores in an orchestral rendering environment that allows several versions (original and reorchestrated) to be compared in order to test specific hypotheses; conducting perceptual tests on orchestral effects; integrating the results into a theory of orchestration; and transferring the acquired knowledge to computer-aided orchestration systems and to the development of new pedagogical tools for orchestration.

    This Project was part of the foundation of, and evolved into, the ACTOR Project.

  • Research into electronic orchestration (e-Orch) at the Haute école de musique Genève – Neuchâtel

    Orchestration can be described from several angles. It is chiefly the art of writing, from symbolic data, pieces of music for several instruments, combining them with one another in ensembles of variable size. We can therefore deduce that it exploits instrumental timbres with the aim of producing orchestral effects; yet the very act of mixing the timbral and acoustic properties of instruments also falls within the domain of signal processing. This dual symbolic/signal view should not be seen as an opposition but as an illustration of the complexity of the practice; it lies at the heart of the relationship between art and science, two disciplines whose intersection is particularly relevant today. The sketch below illustrates the two views.
    Read more…
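
    To make the dual view concrete, here is a minimal Python sketch (not e-Orch code; the instrument names and sine-tone rendering are deliberate simplifications): the same two-note blend is described first as symbolic note data and then realized as a mixed signal.

    ```python
    import numpy as np

    # Symbolic view: orchestration as combined note data for several instruments.
    score = [
        {"instrument": "flute",    "freq_hz": 440.00, "amp": 0.4},
        {"instrument": "clarinet", "freq_hz": 554.37, "amp": 0.4},
    ]

    # Signal view: the sounding blend is a sum of acoustic waveforms.
    sample_rate = 44100
    t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
    mix = sum(note["amp"] * np.sin(2 * np.pi * note["freq_hz"] * t)
              for note in score)
    ```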

  • Multimodal analysis and knowledge inference for musical orchestration (MAKIMOno) [NSERC (Canada), ANR (France)]

    This project brings together IRCAM-CNRS-Sorbonne Université, McGill University, and OrchPlayMusic, Inc. to address scientifically one of the most complex aspects of music: the use of timbre—the complex set of tone colours that distinguish sounds emanating from different instruments or their blended combinations—to shape music through various modes of orchestration. This first-of-its-kind project will lead to the creation of information technologies for human interaction with digital media that will radically change orchestration pedagogy, provide better tools for the computer-aided interactive creation of musical content, and lead to a better understanding of perceptual principles underlying orchestration practice. 

  • ACTOR partners at the Detmold University of Music are part of the EU-funded research project “VRACE”: “The ITN project ‘VRACE – Virtual Reality Audio for Cyber Environments’ establishes a multidisciplinary network that will train the next generation of researchers in the audio part of virtual and augmented reality. The aim is to raise Virtual / Augmented Reality to a next level beyond gaming and entertainment by benefiting from the critical mass of expertise gathered in this distinguished consortium.” https://vrace-etn.eu/

  • A musical performance of symbolic music data (e.g. a score, MusicXML, MEI) comprises all of the transformations necessary to make the music sound. This includes the temporal order of sound events as well as their specific execution. The Music Performance Markup format (MPM) is dedicated to describing and modelling musical performances in great detail, in the manner of a construction kit. It comes packed with a series of performance features from several domains, including the following (a sketch of such a description follows the list):

    • Timing features: tempo (including discrete and continuous tempo changes), rubato, asynchrony, and random/non-systematic deviations from precise timing;

    • Dynamics features: macro dynamics (including discrete and continuous dynamics changes), metrical accentuation, and random/non-systematic deviations from precise dynamics;

    • Articulation features: absolute and relative modifications of a tone's duration, dynamics, timing (e.g. agogic accents), and intonation, as well as random/non-systematic variations of tone duration and intonation.
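
    As a rough sketch of what such a performance description might look like, the following Python snippet assembles an MPM-style document with a tempo map and a dynamics map. The element and attribute names here are assumptions for illustration; consult the MPM documentation for the actual schema.

    ```python
    import xml.etree.ElementTree as ET

    # Assemble a small, MPM-style performance description in memory.
    # NOTE: element/attribute names are illustrative, not the official schema.
    performance = ET.Element("performance", {"name": "illustrative rendition"})

    # A continuous ritardando: tempo falls from 120 to 80 bpm from date 0.0 on.
    tempo_map = ET.SubElement(performance, "tempoMap")
    ET.SubElement(tempo_map, "tempo",
                  {"date": "0.0", "bpm": "120.0", "transition.to": "80.0"})

    # A crescendo from soft to loud over the same span (MIDI-like volumes).
    dynamics_map = ET.SubElement(performance, "dynamicsMap")
    ET.SubElement(dynamics_map, "dynamics",
                  {"date": "0.0", "volume": "40.0", "transition.to": "100.0"})

    print(ET.tostring(performance, encoding="unicode"))
    ```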

    Each feature is designed on the basis of a mathematical model derived from empirical performance research. These models do more than reproduce the typical characteristics of their respective features: two musicians may perform the same feature (say, an articulation, a crescendo, or a ritardando) very differently, so the models are also equipped with expressive parameters that recreate the whole bandwidth of such variations. The sketch below illustrates one such parametric model.
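
    As a rough illustration of the kind of parametric model described above, here is a minimal Python sketch of a continuous tempo transition with a single expressive curvature parameter. This power-curve form is an assumption for illustration, not necessarily the exact model MPM uses.

    ```python
    def tempo_at(x: float, start_bpm: float, end_bpm: float,
                 curvature: float = 1.0) -> float:
        """Tempo at normalized position x in [0, 1] along a continuous transition.

        curvature is the 'expressive' parameter: 1.0 gives a linear change,
        larger values push most of the change toward the end of the span.
        """
        if not 0.0 <= x <= 1.0:
            raise ValueError("x must lie in [0, 1]")
        return start_bpm + (end_bpm - start_bpm) * x ** curvature

    # Two performers shaping the same ritardando (120 -> 80 bpm) differently:
    even = [round(tempo_at(i / 4, 120, 80, curvature=1.0), 1) for i in range(5)]
    late = [round(tempo_at(i / 4, 120, 80, curvature=3.0), 1) for i in range(5)]
    # even -> [120.0, 110.0, 100.0, 90.0, 80.0]
    # late -> [120.0, 119.4, 115.0, 103.1, 80.0]
    ```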