SMCClab: Sound, Music, and Creative Computing at ANU

About SMCClab

The Sound, Music, and Creative Computing Lab is part of the School of Computing at the Australian National University.

The goal of the lab is to create new kinds of musical instruments that sense and understand music. These instruments will actively respond during performances to assist musicians.

Image: performing on touchscreens and percussion.

We envision that musical instruments of the future will do more than react to musicians. They will predict their human player’s intentions and sense the current artistic context. Intelligent instruments will use this information to shape their sonic output. They might seamlessly add expression to sounds, update controller mappings, or even generate notes that the performer hasn’t played (yet!).

The idea here is not to put musicians out of work. We want to create tools that allow musicians to reach the highest levels of artistic expression, and that assist novice users in experiencing the excitement and flow of performance. Imagine an expert musician recording themselves on different instruments in their studio, and then performing a track with a live AI-generated ensemble, trained in their style. Think of a music student who can join their teachers in a jazz combo, learning how to follow the form of the song without worrying about playing wrong notes in their solo.

We think that combining music technology with AI and machine learning can lead to a plethora of new musical instruments. Our mission is to develop new intelligent instruments, perform with them, and bring them to a broad audience of musicians and performers. Along the way, we want to find out what intelligent instruments mean to musicians and their music-making process, and what new music these tools can create!

Our work combines three cutting-edge fields of research:

  • Expressive Musical Sensing: Understanding how music is played and what performers are doing. This involves hardware prototyping, creating new hyper-instruments, and applying state-of-the-art sensors.
  • Musical Machine Learning: Creating and training predictive models of musical notes, sounds, and gestures. This includes applying techniques from symbolic music generation to understand scores and MIDI data, and from music information retrieval to “hear” music in audio data (a toy sketch of such a predictive model follows this list).
  • Musical Human-Computer Interaction: Finding new ways for predictive models to work with musicians, and to analyse the musical experience that emerges.
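As a toy illustration of the predictive models mentioned above, here is a minimal sketch (not lab code) of a first-order Markov model over MIDI pitch numbers: it counts note-to-note transitions in a training melody and then samples a plausible next note. The melody and pitch values are invented for the example.

    import random
    from collections import defaultdict

    def train_markov(pitches):
        """Count note-to-note transitions in a sequence of MIDI pitch numbers."""
        transitions = defaultdict(lambda: defaultdict(int))
        for current, following in zip(pitches, pitches[1:]):
            transitions[current][following] += 1
        return transitions

    def predict_next(transitions, current):
        """Sample a next pitch, weighted by how often it followed `current`."""
        candidates = transitions.get(current)
        if not candidates:
            return current  # unseen note: just repeat it
        options = list(candidates)
        weights = [candidates[p] for p in options]
        return random.choices(options, weights=weights)[0]

    # Toy training melody in C major, as MIDI pitch numbers (60 = middle C).
    melody = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
    model = train_markov(melody)
    print(predict_next(model, 64))  # prints a pitch that followed 64 in training

In practice, a simple transition table like this would give way to a trained neural sequence model conditioned on gestures and audio as well as notes, but the core idea is the same: learn from past playing to predict what comes next.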