Embedded computing technologies have long been present in NIME, but deploying AI and machine learning models on such systems remains challenging. In this hands-on workshop, we will provide you with starting points for developing your own NIMEs with embedded platforms and machine learning models.
Registration for NIME Tuesday workshops is available through this form.
Schedule
The workshop schedule is as follows (N.B.: order of presentation may change):
- 10:00 Intro (slides here)
- 10:30 Raspberry Pi and IMPSy
- 13:00 Lunch break
- 14:00 pyBela, PyTorch, and cross-compilation (2.5 hours)
- 16:30 Discussion and Hacking
Things to bring to the workshop
- Your own laptop and power brick
- A Raspberry Pi or Bela (if you have one)
- A USB MIDI controller or MIDI sound source (if you have one and want to use it with the Raspberry Pi)
- Headphones with a 3.5 mm jack for the Belas
- A USB-C to USB-A adapter if you only have USB-C ports on your computer
- SD card reader if you have one (helpful for the Raspberry Pi workshop, although not necessary)
If you want to be super prepared before the workshop, make sure you have Docker and Pure Data installed on your laptop, and browse the walkthroughs available on this site.
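If you'd like to confirm your setup ahead of time, here is a minimal, optional sanity-check sketch (the script name and checks are our own illustration, not official workshop material): it verifies that Docker runs from the command line and looks for a Pure Data binary on your PATH.

```python
# setup_check.py - an optional pre-workshop sanity check (illustrative only).
import shutil
import subprocess


def docker_available() -> bool:
    """Return True if 'docker --version' runs successfully."""
    if shutil.which("docker") is None:
        return False
    result = subprocess.run(["docker", "--version"], capture_output=True, text=True)
    return result.returncode == 0


def puredata_on_path() -> bool:
    """Look for a Pure Data binary on the PATH. On macOS/Windows, Pd may be
    installed as an app bundle without a PATH entry, so False is not conclusive."""
    return shutil.which("pd") is not None


if __name__ == "__main__":
    print("Docker:", "ok" if docker_available() else "not found - install Docker first")
    print("Pure Data:", "found on PATH" if puredata_on_path() else "not on PATH - check your installation")
```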
Organisers
Charles Patrick Martin
Charles Martin is a computer scientist specialising in music technology, creative AI and human-computer interaction at The Australian National University, Canberra. Charles develops musical apps such as PhaseRings, researches creative AI, and performs music with Ensemble Metatone and Andromeda is Coming. At the ANU, Charles teaches creative computing and leads research into intelligent musical instruments. His lab’s focus is on developing new intelligent instruments, performing new music with them, and bringing them to a broad audience of musicians and performers. Charles has presented workshops on AI/ML and NIMEs at NIME 2019, 2020, and 2021 and is an organiser of the Generative AI and HCI workshops at CHI 2022, 2023, and 2024.
Teresa Pelinski
Teresa Pelinski is a PhD student at the Augmented Instruments Lab at the Centre for Digital Music, Queen Mary University of London. Teresa’s research focuses on developing tools for prototyping with ML in the context of musical practice, and on doing so through a practice research lens. She is currently also an Enrichment Student at the Alan Turing Institute. Teresa holds a BSc in Physics from the Universidad Autónoma de Madrid and an MSc in Sound and Music Computing from Universitat Pompeu Fabra in Barcelona. Teresa was an organiser of the Embedded AI for NIME 2022 workshop.
Links
The workshop will build on materials used in previous NIME workshops and classes at the presenters’ institutions, e.g.:
- Making Predictive NIMEs with Neural Networks: https://creativeprediction.xyz/workshop/
- Embedded AI for NIME: https://embedded-ai-for-nime.github.io/
- Critical Perspectives on AI/ML in Musical Interfaces: https://critical-ml-music-interfaces.github.io/
- Sound and Music Computing: Generative AI and Computer Music: https://comp.anu.edu.au/courses/laptop-ensemble/lectures/11-genai/