Held in conjunction with AAAI-26
26–27 January 2026, Singapore
This full-day workshop, part of a new annual series on the latest developments in AI for music, explores how AI can become a true creative collaborator rather than just another automation tool. As AI transforms music composition, performance, and production, the field is shifting from fully autonomous systems toward human-centered, co-creative, and controllable approaches. We aim to showcase the latest research in human-centric AI for music and discuss how systems can empower people to shape AI outputs through intuitive interfaces, interactive workflows, interpretable models, and meaningful user controls, so that meaningful musical outcomes arise through human-machine collaboration. Join us for this inaugural workshop as we explore how to design AI that enhances rather than overshadows human musical creativity.
Submission Requirements
We welcome submissions of original research, creative systems, and critical perspectives on emerging AI technologies for music.
Contributions may include theoretical work, empirical studies, system designs, evaluations, or artistic explorations.
While implemented systems and demonstrations are encouraged, strong conceptual and analytical contributions without a prototype are also welcome.
We particularly value submissions that emphasize controllability, interpretability, explainability, personalization, human-AI interaction, and collaboration in music systems.
Topics of Interest include, but are not limited to:
Important Dates
Paper Submission Deadline: 24 October 2025
Acceptance/Rejection Notice: 14 November 2025
Camera-Ready Deadline: 14 December 2025
Workshop Date: 26–27 January 2026
Publishing Venue
Proceedings will be published as a volume in Proceedings of Machine Learning Research (PMLR).
Submission Instructions
Paper format: Maximum of 8 pages (excluding references). Please follow the PMLR format using this template:
Download Template (ZIP)
Submission Portal:
OpenReview – EAIM Workshop
Bio: Ethan is a Senior Research Scientist at Google DeepMind on the Magenta Team. He completed his PhD in Computer Science under Bryan Pardo in the Interactive Audio Lab at Northwestern University. During his PhD, he spent two years as a Student Researcher with Magenta and, prior to that, a year and a half as a Student Researcher at MERL on the Speech and Audio Team. His research centers on machine learning systems that listen to and understand musical audio, with the goal of building tools that better assist artists.
Bio: Elio is a scientist, engineer, and leader with over a decade of experience in Artificial Intelligence, Machine Learning, and Audio Technology. Currently VP of Artificial Intelligence at Universal Music Group (UMG) and an advisor to creative AI startups, he founded and leads the Music & Audio Machine Learning Lab (MAML), the first-ever machine learning R&D group in the recorded music industry. MAML's mission is to invent and build next-generation AI/ML tools to support and empower artists and industry professionals globally. Trained as a scientist, engineer, and musician, Elio holds a PhD in ML and Audio DSP from the Centre for Digital Music, a Physics MSc, a Music Technology MA, and a diploma in Commercial Music performance from BIMM London.
Bio: Dorien is an Associate Professor at the Singapore University of Technology and Design (SUTD), where she leads the Audio, Music, and AI (AMAAI) Lab. Before joining SUTD, she was a Marie Skłodowska-Curie Postdoctoral Fellow at the Centre for Digital Music at Queen Mary University of London. She was named to the Singapore 100 Women in Tech list in 2021 and was one of the top 30 SAIL (Super AI Leader) Award finalists in 2024 at the World AI Conference.
Bio: Lamtharn "Hanoi" Hantrakul is an AI Research Scientist and AI Sound Artist based in Bangkok, Thailand. His work explores how machine learning can empower music, arts, and culture, particularly from Southeast Asia. With over eight years of experience in the tech industry, he has developed state-of-the-art generative AI models at Google, TikTok, and ByteDance as a Senior AI Research Scientist. He is a co-inventor of notable technologies including Google's open-source DDSP library and the music LLM SEED-MUSIC, deployed in Doubao, ByteDance's ChatGPT-style assistant in China. As a sound artist performing under "yaboihanoi," his electronic music incorporates Thai tunings and rhythms. He won the 2022 international AI Song Contest with "Enter Demons and Gods" and has performed at SONAR Music Festival alongside artists like Skrillex and Four Tet. His musical instrument "Fidular" received an A' Silver Award and a Core77 Design Award in 2017 and is permanently exhibited at the Musical Instrument Museum in Phoenix, AZ. His work on machine learning and cultural empowerment has been covered by international media including Deutschlandfunk, Scientific American, and Fast Company.
Keshav is a PhD candidate at the Centre for Digital Music (C4DM) at Queen Mary University of London, supervised by Prof. Simon Colton. His research explores neuro-symbolic methods for music composition, with a focus on musical structure and controllability. Prior to Queen Mary, he was part of the Interactive Audio Lab at Northwestern University, Evanston. Keshav has published at conferences including NeurIPS, AAAI, and IJCNN, and recently won the Best Paper Award at EvoMUSART-25 (part of EvoStar). Keshav is the main contact for this workshop.
Abhinaba is a Senior Research Fellow at the Singapore University of Technology and Design. He received his PhD in Computer Vision in 2019 from the Istituto Italiano di Tecnologia, Genoa, Italy. He has held positions in both industry and academia, focusing on developing and deploying practical AI solutions. In recent years, his research has increasingly focused on the intersection of AI and music, particularly in areas such as text-to-music generation, symbolic music creation, and multimodal music understanding. His work has been published in leading conferences including ISMIR, IJCNN, and AAAI.
Simon is a Professor of Computational Creativity, AI and Games in EECS at Queen Mary University of London. He was previously an EPSRC Leadership Fellow at Imperial College and Goldsmiths College, and held an ERA Chair at Falmouth University. He is an AI researcher with around 200 publications whose work has won national and international awards, and he has led numerous EPSRC- and EU-funded projects. He focuses specifically on questions of Computational Creativity, where researchers study how to engineer systems that take on creative responsibilities in generative music, arts, and science projects. Prof. Colton has written about the philosophy of Computational Creativity and led numerous public engagement projects.
Prof. Dorien Herremans, Singapore University of Technology and Design
Keshav Bhandari, PhD candidate, Queen Mary University of London
Dr. Abhinaba Roy, Singapore University of Technology and Design
Prof. Simon Colton, Queen Mary University of London
Prof. Mathieu Barthet, Queen Mary University of London
Dr. Jaeyong Kang, Singapore University of Technology and Design
Dr. Jan Melechovsky, Singapore University of Technology and Design
Dr. Berker Banar, Ellison Institute of Technology Oxford
Benjamin Hayes, Sony CSL, Paris
Shuoyang Zheng, PhD candidate, Queen Mary University of London
Yinghao Ma, PhD candidate, Queen Mary University of London
Zixun Guo, PhD candidate, Queen Mary University of London
Jordie Shier, PhD candidate, Queen Mary University of London
You can contact the organizers at: k [dot] bhandari [at] qmul.ac.uk