Tuesday, February 25, 2020
Location: KCIC Auditorium
Generating Music and Audio with Machine Learning
Dr. Sageev Oore
Abstract: I will show two different approaches to generating music and audio with deep learning techniques:
1) One wave at a time… In TimbreTron, we combine CycleGAN and WaveNet architectures to manipulate the timbre of a sound sample from one instrument to match that of another.
2) One note at a time… In PerformanceRNN, we work directly with MIDI data rather than raw audio—much like a digital player piano roll rather than vinyl—and this allows us to treat music generation as a language-modeling problem. We use conditional LSTMs to generate solo piano music based on a dataset of human performances.
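To make the "language modeling" framing concrete, the sketch below shows one way a recorded performance can become a sequence of discrete tokens for a model to predict, in the spirit of the PerformanceRNN event vocabulary (note-on, note-off, time-shift, and velocity events). This is an illustrative assumption, not the authors' code; the function name and toy input are hypothetical.

```python
# Hedged sketch (not the authors' code): turning a MIDI-like performance into a
# token "sentence", following the PerformanceRNN-style event vocabulary of
# 128 note-on, 128 note-off, 100 time-shift, and 32 velocity-bin events.

NOTE_ON = 0        # tokens 0..127: start sounding MIDI pitch p
NOTE_OFF = 128     # tokens 128..255: stop sounding MIDI pitch p
TIME_SHIFT = 256   # tokens 256..355: advance time by 10 ms .. 1 s
VELOCITY = 356     # tokens 356..387: set one of 32 quantized loudness bins
VOCAB_SIZE = 388

def encode(notes):
    """Convert (start_sec, end_sec, pitch, velocity) notes to event tokens."""
    events = []
    for start, end, pitch, vel in notes:
        events.append((start, VELOCITY + min(vel // 4, 31)))  # velocity first
        events.append((start, NOTE_ON + pitch))
        events.append((end, NOTE_OFF + pitch))
    events.sort(key=lambda e: e[0])  # stable: keeps order at equal times
    tokens, clock = [], 0.0
    for t, tok in events:
        gap = t - clock
        while gap > 1e-9:  # emit time shifts in chunks of at most 1 s
            step = min(gap, 1.0)
            tokens.append(TIME_SHIFT + max(0, min(int(round(step * 100)) - 1, 99)))
            gap -= step
        clock = t
        tokens.append(tok)
    return tokens

# One half-second middle C at velocity 80 becomes four "words":
# set-velocity, note-on, time-shift(500 ms), note-off.
print(encode([(0.0, 0.5, 60, 80)]))  # [376, 60, 305, 188]
```

Once performances are sequences over a fixed vocabulary like this, a next-token model such as an LSTM can be trained on them exactly as one would train a language model on text.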
Throughout the talk, I will provide quick overviews of background audio and ML concepts as needed. And... there will be plenty of musical samples to listen to! This will include work done at Google Research (Brain), Dalhousie University, and the Vector Institute (Toronto).
About the Presenter: Sageev Oore is a faculty member in Computer Science at Dalhousie University (Halifax), a research faculty member at the Vector Institute for Artificial Intelligence (Toronto), and a Canada CIFAR AI Chair. Together with his undergraduate and graduate students at both Dalhousie and the Vector Institute, he is interested in basic research in machine learning and deep learning, with a particular focus on applications of deep learning to audio, music, and computational creativity. He recently spent a year and a half as a Visiting Scientist at Google Brain (California), where he worked on the Magenta team developing generative music systems. He also works as a professional musician; as a pianist, he has performed with orchestras both as a classical soloist and as an improviser.
Members of the Acadia Community are welcome to attend.