MCP 

Music Computing and Psychology Lab
Frost School of Music, University of Miami

Recently

AI Summit of the Americas

31st January, 2024. The AI Summit of the Americas is taking place at the University of Miami on Tue 13th Feb, 2024, and features eminent contributors such as Nevenka Dimitrova, Matt Serletic, and Tom Griffiths. Find out more and register here.

There is also a hackathon taking place all day Sat 10th Feb with generous cash prizes!

FGIX | FilmGate Miami

30th November–3rd December, 2023. We are participating in the International FilmGate Interactive Multimedia Festival with a VR version of VTGO (Vertigo), best experienced with a Meta Quest 2.

VTGO (Vertigo) on general release today!

29th November, 2023. VTGO (Vertigo), a collaboration with Kemi Sulola and Harriet Raynor, is out today! See here to listen to it on your platform of choice.

We're proud to say the song finished 3rd in the 2023 AI Song Contest!

Analysing and Visualising Musical Structure

19th July, 2023. Lab members Chenyu Gao and Tom Collins are presenting at the HCI International Conference and the associated Workshop on Interactive Technologies for Analysing and Visualising Musical Structure in Copenhagen, beginning Sunday 23rd July.

Chenyu's contribution is about interactive pendular graphs, which can be explored here.

NoiseBandNet: Controllable, time-varying neural synthesis of sound effects using filterbanks


18th July, 2023. Lab PhD student Adrián Barahona-Ríos has been working on sound effect modelling and synthesis. The new model, NoiseBandNet (shown in the diagram above), has some exciting creative applications. For example, once trained on a sound like this metal impact:

we can drive the timbral world of metal impact with the loudness curve extracted from another sound, such as this beat box clip:

resulting in an interesting hybrid:

In addition to the music-creative possibilities indicated above, we anticipate applications in game audio and XR, where until now it has been labour-intensive to generate scene-driven alterations to sound effects.
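To make the loudness-transfer idea concrete, here is a minimal browser-side sketch of the control signal involved: a frame-wise RMS loudness curve extracted from an audio clip. The frame and hop sizes, and the synthesizeFromLoudness call in the closing comment, are illustrative assumptions rather than NoiseBandNet's actual interface (the model itself is trained and run outside the browser).

  // Sketch: extract a frame-wise RMS loudness curve from an audio file.
  // A curve like this is the kind of control signal that drives the
  // timbre transfer described above.
  async function loudnessCurve(url, frameSize = 1024, hopSize = 256) {
    const ctx = new AudioContext();
    const data = await fetch(url).then((r) => r.arrayBuffer());
    const buf = await ctx.decodeAudioData(data);
    const samples = buf.getChannelData(0);
    const curve = [];
    for (let start = 0; start + frameSize <= samples.length; start += hopSize) {
      let sumSq = 0;
      for (let i = start; i < start + frameSize; i++) {
        sumSq += samples[i] * samples[i];
      }
      curve.push(Math.sqrt(sumSq / frameSize)); // RMS of this frame
    }
    return curve;
  }

  // Hypothetical usage: drive a model trained on metal impacts with the
  // loudness curve of a beatbox clip to obtain the hybrid above.
  // const curve = await loudnessCurve("beatbox.wav");
  // const hybrid = noiseBandNet.synthesizeFromLoudness(curve);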

PhD Thesis Prize for 2022

14th April, 2023. Congratulations to former lab member Zongyu Yin, who has won the Computer Science Department's Best PhD Thesis Prize for 2022, for his work "New evaluation methods for automatic music generation".

AI collaboration with Imogen Heap

10th April, 2020. "It's doing what I hoped it would do, which is what I believe AI will do for musicians, which is to push us to the next level of our own creativity" (Imogen Heap, Grammy Award-winning musician and technologist, on working with music generation algorithms built in the lab).


Video courtesy of BBC Click

Interfaces for kids and grown-up kids

12th April, 2021. Here are some fun, educational interfaces for kids to explore music technology. We have found some of these interfaces engage kids as young as two years old. For kids aged four and above, it's up to you whether to explore side-by-side with them, or say farewell to your device and let them have at it...

  • Rock da mic!
  • Creating music is about composing sounds. Sometimes the options are overwhelming, so why not explore the 625 possibilities of the Sample selector?!
  • Sketch to sound
  • Colouring with keys (select "Play own stuff", hit start, and use A, S, D..., W, E,... keys to explore making different colours with different major/minor keys; a minimal sketch of the key-to-chord idea appears after this list)
  • Scanning barcodes to make music (A4 printer required; barcode scanner not required!)
  • Chrome Music Lab (Spectrogram, Voice spinner, Rhythm, Oscillators, and Song maker are Tom's four-year-old's favourites).
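For the curious, here is a minimal sketch, in plain JavaScript with the Web Audio API, of the key-to-chord idea behind Colouring with keys. The particular key-to-triad mapping is an illustrative assumption, not the interface's actual code, and browsers require a user gesture (e.g., a click) before an AudioContext will produce sound.

  // Minimal Web Audio sketch: each letter key triggers a short major or
  // minor triad. The mapping below is made up for illustration.
  const ctx = new AudioContext();
  const triads = {
    a: { root: 60, minor: false }, // C major
    s: { root: 62, minor: true },  // D minor
    d: { root: 64, minor: true },  // E minor
    w: { root: 61, minor: false }, // C# major
    e: { root: 63, minor: false }, // Eb major
  };
  function playTriad({ root, minor }) {
    const intervals = [0, minor ? 3 : 4, 7]; // root, third, fifth
    for (const semitones of intervals) {
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      // Convert MIDI note number to frequency in Hz.
      osc.frequency.value = 440 * Math.pow(2, (root + semitones - 69) / 12);
      gain.gain.setValueAtTime(0.2, ctx.currentTime);
      gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 1);
      osc.connect(gain).connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 1);
    }
  }
  document.addEventListener("keydown", (ev) => {
    if (triads[ev.key]) playTriad(triads[ev.key]);
  });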

New to (web) programming?

Learning JavaScript is a good place to start. I've put some demos below to get you excited about doing this! Read more...

Demos

Here are some examples of dynamic web-based music interfaces that have been developed in the lab, using packages built on the Web Audio API. For pedagogical purposes, we have used mostly basic JavaScript, left comments in the code, and avoided optimizations like minifying. If you make JSFiddles, CodePens, or your own standalone interfaces based directly or indirectly on what you find below, please feel free to share them with us to enhance the pedagogical experience!
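If you want a feel for how little code a first Web Audio experiment needs, here is a minimal starter in the same spirit as the demos: plain JavaScript, commented, no build step. It is a generic illustration rather than an excerpt from any particular demo; wire beep() to a button's click handler, since browsers only start audio after a user gesture.

  // A minimal "hello, Web Audio" starter: one oscillator, one gain node.
  function beep() {
    const ctx = new AudioContext();
    const osc = ctx.createOscillator(); // a simple sine-wave source
    const gain = ctx.createGain();      // volume control
    osc.frequency.value = 440;          // concert A
    gain.gain.value = 0.1;              // keep it quiet
    osc.connect(gain).connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 0.5);    // play for half a second
  }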

New to (web) programming?

To rework/extend the demos, you'll need to understand how to program in HTML, CSS, JavaScript, PHP, and Node.js, with the most important of these being JavaScript. Read more...

Jobs

We like hearing from people who are interested in contributing to the work of the lab. At the moment, we're particularly interested in hearing from people with software engineering expertise who are looking for more autonomy and to learn some research skills.

Feel free to get in touch if you fit any of the categories below.

Full-stack JavaScript developer

Familiarity with Node.js and SQLite or another database solution. Experience with a client-side templating library or framework such as Handlebars, React, or Vue. The ideal candidate has experience building single-page web applications, from API to UI/UX, with authorization and access control.

Grad/postdoc

First-author publications and/or evidence of writing productivity appropriate to level. Experience as a music scholar, a computer scientist, a cognitive scientist, or some combination of the three. Demonstration of willingness to work on and optimize time-consuming tasks such as data collection and analysis, and music data curation.

Undergrad

Interest in music, computer science, cognitive science, or some combination of the three. Has looked at the demos and attempted to rework/extend at least one of them.

About

Both in the lab and with collaborators across the globe, we apply the scientific method to explore...

Team

This is a team of researchers that we hope will grow in exciting ways over the next decades – even beyond the current PI's retirement! If you are interested in working with us, you are welcome to get in touch to discuss opportunities. We're happy to try to support face-to-face research visits and/or distributed collaborations.

Here's the team, present and past, in pseudo-random order...

Members

Associates

Previous

  • Zongyu (Alex) Yin graduated in 2022, having been a PhD student (Computer Science) at the University of York, with research interests in music generation with deep learning, and exploring various generation methods based on music-theoretic expertise. After leaving the lab, Alex went on to work for the SAMI (Speech, Audio, and Music Intelligence) team at TikTok.
  • Jonno Witts, Web App Developer at the University of York, helped to develop Discover Music, an app for discovering new music makers.
  • Luke George, Integrated Master's student (Electronic Engineering with Music Technology Systems), joined the team as an intern from the Student Internship Bureau. He has aspirations to work in a field combining his passions for music and technology.
  • Andreas Katsiavalos, DMU PhD student, with research interests in adaptive, complete music information retrieval systems, and a focus on the automatic extraction of high-level concepts such as musical schemata.
  • Dr. Berit Janssen is a researcher and scientific programmer based in the Digital Humanities Lab, Utrecht University, the Netherlands. She is interested in expectation and prediction in music.
  • Iris Yuping Ren is a PhD student in the Department of Information and Computing Sciences, Utrecht University, the Netherlands. She has research interests in musical pattern discovery, algorithm evaluation, functional programming, complex systems, machine learning and AI.
  • Ben Gordon, having graduated from Lafayette College (Data Science and Music), continues to work with Tom on web-based interfaces involving natural language understanding and music.
  • Lynette Quek worked with the lab on our entry to the 2021 AI Song Contest, focusing on the visuals and putting together this awesome video.
  • Annabell Pidduck, former Music undergraduate at the University of York, with interests in the use of music technology in (high) schools and its effects on student learning and development.
  • Jasmine Banful, Lehigh undergraduate (Mechanical Engineering), with interests in web-based DJ'ing software.
  • Reggie Lahens, Lehigh undergraduate (Journalism), with interests in web-based mixing software.
  • Linda Chen, Lehigh undergraduate (Psychology and Management), worked on a project that aimed to determine how differing levels of feedback affect users' ability to learn to read staff notation.
  • Dr. Thom Corah, formerly DMU PhD student, worked on a framework for the use of real-time binaural audio on personal mobile devices. The aim was to create an audio-based augmented reality, with applications in digital heritage and assisted living.
  • Dr. Katrien Foubert visited the group in June 2015 while still a PhD student. We worked on extracting structural features from piano improvisations recorded during music therapy sessions, with a view to predicting diagnoses of borderline personality disorder. Among other outputs, the collaboration resulted in a Frontiers in Psychology paper.
  • Austin Katz, Lehigh undergraduate (Journalism and Psychology), worked on a project that aimed to shed light on the perception of repetitive structure in music.
  • Fahida Miah was a Nuffield Research Placement student in summer 2014. Her project involved auto-generation of Pop music and quantitative evaluation of creative systems.
  • Ali Nikrang is a Key Researcher and Artist at the Ars Electronica Futurelab, Linz, Austria. As a Master's student, he was the main developer of the PatternViewer, an application that plays an audio file of a piece synchronized to interactive representations of tonal and repetitive structures. Ali's thesis describes the construction of this application, the music-psychological research on which it is founded, and the influence of the application on listeners' music appraisal skills.
  • Emily Stekl, Lehigh undergraduate (Psychology), assisted with the investigation of the effect of music artificial intelligence on creativity. We embedded an AI suggestion button in an interface and studied how it affects users' compositional processes.
  • Zhanfan (Jeremy) Yu, Lafayette undergraduate (Computer Science), helped develop a cloud-based music transcription system.

Contact and credits

I hope you enjoyed visiting this site.
Feel free to get in touch (tom.collins@miami.edu) if you have any questions or suggestions.

Credits

The code above was written by Tom Collins and others as specified (e.g., toward the bottom of each demo interface). Reuse of the code is welcomed, and governed by the GNU General Public License Version 3 or later.
