About our research
Both at Lehigh and with team members and collaborators across the globe, we conduct experiments and develop computational models addressing:
- Discovery of repeated patterns in music, visual, and other domains;
- Question answering on music scores (given a query like 'perfect cadence followed by homophonic texture', retrieve the relevant events from a digital score);
- Musical expectancy and listening choices (for symbolic/audio input and different listener backgrounds/contexts);
- Automatic generation of stylistic compositions, incorporation in software, and the technology's effect on student education and work;
- Interfaces built on the Web Audio API.
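To give a flavor of the score question-answering topic above, here is a much-simplified, hypothetical sketch (not our actual system): a digital score is represented as a list of (onset, MIDI pitch) events, and a toy "query" retrieves all onsets where several notes sound together, which is a first step toward texture queries such as 'homophonic'.

```python
# Hypothetical sketch of event retrieval from a digital score.
# A score is a list of (onset, MIDI pitch) events; we retrieve
# onsets where at least min_size notes sound simultaneously.
from collections import defaultdict

def group_by_onset(events):
    """Group (onset, pitch) events into chords keyed by onset time."""
    chords = defaultdict(list)
    for onset, pitch in events:
        chords[onset].append(pitch)
    return dict(chords)

def find_chords(events, min_size=3):
    """Retrieve onsets where at least min_size notes sound together."""
    return {t: sorted(ps) for t, ps in group_by_onset(events).items()
            if len(ps) >= min_size}

# Two C-major-ish chords with a lone passing note between them:
score = [(0, 60), (0, 64), (0, 67), (1, 62), (2, 60), (2, 65), (2, 69)]
print(find_chords(score))  # {0: [60, 64, 67], 2: [60, 65, 69]}
```

A real system would of course parse an encoded score (e.g. MusicXML or MEI) and support far richer queries, such as cadence types.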
The publications arising from our work cross boundaries between music, computing, psychology, mathematics, and statistics. As an example, a paper on cognition of tonality appeared in Psychological Review (Impact Factor 9.8, and 3rd out of 126 journals in the category Psychology – Multidisciplinary).
If you are interested in joining the team, temporarily or permanently, in person or remotely, you are welcome to get in touch to discuss opportunities. Over time we will be pushing the boundaries of music/cognitive psychology, music informatics research, and music composition, in the areas listed above, among others.
Lehigh University, Bethlehem, and the wider Lehigh Valley make a great place to live or visit, with warm weather throughout the summer, skiing and other winter activities, and lots of (music-related) events going on.
Here are some examples of music composed by humans, alongside computer-based music produced by algorithmically combining existing music in new ways. Have a listen and see if you can tell who did what: human composer or computer-based?
Scribble down your thoughts, send me your answers, and I'll tell you whodunnit! If you'd like to read more about the algorithms behind these examples, this article by my collaborators and me just came out. If you're struggling to get access to it, then you can check out the accepted version here.
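As a hint at how "algorithmically combining existing music" can work, here is a much-simplified sketch, not the algorithm from the article: a first-order Markov chain learns which pitch tends to follow which in a source melody, then random-walks those transitions to recombine the material in a new order.

```python
# Toy illustration of recombination: a first-order Markov chain
# over pitches, learned from one source melody.
import random

def build_transitions(melody):
    """Map each pitch to the pitches that follow it in the source."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(melody, length, seed=0):
    """Random-walk the transition table, recombining the source material."""
    rng = random.Random(seed)
    table = build_transitions(melody)
    note = melody[0]
    out = [note]
    for _ in range(length - 1):
        # Fall back to any source pitch if a note has no recorded successor.
        note = rng.choice(table.get(note, melody))
        out.append(note)
    return out

source = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # MIDI pitches
print(generate(source, 8))
```

Every generated pitch comes from the source, yet the ordering is new; stronger systems condition on longer contexts and on discovered patterns, so that the output retains the style's larger-scale structure.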
Designed by Ali Nikrang, Tom Collins, and Gerhard Widmer, the PatternViewer application plays an audio file of a piece synchronized to a point-set representation, where the color of the points represents an estimate of the local key. The pendular graph in the top-left corner represents the piece's repetitive structure, and can be clicked to find out about motifs, themes, and other repetitive elements. PatternViewer Version 1.0 is available now for download from here.
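To illustrate the point-set idea behind such repetitive-structure displays, here is a much-simplified sketch in the spirit of translation-based pattern discovery (not PatternViewer's actual algorithm): each note is a point (onset, pitch), and a translation vector shared by many point pairs signals a repeated, transposed-or-shifted pattern.

```python
# Toy point-set pattern discovery: group point pairs by their
# translation vector (onset difference, pitch difference).
from collections import defaultdict

def repeated_patterns(points):
    """Return translation vectors shared by more than one point pair,
    with the source points of each pair; such vectors mark repetitions."""
    by_vector = defaultdict(list)
    pts = sorted(points)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            vec = (q[0] - p[0], q[1] - p[1])
            by_vector[vec].append(p)
    return {v: ps for v, ps in by_vector.items() if len(ps) > 1}

# A three-note motif stated at time 0 and repeated four beats later:
score = [(0, 60), (1, 62), (2, 64), (4, 60), (5, 62), (6, 64)]
patterns = repeated_patterns(score)
print(patterns[(4, 0)])  # [(0, 60), (1, 62), (2, 64)]
```

The vector (4, 0) recovers the motif repeated a bar later at the same pitch level; a vector with a nonzero second component would capture a transposed repetition.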