I have also worked on music recommendation for Play Music, involving both learning from audio and learning from how users consume music. In the audio domain, the main goal is to transform the ones and zeros in a digital audio file into a representation where musically similar songs are also numerically similar, which makes recommendation much easier. Musical similarity is (a) user-dependent: my idea of similar is not the same as yours; and (b) context-dependent: my idea of similarity changes when I make a playlist for jogging versus one for a dinner party. I might choose the same song for both (say "Taxman" by the Beatles), but for jogging it might be the tempo that drove the selection of that specific song, versus "I like the album Revolver and want to add it to the mix" for the dinner party.
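As a sketch of what "numerically similar" means here, the toy Python example below embeds waveforms into unit vectors and compares them with cosine similarity. Everything in it is an illustrative assumption rather than Play Music's actual pipeline: the spectral summary, the random projection standing in for a trained network, and the `embed` and `similarity` names are all hypothetical.

```python
import numpy as np

def embed(audio: np.ndarray, dim: int = 128) -> np.ndarray:
    """Map raw audio samples to a fixed-size embedding vector.

    Stand-in for a learned model: a log-spectral summary followed by
    a fixed random projection. In a real system the projection would
    be a neural network trained so that musically similar songs land
    close together in the embedding space.
    """
    frame = 2048
    # Slice the waveform into equal-length frames, summarize each with
    # FFT magnitudes, then average over time so songs of any length
    # map to a vector of the same size.
    frames = np.stack([audio[i:i + frame]
                       for i in range(0, len(audio) - frame + 1, frame)])
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    summary = np.log1p(spectra).mean(axis=0)
    # Fixed random projection standing in for learned weights.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((summary.size, dim))
    vec = summary @ weights
    return vec / np.linalg.norm(vec)  # unit norm: dot product = cosine

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two unit-normalized embeddings."""
    return float(a @ b)

# Two toy "songs": five seconds of a 440 Hz and a 220 Hz sine tone.
sr = 22050
t = np.arange(sr * 5) / sr
song_a = np.sin(2 * np.pi * 440.0 * t)
song_b = np.sin(2 * np.pi * 220.0 * t)
print(similarity(embed(song_a), embed(song_b)))
```

Under the same sketch, the user- and context-dependence described above could be handled by additionally conditioning the embedding on user or context features, so that the same song lands in different neighborhoods for the jogging and dinner-party cases.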
Before joining Google in 2010, I was an Associate Professor in Computer Science at the University of Montreal. I helped found the BRAMS research center (Brain, Music and Sound; www.brams.org) and was involved with CIRMMT at McGill (the Centre for Interdisciplinary Research in Music Media and Technology; www.cirmmt.org). Aside from audio signal processing and machine learning, I worked on music performance modeling. What exactly does a good music performer add to what is already in the score? I treated this as a machine learning question: hypothetically, if we showed a piano-playing robot a huge collection of Chopin performances, from the best in the world all the way down to that of a struggling teenage pianist, could it learn to play well by analyzing all of these examples? If so, what’s the right way to perform that analysis? In the end I learned a lot about the complexity and beauty of human music performance, and about how performance relates to and extends composition.