Tristan and I have been working hard to add a new kind of information to the Echo Nest’s audio APIs. We’re calling these things attributes, and they are quantities that are calculated with data from our track analysis. Our attributes depend on ground truth data generated by The Echo Nest’s awesome Data QA Team, a passionate group of musicians and music lovers that includes several Berklee students. When they tell us a song is danceable, we believe it.
We each groove to different music; what constitutes dance music is inherently subjective. The Echo Nest defines danceability as the ease with which a person could dance to a song, over the course of the whole song. We use a mix of features to compute danceability, including beat strength, tempo stability, overall tempo, and more. One cool thing that I’ve noticed is that remixes of songs tend to have a higher danceability score than the originals. Here’s the distribution of danceability over all the songs we have analyzed (over 14 million).
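To make the "mix of features" idea concrete, here is a toy sketch of how the features named above could be folded into a single 0-1 score. The weights, the 120 BPM reference tempo, and the Gaussian falloff are all illustrative assumptions, not The Echo Nest's actual model:

```python
import math

def danceability(beat_strength, tempo_stability, tempo_bpm):
    """Combine analysis features into a 0-1 danceability-style score.

    beat_strength and tempo_stability are assumed to already be in [0, 1];
    the weights below are made up for illustration.
    """
    # Prefer tempos near a typical dance range (~120 BPM); the width of
    # this falloff is an arbitrary choice.
    tempo_fit = math.exp(-((tempo_bpm - 120.0) / 60.0) ** 2)
    score = 0.5 * beat_strength + 0.3 * tempo_stability + 0.2 * tempo_fit
    return max(0.0, min(1.0, score))
```

Under this sketch a track with a strong, stable beat near 120 BPM scores high, while a weak, drifting 60 BPM track scores low, which matches the intuition (and the remix observation) above.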
Energy is less subjective. How energetic is the music? Does it make you want to bop all over the room, or fall into a coma? The feature mix we use to compute energy includes loudness and segment durations. Here’s the distribution of energy over all the songs we have analyzed (over 14 million):
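Energy can be sketched the same way from the two features named: loudness and segment durations. The dB normalization range, the 0.25 s reference segment length, and the 50/50 weighting below are illustrative assumptions only:

```python
def energy(segments):
    """Energy-style 0-1 score from a list of (loudness_db, duration_sec) pairs.

    The constants here are illustrative, not The Echo Nest's actual model.
    """
    if not segments:
        return 0.0
    total_dur = sum(dur for _, dur in segments)
    # Duration-weighted mean loudness, mapped from roughly [-60, 0] dB to [0, 1].
    mean_loud = sum(loud * dur for loud, dur in segments) / total_dur
    loud_term = max(0.0, min(1.0, (mean_loud + 60.0) / 60.0))
    # Shorter segments mean denser onsets, which reads as more energetic;
    # 0.25 s per segment is an arbitrary reference length.
    seg_rate = len(segments) / total_dur  # segments per second
    rate_term = max(0.0, min(1.0, seg_rate * 0.25))
    return 0.5 * loud_term + 0.5 * rate_term
```

A loud track chopped into short segments lands near the top of the scale; a quiet track with long, slow segments lands near the coma end.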
Here are the various ways you can interact with danceability and energy through the API:
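For instance, a song search constrained by attribute values might be built like this. This is a sketch, not official client code: the `song/search` path, the `min_danceability`/`min_energy` parameters, and the `audio_summary` bucket are assumptions about the web API, so check the developer docs before relying on them:

```python
from urllib.parse import urlencode

# Assumed API root; verify against the developer documentation.
API_ROOT = "http://developer.echonest.com/api/v4"

def search_songs_url(api_key, min_danceability=None, min_energy=None):
    """Build a song/search request URL constrained by attribute values."""
    params = {
        "api_key": api_key,
        "format": "json",
        "bucket": "audio_summary",  # assumed to carry the new attributes
    }
    if min_danceability is not None:
        params["min_danceability"] = min_danceability
    if min_energy is not None:
        params["min_energy"] = min_energy
    return API_ROOT + "/song/search?" + urlencode(params)
```

You would then fetch that URL with any HTTP client and read the attribute values out of each song's `audio_summary` in the JSON response.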
Attributes show how powerful and complete The Echo Nest's analyze data is. Armed with only those JSON documents, you could make your own attributes, too. Maybe you want to implement goodness? But seriously: what are you going to do with danceability and energy at a Music Hack Day? I can't wait to find out.
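As a sketch of rolling your own attribute from an analyze document: the `segments` list with per-segment `loudness_max` values follows the track analysis format, but the "rowdiness" attribute itself, and its dB scaling, are entirely made up for illustration:

```python
import json
import statistics

def rowdiness(analysis_json):
    """A hypothetical attribute: loudness variability across segments."""
    doc = json.loads(analysis_json)
    louds = [seg["loudness_max"] for seg in doc["segments"]]
    if len(louds) < 2:
        return 0.0
    # Standard deviation of segment loudness, squashed into [0, 1]
    # (dividing by 20 dB is an arbitrary illustrative choice).
    return min(1.0, statistics.stdev(louds) / 20.0)
```

A track that lurches between whispers and blasts scores high; a track at one constant loudness scores zero.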
Originally published: Fri, 15 Oct 2010 to https://runningwithdata.tumblr.com/post/1321504427