Advanced Robotics


We were aiming to answer two different questions with this experiment:

Are the labels we’re trying to apply to tracks meaningful?

Do our robots reliably apply the right labels to a track?

The first question is the more fundamental: if we’re using labels that don’t mean anything to humans, it doesn’t much matter what our robots say! To answer this question we looked at the average agreement between humans for each track. If humans reliably agree with each other, we can conclude the label has a clear meaning, and it’s worth trying to get our robots to replicate those judgements.
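One simple way to measure this kind of agreement is the average pairwise agreement: for each track, the fraction of human rater pairs who chose the same category. The post doesn’t say which statistic was actually used, so this is just a minimal sketch of the idea, with made-up ratings:

```python
from itertools import combinations

def pairwise_agreement(labels):
    """Fraction of rater pairs that chose the same category for a track."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 0.0  # fewer than two raters: no pairs to compare
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical ratings for one track: four humans labelling its "Speed".
ratings = ["fast", "fast", "midtempo", "fast"]
print(pairwise_agreement(ratings))  # 3 agreeing pairs out of 6 -> 0.5
```

A high average across many raters suggests the label has a clear, shared meaning; a score near chance suggests the category itself is ambiguous. (In practice a chance-corrected statistic such as Fleiss’ kappa would be more robust, since some categories are easier to agree on by luck.)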

We were looking at 15 different audio “features”. Each feature describes a particular aspect of music, such as “Rhythmic Regularity”.
Each feature has a number of categories; for example, “Speed” can be fast, slow or midtempo. Each time a human used the Robot Ears app, they were asked to sort tracks into the appropriate categories for a particular feature. Meanwhile, our robots were asked to do the same. At the end of a turn, we showed you how your answers matched up with the robot’s.
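The end-of-turn comparison boils down to a per-track match rate between the human’s categories and the robot’s. The labels below are invented for illustration; a confusion count is one way to see not just *how often* the two disagree, but *where*:

```python
from collections import Counter

# Hypothetical "Speed" labels over one turn of the app.
human = ["fast", "slow", "midtempo", "fast", "slow"]
robot = ["fast", "slow", "fast", "fast", "midtempo"]

# Per-turn match rate: fraction of tracks where human and robot agree.
match_rate = sum(h == r for h, r in zip(human, robot)) / len(human)

# Confusion counts reveal which categories get mixed up.
confusion = Counter(zip(human, robot))

print(f"{match_rate:.0%} of tracks matched")
print(confusion.most_common())
```

Systematic confusions (say, “midtempo” tracks the robot calls “fast”) are more informative than the raw score alone: they point at category boundaries that are fuzzy for the robot, or for humans.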
