Last.fm – the Blog · Advanced Robotics
We were aiming to answer two different questions with this experiment:
Are the labels we’re trying to apply to tracks meaningful?
Do our robots reliably apply the right labels to a track?
The first question is the more fundamental – if we’re using labels that don’t mean anything to humans, it doesn’t much matter what our robots say! To answer this question we looked at the average agreement between humans for each track. If humans reliably agree with each other we can conclude the label has a clear meaning, and it’s worth trying to get our robots to replicate those judgements.
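As a rough illustration (this is not Last.fm's actual code, and the label data is invented), average agreement between humans on a track could be measured as the fraction of labeler pairs who chose the same category:

```python
from itertools import combinations

def pairwise_agreement(labels):
    """Fraction of human pairs that assigned the same label to a track."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical "Speed" labels for one track from four listeners
track_labels = ["fast", "fast", "midtempo", "fast"]
print(pairwise_agreement(track_labels))  # 0.5
```

A high average score across tracks suggests the label has a clear meaning to humans; a score near chance suggests it doesn't, regardless of what the robots say.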
We were looking at 15 different audio “features”. Each feature describes a particular aspect of music, such as:
Speed
Rhythmic Regularity
Noisiness
“Punchiness”
etc.
Each feature has a number of categories; for example, “Speed” can be fast, slow or midtempo. Each time a human used the Robot Ears app, they were asked to sort tracks into the appropriate categories for a particular feature. Meanwhile our robots were asked to do the same. At the end of a turn, we showed you how your answers matched up with the robot’s.
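A hedged sketch of how one turn's human-versus-robot comparison might be scored (the function, track names and labels here are hypothetical, not the app's real implementation):

```python
def match_score(human, robot):
    """Percentage of tracks where the human and the robot chose the same category."""
    matches = sum(human[track] == robot[track] for track in human)
    return 100.0 * matches / len(human)

# Invented answers for one turn of the "Speed" feature
human = {"track_a": "fast", "track_b": "slow", "track_c": "midtempo"}
robot = {"track_a": "fast", "track_b": "midtempo", "track_c": "midtempo"}
print(round(match_score(human, robot), 1))  # 66.7
```

Aggregating these scores over many turns and tracks is what lets you separate the two questions: human-versus-human agreement answers the first, human-versus-robot agreement answers the second.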