I’ve been swamped lately with finishing up my master’s thesis and have been tweeting occasional gripes and self-created hashtags about the process.
#thesisjams might not be inspiring to anyone besides me, but what if all my tweets about the process were unwittingly being turned into public music?
The Listening Machine is a project by composer and cellist Peter Gregson and doctoral researcher and artistic programmer Daniel Jones. They’ve taken the Twitter feeds of 500 unidentified people and turned them into a constant stream of music.
What exactly is being translated into sound?
Music and human language have very different properties, so it’s impossible to directly translate one into the other. Instead, The Listening Machine extracts various pieces of information and uses them to control different parameters of the piece.
- The overall rate of tweeting is linked to the rate and speed at which musical events are triggered
- Emotional trends govern the piece’s musical mode: positive, negative or neutral
- Phrases and sentences that make up tweets are used to generate sequences of musical notes
- Other keywords and topics are used to trigger larger movements within the piece
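To make the idea of the mappings above concrete, here’s a minimal sketch of how text could drive musical parameters. The Listening Machine’s actual implementation isn’t shown in this post, so everything here (the mode tables, the tempo formula, the word-to-note rule) is my own assumption, not their method:

```python
# Hypothetical mappings in the spirit of The Listening Machine.
# All specific rules below are illustrative assumptions.

MODES = {
    "positive": [0, 2, 4, 5, 7, 9, 11],  # major scale (semitone offsets)
    "negative": [0, 2, 3, 5, 7, 8, 10],  # natural minor
    "neutral":  [0, 2, 4, 7, 9],         # major pentatonic
}

def tempo_from_rate(tweets_per_minute, base=60, ceiling=180):
    """More tweets per minute -> faster music, capped at a ceiling (BPM)."""
    return min(ceiling, base + tweets_per_minute * 4)

def notes_from_text(text, mood="neutral", root=60):
    """Map each word to a pitch in the current mode (as MIDI note numbers),
    using word length to pick a scale degree."""
    scale = MODES[mood]
    return [root + scale[len(word) % len(scale)] for word in text.split()]
```

So a burst of cheerful tweets would push the tempo up and pull the notes into a major mode, e.g. `notes_from_text("what a lovely day", "positive")` yields a short major-scale phrase.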
To find out more about how the Listening Machine works, check out their in-depth explanation here.
I’m probably not one of the 500 Twitter users fueling the machine, but it definitely qualifies as one of my #thesisjams.