As a former sound studies student and radio/music/audio geek, I tend to think a lot about the various aural phenomena I confront on a daily basis. On the job, it’s mostly the gentle roar of Broadway and the whisper-quiet discussions of my Chartteam colleagues hard at work… Thankfully the Chartbeat record player is actually a thing and we’re slowly building out our library (thanks to @tovah at the Lansing State Journal for the killer batch of vinyl!).
Ok, but when I’m not burying everything in headphones full of blissful fuzz I’ve got to listen to what’s going on around me and filter the signal from the noise. It’s a common problem; one that Chartbeat’s data science team confronts every day. When you’re looking at a large dataset it can be tough to pick out the useful bits, so you graph the data, relying on your eyes to spot the anomalies (obviously there’s much more to it than that, but I’m not the data scientist ‘round here, so let’s just let bygones be bygones and keep on truckin’, cool?). This works in most cases, but in the real-time paradigm you’re frequently monitoring data as it comes in… and you’ve only got one set of eyes, so how are you going to mustachify that picture of your CEO and watch the data at the same time?
Recently I’ve been seeing a bunch of sonification projects that are attempting to free up those eyes so you can monitor real-time events passively while working on something else. Data sonification has been around for a while (cue image of guys in hazmat suits walking around with Geiger counters), but I thought I’d take a moment on the ol’ Chartbeat blog to throw around some links for the interested data/sound nerds who are still reading.
This one has been going around the web quite a bit recently: basically a real-time sonification of various types of Wikipedia activity, particularly notable for its thoughtful sound design and accompanying visuals (github).
The inspiration for Listen to Wikipedia. Both of these make excellent use of the howler.js library, which defaults to the Web Audio API with an HTML5 Audio fallback (github). [BONUS: If you’ve got half an hour to kill, go play around with these Web Audio API demos].
A real-time sonic feed of German tweets. There’s a lot of cool stuff about this one, but I love how complex the composition is: tweets are spatialized in the stereo mix according to which side of the country they came from, and of course you get different sounds for replies, retweets, and hashtags.
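For the curious, here’s a rough sketch of how that kind of geographic spatialization can work. The function name and the longitude bounds (roughly 6°E–15°E for Germany’s east–west extent) are my own invention for illustration, not the project’s actual code; the idea is just to normalize a longitude into the Web Audio pan range of -1 (hard left) to 1 (hard right).

```javascript
// Map a longitude to a stereo pan position in [-1, 1].
// Western tweets pan left, eastern tweets pan right.
// Bounds here (~6°E to ~15°E for Germany) are illustrative guesses.
function panForLongitude(lon, west = 6.0, east = 15.0) {
  const t = (lon - west) / (east - west);      // normalize to [0, 1]
  const clamped = Math.min(1, Math.max(0, t)); // clamp out-of-bounds points
  return clamped * 2 - 1;                      // rescale to [-1, 1]
}

// In a browser, this value would drive a StereoPannerNode:
//   panner.pan.value = panForLongitude(tweet.longitude);

console.log(panForLongitude(6.0));  // -1 (far west → hard left)
console.log(panForLongitude(10.5)); // 0  (dead center)
console.log(panForLongitude(15.0)); // 1  (far east → hard right)
```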
Listen in on all of the public GitHub activity. This one was put together by choir.io as a demo for their real-time data sonification service (DSaaS, anyone?), which allows you to create your own custom event monitoring streams.
An art installation that tracks the position of taxis at busy New York City intersections and synthesizes the data into a soundscape in real time.
Ok, so this one isn’t real-time, but I mean come on… we’re listening to data from a particle collision at the LHC.
Crafted by our very own @dbow1234, this one creates an ambient soundscape from your site’s historical traffic data. The link above is using Chartbeat’s data for gizmodo.com, with frequency range governed by total concurrents and distortion/reverb mapped to social traffic. Swap in your domain and API key to get your own sounds. Danny already wrote a bit about this on the blog, so check out his post for technical details and peep his other hackweek projects.
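As a rough sketch of that kind of metric-to-sound mapping (the function names, ranges, and scaling choices below are mine, not Danny’s actual code): log-scale the concurrent count into an audible frequency band so that traffic spikes don’t blow past the top of the range, and turn the social share of traffic into a distortion wet/dry amount.

```javascript
// Illustrative mapping from traffic metrics to synth parameters.
// Names and ranges are invented for this sketch, not from the project.

// Log-scale concurrents into an audible frequency band (Hz), saturating
// at an assumed ceiling of 100k concurrents.
function frequencyForConcurrents(concurrents, minHz = 110, maxHz = 880) {
  const t = Math.log10(1 + concurrents) / Math.log10(1 + 100000);
  return minHz + Math.min(1, t) * (maxHz - minHz);
}

// Social traffic share (0..1) maps directly to a distortion wet/dry mix.
function distortionForSocialShare(socialVisitors, totalVisitors) {
  if (totalVisitors <= 0) return 0;
  return Math.min(1, socialVisitors / totalVisitors);
}

console.log(frequencyForConcurrents(0));      // 110 (quiet site → low drone)
console.log(frequencyForConcurrents(100000)); // 880 (peak traffic → high pitch)
```

The log scaling is the interesting design choice: concurrents can swing across orders of magnitude, so a linear map would leave most sites pinned near the bottom of the frequency range.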
If you want to learn more, here are a few academic resources, because I’m a super nerd and so are you: