Sound and Color: Data as Art

As an engineer at Chartbeat, I’m obviously a believer in the power of data visualization. When you have the right data displayed in the right way at the right time, you gain a deeper understanding of the world and can make more thoughtful decisions as a result.

This belief has made me very curious about what happens when you bring data and technology to fields where they’re traditionally absent. I’ve worked on a number of projects exploring alternative data visualization and interaction, ranging from SoundQuake, the winning piece at Art Hack SF, to Living Los Sures, an innovative documentary piece from PBS’s POV Hackathon last summer. I often use my Hack Week time at Chartbeat to try to visualize and interact with our data in new ways. 

Data as Abstract Expression

We usually visualize data in very literal ways to make sure we’re getting a concrete sense of what it means in real-world terms. Numbers, charts, graphs – these take a data set and convert it to something very tangible. You can understand the Average Engaged Time of all users by converting the Engaged Time of each individual user into a single number. You can understand which sites refer more traffic to your site than others by looking at a pie chart of referral traffic, which translates percentages into literal segments of a circle.
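To make that mapping concrete, here’s roughly what the pie-chart conversion looks like in d3 (the referrer numbers are made up, and I’m using d3 v4’s API):

```javascript
// Hypothetical referrer data: shares of total referral traffic.
var referrers = [
  { name: 'Google',   share: 45 },
  { name: 'Twitter',  share: 30 },
  { name: 'Facebook', share: 25 }
];

// d3.pie() converts each value into start/end angles;
// d3.arc() turns those angles into literal SVG path segments.
var arcs = d3.pie().value(function (d) { return d.share; })(referrers);
var arc  = d3.arc().innerRadius(0).outerRadius(100);

d3.select('svg')           // assumes an <svg> element on the page
  .append('g')
  .attr('transform', 'translate(100,100)')
  .selectAll('path')
  .data(arcs)
  .enter().append('path')
  .attr('d', arc);
```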

But I was curious what would happen if you broke that literal connection and tried to convert a data set into something more purely emotional, active, and formless – if you could connect the data to the subconscious in a way that yielded a different understanding of it. My expectation was that it would be “kind of cool, but probably not that useful.” The result, Chartbeatnik (the name seemed more clever at the time…), was an interesting first foray into this idea.

Chartbeatnik uses d3 and SVG to convert real-time data into an Abstract Expressionist, dripped-paint-style visualization. Each visitor is painted on the screen, and that visitor’s Engaged Time affects the size and shape of the form. So rather than seeing the Engaged Time of your visitors as a single literal number, you see it unconsciously in the energy and spirit of the forms.
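A minimal sketch of the idea looks something like this – the visitor data and scaling are made up, and Chartbeatnik’s actual splatter shapes are more elaborate:

```javascript
// Hypothetical visitors, with Engaged Time in seconds.
var visitors = [{ engagedTime: 12 }, { engagedTime: 87 }, { engagedTime: 240 }];

var svg = d3.select('svg');  // assumes an <svg> element on the page

visitors.forEach(function (visitor) {
  // More Engaged Time -> a bigger central splat with more drips.
  var cx = Math.random() * 800;
  var cy = Math.random() * 600;
  var size = Math.sqrt(visitor.engagedTime);        // dampen large values
  var drips = Math.ceil(visitor.engagedTime / 30);

  svg.append('circle')
     .attr('cx', cx).attr('cy', cy).attr('r', size)
     .attr('fill', 'hsl(' + Math.random() * 360 + ', 70%, 50%)');

  // Scatter smaller drips around the central splat.
  for (var i = 0; i < drips; i++) {
    svg.append('circle')
       .attr('cx', cx + (Math.random() - 0.5) * size * 6)
       .attr('cy', cy + (Math.random() - 0.5) * size * 6)
       .attr('r', Math.random() * size / 3)
       .attr('fill', 'hsl(' + Math.random() * 360 + ', 70%, 50%)');
  }
});
```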

Data as Synthesized Sound

I’m a musician and have been interested in sound as an expressive form for a long time. For one of my more recent Hack Week projects, I wanted to see if I could convert a data set into a soundscape – a kind of data synthesizer. I was also curious to experiment with Web Audio. I ended up making Chartwave, which pulls a data set from Chartbeat’s historical traffic series API and creates a soundscape based on the series data returned. You can change which series it uses by adding something like ‘?fields=return,social’ to the end of the URL (the available values are listed in the app itself and in the API documentation).
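The request itself is the least interesting part, but for orientation it looks roughly like this – the endpoint path, parameter names, and response shape here are illustrative, so check the API documentation for the real details:

```javascript
// Read the series names from Chartwave's own URL, e.g. ?fields=return,social
var params = new URLSearchParams(window.location.search);
var fields = params.get('fields') || 'people';  // hypothetical default

// Illustrative request shape only -- see the API docs for the real
// endpoint, auth, and parameters.
$.getJSON('https://api.chartbeat.com/historical/traffic/series/', {
  apikey: 'YOUR_API_KEY',   // placeholder
  host: 'yoursite.com',     // placeholder
  fields: fields
}, function (response) {
  // Assumed shape: one time series per requested field.
  console.log(response);
});
```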

The first parameter controls the frequency of the tones played. The set of possible frequencies spans five octaves, harmonically centered on a G major chord. As you go up in frequency, the harmonic focus shifts occasionally to A major. The subset of frequencies playing at any given time is determined by the value of the first parameter at that time relative to the maximum value of that parameter. So if the first parameter is 10% of the maximum in the series, then the lowest 10% of tones will be playing. The bottom two tones are always playing, no matter what. Each tone plays for a random amount of time between zero and five seconds, with a one-second cooldown before it can play again. So not every tone that can be played will be playing at any given moment – a given tone might already be playing, or in its cooldown phase.
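In sketch form, the tone-gating logic works something like this – the names are mine, not Chartwave’s actual code, and it assumes `audioCtx` is a shared AudioContext and `tones` is the array of oscillator voices from the routing sketch further down, ordered low to high:

```javascript
// Called on every tick with the first parameter's current value.
function updateTones(value, maxValue, tones) {
  var fraction = value / maxValue;
  // The lowest `fraction` of the spectrum may play; the bottom two
  // tones are always included.
  var eligible = Math.max(2, Math.round(fraction * tones.length));

  tones.slice(0, eligible).forEach(function (tone) {
    if (tone.playing || tone.coolingDown) return;  // already busy

    tone.playing = true;
    tone.gain.gain.setValueAtTime(1, audioCtx.currentTime);

    // Play for a random 0-5 seconds, then cool down for 1 second.
    var durationMs = Math.random() * 5000;
    setTimeout(function () {
      tone.gain.gain.setValueAtTime(0, audioCtx.currentTime);
      tone.playing = false;
      tone.coolingDown = true;
      setTimeout(function () { tone.coolingDown = false; }, 1000);
    }, durationMs);
  });
}
```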

The second parameter controls the level of distortion and reverb applied to the output. This is controlled similarly to the first parameter, based on the value of the second parameter at any given time relative to the maximum value in the series. So if the second parameter is 50% of the maximum in the series, 50% of the output will be routed through the distortion/reverb channel.
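In other words, something like this (a sketch, not the actual code):

```javascript
// Crossfade the output between the dry and wet (distortion/reverb)
// channels, based on the second parameter relative to the series maximum.
function updateWetDry(value, maxValue, masterDryGain, masterWetGain) {
  var wet = value / maxValue;       // 0.5 -> an even dry/wet split
  masterWetGain.gain.value = wet;
  masterDryGain.gain.value = 1 - wet;
}
```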

For those curious about the slightly more technical details, the routing graph is detailed here. The first parameter essentially controls the gain levels of the spectrum of oscillator nodes. The second parameter controls the relative gain of the MasterWetGain vs. the MasterDryGain – to increase distortion/reverb, more of the output comes from the MasterWetGain. Chartwave uses the Web Audio API, jQuery, Underscore, dat.GUI, HTML5 Boilerplate, and the Google Visualization API.
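Here’s a simplified version of that graph in Web Audio terms. The node choices for the wet chain are my assumptions (a WaveShaperNode for distortion feeding a ConvolverNode for reverb); the linked routing graph has the real details:

```javascript
var audioCtx = new AudioContext();

// Master dry and wet channels, per the MasterDryGain / MasterWetGain
// nodes mentioned above.
var masterDryGain = audioCtx.createGain();
var masterWetGain = audioCtx.createGain();

// Assumed wet chain: distortion into reverb. The convolver needs an
// impulse-response buffer, omitted here.
var distortion = audioCtx.createWaveShaper();
var reverb = audioCtx.createConvolver();

masterDryGain.connect(audioCtx.destination);
masterWetGain.connect(distortion);
distortion.connect(reverb);
reverb.connect(audioCtx.destination);

// One oscillator and gain node per tone in the five-octave spectrum;
// each tone's output is split between the dry and wet channels.
function makeTone(frequency) {
  var osc = audioCtx.createOscillator();
  var gain = audioCtx.createGain();
  osc.frequency.value = frequency;
  gain.gain.value = 0;    // silent until the data turns the tone on
  osc.connect(gain);
  gain.connect(masterDryGain);
  gain.connect(masterWetGain);
  osc.start();
  return { osc: osc, gain: gain, playing: false, coolingDown: false };
}
```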

Data for Everyone

Beyond being “kind of cool,” these experiments are interesting to me because they don’t assume that everyone sees the world the same way. Maybe for a lot of people numbers are the fastest, most useful way to understand something about the world, but there are probably other people for whom numbers don’t quite have the same power as sound or color. We should be open to the idea that in some contexts numeric information could be converted into expressive forms that might be more meaningful.

If you have any ideas for future hacks involving abstract data interpretations, please shoot me an email or add suggestions in the Comments section.

