Posts Tagged ‘Hack Week’

Big Board Mini Hack

July 31st, 2013 by David

You’re probably familiar with Big Boards – you use ‘em, you love ‘em, and they’re a super easy way of keeping everyone in the newsroom in the know about what stories are trending online.

The original Big Boards (nerd joke) were made so they would be easy to enhance and modify. We tried to do the same with our Big Board, so we open-sourced the code for you to tinker with to your heart’s content. We also threw in a few extra features that might not be so obvious… here’s one of my favorites:

Author List

If you’re using Chartbeat Publishing, you can append “&group=author” to the end of your big board URL to get a leaderboard-style list of all of your writers, sorted in real time by how many people are reading their articles.
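If you’d rather not edit the URL by hand, the tweak is a one-liner – a quick sketch (the function name is mine, and the “&” vs. “?” handling assumes your big board URL may or may not already carry a query string):

```javascript
// Append the author grouping to an existing big board URL. Use "&" if the URL
// already has a query string (as big board URLs usually do), "?" otherwise.
function authorBoardUrl(bigBoardUrl) {
  const sep = bigBoardUrl.includes('?') ? '&' : '?';
  return bigBoardUrl + sep + 'group=author';
}
```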

It looks something like this, but with your actual writers instead of the names of some Chartteam members:
(hmm… I seem to be doing quite well today)

I know a lot of you are fans of the Big Board, so stay tuned for future Big Board hacks and projects.


As an engineer at Chartbeat, I’m obviously a believer in the power of data visualization. When you have the right data displayed in the right way at the right time, you gain a deeper understanding of the world and can make more thoughtful decisions based on that.

This belief has made me very curious about what happens when you bring data and technology to fields where they’re traditionally absent. I’ve worked on a number of projects exploring alternative data visualization and interaction, ranging from SoundQuake, the winning piece at Art Hack SF, to Living Los Sures, an innovative documentary piece from PBS’s POV Hackathon last summer. I often use my Hack Week time at Chartbeat to try to visualize and interact with our data in new ways. 

Data as Abstract Expression

We usually visualize data in very literal ways to make sure we’re getting a concrete sense of what it means in real-world terms. Numbers, charts, graphs – these take a data set and convert it into something very tangible. You can understand the Average Engaged Time of all users by converting the Engaged Time of each individual user into a single number. You can understand which sites refer more traffic to your site than others by looking at a pie chart of referral traffic, which translates percentages into literal segments of a circle.
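That first conversion really is just an average – in code, something like this (names are illustrative, not Chartbeat’s actual API):

```javascript
// Collapse each visitor's Engaged Time (in seconds) into a single literal
// number: the Average Engaged Time across all visitors.
function avgEngagedTime(secondsPerVisitor) {
  if (secondsPerVisitor.length === 0) return 0;
  const total = secondsPerVisitor.reduce((sum, s) => sum + s, 0);
  return total / secondsPerVisitor.length;
}
```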

But I was curious what would happen if you broke that literal connection and tried to convert a data set into something more purely emotional, active, and formless – if you could connect the data to the subconscious in a way that yielded a different understanding of it. My expectations were that it would be “kind of cool, but probably not that useful.” The result, Chartbeatnik (the name seemed more clever at the time…) was an interesting first foray into this idea.

Chartbeatnik uses d3 and SVG to convert real-time data into an abstract expressionistic dripped-paint style visualization. Each visitor will be painted on the screen and the Engaged Time of that visitor will affect the size and shape of the form. So rather than seeing the Engaged Time of your visitors as a single literal number, you see it unconsciously in the energy and spirit of the forms. 
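The post doesn’t spell out Chartbeatnik’s actual mapping, but the core idea – translating a visitor’s Engaged Time into the scale of a painted form – might be sketched like this (the specific ranges and the cap are my own assumptions; the real d3/SVG rendering is omitted):

```javascript
// Map a visitor's Engaged Time (seconds) to the radius of a dripped-paint
// form. Longer engagement means a bigger, more energetic splash, capped so a
// single marathon reader doesn't fill the whole canvas.
function paintRadius(engagedSeconds, minRadius = 4, maxRadius = 60, capSeconds = 300) {
  const t = Math.min(engagedSeconds, capSeconds) / capSeconds; // 0..1
  return minRadius + t * (maxRadius - minRadius);
}
```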

Data as Synthesized Sound

I’m a musician and have been very interested in sound as an expressive form for a long time. For one of my more recent Hack Week projects, I wanted to see if I could convert a data set into a soundscape – a kind of data synthesizer. I was also curious to experiment with Web Audio. I ended up making Chartwave, which takes a historical data set from Chartbeat’s historical traffic series API and creates a soundscape based on the series data returned. You can modify which series it uses by adding something like ‘?fields=return,social’ to the end of the URL (the available values are listed in the app itself and the API documentation).

The first parameter controls the frequency of the tones played. The set of possible frequencies spans 5 octaves, harmonically centered on a G major chord. As you go up in frequency, the harmonic focus shifts occasionally to A major. The subset of frequencies playing at any given time is determined by the value of the first parameter at that time relative to the maximum value of that parameter. So if the first parameter is 10% of the maximum in the series, then the lowest 10% of tones will be playing. The bottom 2 tones are always playing no matter what. Each tone plays for a random amount of time between 0 and 5 seconds, with a 1 second cooldown before it can play again. So not all tones that can be played will be played at any given moment – a given tone might already be playing, or in its cooldown phase.
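That selection rule can be sketched in code – a simplification of Chartwave’s actual logic, with names of my own choosing (the random play durations and cooldowns are left out here):

```javascript
// Given the current value of the first series parameter and the series
// maximum, return the indices of the tones eligible to play. The tone bank is
// ordered lowest to highest frequency; the bottom two are always eligible.
function eligibleTones(value, maxValue, toneCount) {
  const fraction = maxValue > 0 ? value / maxValue : 0;
  // The lowest `fraction` of the tone bank, but never fewer than two tones.
  const active = Math.max(2, Math.round(fraction * toneCount));
  const indices = [];
  for (let i = 0; i < Math.min(active, toneCount); i++) indices.push(i);
  return indices;
}
```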

The second parameter controls the level of distortion and reverb applied to the output. This is controlled similarly to the first, based on the value of the second parameter at any given time relative to the maximum value in the series. So if the second parameter is 50% of the maximum in the series, 50% of the output will be routed through the distortion/reverb channel.
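In other words, the second parameter sets a wet/dry mix. A minimal sketch, borrowing the gain names from the routing graph (the arithmetic itself is my assumption):

```javascript
// Split the output between the "wet" (distortion/reverb) channel and the
// "dry" channel, based on the second parameter relative to its series max.
function wetDryGains(value, maxValue) {
  const wet = maxValue > 0 ? Math.min(value / maxValue, 1) : 0;
  return { masterWetGain: wet, masterDryGain: 1 - wet };
}
```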

For those curious about slightly more technical details, the routing graph is detailed here. The first parameter basically controls the gain levels of the spectrum of oscillator nodes. The second parameter controls the relative gain of the MasterWetGain vs the MasterDryGain – to increase distortion/reverb, more of the output is from the MasterWetGain. It uses the Web Audio API, jQuery, Underscore, dat.GUI, Html5Boilerplate, and the Google Visualization API.

Data for everyone

Beyond being “kind of cool,” these experiments are interesting to me because they don’t assume that everyone sees the world the same way. Maybe for a lot of people numbers are the fastest, most useful way to understand something about the world, but there are probably other people for whom numbers don’t quite have the same power as sound or color. We should be open to the idea that in some contexts numeric information could be converted into expressive forms that might be more meaningful.

If you have any ideas for future hacks involving abstract data interpretations, please shoot me an email or add suggestions in the Comments section.


MVC is a design pattern commonly used when programming modern graphical user interfaces (GUIs). There are dozens of well-written, well-supported MVC frameworks out there, so it’s not that the world needs yet another one – but building a new one is an excellent exercise in architecture, planning, and testing. That is why my most recent Hack Week project was to design and build my own MVC framework for use in my own projects.




What is an MVC Framework?

An MVC framework is built around the popular object-oriented design pattern called Model-View-Controller. The strength of this pattern is that it enforces separation of concerns: each component is responsible for a specific set of functionality. The model manages and stores data, the view renders the visual (display) aspect of the site, and the controller is essentially the brain, taking user input from the views and manipulating data on the model. None of these components should be tightly coupled, providing an environment of interchangeable pieces.
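As a toy sketch of the pattern – not any particular framework’s API – the three pieces might look like this in plain JavaScript:

```javascript
// Minimal Model-View-Controller sketch: the model stores data and notifies
// listeners when it changes, the view renders, and the controller mediates
// user input, manipulating the model rather than the view directly.
class Model {
  constructor(data) { this.data = data; this.listeners = []; }
  onChange(fn) { this.listeners.push(fn); }
  set(key, value) {
    this.data[key] = value;
    this.listeners.forEach((fn) => fn(this.data));
  }
}

class View {
  // Render to a string for brevity; a real view would touch the DOM.
  render(data) { return `count: ${data.count}`; }
}

class Controller {
  constructor(model, view) {
    this.model = model;
    this.view = view;
    model.onChange((data) => { this.output = view.render(data); });
  }
  // "User input" arrives at the controller, which updates the model; the
  // view re-renders via the model's change notification.
  increment() { this.model.set('count', this.model.data.count + 1); }
}
```

Note that the view never reaches into the model and the model knows nothing about rendering – that loose coupling is the whole point.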

Wraith v0.1.0


Meet Wraith

Wraith was a project I thought up several months ago out of my frustrations with the current MV* frameworks available on the internet. I was working on a few small, single-page applications and was testing different frameworks to see which suited my needs. I tried Backbone, Spine, Angular, and a few others that didn’t quite fit the bill. What I wanted was a framework that bound the data to the view – something I call implicit model-view binding – but required no logic to be present inside the views.

For all intents and purposes, Angular does provide this level of functionality, and so does Backbone with help from a variety of plugins. But Angular is rather big, has a pretty steep learning curve, and doesn’t enforce logicless views (something I feel is extremely important in such a framework), and Backbone takes a bit of finagling to get anything to work quite right. Additionally, all of these frameworks work best when used with a library like jQuery or Zepto to handle event delegation and DOM manipulation.

Why make another MV* Framework?

I wrote Wraith because I wanted an MV* framework that didn’t depend on any external libraries, had Angular-like model-view binding, and was super lightweight and easy to understand. Additionally, I wanted to write this framework in CoffeeScript since it is easy to read, has powerful array comprehensions, and is just a ton of fun to write in.

Along the way I sought inspiration from Spine, Backbone, and Angular, mixing Spine-style models and collections and Angular-style directives with Backbone-style templating (à la Handlebars). All of these influences make Wraith a unique experience, but it still feels incredibly familiar to most frontend developers.

What makes Wraith different?

Wraith is completely self-contained. You need nothing else to get started creating a basic single page application. I say basic because the framework is very much in its infancy. It does not have support for URL routing, AJAX requests, animations, or persistent storage. These are all things I hope to accomplish in the near future.

Now that I have identified what Wraith doesn’t have (yet), let’s talk about what it does well:

  • Implicit Model-View binding (akin to the MVVM design pattern)
  • Controllers that are also views (again, MVVM)
  • Handlebars-esque logicless templating
  • Template declaration directly in the DOM that doesn’t require a compilation process
  • Event binding directly from the DOM, instead of requiring JS to do so
  • Partial view updating (only update elements that changed)
  • Well under 20kb when minified

How to get started with Wraith?

Wraith is declarative, in that much of the heavy lifting – data and event binding, class and text manipulation – happens directly in the markup (HTML). Your controller is initialized from the DOM directly, so when you create your app it’ll look something like this:

Wraith will require you to create an App.MainController object in the global namespace, which it will find and create an instance of, binding it to the element that it’s defined on (in this case, section).



Before Wraith will do anything, though, you must initialize its bootloader. This will start the controller initialization.

Event Handling

In Wraith, you are required to create event handlers in your controllers, but you bind them to events inside the DOM structure like so:

Now when the text input is typed into, the onKeypress method on App.MainController will be invoked.

Models and Collections

I really enjoyed working with Models in Spine when compared to other frameworks, and thus Wraith’s models are similar in design. You can create a new model with default values easily:
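Wraith’s own CoffeeScript snippets aren’t reproduced here, but the idea of Spine-style default values can be sketched in plain JavaScript (illustrative only – Wraith’s actual API may differ):

```javascript
// A model factory in the Spine style: declare defaults once, and every new
// instance merges its own attributes over them. Not Wraith's real code.
function createModel(defaults) {
  return function Model(attributes) {
    return Object.assign({}, defaults, attributes);
  };
}
```

So `createModel({ done: false, text: '' })` gives you a constructor whose instances fall back to those defaults for any attribute you don’t supply.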



Collections can be done similarly:



Data Binding

One of the most important things I tried to accomplish with Wraith was easy data binding. I didn’t want to write logic in my views, so I needed to handle looping over collections as well as showing and hiding views or partial views. The solution was to allow a view to be bound via dot-notation to a property on a model similar to what Angular does.

This will bind the input to the list property on your controller (App.MainController). Every time the list.items property changes, the view will automatically be updated (and in this case, repeated as a list).
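Under the hood, a dot-notation binding like this needs some way to resolve a path such as "list.items" against the controller. A hypothetical helper (not Wraith’s actual code) might look like:

```javascript
// Resolve a dot-notation path (e.g. "list.items") against an object,
// returning undefined rather than throwing when a segment is missing.
function resolvePath(obj, path) {
  return path.split('.').reduce(
    (current, key) => (current == null ? undefined : current[key]),
    obj
  );
}
```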

Class Binding

Want to hide or show something? Instead of writing logic in JavaScript to hide and show an element or alter its class attributes, you can use data or methods from your models to alter the class structure.

When selected is true, the class highlight will be applied to the span surrounding our text.

Want to know more?

This was just a brief overview of what Wraith is capable of doing right now and what it will be capable of doing in the future. For more information, follow me on Twitter (@Shaun_Springer), check out the GitHub repository, read the documentation, and check out the examples below:

As many of you guys know, every 7 weeks at Chartbeat we have a Hack Week where we can work on any project or projects that interest us. At the end of the week, we present our creations, some of which make it to our Labs page or even become fully-fledged features for our products. 

Meet Hue.

Last year Philips released a much-hyped LED light bulb called the Hue that gives you complete control of every aspect of the bulb from your smartphone or through the Philips website. Hue lights have an API, which unlocks a ton of potential for interesting integrations. I acquired a Hue set around the holidays, and ever since I’ve been wanting to hook them up to our PagerDuty alert notification system at Chartbeat.

Enter: My Hack Week project, which I’ve named “Pager Huety”.  Pager Huety is a script that controls the Philips Hue light bulbs based on triggered incidents from the PagerDuty API.

Meet me and my hack.

Part of my job as the Chartbeat Senior Web Operations Engineer is being part of an on-call rotation to ensure everyone’s dashboards are running smoothly 24/7. Occasionally you may see an issue on your dashboard, but luckily we’re immediately alerted to any issues via our monitoring system, which sends an alert through PagerDuty to the dedicated on-call team member.

I wrote Pager Huety to wake me up in the middle of the night with my Hue lights rather than have my phone go off with whatever less-than-awesome ringtone I’ve selected. There’s an option to have it alert only during nighttime and to filter for incidents assigned only to me. Currently the default light sequence flashes the bulbs red and blue a few times, then turns them bright white for 10 seconds and shuts them off. A video of Pager Huety in action can be seen below:


There are a lot of cool things you can do with the Hue bulbs and your Chartbeat data. You can even utilize the Chartbeat API to flash Hue lights when you hit a new 30-day max – want to give it a go yourself? Tweet your results to @Chartbeat.
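The nighttime and assignment filtering Pager Huety does could be sketched like this (the field names and the hours are assumptions on my part, not PagerDuty’s exact schema):

```javascript
// Decide whether a triggered incident should flash the lights: only act on
// incidents assigned to me, and (optionally) only during nighttime hours.
function shouldAlert(incident, userId, hour, nightOnly) {
  const assignedToMe = incident.assignedTo === userId;
  const isNight = hour >= 22 || hour < 7; // e.g. 10pm-7am, a guessed window
  return assignedToMe && (!nightOnly || isNight);
}
```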

Since my last blog post, things have been moving quickly over here. Hack Week was invigorating (i.e., hectic) as usual, and while I didn’t get to demo my hack because I was out sick, I did make a ton of progress on building a reader that’s based on what I’m now calling the Shaun Appreciation Score –  a measure of a story’s Average Engaged Time, average scroll depth, and content height. 

Measuring appreciation

The Shaun Appreciation Score is my attempt to measure how much time users are spending within the actual content – the story itself, not just the page. I calculated my score by sampling our existing data from thousands of articles across our publishers’ sites*, as well as by writing a system to gather new data that we don’t currently have available to us. This includes scraping content from these pages, determining where the content starts and stops, and then figuring out how much time it takes to consume that content.

Once I collected this massive set of data, I then chatted with Josh, one of our awesome Chartbeat data scientists. He suggested that I start plotting some of this data to get a feel for what the distribution might look like. So I calculated the mean and standard deviation of all the key data points I wanted to measure: average scroll depth, Average Engaged Time, and content height (the physical length of a story on a page).
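The mean and standard deviation calculations themselves are straightforward – a quick sketch (I’ve used the population standard deviation here, which is an assumption on my part):

```javascript
// Mean and (population) standard deviation of a series of data points, as
// used to get a feel for the distribution of scroll depth, Engaged Time,
// and content height.
function meanAndStd(values) {
  const mean = values.reduce((s, v) => s + v, 0) / values.length;
  const variance =
    values.reduce((s, v) => s + (v - mean) * (v - mean), 0) / values.length;
  return { mean, std: Math.sqrt(variance) };
}
```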

The beauty of a bell curve

After these calculations, I plotted the data to see if there was any correlation between Engaged Time and scroll depth. As it turns out, there is a strong correlation(!). Content that has above-average Engaged Time generally has higher average scroll depth as well, but that’s not the whole picture. I wanted to see how much time was spent within the content – not in the comments section at the bottom, or in the slideshow at the top of the page. There wasn’t an easy way to determine this, so I decided that taking the Engaged Time relative to the content length would help weed out articles with fewer words, or users who spend a ton of time in the comments section.
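The exact formula behind the score isn’t given here, but “Engaged Time relative to content length” suggests a simple normalization – a plausible sketch, not the real thing:

```javascript
// Normalize Engaged Time by content height so short articles and
// comment-section dwell time don't inflate the score. The real formula
// isn't published; this is an illustrative stand-in.
function appreciationScore(engagedSeconds, contentHeightPx) {
  if (contentHeightPx <= 0) return 0;
  return engagedSeconds / contentHeightPx; // seconds spent per pixel of story
}
```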

Plotting the data in a histogram looked something like this:


This bell curve is good news: I got something that resembles a normal distribution, which tells me that my data quality is good and that there is, in fact, a difference between what I’m calling “good” content appreciation and “bad” content appreciation.

More to do next Hack Week

While I didn’t finish building my reader within this past Hack Week, I did get to a point where I’m feeling pretty good about my results so far. I built an API endpoint that spits out the top 5 and bottom 5 pages ranked by my appreciation score, and that’s enough to build a rough prototype UI on top of it. Hopefully next Hack Week I can revisit this application and finish my ambitious goal of building what’s effectively a quality-based reader.

What do you think my reader’s UI should look like?


*While we experiment with the data available from our API, our clients’ data is kept private and never shared publicly.