Archive for May, 2012

Designing Data – Part 5: Launch and Iterate

May 24th, 2012 by Matt

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s, but sometimes in an “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Today's the last post in the series...though iterating never ends, so maybe there will be more down the line. Stay tuned!

We make changes constantly. Seriously. Every day.

Sometimes they’re small and only our most dedicated users notice them; sometimes they entirely change the way people interact with our data. We don’t look at these changes as regressions, dead ends, or a waste of time; we look at them as the right way to design products. We vet everything intensely, but we can only do so much in the confines of our office with a few dozen users in pre-launch testing. The only way to know if we’ve succeeded is to put it all out there into the real world (even if that sometimes sends a few harsh tweets our way).

So, when a new design launches, we can't just pick up and move on to the next thing. We listen. We read every email coming into customer service, we consider every tweet, we grab everyone who wants to talk to us about what we’ve done, and we listen. We figure out where things rocked and where things sucked, and then the whole cycle starts again.

If you've been following this design blog series, you know that in our initial design we tried to answer the question our users were always asking – “what's special happening on my site right now?” – with the new dashboard Overview. We stripped out a whole bunch of detail and put the focus on Notable Pages (an algorithm that detects which pages have unusual traffic) and Peer Stats (information from similar sites). With those two stats combined, our dashboard could always alert users when something surprising, unique, or interesting was happening.

But once it got out there into the wild...it turned out that people wanted to see more. They didn’t realize it at first, but it wasn’t just “is something special happening?” that they wanted answered; it was “is everything normal?” While the questions sound similar, there is a world of difference between the two. We had built a dashboard that always caught and called out the big stuff, but it had trouble distinguishing between a normal day and a really bad one.
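The post never spells out how Notable Pages decides traffic is “unusual,” but the general idea – flag pages whose current traffic sits far from their own historical baseline – can be sketched in a few lines. This is purely an illustrative guess, not Chartbeat's actual algorithm; the function name and the z-score threshold are our assumptions:

```python
from statistics import mean, stdev

def notable_pages(current, history, z_threshold=3.0):
    """Flag pages whose current concurrent traffic is unusually far from
    their own historical baseline (hypothetical sketch only)."""
    notable = []
    for page, now in current.items():
        past = history.get(page, [])
        if len(past) < 2:
            continue  # not enough history to judge "unusual"
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # perfectly flat history; any z-score is undefined
        z = (now - mu) / sigma
        if abs(z) >= z_threshold:
            notable.append((page, z))
    # Most anomalous pages first
    return sorted(notable, key=lambda item: -abs(item[1]))
```

A page holding steady near its average stays quiet; a page suddenly doing five times its usual traffic surfaces immediately. As the post goes on to explain, the catch is that a detector like this says nothing when the day is quietly terrible – which is exactly the gap users pointed out.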
We thought users would turn to the other views, clicking down the left-hand nav, to understand these more nuanced things. Instead, we were quickly told they wanted to put these new things on a second screen to always have a pulse of their site, even if it was just to tell them that everything was going smoothly.

So we went back into research mode. We asked what it meant when something was running smoothly, we watched how people used a dashboard when they weren’t paying attention, and we found out what screen they put our dashboard on. We learned a lot of little things (like that people liked to put our dashboards on big screens, but there wasn’t enough contrast in the colors to display information well at that scale). We got some bigger ones, too, that weren’t solved by changing colors. It turned out it wasn’t just the surging pages they wanted to see...they also wanted to know if anything unusual was happening with their consistently trafficked content. So we brought back Top Pages and showed them alongside the Notable ones as context. Now the dashboard doesn’t just show you when something special is happening – it also lets you know when you’re doing alright and everything's just fine.

But that only solved things at the individual page level, when information about the people sending that traffic was every bit as important. So we changed the visualization of Traffic Sources to show two new things: the top referring domains and search terms, and how your performance today compares with your average performance. Now, whether your day is normal, great, or terrible, you can tell right away. And that was the point all along.

So welcome to our new Overview page. We think it answers the three questions that were most important...
  1. How many people are on my site?
  2. What are they looking at?
  3. Where are they coming from?
  ...but maybe you don’t. That’s awesome. We want to know. This process won’t stop until we do...
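The “how does today compare with my average?” idea described above is simple to sketch: per traffic source, compare today's count against a trailing baseline and flag anything far outside a tolerance band. A hypothetical illustration – the 25% band, the function name, and the three status labels are our assumptions, not how the dashboard actually computes it:

```python
def compare_to_average(today, baseline, band=0.25):
    """Label each traffic source 'above', 'below', or 'normal' depending on
    how today's count compares with its trailing average (sketch only)."""
    status = {}
    for source, avg in baseline.items():
        now = today.get(source, 0)
        if avg == 0:
            # No baseline to compare against: any traffic counts as above
            status[source] = "above" if now > 0 else "normal"
        elif now > avg * (1 + band):
            status[source] = "above"
        elif now < avg * (1 - band):
            status[source] = "below"
        else:
            status[source] = "normal"
    return status
```

The point of a scheme like this is exactly what the post describes: a normal day reads as all-"normal" at a glance, while a great or terrible day jumps out without the user having to remember what their averages are.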

Designing Data – Part 4: Design and Prototype

May 24th, 2012 by Matt

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s, but sometimes in an “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we looked at the data, and today we're talking about designing against it.

Once we know that we’re solving real problems and using good data, we get to the part that everyone sees, touches, and (hopefully!) falls in love with: The design.

While we love delighting users and get pretty jazzed when people ooh or ahh over what they see on screen, we make sure to have a few people on our product design team who are solely responsible for making sure the design uses and highlights the data we're displaying in the best possible way – so that it's not just pretty, but usable.

Although this is the stage where pixels get pushed, lines of code get written, too. Usually, we need to prototype a live design to make sure we can touch it, vet it, and sometimes get user feedback on it, regardless of how rough it is. Whether it’s quickly building a navigation system to make sure it functions intuitively or prototyping a visualization with real data pumping through it to see what it looks like – a functioning, built design lets you see things you never would in the static world of Photoshop. For this very reason, every designer on our team is also a developer.

When pulling together the original Chartbeat dashboard Overview, we went through a bunch of prototypes that were pushed out to small groups of our users. Well, they didn't love everything, that's for sure. That sucked. We'd spent hours/days/weeks designing and prototyping...and it just didn’t work. But we don't try to force a fit or fall on our sword and go with it anyway. We made that first version of the Overview better.

From the beginning, our goal for this view was to provide a high-level, at-a-glance view of the site, so it was obvious if something interesting or unusual was happening. Some of our early concepts were knocked down by users mainly because we weren't answering the questions they were asking, the needs they had at that instant. Honestly, many of these concepts were visually much more appealing than the design we launched with but, as always, the goal isn’t only to make things pretty; the goal is finding the delicate balance between beauty, usefulness, and delight. To us, that's the core of good design.
That's what makes people who don’t have time for raw data able to understand, use, act on, and enjoy our products. So what were those pretty (but not so useful) designs that didn’t make it for Traffic Sources?

The radar chart is a beautiful visualization and has the advantage of letting you quickly compare shapes, but it falls apart in the eyes of many users who are trying to get precise values for individual traffic sources. A radar chart is intended to show balance, when in reality most sites don’t have (and shouldn’t have) an equally balanced set of traffic. So we canned that guy and moved on to the dot chart.

The dot chart was definitely cool, but it didn't work for us for a different reason. We liked the idea of showing a shape, or thumbprint, of your site’s traffic, and added a connecting line between the values. The line didn't have a value – it was just meant to help the eye spot the dots easily. But users were interpreting it as a relationship between each traffic source, which it certainly wasn't meant to be. We took a hit on that one for sure. The last thing we ever, ever want is to launch a misleading visualization!

Then came the circle chart. This guy had a lot of promise. Circle charts can be beautiful and usable, but their main weakness really shows when you try to compare values on different radii. The eye gets a little tripped up. Not to mention that it's tricky to grasp at a quick glance – the whole point of the Overview. So we inevitably trashed this guy, too.

How did we land on our "final" design? Check back tomorrow to read our last piece of Designing Data: Launch and Iterate.

Designing Data – Part 3: Look at the Data

May 23rd, 2012 by Alex Carusillo

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s, but sometimes in an “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we identified problems through user research, and today we'll look at the data to validate our thinking.

We’ve identified the problem. Next step is to start sketching or to open up Photoshop and start throwing pixels on the screen, right? Nope. Negative. Wrong.

We always look at the data before even thinking about design.

We’re a company that eats, sleeps, and dreams information and insight, and we design from what we have – not what we wish we had. It’s so easy to spend days pushing pixels in Photoshop making the perfect design for the perfect scenario, but it’s all meaningless if the result doesn’t carry over to reality. Great mockups are easy. Designs that work well with real data of all kinds are hard. Some days, really hard.

When we were designing the Overview on the Chartbeat dashboard, we took each question people were trying to answer and made sure the data we were tracking matched up against those questions to provide real, unique value. We questioned and vetted every decision about whether to put each data point on the Overview, on a secondary page, or to leave it out entirely. We looked at client after client, page after page after page, trying to find places where our data fell down.
  • People wanted to know where traffic was coming from. But did that mean dropping in a list of each referring site would be enough?
  • How did the dashboard look if your site had very low traffic, like 0-10 concurrents?
  • What if it had a ton, like 500,000+?
  • What if everyone was coming from search? Or social?
  • What if you got a flood of traffic all at once? Would the visualization work? Would it tell you that something special was happening?
To some people it can seem like overkill. But when you're working with data, you can't make assumptions and expect the design to match every possible form of data across thousands of sites; going that route is a ridiculous waste of time. Which is why we've gotten into the habit of ending meetings the second someone says “Well, if the data says ____” by responding with “Let’s go look at it.” Then we all gather around my screen, or Isaac's (our chief data scientist's), or Matt's (our UI designer's). We tear apart every single concept that makes it to this second stage in the process. When concepts survive this grueling data-review round, the fun part begins: Designing and prototyping – which we’ll dive into tomorrow.
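That kind of edge-case vetting lends itself to code, too: whatever scaling logic a visualization uses has to behave from a handful of concurrents up to hundreds of thousands. Here is a toy sketch of the sort of check the bullets above describe – picking a readable y-axis upper bound and probing it at the extremes. The rounding scheme and function name are our invention, not anything from the actual dashboard:

```python
def axis_upper_bound(max_value):
    """Pick a 'nice' y-axis upper bound (1, 2, or 5 times a power of ten)
    that stays readable whether the site has 7 concurrents or 500,000."""
    if max_value <= 0:
        return 10  # an empty site still gets a sensible, non-degenerate axis
    magnitude = 10 ** (len(str(int(max_value))) - 1)
    for mult in (1, 2, 5, 10):
        if max_value <= mult * magnitude:
            return mult * magnitude
    return 10 * magnitude

# Vet the same logic against the extremes mentioned above.
for concurrents in (0, 7, 499, 500_000, 1_234_567):
    bound = axis_upper_bound(concurrents)
    assert bound >= concurrents  # the data must always fit on the axis
```

The specific function matters less than the habit: run the design's logic against the weird inputs (zero traffic, a flood, all-search, all-social) before anyone argues about what the data “probably” looks like.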

We’re On TV!

May 22nd, 2012 by Lauryn

We've been working closely with our good friends over at Patch essentially since Chartbeat was born. They're always testing out cool stuff and truly understand how to put real-time data to work. So when they come to us with new ideas of how to use their Chartbeat data, we pretty much always jump at the opportunity.

Our latest love child? Putting online data on TV. That's right – three-screen Chartbeat, baby! Patch and PIX11 News took their Chartbeat Big Board and used it to share what people are reading and talking about in New York, New Jersey, and Connecticut right now. Using online content data to inform offline/on-air topics. Awesome, right?

[wpvideo 2tKqyuCm]

What ways are you guys using the Big Board? Up in your office? On your own personal second/third monitor? Share the Big Board love!

Designing Data – Part 2: User Research

May 22nd, 2012 by Alex Carusillo

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s, but sometimes in an “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we covered the right mindset; today's all about user research. Check in tomorrow for the look-at-the-data portion of our design process.

We don’t make a single move without making sure it will solve a real problem for real users.

Whether they’re sitting here at the Chartbase telling us what makes their jobs hard, on a screen-share showing us how they use the dashboard, or coming through as behavioral data from our APIs – everything we do comes from real people and their real usage. Not best-case, hypothetical personas who might one day open up our dashboard. That’s why we’re so serious when we talk about learning from our users. They are as much partners as they are anything else, and we just couldn’t deliver without them.

The biggest thing we’ve learned is to always be listening (ABL, if you will). And really listening. Not just hearing what they say and delivering that exact feature, but truly understanding what will make them better than they ever thought possible. We can’t build anything until we really get that.

But understanding and solving are very different things. Don’t try to solve everything – or even anything – right off the bat. In our early user research, we’re just trying to figure out the common problems people bump into. We don’t ask “what don’t you like about our dashboard?”; we ask “what makes you come to Chartbeat? Why do you want to see data at all?” The former results in feature requests while the latter uncovers the things that make life harder – and we’re way more interested in those. Until we know why people turn to us, we can’t solve their problems; we can’t make something that will make them better.

For example, in talking to people about our recent Chartbeat dashboard redesign, we realized that, at the highest level, people had three questions they turned to Chartbeat for:
  • How many people are on my site?
  • What are they looking at?
  • Where are they coming from?
In short, they needed the pulse of their site. So, naturally, we decided to build something that delivered just that. Where next, you ask? Well, looking at the data of course! We’ll get there in tomorrow’s edition.