Posts Tagged ‘Designing Data’

Things to Know About the New Chartbeat Publishing Design

December 11th, 2013 by Tom

The goal of product interface design is to develop a product's personality (the new Chartbeat Publishing is friendly, trustworthy, fast, modern), and tell a story (publishers can build and retain a loyal audience from our data). Everything about a product’s design relates back to its personality and story, including the visual style, the interaction design, and the language. This isn't something we can get at the first go – it requires a lot of experimentation.

So at Chartbeat we have a process in place that allows us to rapidly try out ideas – not just at the design level, but at every stage of the project. We create a quick, rough series of wireframes, and do the same for visual design sketches. Early on we prototype these designs in the browser (we love AngularJS over here), and we even have a system for quickly prototyping data on the backend (a custom Lua-scriptable real-time backend). At every step of the way, we’re testing and tweaking to make sure our choices support the personality and story we’re trying to portray.

In the middle of all this endless iterating, we reached a point where we were ready for an initial stable release. While we will keep nurturing and iterating on this product, here are four major design improvements that are in the new Chartbeat Publishing today.

Improved signaling

A major goal in rebuilding Chartbeat Publishing was to further reduce the burden of interpretation on our users, i.e., make the product – not our already very busy clients – do more of the heavy lifting. We realized that goal through a couple of different approaches, especially by finding ways to answer common client questions like “what does this number mean?” and “how well is my site doing right now compared to other times?” We expanded the product’s efficacy by using our technology not just to report numbers to you, but to interpret numbers for you, too.

For instance, when you mouse over your Engaged Time section in the dashboard, a tooltip now tells you how strongly your site is performing compared to the past month – are you “on par” or “over-performing”? – along with your site’s monthly average Engaged Time and its maximum average Engaged Time. We go one step further by pointing out which articles are potentially responsible for an over-performing or under-performing Engaged Time, and at all times we call out which articles should be regarded as “good for your site’s health.” I talk a little more about signaling and invisible design on my own site. Next up: see how the product’s signaling starts even before you start processing your dashboard data – thanks to our strategic use of color.
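As a rough illustration of the kind of comparison that tooltip makes – this is our sketch, not Chartbeat’s actual logic, and the function name and 10% tolerance band are invented – the classification could look like:

```python
def engaged_time_status(current, monthly_avg, band=0.10):
    """Classify current Engaged Time against the monthly average.

    `band` is a hypothetical tolerance: within +/-10% of the
    monthly average counts as "on par".
    """
    if monthly_avg <= 0:
        return "no baseline"
    ratio = current / monthly_avg
    if ratio > 1 + band:
        return "over-performing"
    if ratio < 1 - band:
        return "under-performing"
    return "on par"
```

The point of a helper like this is that the interpretation (“over-performing”) ships in the product, so the user never has to do the comparison in their head.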

Color isn’t just color

Color is now a fundamental signaling element in the dashboard – we’re moving away from color as a legend – allowing you to interpret what’s happening in the dashboard (and thus on your site), even more quickly than before. If you see green, your brain automatically picks up that something positive is happening, whereas anything red implies that something is underperforming or past its prime. By using these simple cues, paired with a baseline blue palette, users can navigate the product even more efficiently without having to refer to a legend.
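A minimal sketch of that status-to-color mapping (the status names and hex values here are our assumptions, not Chartbeat’s actual palette):

```python
# Hypothetical mapping: green for something positive, red for
# underperforming, blue as the neutral baseline.
STATUS_COLORS = {
    "over-performing": "#2e8540",   # green
    "under-performing": "#cd2026",  # red
    "on par": "#205493",            # baseline blue
}

def signal_color(status):
    # Unknown statuses fall back to the neutral baseline color,
    # so nothing ever looks alarming by accident.
    return STATUS_COLORS.get(status, STATUS_COLORS["on par"])
```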

General affordance


Our first version of the new Chartbeat Publishing dashboard was powerful in many regards. But we noticed that some dashboard elements, particularly a few crucial features related to building your loyal audience, were hard to discover. This was definitely not the client’s fault – it’s up to Design and UX to figure out how to make these things easy for clients to find. In our case, this issue had more to do with the fact that we had learned even more about the industry after users started beta-testing our first version of the new product.

We decided to make all the different filters and sorts we offer really easy to find. The UI for sorting on Engaged Time is now as prominent as we think it is useful. Filtering by what your “new” visitors are reading is easy to find in the new dashboard, so you’re doing less guessing and getting right to it. This easier-to-navigate interface also allows us to expose more advanced features.

Things are more flexible than ever before


One of those features is multi-pivoting, which lets you combine different filters. In the original Chartbeat Publishing dashboard, you could click into a story and drill down into a particular story’s data. The new dashboard lets you pivot and manipulate the dashboard to pull out almost any specific data you want – whether it’s the number of mobile readers you have in Spokane, Washington, or which stories are attracting your most loyal visitors right now – thus increasing the number of actions you can take. Being able to do more within the dashboard allows you to surface the insights you need to inform your decisions and processes.
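Conceptually, multi-pivoting is just composing several filters over the same stream of visitor data. As a hedged sketch (the field names and sample records below are invented for illustration, not Chartbeat’s data model):

```python
def pivot(visitors, **criteria):
    """Keep only visitors matching every criterion (filters combined)."""
    return [v for v in visitors
            if all(v.get(field) == value for field, value in criteria.items())]

visitors = [
    {"device": "mobile", "city": "Spokane", "loyalty": "returning"},
    {"device": "desktop", "city": "Spokane", "loyalty": "new"},
    {"device": "mobile", "city": "Boston", "loyalty": "loyal"},
]

# e.g. "mobile readers in Spokane" = two filters applied at once
mobile_spokane = pivot(visitors, device="mobile", city="Spokane")
```

Each additional keyword argument narrows the result further, which is why combining pivots multiplies the number of questions a dashboard can answer.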

All in all

Our mission is to keep improving what we hope is an accessible dashboard that presents actionable insights. Right now we’re looking at how people are using the new Chartbeat Publishing, and we’ll take those learnings, along with whatever major needs arise in the industry, to inform the next batch of updates.

Designing Data – Part 5: Launch and Iterate

May 24th, 2012 by Matt

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s but sometimes in a “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Today’s the last post in the series... though iterating never ends, so maybe there will be more down the line. Stay tuned!

We make changes constantly. Seriously. Every day.

Sometimes they’re small and only our most dedicated users notice them; sometimes they entirely change the way people interact with our data. We don’t look at these changes as regressions, dead ends, or a waste of time; we look at this as the right way to design products. We vet everything intensely, but we can only do so much in the confines of our office with a few dozen users in pre-launch testing. The only way to know if we’ve succeeded is to put it all out there into the real world (even if it sometimes results in a few harsh tweets headed our way).

So, when a new design is launched, we can’t just pick up and move on to the next thing. We listen. We read every email coming into customer service, we consider every tweet, we grab everyone who wants to talk to us about what we’ve done, and we listen. We figure out where things rocked and where things sucked, and then this whole cycle starts again.

If you’ve been following this design blog series, you know that in our initial design we tried to answer the question our users were always asking – “what’s special happening on my site right now?” – with the new dashboard Overview. We stripped out a whole bunch of detail and put the focus on Notable Pages (an algorithm that detects which pages have unusual traffic) and Peer Stats (information from similar sites). With the combination of those two stats, our dashboard could always alert users when something surprising, unique, or interesting was happening.

But when it got out there into the wild... it turned out that people wanted to see more. They didn’t realize it at first, but it wasn’t just “is something special happening?” that they wanted answered; it was “is everything normal?” While the questions sound similar, there is a world of difference between the two. We had built a dashboard that always caught and called out the big stuff, but it had trouble distinguishing between a normal day and a really bad one.
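One common way to flag “unusual traffic” of the kind Notable Pages surfaces is to compare a page’s current audience against its own history in standard deviations (a z-score). This is our illustrative sketch, not Chartbeat’s actual algorithm; the threshold and function name are assumptions:

```python
from statistics import mean, stdev

def is_notable(history, current, threshold=3.0):
    """Flag a page whose current concurrents sit more than
    `threshold` standard deviations above its historical mean."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold

# A page that usually sees ~100 concurrents suddenly sees 500.
history = [95, 102, 98, 101, 99, 105, 100]
```

Note that a detector like this answers “is something special happening?” but says nothing when traffic is merely low – exactly the gap described above.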
We thought users would turn to the other views by clicking down the left-hand nav to understand these more nuanced things, but we were quickly told they wanted to put them on a second screen to always have a pulse on their site, even if it was just to tell them that everything was going smoothly. So we went back into research mode. We asked what it meant when something was running smoothly, we watched how people used a dashboard when they weren’t paying attention, and we found out what screen they put our dashboard on.

We learned a lot of little things (like that people liked to put our dashboards on big screens, but there wasn’t enough contrast in the colors to display information well at that scale). We got some bigger ones, too, that weren’t solved by changing colors. It turned out it wasn’t just the surging pages they wanted to see... they also wanted to know if anything unusual was happening with their consistently trafficked content. So we brought back Top Pages and showed them alongside the Notable ones as context. Now the dashboard doesn’t just show you when something special is happening – it also lets you know when you’re doing alright and everything’s just fine.

But that only solved things at the individual page level, when information about the people sending that traffic was every bit as important. So we changed the Traffic Sources visualization to show two new things: the top referring domains and search terms, and how your performance today compares with your average performance. Now, no matter if your day is normal, great, or terrible, you can tell right away. And that was the point all along.

So welcome to our new Overview page. We think it answers the three questions that were most important...
  1. How many people are on my site?
  2. What are they looking at?
  3. Where are they coming from?
  ...but maybe you don’t. That’s awesome. We want to know. This process won’t stop until we do...

Designing Data – Part 4: Design and Prototype

May 24th, 2012 by Matt

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s but sometimes in a “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we looked at the data, and today we're talking about designing against it.

Once we know that we’re solving real problems and using good data, we get to the part that everyone sees, touches, and (hopefully!) falls in love with: The design.

While we love delighting users and get pretty jazzed when people ooh or ahh over what they see on screen, we make sure to have a few people on our product design team who are solely responsible for making sure the design uses and highlights the data we’re displaying in the best possible way – so that it’s not just pretty, but usable.

Although this is the stage where pixels are pushed, lines of code are also written. Usually, we need to prototype a live design to make sure we can touch it, vet it, and sometimes get user feedback on it, regardless of how rough it is. Whether it’s quickly building a navigation system to make sure it functions intuitively or prototyping a visualization with real data pumping through it to see what it looks like – a functioning, built design lets you see things that you never would in the static world of Photoshop. For this very reason, every designer on our team is also a developer.

When pulling together the original Chartbeat dashboard Overview, we went through a bunch of prototypes that were pushed out to small groups of our users. Well, they didn’t love everything, that’s for sure. That sucked. We’d spent hours/days/weeks designing and prototyping... and it just didn’t work. But we didn’t try to make it fit, or fall on our sword and go with it anyway. We made that first version of the Overview better.

From the beginning, our goal for this view was to provide a high-level, at-a-glance view of the site so it was obvious if something interesting or unusual was happening. Some of our early concepts were knocked down by users mainly because we weren’t answering the questions they were asking at that instant. Honestly, many of these concepts were visually much more appealing than the design we launched with, but, as always, the goal isn’t only to make things pretty; the goal is finding the delicate balance between beauty, usefulness, and delight. To us, that’s the core of good design.
That’s what makes it possible for people who don’t have time for raw data to understand, use, act on, and enjoy our products.

So what were those pretty (but not so useful) designs that didn’t make it for Traffic Sources? The radar chart is a beautiful visualization and has the advantage of letting you quickly compare shapes, but it falls apart in the eyes of many users who are trying to get precise values for individual traffic sources. A radar chart is intended to show balance, when in reality most sites don’t have (and shouldn’t have) an equally balanced set of traffic. So we canned that guy and moved on to the dot chart.

The dot chart was definitely cool, but it didn’t work for us for a different reason. We liked the idea of being able to show a shape, or thumbprint, of your site’s traffic, and added a connecting line between the values. The line didn’t have a value – it was just meant to help the eye spot the dots easily. But users were interpreting it as a relationship between each traffic source, which it certainly wasn’t meant to be. We took a hit on that one for sure. The last thing we ever want is to launch a misleading visualization!

Then came the circle chart. This guy had a lot of promise. Circle charts can be beautiful and usable, but their main weakness really shows when you try to compare values on different radii – the eye gets a little tripped up. Not to mention that it’s tricky to grasp at a quick glance, which is the whole point of the Overview. So we inevitably trashed this guy, too.

How did we land on our “final” design? Check back tomorrow to read our last piece of Designing Data: Launch and Iterate.

Designing Data – Part 3: Look at the Data

May 23rd, 2012 by Alex Carusillo

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s but sometimes in a “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we identified problems through user research, and today we’ll look at the data to validate our thinking.

We’ve identified the problem. Next step is to start sketching or to open up Photoshop and start throwing pixels on the screen, right? Nope. Negative. Wrong.

We always look at the data before even thinking about design.

We’re a company that eats, sleeps, and dreams information and insight, and we design from what we have – not what we wish we had. It’s so easy to spend days pushing pixels in Photoshop making the perfect design for the perfect scenario, but it’s all meaningless if the result doesn’t carry over to reality. Great mockups are easy. Designs that work well with real data of all kinds are hard. Some days, really hard.

When we were designing the Overview on the Chartbeat dashboard, we took each question people were trying to answer and made sure the data we were tracking matched up against those questions to provide real, unique value. We questioned and vetted every decision about whether to put each data point on the Overview, on a secondary page, or to leave it out entirely. We looked at client after client, page after page after page, trying to find places where our data fell down.
  • People wanted to know where traffic was coming from. But did that mean dropping in a list of each referring site would be enough?
  • How did the dashboard look if your site had very low traffic, like 0-10 concurrents?
  • What if it had a ton, like 500,000+?
  • What if everyone was coming from search? Or social?
  • What if you got a flood of traffic all at once? Would the visualization work? Would it tell you that something special was happening?
To some people this can seem like overkill. But when you’re working with data, you can’t just make assumptions and expect the design to match every possible form of data across thousands of sites; if you do, the design work is a ridiculous waste of time. Which is why we’ve gotten into the habit of ending meetings the second someone says “Well, if the data says ____” by responding with “Let’s go look at it.” Then we all gather around my screen, or Isaac’s (our chief data scientist’s), or Matt’s (our UI designer’s). We tear apart every single concept that makes it to this second stage in the process. When a concept survives this grueling data-review round, the fun part begins: designing and prototyping – which we’ll dive into tomorrow.
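That habit of checking a design against extreme data can even be mechanized. As a hedged sketch (the formatter and scenarios below are our invention, not Chartbeat code), here is a tiny sweep over the traffic ranges mentioned above, checking that a number renderer survives both a near-dead site and a traffic flood:

```python
def format_concurrents(n):
    """Render a concurrents count the way a dashboard might:
    compact above 10k so huge numbers don't break the layout."""
    if n >= 10_000:
        return f"{n / 1000:.0f}k"
    return str(n)

# Sweep the edge cases: 0-10 concurrents up through 500,000+.
scenarios = [0, 10, 5_000, 500_000]
rendered = [format_concurrents(n) for n in scenarios]
```

Running every visualization against a scenario list like this is a cheap way to catch the “what if everyone comes from search?” class of surprise before users do.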

Designing Data – Part 2: User Research

May 22nd, 2012 by Alex Carusillo

We’re asked about our design all the time – usually in an incredibly kind way full of high fives and “how’d you do that?!”s but sometimes in a “ugh, did you even think about talking to a customer??” kind of way. So, we decided to give you a week-long deep dive into our design process in this “Designing Data” series. Yesterday we covered the right mindset; today’s all about user research. Check in tomorrow for the look-at-the-data portion of our design process.

We don’t make a single move without making sure it will solve a real problem for real users.

Whether they’re sitting here at the Chartbase telling us what makes their jobs hard, on a screen-share showing us how they use the dashboard, or coming through as behavioral data from our APIs – everything we do comes from real people and their real usage. Not best-case, hypothetical personas who might one day open up our dashboard. That’s why we’re so serious when we talk about learning from our users. They are as much partners as they are anything else, and we just couldn’t deliver without them.

The biggest thing we’ve learned is to always be listening (ABL, if you will). And really listening. Not just hearing what they say and delivering that exact feature, but truly understanding what will make them better than they ever thought possible. We can’t build anything until we really get that.

But understanding and solving are very different things. Don’t try to solve everything – or even anything – right off the bat. In our early user research, we’re just trying to figure out the common problems people bump into. We don’t ask “what don’t you like about our dashboard?”; we ask “what makes you come to Chartbeat? Why do you want to see data at all?” The former results in feature requests, while the latter uncovers the things that make life harder – and we’re way more interested in those. Until we know why people turn to us, we can’t solve their problems; we can’t make something that will make them better.

For example, in talking to people about our recent Chartbeat dashboard redesign, we realized that, at the highest level, people had three questions they turned to Chartbeat for:
  • How many people are on my site?
  • What are they looking at?
  • Where are they coming from?
In short, they needed the pulse of their site. So, naturally, we decided to build something that delivered just that. Where next, you ask? Well, looking at the data of course! We’ll get there in tomorrow’s edition.