Archive for July, 2014

Attention Web World Cup Wrap-Up: Sample Size and Variability

July 17th, 2014 by Dan

After a month of exciting matches, the Attention Web World Cup has come to a close. In a time-honored tradition (pun intended), Ghana defeated the U.S. with a score of 30 to 25. Congratulations to everyone from Ghana who was consuming content on the web during World Cup matches; you all contributed to this amazing achievement! And to my fellow Americans: next time around, let’s spend more time reading, okay?

To wrap up the festivities, one of our designers made these awesome animations of the time course of each tournament game based on the data I pulled. These plots show the median Engaged Time for users from each country as each match progresses.

When you view these animations, you’ll likely notice that some of these countries have incredibly stable Engaged Times while others have Engaged Times that are incredibly erratic. The U.S., for instance, shows very little variance in median Engaged Time, while Cote d’Ivoire and Cameroon have median Engaged Times that jump all over the place.

This behavior is a consequence of sample size. At any particular time during a match, users from many of the African countries and other smaller countries made up a much smaller sample than, say, users from the U.S. or Australia. In statistics and data analysis, we’re always concerned about sample size for exactly the reason illustrated in many of these graphs: the variability in the sampled statistic can mask the “true” value. We can try to capture this with a distribution, but if the width of that distribution is large, then we can’t be very confident in the value of whatever measure of central tendency we choose (mean, median, mode, etc.). And the variance of a sample estimate scales roughly with the inverse of the sample size, so only as the number of points we’ve sampled gets large do we have a hope that the confidence in our estimate will rise.
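
If you want to see that effect for yourself, here’s a quick simulation sketch in Python. The numbers are made up (it just draws Engaged Times from a single fixed distribution), but it shows how much the sample median bounces around at small sample sizes and how it settles down as the sample grows:

```python
# Quick simulation (made-up numbers, not our actual data): draw Engaged Times
# from the same distribution at several sample sizes and measure how much the
# sample median varies across repeated samples.
import random
import statistics

random.seed(42)

def median_spread(sample_size, trials=1000, mean_engaged_time=40.0):
    """Std dev of the sample median across repeated samples of a given size."""
    medians = []
    for _ in range(trials):
        sample = [random.expovariate(1 / mean_engaged_time) for _ in range(sample_size)]
        medians.append(statistics.median(sample))
    return statistics.pstdev(medians)

for n in (10, 100, 1000):
    print(f"sample size {n:>4}: spread of the median ~ {median_spread(n):.2f} seconds")
```

The spread shrinks steadily as the sample size grows, which is exactly why the small-sample countries look so erratic in the animations.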

I’m actually quite surprised the U.S. made it so far in my scoring scheme here. I knew going into the #AWWC that some countries were sorely underrepresented in our sample. I expected a fair chance that these countries would show a falsely high median Engaged Time: if enough of the small sample of users just so happened to be long-engagement users, this would skew their results. In the Group Round this was okay, because I performed a statistical test that tried to account for this variability. There, I asked a very common statistical question: Assuming these two teams actually have the same median Engaged Time, what is the probability that I’d observe a difference in medians at least as extreme as the one I’ve observed? If that probability was low enough, then I declared Team A and Team B to have different medians, and took the higher one as the winner. But in the bracket round, we needed clear winners (no draws were allowed), so we left it up to sampling variance. For the small-sample-size teams, this was a double-edged sword. They only needed a few users spending an inordinate time engaged with content to edge above the higher-sample-size teams. But, conversely, if the few users they did have spent very short times engaged, that would skew them toward losing. We can see, though, that this seemed to work out well for these countries—they made a great showing all the way through the AWWC.
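
For the curious, a permutation test on the difference of medians is one standard way to answer that kind of “how likely is a difference at least this extreme?” question. Here’s a sketch with simulated data; it illustrates the idea, not necessarily the exact test I ran in the Group Round:

```python
# Sketch of a permutation test on the difference of medians. The Engaged Times
# below are simulated; this is only an illustration of the general approach.
import random
import statistics

def permutation_test_medians(sample_a, sample_b, n_permutations=2000, seed=0):
    """Estimate the probability of seeing a difference in medians at least as
    extreme as the observed one, assuming both samples share the same median."""
    rng = random.Random(seed)
    observed = abs(statistics.median(sample_a) - statistics.median(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.median(pooled[:n_a]) - statistics.median(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Simulated Engaged Times (seconds): a large-sample team vs. a small-sample team
rng = random.Random(1)
team_a = [rng.expovariate(1 / 25.0) for _ in range(2000)]
team_b = [rng.expovariate(1 / 30.0) for _ in range(80)]

p_value = permutation_test_medians(team_a, team_b)
print(f"Probability of a difference this extreme under the null: {p_value:.3f}")
```

If that probability is low, it’s hard to explain the observed gap by sampling variance alone, which is how a winner could be called despite the lopsided sample sizes.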

Thinking about variability is my job, so I might be biased here (yes, a statistics pun), but I hope you enjoyed this fun exploration of our data. I hope it got you thinking about international variability in engagement, and variability of metrics in general. Tweet me @dpvalente or email me at dan@chartbeat if you want to continue the discussion.

Revisiting Return Rates

July 14th, 2014 by Josh

Starting today, we’ve updated our definition of return rate in both our Weekly Perspectives and in the Chartbeat Publishing dashboard. Consequently, you’re likely to see a shift in the numbers in your dashboard — so we wanted to write a quick note explaining the change, why we made it, and what you can expect to see.

Defining return rate

Return rate, if you’re not familiar with it, is a metric designed to capture the quality of traffic that typically comes from a referrer. It measures the fraction of visitors coming from a given referrer who return to a site later — if 1,000 people come to a site from, say, Facebook, should we expect 10 of them to come back or 500? Depending on the answer, we might interpret and respond to a spike from Facebook quite differently.

While the intuition behind return rate is straightforward, the actual formula used to calculate it is a bit more up for grabs. Up until now, we’ve calculated return rates using the following formula:

return rate = (visits from a referrer that are followed by a later return to the site) / (total visits from that referrer)

That formula roughly captures a notion of “how likely is it, for a given visit from Facebook, that that visit will be ‘converted’ into a return?”

As we’ve talked through that definition over the past year, we’ve come to realize that it’s more natural to phrase returns in terms of people, not visits — to ask “how likely is it, for a given visitor from Facebook, that that person will be ‘converted’ into a return?” Hence, we’re now using the following calculation:

return rate = (visitors from a referrer who later return to the site) / (total visitors from that referrer)

So, rather than speaking in units of “visits,” this definition speaks in units of “visitors” — a seemingly small (but significant) change. In addition, we’re now only counting a return if it’s at least an hour after the initial entrance, which corrects for a pattern we sometimes see where visitors enter a site and then re-enter a few minutes later.
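
To make the new definition concrete, here’s a toy sketch of the visitor-based calculation with the one-hour rule. This is illustrative only — the event format is made up and it isn’t our production code:

```python
# Toy sketch of the visitor-based return rate with the one-hour rule.
from datetime import datetime, timedelta

RETURN_WINDOW = timedelta(hours=1)

def return_rate(entrances, referrer):
    """entrances: iterable of (visitor_id, referrer, timestamp) site entrances,
    sorted by timestamp. A visitor counts as 'returned' if they enter the site
    again at least an hour after their first entrance from `referrer`."""
    first_seen = {}   # visitor_id -> timestamp of first entrance from referrer
    returned = set()
    for visitor, ref, ts in entrances:
        if visitor in first_seen:
            if ts - first_seen[visitor] >= RETURN_WINDOW:
                returned.add(visitor)
        elif ref == referrer:
            first_seen[visitor] = ts
    return len(returned) / len(first_seen) if first_seen else 0.0

entrances = [
    ("alice", "facebook.com", datetime(2014, 7, 14, 9, 0)),
    ("alice", "direct",       datetime(2014, 7, 14, 9, 5)),   # < 1 hour: not a return
    ("bob",   "facebook.com", datetime(2014, 7, 14, 9, 30)),
    ("alice", "twitter.com",  datetime(2014, 7, 14, 12, 0)),  # >= 1 hour: a return
]
print(return_rate(entrances, "facebook.com"))  # 1 returning visitor / 2 visitors = 0.5
```

The visit-based definition would instead count every entrance from the referrer in the denominator, which is why the new numbers tend to come out lower.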

What's changing?

It’s likely that the return rate numbers in your dashboard and Weekly Perspectives will drop under this new definition. To help you sort out whether your numbers are trending up or down, we’ve recalculated reports using the new methodology going back to the beginning of June. We hope the transition to the new definition is painless, but if you have any questions, feel free to comment or get in touch with me at josh@chartbeat.com.

Building an Entire Product in 6 Weeks (How We Built the Chartbeat Paid Content Tool)

July 9th, 2014 by Harry

A little while ago we released our Paid Content product after two consecutive six-week sprints. The first six weeks were spent creating the MVP, and the second six weeks were spent polishing it up. This is the breathless tale of those first six weeks.

Part 1: Research

This whole thing began because we saw that our clients were struggling. Paid content, sponsored content, native content – whatever you call it – remains a mysterious beast for many folks, and there were few options for measuring paid content performance, let alone figuring out what you could do to make it better.

For the first two weeks of our sprint we researched and brainstormed. We huddled in offices and littered the whiteboards with ideas, questions, diagrams, and whatever else we could to make sense of things. We’d pore over data to see what insights we could glean and what information would be helpful to know for any native content campaign. It was a lot of debating and arguing, breaking for lunch, and then regrouping for more debating and arguing.

Amidst these debates we talked to existing clients. A lot. We wanted to know how they created paid content campaigns and what pain points they had experienced. We’d invite them into the office and talk to them on the phone. We’d visit them in their offices and pummel them with questions, searching for what we could do to improve how they analyzed paid content performance.

Part 2: Design


As we were assessing the type of data our clients needed, we also began to design our version of better.

We’d create one mock-up, show it around, gather feedback, and iterate. We’d see what worked in a design and what didn’t. We’d toss out the bad, toss out some of the good, and try again. We moved swiftly, for time was against us.

One thing that proved to be a great success was clickable mocks. Typically a mock is static. With a static mock you can cycle through a list of images to give a sense of what the product will contain, but a clickable mock allows you to simulate how the product will feel when it’s complete.

These clickable mocks proved insanely helpful when discussing the product with clients. They enabled us to show our ideas and direction rather than just tell.

Part 3: Development

With under three weeks left to go in the cycle we knew we had to hustle.

We wanted to see what the data for a real paid content campaign looked like, so we worked on getting things up on the screen as fast as possible. Despite all our planning and designing, we had yet to see real data for a paid content campaign, and we were concerned that we had planned and designed around data points that might not exist. It’s fine to plan to include data about the amount of Twitter activity that drives traffic to a piece of native content; however, if that value is always 0, it’s not helpful.

To our relief our planning paid off. The data worked and made sense. From there on out it was a sprint to bring all the beautiful designs to life.

Part 4: Launch


With the launch of our MVP looming we knew we’d have to start making some hard decisions. Everything we wanted to include for the first version would not fit, so out came the knife as we looked to see what we could cut away.

Delicately, we began to inspect what was left, weighing which show-stopping features and essential functionality had to make it for launch versus what would be fine to include afterwards. We’d see which features would be more ‘expensive’ to complete. At this stage the only currency we traded in was time, with everything balanced between time to complete and its impact on the product.

Conclusion

Some hard decisions were made, but ultimately we managed to ship on time and practically feature-complete.

We were able to bring to market a product that six weeks prior did not exist as anything but an idea. Everyone at Chartbeat came together to make this a reality, each pulling their own weight and helping one another. Through and through it was an incredible team effort.

Within Chartbeat we managed to create an MVP in record time. We were able to assess client needs and industry gaps to shape our product and get it out the door and into clients’ hands. We’re not done, but we’re off to a strong start.