Archive for July, 2014

The Homepage, Social, and the Rise of Mobile

July 28th, 2014 by Josh

In the much-circulated New York Times Innovation Report, perhaps the most discussed graph was this one, showing a roughly 40% decline in homepage audience over the past three years.

[Figure: graph from the New York Times Innovation Report showing the decline in homepage audience]

That graph prompted innumerable articles announcing the “death of the homepage,” in The Atlantic, Poynter, and on numerous blogs. Most hinged on the relationship between the rise of social traffic and the decline in homepage traffic. What most of these articles don’t mention, though, is that the rise in social traffic was contemporaneous with a rise in mobile traffic, and that mobile is as central to the story as social is. Here, I’d like to explore the three-way interaction between mobile traffic, social traffic, and homepage visitation.

Social traffic and mobile devices

The importance of social sharing on mobile devices is much discussed. (Take, for example, the recent ShareThis report, which found that 63% of Twitter activity and 44% of Facebook activity happens on mobile.) People aren’t just using social media on mobile to share articles, of course; they’re also clicking through to those articles. Below, we break down the share of traffic coming from Facebook and Twitter by device across a random sample of our sites. (Note: We specifically chose sites without separate mobile sites and without mobile apps, to ensure that we’re making fair comparisons across devices.)

[Figure: Facebook and Twitter share of traffic referrals, by device]

Facebook’s share of overall mobile referrals is nearly 2.7x larger than its share on desktop. Twitter’s share is 2.5x larger on mobile than on desktop. And, if anything, those numbers likely undercount the significance of social referrals, since many apps don’t forward referrer information and get thrown into the bucket of “dark social.” In some sense, then, it’s fair to say that—for most sites—mobile traffic more-or-less is social traffic.
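
As a rough illustration of how a breakdown like this can be computed, here’s a minimal sketch that tallies each social network’s share of pageviews by device from a log of pageview records. The field names and sample records are hypothetical, not Chartbeat’s actual schema.

    # Minimal sketch: each social referrer's share of pageviews, split by device.
    # The record fields ("device", "referrer_domain") and the data are hypothetical.
    from collections import Counter, defaultdict

    pageviews = [
        {"device": "mobile",  "referrer_domain": "facebook.com"},
        {"device": "mobile",  "referrer_domain": "twitter.com"},
        {"device": "mobile",  "referrer_domain": "google.com"},
        {"device": "desktop", "referrer_domain": "facebook.com"},
        {"device": "desktop", "referrer_domain": "google.com"},
        {"device": "desktop", "referrer_domain": ""},  # direct or "dark social"
    ]

    totals = Counter(pv["device"] for pv in pageviews)
    by_device = defaultdict(Counter)
    for pv in pageviews:
        by_device[pv["device"]][pv["referrer_domain"]] += 1

    for device, total in totals.items():
        for referrer in ("facebook.com", "twitter.com"):
            share = by_device[device][referrer] / total
            print(f"{device:8s} {referrer:14s} {share:.1%}")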

Mobile and homepage traffic

Setting aside where visitors come from, mobile visitors are substantially less likely to interact with a site’s homepage. Below we plot, for the same collection of sites as above, the fraction of visitors who visited any landing page (e.g., the homepage or a section front) over the course of a month.

[Figure: fraction of visitors who visit any landing page over a month, by device]

What we see is dramatic: Desktop visitors are over 4x more likely to visit landing pages than those on phones.

Is that because mobile visitors come from social sources, and social visitors are less likely to visit landing pages—a fact that’s often cited when discussing the state of homepage traffic? Or is it not an issue of referrer at all—are mobile visitors intrinsically less likely to visit landing pages? To move toward an answer, we can control for referrer and ask the same question. Below, we plot the fraction of visitors who come to the site from Facebook and then, during the same month (but not necessarily on the same visit), visit a landing page.

[Figure: fraction of Facebook-referred visitors who visit a landing page, by device]

Comparing this graph to the previous one, three things are clear:

  1. As discussed above, mobile visitors are significantly less likely to ever visit landing pages than desktop and tablet visitors.
  2. Similarly, visitors who come from Facebook are significantly less likely to ever visit landing pages than those who come from other sources. On average, only 6% of visitors who come from Facebook ever visit a landing page, compared to nearly 14% of overall visitors.
  3. These two phenomena are to some degree independent—desktop-based Facebook visitors are half as likely to visit landing pages as other desktop-based visitors, while mobile Facebook visitors are one-third as likely to visit landing pages as other mobile visitors.
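
To make the comparison concrete, here’s a minimal sketch of how these conditional rates might be computed from per-visitor pageview records: for each device class, the share of visitors who ever hit a landing page, overall and among those who arrived from Facebook at least once during the month. The field names and sample events are hypothetical.

    # Hypothetical sketch: landing-page visit rates by device, overall vs. Facebook-referred.
    from collections import defaultdict

    # (visitor_id, device, referrer_domain, is_landing_page) per pageview
    events = [
        ("v1", "desktop", "facebook.com", False),
        ("v1", "desktop", "",             True),
        ("v2", "mobile",  "facebook.com", False),
        ("v3", "mobile",  "twitter.com",  False),
        ("v3", "mobile",  "",             True),
    ]

    visitors = defaultdict(lambda: {"device": None, "from_fb": False, "hit_landing": False})
    for vid, device, referrer, is_landing in events:
        v = visitors[vid]
        v["device"] = device
        v["from_fb"] = v["from_fb"] or referrer == "facebook.com"
        v["hit_landing"] = v["hit_landing"] or is_landing

    for device in ("desktop", "mobile"):
        group = [v for v in visitors.values() if v["device"] == device]
        fb = [v for v in group if v["from_fb"]]
        overall = sum(v["hit_landing"] for v in group) / len(group)
        fb_rate = sum(v["hit_landing"] for v in fb) / len(fb) if fb else float("nan")
        print(f"{device}: {overall:.0%} of all visitors, {fb_rate:.0%} of Facebook-referred visitors")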

It’s also worth a quick note that, in all of these respects, tablet traffic is much closer to desktop traffic than it is to mobile traffic.

Overall, this seems to be cause for substantial concern to publishers—increases in social and mobile traffic are the two most significant traffic trends of the past few years, and both are strongly associated with drops in homepage traffic. Since, as we’ve seen before, homepage visitors are typically a site’s most loyal audience, potential drops in homepage visitors should be concerning. In the short term, it’s safe to assume that a successful mobile strategy will hinge upon a steady stream of social links—that visitors won’t return unless we reach out to them directly. In the longer term, there’s a lot of work for all of us in determining how best to build an audience in a post-desktop (and potentially post-homepage) world.

What Is Viewability?

July 23rd, 2014 by Alexandra

This is the first post in a series about online advertising measurement and methodologies. Feel free to email me or post in the comments section about topics you’d like to see covered in this series. Curious about Chartbeat display advertising tools? Learn more here.

What is viewability, you ask?

First things first: A viewable impression is an online advertising metric that indicates whether a display ad is actually viewable when it’s served. More specifically, the IAB and MRC define a viewable impression as one that’s at least 50% visible for at least one second.

Simply put, viewability is a metric that tracks whether at least half of a display ad has the chance to be seen in the viewable portion of a browser window for at least one continuous second.

Note: Technically speaking, the guidelines measure time in 100-millisecond intervals, so a continuous second equates to 10 consecutive 100-millisecond observations. To add even more confusion, if the ad is 242,500 pixels or larger, only 30% needs to be in view. You can check out the full set of guidelines on the Media Rating Council's website.
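
As a concrete illustration of that rule, here’s a minimal sketch (not an official IAB/MRC reference implementation) that decides whether an impression is viewable from a series of 100-millisecond visibility observations, using the 30% threshold for ads of 242,500 pixels or larger.

    # Illustrative check of the viewability rule described above: at least 50% of the ad
    # (30% for ads of 242,500+ pixels) in view for one continuous second, i.e. 10
    # consecutive 100 ms observations. A sketch, not an official implementation.
    def is_viewable(visible_fractions, ad_area_px):
        """visible_fractions: fraction of the ad in view at each 100 ms tick."""
        threshold = 0.3 if ad_area_px >= 242_500 else 0.5
        consecutive = 0
        for fraction in visible_fractions:
            consecutive = consecutive + 1 if fraction >= threshold else 0
            if consecutive >= 10:  # one continuous second
                return True
        return False

    # Example: a 300x250 ad that is 60% in view for 1.2 seconds while the user scrolls.
    ticks = [0.0, 0.2] + [0.6] * 12 + [0.1]
    print(is_viewable(ticks, 300 * 250))  # True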

Back in March, the Media Rating Council (MRC) lifted its advisory against using viewable impressions as a currency for buying, selling, and measuring advertising in the digital display space, marking the first time the industry has established a single measurement for viewability. (While viewability has been a topic for several years now, the MRC issued an advisory against transacting on viewable impressions in November 2012 due to known technological limitations. Removal of the advisory earlier this year gave marketplace players the go-ahead to transact.)

Why is viewability such a hot topic of conversation?

Digital advertisers have been pushing for measurements that would give them a better sense of how many people their campaigns actually reach. Turns out, comScore found that up to 54% of display ads aren’t viewable, as a result of things like rapid scrolling, ad placements that never scroll into view, and non-human (bot) traffic. Brand marketers were not too thrilled to find out they had been paying for ad impressions that nobody was seeing, and they called for a system with more accountability and transparency.

So, after much deliberation, enter the IAB and MRC's new standard of measurement. The hope among these trade bodies is that the new viewability standard will shift the entire currency of the industry from an impressions-served basis to an impressions-viewed one.

Big picture, what will this shift mean for the digital ad ecosystem?

In short, the way online media is sold is changing. On the demand side, advertisers are gaining more transparency and will expect guarantees on viewable display impressions in the future. Theoretically, this will improve campaign performance, as eventually, advertisers will only pay for ads that have the potential to be seen.

On the supply side, opinions are varied. Some publishers are concerned that this shift will have a negative impact on ad revenues since their supply of impressions to sell may be significantly reduced. This fear has already resulted in some publishers rethinking site design to increase viewability.

On the other hand, publishers that are focusing on premium ad experiences see this as a largely positive change. If a publisher can guarantee that ads are actually being seen by an engaged audience, it can leverage those high viewability percentages to command higher prices for certain impressions. David Payne, Chief Digital Officer at Gannett, summed it up well in a recent post: “Viewability provides us another proof point that shows how our premium content creates highly engaged audiences perfect for branding campaigns.”

So, are people really adopting viewability as the standard?

Now that the viewability standard has been set (though some folks are questioning it), viewability requests are beginning to come in to direct sales teams. Major general interest publishers are already seeing more viewability requests, and expect to see an increase in requests in Q1 2015. Smaller, endemic publishers are initiating programs to research their own sites’ viewability, so as to prepare for when they too begin to encounter viewability requests in RFPs.

Up next in our series: "What Does Viewability Mean for Publishers?" Stay tuned...

Attention Web World Cup Wrap-Up: Sample Size and Variability

July 17th, 2014 by Dan

After a month of exciting matches, the Attention Web World Cup has come to a close. In a time-honored tradition (pun intended) Ghana defeated the US with a score of 30 to 25. Congratulations to everyone from Ghana who was consuming content on the web during World Cup matches; you all contributed to this amazing achievement! And to my fellow Americans: next time around, let’s spend more time reading, okay?

To wrap up the festivities, one of our designers made these awesome animations of the time course of each tournament game based on the data I pulled. These plots show the median Engaged Time for users from each country as each match progresses.

When you view these animations, you’ll likely notice that some of these countries have incredibly stable Engaged Times while others have Engaged Times that are incredibly erratic. The U.S., for instance, shows very little variance in median Engaged Time, while Cote d’Ivoire and Cameroon have median Engaged Times that jump all over the place.

This behavior is a consequence of sample size. At any particular time during a match, users from many of the African countries and other smaller countries made up a much smaller sample than, say, users from the US or Australia. In statistics and data analysis, we’re always concerned about sample size for exactly the reason illustrated in many of these graphs: the variability in a sampled statistic can mask its “true” value. We can try to capture this with a distribution, but if the width of that distribution is large, then we can’t be very confident in the value of whatever measure of central tendency we choose (mean, median, mode, etc.). And the variance of a sample statistic scales with the inverse of the sample size, so only as the number of points we’ve sampled gets large can we be confident in our estimate.
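
Here’s a quick simulation of that effect, under a made-up Engaged Time distribution: the sample median computed from a handful of concurrent readers swings far more widely than one computed from thousands. Only the shrinking spread matters; the numbers themselves are illustrative.

    # Simulation: the spread of the sample median shrinks as the sample size grows.
    # The exponential distribution and its 30-second mean are made up for illustration.
    import random
    import statistics

    random.seed(42)

    def sample_median(n):
        return statistics.median(random.expovariate(1 / 30.0) for _ in range(n))

    for n in (10, 100, 1000, 10000):
        medians = [sample_median(n) for _ in range(200)]
        print(f"n={n:6d}  typical median={statistics.median(medians):5.1f}s  "
              f"spread={statistics.pstdev(medians):4.1f}s")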

I’m actually quite surprised the U.S. made it so far in my scoring scheme here. I knew going into the #AWWC that some countries were sorely underrepresented in our sample, and I expected a fair chance that these countries would show a falsely high median Engaged Time: if enough of a small sample of users just so happened to be long-engagement users, this would skew their results. In the Group Round this was okay, because I performed a statistical test that tried to account for this variability. There, I asked a very common statistical question: Assuming these two teams actually have the same median Engaged Time, what is the probability that I’d observe a difference in medians at least as extreme as the one I’ve observed? If that probability was low enough, then I declared Team A and Team B to have different medians, and took the higher one as the winner. But in the bracket round, we needed clear winners (no draws were allowed), so we left it up to sampling variance. For the small-sample-size teams, this was a double-edged sword: they only needed a few users spending an inordinate time engaged with content to edge above the higher-sample-size teams, but, conversely, if the users they did have spent very short times, that would skew them toward losing. We can see, though, that this seemed to work out well for these countries—they made a great showing all the way through the AWWC.
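
For readers curious about what a test like that looks like in practice, here’s a minimal sketch of one common way to answer the question (the post doesn’t specify the exact test used): a permutation test on the difference of medians, with made-up Engaged Time samples for two hypothetical teams.

    # Permutation test sketch: under the null hypothesis that both teams share the same
    # median Engaged Time, how often does shuffling the labels produce a difference in
    # medians at least as extreme as the observed one? Data here are made up.
    import random
    import statistics

    random.seed(7)
    team_a = [random.expovariate(1 / 32.0) for _ in range(500)]  # well-sampled team
    team_b = [random.expovariate(1 / 28.0) for _ in range(60)]   # small-sample team

    observed = abs(statistics.median(team_a) - statistics.median(team_b))
    pooled = team_a + team_b

    trials, extreme = 2000, 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(statistics.median(pooled[:len(team_a)]) -
                   statistics.median(pooled[len(team_a):]))
        if diff >= observed:
            extreme += 1

    print(f"observed difference: {observed:.1f}s, p-value ~ {extreme / trials:.3f}")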

Thinking about variability is my job, so I might be biased here (yes, a statistics pun), but I hope you enjoyed this fun exploration of our data. I hope it got you thinking about international variability in engagement, and variability of metrics in general. Tweet me @dpvalente or email me at dan@chartbeat if you want to continue the discussion.

Revisiting Return Rates

July 14th, 2014 by Josh

Starting today, we’ve updated our definition of return rate in both our Weekly Perspectives and in the Chartbeat Publishing dashboard. Consequently, you’re likely to see a shift in the numbers in your dashboard — so we wanted to write a quick note explaining the change, why we made it, and what you can expect to see.

Defining return rate

Return rate, if you’re not familiar with it, is a metric designed to capture the quality of traffic that typically comes from a referrer. It measures the fraction of visitors coming from a given referrer who return to the site later — if 1,000 people come to a site from, say, Facebook, should we expect 10 of them to come back or 500? Depending on the answer, we might interpret and respond to a spike from Facebook quite differently. While the intuition behind return rate is straightforward, the actual formula used to calculate it is a bit more up for grabs.

Up until now, we’ve calculated return rates using the following formula:

    return rate = (visits from the referrer that are later followed by a return visit) / (all visits from the referrer)

That formula roughly captures a notion of “how likely is it, for a given visit from Facebook, that that visit will be ‘converted’ into a return?”

As we’ve talked through that definition over the past year, we’ve come to realize that it’s more natural to phrase returns in terms of people, not visits — to ask “how likely is it, for a given visitor from Facebook, that that person will be ‘converted’ into a return?” Hence, we’re now using the following calculation:

    return rate = (visitors from the referrer who later return) / (all visitors from the referrer)

So, rather than speaking in units of “visits,” this definition speaks in units of “visitors” — a seemingly small (but significant) change. In addition, we’re now only counting a return if it comes at least an hour after the initial entrance, which corrects for a pattern we sometimes see where visitors enter a site and then re-enter a few minutes later.
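
To make the new definition concrete, here’s a minimal sketch of the visitor-based calculation as described above, counting a return only if it comes at least an hour after the entrance from the referrer. The session records and field names are illustrative, not Chartbeat’s actual schema.

    # Sketch of the visitor-based return rate: of the visitors who arrived from a given
    # referrer, what fraction came back at least an hour after that entrance?
    from collections import defaultdict

    ONE_HOUR = 3600

    # (visitor_id, referrer_domain, entrance_timestamp) per visit; illustrative data
    visits = [
        ("v1", "facebook.com", 0),
        ("v1", "",             7200),  # returns two hours later -> counts as a return
        ("v2", "facebook.com", 0),
        ("v2", "",             600),   # re-enters ten minutes later -> too soon to count
        ("v3", "twitter.com",  0),
    ]

    by_visitor = defaultdict(list)
    for vid, referrer, ts in visits:
        by_visitor[vid].append((ts, referrer))

    referred = defaultdict(set)  # referrer -> visitors who entered from it
    returned = defaultdict(set)  # referrer -> those visitors who came back 1h+ later
    for vid, vs in by_visitor.items():
        vs.sort()
        for i, (ts, referrer) in enumerate(vs):
            if not referrer:
                continue
            referred[referrer].add(vid)
            if any(later - ts >= ONE_HOUR for later, _ in vs[i + 1:]):
                returned[referrer].add(vid)

    for referrer, group in referred.items():
        rate = len(returned[referrer]) / len(group)
        print(f"{referrer}: return rate {rate:.0%}")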

What's changing?

It’s likely that the return rate numbers in your dashboard and Weekly Perspectives will drop under this new definition. To help you sort out whether your numbers are trending up or down, we’ve gone back and recalculated reports using the new methodology, going back to the beginning of June. We hope that the transition to the new definition is painless, but if you have any questions, feel free to comment or get in touch with me at josh@chartbeat.com

Building an Entire Product in 6 Weeks (How We Built the Chartbeat Paid Content Tool)

July 9th, 2014 by Harry

A little while ago we released our Paid Content product after two consecutive six-week sprints. The first six weeks were spent creating the MVP, and the second six weeks were spent polishing it up. This is the breathless tale of those first six weeks.

Part 1: Research

This whole thing began because we saw that our clients were struggling. Paid content, sponsored content, native content – whatever you call it – remains a mysterious beast for many folks, and at the time there were few options for measuring paid content performance, let alone figuring out what you can do to make it better.

For the first two weeks of our sprint we researched and brainstormed. We huddled in offices and littered the whiteboards with ideas, questions, diagrams, and whatever else we could to make sense of things. We’d pore over data to see what insights we could glean and what information would be helpful to know for any native content campaign. It was a lot of debating and arguing, breaking for lunch, and then regrouping for more debating and arguing.

Amidst these debates we talked to existing clients. A lot. We wanted to know how they created paid content campaigns and what pain points they had experienced. We’d invite them into the office and talk to them on the phone. We’d visit them in their offices and pummel them with questions, searching for what we could do to improve how they analyzed paid content performance.

Part 2: Design


As we were assessing the type of data our clients needed, we also began to design our version of better.

We’d create one mock-up, show it around, gather feedback, and iterate. We’d see what worked in a design and what didn’t. We’d toss out the bad, toss out some of the good, and try again. We moved swiftly, for time was against us.

Something that proved to be a great success was clickable mocks. Typically a mock is static. With a static mock you can cycle through a list of images to give a sense of what the product will contain, but a clickable mock allows you to simulate how the product will feel when it’s complete.

These clickable mocks proved insanely helpful when discussing the product with clients. They enabled us to show our ideas and direction rather than just tell.

Part 3: Development

With under three weeks left to go in the cycle we knew we had to hustle.

We wanted to see what the data for a real paid content campaign looked like, so we worked toward getting things working on the screen as fast as possible. Despite all our planning and designing, we had yet to see real data for a paid content campaign, and we were concerned that we had planned and designed for data points that might not exist. It’s fine to plan to include data about the amount of Twitter activity that drives traffic to a piece of native content; however, if that value is always 0, it’s not helpful.

To our relief, our planning paid off. The data worked and made sense. From there on out it was a sprint to bring all the beautiful designs to life.

Part 4: Launch


With the launch of our MVP looming we knew we’d have to start making some hard decisions. Everything we wanted to include for the first version would not fit, so out came the knife as we looked to see what we could cut away.

Delicately, we began to inspect what was left. We weighed each piece, deciding which were show-stopping features or essential functionality that had to make it for launch, versus things that would be fine to include afterwards. We’d see which features would be more ‘expensive’ to complete. At this stage the only currency we traded in was time, with everything balanced between time to complete and impact on the product.

Conclusion

Some hard decisions were made, but ultimately we managed to ship on time and practically feature-complete.

We were able to bring to market a product that six weeks prior did not exist as anything but an idea. Everyone at Chartbeat came together to make this a reality, each pulling their own weight and helping one another. Through and through it was an incredible team effort.

Within Chartbeat we managed to create an MVP in record time. We were able to assess client needs and industry gaps to shape our product and get it out the door and into clients’ hands. We’re not done, but we’re off to a strong start.