Archive for October, 2014

As part of our larger efforts to help build an Attention Economy—in which success is measured not by clicks and pageviews but by time and audience attention earned—we’ve publicly released our Description of Methodology, which outlines the measurement process on which Chartbeat’s MRC accreditation is based.

Given that this document is a bit, well, hefty, we figured we'd briefly explain a couple of our signature metrics here on the blog.

What is viewability?

A viewable impression is an online advertising metric that indicates whether a display ad actually had a chance to be seen when it was served. More specifically, the IAB and MRC define a viewable impression as one that's at least 50% visible for at least one second. To keep it simple, viewability tracks whether at least half of a display ad has the chance to be seen in the viewable portion of a browser window for at least one continuous second. Technically speaking, one second is measured as 10 consecutive 100-millisecond observations.

For the full scoop on viewability, check out our 101 series:

  • What is Viewability?
  • What Does Viewability Mean for Publishers?
  • What Does Viewability Mean for Advertisers?

Viewability Metrics

    Chartbeat is accredited for the following viewability metrics:

    VIEWABLE IMPRESSION

    A count of the number of impressions that were deemed “viewable” under the MRC’s Viewable Impression Measurement Guidelines.

    Chartbeat Methodology: Every 100 milliseconds on in-focus pages, Chartbeat checks every ad tagged with Chartbeat's "data-cb-ad-id" attribute to see if over 50% of the ad has entered the viewport (the viewable portion of your browser window). Once the ad enters the viewport, Chartbeat checks every 100 ms to ensure that it has remained on screen and the window has stayed in focus. After ten consecutive successful checks (one continuous second), Chartbeat designates the impression as viewable.
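
    To make the mechanics concrete, here's a minimal sketch of that polling loop in browser JavaScript (illustrative names and structure, not Chartbeat's production code):

```javascript
// Minimal sketch of an IAB-style viewability check. Polls every 100 ms;
// after 10 consecutive successful checks (one continuous second), the
// impression is designated viewable.
function watchAd(adElement, onViewable) {
  let consecutiveChecks = 0;

  const timer = setInterval(() => {
    // The counter resets if the page loses focus or the ad leaves view.
    if (document.hidden || !document.hasFocus()) {
      consecutiveChecks = 0;
      return;
    }

    // Compute what fraction of the ad is inside the viewport.
    const r = adElement.getBoundingClientRect();
    const w = Math.max(0, Math.min(r.right, window.innerWidth) - Math.max(r.left, 0));
    const h = Math.max(0, Math.min(r.bottom, window.innerHeight) - Math.max(r.top, 0));
    const visibleFraction = (w * h) / (r.width * r.height || 1);

    // Over 50% in view counts as a successful check; otherwise start over.
    consecutiveChecks = visibleFraction > 0.5 ? consecutiveChecks + 1 : 0;

    if (consecutiveChecks >= 10) {
      clearInterval(timer);
      onViewable(adElement);
    }
  }, 100);
}

// Watch every ad tagged with the data-cb-ad-id attribute.
document.querySelectorAll('[data-cb-ad-id]').forEach((ad) =>
  watchAd(ad, (el) => console.log('viewable impression:', el.dataset.cbAdId))
);
```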

    IMPRESSION BREAKDOWN

    The number of impressions that are considered non-viewable, standard, or premium.

    NON-VIEWABLE AD IMPRESSION

    These represent served impressions for which the viewability criteria are not met but which can still be "seen" by the viewable decisioning function.

    MEASURED RATE

    This is calculated as a percentage and represents (Viewable Impressions + Non-Viewable Impressions)/Total Served Impressions.

    VIEWABLE RATE

    This is calculated as a percentage and represents Viewable Impressions / (Viewable Impressions + Non-Viewable Impressions).
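
    As a worked example of the two rate definitions, here's a small illustrative helper (function and variable names are ours, not Chartbeat's):

```javascript
// Illustrative calculation of the two accredited rates.
function impressionRates({ viewable, nonViewable, totalServed }) {
  const measured = viewable + nonViewable; // impressions we could decision on
  return {
    // Measured Rate: share of served impressions that were measurable at all.
    measuredRate: (measured / totalServed) * 100,
    // Viewable Rate: share of measured impressions that met the standard.
    viewableRate: (viewable / measured) * 100,
  };
}

// For example, 600 viewable and 300 non-viewable out of 1,000 served gives
// a Measured Rate of 90% and a Viewable Rate of about 66.7%.
console.log(impressionRates({ viewable: 600, nonViewable: 300, totalServed: 1000 }));
```

    Note that unmeasurable impressions lower the Measured Rate but don't affect the Viewable Rate, which is computed only over measured impressions.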

    Note: We are accredited for a few additional viewability metrics required by MRC’s Viewability Guidelines.

    What’s the industry saying about viewability?

    Well, the industry is saying a lot. While opinions certainly vary, the consensus seems to be that the new viewability standard is, at the very least, a step in the right direction:

    “I don’t believe that viewability is a performance metric at all, but is rather just a huge step up from the old ‘served’ impression metric that we have used for years. However, a focus on increasing viewability will result in greater performance on the major engagement metrics like Universal Interaction Rate and Click Through that marketers value highly. It is this increased performance that will eventually lead to higher CPMs.”

    Jeff Burkett, Sr. Director Ad Innovation & Product Strategy at The Washington Post

    “The current viewability standard, while clearly nascent, serves an important purpose. It introduces a baseline criterion for and measure of accountability. At the end of the day, it is a means to a larger end: increased brand spend that better aligns with time spent online.”

    Neeraj Kochar, Tremor Video

    “Viewability is a positive development. The industry is at a major crossroads as we’re dealing with a growing amount of traffic being non-human, which has created a polluted ecosystem. The viewable impression is one step in the process to help solve this problem. It’s becoming the anchor that will allow for engagement and exposure metrics to be used to evaluate campaigns and prove for brands the value of the impressions being served.”

    Mark Howard, Chief Revenue Officer at Forbes

    “Unfortunately, it’s going to take a while before viewability becomes a valid currency and is established as a key metrics in determining impression value. However, I do think that there is an opportunity for publishers to take advantage of this debate to maintain and increase premium rates as more media becomes traded programmatically.”

    Peter Jones, The Guardian

    “Until now the ad impression was, essentially, a mechanical event — the creative file being loaded on the Web page. The viewability standard transforms an impression into an opportunity to see event: something of inherent value to a brand, just as in traditional media”

    Yaakov Kimelfeld, chief research officer at Millward Brown Digital

    Chartbeat has become the first analytics company accredited to measure attention metrics for both display advertising and content. The Media Rating Council has accredited 21 of the metrics featured in Chartbeat's advertising platform, including viewability and active exposure time.

    Earlier today, Digital Content Next, an organization representing over 50 premium publishers, released a study and special report outlining how digital publishers currently view and use time-based metrics and what their expectations are for the future.

    In the report DCN suggests that shifting to a measurement framework that incorporates time-based metrics would “align valuation of content and advertising with time and attention…and offers solutions to significant industry challenges.” Namely, time-based metrics take viewability a step further and create an inventory constraint and, as a result, an economy of scarcity in which attention is a true measure of quality content and effective advertising.

    The report, which consisted of in-depth qualitative interviews with nine leading publishers, including CNBC, ESPN, Gannett, and The Wall Street Journal, as well as a quantitative survey of 25 DCN member publishers, covers current usage of time-based metrics, both internally and as a sales tool, as well as attitudes on the future of time as a currency. Here are some of the key takeaways:

    1. Time metrics are commonly used to evaluate performance.

    90% of DCN members surveyed use time metrics to internally evaluate performance of their sites and content among editorial and/or ad operations teams.

    [Chart: Publisher use of time-based metrics]

    2. Publishers are sharing time metrics with advertisers.

    85% of publishers using time-based metrics share these metrics with advertisers as proof of things such as audience engagement/attention, quality of content and audience loyalty.

    3. There is a real interest among premium publishers to transact on time.

    80% are already testing or express an interest in transacting on time.

    [Chart: Publisher interest in transacting on time]

    4. Publishers believe there is potential for time to serve as currency.

    52% agree or strongly agree that transacting on time is the next evolutionary step of viewability implementation.

    [Chart: Attitudes on the future of time as currency]

    While a significant number of publishers are already using time metrics to gain insights about consumption patterns, adjust editorial cycles, and more accurately forecast ad inventory, many still see several hurdles to using time as currency. The most commonly cited obstacles were a lack of standard metrics and measurement methodology, a lack of research showing that time in view correlates with ad effectiveness, a lack of marketer and ad agency education and interest, and scope constraints. Bottom line: time-based metrics are a big step in the right direction, but the road to a more sustainable media ecosystem will not be without challenges.

    So what’s next? As the buy side continues to grapple with the concept of viewability, publishers can continue pushing to integrate time measurement into their metrics suite. By better understanding their audiences and bringing the time dimension to ad unit measurement, publishers will be well positioned to prove the value of audience time spent with their content and introduce the time topic into conversations with the buy side.

    Read the full DCN report, How Time-Based Measurement is Grabbing Digital Publishers' Attention.

    If you're reading this post, you've likely already read our recent announcement about gaining accreditation from the Media Rating Council for measuring a number of metrics, including viewability and active exposure time. You may have also read the Advertising Age feature on the Financial Times' plan to trade ads on those metrics — which gives a view into why we're so excited about the possibility of people valuing ad inventory based on the amount of time audiences spend with it.

    We went through the process of accreditation to bring us one step closer to a world where ads can be valued by the duration for which they're seen. But the thing is, trading on time only works if:

    1. Both buyers and sellers agree that time is a valuable unit to trade on
    2. There’s a unified standard for how time should be measured
    3. Both parties have access to the data and infrastructure on which to trade.

    We think the case for transacting on time is clear and compelling: time in front of eyeballs is closer to what advertisers are hoping to buy than impressions (viewable or not) are, and — as one example — Jason Kint has made a strong case for why time-based valuation benefits publishers as well. On points (2) and (3), though, we think there's still a long way to go.

    On the measurement side, it's critically important that — at least while there's no IAB standard for time-based measurement — measurers be completely transparent about their exact methodologies, so that buyers and sellers can understand exactly what the numbers they're looking at mean. And, on the product side, even with an expanding set of products and a growing customer base, there's simply more to be built than we could ever build ourselves, and we think the industry will benefit from as many companies being involved as possible.

    To address both of those points, I’m excited to announce today that we’re publicly releasing our Description of Methodology.

    This is the main document on which our accreditation is based — it details the exact process of measurement that we use. Insofar as we have any “secret sauce” in our measurements, it’s in that document. Our goal in releasing it is twofold:

    • We think we all will benefit from others’ careful analysis, critique, and improvement upon the techniques we’ve proposed. Our hope is that others will adopt and refine our measurement process, and that the ability of all parties to accurately measure user engagement will improve over time.
    • The entire industry benefits from more people thinking carefully about their numbers, and we want it to be easier for other companies to gain accreditation. When we began our accreditation process, our largest hurdle was simply the fact that we didn’t have good examples of what successfully audited companies’ practices looked like. Our hope is that reading our document helps others down the line make it through in shorter order.

    Having spent several years refining our process, I wanted to highlight a few hard-fought bits of knowledge.

    Measuring Engagement

    Our method of tracking engagement was derived from a set of human-subject experiments and comes down to a simple rule: each second, our code determines whether or not the reader is actively engaged with the page and keeps a running counter of that engagement.

    In determining engagement, our code asks several questions (a minimal sketch in code follows this list):

    1. Is this browser tab the active, in-focus tab? If not, the user is certainly not actively engaged. If so, continue on to (2).
    2. Has the reader made any sort of console interaction (mousemove, keystroke, etc.) in the last 5 seconds? If not, the user is not actively engaged. If so, consider them actively engaged and give one second of credit toward the visitor's engaged time on the page.
    3. For each ad unit on the page: If conditions (1) and (2) have been met, is this ad unit viewable under the IAB viewability standard? If no, the ad has not been actively viewed this second. If so, give the ad one second of credit toward its active exposure time.
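
    Here's what that once-per-second decision loop might look like (hypothetical names, and the real code's bookkeeping is more involved; this is a sketch, not Chartbeat's actual implementation):

```javascript
// Illustrative once-per-second engagement check.
let lastInteraction = Date.now();
let engagedSeconds = 0;

// A compact version of the >50%-in-viewport test, used for step (3).
function isViewableNow(el) {
  const r = el.getBoundingClientRect();
  const w = Math.max(0, Math.min(r.right, window.innerWidth) - Math.max(r.left, 0));
  const h = Math.max(0, Math.min(r.bottom, window.innerHeight) - Math.max(r.top, 0));
  return (w * h) / (r.width * r.height || 1) > 0.5;
}

// (2) Any "console interaction" resets the five-second window.
['mousemove', 'keydown', 'scroll', 'touchstart'].forEach((evt) =>
  window.addEventListener(evt, () => { lastInteraction = Date.now(); }, { passive: true })
);

setInterval(() => {
  // (1) Must be the active, in-focus tab.
  if (document.hidden || !document.hasFocus()) return;

  // (2) Must have interacted within the last 5 seconds.
  if (Date.now() - lastInteraction > 5000) return;

  engagedSeconds += 1; // one second of credit toward engaged time

  // (3) Credit each currently viewable ad unit one second of exposure.
  document.querySelectorAll('[data-cb-ad-id]').forEach((ad) => {
    if (isViewableNow(ad)) {
      ad.exposureSeconds = (ad.exposureSeconds || 0) + 1;
    }
  });
}, 1000);
```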

    The five-second rule is an approximation (it's easy to construct cases where a person, say, stares at a screen for a minute without touching their computer), but we believe we've collected compelling evidence that it correctly measures the vast majority of time spent, and it provides a good measurement of guaranteed time spent — it's difficult to construct a scenario in which a visitor is measured as active by our system but isn't actually looking at the page. It also meets the IAB requirement that a measurement standard be clear, consistently applied, and evidence-based (in contrast with, for instance, an approach that builds per-user engagement models). It's also worth noting that others have independently arrived at similar methodologies for measuring active engagement.

    It's important to note, though, that we made a few mistakes early on that others might want to avoid:

    • Measuring engagement as a binary variable across a ping interval: For the most part, our code pings measurements back to our servers every 15 seconds (more on that later). An early attempt at measuring engaged time recorded a visitor as either engaged for 15 seconds (if they'd interacted with the site at all since the last ping) or 0 seconds (if not). That gives undue credit if, for instance, a visitor engages just before a ping goes out.
    • Tracking too many interactions (especially on mobile): When we initially began tracking mobile sites, we thought we’d listen to every possible user interaction event to ensure we didn’t miss engagement. We quickly had to change course, though, after hearing customer complaints that our event listeners were adversely affecting site interactions. There’s a balance between tracking every possible event and ensuring the performance of clients’ sites.
    • Not correcting for prerender/prefetch: Before the MRC audit, we'd never seriously considered correcting for prerendering (browsers such as Chrome can render pages before you actually visit them to improve page load time). Having now done it, in short: you need to do this if you want to count at all correctly (a sketch of one approach follows this list).
    • Not considering edge cases: Does your code correctly measure if a person has two monitors? If the height of the browser is less than the height of an ad? If the person is using IE 6? If a virtual page change occurs or an ad refreshes, are you attributing measurements to the right entity?
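
    On the prerender point, one way to avoid counting prerendered pages is to wait for the Page Visibility API to report the page as visible before starting measurement. A sketch under that assumption (Chartbeat's actual handling may differ):

```javascript
// Wait until the page is genuinely visible before counting anything;
// prerendered pages report a non-visible visibilityState until the user
// actually navigates to them.
function startWhenVisible(startMeasurement) {
  if (document.visibilityState === 'visible') {
    startMeasurement();
    return;
  }
  // Covers hidden tabs as well as Chrome's legacy 'prerender' state.
  document.addEventListener('visibilitychange', function onVisible() {
    if (document.visibilityState === 'visible') {
      document.removeEventListener('visibilitychange', onVisible);
      startMeasurement();
    }
  });
}

startWhenVisible(() => console.log('page actually viewed; start the clock'));
```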

    Because devices, events available to our JavaScript, and the patterns of consumption on the internet change with time, we revisit this measurement methodology annually. If you’re interested in contributing, get in touch: josh@chartbeat.com

    Ping Timing

    Measurements are taken by our JavaScript every second, but it would cause undue load on users' browsers if we were to send this measurement data back every second. In that sense, there's a balance we need to strike between accurate measurement and being good citizens of the internet. Here's the balance we struck (sketched in code after the list):

      1. When a visitor is active on a page, we ping data back every 15 seconds of wall clock time.
      2. If they go idle, by definition our numbers (except raw time on page) aren’t updating, so there’s no need to ping as frequently. We use an exponential backoff — pinging every 30 seconds, then 60 seconds, then two minutes, etc — and immediately bounce back to our 15 second timing if a visitor reengages with the page.
      3. When a qualifying event occurs, we ping that data immediately to ensure that we record all instances of these events. Currently, the qualifying events are a new ad impression being served and the IAB viewability standard being met for an ad.
      4. When a user leaves the page, we take a set of special steps to attempt to record the final bit of engagement between their last ping and the current time (a gap of up to 14 seconds). As the visitor leaves, we write the final state of the browsing session to localStorage. If the visitor goes on to visit another page on the same site and our JavaScript finds data about a previous session in localStorage, it pings that final data along.
      5. We also hook into the onbeforeunload event, which is called when a visitor exits a page, and send a “hail Mary” ping as the person exits — because the page is unloading there’s no guarantee that the ping request will complete, but it’s a best effort attempt to get the visitor’s final state across in the case that they never visit another page on the site (in which case the method described in (4) isn’t able to help).
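
    A condensed sketch of that scheduling policy (the /ping endpoint, the currentState() helper, and the backoff cap are all illustrative, not Chartbeat's actual code):

```javascript
// Illustrative ping scheduler: 15-second pings while active, exponential
// backoff when idle, an immediate bounce back to 15-second timing on
// re-engagement, and a best-effort final ping on exit.
const BASE_INTERVAL = 15 * 1000;
let interval = BASE_INTERVAL;
let lastInteraction = Date.now();
let timer;

function currentState() {
  // Hypothetical: whatever counters the page has accumulated.
  return { url: location.href, engagedSeconds: 0 };
}

function sendPing(data) {
  // sendBeacon is more likely than XHR to survive page unload.
  navigator.sendBeacon('/ping', JSON.stringify(data));
}

function schedule() {
  timer = setTimeout(() => {
    sendPing(currentState());
    // Back off only if the visitor has been idle since the last ping:
    // 30 seconds, then 60, then two minutes, and so on (capped here).
    if (Date.now() - lastInteraction >= interval) {
      interval = Math.min(interval * 2, 4 * 60 * 1000);
    }
    schedule();
  }, interval);
}

['mousemove', 'keydown', 'scroll', 'touchstart'].forEach((evt) =>
  window.addEventListener(evt, () => {
    lastInteraction = Date.now();
    if (interval !== BASE_INTERVAL) {
      // Re-engaged: bounce straight back to 15-second timing.
      clearTimeout(timer);
      interval = BASE_INTERVAL;
      schedule();
    }
  }, { passive: true })
);

window.addEventListener('beforeunload', () => {
  // Stash final state for pickup on the visitor's next same-site pageview...
  localStorage.setItem('cbFinalState', JSON.stringify(currentState()));
  // ...and send the "hail Mary" ping in case there is no next pageview.
  sendPing(currentState());
});

// On load, flush any final state stranded by a previous session.
const stranded = localStorage.getItem('cbFinalState');
if (stranded) {
  sendPing(JSON.parse(stranded));
  localStorage.removeItem('cbFinalState');
}

schedule();
```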

    If you found any of this useful, insightful, or flat-out incorrect, we’d love to be in touch. Feel free to reach out to me directly: Twitter @joshuadchwartz, email josh@chartbeat.com

    A few months ago, I wrote about coin-flipping experiments, Bayesian updating, and spurious certainty. Much of the discussion in that post centered on using probability densities to reason about data, but I put off detailed discussion about densities until a future time (and I passed the buck in a footnote, no less!). Well, the time has come to right that injustice. Over at our Engineering Blog, I’ve written up a bit of a tutorial on distributions and some different ways to visualize them. So, if this is something you are curious and/or interested in, head over there to read it. Enjoy!

    A few weeks ago, everyone's favorite Brit (who just happens to be our CEO), Tony Haile, gave a talk at the annual Online News Association conference in Chicago. During his chat, officially titled "A Data State of the Union: Can We Make Quality Pay Online," he touches on the metrics that really matter, the challenge of metrics vs. mission that many journalists face, and how we can fix some of the fundamental underpinnings of the media industry. Judging by the reaction on Twitter (check out #datasotu), a lot of attendees were digging what he had to say. Or maybe he's just really charming. I'll let you be the judge.

    Don’t have time to check out the whole thing? Well, you should make time. Kidding! (Sort of). I get it—and so does Tony—time is scarce. Here’s the TL;DR version + slides:

    Metrics vs. Mission

  • Many journalists are conflicted about data in the newsroom. Too often they feel they have to choose between metrics and mission. It shouldn't be an either/or.
  • Often, what seems like the simplest, most direct method of measuring success can actually backfire when it becomes the thing that matters most. The job is not to chase traffic. In the business of news, random, indiscriminate traffic is not what a business is built on.
  • It’s not traffic we monetize, but audience. Your audience knows who you are, likes what you do, and comes back. The goal is to build an audience—to acquire new people and convert them to loyal visitors.
  • And with this audience you’re not just after their index fingers, you’re after their minds. You have to create content that will make people like you and come back—and doing so often requires looking at data through a different prism.

    Clicking and Reading are Different Things

  • Pageviews should not be privileged as the most important metric when 55% of clicks get less than 15 seconds of attention.
  • It’s not enough to get someone to click. We have to get them to read.
  • Newsrooms ought to be focusing on a reader’s propensity to return. That means thinking about capturing time, not just creating a catchy headline. A big spike in traffic doesn’t really matter if those readers don’t come back.

    The Golden Metrics: Recirculation and Engaged Time

  • The key indicators of propensity to return are recirculation and engaged time.
  • Recirculation: the percentage of the audience that has consumed a particular piece of content (i.e., actually read it) and chooses to go on to consume another piece of content. Are visitors sticking around to read another article, or are they leaving?
  • The number one way to increase recirculation is to write something good enough to make people want to read more. And then you have to give them somewhere to go. That means using referrer information to segment your audience (e.g. social vs. homepage visitors) and then promote the right stories in your side rails or through in-line links.
  • Engaged Time: The more time someone spends with your stuff, the more likely they are to come back. If someone spends three minutes on your site they are twice as likely to return as if they spend only one minute.
  • It’s important to remember that a visitor’s default behavior is to leave. When you are trying to hold someone’s attention, you are competing with the entire sum of human knowledge. Every form of mass entertainment is simply a click away. You’ve got to win them with every single paragraph.
  • Recirculation and Engaged Time are balanced metrics. Often, going overboard with one metric, such as trying to boost recirculation with slideshows, will reduce Engaged Time. Think of these two metrics in context of each other and try to get them both balanced to reach an ideal state.

    Metrics are important, but they aren't the only important thing.

  • Even the most meaningful metrics can mess up a newsroom if they become the basis of incentive plans. Metrics should be used as a guide, not as a cudgel for compliance.
  • Metrics shouldn’t be tied to a journalist’s pay. A journalist doesn’t need external motivation to want to create great content. For the most part, incentive plans mean journalists stop relying on metrics and start resenting them. Metrics stop becoming a trusted feedback loop and become a cruel judge to satisfy.
  • An incentive system that can be gamed will be. Quotas are good for quantity, but they diminish quality and creativity. With quotas, journalists don’t take risks. They stick to what worked yesterday.
  • If you want your newsroom to embrace metrics, to learn, and to seek a more effective path toward reaching your organization's overarching goals, you have to give journalists the right metrics framed in the right way and trust their internal desire to do a great job.

    We are NOT in a Golden Age of Journalism

  • We don’t actually monetize content at all. We monetize the links to content. If you click on a link and the page loads, it doesn’t matter whether someone even read the content, whether they liked or loathed it. The content itself doesn’t determine the value of the page.
  • The fact that it's the clicking of the link (rather than the consuming of the content) that is the monetizable act means we're living in a world of infinite ad inventory, where the marginal cost of creating additional inventory is near zero.
  • In a world of infinite inventory, prices will always trend toward zero.
  • The currency we predominantly use to measure value is impressions, and thus pageviews, and that currency is killing us.

    The Solution: An Economy of Scarcity

  • For us to be able to charge premiums, we need to create an economy based on scarcity, where what happens with the content actually matters.
  • Time is the only unit of scarcity on the web, and it’s zero sum. A minute spent on one site is a minute not spent on another.
  • Attention correlates with quality. You have to be doing something right to capture that attention. And those who can capture more of it can charge more.
  • “If we can change the way we value what we do, then brands get happier, publishers have a sustainable business for quality journalism and the users get a Web… where anything that makes them want to leave is bad for business. That’s a Web worth fighting for.”

    – Tony Haile