Opening Up Our Measurement Process

If you’re reading this post, you’ve likely already read our recent announcement about gaining accreditation from the Media Rating Council for measuring a number of metrics, including viewability and active exposure time. You may have also read the Advertising Age feature on the Financial Times’ plan to trade ads on those metrics — which gives a view into why we’re so excited about the possibility of people valuing ad inventory based on the amount of time audiences spend with it.

We went through the process of accreditation to bring us one step closer to a world where ads can be valued by the duration for which they’re seen. But the thing is, trading on time only works if

  1. Both buyers and sellers agree that time is a valuable unit to trade on,
  2. There’s a unified standard for how time should be measured, and
  3. Both parties have access to the data and infrastructure on which to trade.

We think the case for transacting on time is clear and compelling: time in front of eyeballs is closer to what advertisers are hoping to buy than impressions (viewable or not) are, and — as one example — Jason Kint has made a strong case for why time-based valuation benefits publishers as well. On points (2) and (3), though, we think there’s still a long way to go.

On the measurement side, it’s critically important that — at least while there’s no IAB standard for time-based measurement — measurers be completely transparent about their exact methodologies, so that buyers and sellers can understand exactly what the numbers they’re looking at mean. And on the product side, even with an expanding set of products and a growing customer base, there’s simply more to be built than we could ever build ourselves, and we think the industry will benefit from having as many companies involved as possible.

To address both of those points, I’m excited to announce today that we’re publicly releasing our Description of Methodology.

This is the main document on which our accreditation is based — it details the exact process of measurement that we use. Insofar as we have any “secret sauce” in our measurements, it’s in that document. Our goal in releasing it is twofold:

  • We think we’ll all benefit from others’ careful analysis, critique, and improvement of the techniques we’ve proposed. Our hope is that others will adopt and refine our measurement process, and that the ability of all parties to accurately measure user engagement will improve over time.
  • The entire industry benefits from more people thinking carefully about their numbers, and we want it to be easier for other companies to gain accreditation. When we began our accreditation process, our largest hurdle was simply that we didn’t have good examples of what successfully audited companies’ practices looked like. Our hope is that reading our document helps others down the line make it through the process more quickly.

Having spent several years refining our process, I want to highlight a few hard-won bits of knowledge.

Measuring Engagement

Our method of tracking engagement was derived from a set of human-subject experiments, and it comes down to a simple rule: each second, our code determines whether or not the reader is actively engaged with the page and keeps a running counter of that engagement.

In determining engagement, our code asks several questions (a sketch of this logic follows the list):

  1. Is this browser tab the active, in-focus tab? If not, the user is certainly not actively engaged. If so, continue on to (2).
  2. Has the reader made any sort of console interaction (mousemove, keystroke, etc.) in the last 5 seconds? If not, the user is not actively engaged. If so, consider them actively engaged and give one second of credit toward the visitor’s engaged time on the page.
  3. For each ad unit on the page: If conditions (1) and (2) have been met, is this ad unit viewable under the IAB viewability standard? If no, the ad has not been actively viewed this second. If so, give the ad one second of credit toward its active exposure time.
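To make that rule concrete, here’s a minimal sketch of the per-second loop in JavaScript. Names like adUnits, isAdViewable, and ENGAGEMENT_WINDOW_MS are illustrative stand-ins rather than our production identifiers, and isAdViewable elides the full IAB viewability check:

```javascript
var ENGAGEMENT_WINDOW_MS = 5000; // the "five second rule"
var lastInteraction = 0;
var engagedSeconds = 0;
var adExposureSeconds = {}; // ad id -> seconds of active exposure

// Assumed to be provided elsewhere: the page's ad units and an IAB
// viewability check (e.g., at least 50% of the ad's pixels in view).
var adUnits = [];
function isAdViewable(ad) { /* viewability check elided */ return false; }

// Record when the reader last interacted with the page.
['mousemove', 'keydown', 'scroll', 'touchstart'].forEach(function (type) {
  document.addEventListener(type, function () {
    lastInteraction = Date.now();
  }, true);
});

setInterval(function () {
  // (1) Is this browser tab the active, in-focus tab?
  if (document.hidden || !document.hasFocus()) { return; }

  // (2) Any interaction within the last five seconds?
  if (Date.now() - lastInteraction > ENGAGEMENT_WINDOW_MS) { return; }
  engagedSeconds += 1; // one second of credit toward engaged time

  // (3) For each ad unit: was it viewable this second?
  adUnits.forEach(function (ad) {
    if (isAdViewable(ad)) {
      adExposureSeconds[ad.id] = (adExposureSeconds[ad.id] || 0) + 1;
    }
  });
}, 1000);
```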

The five second rule is an approximation (it’s easy to construct cases where a person, say, stares at a screen for a minute without touching their computer), but we believe we’ve collected strong evidence that it correctly measures the vast majority of time spent. It also provides a good measurement of guaranteed time spent: it’s difficult to construct a scenario in which a visitor is measured as active by our system but isn’t actually looking at the page. The rule meets the IAB requirement of being clear, consistently applied, and evidence-based (in contrast with, for instance, an approach in which per-user engagement models are built). It’s also worth noting that others have independently arrived at similar methodologies for measuring active engagement.

It’s important to note, though, that we made a few mistakes early on that others might want to avoid:

  • Measuring engagement as a binary variable across a ping interval: For the most part, our code pings measurements back to our servers every 15 seconds (more on that later). An early attempt at measuring engaged time recorded a visitor as either engaged for 15 seconds (if they’d interacted with the site at all since the last ping) or 0 seconds (if not). That gives undue credit if, for instance, a visitor engages just before a ping goes out.
  • Tracking too many interactions (especially on mobile): When we initially began tracking mobile sites, we thought we’d listen for every possible user interaction event to ensure we didn’t miss engagement. We quickly had to change course, though, after hearing customer complaints that our event listeners were adversely affecting site interactions. There’s a balance between tracking every possible event and preserving the performance of clients’ sites.
  • Not correcting for prerender/prefetch: Before the MRC audit, we’d never seriously considered correcting for prerendering (browsers such as Chrome can render pages before you actually visit them to improve page load time). Having now done it, we can say plainly: you need to correct for prerendering if you want your counts to be correct at all (one approach is sketched after this list).
  • Not considering edge cases: Does your code correctly measure if a person has two monitors? If the height of the browser is less than the height of an ad? If the person is using IE 6? If a virtual page change occurs or an ad refreshes, are you attributing measurements to the right entity?
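On the prerender point, here’s a sketch of one way to handle it, assuming a startMeasurement entry point like the loop sketched earlier (the name is ours for illustration). A prerendered page reports itself as not visible through the Page Visibility API (older versions of Chrome exposed a dedicated “prerender” visibility state), so measurement can simply be deferred until the page first becomes visible:

```javascript
// Don't start counting until the page is actually visible, so that
// prerendered/prefetched loads don't accrue engaged time.
function startWhenVisible(startMeasurement) {
  if (document.visibilityState === 'visible') {
    startMeasurement();
    return;
  }
  document.addEventListener('visibilitychange', function onVisible() {
    if (document.visibilityState === 'visible') {
      document.removeEventListener('visibilitychange', onVisible);
      startMeasurement();
    }
  });
}
```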

Because devices, events available to our JavaScript, and the patterns of consumption on the internet change with time, we revisit this measurement methodology annually. If you’re interested in contributing, get in touch: josh@chartbeat.com

Ping Timing

Measurements are taken by our JavaScript every second, but it would cause undue load on users’ browsers if we were to send this measurement data back every second. In that sense, there’s a balance we need to strike between accurate measurement and being good citizens of the internet. Here’s the balance we struck:

    1. When a visitor is active on a page, we ping data back every 15 seconds of wall clock time.
    2. If they go idle, by definition our numbers (except raw time on page) aren’t updating, so there’s no need to ping as frequently. We use an exponential backoff — pinging every 30 seconds, then 60 seconds, then two minutes, and so on — and immediately bounce back to our 15-second timing if a visitor reengages with the page (a sketch of this scheduler follows the list).
    3. When a qualifying event occurs, we ping that data immediately to ensure that we record all instances of these events. Currently, the qualifying events are a new ad impression being served and an ad meeting the IAB viewability standard.
    4. When a user leaves the page, we take a set of special steps to attempt to record the final bit of engagement between their last ping and the current time (a gap of up to 14 seconds). As the visitor leaves, we write the final state of the browsing session to localStorage. If the visitor goes on to visit another page on the same site and our JavaScript finds data about a previous session in localStorage, it pings that final data along.
    5. We also hook into the onbeforeunload event, which is called when a visitor exits a page, and send a “hail Mary” ping as the person exits — because the page is unloading, there’s no guarantee that the ping request will complete, but it’s a best-effort attempt to get the visitor’s final state across in case they never visit another page on the site (in which case the method described in (4) can’t help). This exit handling is sketched below as well.
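Taken together, (1) and (2) amount to a timer whose interval doubles while the visitor is idle and snaps back when they reengage. Here’s a minimal sketch, where sendPing and isEngaged are illustrative stand-ins for our actual ping and engagement checks:

```javascript
var BASE_INTERVAL_MS = 15 * 1000;
var currentInterval = BASE_INTERVAL_MS;
var timer = null;

// Illustrative stand-ins for the real ping and engagement check.
function sendPing() { /* send the latest measurements to the server */ }
function isEngaged() { /* was the visitor active since the last ping? */ return true; }

function schedulePing(delay) {
  clearTimeout(timer);
  timer = setTimeout(function () {
    sendPing();
    // Active visitors stay on the 15s cadence; idle visitors back off
    // exponentially: 30s, 60s, 120s, ...
    currentInterval = isEngaged() ? BASE_INTERVAL_MS : currentInterval * 2;
    schedulePing(currentInterval);
  }, delay);
}

// On reengagement, bounce straight back to the 15-second cadence.
function onReengage() {
  currentInterval = BASE_INTERVAL_MS;
  schedulePing(currentInterval);
}

schedulePing(currentInterval);
```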
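And here’s a sketch of the exit handling in (4) and (5). The storage key and the shape of the saved state are illustrative; the important parts are persisting the final state to localStorage on the way out, flushing any leftover state on the next pageview, and firing the best-effort ping:

```javascript
// Illustrative stand-ins, as in the earlier sketches.
var STORAGE_KEY = '_finalSessionState';
var engagedSeconds = 0; // maintained by the measurement loop
function sendPing(data) { /* send measurement data to the server */ }

// (4) + (5): on exit, persist the final state and attempt one last ping.
window.onbeforeunload = function () {
  var finalState = JSON.stringify({
    engagedSeconds: engagedSeconds,
    ts: Date.now()
  });
  try {
    localStorage.setItem(STORAGE_KEY, finalState);
  } catch (e) { /* storage may be disabled or full */ }
  sendPing(finalState); // the "hail Mary"; may not complete
};

// (4) On each pageview, flush any state left over from a previous page.
var previous = localStorage.getItem(STORAGE_KEY);
if (previous) {
  sendPing(previous);
  localStorage.removeItem(STORAGE_KEY);
}
```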

If you found any of this useful, insightful, or flat-out incorrect, we’d love to be in touch. Feel free to reach out to me directly: Twitter @joshuadschwartz, email josh@chartbeat.com

