Archive for the ‘Data Science’ Category

Audience Building on Vulture.com: A Case Study

April 2nd, 2014 by Josh

Want to know more about traffic sources and how they can help you understand your audience's behavior? Download our guide.

Over the past year, we’ve published extensive research on how to use data to understand and build your audience — everything from the effects of Engaged Time to scrolling behavior to the traffic sources driving visitors to the sites in our network. All of the data in those pieces comes from a set of customers who allow us to use their data in anonymous, aggregated form. Looking at statistics aggregated from across a wide swath of sites is interesting because it lets us identify network-wide facts.

But, subtle patterns often get averaged out, so it’s hard to tell a nuanced story using aggregated data. Today, in partnership with New York Magazine and Rick Edmonds and Sam Kirkland of Poynter, we’re excited to present something different: a deep look into the data for one site, New York Magazine’s Vulture.com, examining what factors drive visitor loyalty. (A quick note: This data is presented with the consent of New York Magazine and Vulture.com. Chartbeat never shares customer-specific data.)

If you’re going to read one piece, I’d highly encourage you to click over and read the Poynter team's piece, which contains much of the data given below, as well as extensive feedback from the Vulture team. But, we also wanted to present our own take on the data, which you’ll find below. Our goal is less to provide answers than to get you thinking about what questions you might ask of your own site.

[Figure: Probability that a visitor returns to Vulture.com, given the number of times they’ve already visited this month]

How We Define “Loyalty” and Why It's Important to Measure

Before we can look at how visitors become loyal to a site, the first thing to do is define loyalty. Informally, by “loyal” we mean something like “a person who is highly likely to continue to return to the site across time.” For instance, a person might be loyal to the site of their daily newspaper. One way of getting toward a specific definition using the data is by asking how many times a person must visit before we’re nearly certain they’ll continue to return. In the figure below, we plot the probability that a person will return to Vulture.com, given the number of times they’ve already been to the site.

There are perhaps three things worth noting on this plot:

  1. Visitors who have come once so far in a month are just over 20% likely to return.

  2. That rate of return climbs rapidly until we reach visitors who have visited five or six times. Once a person has come five or six times in a month, we can be highly confident that they’ll continue to return.

  3. The downward slope on the right side of the graph is a windowing effect because we’re looking at one month of data: people are unlikely to come every single day in a month, so once a visitor has come more than about 22 times, their probability of returning yet again begins to decrease.

Based on this, a reasonable definition of a “loyal” visitor is one who visits at least five times in a month — after a person has come five times, we have a strong belief that they’ll continue to come back.
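
If you want to poke at this with your own data, here’s a minimal sketch of how one might estimate that return-probability curve, assuming you can produce a list of total visits per unique visitor for the month (the input format here is a hypothetical stand-in, not Chartbeat’s actual pipeline):

```python
from collections import Counter

def return_probabilities(visit_counts):
    """Estimate P(visitor returns again | has visited k times so far).

    visit_counts: one entry per unique visitor giving their total
    number of visits in the month, e.g. [1, 3, 1, 7, ...]
    (hypothetical input format).
    """
    dist = Counter(visit_counts)
    max_visits = max(dist)
    # Number of visitors who reached at least k visits.
    at_least = {
        k: sum(count for n, count in dist.items() if n >= k)
        for k in range(1, max_visits + 1)
    }
    # A visitor who reached k visits "returned" if they went on to k+1.
    return {
        k: at_least.get(k + 1, 0) / at_least[k]
        for k in range(1, max_visits)
    }
```

Plotting the resulting probabilities against k gives a curve like the one above; the k at which the curve flattens out is a natural loyalty threshold.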

The Relationship Between Time of Day and Return Rate

After asking if visitors returned to the site, the next question was when visitors returned. One of the most striking data points we found was that visitors are far more likely to return at the same time of day as that of their initial visit — those who first visit the site today at noon are most likely to come back to the site tomorrow at noon, and so on. While that pattern is significant throughout the day, for Vulture it’s substantially stronger for visitors who come in the afternoon and evening, as demonstrated in the figure below.

[Figure: Times of day at which Wednesday 10 a.m. visitors (red) and 6 p.m. visitors (blue) return throughout the week]

In this figure, we’re comparing two sets of visitors: those who first arrive on a Wednesday between 10:00 a.m. and 10:59 a.m. and those who arrive on the same day, but between 6:00 p.m. and 6:59 p.m. The red lines show what hours of the day the 10 a.m. visitors return to the site throughout the rest of the month, and the blue lines represent the same statistics for the 6 p.m. visitors. For both audiences, the vast majority of time spent on other days of the week is at the same time of day — for instance, the 10 a.m. audience is most likely to return on Tuesday, Wednesday, or Thursday at about 10 a.m. What’s striking, though, is that the 6 p.m. audience spends dramatically more time on site throughout the week when compared to the 10 a.m. crowd. It’s worth noting that, though we’re showing traffic from Wednesday morning and evening, the basic pattern holds for those who arrive at other hours on other days.
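
As a rough illustration of how a cohort comparison like this can be put together, here’s a sketch under simplified assumptions: given a month of (visitor, timestamp) pageviews, select the visitors whose first visit fell in a given hour and histogram the hours of their later visits. The input format is hypothetical, and the real analysis also accounts for day of week and time spent on site:

```python
from collections import Counter, defaultdict

def return_hour_histogram(events, first_hour):
    """Hours at which visitors return, for the cohort whose first
    visit of the month happened during `first_hour` (0-23).

    events: iterable of (visitor_id, datetime) pageviews for one
    month (hypothetical input format).
    """
    visits = defaultdict(list)
    for visitor_id, ts in events:
        visits[visitor_id].append(ts)

    histogram = Counter()
    for timestamps in visits.values():
        timestamps.sort()
        first, later = timestamps[0], timestamps[1:]
        if first.hour == first_hour:
            histogram.update(ts.hour for ts in later)
    return histogram
```

Comparing, say, return_hour_histogram(events, 10) against return_hour_histogram(events, 18) reproduces the kind of comparison shown in the figure.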

One theory might be that this variation is caused by a difference in the topics consumed — perhaps, for instance, readers engage with Vulture's TV coverage during the afternoon and evening. Interestingly, we saw no evidence that this is the case: the breakdown of traffic by topic is roughly constant throughout the day. On the other hand, this variation in return times lines up extraordinarily well with device usage. In the early daytime, when traffic is less likely to return, upwards of 40% of traffic is mobile. In the evening, when traffic is much more predictable and more likely to return, mobile falls to only 22% of overall traffic.

This data raises more questions than it answers: What can be done to get the morning audience to come back more frequently? How can editors take advantage of the daily patterns of their evening readers? Answering those questions is out of the scope of this article, but the upshot here is that there is a hugely interesting opportunity in understanding behavior as it relates to time of day.

Improving Return Rates of New Visitors

Obviously, one key challenge for any publication is getting new, incidental visitors to move down the funnel toward loyalty. We saw three factors that exhibited significant influence over a new visitor’s probability of returning: how they arrived at the site, the type of content they landed on, and how much time they spent reading.

Vulture’s top referrers are similar to what we see across the internet, as are their relative rates of return. Unsurprisingly, new visitors coming from its sister site nymag.com are most likely to return (22%), followed by those from Twitter (16%) and Buzzfeed (10%). Perhaps surprisingly, the length of an article proved to be a strong predictor of likelihood to return, as shown below.

[Figure: Probability of return by the pixel height of the page a visitor lands on]

Stepping through this graph from left to right:

  1. Visitors who land on the shortest articles are extremely unlikely to return, but their probability of return rapidly increases from there.

  2. Those who view the Vulture homepage, forming the first peak at about 3900 pixels, are substantially more likely to return than those who view average-length articles, which are 4000-4500 pixels high.

  3. However, those who visit longer articles — this article, for example — are substantially more likely to return.

We see similar trends when we look at the time that a visitor spends reading whatever page they land on.

[Figure: Probability of return by time spent reading the landing page]

Visitors who spend substantial time reading on the first page they land on are also much more likely to return to the site. Overall, this confirmed an editorial hunch the Vulture team had: that they were better off moving away from extremely short pieces of content.

But that’s the Vulture team specifically; shorter posts may work best for your site. We dove into this study with Vulture.com precisely because every site is different: the content is different, the people visiting are different, the goals and metrics are different. I hope you and your team will see this data as a starting point for what you could be looking at and acting on. There's a lot more richness in your site's data than raw traffic numbers. If you need help getting started and knowing what to look for — Chartbeat or not — just send me an email at josh@chartbeat.com.

How Long Are Viewable Impressions Actually Seen?

March 23rd, 2014 by Alex Carusillo

On Friday, Digiday wrote a piece examining some assumptions that are all too often made about the way people read on the internet. It covered a bunch of our favorite stuff, including the fact that conventionally “good” advertising spots aren’t necessarily in the places where people actually read. In addition, Lucia introduced something we’ve been thinking about a lot lately: the duration of an impression.

Over the past year, the industry has finally rallied around a viewability metric. As a result, we’ve seen a lot of premium publishers do great work to make their ads more viewable and, in turn, pull way ahead of their lower-quality competitors. Naturally, more and more ads across the internet are now becoming viewable, which raises a new question we hear almost daily from publishers: how do I prove that my inventory is actually better than the alternative?

Everyone is looking to sell based on reader attention, but I’ve yet to meet someone who thinks that the viewable impression actually helps them do that -- particularly when it comes to premium sites.

So we’ve been working with publishers, helping them take the next step: understanding how long people are actively focused on content while an ad is on the screen.

There’s a lot of research out there showing that the more time people spend with an ad, the more likely that ad is to succeed, but that research rarely looks at how long real ads on the internet are actually seen.

We decided to find out.

We took a look across a select group of publishers to find out how long ads are seen when they’re seen.  

Turns out, half of all viewable ads are seen for 1-5 seconds, while the other half are seen for longer than that. The natural reaction is to treat each additional second as an “engagement point” and assume that a higher number means better ads, but it’s actually not that black and white.

Research has shown over and over that at about ten seconds of exposure, diminishing returns start to set in: each additional second is worth less in terms of recall than the one before. That doesn’t mean, however, that every ad should seek to be seen for ten seconds. It means that different ads are right for different goals.

If an advertiser is trying to get their name out there as efficiently as possible in a pure awareness play, they likely want to buy spots with shorter impression durations. If, instead, they’re trying to get a specific message across or tell a more complex story than a logo appearing in an ad, they should look for spots in the 6-15 second range.

This, of course, leaves a big chunk of impressions that run longer than that ideal engagement time and don’t really help advertisers any more than 15-second ones do. We’re working with a handful of people to see what kind of creative things they can do to solve that, getting the most out of their inventory and giving their audience fresh content that benefits everyone involved -- the publisher’s business, the brand’s goals, the reader’s interest.

The point is, not every viewable impression is equal, but that doesn’t mean the shorter ones are categorically worse. It means that we should think about the goals of a campaign and which impressions are the right way to achieve those goals.

Second-Screen Viewing & the Super Bowl

February 3rd, 2014 by Josh

Current estimates are that nearly 100 million viewers tuned in to watch Seattle’s 43-8 win against Denver last night. Of course, there’ll be many reports that dissect the ways we watched the game, but for us, one particular area of interest is the prevalence of multi-device viewing. The concept of the “second screen”—people consuming media on multiple devices simultaneously—gets a lot of discussion these days, and sports sites are perhaps the best study in second screens. Sports fans still consume the vast majority of games on TVs but, while watching, they might also scan stats, highlights, and commentary on their phones, tablets, and computers.

That’s why I found myself flipping back and forth last night between a livestream of the game, my Chartbeat Publishing Dashboard, and an Emacs window, trying to figure out how online traffic varied throughout the night. Whereas on a typical night it’s hard to correlate real-world events with online behavior, last night’s game was different. Whether you were watching online or on television, the commercials and game events happened at the exact same moment, which gave us the opportunity to watch second-by-second shifts in web traffic.

One of the most interesting observations was how much online traffic fluctuated before and after commercial breaks. Across sports sites, we saw upticks of 5% to 15% in traffic just as the game went to a commercial break, and that traffic drained off just as quickly when the game resumed play. That trend was present across every commercial break during the game. Perhaps unsurprisingly, the vast majority of those upticks were on mobile devices.

After watching that trend for the first half, I expected a similar increase in traffic during halftime. But, interestingly, halftime elicited exactly the opposite response; sports traffic dropped by 15% to 50% during the break, and the majority of that drop was on mobile.

Because it’s so difficult to know for certain that the same person is using multiple devices, most analyses of second-screen behavior have measured device usage via surveys. In this case, though, because we saw behavior that was so tightly coupled to events taking place on TV screens, we can start to get a sense of the scale of multi-device usage across the web. And, with patterns in usage as strong as we saw, it’s clear that a large portion of people tuning in were actively engaged on second screens in response to game events.

2014 Trends in Online Journalism

January 23rd, 2014 by Kyle

At Chartbeat, we have the great privilege of working with thousands of the world’s online publishers, giving us access to one of the most interesting data sets ever. (Ever!) So, with 2013 wrapped up and 2014 now in full swing, we thought it’d be cool to do a little futurecasting… you know, use what we’ve learned from our own data studies—and from others’—to read the tea leaves. What will 2014 mean for data-driven journalism?

Yesterday, we hosted a webinar, “2014 Trends in Online Journalism,” with our good friends at the Online News Association. Joe Alicata, our Product Owner for Chartbeat Publishing, and Doug Benedicto, our UX Researcher, talked about everything from multi-platform news consumption to experimentation with content monetization. They also talked about how publishers are starting to focus on quality content and building loyal audiences.

If you missed the webinar—or just want to relive it—we’ve got the video for you below. If it’s just the slides you’re after, you can check those out over on Slideshare. We’d love to hear what you think, too: Which areas of online publishing do you think will be emerging, developing, or accelerating in the next year? Tweet us @Chartbeat. If you’re interested in a free trial of the new Chartbeat Publishing, send a note to productoutreach@chartbeat.com.

Guest Post: Categorizing Your Site

December 20th, 2013 by Hannah

Hannah Keiler was our Fall 2013 Data Science intern here at Chartbeat, working with Chief Data Scientist Josh Schwartz. Hannah is a senior at Columbia University, where she studies Statistics with a concentration in Computer Science. This blog post details one of several projects she tackled during her internship at Chartbeat.

At Chartbeat, we sometimes want to compare metrics across similar sites. There are several different ways to group sites. For example, you can begin by thinking about grouping sites by size – comparing metrics like number of readers or articles published each day. We were also interested in grouping together sites that write about similar content. Grouping sites by content manually for thousands of domains is incredibly tedious, so we wanted to devise a metric that would allow us to group similar sites automatically.

One way to define sites as having similar content is that they write about similar subjects at around the same time. If sites write about the same subjects, they are probably using the same keywords, like “Obama” or “Syria.” And the words that best summarize the content of an article are likely the ones appearing in its headline. Keeping these ideas in mind, we developed our metric.

Computing Similarity

We start by comparing sites two at a time. Let’s call the sites A and B. We look at the words used in the headlines in A and B day by day.

For each day, we record the words used in both A and B and compute a weighted sum of their counts. That means we divide the number of times a word occurs in both A and B that day by a number indicating how often that word occurs in headlines in general. Weighting the word counts this way gives more weight to rarer words, which helps us pick out pairs of sites that write about the same niche topics. We sum these weighted values within each day, and then sum across all of the days. Let’s call this final sum “Value 1.”

We also record all of the words used in headlines by either A or B each day. For each day, we compute the same weighted sum of these word counts, then add up the daily sums into one value. Let’s call this “Value 2.”

Then we divide Value 1 by Value 2. This gives us a ratio of the weighted count of words A and B share to the weighted count of words they use in total.
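
Here’s a minimal sketch of that computation, assuming headlines have already been tokenized into per-day word lists and that we have network-wide headline word frequencies to weight by (both input formats are hypothetical):

```python
from collections import Counter

def headline_similarity(headlines_a, headlines_b, global_freq):
    """Ratio of the weighted count of words A and B share (Value 1)
    to the weighted count of words they use in total (Value 2).

    headlines_a, headlines_b: dicts mapping day -> list of headline
    words for each site; global_freq: Counter of how often each word
    appears in headlines network-wide (hypothetical formats).
    """
    value1 = value2 = 0.0
    for day in headlines_a.keys() | headlines_b.keys():
        counts_a = Counter(headlines_a.get(day, []))
        counts_b = Counter(headlines_b.get(day, []))
        shared = counts_a.keys() & counts_b.keys()
        for word in counts_a.keys() | counts_b.keys():
            # Down-weight words that are common in headlines generally.
            weighted = (counts_a[word] + counts_b[word]) / max(global_freq[word], 1)
            value2 += weighted          # words used by either site
            if word in shared:
                value1 += weighted      # words used by both sites
    return value1 / value2 if value2 else 0.0
```

Computing this ratio for every pair of sites yields the similarity values visualized in the graphs below.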

How does this look?

We first computed the similarity metric for sites whose content we thought was geared towards sports, music, or celebrity/entertainment news. To visualize the similarity metric, we plotted the sites as nodes in a connected graph.

 

[Figure: Similarity graph of sports (dark blue), celebrity (red), and music (teal) sites]

(FYI: These graphs are anonymized because we don't share individual client data.)

The distance between the sites represents their similarity: closer sites have a stronger similarity metric. On this graph, the sports sites are dark blue, the celebrity sites are red, and the music sites are teal. As you can see, sites with similar content group together! The fact that the celebrity sites sit in the middle implies that they share some content with both music and sports sites, which makes sense. The one outlier posts fewer articles daily than the other celebrity news sites, so there was less overlap in term usage and, accordingly, its similarity metric was lower.

We also tried out our metric with British and Australian news. We get the graph below.

 

[Figure: Similarity graph of UK (red) and Australian (teal) news sites]

Here, the UK sites in red group together and the Australian sites in teal group together. The outlier writes more niche news stories than general Australian news, so it had less overlap with the other Australian and British news sites.

Wrapping Up

These initial results show that sites that post articles with the same topics in the headlines at around the same time tend to be similar types of sites. Moving ahead, this could be a great way to group sites into different categories based on their content.