Archive for the ‘Data Science’ Category

Beginning several days ago (the evening of Tuesday, 1/20, to be precise), you may have noticed a significant increase in traffic to your site from LinkedIn: across our network, traffic from LinkedIn increased by over 3x. Below, we detail why that change occurred and what publishers should expect going forward.

Over the past year, publishers have become increasingly interested in traffic from LinkedIn, as the LinkedIn team has been steadily working to improve its feed experience with the launch of a new mobile app and content platforms. Nevertheless, when looking at referrer traffic in analytics tools like Chartbeat, web traffic from LinkedIn has always seemed smaller than it should be for such a large platform, especially given the volume of traffic we see from LinkedIn’s mobile apps, which shows up under a separate referrer name.

On January 20th, that changed: LinkedIn made a change to correctly attribute its traffic, some of which had previously been categorized as dark social. The impact of that change was immediate and significant.

Looking at traffic coming from LinkedIn to sites across the Chartbeat network over the last six months, we see two trends: a steady increase over the period, followed by a huge jump at the end of January.

Zooming in on the right side of the graph, January 2016, we can see the immediate change in traffic as the attribution fix was pushed:

If we compare numbers from just after the change to the same period during previous weeks, traffic from LinkedIn was up by over 3x.

Some sites saw more than 6x increases in their LinkedIn traffic.

While LinkedIn still isn’t a major traffic source for many types of sites, we expect that many business-, media-, and technology-focused sites will see LinkedIn as a top-10 referrer going forward.

With Facebook’s change last year to help attribute all of their traffic, LinkedIn’s change here, and other work to come, we’re excited to see more traffic correctly attributed. We’ll continue to work with platforms in the coming months to bring their dark social traffic into the light.


2015 was a big year for top-quality journalism. Just looking at the 20 most read stories across the Chartbeat network, it’s clear that a heartening mix of longform reports and critical resources for breaking news captured and held the world’s attention this year. Quality content shone, even as the relationship between media and technology continued to shift – especially in the realms of mobile traffic, distribution platforms, and ad blocking.

In 2015, more than 70% of sites we measured saw traffic from mobile devices increase, and Facebook, as in prior years, generated the largest share of mobile traffic. In contrast to prior years, though, Facebook’s share of traffic held constant for most sites. That said, there’s no denying that new channels for content distribution, like Instant Articles, Snapchat Discover, and Google AMP, will only grow in importance over 2016, presenting an opportunity for publishers to build their audiences. And this is the key: even as some publishers, especially in Germany, report high rates of ad blocking, by prioritizing audience, embracing new channels, and doubling down on speedy browsing, we can build an even brighter media landscape for years to come.

So take some time to read Past/Forward. In it, we’ve proposed eight New Year’s resolutions for digital publishers seeking an outstanding 2016. We walk you through cutting down page load times, growing your loyal audience, writing winning headlines — pretty much everything future-focused publishers should strive for.

You can find Tony Haile’s forecast for 2016 and our eight digital media resolutions in Past/Forward.

When you work with as much data as we do—and trust me, it’s a lot—it’s humbling to show off the actual journalistic output we support. So we’ve compiled a list of the 20 stories that held your attention longest in 2015 — for a grand total of 685,231,333 Engaged Minutes (or more than 1,300 years). These were stories that left you breathless. Enraged you. Inspired you. They were long-form reports rich with narrative, like #1, 7, 11, and 17, which show that readers really do respond to quality (!!). There was live coverage of the attacks in Paris (#3, 4, 6) and the election in Britain (#5). They were confessional essays and impassioned arguments, investigations and elegies. These are the stories that prove that digital storytelling isn’t just alive, it’s kicking ass.

1. What ISIS Really Wants

The Atlantic | February

2. The Science of Why No One Agrees on the Color of This Dress

Wired | February

In-depth examinations of global newsmakers topped the list in 2015. Undoubtedly, this was the year of long-form narrative.

3. Paris attacks: as they happened

BBC | November

4. Paris attacks: Bataclan and other assaults leave many dead

BBC | November

5. Election Live

BBC | May

6. Paris massacre: At least 128 killed in gunfire and blasts, French officials say

CNN | November

It goes without saying: Breaking news will always grab and hold attention.

7. Inside Amazon: Wrestling Big Ideas in a Bruising Workplace

The New York Times | August

8. Scott Weiland’s Family: ‘Don’t Glorify This Tragedy’

Rolling Stone | December

9. How One Stupid Tweet Blew Up Justine Sacco’s Life

The New York Times | February

10. Police: Bryce Williams fatally shoots self after killing journalists on air

CNN | August

11. The Lonely Death of George Bell

The New York Times | October

Honed craft. Timeless themes. Notice that these Times pieces are further examples of the power of narrative journalism.

12. Spygate to Deflategate: Inside what split the NFL and Patriots apart

ESPN | September

13. At least 14 people killed in shooting in San Bernardino; suspect identified

CNN | December

14. The “Food Babe” Blogger is Full of Shit

Gawker | April

15. I Found An iPhone On the Ground and What I Found In Its Photo Gallery Terrified Me

Thought Catalog | April

16. No. 37: Big Wedding or Small?

The New York Times | January

Sometimes, the most engaging content is the most distracting. Readers will engage deeply with more than just serious news items.

17. Split Image

ESPN | May

18. This is Why NFL Star Greg Hardy Was Arrested for Assaulting His Ex-Girlfriend

Deadspin | November

19. The Coddling of the American Mind

The Atlantic | September

20. The Joke About Mrs. Ben Carson’s Appearance Is No Laughing Matter

The Root | September

Want to see how your stories stack up? Get in touch.

Update: A reader wrote in with the great suggestion of examining the effect of direct quotations in headlines. We found that headlines with direct quotes are 14% more likely to win headline tests than average headlines, making them the second most effective headline style we’ve tested. Please comment or get in touch with other suggestions for headline styles to examine!

Writing a catchy headline that captures the attention of your audience is, without question, an art form. As demonstrated in this headline, blindly following guidelines can lead to copy that sounds cliché at best and actively off-putting at worst. Still, effective headline writing can make quite a difference in the success of your content — after all, readers have to get to the actual articles somehow — so it can be expensive to get wrong.

Chartbeat Engaged Headline Testing enables content creators and editors to become better headline writers. By testing copy in real time, newsrooms can challenge assumptions about what kinds of headline constructions work well and which don’t.

Accordingly, we would like to turn that introspective lens first on some of our own recommendations for how best to use our tool, and then on some commonly cited “tips and tricks” for getting the most out of your headlines. One caveat: while we have the luxury of being able to plot general trends in a rich dataset of over 100 publishers and almost 10,000 headline tests, each publisher and audience is different. We encourage you to take a look at your own data and put some of our findings to the test (literally!) to see what works best for you.

Verifying Best Practices for Engaged Headline Testing

To help our clients get started with our tool, we often give them a list of best practices. Here are a few examples:

  • Test in Higher Traffic Positions
  • Don’t be Afraid to Test Multiple Variants
  • Test Distinct Differences

We like to encourage users to conduct headline tests that converge to a winner quickly, so that winning headlines spend the most possible time with the largest possible audience.

This raises the question of what “converging to a winner quickly” means. To answer it, let’s appeal to our data for an overall view. The graph below shows a histogram of experiments by the number of headline trials — that is, the number of unique visitors who see one of the tested headlines:


About half of conclusive experiments (those that determine a winner) need fewer than 2,500 trials to converge, and more than 85% need fewer than 10,000 trials. That said, the typical convergence time for your site will depend on the amount of traffic you have and how “evergreen” your content is.

For the sake of example, let’s imagine a publisher that gets 100 trials per minute and wants its experiments to finish within 25 minutes. The statistics above imply that only about half of this publisher’s experiments will finish within the 25 * 100 = 2,500 trials available.
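
As a back-of-the-envelope sketch of that arithmetic (the convergence shares are the rough figures quoted above, and the trial rate belongs to the hypothetical publisher):

    def share_converged(max_trials):
        """Rough read of the histogram above: ~50% of conclusive tests
        converge within 2,500 trials, ~85% within 10,000."""
        if max_trials >= 10000:
            return 0.85
        if max_trials >= 2500:
            return 0.50
        return None  # below the smallest bucket quoted above

    trials_per_minute = 100  # the hypothetical publisher from the text
    budget_minutes = 25
    print(share_converged(trials_per_minute * budget_minutes))  # 0.5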

Want to maximize the ROI of your headline testing practice? Learn how.

Click-Through Rate
Now, let’s take a look at how we can use higher-traffic (higher click-through rate) positions to speed up convergence. The following graph is a density plot of the number of trials needed for convergence against the CTR of the winning headline:


While there is a fair amount of noise in the plot, the main indication is that the number of trials needed is roughly inversely proportional to the CTR of the slot. What does this mean in practice? If a publisher tests in a prominent headline position that gets an 8% CTR, the test will converge in a quarter as many trials as it would in a below-the-fold position getting 2% CTR. That brings our convergence rate (within 25 minutes) from 50% to closer to 90%. Pretty astounding.

Number of Headline Variants
Finally, let’s graph the number of headline variants in each experiment:


Right now, we see that more than two-thirds of our headline tests are basic A/B tests, meaning only two variants. There are clear pros and cons to testing additional headline options. On the negative side, you need to actually write more headlines, and I can sympathize with the creative burden. (Unfortunately, taking the lazy way out by tweaking a word or rearranging a sentence tends to have less impact than highlighting different viewpoints or angles.) Adding another (average) headline will also often hurt convergence time, because you need additional trials to explore the added headline.


But there is a clear benefit to testing additional headlines as well. The table above shows the amount by which the winning headline exceeds an average headline, by number of headlines tested. The winning headline in a five-variant experiment typically has more than a 50% higher CTR than the average headline, whereas you may only see a 23% benefit in a standard A/B test. This increasing divergence between winner and mean follows directly from the variance in the CTR of each headline. Another consideration is how often the original headline (Variant A) ends up as the winner. Admittedly, this result depends fairly strongly on how organizations come up with headlines; but even in the A/B case, publishers have been significantly rewarded for using the additional variant. In some extreme cases, we have seen publishers use as many as 17 (!) different variants in a single headline test, successfully converging in fewer than 10,000 trials (!!).

Testing the Efficacy of Common Headline Themes

We wanted to take a closer look at the characteristics that make up a good headline. Some of the essence of a great headline, such as Vincent A. Musetto’s “Headless Body in Topless Bar,” can never be fully captured in categorical variables, but there are tropes commonly used to capture audience attention. With the help of headline guides, other headline studies, and raw expertise, we compiled a list of 12 commonly cited themes; a rough code sketch of a few of these checks follows the list:

  1. Does the headline contain a question?
  2. Does the headline have a number?
  3. Does the headline use adjectives?
  4. Does the headline use question words (e.g., ‘who’, ‘what’, ‘where’, ‘why’)?
  5. Does the headline use demonstrative adjectives (e.g., ‘this’, ‘these’, ‘that’, ‘those’)?
  6. Does the headline use articles (e.g., ‘a’, ‘an’, ‘the’)?
  7. Is the headline in the 90th percentile of length (73 characters or greater)?
  8. Is the headline in the 10th percentile of length (32 characters or fewer)?
  9. Does the headline contain the name of a person?
  10. Does the headline contain any named entity (e.g., person, place, organization)?
  11. Does the headline use positive superlatives (‘best’, ‘always’)?
  12. Does the headline use negative superlatives (‘worst’, ‘never’)?
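
As promised, here is a rough sketch of how a few of these checks might be implemented. The regex heuristics below only approximate themes 1, 2, 5, 7, and 8; entity and part-of-speech checks (themes 3, 4, 9, and 10) require a proper NLP library:

    import re

    # Length thresholds come from the percentile figures quoted above.
    DEMONSTRATIVES = re.compile(r"\b(this|these|that|those)\b", re.IGNORECASE)

    def headline_features(headline):
        return {
            "question": headline.strip().endswith("?"),              # theme 1
            "number": bool(re.search(r"\d", headline)),              # theme 2
            "demonstrative": bool(DEMONSTRATIVES.search(headline)),  # theme 5
            "long": len(headline) >= 73,                             # theme 7
            "short": len(headline) <= 32,                            # theme 8
        }

    print(headline_features("The Science of Why No One Agrees on the Color of This Dress"))
    # {'question': False, 'number': False, 'demonstrative': True,
    #  'long': False, 'short': False}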

For this exercise, an NLP toolkit handled the natural language processing tasks, including entity recognition and part-of-speech tagging, for English-language sites.

There are a number of statistical challenges in sorting out which characteristics have real significance and which are spurious outliers. First, when making multiple significance tests, it is important to control the familywise error rate, via a Bonferroni correction, or else you greatly increase the likelihood of spurious results. Second, there are several confounding variables to consider. Raw CTR is appealing for its simplicity, but it could well be that short headlines, for instance, are much more likely to be tested in leaderboard spots at the top of busy homepages, so despite being inferior to other headlines in the same spot, their CTR ends up higher. This is a form of Simpson’s Paradox.
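
For the curious, here is a minimal sketch of the Bonferroni step using statsmodels; the twelve p-values are made up for illustration:

    from statsmodels.stats.multitest import multipletests

    # Invented p-values, one per theme above; only the method is real.
    p_values = [0.001, 0.020, 0.004, 0.300, 0.048, 0.700,
                0.002, 0.150, 0.600, 0.090, 0.250, 0.030]

    reject, corrected, _, _ = multipletests(p_values, alpha=0.05,
                                            method="bonferroni")
    for theme, (p, significant) in enumerate(zip(corrected, reject), start=1):
        print("theme %d: corrected p = %.3f, significant = %s"
              % (theme, p, significant))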

We will look at two alternate metrics of headline success. The first is scaled CTR, where instead of comparing CTRs globally, we look at the ratios of CTR of a given headline to the CTR of the headline that won the experiment. With this metric, the average scaled CTR of a headline is close to 77% in this data set, so we use that 77% as a benchmark to see whether a particular property has a beneficial effect.

The second metric is winner propensity. We look at the set of experiments that compare headlines with a given property against headlines without it, and calculate how often we would expect headlines with that property to win if winners were chosen at random. We then check whether headlines with the property win more often than that.
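
To make both metrics concrete, here is a toy computation on an invented experiment log; the ~77% benchmark is the figure quoted above, and every data value below is hypothetical:

    import pandas as pd

    # Invented experiment log: two experiments, each mixing headlines with
    # and without some property (e.g., a demonstrative adjective).
    tests = pd.DataFrame({
        "experiment":   [1, 1, 2, 2, 2],
        "has_property": [True, False, True, False, False],
        "ctr":          [0.050, 0.040, 0.020, 0.035, 0.030],
    })

    # Scaled CTR: each headline's CTR relative to its experiment's winner;
    # compare the property group's mean against the ~77% benchmark above.
    winner_ctr = tests.groupby("experiment")["ctr"].transform("max")
    tests["scaled_ctr"] = tests["ctr"] / winner_ctr
    print(tests.groupby("has_property")["scaled_ctr"].mean())

    # Winner propensity: observed win rate for property-bearing headlines
    # versus the rate expected if each experiment's winner were random.
    tests["won"] = tests["ctr"] == winner_ctr
    observed = tests.loc[tests["has_property"], "won"].mean()
    expected = tests.groupby("experiment")["has_property"].mean().mean()
    print("observed %.2f vs expected %.2f" % (observed, expected))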


The results were somewhat mixed. Only long headlines and headlines with demonstrative adjectives show significantly higher scaled CTR, and only headlines with demonstrative adjectives and headlines with numbers show a higher propensity to be declared winner in a given headline test. The presence of articles actually significantly detracts from scaled CTR.

It’s worth discussing the one unambiguous result in a bit more detail. Demonstrative adjectives can actually be used in multiple ways in a headline. You can use them to create intrigue in clickbait-ish fashion: “These simple tricks will leave you speechless” or “You’ve never tasted anything like this.” There are also quite a few examples in our dataset of using demonstrative adjectives as a temporal specifier: “GOP Debate this evening,” for instance. In the future, as we collect more data, we can think about drilling down more granularly into specific constructions.

Perhaps more interesting than the positive results is the lack of significance among other factors that have been cited to be useful in capturing the attention of an audience. “Use terse, punchy headlines”; “Ask questions”; “Name drop.” None of these properties show much predictive power in the general case.

“That’s right, writers: We’ve proven that ‘5 Ways To Write The Best Headline Ever’ isn’t actually that effective.”

Final Thoughts
So where does that leave us? If you want to be an effective headline writer, maybe there is no substitute for creativity and attention. Watch for patterns in the headlines that end up floating to the top. Take the time to discuss what worked and what didn’t. Avoid the formulas and clichés. Be liberal with your use of headline testing, so that you can harness feedback from your readers in real time.

If there are any other ideas that you would like us to take a look at in the data, especially as our repository of tests grows, please don’t hesitate to reach out.

In the meantime, here’s a great resource for headline testing optimization.

If you had to describe five important events that were happening in the world right now, what would they be? How would you even go about answering that question?

To start, you might visit the homepage of your favorite news site, aggregator, or publisher. But just one site won’t have everything you’re looking for — maybe you want different takes on today’s news. What you might do is collate articles across several sites, see which news events multiple publishers are reporting on, and look at different perspectives on each story.

For our Hackweek project, backend engineer Anastasis Germanidis and I developed a process to identify these trending, important, global news events automatically and in real time, using publicly available data. With a few machine learning algorithms, we can group articles across different sites by news event and output a list of important news events being reported right now, each represented by a set of articles providing different angles on the story.

I’ll first show our results, and then talk about the data science that makes this work. Below, I’ve run our data science pipeline on the home pages of major U.S. publishers, including the New York Times, the Washington Post, and Wall Street Journal, scraping data from the afternoon of October 13. To be clear, this pipeline does not use any data from Chartbeat’s analytics products – everything we use comes from a web scraper, which sees what any reader on the web would see.

Our project captures the important events of the day through algorithms and provides multiple articles for each news story.

Results: October 13, 2015

News Event 1: Violence in Israel

News Event 2: Kansas City Fire

News Event 3: Democratic Debate

So How Does it Work?

First, we need a dataset of articles to work with. We start by using PhantomJS, an open-source headless browser, to scrape the homepages of several major U.S. publishers, including the New York Times, Washington Post, and Wall Street Journal. We want articles that homepage editors think are important to today’s news, so for each page we look at all article links above the fold on a desktop screen and pick the top ten articles by link size.
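
One way to sketch this step in Python is with the (2015-era) Selenium PhantomJS driver; the fold height, window size, and URL below are assumptions for illustration, not our production settings:

    from selenium import webdriver

    FOLD_PX = 900  # assumed desktop fold height

    driver = webdriver.PhantomJS()
    driver.set_window_size(1280, FOLD_PX)
    driver.get("http://www.nytimes.com")

    candidates = []
    for link in driver.find_elements_by_tag_name("a"):
        href = link.get_attribute("href")
        location, size = link.location, link.size
        above_fold = location["y"] + size["height"] <= FOLD_PX
        if href and above_fold:
            candidates.append((size["width"] * size["height"], href))

    # Rank above-the-fold links by rendered area and keep the ten largest.
    top_ten = [href for area, href in sorted(candidates, reverse=True)[:10]]
    driver.quit()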

We feed our article links to Python-Goose, a Python library which extracts the content of an article given its URL. Now we have the title, description, and content of ten articles on each homepage we started with.
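
A minimal sketch of the extraction step, with a placeholder URL:

    from goose import Goose

    g = Goose()
    article = g.extract(url="http://example.com/some-news-story")

    print(article.title)                # headline
    print(article.meta_description)     # description
    print(article.cleaned_text[:200])   # first part of the body text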

We want to organize our dataset of scraped articles into news events. We start by preprocessing our article text with two steps: 1) named entity extraction and 2) tf-idf vectorization. Let me explain:

Named entity extraction

This involves identifying words or phrases that correspond to names of things. We use the MITIE Python library, which identifies the names of people, organizations, and locations, and classifies each entity it finds into one of these three categories. For our purposes, we’re less concerned with the classification of each named entity than with the identification of these words and phrases. We extract all instances of named entities in each article to use in the next step of our pipeline.

Because news events can almost always be uniquely identified by the names of the people, organizations, and locations involved, named entity extraction is an effective way of filtering out relatively unimportant terms while retaining important information — think of it as an extension of stop-word removal.
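
A minimal sketch of that step with MITIE’s Python bindings (2015-era Python 2); the model path assumes the pre-trained English model has been downloaded separately, and the sentence is invented:

    from mitie import named_entity_extractor, tokenize

    ner = named_entity_extractor("MITIE-models/english/ner_model.dat")
    tokens = tokenize("Hillary Clinton and Bernie Sanders debated in Las Vegas.")

    for entity in ner.extract_entities(tokens):
        token_range, tag = entity[0], entity[1]
        text = " ".join(tokens[i] for i in token_range)
        print(tag, text)  # e.g., PERSON Hillary Clinton / LOCATION Las Vegas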

tf-idf Vectorizer

Scikit-learn’s tf-idf vectorizer transforms our list of named entities into a numerical vector for each article, which lets us cluster articles with standard clustering algorithms. tf-idf stands for term frequency-inverse document frequency. In this case, term frequency is the number of times a named entity appears in an article divided by the total number of entities in the article. Document frequency is the fraction of articles in our dataset that contain a particular named entity. For a given entity and article, the tf-idf statistic is the term frequency divided by the document frequency (standard implementations, including scikit-learn’s, apply a logarithm to the inverse document frequency).

Roughly speaking, tf-idf gives a higher weight to entities that appear frequently in the article but less frequently in other articles.

Each dimension of an article’s tf-idf vector represents the tf-idf statistic for a particular word in our vocabulary. In this pipeline, our vocabulary contains all entities that have appeared at least once in our article dataset.
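
A minimal sketch of the vectorization step, assuming each article’s entities have been joined into a single string (the entity names are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer

    article_entities = [
        "Clinton Sanders Las_Vegas CNN",     # debate article
        "Clinton Sanders Democratic_Party",  # another debate article
        "Israel Jerusalem Netanyahu",        # Mideast coverage
    ]

    vectorizer = TfidfVectorizer()
    tfidf_matrix = vectorizer.fit_transform(article_entities)
    print(tfidf_matrix.shape)  # (3 articles, size of entity vocabulary)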

We cluster our tf-idf article vectors with an algorithm called spectral clustering, again via scikit-learn. Spectral clustering consists of three steps: first, we use the similarity of tf-idf vectors between pairs of articles to construct a similarity matrix of our data. Next, we perform dimensionality reduction on this matrix using an eigenvalue decomposition. Finally, we run the k-means algorithm on the low-dimensional matrix to obtain our article clusters. We’ve found that for a dataset of 60 articles from six publishers, clustering into seven or eight groups works well.
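
A minimal sketch of the clustering step, using pairwise cosine similarities as a precomputed affinity; the tf-idf matrix below is random placeholder data shaped like the 60-article dataset, and the cluster count is illustrative:

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import cosine_similarity

    # In the real pipeline this matrix comes from the tf-idf vectorizer.
    tfidf_matrix = np.random.RandomState(0).rand(60, 500)

    # Pairwise cosine similarities serve as the precomputed affinity.
    similarity = cosine_similarity(tfidf_matrix)

    clusterer = SpectralClustering(n_clusters=8, affinity="precomputed")
    labels = clusterer.fit_predict(similarity)  # one cluster id per article
    print(labels[:10])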

Why didn’t we use a probabilistic topic model such as Latent Dirichlet Allocation? We found that topic models such as LDA give you clusters that roughly correspond to sections, such as technology, science, and politics, and not individual news events. This is perhaps because these algorithms allow for an article to belong to multiple topics instead of forcing a hard classification. This doesn’t make sense if topics are to correspond to news events – we know that an article will rarely report on more than one news story.

Here’s the full pipeline at a glance: scrape homepages with PhantomJS, extract article text with Python-Goose, extract named entities with MITIE, compute tf-idf vectors, and cluster articles into news events with spectral clustering.

What’s Next?

Recently, Twitter released a product called Moments, which organizes tweets into events using a team of human curators. We want to use our automated process to do the same with news articles, and we’re working towards a web application that displays our news events in real time.

By using algorithms to evaluate the importance of news stories, we give you an easy way to figure out what’s happening in the world right now — without having to organize articles yourself or even wait for human curators.