You’ll Never Guess How Chartbeat’s Data Scientists Came Up With the Single Greatest Headline

Writing catchy headlines that capture the attention of your audience is, without question, an art form. As this headline demonstrates, blindly following guidelines can lead to copy that sounds cliché at best and actively off-putting at worst. Still, effective headline writing can make quite a difference in the success of your content — after all, readers have to get to the actual articles somehow — so it can be expensive to get wrong.

Chartbeat Headline Testing enables content creators and editors to become better headline writers. By testing copy in real time, newsrooms can challenge assumptions about what kinds of headline constructions work well and which don’t.

Accordingly, we would like to turn that introspective lens first on some of our own recommendations for how best to use our tool, and then on some commonly cited “tips and tricks” for getting the most out of your headlines. One caveat: while we have the luxury of being able to plot general trends in a rich dataset of over 100 publishers and almost 10,000 headline tests, each publisher and audience is different. We encourage you to take a look at your own data and put some of our findings to the test (literally!) to see what works best for you.

Verifying Best Practices for Engaged Headline Testing

To help our clients get started with our tool, we often give them a list of best practices. Here are a few examples:

  • Test in Higher Traffic Positions
  • Don’t be Afraid to Test Multiple Variants
  • Test Distinct Differences

We like to encourage users to conduct headline tests that converge to a winner quickly, so that winning headlines spend the most possible time with the largest possible audience.

This raises the question of what “converging to a winner quickly” means. To answer it, I would like to appeal to our data for an overall view. The graph below shows a histogram of experiments by the number of headline trials — that is, the number of unique visitors who see one of the tested headlines:

[Figure: histogram of headline experiments by number of trials]

About half of conclusive experiments (those that determine a winner) need fewer than 2,500 trials to converge. More than 85% need fewer than 10,000 trials. That said, identifying an average convergence time for your site will depend on the amount of traffic you have and how “evergreen” your content is.

For the sake of example, imagine a publisher that gets 100 trials per minute and wants to see experiments finish within 25 minutes. In that window they accumulate 25 * 100 = 2,500 trials, so the statistics above imply that only about half of their experiments will finish in time.
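Arithmetic like this is easy to script. Here is a minimal sketch that converts a traffic rate and time budget into a trial budget, with the convergence quantiles quoted above hard-coded as assumptions (they are a coarse, piecewise reading of the histogram, not a fitted curve):

```python
# Back-of-envelope convergence check for a hypothetical publisher.
# The quantiles mirror the post: ~50% of conclusive tests finish
# within 2,500 trials, ~85% within 10,000.

def trials_in_budget(trials_per_minute, minutes):
    """Trials accumulated during the allotted testing window."""
    return trials_per_minute * minutes

def approx_fraction_converged(trials):
    """Coarse, piecewise approximation of the convergence distribution."""
    if trials >= 10_000:
        return 0.85
    if trials >= 2_500:
        return 0.50
    return 0.0

budget = trials_in_budget(trials_per_minute=100, minutes=25)
print(budget)                             # 2500
print(approx_fraction_converged(budget))  # 0.5
```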

Click-Through Rate
Now, let’s take a look at how we can leverage higher traffic (click-through rate) positions to optimize for convergence time. The following graph is a density plot of number of trials needed for convergence against the CTR of the winning headline:

[Figure: density plot of trials needed for convergence vs. CTR of the winning headline]

While there is a fair amount of noise in the plot, the main indication is that the number of trials needed is roughly inversely proportional to the CTR of the slot. What does this mean in practice? If a publisher tests in a prominent headline position getting 8% CTR, the test will converge in roughly a quarter of the trials needed in a below-the-fold position getting 2% CTR. That brings our convergence rate (within 25 minutes) from 50% to closer to 90%. Pretty astounding.
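The inverse relationship can be sketched with a single proportionality constant. The value of `k` below is purely illustrative, not fitted to our data; only the ratio between slots matters:

```python
# If trials-to-convergence is roughly inversely proportional to slot CTR,
# then trials_needed(ctr) ~= k / ctr for some constant k.

def trials_needed(ctr, k=200.0):
    """Approximate trials to convergence for a slot with the given CTR.
    k is an illustrative constant, not estimated from real tests."""
    return k / ctr

high = trials_needed(0.08)  # prominent position, 8% CTR
low = trials_needed(0.02)   # below the fold, 2% CTR
print(round(low / high))    # 4 -> the prominent slot needs ~4x fewer trials
```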

Number of Headline Variants
Finally, let’s graph the number of headline variants in each experiment:

[Figure: histogram of experiments by number of headline variants]

Right now, more than two-thirds of our headline tests are basic A/B tests, meaning only 2 variants. There are clear pros and cons to testing additional headline options. On the negative side, you need to actually write more headlines, and I can sympathize with the creative burden. (Unfortunately, taking the lazy way out by tweaking a word or rearranging a sentence tends to have less impact than highlighting different viewpoints or angles.) Adding an additional (average) headline will also often hurt convergence time, because you need extra trials to explore the added headline.

[Table: uplift of the winning headline’s CTR over the average headline, by number of variants tested]

But, as the table above demonstrates, there is also a clear benefit to testing additional headlines. The table shows the amount by which the winning headline exceeds an average headline, broken down by the number of headlines tested. The winning headline in a five-variant experiment typically has more than a 50% higher CTR than the average headline, whereas you may only see a 23% benefit for a standard A/B test. This widening gap between winner and mean follows directly from the variance in headline CTRs: the more draws you take from that distribution, the further the maximum pulls away from the average. Another consideration is how often the original headline (Variant A) ends up as the winner. Admittedly, that result depends fairly strongly on how organizations come up with their headlines; but even in the A/B case, publishers have been significantly rewarded for writing the additional variant. In some extreme cases, we have seen publishers use as many as 17 (!) different variants in a single headline test, successfully converging in fewer than 10,000 trials (!!).
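Because the winner is simply the maximum of several CTR draws, a quick Monte Carlo sketch shows how the winner-to-mean gap widens with more variants. The uniform CTR distribution here is purely illustrative, not our empirical one, so the exact numbers are not meaningful — only the trend is:

```python
import random

def expected_winner_uplift(n_variants, n_sims=20_000, seed=0):
    """Monte Carlo estimate of winner CTR / mean CTR when each headline's
    CTR is drawn independently from an illustrative uniform on [1%, 5%]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        ctrs = [rng.uniform(0.01, 0.05) for _ in range(n_variants)]
        total += max(ctrs) / (sum(ctrs) / n_variants)
    return total / n_sims

# The winner's edge over the average grows with the number of variants:
print(round(expected_winner_uplift(2), 2))
print(round(expected_winner_uplift(5), 2))
```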

Testing the Efficacy of Common Headline Themes

We wanted to take a closer look at the characteristics that make up a good headline. Some of the essence of a great headline, such as Vincent A. Musetto’s “Headless Body in Topless Bar,” can never be fully captured in categorical variables; but there are tropes commonly used to capture audience attention. With the help of headline guides, other headline studies, and raw expertise, we compiled a list of 12 commonly cited themes:


  1. Does the headline contain a question?
  2. Does the headline have a number?
  3. Does the headline use adjectives?
  4. Does the headline use question words (e.g., ‘who’, ‘what’, ‘where’, ‘why’)?
  5. Does the headline use demonstrative adjectives (e.g., ‘this’, ‘these’, ‘that’, ‘those’)?
  6. Does the headline use articles (e.g., ‘a’, ‘an’, ‘the’)?
  7. Is the headline in the 90th percentile of length (73 characters or greater)?
  8. Is the headline in the 10th percentile of length (32 characters or fewer)?
  9. Does the headline contain the name of a person?
  10. Does the headline contain any named entity (e.g., person, place, organization)?
  11. Does the headline use positive superlatives (‘best’, ‘always’)?
  12. Does the headline use negative superlatives (‘worst’, ‘never’)?


For this exercise, we used Spacy.io for the natural language processing tasks, including entity recognition and part-of-speech tagging for English-language sites.
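As a rough illustration of the tagging step, here is a sketch that flags the surface-level themes from the list above using plain string checks. The real analysis used Spacy.io for the harder tasks (named entities, parts of speech), which this sketch omits, and the word lists here are assumptions extending the examples given in the list:

```python
import re

# Surface-level theme detection. Word lists are illustrative; the
# length cutoffs (73 and 32 characters) come from the list above.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how"}
DEMONSTRATIVES = {"this", "these", "that", "those"}
ARTICLES = {"a", "an", "the"}
POSITIVE_SUPERLATIVES = {"best", "always"}
NEGATIVE_SUPERLATIVES = {"worst", "never"}

def headline_features(headline):
    """Flag the surface-detectable themes for a single headline."""
    words = set(re.findall(r"[a-z']+", headline.lower()))
    return {
        "question": headline.rstrip().endswith("?"),
        "number": bool(re.search(r"\d", headline)),
        "question_word": bool(words & QUESTION_WORDS),
        "demonstrative": bool(words & DEMONSTRATIVES),
        "article": bool(words & ARTICLES),
        "long": len(headline) >= 73,   # 90th percentile of length
        "short": len(headline) <= 32,  # 10th percentile of length
        "positive_superlative": bool(words & POSITIVE_SUPERLATIVES),
        "negative_superlative": bool(words & NEGATIVE_SUPERLATIVES),
    }

print(headline_features("These 5 Tricks Will Change How You Write Headlines"))
```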

There are a number of statistical challenges in sorting out which characteristics have real significance and which are spurious. First, when running multiple significance tests, it is important to control the familywise error rate (for example, via a Bonferroni correction); otherwise, you greatly increase the likelihood of spurious results. Second, there are several confounding variables to consider. Raw CTR is appealing for its simplicity, but it could very well be that short headlines, for instance, are much more likely to be tested in leaderboard spots at the top of busy homepages, so their CTR ends up higher despite their being inferior to other headlines in the same spot. This is a form of Simpson’s Paradox.
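To make the multiple-testing point concrete: with m independent tests at significance level alpha, the chance of at least one false positive is 1 − (1 − alpha)^m, and the Bonferroni correction simply judges each test at alpha / m. A minimal sketch (the m = 12 below just mirrors the 12 themes; the exact number of comparisons run in the study is not stated here):

```python
def familywise_error(alpha, m):
    """Chance of at least one false positive across m independent tests."""
    return 1 - (1 - alpha) ** m

def bonferroni_threshold(alpha, m):
    """Per-test significance level that caps the familywise rate at alpha."""
    return alpha / m

# Naively testing 12 themes at the 5% level risks spurious "hits"
# nearly half the time; Bonferroni tightens each test instead.
print(round(familywise_error(0.05, 12), 2))      # 0.46
print(round(bonferroni_threshold(0.05, 12), 4))  # 0.0042
```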

We will instead look at two alternative metrics of headline success. The first is scaled CTR: rather than comparing CTRs globally, we look at the ratio of a given headline’s CTR to the CTR of the headline that won its experiment. The average scaled CTR in this dataset is close to 77%, so we use that 77% as a benchmark to see whether a particular property has a beneficial effect.

The second metric is winner propensity. We look at the set of experiments that pit headlines with a given property against headlines without it, and calculate how often we would expect headlines with that property to win if each experiment’s winner were chosen at random. We then see whether headlines with the property win more often than that.
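A small sketch of both metrics on made-up experiment records. The field names (`headlines`, `won`, and the property flag) are hypothetical and not our actual schema:

```python
def scaled_ctr(headline_ctr, winner_ctr):
    """CTR of a headline relative to the winner of its own experiment."""
    return headline_ctr / winner_ctr

def winner_propensity(experiments, prop):
    """Observed wins for headlines with `prop`, alongside the wins
    expected if each experiment's winner were chosen at random.
    Only experiments mixing both kinds of headline are counted."""
    observed = expected = 0.0
    for exp in experiments:
        with_prop = [h for h in exp["headlines"] if h[prop]]
        if not with_prop or len(with_prop) == len(exp["headlines"]):
            continue  # skip experiments where the property can't matter
        expected += len(with_prop) / len(exp["headlines"])
        observed += sum(1 for h in with_prop if h["won"])
    return observed, expected

# Two toy A/B experiments; the "demonstrative" headline wins one of them,
# exactly what random winner selection would predict (1 expected win).
experiments = [
    {"headlines": [{"demonstrative": True, "won": True},
                   {"demonstrative": False, "won": False}]},
    {"headlines": [{"demonstrative": True, "won": False},
                   {"demonstrative": False, "won": True}]},
]
print(winner_propensity(experiments, "demonstrative"))  # (1.0, 1.0)
print(scaled_ctr(0.02, 0.04))  # 0.5
```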

[Table: scaled CTR and winner-propensity results for the 12 headline themes]

Results
The results were somewhat mixed. Only long headlines and headlines with demonstrative adjectives show significantly higher scaled CTR, and only headlines with demonstrative adjectives and numbers show higher propensity of being declared winner in a given headline test. The presence of articles actually significantly detracts from scaled CTR.

It’s worth discussing the one unambiguous result in a bit more detail. Demonstrative adjectives can actually be used in multiple ways in a headline. You can use them to create intrigue in clickbait-ish fashion: “These simple tricks will leave you speechless” or “You’ve never tasted anything like this.” There are also quite a few examples in our dataset of using demonstrative adjectives as a temporal specifier: “GOP Debate this evening,” for instance. In the future, as we collect more data, we can think about drilling down more granularly into specific constructions.

Perhaps more interesting than the positive results is the lack of significance among other factors that have been cited to be useful in capturing the attention of an audience. “Use terse, punchy headlines”; “Ask questions”; “Name drop.” None of these properties show much predictive power in the general case.

“That’s right, writers: We’ve proven that ‘5 Ways To Write The Best Headline Ever’ isn’t actually that effective.”

Final Thoughts
So where does that leave us? If you want to be an effective headline writer, maybe there is no substitute for creativity and attention. Watch for patterns in the headlines that end up floating to the top. Take the time to discuss what worked and what didn’t. Avoid the formulas and cliches. Be liberal with your use of headline testing, so that you can harness feedback from your readers in real time.

If there are any other ideas that you would like us to take a look at in the data, especially as our repository of tests grows, please don’t hesitate to reach out.

In the meantime, here’s a great resource for headline testing optimization.

