Does headline testing lose value over time? Our data behind its lasting impact on engagement

Headline experimentation has become an increasingly popular tool among our global partners. Some of the largest publishers in the world deem it essential to homepage optimization — we’ve seen websites running upwards of 1,000 tests per week.

Headline testing drives value in two ways. From a quantitative perspective, it directly lifts engagement for each story that’s tested. Informed by that data, editorial teams can then learn how to write more engaging headlines.

With that second use case in mind, we're regularly asked: Does the value of headline testing decline over time? One might hypothesize that, after a few months of running headline tests, editors gain enough intuition to know which headlines will perform best. Continued testing would then matter less and less, making headline testing a depreciating asset.

Our research suggests the opposite: the value of headline experimentation increases over time, and testing continues to steadily drive engagement. More on those findings below.


Analyzing headline testing quality over time

So, does the value of headline testing truly depreciate? To investigate that question, we looked at the dataset of over one million tests that we’ve run since the product’s inception.

First, we wanted to see whether the lift of tests changes over time. To quantify lift, we look at two metrics:

1. The fraction of non-original headlines that end up being the most engaging

2. The average lift in engagement for the winner

Using both metrics, we found that lift is fairly constant over time. When a site first starts using Headline Testing, about 65% of tests have a non-original winner, and an average test has a roughly 45% lift.
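As a rough sketch (not Chartbeat's actual pipeline), the two metrics can be computed from a set of test records like this, where each record notes whether a non-original variant won and the winner's engagement lift. The records below are illustrative, not real data:

```python
# Hypothetical test records: (non_original_won, winner_lift) pairs.
# Field layout and values are illustrative, not Chartbeat's schema.
tests = [
    (True, 0.62),   # an alternate headline won, with a 62% lift
    (False, 0.30),  # the original headline won
    (True, 0.43),   # an alternate headline won, with a 43% lift
]

# Metric 1: fraction of tests where a non-original headline won.
non_original_rate = sum(won for won, _ in tests) / len(tests)

# Metric 2: average engagement lift of the winning headline.
average_lift = sum(lift for _, lift in tests) / len(tests)

print(f"non-original win rate: {non_original_rate:.0%}")
print(f"average winner lift:   {average_lift:.0%}")
```

Tracking how both numbers move as a site accumulates tests is what lets us ask whether lift decays.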

[Figure: rate of non-original winning headlines over time]

That’s what we’d expect after introducing a new culture of experimentation around headlines. Yet, two years later, those rates are the same, as we see in the graph below.

[Figure: average lift of headline tests over time]

The evidence is clear: testing continues to drive engagement. Yes, editors are getting better at writing original headlines over time, but they're also getting better at writing alternate headlines!

However, we see that the story changes when we consider testing activity.

Building a culture of headline experimentation

We’ve found that the most successful Headline Testing users are those who build testing into their culture, with Slack channels and team meetings dedicated to brainstorming headline ideas for important stories. They’ve all told us a similar story: the more headline tests we run, the stronger our culture of experimentation becomes.

Building culture takes time, and the data bears that out. On average, the number of people running headline tests for a given site grows by about 44% during its first year of testing and another 49% in its second year, as we see below.

[Figure: average number of users running headline tests over time]
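Those two growth rates compound. Assuming they apply sequentially to the same site, a quick calculation shows a site ends its second year with roughly 2.1x the number of people running tests:

```python
# Compounding the reported year-over-year growth in active testers.
year1_growth = 0.44  # +44% in the first year
year2_growth = 0.49  # +49% in the second year

multiplier = (1 + year1_growth) * (1 + year2_growth)
print(f"testers after two years: {multiplier:.2f}x the starting count")
```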

What’s more, as the number of people experimenting with headlines increases, it makes sense that the number of tests would grow over time. But what we’re actually seeing is a significant jump in the number of tests being run: a 411% increase over a site’s first two years using the tool, to be exact.

[Figure: average weekly headline tests over time]

If we combine that data point with the fact that the lift of tests stays constant, we’re left with an interesting finding: after two years, the average publisher is driving more than 5x the engagement with headline testing compared to when they began using the tool.
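The 5x figure follows from simple arithmetic, under the assumption that the engagement driven by testing scales with test volume times win rate times average lift. A 411% increase in tests means about 5.1x as many tests per week, and with win rate and lift holding steady, the total engagement gain scales by the same factor:

```python
# Back-of-the-envelope check, assuming engagement gain from testing
# scales as (tests per week) x (non-original win rate) x (average lift).
baseline_tests = 1.0                                  # normalized weekly volume at launch
tests_after_two_years = baseline_tests * (1 + 4.11)   # a 411% increase

win_rate, avg_lift = 0.65, 0.45   # roughly constant over time, per the data

gain_start = baseline_tests * win_rate * avg_lift
gain_later = tests_after_two_years * win_rate * avg_lift

print(f"engagement gain multiplier: {gain_later / gain_start:.2f}x")
```

Because the per-test lift cancels out, the multiplier is driven entirely by test volume, which is why constant lift plus more tests yields more than 5x the total engagement.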

The lasting effects of headline experimentation: Our takeaways

The question that we posed at the beginning of this article, “Does the value of headline testing decline over time?”, is significant, particularly for global publishers that need to ensure they’re allocating time to the optimizations with the highest reader impact. We hope our analysis has shown that the answer is no: experimenting with headlines delivers concrete, lasting benefits.

A few other key takeaways:

1. Our analysis suggests that headline testing continues to drive engagement. Not only are editors getting better at writing original headlines over time, their ability to create engaging variants is improving as well.

2. Consistency is key. The value of headline testing doesn’t wane with an increase in tests. We’ve found the opposite — it’s actually driving more engagement over time.

3. We also see a link between a culture of headline testing and sustained engagement. The more tests teams encourage, analyze, and iterate on, the stronger the results over time.
