
Has that paywalled story done well?

Published May 10, 2022
Updated Dec 27, 2022

This post was originally published at next.timesofindia.com.


Has that story done well? Can it do better? Are there topics people prefer to read more than others? Should we commission more stories on those topics? Will those do well? These are among the most common questions we receive from our editorial teams.

Answering these questions for TOI, our advertisement-led news product, is pretty straightforward: it mostly boils down to one metric, pageviews. But this wasn't the case with TOI Plus, our subscription news product.

Hence, a few months ago, we started work on Signals, an editorial analytics and recommendations solution for TOI Plus. In this post, we discuss how we built our first method for tracking the performance of paywalled stories.

Complication #1 — Dealing with noisy data

Traffic to a news site varies greatly based on what's in the news cycle. The variation increases further when marketing or audience engagement campaigns are running. Given that, how do we benchmark whether a particular story in a given news cycle is performing well or poorly?

For example, 50k users on a story during a peak news cycle would be unremarkable, but during a lull, the same 50k users would make the story a stellar outlier.

The chart simulates the noise that is commonly observed in daily traffic.

To reduce this noise, we calculate a 14-day moving average, which we treat as a typical news-cycle period. This gives us a relatively stable baseline for our calculations.

Once we have this, we calculate the standard deviation over the same window.

We then compute the high threshold as the moving average plus one standard deviation, and the low threshold as the moving average minus one standard deviation.
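A minimal sketch of this smoothing step, assuming daily traffic sits in a pandas DataFrame with a `users` column (the column and function names are illustrative, not the actual Signals implementation):

```python
import pandas as pd

def compute_thresholds(daily: pd.DataFrame, window: int = 14) -> pd.DataFrame:
    """Add moving-average and threshold columns to a daily-traffic frame."""
    out = daily.copy()
    # 14-day moving average: our proxy for a typical news cycle.
    out["moving_avg"] = out["users"].rolling(window).mean()
    # Standard deviation over the same 14-day window.
    out["moving_std"] = out["users"].rolling(window).std()
    # The band between the two thresholds is "normal" traffic.
    out["high_threshold"] = out["moving_avg"] + out["moving_std"]
    out["low_threshold"] = out["moving_avg"] - out["moving_std"]
    return out
```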

Rating: Each story's performance is compared against the 14-day moving averages leading up to the story's publish date. Finally, we can categorize traffic to each story into three clear buckets (a short code sketch follows the list):

  • High: If the story’s traffic is above the high threshold
  • Medium: If the story’s traffic is between the high and low thresholds, i.e. the yellow band in the chart
  • Low: If the story’s traffic is below the low threshold, i.e. the red band in the chart
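Expressed as code, reusing the thresholds from the snippet above:

```python
def rate_story(story_traffic: float, high: float, low: float) -> str:
    """Bucket a story against the thresholds from the 14 days before its publish date."""
    if story_traffic > high:
        return "High"
    if story_traffic < low:
        return "Low"
    return "Medium"
```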

Typically, there are three major metric groups in a subscription news product.

  • Interest: Are audiences interested in the topic? To evaluate interest, we fetch from Google Analytics the number of non-subscriber users on a particular story. This is regarded as top-of-funnel traffic.
  • Conversion: Once interested, are audiences buying subscriptions after seeing the story behind the paywall? For conversion, we track the different SKUs (micropayments, annual subscriptions) bought from that story’s page.
  • Retention: Once subscribed, are subscribers reading and engaging with the story? For retention, we look at the number of subscribers who have read the story and the number of comments.

However, subscriptions might be a three-digit number while top-of-funnel traffic might be a six-digit number. Hence, we normalized each moving average into a percentile score, ranked between 1 and 100.

This made all of these diverse metrics comparable.
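One way to express that normalization, again as an illustrative pandas sketch rather than the production code:

```python
import pandas as pd

def to_percentile(scores: pd.Series) -> pd.Series:
    """Rank raw metric values as percentile scores between 1 and 100."""
    # rank(pct=True) maps values into (0, 1]; scaling to 1-100 makes a
    # three-digit subscription count comparable to six-digit traffic.
    return (scores.rank(pct=True) * 100).clip(lower=1).round()
```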

Today, we track retention with two metrics: reads and comments. Tomorrow, we can easily add more metrics, such as scroll depth, average time on page, and shares. As we do, it is important to keep our communication with the editorial team future-proof.

Hence, we abstracted these metrics into three composite metrics, one each for interest, conversion, and retention (a sketch follows the list):

  • Interest Performance is exactly the same as the top-of-funnel score.
  • Conversion Performance is a weighted average of all the SKU purchases.
  • Retention Performance is a weighted average of reads and comments.
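A sketch of how such composites could be computed; the actual weights Signals uses aren't published, so the ones below are placeholders:

```python
def conversion_performance(sku_scores: dict[str, float],
                           sku_weights: dict[str, float]) -> float:
    """Weighted average of per-SKU percentile scores (e.g. micropayments, annual)."""
    total = sum(sku_weights.values())
    return sum(sku_scores[sku] * w for sku, w in sku_weights.items()) / total

def retention_performance(reads: float, comments: float,
                          w_reads: float = 0.7, w_comments: float = 0.3) -> float:
    """Weighted average of read and comment percentile scores (placeholder weights)."""
    return reads * w_reads + comments * w_comments
```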

Going back to our original goal, we want to reduce everything down to just two actions: converting non-subscribers and retaining subscribers.

Hence, Net Conversion Score is a composite measure of Conversion Performance and Interest Performance.

Similarly, Net Retention Score is a composite measure of Retention Performance and Interest Performance.
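The post doesn't spell out how each pair of inputs is combined, so treat the following as one plausible reading, a simple weighted blend:

```python
def net_conversion_score(conversion_perf: float, interest_perf: float,
                         w: float = 0.5) -> float:
    """Blend Conversion Performance with Interest Performance (assumed 50/50 weight)."""
    return w * conversion_perf + (1 - w) * interest_perf

def net_retention_score(retention_perf: float, interest_perf: float,
                        w: float = 0.5) -> float:
    """Blend Retention Performance with Interest Performance (assumed 50/50 weight)."""
    return w * retention_perf + (1 - w) * interest_perf
```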


Impact

Operational Use

Feedback Loop: The Signals dashboard tells the editorial team each day which of their recently published stories did well.

Replugging Efforts: We extract updates from Google Analytics every four hours and strive to give the editorial team regular feedback on their replugging decisions. For example, the editorial team made a video of an evergreen TOI Plus story; within a day, the algorithm picked up the spike in the story’s performance.

Strategic

What to do more or less of: With this, we know which topics, authors, and styles of writing are the most conversion-worthy and which are retention-worthy.

Clarity about our audience’s behavior: We also learned that the topics that convert aren’t the topics people read after purchasing a subscription.

Similarly, there is no correlation between stories that convert well and stories that receive a lot of engagement via comments.