As we’re about to talk about simple metrics, let’s start with a simple question: what happens when you yield too much power to them?
Here’s Jostein Larsen Østring, Amedia’s VP for Editorial Development:
“I worked as a news editor in Nordlys – the largest newspaper in the northern part of Norway – and in 2014 and 2015 we were reporting less about local politics than before because we believed our readers weren’t interested – before we realized that we had been fooled by the metrics. Many of the stories that were most read did, by the way, not drive very much local engagement. They went viral, and were read by all others than those who lived in our local communities.”
Given the technological sophistication of the modern news publication, isn’t it bizarre that the journalism world (for the most part) still lives and dies by the verdicts handed down to it by simple metrics?
SINGLE METRICS ARE SIMPLE, AND THAT’S A PROBLEM
Imagine two articles. One’s a lighthearted piece about the President’s new dog. The other’s a serious examination of an emerging political crisis (no, the two are not related and indeed are entirely fictional).
The first is less than 100 words long, punctuated with photographs.
The second is well over 2,000 words, punctuated not by glitzy images of beautiful people (and animals), but by sober statistical analysis and graphs.
Even someone with little experience of editorial content can see that measuring the ‘success’ of these articles by a single – and identical – metric is problematic.
Look solely at clicks and the victor seems obvious. Examine shares and again, taken by volume alone, those canine antics are the story you’d send your co-workers or friends during the morning commute. And something like time spent on page is an unreliable measure of how those articles are being read: one demands approximately a minute of your time, the other at least ten times that.
This is obviously over-simplifying the issue, but the point’s still valid: worshipping at the altar of single metrics gives you a reading, yes, but it’s not a very meaningful one.
The net result of this approach? Articles which score highly against these scales are treated as the most successful, and editors then seek to replicate that formula.
Print publishers have long understood that variety of content, not homogeneity, is key to retaining reader interest and loyalty. Most papers dedicate their second or third page to less serious stories with a lighter tone (whether that manifests itself as topless women or political cartoons depends very much on the newspaper in question). There’s a rhythm to this layout that evidently works. Even in a single publication there’s variety both in tone and length, as well as of course subject matter: a printed newspaper is a finite and curated collection of writings.
Digital, though, is a different matter. Articles don’t necessarily come packaged in a wrapper together with other complementary stories – often they come to us as separate entities via platforms like Facebook, and now that we can assess the worth of an article on an individual basis, we do.
So yes, the balance has shifted and individual articles matter more. But the newsbrands which are reportedly doing well are the ones paying attention to the quality and consistency of the articles they publish whilst presenting a variety of subjects, lengths and formats. Crucially, they judge the success of those articles by appropriate measures, matched to the goals of each article.
If you’re obsessing about clicks and shares, it’s easy to lose that perspective, and if we remain beholden to the kind of real-time-metrics leaderboards that are now firmly ensconced in newsrooms everywhere, the race to the bottom starts to look more like a freefall, as Jon Wilks said on these pages recently.
DIFFERENT BEHAVIORS NEED DIFFERENT MEASUREMENTS
Single – and simple – metrics were designed to aid marketers and advertisers in their work. To this end they have a value. For editors, though? Well that’s another story. In surrendering editorial judgement to the authority of pageviews (or whichever single metric is currently in vogue), there’s a real risk – borne out in many newsrooms – that future content becomes dictated and driven by them.
Just as our presidential dog and political crisis non-stories illustrate how absurd it is to rank one against the other with the same measure, so too is it unwise to compare stories that serve different purposes. Different reading behaviors necessitate different measurement: just as a front-page article in a print paper is designed to attract attention and an investigative piece published within the newspaper’s supplement is designed to be read at leisure, so too should different digital stories be judged against different criteria.
Metrics are fallible: the answers received are only as good as the questions asked.
We spoke to Serbian sports publication Sportklub recently, and their experience is a case in point: results from simple metrics showed a certain degree of success, but upon closer inspection it was clear that this wasn’t the kind of success they sought:
“Once I had an opportunity to look via the Content Insights tool, I could see that their CPI was well below even 400 and most readers were spending no longer than five seconds in an article before leaving, which said to me that the content was too much like clickbait and not analytical enough. We want to produce a more serious kind of journalism, even though this is sports, so those kind of results aren’t at all in line with our aims.”
The solution is simple, even if the process is not: editorial analytics need to give the kind of information to editors that are useful and can help the entire editorial team improve their content – and the readers’ engagement with it. Blended metrics – which is when we start looking at the relationship between various different metrics – are much more illuminating than any single metric can ever be and have the added advantage of being able to be tailored to each section of a publication, where the needs and goals may be very different.
Let’s go back to the Presidential pup and the political hiccup, for a moment. If that first piece gains a million views whilst the second has only 10,000, the conclusion that a simple metrics analysis would draw is that the first is obviously more successful, right? If you look at the social actions generated by each, again you might find that the first one registers something like 100,000, and the second a mere 1,000. So number one’s the winner, right?
Well. Maybe. One more measure, just to drive the point home: let’s look at attention time. Do you remember the word count of each piece from earlier (and here we’re obviously hoping the attention time of this piece has been sufficient to find you still here…)? To recap, first article: 100 words. Second: 2,000. Right. Looking at attention time, you might find that our pup piece records an average of only twenty seconds. The latter piece, however, might register well over ten minutes.
Even just taking these three metrics together, we can start to get a much better understanding of how content is working on a site. We can see that the pup piece is ‘clickworthy’: people click on it, feel all warm inside, share it with other fans of small, furry puppies and then get on with their day. The other piece, though it generates considerably fewer views and fewer social actions, keeps a much higher percentage of its readers reading for longer and those ‘social actions’ may well be in the form of thoughtful comments – something which in itself is markedly different to hitting ‘share now’. Both pieces serve entirely different purposes and could quite realistically appear under the same masthead, just in different places within the publication. But to judge both by the same analytical framework? That simply makes no sense at all from an editorial perspective.
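As a rough illustration of blending, the three metrics above can be related to each other rather than ranked one at a time. The sketch below uses the hypothetical figures from our two example articles, and the assumed reading speed of 250 words per minute is ours, not an industry standard or a real scoring formula:

```python
# Hypothetical figures for the two example articles discussed above.
ARTICLES = {
    "presidential pup": {
        "words": 100, "views": 1_000_000,
        "social_actions": 100_000, "avg_attention_s": 20,
    },
    "political crisis": {
        "words": 2_000, "views": 10_000,
        "social_actions": 1_000, "avg_attention_s": 600,
    },
}

def blended_view(stats):
    """Relate single metrics to each other instead of ranking by any one."""
    # Assumed average reading speed: 250 words per minute.
    expected_read_s = stats["words"] / 250 * 60
    return {
        # What share of the people who clicked actually shared it?
        "share_rate": stats["social_actions"] / stats["views"],
        # How does attention time compare with the time the piece demands?
        "read_depth": stats["avg_attention_s"] / expected_read_s,
    }

for name, stats in ARTICLES.items():
    view = blended_view(stats)
    print(f"{name}: shared by {view['share_rate']:.0%} of readers, "
          f"read depth {view['read_depth']:.2f}")
```

On these made-up numbers both pieces are shared by the same proportion of their readers, and the long read actually holds attention beyond the time it demands: a picture no single metric, viewed alone, would have shown.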
ANALYTICS FOR EDITORS BY EDITORS
It is, as Federica Cherubini and Rasmus Kleis Nielsen say in their recent Digital News Project report, “crucial to underline that what we will be more informed about in the future depends critically on who gets involved in developing analytics and metrics.”
The reason that simple analytics and metrics have been of little constructive use to editors is easy to understand: they weren’t meant for editors. They were developed and shaped to serve the needs of marketers and advertisers and in that regard they’ve done a stellar job.
The time has come, though. Editorial instinct should be supported by useful analytics, not dictated by them. Simple metrics might seem straightforward, but they succeed only in oversimplifying an increasingly complicated industry and readership. For accurate answers, we need to ask significantly more accurate questions. So, as counterintuitive as it might seem, it’s only when we move away from simple metrics to a system of blended ones that we’ll get the kind of straightforward, useful and actionable information we as editors and journalists actually need.
Head to the Content Insights website to find out more about the editorial analytics tool created by editors, for editors.