Metrics. Analytics. Content intelligence. Where do you stand with these at your publication? What purpose do they serve?
‘We don’t let pageviews dictate our coverage or approach,’ says The New York Times’ Kathy Zhang, ‘but we do use data to understand reader engagement and the reach of our stories.’
The Times reported substantial profits as a result of its digital subscription campaign earlier this year. Industry conferences have featured various representatives from the publisher reporting on this success story, and people are heartened by the news. Subscriptions are possible! Look, they’ve managed it! Why can’t we? Now, how did they do it…?
Emulation is a natural result of success stories like these, and completely understandable, but among the myriad questions asked of these presenters at the conferences we witnessed this year, a significant number were to the effect of ‘could your model work for me, at my publication?’
Maybe. Maybe not. This isn’t an article listing the ways you should monitor your content. It’s a piece imploring you to understand what you want to get from your data in order to help your business grow the way it needs to grow. There’s a difference.
Instead, here are some questions to get you thinking about why and how you want them to work.
Metrics as part of your long-term strategy
“Things like views, reach, clicks and impressions may look impressive on aggregate but are very superficial,” says Esra Dogramaci, in conversation with Freia Nahser of GEN earlier this year. “They aren’t actionable metrics — meaning we can’t really use them to feed into editorial or content strategy.”
This is an important point. Insights must form part of the workflow and strategy, not fulfil the role of the office bore (loud, persistent and difficult to tune out). They should inform planning and reflect on past performance. That said, no metric is going to be able to act as sage and predict future success (and any which claim to do so should be regarded with acute suspicion and zoned out, like the office bore). The term ‘actionable insights’ is something we’re pleased is in common parlance this year: it recognizes that measuring content should hinge on the measurement being aligned with a desired outcome and, furthermore, being something which can be used to edit, adjust and refine an approach.
Analytics are more than a result: they’re the start of a process, not the end of one.
Metrics that sound the trumpets – and the alarms
The allure of real-time lies in the simplicity of the scoreboard: of watching which articles grace the top spots, and how they change. Of course that’s the function of real-time metrics, and they serve a purpose, but metrics can do more than merely highlight the big hitters in a single moment.
When we spoke to Sueddeutsche Zeitung’s Philipp Bojen about the way he and his analytics team use metrics, he was quick to point out that one of their primary functions is in alerting editors to outlier content.
While this could be those anomalous articles performing way above expectations (the ones that go viral), it can also mean those at the other end of the scale: the ones that should be performing well, but which aren’t. It’s more newsworthy perhaps to be the owner of that viral content, but there are two things to take from that kind of result: the jubilation of outperforming and the factors contributing to that outperformance. It would be easy to stop with the first, but it’s the second which is more intellectually lucrative.
If other articles in a section or on a topic are performing way above that [sad] outlier, the questions asked are going to be very similar to those asked (or hopefully asked) of the successful outlier. Why did it perform that way? Is there a problem with the headline? The image? Was it pushed properly, at the right times, on the best channels? Has it been tagged? Has it been tagged properly? All of these, you’ll no doubt note, are actionable.
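That alerting function can be sketched in code. The example below is purely illustrative — the article names and pageview figures are hypothetical, and nothing here reflects how Sueddeutsche Zeitung’s team actually implements its alerts. It uses a median-based (MAD) score rather than a plain average, so that a single viral hit does not drag the mean upwards and mask the underperformer at the other end of the scale:

```python
import statistics

def robust_outliers(pageviews, threshold=3.5):
    """Flag articles whose pageviews sit unusually far from the section
    median, in either direction. Uses the median absolute deviation (MAD)
    so one viral hit does not hide a quietly underperforming piece."""
    views = list(pageviews.values())
    med = statistics.median(views)
    mad = statistics.median(abs(v - med) for v in views)
    flagged = {}
    for article, v in pageviews.items():
        score = 0.6745 * (v - med) / mad  # standard MAD-based z-score
        if abs(score) >= threshold:
            flagged[article] = round(score, 1)
    return flagged

# Hypothetical figures for one section on one day: one viral piece,
# one article that should be doing well but isn't.
section = {
    "match-report": 5200, "transfer-rumour": 4800, "injury-update": 5000,
    "tactics-explainer": 5100, "viral-interview": 52000, "buried-feature": 90,
}
print(robust_outliers(section))
# flags the viral piece (large positive score) and the underperformer
# (negative score); the four typical articles pass unflagged
```

Both ends of the scale surface in one pass — which is the point: the editor is alerted to the sad outlier as well as the jubilant one, and can then ask the actionable questions above about either.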
Metrics across borders – and cultures
‘The adoption of metrics across newsrooms does not fit a neat pattern. Petre’s (2015) study of analytics in different journalistic contexts asserts that new kinds of journalistic cultures are developing, albeit in different ways at different news organizations.’
That’s Matt Carlson writing in the journal Digital Journalism earlier this year. What he says is echoed both in research and anecdote elsewhere: as well as there being no appropriate single metric to cover all subjects and article types, there can be no metric that is appropriate for newsrooms across different counties, countries and continents. Cultural differences abound.
Quoted in an article on Futurity, Angèle Christin, an assistant professor of Communications at Stanford, shares her observations about the transformative effect metrics have had on newsrooms:
“We often think of the spread of new technologies as causing different cultures to converge, but journalists make sense of web analytics differently depending on the context. They put traffic numbers to their own uses and for their own ends.”
Christin studied newsrooms in the US and France for a number of years, looking at metrics in particular detail. Surprisingly, it was the French reporters who obsessed over them more – surprising because, as author Melissa De-Witte points out, the French publications in question received financial assistance in the form of state funds, reducing the commercial pressures felt by many of their American counterparts. Their motivation to look at these metrics was that they viewed them as an indicator of their impact ‘as a writer shaping the public debate.’
So, journalists in France and the US have a different approach to metrics: one views them as a barometer of their journalistic value, the other as something of a technological intrusion.
It shouldn’t be a surprise that cultural differences affect newsroom work culture, but if they affect attitudes to metrics, surely it stands to reason that they should also affect the way that data is collected and reported.
Is there really a universal metric of success?
Well, no. Not in simple terms. But there are of course guidelines:
Those pursuing an advertising model are more likely to find metrics which report on exposure useful.
If you’re adopting a paid content model, paying attention to which content inspired loyalty is fundamental.
Of course it’s more nuanced than that. Even within a single publication there are strong differences between sections which would make using the same metrics across the board foolish at best. The breaking-news bulletin can’t be viewed in the same light as a piece of investigative journalism a year in the making.
Sports coverage is likely consumed differently to interviews with those in the arts and media section. Cookery features exhibit different behavioural patterns once again.
At the core of this is a buzzword: personalization. Understanding that your audience has specific needs – needs you’ll address through the planning, strategy and execution of content – and recognising that measuring that content takes similar due diligence and thoughtfulness is the first step towards honing your specific analytics combination.
The problem with metrics has now been widely acknowledged and the solutions are getting coverage through case studies, conference presentations and word of mouth. Metrics have morphed into analytics which have developed into a more complex reporting tool which delivers relevant insights tailored to individual publishers. If there’s a universal truth, perhaps that’s it.