One of my favorite editors tells me my writing tone and attitude can sometimes be too mean. She tones me down because she believes in karma, or glass houses, or doing unto others as you would have done unto you, or all of the above.
But sometimes there are important lessons in failure, even if describing that failure is mean. The following has to be said; you need to know this. So I'm going to carefully protect the identities of all involved. Sure, if they read this they will recognize themselves. But at least they'll know I tried really hard not to be mean to them.
Sitting in a CMWorld session this week, it hit me that there's a really hard-to-express connection between the subjective and squishy metric of content quality and the hard data of content analytics/ROI. The rest of this post is my attempt to articulate that connection.
This insight came during a presentation by the director of content marketing of a very large company whose name you know, which is typically known only by its TLA (a TLA being a three-letter acronym). She spoke about how the company invested in brilliant anchor content that it surrounded with secondary and tertiary content, which independently drove awareness and traffic and also drove people to engage with the anchor content. So far so good, right? This is a sound content marketing tactic that was hammered home in many CMWorld sessions (see my “dandelion” post).
The beginning of my epiphany came when she described the anchor content. We're not talking about a $10,000 white paper. We're talking about investing hundreds of thousands of dollars in truly insightful primary research presented in 30- or 50-page reports. In one of the two examples she described, the anchor content did not achieve 2,000 downloads; the other did not break 500.
Remember, we're talking hundreds of thousands of dollars here. The presenter explained this performance by saying that the research houses commissioned to create the anchor content wrote in a very “proper” style that was academic, dry, and very hard for mere mortals to relate to or make sense of. There was nothing she could do; her 100-plus-year-old organization was unlikely to change its approach.
The next element of my epiphany clicked into place when I recognized one of the two research organizations employed to create the anchor content.
It turns out that one of my clients used that same firm for a similar research-based tome. This client is what you might call a household name professional services firm (if your household is a really large business). Except my client has a very good writing style guide. They demand I write in clear and simple prose that engages readers and invites them in to share important and sometimes complex thoughts. When faced with this same research house's output, they took action. That year, I booked a non-trivial amount of business rewriting a research tome.
The bottom line is that the TLA's content was crap. It was the same crap they've been producing for a century, and would continue to produce perhaps until the universe winds down to a stop. For me, that called into question all the statistics for all the content analytics that were presented, covering all the anchor, secondary, and tertiary content. This organization prides itself on making data-based decisions. That was one of the main talking points. The presenter said analytics results inform them about what content to focus on, what works best for their audience, etc., etc.
But if you keep heavying up where your analytics tell you to heavy up, and your analytics are based on crap content, how do you know you're doing the right thing? The analytics could be telling you to do the wrong thing because of what I suppose you could call a zero-point error – i.e., you're measuring crap to begin with, so even good analytics point you in bad directions.
I don't know how to solve this problem, because everyone defines content quality in their own subjective way. It feels like some sort of cousin to the problem of not being able to grasp the true meaning of any research without understanding the size and composition of the research sample – a thing most companies gloss over, if they provide it at all.
So I guess the only moral to my story is: It's critically important to define and document what quality content means to you, and then take very special care to produce high-quality content adhering to your standard. Don't publish crap.