A cornerstone KCS quality measure is the Article Quality Index (AQI). We calculate the AQI by sampling a handful of articles captured by licensed knowledge developers and checking them against an article quality checklist. If you sample five of my articles, and there are ten criteria on the checklist, you’ve checked 50 items. If I missed a total of two items across those 50, my error rate is 2/50 = 4%, so my score is 100% – 4% = 96%. Not bad! We like to see individuals’ AQIs above 90%, or, put more negatively, we like to see an error rate below 10%.
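To make the arithmetic concrete, here’s a minimal sketch of the AQI calculation; the function name and signature are mine, not part of any KCS tooling.

```python
# A minimal sketch of the AQI arithmetic described above; the function
# name and signature are illustrative, not part of any KCS tooling.
def article_quality_index(articles_sampled: int, criteria_per_article: int,
                          items_missed: int) -> float:
    """Return the AQI as a percentage: 100% minus the error rate."""
    items_checked = articles_sampled * criteria_per_article
    return 100.0 * (1 - items_missed / items_checked)

# Five articles, a ten-item checklist, two misses across all 50 checks:
print(article_quality_index(5, 10, 2))  # 96.0 -- above the 90% target
```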
Of course, capturing articles is just one piece of KCS. The AQI doesn’t tell us if people are linking well, or capturing when they should be, or improving content. Perhaps it’s time to expand our vision and create a Resolution Quality Index (RQI).
Instead of just sampling articles, let’s sample cases. Specifically, let’s pick three different types of cases (a classification sketch follows the list):
- Cases with a link to existing knowledge
- Cases with a link to newly-captured knowledge
- Cases with no link to knowledge
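For illustration, here’s one way the bucketing might look in code. The case fields (`linked_article_ids`, `article_created_during_case`) are hypothetical, standing in for whatever your case-tracking system actually exposes.

```python
# Illustrative bucketing of sampled cases; the case fields below are
# hypothetical stand-ins for a real case-tracking schema.
from enum import Enum

class CaseType(Enum):
    LINKED_EXISTING = "link to existing knowledge"
    LINKED_NEW = "link to newly-captured knowledge"
    NO_LINK = "no link to knowledge"

def classify_case(case: dict) -> CaseType:
    """Bucket a case by its knowledge-link status."""
    if not case.get("linked_article_ids"):
        return CaseType.NO_LINK
    # Treat the link as "new" if the article was created during this case.
    if case.get("article_created_during_case"):
        return CaseType.LINKED_NEW
    return CaseType.LINKED_EXISTING
```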
For cases with a link to existing knowledge, we’ll look at three things. First, link accuracy: are the linked article(s) relevant, and is there a link to Solve Loop content that describes the customer’s specific resolution? (Just linking to the user manual doesn’t count.) Second, improvement loss: was there an opportunity to update the article that should have been taken, but wasn’t? It’s not necessary to go on a witch hunt here, but look for giveaways like customer emails that say, “in your case, you have to download this file before performing step five,” or for information in the case that differs from the information in the article’s environment. Finally, link timeliness: if you can tell in your case tracking system, was searching and linking happening early in the case, or only after the customer’s issue had already been resolved? (HT to Devra Struzenberg for this idea.)
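If you wanted to record these three checks per case, a simple structure like the following could work. The field names are my invention, and the judgments themselves still come from a human reviewer.

```python
# A hypothetical per-case review record for the three checks above;
# the field names are mine, and the judgments come from a human reviewer.
from dataclasses import dataclass

@dataclass
class ExistingLinkReview:
    link_accurate: bool       # relevant Solve Loop content linked?
    improvement_missed: bool  # a needed article update was skipped?
    link_timely: bool         # searching/linking happened early in the case?

    @property
    def errors(self) -> int:
        """Count misses, AQI-style, so reviews can roll up into a score."""
        return (int(not self.link_accurate) + int(self.improvement_missed)
                + int(not self.link_timely))
```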
For cases with a link to newly-captured knowledge, we’ll apply our existing article quality checklist. We’ll also check capture timeliness: as with link timeliness, if our technology supports it, we’ll look at when the article content was captured and how much it was edited post-call. Some post-call editing is inevitable, but Capture in the Workflow tells us that most of the information should be in place by the time the case is closed.
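As a rough sketch of what a capture-timeliness check might look like, assuming (hypothetically) that the system records when the article was created and how large its content was at case close versus now; the 25% growth threshold is an arbitrary assumption, not a KCS standard:

```python
# A rough capture-timeliness check; the inputs are hypothetical fields,
# and the 25% post-close growth threshold is an arbitrary assumption.
from datetime import datetime

def captured_in_workflow(article_created: datetime, case_closed: datetime,
                         chars_at_close: int, chars_now: int,
                         max_post_close_growth: float = 0.25) -> bool:
    """True if the article existed before the case closed and most of its
    content was already in place, allowing some post-call editing."""
    if article_created > case_closed:
        return False
    growth = (chars_now - chars_at_close) / max(chars_now, 1)
    return growth <= max_post_close_growth
```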
For cases with no link to knowledge, the question is: should there have been a link? If the answer is no—for example, if we were just re-issuing a license key, or if the customer never responded to a request for information—that’s legitimate nonparticipation. But if there was an article available that wasn’t linked, that’s reuse loss, and if one should have been created, that’s capture loss.
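The triage logic here is simple enough to write down. Both inputs below are reviewer judgments, not anything a script could infer on its own.

```python
# Triage for no-link cases, mirroring the three outcomes above. Both
# inputs are reviewer judgments, not anything a script can infer alone.
def triage_no_link_case(link_was_warranted: bool, article_existed: bool) -> str:
    if not link_was_warranted:
        return "legitimate nonparticipation"  # e.g., re-issuing a license key
    return "reuse loss" if article_existed else "capture loss"
```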
Having finished this sampling process, you’ll have an overall view of how well people are following the KCS problem resolution process. To help visualize it, picture a pie chart of cases with slices for reuse of existing knowledge, newly captured knowledge, legitimate nonparticipation, reuse loss, and capture loss.
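If you’re collecting the per-case outcomes in a list, tallying those pie-chart shares is a one-liner; the outcome labels below are illustrative and would come from the reviews sketched above.

```python
# Tallying the pie-chart slices from a list of per-case outcomes; the
# labels are illustrative and would come from the reviews sketched above.
from collections import Counter

def outcome_shares(outcomes: list[str]) -> dict[str, float]:
    """Return each outcome's share of the sample, as a percentage."""
    counts = Counter(outcomes)
    return {name: 100.0 * n / len(outcomes) for name, n in counts.items()}

sample = ["reuse", "reuse", "new capture", "legitimate nonparticipation",
          "reuse loss", "capture loss", "reuse", "new capture"]
print(outcome_shares(sample))  # eight cases; shares sum to 100%
```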
If these ideas sound familiar, they should: this is the kind of analysis provided by the New vs. Known Study described in the KCS Practices Guide. What’s new, though, is using this like the AQI, as a coaching tool. Rather than just doing it once every six months, coaches or knowledge domain experts can sample a few of each kind of case to see if there’s a pattern of capture loss, reuse loss, or inaccurate reuse, and can provide feedback accordingly.
This does take more time, and a bit more subject expertise, than the traditional AQI process. But I think the coaching opportunities would be more than worth it.
ps – Thanks to my colleagues working on the KCS Practices Guide v5.3 update for a vigorous and informative conversation on this topic. You inspired this post!