This is another inference from “you get what you measure.” If you don’t know how to measure an initiative, you don’t know what you’re trying to achieve. And if you don’t know why you’re undertaking an initiative, you shouldn’t do it. Fair enough.
But isn’t it sometimes the case that we have an intuition that something will be valuable, even though we’re not quite sure how to measure it?
Collaboration springs to mind. I’ve heard many support executives say things like, “if we could just get the right person with the right knowledge on the right case the first time, we’d be in much better shape.” And I believe that, too. But trying to construct a measurement framework to capture the health and value of collaboration is really hard. (“Ask me how I know.”)
In general, if what we’re trying to do is very innovative, we probably won’t be sure how to measure it, at least at first. And that’s probably OK.
But does that mean that people running innovative initiatives have no accountability? Absolutely not. And to discover how, we’ll look at another place where disruptive innovation is the norm: the startup.
The Lean Startup by Eric Ries is the most influential book of the decade in Silicon Valley. It’s hard to imagine a book since Crossing the Chasm that’s had more of an effect on how people think and talk about startup strategy. (Have you noticed people saying “pivot” all the time recently? Blame Ries.) Measuring innovative businesses in the midst of uncertainty is central to The Lean Startup’s message.
Ries argues that, in an innovative or uncertain environment, the goal should be validated learning—learning how to create sustainable value. He contrasts the purposeful acquisition of validated learning through a build/measure/learn loop with the kind of “learning” (in air quotes) that is the consolation prize of a failed initiative. “Sure, it didn’t work like we expected, but we’ve learned some lessons for next time” isn’t validated learning. Rather, validated learning uses the scientific method and a rapid cycle of experiments to propose hypotheses, validate or discard them, and move on to the next experiment.
Validated learning happens inside a framework of what Ries calls “innovation accounting,” which holds innovators accountable for learning and improving. And if sufficient improvement isn’t forthcoming, it holds them accountable for rethinking their assumptions and pivoting to another innovation, until the value they expect materializes.
So, for really new initiatives, hold yourself accountable for validated learning. For example, in collaboration, enumerate your key success factors: people need to use it frequently, and the number of users should grow over time, for starters. Release a minimum viable collaboration program and see if people use it, and see if demand grows. If it does, improve it. If it doesn’t, figure out what to change and test that. Eventually, you’ll have a program that’s viable, or you’ll know you were on the wrong track and need to pivot to a new one.
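The persevere-or-pivot decision in this loop can be made concrete. Here is a minimal sketch in Python; the metric (week-over-week user growth), the 5% hypothesis threshold, and the function names are illustrative assumptions, not anything prescribed by Ries:

```python
# Hypothetical innovation-accounting check: did the experiment
# validate our growth hypothesis, or is it time to pivot?

def weekly_growth_rate(active_users):
    """Average week-over-week growth across consecutive weekly counts."""
    rates = [
        (curr - prev) / prev
        for prev, curr in zip(active_users, active_users[1:])
    ]
    return sum(rates) / len(rates)

def evaluate_experiment(active_users, hypothesis_rate=0.05):
    """Return 'persevere' if observed growth meets the hypothesized
    rate, else 'pivot' -- a stand-in for one build/measure/learn pass."""
    observed = weekly_growth_rate(active_users)
    return "persevere" if observed >= hypothesis_rate else "pivot"

# Example: four weeks of active-user counts for a collaboration program,
# growing roughly 10% per week -- well above the 5% hypothesis.
print(evaluate_experiment([100, 110, 121, 133]))  # prints "persevere"
```

The point isn’t the arithmetic; it’s that the hypothesis (“usage should grow at least 5% per week”) is stated before the experiment runs, so the outcome is a validated decision rather than a rationalization after the fact.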
By the time you’ve acquired sufficient validated learning, and you have a functioning program, then you’re in a position to figure out how to measure the value. It’s easy to measure the value of a program that isn’t working—it’s zero. But measuring the value of an effective program…now that’s interesting, and very worthwhile.