Technology Should Enable, Not Enforce



We spend a lot of time—an inordinate amount of time—talking with people about how to lock down permissions inside knowledge management tools.  How to make sure Contributors can’t publish.  How to make sure Candidates can’t edit or approve.  How to make sure Readers can’t write.  It goes on and on.

Why?  Why this fixation on controlling permissions?

Just to be clear, if someone at work wants to do something bad, they can.  You give them all kinds of tools for mischief:

  • A telephone they could use to yell at customers or make prank phone calls
  • Web browsers with access to forums, where they could post proprietary and derogatory information.  (Not to mention the other inappropriate things that can happen inside web browsers.)
  • Email clients, where they can send almost anything to anyone…including, by the way, the contents of a knowledge base article

But mostly, most of the time, most employees don’t abuse your trust.  And in the rare cases they do, you do what you have to do to deal with the situation.

So, for the record: don’t lock down your knowledgebase more than is required by regulation and statute.  Explain to people what they should and shouldn’t do, and trust that they’re not going to go looking for trouble by pushing “publish” buttons you don’t want them to push.  (It’s hard enough to get them to push the buttons you do want them to push, right?)

I don’t want you to think that our position is based in some kind of namby-pamby granola sense of corporate Kumbaya.  The costs of getting this wrong are as serious as a heart attack.

  • Implementation costs.  As I said at the outset, we spend time on this because it’s in more-or-less 100% of the initial requirements for every knowledgebase implementation we’ve seen.  This drives implementation dollars, time, and risk.
  • Maintenance costs.  Every time a Reader is hired, every time a Candidate is licensed, every time a publication process changes, the tool’s identity model needs to reflect each user’s rights and privileges appropriately.  Provisioning users becomes a big job. If you have one hundred or two hundred users, this is a serious pain.  If you have one thousand or two thousand users, or more, this represents a huge barrier to scalability, especially globally and across business units.
  • Usability costs.  Inevitably, the ways that lockdowns get implemented in a tool involve extra clicks whether you have permissions or not.  Each click represents a serious barrier to adoption.  Don’t believe me?  Ask your support engineers.
  • Trust costs.  If you treat me like someone who shouldn’t be trusted, well, then, I’ll probably act that way.  As Barry Schwartz passionately argues, rules and bureaucracy result in a lack of communal wisdom.  Stephen M.R. Covey points out that lack of trust slows us down.  None of this is what we need in our knowledge program.

Given how hard KM efforts are, why make things any harder on ourselves…especially if we’re fighting a losing battle?  Look, unless your name is Kim and you live in a palace in the northern part of the Korean peninsula, you simply can’t control what your people share.  Give up your illusion of control, make sure the right thing to do is clear and easy, and deal with the unfortunate exceptions only when needed…and they mostly won’t be.

Turkey’s Twitter users found a way around their government’s temporary technology controls (which relied on hacked DNS entries).  You just don’t need to go there with your knowledgebase users.



Reflections on Collective Wisdom on the Occasion of its Release as an Ebook


In 2006, my industry colleague Francoise Tourniaire and I wrote a book called Collective Wisdom: Transforming Support With Knowledge.  At the time, we billed it as the first book on knowledge management for the support industry; as time passes, I think it’s increasingly safe to call it the only book on KM for support…

By arrangement with HDI, the original publisher, we’ve now released Collective Wisdom as an eBook for the Kindle (or the Kindle Reader on your favorite device).  We’re pretty excited about this; we’re hoping that a sub-$10 price point will allow us to reach many more people than the impressively expensive paper copy did—and besides, it’s 2014: who wants to carry around a heavy paper book?

Rereading Collective Wisdom while formatting it for electronic distribution gave me the opportunity to reflect on what we’ve written with the benefit of several more years’ experience.  To my pleasure (and profound relief), the book has held up well.

  • It covers the things you need to know to run a knowledge program for support.  Creation, maintenance, delivery, staffing, measures, technology—it’s all in there.  The acid test for me is that I routinely find myself answering questions by “stealing” snippets from the book.
  • I’d take almost nothing back.  While I expect we’ve learned more nuance and context about what we wrote, there’s almost nothing that makes me cringe, or wish we hadn’t written it.  (I was too dismissive of wikis, but that was just a paragraph or two.)
  • We keep getting good feedback.  As recently as last week, someone came to me to say that Collective Wisdom really told him what he needed to know to do his job.  It’s a niche book, but within its niche, people seem to find value in it.

That’s not to say it’s perfect.  All the vendor names are wrong by now (although, in fairness, we predicted that).  It’s hard to imagine reading a book today that doesn’t mention Salesforce, for example.  I wish the book were more fun to read.  And, as Mark Twain may have said, “if I had more time, I would have written less.”

Still, if you’re at all tempted, we hope the new pricing and convenience will push you over the edge.  (Prime members can even borrow it for free.)  And if you find yourself wanting to dog-ear pages and highlight passages, well, the paper version is still in stock, too.

The Disconnect (Video)

Happy 2014!  It has been a very long time since we posted here, and I feel a little guilty because we have quite a few new subscribers.  Instead of blogging, we’ve been working on a video, The Disconnect, which we’re premiering at the bottom of this post.

I hope the video speaks for itself (and at eight minutes long, you might think it speaks plenty.)  Still, I wanted to give you a little bit of the backstory.

This video represents our attempt to answer two seemingly unrelated questions:

  • Strategically, where is our industry going, and how do we get there?
  • Tactically, why isn’t everyone equally successful with KCS?

Let’s start with the tactical one first.  We’ve been doing KCS for well over a decade.  While we learn things every time we work with a company, I think we’re more than solid on the practices.

So…why aren’t 100% of our clients successful?

Don’t get me wrong.  Essentially all of our clients get benefits from KCS.  Many are spectacularly successful.  But some fall short of the kind of transformation that we know is possible, and a few barely get off the ground.  Since we get paid to help people with this stuff, it’s our job to figure out how to get more clients “spectacularly successful” and fewer “meh.”

At the same time, strategically, we’re seeing support organizations expand their mission beyond just break-fix, to customer success and value.  Signs of this are everywhere:  the rise of Customer Success organizations; TSIA’s “B4B” focus; Customer Effort and Net Promoter scores; Customer Experience (CX) VPs; Support teams adopting CX techniques like Journey Mapping.

Yet, as William Gibson said, “the future is already here—it’s just not evenly distributed.”  For every customer success-focused organization we work with, we see others stuck in the old, reactive, “our job is closing cases” model of Support.  And, reflecting on our experiences, these old-school orgs are the ones that have the hardest time with KCS.  In other words, ironically, if you want to get more internal benefit from KCS, focus on your customers first, not your internal measures.

So, The Disconnect is what happens when the way you measure yourself isn’t aligned with how customers perceive you.  If you’re more focused on your own business (e.g., case / incident closure statistics) than your customers’ (how effectively they use your products), then you’re going to have a hard time doing anything good, really, starting with KCS.

So, enjoy the video.  We start with a scenario illustrating what The Disconnect looks like in the real world, and then provide three approaches to resolving it.

Lowering the Cost of Failure

Yesterday, as I was driving near DB Kay & Associates’ “World Headquarters” in Santa Cruz, I noticed a home-painted smart car.  I wasn’t quick enough with my camera to take a picture, but it looked kind of like this.


Pretty cool, right?  Kind of fun? I actually imagined a family afternoon with paint brushes or markers, excited kids kibitzing and contributing.  Whether this paint job is your cup of tea or not, you have to admit that it’s more innovative than the metallic silver or blue most of us buy by default.

I thought about doing this myself and immediately thought, absolutely not!  There is no way in the world I’m going to paint my own car, fun or otherwise.

Naturally, this got me thinking, well, why not?  I don’t think of myself as a timid person; I can think outside the box.  I’m not a great artist (understatement!) but the rest of my family is pretty good.  My car’s already pretty distinctive; I don’t mind having it be a little more eye-catching.

So why would I not consider painting my own car?  Why haven’t you done it?  For most of us, the cost of failure is too high.

I mentioned this was a smart car (they prefer the lowercase ‘s.’)  One interesting smart design feature is that the bodywork is made of plastic panels that pop in and out.  You can buy multiple sets and swap colors based on your mood.  Or, in this case, you can buy a set of panels for under $400…and have fun painting them.  Don’t like the result?  It’s not like you need a new $5000 paint job.  Just get another set of panels, or paint them over and start again.  “Failure” is cheap—which means it’s not failure at all, just an interesting learning experience.

At that moment, I started looking around at our own business, and our clients’ businesses, and everywhere I looked, I saw operations that were Too Expensive To Fail.  Multimillion dollar CRM implementations.  Program offices working for months and years on a big initiative.  And yes, even six-figure consulting engagements, planned out six months in advance.  To quote Apollo 13, “Failure is not an option.”  But in this case, that’s not a good thing.

Maybe when we plan our work, we can do so in a way that is explicitly designed to make it OK to paint over our mistakes.  Maybe we can stash a set of $400 body panels in our project plan somewhere.  Maybe then, we can feel the freedom to innovate, to create—to really make some art.

I know none of this is a new idea—I’m a big fan of Agile, and I’ve written here about The Lean Startup before.  It’s just that, seeing the joy and creativity that went into that smart car, well, it all just made a little more sense.

How have you reduced the cost of failure at work?  And what have you seen as a result?

ps – after I wrote this post, but before I posted it, I read a wonderful piece by Clay Shirky that makes a similar point about “Failure is Not an Option,” along with many more good points besides.

How To Map Your Customer Experience Journeys


Recently, we made the case for mapping your customer experience.  So, how do you do it?  If you can get the right people in the room, it’s surprisingly easy.  Here’s the drill.

Get Ready

  1. Scope the problem.  Taking on the customer lifecycle cradle-to-grave isn’t practical, but the best insights come when multiple groups explore their hand-offs from the customer’s perspective.  Pick a use case that’s important, but manageable.  When in doubt, scope it down.
  2. Get the right people in the room.  A group of four or five per scenario seems to work best.  Have representatives from each of the groups that own policies, operations, products, and services that will drive what the customer sees.  Make sure they’re prepared with process flows, reports, case notes, escalations, and other data that will help them understand and explain what the customer sees.
  3. Put up paper.  Lots of paper.  A double width of butcher paper seems to work well.  It seems like a small thing, but you’ll want to use the map that you create over and over again, to enlighten and encourage other stakeholders.
  4. Distribute colored stickies.  Different colors will represent different actions, objects, and experiences.  It’s funny, but any room brightens up when people get their hands on colored sticky notes and Sharpies.


Map the Experience

  1. Walk through the customer’s experience as a series of events, creating a time-sequenced row of one sticky per event.  Here’s where the magic happens.  Make sure everyone contributes: no one knows it all.  There will be judgment calls as to whether suggestions represent corner cases or important steps to model; keep track of the ones that fall on the cutting room floor, as you may change your mind or choose to focus on these later.
  2. Identify on stage and back stage tasks that you do, again, each in its own row and color.  By “on stage,” we mean things that you do that the customer experiences directly, such as talking with a support analyst.  By “back stage,” we mean something that you do that the customer can’t see directly, like routing a case to a different queue or submitting a product defect.  These stickies are where your team lives, so this builds a bridge from the customer’s world to yours.
  3. “Watch” your customer’s reaction.  At each point in the customer’s map, what are they feeling, do you suppose?  Was something surprising, delightful, frustrating, annoying, or offensive?  Use your powers of empathy.  It’s often useful to think back on experiences that you’ve had with other vendors.


Analyze and Act

  1. Identify Key Moments of Truth (KMOTs).  Of all the places customers react, which seem the most important to them?  Which are the ones most likely to adjust their opinion of you, for better or worse?  Which will surprise them?  KMOTs create the short list for further examination.
  2. Look for experience fracture points.  Are there places where the ball really gets dropped from the customer’s perspective?  (At this point, we don’t care if there are good reasons or not from your perspective.)  These, especially if they’re KMOTs, are worth a “five whys” deep dive.
  3. Look for ownership fracture points.  As enterprises, we’re at our blindest when we’re transitioning customers from one team to another: marketing to sales, sales to implementation, web to assisted, tier 1 to tier 2, EMEA to North America, etc.  Watch the customer carefully as they transition, and work especially hard to see if their experience goes sideways when no one is watching.
  4. Validate with customers.  This is optional, but what a great idea to walk customers through this map to find out if this really is what they experience and feel, and what you might have missed.  Again, judgment is required to not over-focus on a single customer.
  5. Explore hunches.  Before getting into a serious action plan, are there any quick wins?  Any brainstorms about what to do?  Sometimes, as soon as an experience gap is identified and the right people are in the room, the solution is completely obvious.
  6. Put together an action plan.  Prioritize the KMOTs with gaps, assigning owners to each.  Use the map to test proposed solutions—will they solve the problem at hand from the customer’s perspective? Do they create new problems?  If there’s a problem with a hand-off, make sure that all the right organizations are working together on it.
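Once the wall of stickies exists, it helps to capture it digitally so the map can be revisited and shared with stakeholders who weren’t in the room.  Here’s a minimal sketch of one way to record the rows described above; the field names and example steps are my own illustration, not part of the method itself:

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    event: str          # what happens, in time order (the event row)
    stage: str          # "customer", "on stage", or "back stage"
    reaction: str       # the feeling you suppose the customer has
    kmot: bool = False  # flagged as a Key Moment of Truth during analysis

# A hypothetical three-step fragment of a support journey
journey = [
    JourneyStep("Customer opens a case on the web", "customer", "hopeful"),
    JourneyStep("Analyst calls back within the SLA", "on stage", "reassured"),
    JourneyStep("Case is rerouted to another queue", "back stage",
                "waiting, confused", kmot=True),
]

# The KMOTs become the short list for the fracture-point deep dives
candidates = [step.event for step in journey if step.kmot]
```

A structure like this also makes it easy to filter by stage or reaction when you’re testing proposed solutions against the map later.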


As with usability testing or customer surveying, the process isn’t all that difficult, but actually taking action requires commitment.  The good news is, as with usability tests or customer verbatims, the process makes believers out of those who participate in it.  So that product manager or sales exec who never could be bothered to listen to Support’s “complaints” may now become your biggest advocate for change, now that they see what’s at stake.

ps – If you think an external facilitator might make this process more effective, especially a facilitator who has worked with your industry peers and leaders, we’d be happy to help: please let us know.


Feedback: Style Matters! Allscripts Does It Right

Reports are boring.  Can we all agree on this?

Sure, reports may contain interesting information.  But there’s something deeply stultifying about the actual presentation of the data in commercial reporting packages.  With apologies to my friends at business intelligence vendors, it’s almost as though the designers had set out to create bad “before” examples for an Ed Tufte book.

But an even bigger issue is that automatically-generated reports don’t tell a story.  They don’t separate the important from the trivial—the signal from the noise.  That requires humans. (continued below the image)


Allscripts is on a transformational journey, implementing KCS and swarming practices in what they appropriately call their Many Minds Program.  Their leadership knows how important it is to provide feedback about their progress, celebrating the good and showing opportunities to get better.  They knew that generic reporting wasn’t going to do that for them.

Infographics are hot, in print publications and especially social media.  So Allscripts’s leadership borrowed good visual presentation ideas from infographics to come up with the image you see above.  In this widely disseminated—and frequently updated—image, Allscripts shows

  • Positive business results, such as decreasing backlog and increasing first day resolution (FDR)
  • Social proof that others are using the Many Minds practices—increasing adoption, users, and large numbers of swarms
  • Who the superstars are on a team-by-team basis (warning: make sure that focusing on activities doesn’t compromise outcomes)
  • The connection between activities and business outcomes, as swarms lead to high case resolution
  • A reminder of the key ideas—simple visual representations of the swarming process along with the core activities: search, swarm, attach, create.

So, the content is fantastic.  But the reason people will take the time to find out how good the content is, is that the presentation is fun and engaging.  (Did you solve the word puzzle?  Put your answer in the comments below!)

Now, this obviously requires some time from a graphics professional.  But, in the big scheme of things, probably not all that much time, and look at what you get as a return on your investment!  To me, this is far cooler—and has a far greater impact—than a generic program poster.

If you took a little time to tell the world your team’s story in infographic form, what would that infographic look like?

(HT J. Wade Yarbrough and Selbe Bartlett of Allscripts.  Wade presented this as part of a Many Minds update at TSW Las Vegas this year, and was gracious enough to let us share it in the blog.)

Map your Customer Experience…and Surprise Yourself!



“O wad some Pow’r the giftie gie us
To see oursels as ithers see us!”
Robert Burns

Seeing ourselves as others see us is a gift.  And Customer Experience Mapping gives us all the power to do it!  All it takes is stepping away from your desks for a while—and some empathetic imagination.

Our workdays are consumed with the things we do: closing cases, escalating, shipping replacement parts, improving knowledge and self-service, dispatching field staff, preparing for new product introductions…the list goes on and on.  But it turns out, this isn’t what your customers are experiencing, at all.

You think of escalations; they think of getting handed off to another person whom they hope can help.  You think about no-fault-found warranty return rates; your customers think you shipped them a broken and confusing product.  You think about VSOE and entitlement management; your customer wonders why you can’t just “do the right thing” by them.

You’re not wrong, by the way, to think of any of this.  But you need to be aware that the customer’s experience is really, really different from yours.  Perhaps the biggest difference is that you’re looking at the aggregate—at rates, and averages—while they’re experiencing their own singular situation.

So what? Well, your short-term financials, and your performance objectives, probably hinge on your experience.  But your long-term success, and your brand, depend on the aggregate (not average) of each customer experience.  Clearly, ignoring this isn’t an option.  But, short of renting an RV to visit all your customers, it’s hard to find out how they experience you.  That’s where customer experience mapping can help.

That’s the why: next week, we’ll explain how to do it, and then what to do once you’ve done it.

Great Customer Experiences

It’s been an atypically long break between blog posts.  (Some might call it a welcome respite.)  Lots of work travel, and we’re just back from a lovely vacation in Spain.  Thanks for your patience.

Support ultimately owns the post-sales customer experience, and has tremendous influence on the pre-sales experience, too.  (I think it’s not a coincidence that two of the executive sponsors of our work are VPs of Customer Experience.)  At a recent Consortium for Service Innovation meeting, Executive Director Greg Oxton asked each attendee to reflect on a great customer experience that they’d had recently.  What impressed me was not only the diversity of responses—there’s no single path to delighting customers—but also how much positive energy built in the room as people were telling their stories.  I tried to capture the characteristics that people mentioned:

  • It takes less effort than I expected to get something done.  I can easily do things myself on the web or on a mobile device.  The company cuts out unnecessary steps
  • The company stands behind their product or service, and makes things right for me, even if it’s not technically their fault
  • The company follows up with me—it’s proactive about my success.  It’s thinking ahead about what I need
  • The company meets me where I am.  (Example: a child got her first library card at a special child-sized desk that she could easily see over.)
  • The company provides me a high degree of control, and is transparent about what my options are and how things are going to work
  • The company is very responsive
  • The company takes care of things for me, even when they’re 100% my fault
  • Help is there just when I need it, in the context of what I’m trying to do
  • I get more than I deserve
  • I get an unexpected bonus—maybe a little thing, but it’s still nice
  • If they make a mistake, they go out of their way to make it right
  • They’re sincerely empathetic—they care about me; they’re not just reading from a script.  They know me
  • They work with me in my preferred channel, whether that’s a KB, in person, or something else
  • They encourage, challenge, and develop my skills.  (Another great children’s library example: reluctant young readers are encouraged to “read to the dog:” they go in a special room with a friendly, calm dog who sits and listens nonjudgmentally to stories)
  • It’s easy for me to exit the relationship when I want to
  • They do the right thing for me before trying to upsell me
  • They take responsibility for getting my situation resolved.  If a hand-off is necessary, it’s a warm handoff, with a personal introduction
  • They make me feel special–like a rock star
  • They display sensitivity to my needs and styles

I’m reading this list, and thinking about most of the support organizations we work with, and wondering how on earth we can afford to create these wonderful experiences.  But then, the more I think about the way people talked about the companies that delivered these great experiences, the more I wonder: how on earth can we afford not to create experiences like these?

Have you experienced something wonderful recently?  Have you thought about how you and your organization can deliver great experiences to your customers?

ps – Interested in learning the ins and outs of running a successful knowledge program, or becoming KCS Certified?  Join us in Northern California next month!  Find out more at

What Do We Do About All Of These Myths?

Goodness me, listen to me go on about these metric myths.  If you’ve made it this far, you can be forgiven for asking, “That’s all fine, but what do I do about it?”  Metrics exist in a context, and that context isn’t going to change itself just because some consultant writes a few blog posts.  Or even a lot of blog posts.

Here’s some guidance for taking on your culture of measurement.

Address technical mistakes

Because measurement and leadership are so closely entangled, it’s hard to critique measures without also appearing to critique their owner.  But, a relatively safe critique is one that is purely technical.  For example, if people are miscalculating self-service metrics or leaving people and process costs out of an ROI, it’s pretty nonthreatening to offer to tweak the measure a bit…especially if it’s new and hasn’t been publicized widely in the past.

Similarly, sometimes just renaming measures can help.  For example, leaders might be reluctant to stop reporting a “Net Promoter Score” that’s support-only, transactional, and calculated with bogus cutoffs, but perhaps you can have it renamed to a “Support Recommender Index” or some such.  This gives people the metric they want without the possibility of misleading others and losing credibility.
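For reference, the standard Net Promoter Score arithmetic uses fixed cutoffs on a 0–10 “how likely are you to recommend us?” question: promoters score 9–10, detractors score 0–6, and NPS is the percentage of promoters minus the percentage of detractors.  A minimal sketch (the function name is mine):

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6) on a
    0-10 scale.  Passives (7-8) count in the denominator but in
    neither bucket, which is one of the cutoffs people get wrong."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

Even computed correctly, a support-only transactional survey scored this way still isn’t a company-level NPS, which is exactly why renaming it can be the honest move.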

Ask good questions

It’s always easier to change attitudes with questions rather than opinions.  (Even I won’t walk into your offices and start telling people they’re doing metrics wrong.  Probably.)

Most people at work are smart and well intentioned, and as we’ve seen, there’s at least a grain of truth in each of these myths.  With careful and appreciative inquiry, we can perhaps get people to move away from the unhelpful parts of their measurement strategies while holding on to the good core.

For example, if I see an ops meeting that attendees despise, I might ask,

  • What are the most useful insights you’ve gotten from these reviews this year?
  • What were some of the most effective corrective actions you’ve taken, and how long do those actions take to implement?
  • How do you assess what changes in the data are meaningful, and which are just usual variations?
  • What do you think is the normal variation in the data?  Are there patterns around seasons or new releases?

If I see only activity-based measures (SMART goals) on review forms, I might ask,

  • Who are your superstars here?  Why?  How does that show up in these goals?
  • Have you ever seen people attempt to game these metrics?
  • Have you had employees score well on these measures, despite impressions or feedback that they’re not the best performers?
  • What guidance has HR given you on the nature of your goals?
  • How do your knowledge-working counterparts in development and marketing measure their staff?

I hope answering these questions might cause some introspection and different ways of thinking about a desirable measurement strategy.  At the very least, having heard the answers, I have more ways of framing what I’m going to recommend in a way that’s responsive to my colleague’s challenges, values and interests.

Implement better alternatives

Living well is the best revenge…and it’s also a great way to get people to follow you!  Generating and analyzing metrics may not be in your formal job description, but everyone needs to measure their work at some level, so consider yourself officially empowered to be master of your own metrics.

Of course, if you manage a team or program, you have an even clearer mandate, and more opportunity to make a bigger difference.

For example, if you’re part of a team taking on a new initiative, why not experiment with some Innovation Accounting, perhaps tracking progress against your original ROI model?  Or you could implement a self-benchmark.  However you choose to measure yourself, make sure that people know what you’re learning and what you’re doing as a result of your measures.  And see if the things that you’re doing don’t start to spread across the organization.

One additional note on metrics innovation: the people you report into may not be ready to let go of their old measures, at least until they have significant experience and comfort with your new ones.  So it’s useful to “keep two sets of books,” as we’ve discussed before—the traditional numbers to report up, and your new metrics to learn from and to start to expose alongside the traditional numbers.  It may seem like extra work, but it’s hard for people to lose metrics they’re used to, so you’ll probably have to operate in parallel for a while.

Use a Strategic Framework

Measures are in service of a bigger goal or measurable vision.  The document that connects the operational metrics to the bigger goal is a strategic framework.

The strategic framework is a great way to communicate the “why” of your measures.  Because each measure supports an action that supports a strategic objective, there’s a clear reason for why we’re looking at what we’re looking at.

The Strategic Framework is simple—so simple, it’s tempting to skip.  Please don’t.  Your metrics, your executives, and the rest of your colleagues will thank you.

Outlast ‘em

Sometimes, it’s just not possible to change leadership’s measurement practices, especially if they’ve been doing things the same way for a very long time.  In this case, just wait for them to move on.  It’ll happen eventually—sometimes, gratifyingly quickly.  If you see dysfunction, you’re likely not the only one.

It’s a satisfying feeling, especially if you’ve been planting seeds with your own more effective measurement strategies.  Keep the faith!



Metrics Myth 11: Any Initiative for Which You Can’t Define Clear Outcomes Isn’t Worth Doing

This is another inference from “you get what you measure.”  If you don’t know how to measure an initiative, you don’t know what you’re trying to achieve.  And if you don’t know why you’re undertaking an initiative, you shouldn’t do it.  Fair enough.

But isn’t it sometimes the case that we have an intuition that something will be valuable, even though we’re not quite sure how to measure it?

Collaboration springs to mind.  I’ve heard many support executives say things like, “if we could just get the right person with the right knowledge on the right case the first time, we’d be in much better shape.”  And I believe that, too.  But trying to construct a measurement framework to capture the health and value of collaboration is really hard.  (“Ask me how I know.”)

In general, if what we’re trying to do is very innovative, we probably won’t be sure how to measure it, at least at first.  And that’s probably OK.

But does that mean that people running innovative initiatives have no accountability?  Absolutely not.  And to discover how, we’ll look at another place where disruptive innovation is the norm: the startup.

The Lean Startup by Eric Ries is the most influential book of the decade in Silicon Valley.  It’s hard to imagine a book since Crossing the Chasm that’s had more of an effect on how people think and talk about startup strategy.  (Have you noticed people saying “pivot” all the time recently?  Blame Ries.)  Measuring innovative businesses in the midst of uncertainty is central to The Lean Startup’s message.

Ries argues that, in an innovative or uncertain environment, the goal should be validated learning—learning how to create sustainable value.  He contrasts the purposeful acquisition of validated learning through a build/measure/learn loop with the kind of “learning” (in air quotes) that is the consolation prize from a failed initiative.  “Sure, it didn’t work like we expected, but we’ve learned some lessons for next time” isn’t validated learning.  Rather, validated learning uses the scientific method and a rapid cycle of experiments to propose, validate, or discard hypotheses, then move on to the next experiment.

Validated learning happens inside a framework of what Ries calls “innovation accounting” that holds innovators accountable for learning and improving.  And, if sufficient improvement isn’t forthcoming, to rethink their assumptions and pivot to another innovation, until the value they’re expecting is forthcoming.

So, for really new initiatives, hold yourself accountable for validated learning.  For example, in collaboration, enumerate your key success factors: people need to use it frequently, and the number of users should grow over time, for starters.  Release a minimum viable collaboration program and see if people use it, and see if demand grows.  If it does, improve it.  If it doesn’t, figure out what to change and test that.  Eventually, you’ll have a program that’s viable, or you’ll know you were on the wrong track and need to pivot to a new one.
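One way to make that accountability concrete is to write the growth hypothesis down as a number before launching, then check each cohort of usage data against it.  Here’s a sketch under assumed numbers (the 5% weekly growth target and the sample data are invented for illustration):

```python
def hypothesis_holds(weekly_active_users, min_growth=0.05):
    """Validated-learning check: does usage grow week over week by at
    least min_growth?  (The 5% default is an assumed target, not a
    recommendation from the post.)  A failing check is the signal to
    change something and run the next experiment."""
    pairs = zip(weekly_active_users, weekly_active_users[1:])
    return all(later >= earlier * (1 + min_growth)
               for earlier, later in pairs)

# Hypothetical four-week cohort for a minimum viable collaboration program
usage = [40, 44, 47, 52]
```

If the check fails for several cohorts in a row despite your improvements, that’s the cue Ries describes: rethink your assumptions and pivot.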

By the time you’ve acquired sufficient validated learning, and you have a functioning program, then you’re in a position to figure out how to measure the value.  It’s easy to measure the value of a program that isn’t working—it’s zero.  But measuring the value of an effective program…now that’s interesting, and very worthwhile.