As any quality professional who quantifies performance has learned by now, a little bit of data can be intoxicating to upper management. “I can tell you why that is happening,” a senior executive may declare after looking at no more than a few data points. Unfortunately, the true cause is often nowhere on management’s radar, and as a consequence, industries are replete with bad decisions made on misleading data.
To cite one common example, a couple of outstanding performances from a process are not only unlikely to be followed by another great result; the next result is actually more likely to be a disappointing one. Companies that have permanently added the recorded message “your wait time may be longer than usual” to their customer support lines seem to understand this phenomenon. They are preparing you for a poor experience by playing mind games with you. After all, if the wait time is always longer than usual, what does “usual” even look like?
What is driving outcomes in these situations is regression to the mean. Your process is giving you the performance that you designed into it, which will inevitably produce both exceptional and disappointing cycles as outcomes bounce around the true mean. It is best to understand this dynamic before reprimanding employees for performance beyond their control. For a great group exercise to illustrate this concept with your team, look up Deming’s “red bead experiment.”
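If you would rather see the effect than debate it, the short simulation below makes the point. It is a minimal sketch in Python using only the standard library; the process mean, standard deviation, and “exceptional” cutoff are hypothetical numbers chosen for illustration, not data from any real process.

```python
# A minimal sketch of regression to the mean (hypothetical numbers,
# standard library only).
import random

random.seed(42)
mean, sd = 10.0, 2.0  # assumed stable process: mean and standard deviation

# Simulate many consecutive pairs of results from the same stable process.
pairs = [(random.gauss(mean, sd), random.gauss(mean, sd)) for _ in range(100_000)]

# Keep only the cases where the first result was exceptional (roughly the top 5%).
threshold = mean + 1.64 * sd
follow_ups = [b for a, b in pairs if a > threshold]

print(f"Average result right after an exceptional one: {sum(follow_ups) / len(follow_ups):.2f}")
print(f"Overall process mean:                          {mean:.2f}")
# The follow-up average lands back near the process mean: the great result
# said nothing about the next one, only about the noise in the process.
```

Run it a few times with different seeds; the follow-up average keeps landing near the process mean, which is the same lesson the red bead experiment teaches with willing volunteers instead of code.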
To understand additional challenges in the rational consideration of data, it is also worth studying the work of Nobel Prize laureate Daniel Kahneman. Kahneman popularized the idea that Homo sapiens is distinguished from other creatures by virtue of a brain with two functioning systems, designated simply as “System 1” and “System 2.” System 1 triages all input first, using heuristics and instinct to quickly assess conditions, attribute causation to them, and implement any necessary countermeasures. Throughout evolution, this system allowed humans to avoid a variety of threats, ranging from being consumed by a wild beast to being cut off in traffic.
System 1 is satisfied to draw conclusions with limited data, and more often than not, System 2 never bothers to get involved; it trusts System 1. Human survival is a testament to System 1’s great track record; however, it is not a perfect system, and it does make mistakes. In fact, it makes many more mistakes than we realize, or are willing to admit.
System 2, by contrast, allows us to go deeper into analyzing a situation, drawing on experience, reasoning, and, if available, data. But System 2 is lazy. We must make an effort to engage it, or else roll the dice with System 1, and that gamble can be career-limiting, even if executive management seems to rely on System 1 excessively.
Consider the meeting in which you are presented with data representing the average performance of your processes, with that metric serving as a surrogate for quality, or for some dimension of “better, faster, and cheaper.” Perhaps you are also shown a histogram like the one below.
Within a second or two, you and everyone else viewing the data will draw conclusions from it. Some people might like the performance they see, others may feel that better performance is possible, and still others may believe they can explain exactly why this particular performance is what it is. All of these reflexive reactions rely exclusively on System 1, and many of them will be wrong!
Understanding what is happening behind the data is System 2 thinking, and System 2 may give you the insight to ask the right questions about it. In our example above, the story behind the histogram might be dramatically different depending on the variation inherent in the process; the process may even be trending in a positive or negative direction. “Averages insulate us from what is going on in a process,” says Jonathon Andell, a consultant in operational excellence. “To truly understand what is going on, you need to look at a control chart.”
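Building a basic control chart does not require specialized software. The sketch below shows one common approach, an individuals (XmR) chart, where the limits are set at the center line plus or minus 2.66 times the average moving range; the cycle-time values are made up for illustration.

```python
# A minimal sketch of individuals (XmR) control chart limits.
# The cycle-time data below are hypothetical.
cycle_times = [11.2, 9.8, 10.5, 12.1, 9.4, 10.9, 11.7, 10.2, 9.9, 11.4]

center = sum(cycle_times) / len(cycle_times)

# Average moving range between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(cycle_times, cycle_times[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR constant: limits sit at the center line +/- 2.66 * average moving range.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

print(f"Center line:    {center:.2f}")
print(f"Control limits: {lcl:.2f} to {ucl:.2f}")
# Points outside these limits, or sustained runs and trends inside them,
# are the signals worth asking System 2 questions about.
```

Plotting each point in time order against those limits is what separates a control chart from a histogram: the limits indicate which swings are routine noise and which deserve a question.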
Here are three possible distributions in control chart format that could give rise to the histogram shown above:
In the example provided, we see what first appears to be random variation, then a distribution showing a dramatic shift, and finally a significant trend. Once again, System 1 will be eager to serve up an explanation, but System 2 would be far more qualified to weigh in.
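To see how easily summary statistics can mask those three stories, here is a minimal sketch that generates a stable series, a shifted series, and a trending series with similar means and spreads; every number is hypothetical and chosen only to make the comparison visible.

```python
# A minimal sketch of three different processes that summarize almost identically.
# All values are hypothetical.
import random
import statistics

random.seed(7)
n = 60

stable   = [random.gauss(10, 1) for _ in range(n)]                         # routine noise
shifted  = ([random.gauss(9, 0.5) for _ in range(n // 2)] +
            [random.gauss(11, 0.5) for _ in range(n // 2)])                # step change midway
trending = [8 + 4 * i / (n - 1) + random.gauss(0, 0.5) for i in range(n)]  # steady drift upward

for name, series in [("stable", stable), ("shifted", shifted), ("trending", trending)]:
    print(f"{name:>8}: mean={statistics.mean(series):.2f}  stdev={statistics.pstdev(series):.2f}")

# The averages come out nearly identical and the spreads are similar, but plotted
# in time order the three series tell three very different stories.
```

Only the first series would plot as routine variation on a control chart; the other two would reveal the shift and the trend that the averages, and the histogram built from them, hide.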
Instead of relying on System 1, we should be asking questions for each chart shown:
We’ve proven great performance is possible. Why can’t we perform this way all the time?
What caused the dramatic shift that we see here?
Why do cycle times seem to be significantly rising?
Indeed, it may be humbling, even humiliating, to reflect on the past and the times when we surrendered ourselves to System 1 thinking. A seasoned quality professional, on viewing charts and data, must resist the temptation to jump to conclusions, instead saying nothing more than “that’s interesting.” Then they should start asking the tough questions that System 2 has likely queued up.