Control Chart Logic

How much variation is built into this process?

These pages are not intended to teach or justify statistical methods, but we hope they offer some insights that supplement more formal treatments.

To follow this argument all you need to know is that we can describe a set of data in terms of its centre (mean or average) and the degree to which the data varies from the centre (commonly expressed as the standard deviation).

The empirical rule

The histogram forms the basis for determining how much variation we can expect to see from the process, all other things being equal.

If the distribution has only one peak and no discontinuities, a handy mathematical proposition called the empirical rule tells us that more than 99% of the data will fall within three standard deviations either side of the mean, regardless of the exact shape of the distribution. This is important because for most business processes we can never be sure what shape the distribution is, so we can't rely on the formal mathematical properties of, say, the normal distribution.

The charts at the right show simulated distributions, all of which have a mean of 100 and a standard deviation of 15 (which, coincidentally, is how IQ scores are scaled).
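If you would like to check this for yourself, here is a minimal sketch in Python. The three shapes are invented stand-ins for the simulated charts, each constructed to have a mean of 100 and a standard deviation of 15:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
half_width = 15 * 24**0.5 / 2   # gives a symmetric triangular shape an sd of 15

# Three single-peaked shapes, each built to have mean 100 and sd 15.
samples = {
    "normal":     rng.normal(100, 15, N),
    "skewed":     55 + rng.gamma(shape=9, scale=5, size=N),  # gamma: mean 45, sd 15
    "triangular": rng.triangular(100 - half_width, 100, 100 + half_width, N),
}

for name, x in samples.items():
    within = np.mean(np.abs(x - x.mean()) <= 3 * x.std())
    print(f"{name:10s} within 3 sd of the mean: {within:.2%}")
```

Despite the very different shapes, each line prints 99% or better.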

The consequence of the empirical rule is that for any point more than three standard deviations away from the mean, there is only about a 1-in-100 chance that the point "belongs" to the process. That is to say, there is a 99% chance that it signifies a change to, or abnormality in, the process.

Why do we set 1-in-100 as the cut-off? Because it gives us a handy economic rule for deciding whether it is worth our while investigating the cause of the outlier.

No detection rules are perfect! We will always miss some "real" changes, and some apparent changes will be red herrings. The art of using statistics is to try to strike the best balance between the two types of mistake. And not many management rules of thumb are right 99% of the time.

How control charts work

There are several methods of calculating the process data’s standard deviation, each of which has its own merits and risks.

We find that the XmR control chart is the most useful for process exploration purposes because it is very robust - it doesn’t much care about the characteristics of the process, whereas other methods are predicated on assumptions that may not be valid. The price you pay for this robustness is that the XmR chart is not as sensitive as other techniques, but in practice this does not tend to be a great problem in the Measure phase of a typical business process improvement project. If you are building microchips or sequencing DNA then more rigorous tools would be called for.


The control chart at left shows the outputs of a completely random process - Dr Deming's Red Beads exercise as carried out with a training group recently. Each point shows one "production run" of beads. The measure is the number of beads (out of a batch of 50) that are deemed "defective". The workers producing these beads understand quite clearly that the customer requirement is for no more than 5 red beads in any production run.

As well as the individual points, the chart includes the overall mean, and two lines labelled UCL and LCL. These are the Upper and Lower Control Limits respectively, and they show, based on the observed data (not a theoretical model), the plus and minus three standard deviation values that the empirical rule tells us bracket 99% of our data.
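For the curious, here is a minimal sketch of the standard XmR calculation in Python. The xmr_limits helper and the defect counts are our own inventions in the spirit of the red beads runs; the 2.66 scaling constant is the standard one for two-point moving ranges:

```python
import numpy as np

def xmr_limits(x):
    """XmR (individuals) chart centre line and control limits.

    Sigma is estimated from the average two-point moving range:
    2.66 = 3 / 1.128, where 1.128 is the d2 bias constant for n = 2.
    """
    x = np.asarray(x, dtype=float)
    mR_bar = np.abs(np.diff(x)).mean()   # average moving range
    centre = x.mean()
    return centre, centre - 2.66 * mR_bar, centre + 2.66 * mR_bar

# Made-up defect counts in the spirit of the red beads runs above.
beads = [9, 11, 8, 13, 10, 7, 12, 9, 14, 10, 8, 11]
centre, lcl, ucl = xmr_limits(beads)
print(f"mean = {centre:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
```

For counts like these, a negative LCL is conventionally reported as zero.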

So, notwithstanding our intuition that there are some trends and significant variations when we look at the ups and downs of this data over time, what the control chart shows us is that in fact nothing special has happened at all over the entire history of this data.

One of the lessons from this exercise is that when we look at a chart showing numbers going up and down, we should ask ourselves - is this really significant, or is this just random variation that is built into the structure of our process?

Assignable and common causes

There are many versions of control charts and many rules that can be applied to detect variation that is worth following up. We suggest that in the first instance there are two rules that you should look for (see the sketch after this list):

  • Any point that is above the Upper Control Limit or below the Lower Control Limit
  • Any run of eight or more points on the same side of the long-term process mean.
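Here is a minimal sketch of these two checks, reusing beads and xmr_limits from the earlier sketch. The signals helper is hypothetical, and the continuation data is invented to trigger both rules:

```python
def signals(x, centre, lcl, ucl, run_length=8):
    """Flag points outside the limits, and runs of `run_length` or
    more consecutive points on the same side of the mean."""
    outside = [i for i, v in enumerate(x) if v > ucl or v < lcl]
    side = lambda v: (v > centre) - (v < centre)   # +1 above, -1 below, 0 on
    runs, i = [], 0
    while i < len(x):
        j = i
        while j < len(x) and side(x[j]) == side(x[i]) != 0:
            j += 1
        if j - i >= run_length:
            runs.append((i, j - 1))
        i = max(j, i + 1)
    return outside, runs

# Hypothetical continuation: one spike, then a sustained shift downwards.
x = beads + [21] + [6, 5, 7, 6, 5, 4, 6, 5, 7]
print(signals(x, *xmr_limits(beads)))   # limits from the original data only
```

The spike at index 12 trips the first rule and the nine consecutive low points trip the second, so this prints ([12], [(13, 21)]).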

The chart at left takes the first 24 points from the chart above and, without recalculating the control limits, plots another 12 hypothetical data points. (You don't get these results from any Red Beads exercise I've ever seen.)

Note that point 25 is above the Upper Control Limit - a signal that something happened that would be worth investigating.

Then, from point 28 onwards, we see a run of points on the same side of the mean. In this case, because less is better, we may interpret this as evidence of improvement (assuming that we know what we did to make the improvement!). We may then want to calculate new control limits based on the data from the new (improved) process.

Conversely, if the run is on the adverse side of the mean, it is evidence of a problem that merits investigation.
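Continuing the toy example above, once an improvement is confirmed, the fresh limits would be based on the post-change data only:

```python
# Keep only the points from the new (improved) process - here,
# everything after the hypothetical shift at index 13.
new_centre, new_lcl, new_ucl = xmr_limits(x[13:])
```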

The XmR chart will show us two things that we need to know. Firstly, it will show us where there are isolated or persisting changes to the process. These are statistical signals - “assignable” or “special” causes - that “something different happened here”. They should be followed up promptly by an investigation into the cause of the change, and appropriate action taken to prevent its recurrence.

Systematic process improvement is very difficult in processes that exhibit unpredictable variation, which is why assignable causes should be identified and rectified. Obviously, to find and fix the problems you need current data - it’s hard to fix a problem that occurred three months ago - so control charts need to be maintained and kept up to date, preferably on a daily basis.

Secondly, the XmR chart will show you the amount of residual “common cause” variation in the process that you can expect to see in the future, all else being equal. This spread of variation is not attributable to specific causes or problem instances, and can only be reduced by attention to the overall process, which is the thrust of the Analysis phase of the improvement project.

A process that has no assignable causes within the period of observation may be said to be stable, which is of course no guarantee that it will continue to be so. Being stable is also no guarantee that the process is meeting customer requirements. It is entirely possible to produce a predictable proportion of unsatisfactory outcomes.

Tampering

Making changes based on individual data points that are within the range of common cause variation is known in the trade as “tampering”. It can readily be shown that tampering will increase, not reduce, the amount of variation in the process, so the temptation should be resisted. However, the only way to resist tampering is to track your data on a control chart.
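To see why, here is a minimal simulation in the spirit of Deming's funnel experiment (the classic "rule 2" adjustment), assuming a stable process whose only variation is common cause noise around a target of zero:

```python
import numpy as np

rng = np.random.default_rng(7)
noise = rng.normal(0, 1, 100_000)   # pure common cause variation, target = 0

# Tampering: after each result, shift the process setting to
# "compensate" for the last deviation from target.
setting, tampered = 0.0, []
for e in noise:
    result = setting + e
    tampered.append(result)
    setting -= result               # the well-intentioned adjustment

print(f"hands-off sd: {noise.std():.3f}")        # ~1.00
print(f"tampered  sd: {np.std(tampered):.3f}")   # ~1.41, i.e. sqrt(2) worse
```

The compensating adjustments inflate the standard deviation by a factor of about √2 - the opposite of what the adjuster intends.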

In training, I often ask for a show of hands "who doesn't have enough work to do?". I see very few hands. If you have plenty to do, then it makes sense to use control charts to show you where your efforts are needed.

Why the term “control chart”?

For some reason this is a term that bothers people. We think it is a perfectly reasonable term. The “control” in “control limits” signifies that this is the range of variation you can expect from the process if you continue to apply your current management controls. Variation outside this range signifies a failure of the controls - it is “uncontrolled”.