
The EXPAT

=====================================================

39. Statistical Quality Control

=====================================================

One needs to understand how to obtain fair and representative samples from any production process. It clearly isn’t enough to just pick the first 60 loaves that come off the assembly line during the early morning hours, because this wouldn’t give every loaf the same chance of being in the sample.
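To make the idea concrete, here is a minimal Python sketch of simple random sampling; the lot size of 1,000 loaves is hypothetical, chosen only for illustration:

    import random

    # Hypothetical lot: 1,000 loaves, numbered in production order.
    lot = list(range(1, 1001))

    # random.sample gives every loaf the same chance of being selected,
    # unlike grabbing the first 60 off the line.
    sample = random.sample(lot, 60)
    print(sorted(sample))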

When carrying out experiments to learn about the effects of certain controlled factors (that is, factors that can be changed on purpose), one must be keenly aware that results can be affected by “nuisance” factors that are not controlled during the course of the experiment.

Quite often, one doesn’t even know what these nuisance factors are or how they enter into the experiment. For illustration, consider an experiment that attempts to compare the impact of two training methods by exposing different groups of employees to the two methods and studying whether the type of training affects certain test scores.

One must find a “fair” way to partition the work force into two comparable groups. It certainly would be bad to have responsible workers in one training method and the rest in the other; such an arrangement could bias the comparison, because responsible workers might score higher on any test.

Randomized Sequencing

Randomization, in which the participants for method 1 (and method 2) are selected at random, provides a fair way. Such randomization is essential, as quite often we don’t even know which employee characteristics could have an impact on the comparison.
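As a rough sketch of how such a random split could be carried out, here is a short Python example; the employee names and the even split are hypothetical:

    import random

    # Hypothetical work force to be divided between the two training methods.
    employees = ["Avery", "Blake", "Casey", "Drew", "Emery", "Finley"]

    random.shuffle(employees)          # every ordering is equally likely
    half = len(employees) // 2
    method_1 = employees[:half]        # first half gets training method 1
    method_2 = employees[half:]        # second half gets training method 2

    print("Method 1:", method_1)
    print("Method 2:", method_2)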

Variation in experimental results is a fact of life. It arises because of the many uncontrolled and uncontrollable factors that influence the process. By replicating (repeating) the experiment, one guards against reading too much into the result of a single run.

Learning is an iterative process. Each time we look at the results of an experiment, we obtain further information that allows us to focus our investigation even better.

Because of this, one should always carry out investigations in the form of a sequence of small experiments. Experiments cost money, and normally one operates within a certain fixed budget. If all resources are used up in one giant experiment, then there is no opportunity to put the resulting knowledge to use and to ask further, more focused questions. Hence it is a bad idea to exhaust one’s budget with a single massive experiment; it is much better to conduct a sequence of small ones. The strategy of changing only one factor at a time is also a very poor approach.

Data Collection

A lengthy manufacturing process involves eight consecutive steps. The output, an electrical parameter, cannot be measured until the final process step is completed, even though the cause of variation in the output may be the raw material used at the first step or in any of the following operations.

The manufacturer found that nearly 50% of the output failed to meet the customer’s specifications. So the engineer who designed the process was called in to help find a solution.

The engineer, after consideration, decided that step 4 might be where the problem was originating. When he designed the circuit, he designated a process temperature range from 160 to 180 degrees at this step. However, that range was somewhat arbitrary and had never been statistically validated.

So he decided to conduct a test. He told the shop foreman to produce three pieces at each of three temperatures: 160°, 170°, and 180°. “We’ll tag each of the nine pieces, so we’ll know at which temperature each was produced. Then, when they reach the end of the line, we’ll measure the parameter.”

The spec for the electrical parameter they would measure at the end of the process was 4.5 units minimum to 6.5 units maximum.

Here’s what happened: The first three pieces (#1, #2, #3) were put through step 4 at 160° at around 9:30 am. The next three (#4, #5, #6) were put through step 4 at 170° around 10:15 am the same morning. And the last three units (#7, #8, #9) were put through step 4 at 180° at 11:00 am.

Two days later, when the tagged pieces had become finished products, they were measured. The engineer found that all three pieces produced at 160° were unacceptable. Only one unit produced at 170° was acceptable. But at 180°, all three units were acceptable.
T1 (160°) = 2.7, 1.9, 3.6
T2 (170°) = 4.2, 3.8, 4.5
T3 (180°) = 4.8, 5.7, 5.2
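As a quick check of those counts, here is a small Python sketch that tests each reading against the 4.5–6.5 unit spec (the data are the engineer’s nine measurements above):

    SPEC_MIN, SPEC_MAX = 4.5, 6.5   # the customer's spec limits

    readings = {
        "T1 (160°)": [2.7, 1.9, 3.6],
        "T2 (170°)": [4.2, 3.8, 4.5],
        "T3 (180°)": [4.8, 5.7, 5.2],
    }

    for temp, values in readings.items():
        in_spec = sum(SPEC_MIN <= v <= SPEC_MAX for v in values)
        print(f"{temp}: {values} -> {in_spec} of {len(values)} within spec")

Run as written, it reports 0 of 3 within spec at 160°, 1 of 3 at 170°, and 3 of 3 at 180°.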

So, the engineer raised the temperature at step 4 from 175° to 185°.

But the yield remained at only 50%. What went wrong?

ANALYSIS

The way the engineer arranged the nine parts when he examined the results of the test led him to draw a false conclusion. He looked at the measurements for the pieces in the same order that they went through step 4: (#1, #2, #3; #4, #5, #6; #7, #8, #9). But something else was happening in the process at the same time the test was conducted. The engineer’s sequence of data was in phase with some nonrandom pattern. The graph below reveals that pattern. The solid black dots represent the nine pieces tested; the other dots represent readings from units treated within the normal process temperature range.

As we can see, some factor connected to time, not temperature, was the actual source of variation. The output was rising more or less steadily during the day, a nonrandom pattern.

Since there are 9! = 362,880 equally likely orderings of the numbers 1 through 9, the odds against their falling into the order 1, 2, 3, 4, 5, 6, 7, 8, 9 at random are over 300,000 to 1. However, if we force the run order into a truly random sequence, we can avoid the engineer’s mistake. For example, find nine random numbers (from a random-number table, a calculator, or a PC), such as the following:
30 93 44 77 33 73 78 80 65

Then assign the part numbers to the random numbers by labeling them according to their rank from lowest to highest (a short sketch of this procedure follows the layout below). So now we have:
(P = part number)

Random #:  30  93  44 | 77  33  73 | 78  80  65
Part:      P1  P9  P3 | P6  P2  P5 | P7  P8  P4
Temp:          T1           T2           T3
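Here is a minimal Python sketch of that ranking trick; it assumes the nine draws are distinct (as they are here), and in practice the draws would come from a random-number generator rather than being typed in:

    import random

    draws = [30, 93, 44, 77, 33, 73, 78, 80, 65]   # the nine numbers from the text
    # In practice, draw them fresh: draws = random.sample(range(100), 9)

    # Rank each draw from lowest (1) to highest (9); part Pk goes in the
    # time slot whose draw has rank k.
    rank = {d: r + 1 for r, d in enumerate(sorted(draws))}
    sequence = [f"P{rank[d]}" for d in draws]

    print(sequence)   # ['P1', 'P9', 'P3', 'P6', 'P2', 'P5', 'P7', 'P8', 'P4']

    # The three temperatures are then assigned to the slots in blocks of three.
    batches = {"T1 (160°)": sequence[0:3],
               "T2 (170°)": sequence[3:6],
               "T3 (180°)": sequence[6:9]}
    print(batches)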

The same parts, processed in this sequence, would have produced no evident trend:
T1 (160°) = 2.7, 5.2, 3.6 (avg. = 3.83)
T2 (170°) = 4.5, 1.9, 3.8 (avg. = 3.40)
T3 (180°) = 4.8, 5.7, 4.2 (avg. = 4.90)
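A short sketch confirms those regrouped figures, using the same nine readings keyed by part number:

    # Original measurements, keyed by part number (the time trend, not the
    # temperature, is what produced them).
    values = {1: 2.7, 2: 1.9, 3: 3.6, 4: 4.2, 5: 3.8, 6: 4.5,
              7: 4.8, 8: 5.7, 9: 5.2}

    # Parts grouped by temperature under the randomized sequence above.
    groups = {"T1 (160°)": [1, 9, 3],
              "T2 (170°)": [6, 2, 5],
              "T3 (180°)": [7, 8, 4]}

    for temp, parts in groups.items():
        vals = [values[p] for p in parts]
        print(f"{temp}: {vals} avg = {sum(vals) / len(vals):.2f}")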

Randomized sequencing can be helpful in separating sources of variation in a multi-step process. If a nonrandom pattern results, we’ve found the likely culprit!


Funditty #39.

“There will never be a really free and enlightened state until the state comes to recognize the individual as a higher and independent power, from which all its own power and authority are derived, and treats him accordingly.”
Henry David Thoreau

Bob Robertson

Bob Robertson is a retired professional quality engineer and educator with extensive experience in manufacturing environments throughout the world, including Singapore, Indonesia, Russia, and various locations throughout the United States. Besides all that, he is Leslie Householder's admired and revered father, and she is pleased to spotlight his "Expat" stories here on her Rare Faith blog.