
It was theoretical physicist Werner Heisenberg who articulated one of the cornerstones of quantum mechanics in his uncertainty principle, often paraphrased as “It is impossible to measure something without disturbing it.” Measurements, however, are our proven method of validating and qualifying products, and they are critical to the product life cycle.

The product life cycle begins with a conceptual stage of ideation and then continues through product realization and disposal. During the production stage, measurements are acquired to determine product conformance. These measurements are critical in determining whether a product is acceptable and, if not, whether it should be scrapped or reworked.

As such, specific scientific methodologies have been developed to ensure that the chosen measurement system is adequate, and extensive effort has gone into developing and improving the science of measurement systems. At the heart of any measurement system is the concept of precision. So what is precision? The general public tends to treat precision as synonymous with accuracy, using the two terms interchangeably. However, a measurement can be precise yet inaccurate. In metrology, these terms are distinctly defined.

Precision is how closely measured values agree with one another, whereas accuracy is how closely they agree with a target or reference value. Let’s look at an example. Say we are making cylindrical gadgets with a target average diameter of 1”. In measuring 5 gadgets, we obtained the following values: 4”, 4”, 3.9”, 4”, 4”. With a conformance tolerance of ±0.5”, only gadgets ranging from 0.5” to 1.5” would be acceptable. These measurements are precise, because they cluster tightly together, yet inaccurate, because they fall well outside the acceptable range. Accuracy would require that all measurements fall within the specified conformance range. For argument’s sake, let’s review another scenario: suppose the conformance target were instead set to 4”. In that case, the original measurements would be both accurate and precise.
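As a concrete illustration, here is a minimal sketch in Python of the conformance check just described. The measurements, target, and tolerance come straight from the example above; the variable names are simply illustrative.

```python
# Worked example from the text: cylindrical gadgets with a target
# diameter of 1" and a conformance tolerance of +/-0.5".
measurements = [4.0, 4.0, 3.9, 4.0, 4.0]  # measured diameters (inches)
target = 1.0                              # specified diameter (inches)
tolerance = 0.5                           # conformance tolerance (inches)

lower, upper = target - tolerance, target + tolerance  # 0.5" to 1.5"
conforming = [m for m in measurements if lower <= m <= upper]

print(f'Acceptable range: {lower}" to {upper}"')
print(f"Conforming gadgets: {len(conforming)} of {len(measurements)}")
# Output: 0 of 5 conform. The values cluster tightly (precise) yet lie
# far outside the acceptable range (inaccurate).
```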

Now that we have distinguished accuracy from precision, the next question is: how do we measure them? Accuracy is measured by bias, and precision is measured by variance. So what does this mean? Bias is defined as the average shift of measured values from a set reference; in other words, it is calculated as the difference between the average of the measured values and the reference. If the average of the measured values equals the reference, the bias is zero. Let’s revisit the example introduced earlier. The reference value was defined as 1”, and the average of the 5 cylindrical gadget measurements was (4 + 4 + 3.9 + 4 + 4)/5 = 3.98”. The bias is the difference between the measured average and the reference, 3.98 − 1 = 2.98”. Precision, on the other hand, is the spread of the measurements about their average. One commonly used indicator of precision is the standard deviation. Because the algebraic sum of the deviations from the average is always zero, the deviations are squared to compute the variance, and its square root, the standard deviation, describes how each individual measurement differs from the average value. Standard deviation is inversely related to precision: as the standard deviation decreases, precision improves.
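The bias and standard-deviation calculations above translate directly into a few lines of Python. This sketch simply reproduces the arithmetic from the example using the standard library.

```python
import statistics

measurements = [4.0, 4.0, 3.9, 4.0, 4.0]  # measured diameters (inches)
reference = 1.0                           # target diameter (inches)

mean = statistics.mean(measurements)     # (4 + 4 + 3.9 + 4 + 4) / 5 = 3.98
bias = mean - reference                  # 3.98 - 1 = 2.98  -> accuracy
spread = statistics.stdev(measurements)  # sample std dev ~= 0.045 -> precision

print(f"mean = {mean:.2f}, bias = {bias:.2f}, stdev = {spread:.3f}")
# A large bias (2.98") signals poor accuracy; a small standard
# deviation (about 0.045") signals high precision.
```

Note that `statistics.stdev` computes the sample standard deviation (dividing by n − 1); `statistics.pstdev` would give the population version, which matters little here but is worth choosing deliberately in a real measurement study.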

In summary, knowing the effectiveness of a measurement system requires understanding its accuracy and precision. Together, accuracy and precision determine the robustness and reliability of the measurement system: accuracy involves identifying the system’s bias, and precision involves identifying its spread. While this article has explained the fundamental concepts of a measurement system, the second article in this series will discuss its application, which requires an understanding of normal distributions, the gage or measuring device, repeatability, and reproducibility (commonly referred to as a gage R&R study).

 

ABOUT THE AUTHOR(S)

Dr. Winston Sealy

Winston Sealy is an assistant professor of manufacturing engineering technology at Minnesota State University.  He teaches design, metrology, and automation courses and has over 15 years of industrial experience. His areas of research are in product design and automation. He is a co-director of the Minnesota Center for Additive Manufacturing.

 

Published: 2017-10-27