Metrology Monday!  #84 A Discussion on Conformity Assessment, Decision Rules and Measurement Decision Risk – The Problem with Conformity Statements
Here is a hint, the problem with conformity statements has something to do with uncertainty


I wanted to express my gratitude to all who have been patiently reading my posts on Measurement Decision Risk.  I know everyone wants to jump directly to decision rules and risk, but I felt it was important to build up our discussion on this topic so that we are aligned before we dive into the more difficult parts of this discussion.

That being said, I am ready to get to the heart of the issue.  The problem with conformity statements is that the measurement is only an estimate of the true value that we seek to measure.  We can’t say with absolute certainty that the true value lies inside or outside of the specification limits.  The simple reason for this is that, as we know, every measurement has an associated measurement uncertainty.


An example of a measurement at a given test point

In the picture above, the estimate of the measured value is indicated by the peak or center of the distribution.  The shape and width of the distribution communicate the uncertainty associated with the measured value.  We know that the true value associated with the measured value should exist somewhere within the area under the normal distribution (bell curve) for the measurement, but it is unknown and unknowable.
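
To make this concrete, here is a minimal sketch (my own illustration, not from the post) that treats the true value as normally distributed around the measured value, with a standard deviation equal to the standard measurement uncertainty, and computes how much of that distribution lies between the specification limits.  All test-point numbers are hypothetical.

```python
# A minimal sketch, assuming the true value is modeled as normally distributed
# around the measured value with a standard deviation equal to the standard
# measurement uncertainty.  All test-point numbers below are hypothetical.
from scipy.stats import norm

def probability_of_conformance(measured, std_uncertainty, lower_sl, upper_sl):
    """Fraction of the measurement distribution lying between the spec limits."""
    dist = norm(loc=measured, scale=std_uncertainty)
    return dist.cdf(upper_sl) - dist.cdf(lower_sl)

# Hypothetical test point: nominal 10.000, limits +/-0.010, measured 10.006,
# standard uncertainty 0.003
print(probability_of_conformance(10.006, 0.003, 9.990, 10.010))  # about 0.91
```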

This picture demonstrates the measurement at a test point, where -SL represents the lower specification limit for the test point, Nom represents the nominal value, and +SL represents the upper specification limit.  You can observe that the measured value is within the specification limits, but the measurement uncertainty extends beyond the upper specification limit.  Since the true value can exist anywhere under the uncertainty distribution, the true value could be within the specification limits, or it could be outside of the specification limits but still within the distribution.  This is the simplest way that I can illustrate the entire challenge of conformity assessment.

When we perform calibration and verification of a product, or are doing a final test on a manufactured product, our fundamental task is to:

·       Accept good units

·       Reject bad units

When we perform a measurement for a given test point that is bounded by a specification or maximum permissible error, we evaluate our measurement against our rules and make a decision as to whether the unit is good or bad.  There are two possible outcomes of making this decision (a short simulation sketch follows this list):

·       You are right

·       You are wrong
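
As a hedged illustration of these two outcomes, the Monte Carlo sketch below (my own, with hypothetical numbers) simulates units whose true values scatter around nominal, measures each one with some measurement error, accepts whenever the measured value is inside the limits, and then counts the two ways of being wrong: accepting a bad unit and rejecting a good one.

```python
# A hedged Monte Carlo sketch with hypothetical numbers: simulate units, measure
# them with error, accept when the measured value is inside the spec limits, and
# count how often that decision turns out to be wrong in each direction.
import numpy as np

rng = np.random.default_rng(84)
NOM, SL = 10.000, 0.010                             # nominal and symmetric limit (+/-)
N = 100_000

true_values = rng.normal(NOM, 0.004, N)             # assumed spread of the product itself
measured = true_values + rng.normal(0.0, 0.003, N)  # assumed measurement error

accepted = np.abs(measured - NOM) <= SL             # decision made from the measurement
in_spec = np.abs(true_values - NOM) <= SL           # what perfect knowledge would say

accept_bad = np.mean(accepted & ~in_spec)           # wrong: accepted a bad unit
reject_good = np.mean(~accepted & in_spec)          # wrong: rejected a good unit
print(f"accepted bad units: {accept_bad:.3%}, rejected good units: {reject_good:.3%}")
```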


Example 1, calibrating a product and determining conformance

The first example I have here is where we once again show a test point that has a lower limit, a nominal value, and an upper limit.  We take a measurement, represented by the dot, and it looks like the dot is within the specification limits, so we should be confident in our decision that the unit is good, or is within specifications, right?


Example 2, calibrating a product and determining conformance

But we know that all measurements contain uncertainty, so we will show the same measurement with error bars that are used to communicate the magnitude of the measurement uncertainty.  Once again, most people feel pretty confident that the measurement indicates that the unit is within specifications.
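
For readers who want to see what the error bars imply, here is a small sketch assuming (my assumption, not stated in the post) that the bars span the expanded uncertainty U = k·u with a coverage factor k = 2; it simply reports whether the whole interval sits inside the specification limits.

```python
# A small sketch, assuming the error bars span the expanded uncertainty U = k*u
# with a coverage factor k = 2 (my assumption; the post does not state k).
def interval_within_spec(measured, std_uncertainty, lower_sl, upper_sl, k=2.0):
    """True if the whole measured-value +/- U interval sits inside the limits."""
    expanded_u = k * std_uncertainty
    return lower_sl <= measured - expanded_u and measured + expanded_u <= upper_sl

# Hypothetical values in the spirit of Example 2: value near nominal, modest uncertainty
print(interval_within_spec(10.004, 0.002, 9.990, 10.010))  # True -> bars inside the limits
```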


Example 3, calibrating a product and determining conformance

Now, I am adding in another example of the exact same measured value, but the uncertainty is larger this time.  Are you still confident that the measurement indicates that the product is really within specifications?
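
To put a hypothetical number on that intuition, the short sketch below (my own comparison, with made-up values) evaluates the same measured value with increasingly large standard uncertainties; the probability that the true value lies inside the specification limits falls as the uncertainty grows.

```python
# A hedged comparison with made-up numbers: the same measured value evaluated
# with increasingly large standard uncertainties.  The probability that the true
# value lies inside the spec limits drops as the uncertainty grows.
from scipy.stats import norm

lower_sl, upper_sl, measured = 9.990, 10.010, 10.006

for u in (0.001, 0.003, 0.008):   # hypothetical standard uncertainties
    p = norm.cdf(upper_sl, measured, u) - norm.cdf(lower_sl, measured, u)
    print(f"u = {u}: probability of conformance ~ {p:.1%}")
```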

If your uncertainty is large compared to the specified requirement, how confident are you in your declaration of pass/fail or in-tolerance/out-of-tolerance?  I think that instinctively we understand that the larger our uncertainty is compared to the specification, the larger our risk of making a bad decision becomes.  I hope that these examples help to confirm this for you.  The first example also demonstrates a really important point: if we don’t know the uncertainty for a particular measurement, how could we ever evaluate our risk of being right or wrong? #MetrologyMonday #FlukeMetrology

An insightful breakdown of the challenges surrounding measurement uncertainty and decision risk! Accurate measurements and managing uncertainty are critical to avoiding incorrect conformity decisions. A strong calibration system that tracks uncertainty and risk is key to making confident, data-driven choices!

Ajay MV


Simple, insightful explanation, Jeff Gust. As a test development engineer, I rarely go deep into implementing the uncertainties of the measurements I read from instruments; it is certainly a must-do checkbox to reduce failures in downstream production stations.


Mr. Gust, my colleagues and I have had a brief discussion regarding the use of the term "true value" in your publication. Could you please let us know if you have explored this concept in more detail in another publication? If so, could you provide the reference number of that publication? Our discussion revolved around whether "true value" should be understood as a single unique value or a range of values that fit the definition of a measurable quantity.


If high particle counts were reinforced with low yields, this sounds like quality and manufacturing looking to blame each other for failures and loss of profits?


Thanks for explaining the concept in the simplest words for ease of understanding by all.
