Metrology Monday!  #90 A Discussion on Conformity Assessment, Decision Rules and Measurement Decision Risk – EOPR, TAR and TUR

There were some great comments in the discussion on post 89 last week, and I wanted to clarify one part of the conversation before moving on to new subject matter.

While manufacturers can publish confidence specifications for products, when evaluating False Accept and False Reject risk for your calibration process, it is better to get an estimate of End of Period Reliability (EOPR) for your observed calibration process.  You can evaluate this by counting how many units you calibrated over a period of time, such as a year, and then counting how many of those units were found to be in tolerance during that time.  If you calibrated 100 Fluke 87s, and 95 of them were found to be in tolerance, your EOPR is 95%.  The larger your sample size, the better it is for your risk analysis, so you may want to evaluate the EOPR value for a larger common sample, such as all 3 ½ digit multimeters.  My experience has been that for most labs, the EOPR is usually between 90 and 95%.
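
To make the arithmetic concrete, here is a minimal sketch of the EOPR calculation. The counts are the hypothetical Fluke 87 figures from the example above, not real calibration data.

```python
# Minimal sketch: estimating End of Period Reliability (EOPR)
# from calibration history counts.

def eopr(in_tolerance: int, total_calibrated: int) -> float:
    """Return EOPR as a percentage: in-tolerance units / total units calibrated."""
    return 100.0 * in_tolerance / total_calibrated

# 95 of 100 Fluke 87s found in tolerance over one year
print(f"EOPR = {eopr(95, 100):.1f}%")  # EOPR = 95.0%
```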

Now, on to what I really wanted to talk about today: TAR and TUR.

TAR is an acronym for Test Accuracy Ratio.  This is defined as the ratio of the tolerance limits of the Device Under Test (DUT) to the published accuracy specification for the instrument(s) used for the calibration (test).  This term came into use in the 1980s, particularly for conformance to MIL-STD 45662A, the calibration quality standard that most people in the United States used, which was introduced in 1988.

MIL-STD 45662A stated this specifically in requirement 5.2: “Unless otherwise specified in the contract requirements, the collective uncertainty of the measurement standards shall not exceed 25 percent of the acceptable tolerance for each characteristic being calibrated.”

The term “collective uncertainty” in 1988 was interpreted as the collective accuracy specification of the standards used.  If more than one standard was used in the measurement process, their accuracies were either added together directly or combined by root sum of squares.  This term was used because in 1988 there was no ISO Guide to the Expression of Uncertainty in Measurement (GUM); the GUM was not published until 1993.

Now, on to TUR.  This is an acronym that is defined in ANSI/NCSL Z540.3-2006, definition 3.11, as the ratio of the tolerance of a measurement quantity subject to calibration, to twice the 95% Expanded Uncertainty of the measurement process used for calibration.  The factor of “twice” accounts for two-sided specifications.  For example, if the specification is +/- 1%, the total range of the specification is 2%, and if the Expanded Uncertainty is 0.25%, twice this is 0.5%, so the TUR would be 2%:0.5%, or 4:1.
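
Here is a minimal sketch of that calculation for a two-sided specification, using the hypothetical numbers from the example above.

```python
# Minimal sketch of the Z540.3 TUR calculation for a two-sided specification.

def tur(spec_limit: float, expanded_uncertainty_95: float) -> float:
    """TUR = span of the two-sided tolerance (+/- spec_limit)
    divided by twice the 95% expanded uncertainty."""
    tolerance_span = 2.0 * spec_limit
    return tolerance_span / (2.0 * expanded_uncertainty_95)

# +/- 1% specification with a 0.25% expanded uncertainty -> 4:1
print(f"TUR = {tur(1.0, 0.25):.1f}:1")  # TUR = 4.0:1
```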

It is clear that the definition of TUR meets the spirit of the TAR definition, but it is aligned with uncertainty as described in the GUM.

When the concept of TAR was developed, the main types of calibration the author had in mind were electrical/RF calibrations needed to support missiles and nuclear weapons (which I will explain more about next week).  For these cases, the dominant source of measurement uncertainty was from the standard itself, with very little Type A uncertainty from process variation; therefore, the TAR and TUR would be very close to the same value.

There are times when TAR and TUR can be quite different.  This occurs when there is a significant amount of uncertainty from sources other than the standards.  A good example of this is dimensional calibrations, where operator influence, as seen in the repeatability, can be much larger than the uncertainty of the standard, and where environmental influences can also be large.  Another good example is torque wrench calibrations, where the product specification is usually around 4%, the uncertainty of the torque standards is about 0.2%, but the repeatability of the operator is about 1%.  For this case the TAR is about 20:1, but the TUR is a little less than 4:1.  That being said, if we think for a moment about the TAR definition from MIL-STD 45662A being based on the “collective uncertainty”, it aligns closely with the definition of Expanded Uncertainty from the GUM.
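
The torque wrench numbers can be reproduced with a short sketch. One assumption I am making that is not stated above: the 0.2% standard uncertainty and the 1% operator repeatability are both treated as 95% expanded uncertainties and combined by root sum of squares.

```python
# Sketch contrasting TAR and TUR for the torque wrench example.
# Assumption: the 0.2% and 1% figures are both 95% expanded
# uncertainties, combined by root sum of squares (RSS).
import math

spec = 4.0        # +/- 4% product specification
u_standard = 0.2  # uncertainty of the torque standards, %
u_repeat = 1.0    # operator repeatability, %

tar = spec / u_standard                    # standard's accuracy only
u_combined = math.sqrt(u_standard**2 + u_repeat**2)
tur = (2.0 * spec) / (2.0 * u_combined)    # Z540.3 TUR definition

print(f"TAR = {tar:.0f}:1")   # TAR = 20:1
print(f"TUR = {tur:.1f}:1")   # TUR = 3.9:1, a little less than 4:1
```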

If you happen to be using Fluke MET/CAL software, I want to note that MET/CAL uses a completely different name for TAR; its literature refers to it as the Test Specification Ratio, or TSR.

For all future evaluations of False Accept and False Reject that I will show, any mention of ratio will always be TUR.  Does that mean we should clear the term TAR out of our minds and never speak of it again?  I would not recommend doing this.  It is important to understand the difference between TUR and TAR.  There is still an area where we at Fluke use the term TAR, and that is when we are developing new calibration procedures.  One of the first steps is to look at the specifications of the device we are trying to calibrate.  We want to end up with an acceptable value for False Accept and False Reject risk.  When selecting the standards, as a first step we will look for something with a TAR of 5:1 or better.  We know that we can’t evaluate the uncertainty of measurement until most of the process is put together, and if we want a TUR of 4:1 or better once we include repeatability and reproducibility, the TAR needs to be better than that. #MetrologyMonday #FlukeMetrology

Saeid Nabagh

Calibration Technician at Alpha Controls & Instrumentation Inc.

Is it correct to use the tolerance or specification and the uncertainty at different confidence levels for conformity assessment? For example, a specification at 99% and an uncertainty at 95%.

Nice explanation, Jeff. When we were developing Z540.1 this was a point of a lot of discussion.

Jeff Gust These posts continue to be really helpful for discussing the crucial overall question: "where is our industry headed"? Clearly TAR should remain as a useful rule of thumb, but the technical development of Measurement Uncertainty has been going on for three decades and will continue to provide a better and better perspective for managing overall measurement risk as far as our customers are going to be concerned. By no means does that mean that this will be an easier pill for them to swallow. Customers are at least as "risk averse" as ever. That is just a cost of doing business in the Metrology sector. GOOD POST!

Gregg Losonsky

Quality Manager (Metrology) at Precision Instrument Correction Inc.

Memories of 1987 Cal School, NAS North Island, San Diego, CA. I like both. I look at the TAR and the TUR.
