May I Have an Extension, Please?

Committee News January 2008

By Mark Kuster

Most of us encounter or hear of the following scenario before our careers advance past their most tender years. Perhaps you use a piece of test equipment whose calibration has just expired, you still have a series of urgent tests to complete over the next week, and you have no backup equipment. You’d rather not delay the tests to submit the equipment to the cal lab for recalibration. So, do you ask for an extension or don’t you? What difference does it make?
 
Or maybe you’re the person in the cal lab who’s been asked to extend a calibration. How do you decide? Are there clearly defined criteria upon which to base a decision? An ancient Egyptian quality policy required that you either have your cubit recalibrated at every full moon or die. Not much ambiguity there!

Do you have a stock answer from policy? “Our quality system doesn’t allow extensions.” Or, at the other extreme, “A six-month extension? Any time! We’ll send a new sticker over right away.” Ideally, a policy responds to circumstances more than either of these examples.

A step up from an all-or-nothing policy is reviewing the item’s calibration history (you do keep history, don’t you?): “It’s only been out of tolerance (OOT) once in twelve calibration intervals, so a short extension shouldn’t be a problem.” OK, maybe, but what if it had been OOT twice? Would that be too often? What about only once in seven intervals? Again, are there any clear criteria you can use?

You may not even have an explicit expiration date. If you procure accredited calibrations compliant to ANSI/ISO/IEC 17025, you probably receive a calibration date and all the measurements and uncertainties, but no stated due date. Your calibration certificates from national metrology institutes (NIST, NPL, PTB, etc.) probably have no expiration dates either. Are you off the hook? Saved by a total eclipse of the moon? 

Sorry, but no. Ask yourself when the quoted uncertainties held true. Strictly speaking, unless otherwise stated, the reported uncertainties were valid only at the time of measurement. So what uncertainty applies two years later? Is there a time (that pesky due date) when the uncertainty becomes too large? Moreover, if your accredited certificate states compliance with specifications (at the time of calibration, of course), how much time can pass before that statement loses credence due to instrument drift, handling, exposure, etc.? In many cases, the manufacturer’s specifications may be complete enough to help you make an informed decision, but not always.

Impacts

Regardless of the situation, measurement equipment has due dates, whether explicit or not, whether we know it or not, and whether we acknowledge it or not. The optimum due date, and the results of deviating from it, ultimately involve risk, costs, and the consequences of every significant outcome the measurement equipment’s performance can influence. What happens if your measurement equipment is OOT enough to make the wrong call on some or all of your tests? What are the odds of that happening in the first place? Are the consequences relatively minor, such as returns, wasted materials, and unnecessary rework and rejects, or major, like serious accidents, security incidents, weapon system failures, loss of reputation in the marketplace, massive recalls, or even bankruptcy?

That’s a huge can of worms to open and beyond our scope here. Typically, the measurement professional relies on specifications, experience, education, and training and makes a judgment—not just when extension issues arise, but also every time new equipment gets a calibration interval in the first place, or existing equipment without an explicit due date goes for recalibration. This expert judgment is perfectly suited to many situations—e.g., those in which there are no serious consequences to worry about and little justification for more elaborate analysis. However, much of the measurement world operates in highly competitive or regulated environments where any measurement or testing decision could affect the bottom line, attract scrutiny, or have more serious consequences. There, judgment calls become less satisfactory and riskier, so we would like a more quantitative basis for our decisions.

RP-1

The guiding document for calibration intervals is NCSLI’s Recommended Practice RP-1, Establishment and Adjustment of Calibration Intervals. RP-1 provides the framework of parameters, tools, and methods with which to establish, monitor, and adjust calibration intervals, and to relate intervals to consequences. If you have any desire to move away from judgment-based calibration intervals, you should explore this RP.

Originally published in 1979, RP-1 now stands at its third edition, issued in 1996. NCSLI’s 173.1 Calibration Intervals Subcommittee working group, under the 173 Metrology Practices Committee, is currently revising RP-1 with an eye toward releasing the fourth edition in 2008. The working group last reported progress toward this goal at the 2007 NCSLI Conference in St. Paul, so for more detailed information on the planned changes, see the paper in the proceedings or the presentation file on the NCSLI web site.

Looking back at our original questions, notice that they all boil down to asking what a calibration expiration date really means. What are the risks of extending the due date or setting an interval too long or too short? Does the equipment immediately go OOT after that date? We joke about consumer products that fail as soon as the manufacturer’s warranty expires, but in general we don’t see test and measurement equipment that is always in tolerance for some specific interval and then always out of tolerance if the interval runs any longer. Though we usually know which instruments are the “bad eggs” or the “dogs,” the specific time when they act up seems an unpredictable, hit-or-miss affair.



Figure 1. Measurement Reliability Chart.

That’s the first step to understanding interval analysis: predicting when an instrument will be OOT is not practical, but it is reasonable to believe that the chances of the instrument being OOT are predictable. Speaking loosely, RP-1 defines measurement reliability as the probability that all the test equipment’s functions and features of interest are in tolerance. Figure 1, taken from the RP, shows measurement reliability decreasing with time after calibration. Advanced interval analysis methods attempt to identify the function or curve that most closely models this decline for a particular equipment type. Having that, we can immediately estimate how likely the instrument is to still be in tolerance at any time after calibration. That means a quality system can set reliability targets (the minimum reliability equipment should have during the interval) for equipment or processes, and bingo—we start to see some clear criteria for due date decisions.
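To make the idea concrete, here is a minimal sketch in Python that fits the simplest reliability model, the exponential, to invented calibration-history data. The data, the bin layout, and the least-squares shortcut on log reliability are illustrative assumptions; RP-1 itself covers several candidate models and more rigorous fitting methods such as maximum likelihood.

```python
# Hypothetical sketch: fit an exponential reliability model
# R(t) = exp(-lambda * t) to calibration history, then predict the
# probability that an instrument is still in tolerance t months
# after calibration. The history data below are invented.

import math

# (months since calibration, units checked, units found in tolerance)
history = [(3, 40, 39), (6, 55, 51), (9, 30, 26), (12, 60, 48)]

# Observed reliability per bin is k/n; fit ln R = -lambda * t through
# the origin by least squares: lambda = -sum(t * ln R) / sum(t^2).
num = sum(t * math.log(k / n) for t, n, k in history)
den = sum(t * t for t, _, _ in history)
lam = -num / den

def reliability(months):
    """Modeled probability the instrument is still in tolerance."""
    return math.exp(-lam * months)

for m in (6, 12, 18):
    print(f"R({m} months) = {reliability(m):.1%}")
```

With a fitted curve in hand, a reliability target translates straight into an interval: solving reliability(t) = target for t (here, t = -ln(target)/lam) gives the longest calibration interval that still meets the target.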

Corresponding to the decline in measurement reliability after calibration is the increase in the uncertainty of an instrument’s parameters, or uncertainty growth. Uncertainty growth reflects our growing lack of knowledge of a parameter’s state after we remove the instrument from the calibration setup. Drift, abuse, environment, handling, and other random effects are among the culprits that prevent us from claiming a constant uncertainty. Performing another calibration, however, removes that additional uncertainty. So if our test equipment’s measurement uncertainty grows with time until the next calibration, the risk of making wrong decisions based on test results also increases throughout the interval, and another way to control quality is to establish maximum acceptable risks for processes.
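As one illustration of how uncertainty growth might be modeled, consider a random-walk picture in which the parameter’s variance grows linearly with elapsed time. The model choice and all the numbers below are assumptions made for the sketch, not something the RP or this article prescribes.

```python
# Hypothetical sketch: random-walk uncertainty growth. Assume the
# reported calibration uncertainty u0 applies at t = 0 and that the
# parameter's variance grows linearly with time (a common simplifying
# assumption, invented here for illustration).

import math

u0 = 0.010          # reported standard uncertainty at calibration (units)
growth_rate = 5e-5  # assumed variance growth rate (units^2 per month)

def uncertainty(months_since_cal):
    """Standard uncertainty after the given months since calibration."""
    return math.sqrt(u0**2 + growth_rate * months_since_cal)

print(f"u(0)  = {uncertainty(0):.4f}")   # at calibration
print(f"u(12) = {uncertainty(12):.4f}")  # one year later
print(f"u(24) = {uncertainty(24):.4f}")  # two years later
```

Recalibration resets the clock: the elapsed time returns to zero and the uncertainty falls back toward u0, which is exactly the removal of additional uncertainty described above.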

Answers

The technology exists today to answer “May I have an extension, please?” quantitatively and to set clear criteria in a quality system for establishing and adjusting intervals or extending due dates. Using your own calibration history and appropriate software based on the concepts of measurement reliability and uncertainty growth, one answer might be to provide the reliability consequences and turn the decision back to the equipment user or process owner:

Our best estimate of your instrument’s measurement reliability is 84 % today, declining to 77 % over the next six months. After recalibration, the instrument’s measurement reliability will be approximately 99 %. Would you like an extension or a calibration?


In the future, fixed due dates may become irrelevant in some organizations, replaced by reliability or uncertainty as a function of elapsed time since calibration. For multiuse equipment, the instrument’s reliability at time of use could be compared against quality criteria for the test process at hand. When test equipment reliability falls below the minimum requirement, the instrument either goes for recalibration or is relegated to a less critical application. In a more sophisticated approach, the user could take the uncertainty at time of test, calculate the resulting risks in the particular testing process, and compare them against the maximum risks set forth in the quality program. Naturally, automating the computations and data access would be a key factor in such an approach.
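A toy sketch of such a due-date-free check might look like the following. The process names, thresholds, and decay constant are invented for illustration, with the exponential reliability model carried over from the earlier sketch.

```python
# Hypothetical sketch: a due-date-free disposition rule. Compare the
# modeled reliability at time of use against a per-process minimum;
# every name and number here is invented for illustration.

import math

LAMBDA = 0.0166  # assumed per-month decay constant from a fitted model

def reliability(months_since_cal):
    """Modeled probability the instrument is still in tolerance."""
    return math.exp(-LAMBDA * months_since_cal)

# Minimum acceptable reliability for each test process (assumed values).
process_minimums = {"safety-critical test": 0.95, "routine bench check": 0.80}

def disposition(months_since_cal, process):
    r = reliability(months_since_cal)
    floor = process_minimums[process]
    if r >= floor:
        return f"OK to use ({r:.1%} >= {floor:.0%})"
    return f"Recalibrate or reassign ({r:.1%} < {floor:.0%})"

print(disposition(2, "safety-critical test"))   # recently calibrated: passes
print(disposition(10, "safety-critical test"))  # too stale for this process
print(disposition(10, "routine bench check"))   # same instrument, still fine here
```

Note how the same instrument can pass for one process and fail for another, which is precisely why a single fixed due date is a blunter tool than a per-process reliability criterion.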

I won’t claim that RP-1 is light reading, but take a look at it and see what you think. We welcome your ideas, comments, and suggestions to make the RP more useful. As the measurement world progresses toward a more solid analytical understanding of how measurement decisions affect an organization’s bottom line, interval analysis systems will become more and more important.

The RP-1 subcommittee is part of the 173 Metrology Practices Committee headed by Howard Castrup. Recent RP-1 working group participants include Greg Cenker, Miguel Decos, Dennis Dubro, Steve Dwyer, Bill Hinton, Dennis Jackson, Mitch Johnson, Jorge Martins, Charlie Motzko, and Walt Seidl. For more information, contact:

Mark Kuster, mkuster@pantex.com
Howard Castrup, hcastrup@isgmax.com

With that, it’s time to sign off—this article is due at NCSLI and I don’t want to ask for an extension!