Managing Quality vs. Measuring Uncertainty in The Medical Laboratory
Clin Chem Lab Med 2010;48(1):31–40 2010 by Walter de Gruyter • Berlin • New York. DOI 10.1515/CCLM.2010.024
Opinion Paper
James O. Westgard*
Department of Pathology and Laboratory Medicine, University of Wisconsin Medical School, and Westgard QC, Inc, Madison, WI, USA

Abstract

ISO 15189's particular requirements for quality management in medical laboratories provide guidance for (a) relating performance specifications to the intended use of a test or examination procedure, (b) designing internal quality control (IQC) procedures to verify the attainment of the intended quality of test results, as well as (c) determining the uncertainty of results, where relevant and possible. This guidance has particular implications for analytical quality management, specifically for validating method performance relative to quality goals or requirements (intended use), designing statistical quality control procedures on the basis of the quality required for a test and the precision and bias observed for a method, and characterizing the quality achieved in practice by calculating measurement uncertainty. There already exists an error framework that provides practical tools and guidance for managing analytical quality, along with an existing concept of total error that can be used to characterize the quality of laboratory tests; thus there is considerable concern and debate on the merits and usefulness of measurement uncertainty. This paper argues that total error provides a practical top-down estimate of measurement uncertainty in the laboratory, and that the ISO/GUM model should be primarily directed to and applied by manufacturers.
Clin Chem Lab Med 2010;48:31–40.

Keywords: analytical quality management; measurement uncertainty; quality; total error.

*Corresponding author: James O. Westgard, 7614 Gray Fox Trail, Madison, WI 53717, USA
Phone: +1-608-833-4718, Fax: +1-608-833-0640, E-mail: [email protected]
Received July 30, 2009; accepted July 30, 2009; previously published online November 18, 2009

Introduction

Quality management guidelines and practices keep evolving in the medical laboratory. To some, it may seem like we are always trying to catch the next wave, whether it be quality control, quality assurance, total quality management, continuous quality improvement, Six Sigma, lean, patient safety, quality indicators, or risk management. In addition, we have national regulations [such as Clinical Laboratory Improvement Amendments (CLIA) in the US] and global guidelines (such as ISO) for inspection and accreditation. Keeping up with the management trends and good practice guidelines may even complicate the management of quality because we often think the newest recommendations and approaches should replace the older ones, rather than recognizing there are new tools and techniques available to help us manage quality.

Actually, all these different quality programs fit into the overall process for managing quality, as shown in Figure 1. This quality management process represents the basic scientific method, which has been described as the Plan-Do-Check-Act, or PDCA, cycle. In this illustration, the basic components of scientific management are quality planning (QP, the "Plan"), quality laboratory processes (QLP, the "Do"), quality control and quality assessment (QC and QA, the "Check"), and quality improvement (QI, the "Act") (1). The entire process is centered on quality goals, requirements, and objectives, i.e., what we are trying to achieve.

Quality management begins by understanding what we must achieve, i.e., the meaning of quality itself. Quality is such a generic term that we often do not bother to define it carefully. That omission can become a serious limitation because we believe that others have the same idea about quality. Most of us have experienced a difference of opinion about quality when dealing with complaints from our laboratory customers. Quite often there is a difference about which performance characteristics are important, as well as whether or not we satisfy the customer's need for a particular characteristic. To discuss quality, we need to have a common starting point from which to build a management framework, particularly an agreed-upon definition of quality itself, such as:

• ANSI/ASQC A3-1978: Quality – the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs (2).
• ISO 9000:2005: Quality – degree to which a set of inherent characteristics fulfills requirements (3).
• IOM 2000: Quality of care – degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge (4).
Article in press - uncorrected proof
level provides reference methods and certified reference materials, and finally, there are routine methods and calibration materials. The traceability chain provides a series of links between different reference methods and materials to document how the results of a routine method are connected to the truth.

For glycated hemoglobin, for example, there is an IFCC definitive method, three national reference methods (Sweden, Japan, US), and a network of reference laboratories that validate reference materials and monitor the performance of the national reference methods (11). These national reference methods in turn are used to maintain and monitor routine service methods and calibration materials. In the US, this function is performed by the National Glycohemoglobin Standardization Program (NGSP), which certifies the traceability of routine methods to assure their comparability (www.ngsp.org). Thus, every routine method available in the US has been compared and evaluated against the US national reference method, and is calibrated to provide results that are comparable to the US national reference method.

Why measure uncertainty?

The intent of ISO is to make measurements transferable, or comparable, on a global basis. This requires eliminating or correcting biases or systematic errors between measurement systems, and then, according to ISO, reporting any remaining variance of a test result (uncertainty) to inform the user of its quality. These are good intentions, but they are certainly not new or unique to ISO, nor are these objectives achievable using the ISO proposed concepts for measurement trueness and uncertainty only.

Dybkaer, a medical laboratory scientist who is one of the leading advocates of the ISO/GUM approach, argues for trueness and measurement uncertainty and against total error because it allows systematic errors to exist, rather than requiring their elimination (12):

"When describing the performance of procedures and the reliability of their results, ISO terminology should be used. Results should be universally comparable and this requires metrological traceability, the concomitant uncertainty (inversely) indicating reliability should be obtained in a universal and transparent fashion, and should be combinable. Therefore, the approach of (GUM), leading to a result with known bias and a combined standard uncertainty has advantages over the allowable total error concept, incorporating procedural bias".

The phrase "incorporating procedural bias" is important here. It implies that the inclusion of "procedural bias" or systematic error in the concept of total error allows manufacturers to avoid the goal of globally consistent test results:

"The allowable total error (which for practical purposes could also have been termed 'allowable deviation') is set for a given type of quantity and purpose. The distribution between constant and random contributions may then be chosen freely within the total sum, which may include a known procedural bias. This is one reason for the outcome of EQA where results are clustered in method-dependent groups when measuring systems are precise rather than true. Consequently,

• Results from different measurement procedures are not directly comparable;
• Biological reference intervals will depend on the procedure or have to be widened to accommodate results from all procedures, leading to a loss of diagnostic capability;
• Classification of biological states by comparing a given result with common limits becomes hazardous;
• Equations between different types of quantities cannot work across procedures;
• Movement of patients between health services requires repeat measurements.
• Such conditions do not seem acceptable in terms of health and resources, and may lead to complaints and loss of business".

The remedy proposed is to avoid defining a quality requirement that takes bias into account and, instead, correct for bias and then estimate the uncertainty of the test result:

"Rather than defining an allowable total error with estimated elements of all types of systematic and random error (hitherto often called inaccuracy and imprecision, respectively), any result should be corrected for known significant biases and should have a measure of uncertainty attached giving an interval comprising a large fraction of the reasonably possible values of the measurand with a given level of confidence".

The call for correction of any known biases is true to the principles of metrology, but it is a risky business in medical laboratories because there are relatively few reference methods and materials. Thus, it is difficult to know what correction is actually correct. Even after correction, there will likely be some residual biases remaining, as well as differences in the specificity of some measurement principles. Bias is not as simple as a correction factor or a conversion algorithm (or a HbA1c "master equation" for that matter).

For example, an EQA survey of HbA1c methods in the US (13) showed that some method subgroups were biased as much as –0.3% HbA1c low and +0.4% HbA1c high for a sample having a value of 6.8% HbA1c according to the NGSP reference method. The means and 95% ranges of results for 19 different method subgroups are shown in Figure 2 for some 2676 laboratories surveyed by the College of American Pathologists (CAP) in 2008. Trueness is measured by a bias of 0.06, which is excellent. However, the survey sample was reported in individual laboratories as a value as low as 5.98% HbA1c and as high as 7.86% HbA1c, i.e., with errors as large as –0.82% and +1.06% HbA1c. If a value of 6.5% HbA1c were used for diagnosis of diabetes, there
istry laboratories, though it still employed the classical concepts of precision and accuracy and their separate effects on test results. There was no guidance on how to judge the acceptability of performance when a single measurement led to the reported test result.

In 1974, working with R. Neill Carey and Svante Wold, we published a paper on "Criteria for judging precision and accuracy in method development and evaluation" (18). To my knowledge, that paper was the first publication in the clinical chemistry literature to propose the use of total analytic error for the purpose of characterizing the quality of a measurement procedure. We argued as follows:

"To the analyst, precision means random analytic error. Accuracy, on the other hand is commonly thought to mean systematic analytic error. Analysts sometimes find it useful to divide systematic error into constant and proportional components, hence, they may speak of constant error or proportional error. None of this terminology is familiar to the physician who uses the test values, therefore, he is seldom able to communicate with the analyst in these terms. The physician thinks rather in terms of the total analytic error, which includes both random and systematic components. From his point of view, all types of analytic error are acceptable as long as the total analytic error is less than a specified amount. This total analytic error … is more useful; after all it makes little difference to the patient whether a laboratory value is in error because of random or systematic analytic error, and ultimately he is the one who must live with the error".

Indeed, our purpose in introducing this concept was to characterize the uncertainty of the measurement procedure in the context of the reported result, which we did using a linear combination of bias plus two times the method SD. At that time, the standard practice in metrology laboratories and standardization organizations, such as the US National Bureau of Standards (NBS, the precursor to today's NIST), was to judge method performance based on separate assessment of precision and accuracy and to consider their combined effects by a classification scheme: (case 1) systematic error and imprecision both negligible; (case 2) systematic error not negligible, imprecision negligible; (case 3) neither systematic error nor imprecision negligible; (case 4) systematic error negligible, imprecision not negligible – a schema recommended by Churchill Eisenhart from NBS (25).

The total error concept took time to gain acceptance from clinical chemists and clinical pathologists. In the landmark "1976 Aspen Conference on Analytical Goals in Clinical Chemistry", leading laboratory professionals voiced their arguments against such a combination of errors (26). Nonetheless, the concept survived and within the next decade became well accepted in the laboratory community. Acceptance came because of its usefulness in evaluating the performance of new measurement procedures, characterizing measurement performance in peer-comparison and EQA schemes, and planning and designing internal QC procedures.

In 1992, the US CLIA regulations (27) defined criteria for acceptable performance for PT, which were, in effect, statements of allowable total errors because laboratories had to test PT samples in the same manner they tested patient samples, i.e., a single measurement. In 2002, CLSI published a consensus guideline on "Estimation of Total Analytical Error" (28) and in 2006 a guideline for IQC that outlined a step-by-step process for selecting QC rules and numbers of control measurements based on the allowable total error for the test and the observed imprecision and bias of the measurement procedure (29). In 2008, FDA provided manufacturers with guidance on "waiver applications for manufacturers of in vitro diagnostic devices" (30), recommending that manufacturers establish performance criteria for an allowable total error together with error grids to demonstrate and document that test performance satisfies a defined level of quality. Thus, total analytical error and its corresponding target in the form of an allowable total error are well accepted as part of good laboratory practices, as well as the US regulatory process for demonstrating performance for waived tests.

Today we have well-established method validation protocols, yet they do not require the ISO/GUM measurement uncertainty. Today we have quality-planning processes and tools that support the design of IQC procedures to verify the attainment of the intended quality of test results, yet they do not require the ISO/GUM measurement uncertainty. Today we have PT and EQA schemes that monitor test quality and assist laboratories in monitoring long-term bias, yet they do not require the ISO/GUM measurement uncertainty. Only accreditation under ISO 15189 requires measurement uncertainty! The actual management of analytical quality in a medical laboratory does not!

How measure uncertainty?

Technically, one of the differences between total error and measurement uncertainty is the mathematical formula for combining components of bias and imprecision. Total error makes use of a linear combination of bias plus a certain multiple of the SD, e.g., bias + 2 SD. Measurement uncertainty requires that all error components be squared to provide variances that can be added. Then the square root is extracted and multiplied by an appropriate coverage factor, which is usually 2 or 3. Mcdonald (31) has proposed the use of a "root mean square measurement deviation" (RMSD) to combine imprecision and bias, arguing that this estimation would be consistent with the ISO/GUM principles:

"The quantity … is approximately (assuming that n is large enough) the square root of the variance of the combined distribution resulting from the convolution of these two distributions. In fact, this kind of symmetric interpretation of random and systematic deviations of measurement is exactly the way measurement uncertainty is considered in the GUM, which is internationally accepted today in the field of metrology".
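The two combination rules at issue here, the linear total error (bias plus a multiple of the SD) and the squared RMSD combination (a coverage factor times the root of bias squared plus SD squared), can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the paper; the function names and the example bias and SD values are hypothetical.

```python
import math

def total_error(bias, sd, k=2):
    """Linear total error: TE = |bias| + k * SD."""
    return abs(bias) + k * sd

def rmsd_uncertainty(bias, sd, k=2):
    """RMSD-style expanded uncertainty: U = k * sqrt(bias^2 + SD^2)."""
    return k * math.sqrt(bias ** 2 + sd ** 2)

# Hypothetical method performance, in the same units as the analyte:
bias, sd = 0.3, 0.1
print(total_error(bias, sd))       # |0.3| + 2 * 0.1 = 0.5
print(rmsd_uncertainty(bias, sd))  # 2 * sqrt(0.09 + 0.01), about 0.632
```

Computed separately as upper and lower limits (bias + k·SD and bias − k·SD), the linear form preserves the direction of any bias, whereas the squared form is symmetric by construction.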
This RMSD approach seems to have been adopted in the 2008 German regulations for laboratories, where error margins can be calculated by the laboratory in this manner (32).

Thus, there exists a total error model with a linear combination of bias and imprecision, a RMSD model for combining the squares of bias and imprecision, and the detailed GUM model, which involves many different components of variation, along with a host of rules and recommendations for estimating and combining variances. The GUM approach is often described as bottom-up because of its emphasis on characterizing all individual components of variation, then adding them together to predict their total effect. Total error, in contrast, is top-down because it is estimated from the total systematic error and total random error, which include all the effects of the many individual sources of variation.

Top-down estimates typically make use of long-term imprecision, often determined over several months that include changes in reagent lots, calibrator lots, operators, operating conditions, etc. Such estimates are practical in a laboratory because they utilize routine QC data. One professional guideline by White and Farrance (33) recommends this approach, as well as pointing out that long-term estimates of bias may be determined from EQA surveys:

• "Record mean long-term imprecision of QC as estimate of uncertainty of measurement (±1.96 SD or ±1.96 CV%). The uncertainty of the value assigned to a calibrator(s) should be included if available".
• "For some methods, test results are interpreted against reference or clinical decision values that have been determined by a different method. In this situation, the uncertainty of the results includes not only the analytical imprecision of the method, but also any systematic error (method bias). For such methods the long-term bias should be recorded, ideally as full calibrator traceability and uncertainty data from the commercial supplier, or in its absence, from proficiency testing (external quality assurance) reports".

While this guideline does not specifically recommend how the information on imprecision and bias should be combined, it does make recommendations for assessing fitness for purpose by comparison with analytical goals for imprecision, bias, and total error based on biologic variation, following the guidance of Fraser (34, 35), where bias and imprecision are combined using a linear model. Thus, one might justify determining the expected total error as a practical estimate of measurement uncertainty in a service laboratory.

Why NOT measure uncertainty?

Advocates of measurement uncertainty often justify their position by emphasizing that laboratory customers need this information. For example, the Tietz Textbook of Clinical Chemistry makes the statement (36):

"The uncertainty concept is directed towards the end user (clinician) of the result, who is concerned about the total error possible, and who is not particularly interested in the question whether the errors are systematic or random…"

Note that this is the same as our original rationale justifying the use of total error (18). And observe that the language is almost the same – total error, not whether the errors are systematic or random. But while the reasoning here looks to be the same, our original objective was to use this information to manage quality to meet the physician's needs, not to report such information to physicians. I have yet to hear any customers ask for this information, and regardless of whether it is an implied or unspoken need, I disagree with the idea that reporting measurement uncertainty will improve the use and interpretation of laboratory tests. Our physician customers have limited time and patience to deal with measurement uncertainty. Along with our patient customers, they expect us to provide them with test results whose quality is managed and assured to satisfy their intended use.

To do so, we must define the quality for intended use, validate that our measurement procedures can produce that quality, control our measurement procedures to verify the attainment of that quality every day in routine laboratory operation, and report test results in a manner that provides guidance for interpretation. Fraser (34) has demonstrated a practical reporting system that flags results as follows: > higher than reference limit; < lower than reference limit; >> higher than reference limit and likely clinically important; << lower than reference limit and likely clinically important; * significant change (95% confidence level); ** highly significant change (99% confidence level). These latter two flags relate to the calculation of a reference change value that takes into account both analytical variation and within-subject biological variation. In effect, measurement uncertainty is being reported here without actually providing any numerical values for uncertainty.

Other advocates emphasize the importance of estimating measurement uncertainty to identify the need for improvements in analytical methodology (37):

"…focusing on traceability and uncertainty has the potential to increase pressure on manufacturers of assays so that they increase their efforts to improve the quality of their products. This drive for quality will include both analytical quality, i.e., the specificity of the assay, and the metrological quality of calibrators…"

Of course, manufacturers should already be doing this if they are following the ISO guidance! Laboratories do not actually need the information required by the ISO/GUM estimation of measurement uncertainty, except for a few cases where measurements are combined to provide a calculated parameter and error propagation rules must be applied to estimate uncertainty, rather than being able to measure imprecision directly. The strongest argument is actually for traceability of calibration, where measurement uncertainty plays a secondary role.

These difficulties in adapting the ISO/GUM approach to medical laboratories seem to be due to an attempt to force-fit the concepts and principles from metrology laboratories without carefully considering the practical applications that are necessary to measure and manage analytical quality in a high-production testing laboratory. The earlier warning by Horwitz and Albert (38) seems to apply again:

"Without a refinement of concept, the metrologists risk losing a large part of their chemical constituency. The presentations of the metrologists suffer from a lack of clarity and transparency to a chemical audience…"

"We suggest that instead of trying to disentangle the various threads involved in the error budget approach to uncertainty, let the measurements speak for themselves. Why bother with the various individual sources of bias and imprecision when what the chemist wants to know is merely the final integrated result…"

"The major advantage of the top-down approach is that it randomizes the locally constant individual laboratory biases into interpretable critical limits that include the major sources of chemical deviations. These are laboratories, analysts, methods, and time-factors that are left out of the uncertainty error budget calculations…"

Horwitz put it more succinctly in a later paper (39): "The absurd and budget-busting approach (for analytical chemistry) arose from metrological chemists taking over in entirety the concepts developed by metrologists for physical processes measured with 5–9 significant figures (gravitational constant, speed of light, etc.) and applying them to analytical chemistry measurements with 2 or 3 significant figures".

The theory from metrology may be pure, but the applications in medical laboratories are messy, limited, and often impractical. Unless these shortcomings are overcome, the ISO/GUM push for measurement uncertainty may cause more harm than good, violating a fundamental tenet of medical practice to "do no harm". Such efforts may be harmful because the need to characterize measurement uncertainty will consume time and resources that might be better spent managing analytical quality. They may be harmful because laboratories will have the false comfort that characterizing measurement uncertainty somehow assures analytical quality, whereas it only provides a measure of how good or bad the results can be. And unless there is a clear rationale for defining goals or targets for acceptable measurement uncertainty, laboratories may not recognize whether the observed uncertainty is good or bad and whether improvement is needed or not.

What to do?

In discussing the continued need to improve analytical quality, Plebani (40) has pointed out that the "…solution to this is to use more stringent metrics to define and monitor analytical tolerance limits, on the one hand, and on the other to meet the need for more effective communication of laboratory results to clinicians, specifying their uncertainty and offering advice for improving the interpretation and utilization of the data provided". Effective communication, as noted here, involves more than just reporting the uncertainty of results. Such efforts should build on the existing error framework and Fraser's system for identifying clinically significant results in laboratory reports (34), taking into account biologic variation as well as analytical variation. Laboratories should assure the analytical quality of their test results and provide aid and support to improve the interpretation of test results, without burdening our physician customers and patient consumers with metrological uncertainty.

In managing quality internally, medical laboratories should utilize the existing concept of total error as a practical top-down estimate of measurement uncertainty. Estimates of within-laboratory precision can be provided by internal QC data and estimates of bias from EQA and PT data, as suggested by White and Farrance (33). To provide estimates for conditions of reproducibility (different laboratories with different analysts, reagent lots, calibrators, etc.), EQA and PT data can provide estimates of the bias and variation of method subgroups (41), which can be used to characterize quality on the sigma-scale (41). It is also important to have an estimate of error that incorporates the directional effect of bias to recognize when methods produce consistently high or low results (as illustrated in Figure 3). The ISO/GUM construct of measurement uncertainty assumes symmetry, i.e., expected value ± measurement uncertainty, whereas total error calculations of upper and lower limits will reveal any asymmetry due to bias (42). An important drawback of the uncertainty concept is the assumption that bias is completely eliminated via traceability, standardization, and correction, which is not true in the real world of medical laboratories. As long as bias exists in the real world, the ISO/GUM methodology is flawed and combinable variances will not provide reliable and realistic estimates of the quality of measurements and test results! At such time that bias is truly eliminated, the estimate of total error will converge with that of measurement uncertainty, will be a function of only random errors, and the variances will be able to be combined.

Measurement uncertainty should therefore be preserved for use in the ISO/GUM tradition of a bottom-up estimate based on detailed modeling and combination of individual components of variation. Any application where traceability is important should require that this estimate of measurement uncertainty be used. Those applications should be of importance particularly to manufacturers, regulators, and perhaps academic laboratory scientists. In medical laboratories, the ISO/GUM approach will mainly be of use for those relatively few measurands where Certified Reference Materials are available to characterize the trueness and uncertainty of the calibration of routine methods. And even then, many laboratories may find it difficult to make these estimates unless practical calculator tools become available.

Concluding comments

ISO 15189 makes measurement uncertainty a certainty, where relevant and possible! I have long argued that laboratories need to consider the total effect of all sources of
21. Westgard JO, Barry PL, Hunt MR, Groth T. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin Chem 1981;27:493–501.
22. Westgard JO. Internal quality control: planning and implementation strategies. Ann Clin Biochem 2003;40:593–611.
23. Petersen PH, Fraser CG, Kallner A, Kenny D, editors. Strategies to set global analytical quality specifications in laboratory medicine. Scand J Clin Lab Invest 1999;57:475–585.
24. Ricos C, Alvarez V, Cava F, Garcia-Lario JV, Hernandez A, Jimenez CV, et al. Current databases on biologic variation: pros, cons and progress. Scand J Clin Lab Invest 1999;59:491–500.
25. Eisenhart C. Expression of the uncertainties in final results. Science 1968;160:1201–4. Reprinted in NBS Special Publication 300, vol 1. Precision measurement and calibration. Ku HH, editor. Washington, DC: US Government Printing Office, 1969.
26. Westgard JO. Development of performance standards and criteria for testing the precision and accuracy of laboratory methods. Proceedings of the 1976 Aspen Conference on Analytical Goals in Clinical Chemistry. College of American Pathologists, Chicago, IL, 1977:105–14.
27. US Department of Health and Human Services. Medicare, Medicaid and CLIA Programs: regulations implementing the Clinical Laboratory Improvement Amendments of 1988 (CLIA). Final rule. Fed Regist 1992;57:7002–186.
28. CLSI EP21-A. Estimation of total analytical error for clinical laboratory methods; approved guideline. Clinical and Laboratory Standards Institute, 940 West Valley Road, Wayne, PA, 2003.
29. CLSI C24-A3. Statistical quality control for quantitative measurement procedures: principles and definitions; approved guideline – 3rd ed. Clinical and Laboratory Standards Institute, 940 West Valley Road, Wayne, PA, 2006.
30. Guidance for Industry and FDA Staff: Recommendations for Clinical Laboratory Improvement Amendments of 1988 (CLIA) Waiver Applications for Manufacturers of In Vitro Diagnostic Devices. Food and Drug Administration, Center for Devices and Radiological Health, Office of In Vitro Diagnostic Device Evaluation and Safety, January 30, 2008.
31. Mcdonald R. Quality assessment of quantitative analytical results in laboratory medicine by root mean square of measurement deviation. J Lab Med 2006;30:111–7.
32. German "RiliBÄK" regulations. www.bundesaerztekammer.de/page.asp?his=1.120.121.1047.6009. Accessed May 5, 2009.
33. White GH, Farrance C. Uncertainty of measurement in quantitative medical testing – a laboratory implementation guide. Clin Biochem Rev 2004;25:S1–24.
34. Fraser CG. Biological variation: from principles to practice. Washington, DC: AACC Press, 2001.
35. Fraser CG, Petersen PH, Libeer J-C, Ricos C. Proposals for setting generally applicable quality goals solely based on biology. Ann Clin Biochem 1997;34:8–12.
36. Linnet K, Boyd J. Selection and analytical evaluation of methods – with statistical techniques. Chapter 14. In: Burtis CA, Ashwood ER, Bruns DE, editors. Tietz textbook of clinical chemistry and molecular diagnostics, 4th ed. St. Louis, MO: Elsevier Saunders, 2006.
37. Kristiansen J. The guide to expression of uncertainty in measurement approach for estimating uncertainty: an appraisal. Clin Chem 2003;49:1822–9.
38. Horwitz W, Albert R. The concept of uncertainty as applied to chemical measurements. Analyst 1997;122:615–7.
39. Horwitz W. The certainty of uncertainty. J AOAC Int 2003;86:109–11.
40. Plebani M. Errors in laboratory medicine and patient safety: the road ahead. Clin Chem Lab Med 2007;45:700–7.
41. Westgard JO, Westgard SA. The quality of laboratory testing today: an assessment of sigma metrics for analytic quality using performance data from proficiency testing surveys and the CLIA criteria for acceptable performance. Am J Clin Pathol 2006;125:343–54.
42. Petersen PH, Stockl D, Westgard JO, Sandberg S, Linnet K, Thienpont L. Models for combining random and systematic errors. Assumptions and consequences for different models. Clin Chem Lab Med 2001;39:589–95.