CC Midterm Module 4


QUALITY MANAGEMENT

Unit 1: Introduction and Definition of Terms

What is quality?
We all search for the best quality when we avail of certain products or
services. Most of us are willing to pay a hefty price as long as we trust their quality. So, what
then is quality? Quality as a term may be defined differently by certain industries like
manufacturing, business, engineering and health. According to ISO (International
Organization for Standardization), quality refers to the totality of features and characteristics
of a product or service that bear on its ability to satisfy stated or implied needs. WHO (World
Health Organization) defines it as meeting the predetermined requirements of users for a
particular substance or service. Other definitions include: degree of congruence between expectation and realization, and fitness for purpose or intended use.

Laboratory quality can be defined as the reliability and timeliness of reported test results. The laboratory results must be as accurate as possible, all aspects of the laboratory operations must be reliable, and reporting must be timely in order to be useful in a clinical or public health setting.

History of Quality
The concept of quality management as we know it today has evolved through time. The core concepts of quality control can be traced as far back as medieval times, when stringent product quality standards were developed by craftsmen associations. Quality checks were done through audits and inspections, and defective products were either removed or redone. Prominent names who pioneered total quality management techniques include:

✓ Dr. Walter A. Shewhart introduced statistics in evaluating quality control in the manufacturing process. His work focused on reducing variation in the manufacturing process. He was recognized as the creator of statistical quality
control (SQC) and also created the “Shewhart Cycle” or Plan-Do-Check-Act (PDCA) cycle.
✓ Dr. Bradley E. Copeland developed a quality assurance program for the
College of American Pathologists (CAP) which addressed non-comparable
laboratory results. He wrote the first manual on quality control for CAP and was
responsible for its implementation in a number of laboratories accredited by
CAP. He had great influence in standardization of laboratory methods. He
further recognized the importance and utility of standard deviation in clinical
chemistry and the two components of reliability which are accuracy and
precision.
✓ Dr. S. Levey and Dr. E. Jennings introduced SQC, or Shewhart’s charts, in medical laboratories. They created the L-J (Levey-Jennings) chart still used at present.
✓ James Westgard applied a multirule system based on Shewhart’s charts to the evaluation of quality control data in the laboratory. He created several control rules for evaluating the L-J graph, known as the “Westgard Rules”.

"Those who don't learn from the past are condemned to repeat it. That saying is as applicable to QC practices as it is to the lessons of history." (Dr. James Westgard)
✓ Philip Crosby was referred to as the
evangelist of quality management. He
preached the need for quality practices in his book “Quality is Free” (poor quality is expensive). The book has been credited with playing a large part in beginning the quality revolution in the United States and Europe. He
promoted the concept of “zero defects” (zero defect is the only legitimate
goal of a quality program), of doing things right the first time. He is recognized
for defining quality as conformance to requirements.
✓ Dr. William Edwards Deming was credited with providing the Japanese the
information and training that brought them to their position as the world leader
in the production of quality products. He introduced the use of statistical tools
in decision making, problem solving and troubleshooting in the production
process. He is one of the significant contributors of concepts and methods
contained in the TQM model. The need for a working understanding of basic
statistical principles is at the heart of Deming’s teaching.
✓ Joseph Juran established the concept that quality is a continuous
improvement process that requires a manager’s active pursuit in reaching and
setting goals. He is often hailed as the “father of quality”. His management
approach is based on three key principles. The first principle he introduced was
the Pareto principle or the 80/20 rule, which states that 80% of serious problems
arise from only 20% of the causes or trouble points. The second principle is his
approach to quality management theory. This involves a change of thinking
away from mere focus on the quality of the end product, to a wider
examination of the human dimension of quality management. Education and training for managers in the workplace are equally important. Juran’s management

theory was fundamental in expanding quality management principles beyond
the factory floor to principles that could also be applied to service-related
processes. He promoted the need for participatory management style. The
final principle consists of three processes often known collectively as the Juran
Trilogy. These three elements are quality planning (the design stage), quality
control (ongoing inspections to ensure that processes are in control)
and quality improvement (the proactive refinement of processes).

Hierarchy of Quality Systems


The principles of quality management, assurance and control have become the foundation by which clinical laboratories are managed and operated. The quality systems include:

I. Total Quality Management (TQM)


✓ Considered the highest level in the healthcare quality hierarchy. It defines the overall management functions that determine and implement quality policy.
✓ TQM may also be referred to as (1) total quality control, (2) total quality leadership,
(3) continuous quality improvement, (4) quality management science and (5)
industrial quality management.
✓ It is guided by the philosophy of perpetual improvement.
✓ The principles and concepts of Total Quality Management have been formalized into
a quality management process composed of Quality Improvement (QI), Quality
Planning (QP), Quality Laboratory Process (QLP), Quality Control (QC) and Quality
Assessment (QA). The traditional framework for managing quality in a healthcare
laboratory has emphasized the establishment of quality laboratory process (QLPs),
Quality Control (QC) and Quality assessment (QA). A QLP includes analytical
processes and the general policies,
practices and procedures that define
how all aspects of the work get done.
To provide a fully developed system
and framework for managing quality,
the QI and QP component must be
established. QI provides a structured problem-solving process for identifying the root cause of a problem and also for identifying a remedy for the problem.

[Figure: Total Quality Management framework for managing quality in a healthcare laboratory]

Quality Planning is necessary to (1) standardize the remedy, (2)
establish measures for monitoring performance, (3) ensure that the performance
achieved satisfies quality requirements and (4) document the new QLP. The
new process is implemented through QLP, measured and monitored through QC
and QA, improved through QI, and re-planned through QP. These five components
working together in a feedback loop, illustrate how continuous QI is accomplished
and how quality assurance is built into laboratory processes. The “five-Q”
framework also defines how quality can be managed objectively using the scientific method or the PDCA cycle (plan, do, check, act). QP provides the planning
step, QLP establishes standard processes for doing things, QC and QA provide
measures for checking how well things are done, and QI provides a mechanism
for acting on these measures. For objective management decisions, we apply the same methodology used in scientific experiments.

II. Quality Management System (QMS)


✓ A quality management system can be defined as “coordinated activities to direct
and control an organization with regard to quality”. This definition is used by
the International Organization for Standardization (ISO) and by the Clinical and
Laboratory Standards Institute (CLSI).
✓ In a quality management system, all aspects
of the laboratory operation, including the
organizational structure, processes and
procedures, need to be addressed to assure
quality.
✓ When all of the laboratory procedures and
processes are organized into an
understandable and workable structure, the
opportunity to ensure that all are
appropriately managed is increased. The
quality management system model used
here organizes all of the laboratory activities
into 12 quality system essentials. These
quality system essentials are a set of
coordinated activities that serve as building
blocks for quality management. Each must be addressed if overall laboratory quality improvement is to be achieved. This quality management system model was developed by CLSI and is fully compatible with ISO standards.

[Figure: Overview of the quality management system model]

III. Quality Assurance Program (QAP)


✓ A part of quality management focused on providing confidence that quality
requirements will be fulfilled.

✓ It is a systematic monitoring of quality control results and quality practice parameters
to assure that all systems are functioning in a manner appropriate to excellence in
health care delivery.
✓ It also involves determination of a quality goal, deciding whether or not a goal has
been achieved and implementing corrective action if the goal has not been
reached. It is a broad spectrum of plans, policies and procedures that together
provide an administrative structure for a laboratory’s efforts to achieve quality goals.
✓ A quality assurance program involves virtually everything and everybody in the
clinical laboratory. An error in any step during the acquisition, processing, and analysis of a specimen, the reporting of the laboratory test result, or the validation of the quality of the analysis can cause the laboratory to fall short of its quality goals.
✓ Quality Assurance, as currently applied, is primarily concerned with broader measures and monitoring of laboratory performance, such as turnaround time,
specimen identification, patient identification, and test utility.
✓ Quality assurance requires either that causes of problems be identified through
quality improvement (QI) and eliminated through quality planning (QP) or that
Quality Control be able to detect the problems early enough to prevent their consequences.
✓ There are several essential elements of a quality assurance program.
(1) COMMITMENT or dedication to quality service, which must be central and driven by a team effort. A true commitment is required from laboratory
directors, managers and supervisors if the efforts of their laboratory personnel
are to be successful.

(2) FACILITIES AND RESOURCES. Laboratories must have the administrative


support necessary to provide the quality services desired. This means
having adequate space, equipment, materials, supplies, staffing, supervision
and budgetary resources. This serves as the basis upon which quality services are
developed and maintained.

(3) PERSONNEL COMPETENCY AND TRAINING. High quality personnel are


essential for high-quality services. Training and education are vital to the performance of staff. Successful training and assessment of personnel can
be realized through planning and targeted education programs.
There are six elements required of a laboratory competency assessment
program which includes (i) direct observation of patient test performance;
(ii)monitoring of the recording and reporting of results; (iii) review of
intermediate test results, QC records, proficiency testing results and preventive
maintenance records; (iv) direct observation and performance of instrument
maintenance and function checks; (v) assessment of test performance
through testing previously analyzed specimens, internal blind testing samples
or external proficiency samples; and (vi) assessment of problem solving skills.
CLIA requires that assessment of competence in job tasks be done
semiannually for the first year of employment and annually thereafter and

upon implementation of new test methods before reporting of patient test
results. Detailed instructions for this are found in the CLSI Guideline: Training and
Competence Assessment.
A workforce deficiency sometimes found with laboratory personnel is
lack of academic education in basic quality control practices and statistical
methods of analysis. Without formal training in statistical quality control for the
clinical laboratory, employees are often taught what to do, but may not
understand the “why”, which can impact their ability to troubleshoot and
problem solve.
Ways by which continuous personnel training may be done are through
in-service trainings/in-house education programs, web-based training
programs and actual participation in seminars or trainings on specific
laboratory skills or concept updates.

(4) TECHNICAL PROCEDURES. High-quality technical procedures are


necessary to provide quality laboratory services. These groups of procedures
include the (i) control of preanalytical conditions or variables from test
requisition to processing and distribution of specimens including preparation of
work lists, logs and maintenance of records; (ii) control of analytical variables
which include analytical methods, standardization and calibration procedures, documentation of analytical protocols and procedures, and monitoring of critical equipment and materials; (iii) monitoring of analytical quality by the use of statistical methods and control charts; and (iv) control of postanalytical conditions or variables.

(5) PROBLEM-SOLVING MECHANISMS or troubleshooting capabilities are an important component of a QAP, although often not emphasized. This mechanism
provides a link between problem identification and implementation of a
solution to the problem. It is a feedback loop that responds to an error signal
by making adjustments to reduce the size of the error or to prevent its
recurrence. Specialized troubleshooting skills need to be developed and
improved and preventive maintenance programs instituted. A quality team
may be formed to meet regularly to analyze problems and identify solutions.

IV. Quality Control (QC)


✓ A system for verifying and maintaining a desired level of quality in an individual test or
process. These are techniques and procedures that monitor performance
parameters.
✓ Statistical process used to monitor and evaluate the analytical process that produces
results. Generally, these are quantitative techniques that monitor particular sources
of errors, estimate the magnitude of the errors, and alert laboratory personnel when
indications suggest that quality has deteriorated.

✓ Quality Control emphasizes statistical control procedures but also includes
non-statistical check procedures, such as linearity checks, reagent and standards
checks and temperature monitors.
✓ It is classified into two:
(1) Internal Quality Control which is used to establish the stable performance
of an assay system and assure that patient results are reliable. These are
processes that we do within our own laboratories such as regular running of
control materials. It is necessary for the daily monitoring of the precision and
accuracy of the analytical method. It detects changes in performance only
between the present operation and the stable operation that was
characteristic during the baseline period when the analytical method was
thought to be operating properly. IQC can detect both random and
systematic errors in the analytical system. This is otherwise known as intra-lab
QC.

(2) External Quality Control provides a medical technologist with a tool to


evaluate his own laboratory performance in relation to other laboratories.
Common applications of this process are participation in External Quality
Assessment Programs (EQAP) and Proficiency Testing programs. EQAP are
available to laboratories, some sponsored by professional societies and others offered by manufacturers of control materials. Proficiency testing programs are
a type of external quality assessment in which simulated patient specimens
made from a common pool are analyzed by the laboratories enrolled in the
program. Results are returned to a central facility and are evaluated to
determine the quality of each laboratory’s performance. Government and
licensing agencies are increasingly using PT as an objective method for
accrediting laboratories, thereby giving them official authorization to operate.
In general, the performance characteristics of PT programs for detecting bias
are evaluated using methods similar to those used in IQC. EQC is important for
maintaining long term accuracy of the analytical methods.

Philosophical Basis of Quality Control


I. It is interesting to note that what the chemists established as a quality measure was nothing more than a reaffirmation of a timeless axiom of philosophy: “Man is a fallible (imperfect) animal.” Because of his imperfect nature, he continues to exhibit some measure of variability
in everything that he strives to do. No matter how expert he is in performing a job, no matter
how sophisticated the equipment he uses to perform his job, no matter how carefully he tries,
his work is characterized as somewhat less than perfect. This is perhaps the most important lesson which man must learn when he performs in the clinical laboratory: he must accept
the fact that he, being a human being, must demonstrate some degree of variability in his
work. Once he has accepted this axiom, all he needs to do is to determine how much
variability he demonstrates. Quality control is nothing more than the numerical expression of
this variability and at the same time a control of our variability. This philosophy should not

become an excuse for imperfect work but rather a reminder to observe and adhere to
quality control procedures in order to decrease the effect of variability to an insignificant
level.

II. Another philosophical basis is the proper exercise of freedom. An individual should be free
to do what he ought to do, not necessarily what he wants to do. One has to exercise logic
and reason in his thought processes. It is only then that he can follow the dictates of his
thoughts. This simply refers to the responsible practice of freedom.

Benefits of TQM
The rewards of a good quality control program are many. The clinical staff, the
patients, the laboratory personnel, and whole medical profession benefit from a good
quality control program in clinical laboratories. Such a program can produce more
reliable test results. Physicians can then make faster and more accurate diagnosis; in
turn, patients recover faster and their hospital stays are shortened. Quality in laboratory
service and test results can create a good reputation for the laboratory among the
clinical staff. Moreover, the pride and morale of laboratory workers increase with the
quality of their services. In external surveys, laboratories with good quality control programs consistently perform better. The fact that a laboratory has a quality control
program may also be beneficial when one is dealing with the law and the government.
With the increasing number of malpractice cases in the medical profession, laboratories
with good quality control program can expect to have less trouble with the courts. A
good quality control program is also an absolute necessity for laboratory accreditation and licensing. Other benefits include:
✓ Addresses operational weaknesses in the delivery process that cause delays or inaccuracies.
✓ Helps to improve accuracy.
✓ Provides relief from prescriptive integrated verification requirements.
✓ Reduces financial liability and decreases staff time needed to resolve audit exceptions.
✓ Processes financial aid in a more efficient and productive manner.
✓ Generally improves services.

Characteristics of an Ideal QC System


✓ The ability to detect analytical errors on a real-time basis.
✓ Sensitivity to both types of analytical errors (random and systematic errors).
✓ A mechanism for evaluating whether each type of error is of sufficient magnitude to indicate that the system is stable or unstable.
✓ The ability to correctly reject only analytical runs that are unstable (i.e., the probability of falsely rejecting a run that is in fact stable is low).
✓ The ability to be easily applied and interpreted.
✓ Cost-effective achievement of all aforementioned characteristics.

Fundamental Concepts of TQM
I. Commonly Used Terms
1. Quality
✓ Defined as conformance with the requirements of users and customers. It refers to the satisfaction of the needs and expectations of users and customers. The focus on users and customers is important, particularly in the healthcare setting to which laboratory services belong. Users of laboratories are often physicians, and their customers are patients and other parties who pay the bills.
✓ Cost must be understood in the context of quality. If quality is conformance to
requirements, then quality cost must be understood in terms of “cost of conformance”
and “cost of nonconformance”. Costs of conformance are divided into prevention costs and appraisal costs, while costs of nonconformance consist of internal and external failure costs. For a laboratory testing process, calibration is a good example of a cost
incurred to prevent problems, QC is a cost for appraising performance, repeat run is
an internal failure cost for poor analytical performance and repeat requests for tests
because of poor analytical quality are an external failure cost.

2. Accuracy
✓ Degree to which a measured value of an entity agrees with its true value. Best
measured through reference methods and peer group comparison.

3. Precision

✓ Degree to which measured values of an entity agree with each other. Good precision does not always connote accuracy. Measured through determining the standard deviation and coefficient of variation.

4. Reliability
✓ A combination of accuracy and precision. Should be the goal of all laboratory
workers.

5. Sensitivity
✓ Ability of the method to detect slight differences in concentration. High sensitivity
means few false negatives and is desirable in screening tests.

6. Specificity
✓ The ability of a method to determine solely the compound it is supposed to measure.
High specificity means few false positives and is desirable for confirmatory tests.

7. Variance
✓ Describes the many fluctuations in a data set that will interfere in the measurement of the substance in question.

II. Fundamental Principles
✓ Quality improvement occurs when the problems are eliminated permanently.
Industrial experience has shown that 85% of all problems are process problems that
are solvable only by managers; the remaining 15% are problems that require the
action and improvement in performance of individual workers. Thus, quality problems
are primarily management problems because only management has the power to
change work processes.
✓ Problem solving requires a carefully structured process to ensure that root causes are
identified and proposed solutions are verified. Juran’s “project-by-project” quality
improvement process provides the following guidelines or distinct steps:
(1) Carefully defining the problem.
(2) Establishing baseline measures of process performance.
(3) Identifying root causes of the problem
(4) Identifying a remedy for the problem
(5) Verifying the remedy actually works
(6) Standardizing or generalizing the solution for routine implementation of an
improved process
(7) Establishing ongoing measures for monitoring and controlling the process.
✓ A “project team”, which is a group of employees appointed by management to solve specific problems that have been identified by management or staff, is responsible for implementing the project-by-project steps.

III. Establishing Quality Goals and Analytical Performance Limits


✓ Fundamental requirements for all objective quality control systems are clearly
defined quality goals. Laboratories must define their service goals and establish
clinical and analytical quality requirements for testing processes. Without such quality
goals, there is no objective way to determine whether acceptable quality is being
achieved or to identify processes that have to be improved, or to design new
processes that ensure that specified level of quality will be attained.
✓ The establishment of analytical performance goals may represent a compromise
between what would be optimal for best medical practice and what is realistically
achievable by current technology, given healthcare cost limitations. Two types of
system analysis are used to determine what analytical performance is achieved with
a particular laboratory system.
(1) Bottom-up analysis divides the system into multiple components. The uncertainties of all components are defined and statistically combined to obtain the total uncertainty of the complete system. Error limits for each of these components are obtained from the manufacturers, published literature and in-house validation studies (a brief numerical sketch of this combination is shown after item (2) below).

(2) Top-down analysis generally utilizes quality control measurements and/or


proficiency testing results. In-house quality measurements may underestimate
the total errors, particularly if the target values for the controls are not
independently assigned, or if the data are collected only over a short period

of time. Potential differences across multiple calibrators and differences across
multiple reagent lots should be accounted for in estimating total analytical
variations. On the other hand, between laboratory proficiency testing data
may overestimate the analytical variation within an individual laboratory.
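As a minimal sketch (not part of the original module), the following snippet illustrates how independent component uncertainties could be statistically combined in a bottom-up analysis; the component names and values are hypothetical illustrations only.

```python
import math

# Hypothetical per-component standard uncertainties (in mg/dL) for a
# bottom-up analysis: values like these would come from manufacturer
# specifications, published literature, or in-house validation studies.
components = {
    "calibrator_assigned_value": 0.8,
    "pipetting": 0.5,
    "reagent_lot_variation": 0.6,
    "instrument_drift": 0.4,
}

# Assuming the components are independent, they are combined in
# quadrature (root-sum-of-squares) to estimate the total uncertainty.
combined_uncertainty = math.sqrt(sum(u ** 2 for u in components.values()))

print(f"Combined standard uncertainty: {combined_uncertainty:.2f} mg/dL")
# Expanded uncertainty with coverage factor k = 2 (~95% level)
print(f"Expanded uncertainty (k = 2): {2 * combined_uncertainty:.2f} mg/dL")
```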
✓ Quality goals cannot be set on absolute basis because they vary from one
laboratory to another depending on the medical mission and professional interests
of physicians using the laboratory tests.
✓ Quality goals must also be considered in relation to cost. A goal of achieving the highest possible quality is not realistic when costs are being cut. In establishing quality goals, it is therefore more realistic to specify the quality that is necessary or
adequate for medical applications of the laboratory test results to be produced.
✓ Method evaluation is the first step in validating that analytical performance
satisfies quality goals. Quality control procedures provide for continuing
verification that these goals are met during routine laboratory services.

IV. Quality Control Materials


✓ These are specimens that are analyzed for QC purposes. They have a known, unchanging concentration and physically resemble the unknown samples. They are commonly tested prior to the testing of a series of unknowns or patient samples. They are
used to monitor the stability of the test system within predetermined limits. Known
values are usually presented by an interval of acceptable values, or upper and lower
limits for control (control limits). When observed values fall within the control limits, the
analyst is assured that the analytical method is functioning properly.
✓ These are commonly supplied as liquid, frozen or lyophilized materials that are
reconstituted with water or a specific diluent solution. Liquid control materials have
the advantage of eliminating errors caused by reconstitution. However, the matrices of these materials may contain other substances that are potential sources of error with
some analytical methods and instruments.
✓ Two or three materials should be selected to provide concentrations that monitor
performance at different medical decision levels (representative of low abnormal,
normal, high abnormal values).
✓ Desired characteristics of a QC Material include:
(1) Analyte concentration should be at medically significant levels. The concentration
of analyte should be in normal and abnormal levels, corresponding to concentrations
that are critical in the medical interpretation of test results.

(2) Material should be available in large quantities but in aliquots or vials. The size of
the aliquots or vials should be convenient for analytical methods to be monitored.
While larger vials are cheaper, the tendency to leave unused samples may eliminate any savings. Once a reconstituted QC material aliquot has been thawed, it cannot be refrozen and reused.

(3) Material matrix should be as similar to the human sample as possible. Protein matrix
may be best when serum is the test material to be analyzed. Materials from human

sources (“pooled sera”) have generally been preferred, but because of the risk of hepatitis infection, bovine and synthetic materials offer a certain advantage in safety.

(4) Constituents should be stable for a long period of time. It is desirable to purchase
a year’s supply of one manufacturing batch or lot. Different batches (or lot numbers) of the same materials will have different concentrations, which require new estimates of the mean and standard deviation.

(5) Material should have a low vial-to-vial variability so that differences between
repeated measurements are attributed to the analytical method alone.

(6) If prepared, it should be stable for the period of use.


(7) Requires minimum preparation.

(8) Readily available.

(9) Reasonably priced.

V. Establishing Control Value Ranges


(1) Select appropriate controls. Commonly assayed in the clinical chemistry
laboratory are levels I and II controls.
(2) Assay them repeatedly over time. At least 20 data points are needed. This allows the determination of the mean and standard deviation of the control materials.
(3) Represent any procedural variation. Assays may be done by different performers and at different times of the day.
(4) Determine the degree of variability in the data to establish the acceptable range (see the sketch below).
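A minimal sketch of these steps, assuming 20 hypothetical control results (not data from this module); the mean, standard deviation and a provisional mean ± 2SD acceptable range are computed as follows.

```python
import statistics

# Hypothetical level I control results (mg/dL) collected over 20 runs
# during stable method performance.
control_results = [
    98, 101, 100, 97, 103, 99, 100, 102, 96, 101,
    100, 98, 104, 99, 97, 102, 100, 101, 99, 103,
]

mean = statistics.mean(control_results)
sd = statistics.stdev(control_results)   # sample SD, divisor (n - 1)

# Provisional acceptable range set at mean +/- 2SD.
lower, upper = mean - 2 * sd, mean + 2 * sd

print(f"n = {len(control_results)}")
print(f"mean = {mean:.1f}, SD = {sd:.1f}")
print(f"acceptable range (mean +/- 2SD): {lower:.1f} to {upper:.1f}")
```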

VI. Measurement of Variability


✓ Variability normally occurs when a control is tested repeatedly. Its measurement includes computation of the CV and SD.
✓ Variability is affected by different factors which include operator’s technique,
environmental condition and performance characteristics of assay method.
✓ The goal is to differentiate between variability due to chance from that due to
error.

VII. Analytical Errors in the Clinical laboratory


✓ Analytical errors in the laboratory can be classified into two groups:

(1) Random Errors. These are imprecision of the test system which causes a
scatter or spread of control values around the mean due to errors which vary

from one sample to another, the exact magnitude of which cannot be predicted. Random error appears as scatter of values around the mean rather than as a consistent bias, and it is estimated by repetitive testing or precision studies.
Common causes include:

i. handling techniques (non-vertical handling of pipets)

ii. operator variables (technical competencies)

iii. instrument instability (pipet tips not fitting properly, clogged pipets,
imprecise pipets, pipet contamination)

iv. temperature and timing variations

v. reagent variation (air bubbles in reagent)

vi. power supply fluctuations

(2) Systematic Errors. These are systematic changes in the test system that displace the mean of the distribution from its original value. They significantly affect the accuracy of the test system. Systematic error is predictable and causes shifts or trends on control charts. The terms inaccuracy and bias are often used to emphasize the lack of agreement among methods being compared. Systematic error is detected as either positive or negative bias for a given analytical method. Systematic errors can be classified as either constant or proportional errors. Constant errors show a constant difference between the comparative method and the test method; the magnitude of the error is constant and is NOT dependent on the analyte concentration. Proportional error is also known as slope or percent error; in this type of systematic error, the magnitude of the error changes as a percent of the analyte, so the error is dependent on the analyte concentration (a brief numerical sketch of both error types follows the list of causes below). Common causes of systematic error include:

i. Equipment non-reliability which results from change of temperature in


incubators or reaction blocks, deterioration of a photometric light source
and change in sample volumes due to pipette misadjustments or misalignments.

ii. Reagent and Calibrator changes which include changes in reagent or


calibrator lot numbers, use of wrong calibrator values, improperly prepared
reagents, deterioration of reagents or calibrators and inappropriate
storage of reagents and calibrators.

iii. Changes in procedure from one performer to another.

iv. Errors in specificity, precision and accuracy.

Note: The combination of random errors and systematic errors is collectively referred to as TOTAL ERROR.
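To make the distinction between constant and proportional systematic error concrete, the following sketch uses hypothetical concentrations and bias values (not from this module) to show how each type of bias behaves across the analytical range.

```python
# Hypothetical "true" (comparative method) concentrations in mg/dL.
true_values = [50, 100, 200, 400]

constant_bias = 5           # constant error: same absolute offset everywhere
proportional_factor = 1.10  # proportional error: +10% of the analyte level

for true in true_values:
    constant_err_result = true + constant_bias
    proportional_err_result = true * proportional_factor
    print(
        f"true={true:>3}  "
        f"constant error result={constant_err_result:>5} (bias {constant_bias})  "
        f"proportional error result={proportional_err_result:>5.0f} "
        f"(bias {proportional_err_result - true:.0f})"
    )

# The constant-error bias stays at 5 regardless of concentration, while the
# proportional-error bias grows with the analyte concentration.
```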

QUALITY MANAGEMENT
Unit 2: Statistics

Statistics in Clinical Chemistry


In the preceding unit of this module, we mentioned that man is an
imperfect being subject to exhibiting certain degrees of variability. In the practice of quality,
it is important then to be able to determine this extent of variability. Quality control measures
variability through repetitive testing. Differentiating variability due to chance and due to
error is an application of statistics. To this end, it is important to understand basic statistical
concepts that enable the laboratorian to measure quality.

Statistics is a branch of mathematics that collects, analyzes, summarizes and


presents information about “observations”. In the clinical laboratory, these observations are
usually numerical test results.

Qualitative Terms Redefined

TRUENESS AND ACCURACY

✓ Trueness of measurements is defined as closeness of agreement between the


average value obtained from a large series of results of measurements and the true value. It is the closeness of agreement of the mean value with the true value.
✓ The difference between the average value and the true value is the bias, which is
expressed numerically and is inversely related to the trueness.

✓ Accuracy is the closeness of agreement of a single measurement with the “true
value”. It is influenced by both bias and imprecision and in this way reflects the
total error.
✓ The ISO has introduced the expression trueness as a replacement for the term “accuracy”, which now has gained a slightly different meaning.
✓ From a theoretical point of view, the exact true value for a clinical sample is not
available; instead, an “accepted reference value” is used, which is the true value
that can be determined in practice. Trueness can be evaluated by comparison
of measurements by a given routine method and a reference measurement
procedure. Such an evaluation may be carried out through parallel
measurements of a set of patient samples.
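As a brief illustration of these definitions (with hypothetical replicate results and an assumed reference value, not data from this module), bias can be estimated as the difference between the mean of repeated measurements and the accepted reference value.

```python
import statistics

accepted_reference_value = 100.0   # accepted "true" value, e.g. mg/dL
# Hypothetical replicate measurements of the same reference sample.
measurements = [101.2, 99.8, 102.1, 100.9, 101.5, 100.4]

mean = statistics.mean(measurements)
bias = mean - accepted_reference_value        # relates to trueness
imprecision = statistics.stdev(measurements)  # relates to random error

print(f"mean = {mean:.2f}")
print(f"bias = {bias:+.2f} (smaller magnitude = better trueness)")
print(f"SD (imprecision) = {imprecision:.2f}")
```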

REPEATABILITY AND REPRODUCIBILITY


✓ Precision has been defined as the closeness of agreement between independent results of measurements obtained under stipulated conditions. The degree of precision is usually expressed on the basis of statistical measures of imprecision, such as the coefficient of variation or standard deviation, which are inversely related to precision. Imprecision of measurements is distinctly related to the random error of measurements and has no relation to the trueness of measurements.
✓ Precision is specified as either repeatability or reproducibility. There is, however, a difference between these expressions of precision.
✓ Repeatability refers to the closeness of agreement between results of successive measurements carried out under the SAME conditions, that is, corresponding to within-run precision.
✓ Reproducibility refers to closeness of agreement between results of the
measurements performed under CHANGED conditions of measurement. This may
be through a change in the time of test performance, change in
performers/operators of the test or even changes in calibrators or reagent lots.

LINEARITY

✓ Refers to the relationship between measured and expected values over the
analytical measurement range. Linearity may be considered in relation to actual
and relative analyte concentrations. In the latter case, a dilution series of sample
may be examined. This dilution series examines whether the measured
concentration changes as expected according to the proportional relationship
between samples introduced by the dilution factor.

Descriptive Statistics
Each day, high-volume clinical laboratories generate a significant number of results. These clinical laboratory data must be summarized to monitor test performance. The
foundation of monitoring performance or QC is descriptive statistics.

When comparing and analyzing collections of laboratory data, patterns can


be described by their center, spread and shape. Although comparing the center of the
data is common, comparing the spread can be even more powerful. Assessment of data
dispersion or spread allows laboratorians to assess the predictability (or the lack of) in a
laboratory test or measurement.

I. Measures of Center/Location

1. Mean
✓ The calculated average of a set of values, or simply the average. The sum of all data values is divided by the total number of data values.
✓ It is the most commonly used measure of location.

2. Median
✓ The value of the middle observation when the set is arranged in rank order. It is the value that divides the data in half. It is often used for skewed data.
✓ With an ODD number of data values, for example 21, we have:

Data 96 48 27 72 39 70 7 68 99 36 95 4 6 13 34 74 65 42 28 54 69

Ordered Data 4 6 7 13 27 28 34 36 39 42 48 54 65 68 69 70 72 74 95 96 99

Median: the middle data value, which is 48

✓ With an EVEN number of data values, like 20:

Data 57 55 85 24 33 49 94 2 8 51 71 30 91 6 47 50 65 43 41 7

Ordered Data 2 6 7 8 24 30 33 41 43 47 49 50 51 55 57 65 71 85 91 94

Median: the average of the 2 middle data values, 47 and 49, so the median is 48

3. Mode
✓ It is the most frequently observed value in a set of observations.
✓ It is not commonly used as a measure of the data’s center but is more often used to describe data that seem to have two centers (bimodal).
✓ There can be more than one mode if two or more values are equally common. It is possible that in a set of data there is no mode at all.

Example: Control Level 1 results are distributed as follows:

Days 1 2 3 4 5 6 7 8 9

Results 94 81 56 90 70 65 90 90 75

Mode 90
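Using the control level 1 results from the example above, a minimal sketch of how the three measures of center could be computed with Python's statistics module.

```python
import statistics

# Control level 1 results for days 1 to 9 (from the example above).
results = [94, 81, 56, 90, 70, 65, 90, 90, 75]

print("mean  :", round(statistics.mean(results), 1))   # 79.0
print("median:", statistics.median(results))           # 81
print("mode  :", statistics.mode(results))             # 90
```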

II. Measure of Dispersion


✓ In general, it indicates how the data are distributed or spread. The spread represents
the relationship of all the data points to the mean. Three commonly used descriptions of spread are the range, standard deviation and coefficient of variation.

1. Range
✓ A measure of the spread or the dispersion of data points.
✓ It is the difference between the largest and the smallest observed value. Thus,
only the largest and least data values are considered.
✓ This is often a good measure of dispersion for small samples of data.
✓ The range value of a data set is greatly influenced by the presence of just one unusually large or small value in the sample, commonly referred to as an outlier.

Example: Consider the following level 1 control values:


65, 73, 89, 56, 73, 52, 47
Range: 89 - 47 = 42

2. Standard Deviation
✓ It measures the “spread”, “variation” or “dispersion” of the set of data about the mean or the expected value. It is the most frequently used measure of
variation.
✓ It is the square root of the variance, which is the average of the squared
differences from the mean.

SD = √[ Σ(x − x̄)² / (n − 1) ]

where:
n = the number of observations (how many numerical values)
Σ = the sum of … in this case, the sum of all the (x − x̄)²
x̄ = the mean value
x = the value of each individual observation
✓ Low standard deviation indicates data points which tend to be very close
to the mean, while a high standard deviation value indicates data widely
spread out over a large range of values.
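A minimal sketch of the SD formula above applied step by step, using the level 1 control values from the Range example in this unit.

```python
import math

# Level 1 control values from the Range example above.
values = [65, 73, 89, 56, 73, 52, 47]

n = len(values)
mean = sum(values) / n   # 455 / 7 = 65.0

# Sum of squared differences from the mean, divide by (n - 1), take the square root.
sum_sq_diff = sum((x - mean) ** 2 for x in values)
sd = math.sqrt(sum_sq_diff / (n - 1))

print(f"mean = {mean:.1f}")   # 65.0
print(f"SD   = {sd:.1f}")     # about 14.6
```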

3. Coefficient of Variation
✓ It is defined as the ratio of the standard deviation to the mean value of the data used in the analysis. It is expressed as a percentage.
✓ Similarly, it is used to measure the spread or dispersion of a set of data in
proportion to its mean.
✓ It is considered a relative measure of precision. It signifies random error or imprecision.
✓ The smaller the CV value, the more reproducible the results, meaning more
values are closer to the mean. The higher the CV, the greater the dispersion
in the variable.

CV (%) = (Standard Deviation ÷ Mean) × 100

4. Variance
✓ The average of the squared distances between the center (mean) of the data and every value in the data set. It is depicted by the symbol S².
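Continuing with the same level 1 control values, a short sketch of the CV and variance calculations described above.

```python
import statistics

# Level 1 control values from the Range example above.
values = [65, 73, 89, 56, 73, 52, 47]

mean = statistics.mean(values)           # 65.0
sd = statistics.stdev(values)            # sample SD, about 14.6
variance = statistics.variance(values)   # S^2, about 213.0

cv_percent = (sd / mean) * 100           # CV (%) = (SD / mean) x 100

print(f"mean = {mean:.1f}, SD = {sd:.1f}, variance = {variance:.1f}")
print(f"CV = {cv_percent:.1f} %")        # about 22.5 %
```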

III. Measures of Shape
✓ While there are many different “shape” distributions that data sets can exhibit, the most commonly discussed is the Gaussian distribution or normal distribution curve. The Gaussian distribution describes many continuous laboratory variables and shares several unique characteristics: the mean, median, and mode are identical; the distribution is symmetric, which means that half of the values fall to the left of the mean and the other half to the right. The symmetrical shape is often called a “bell curve”.

Note: Most patient data are not normally distributed. These data may be skewed or exhibit multiple centers (bimodal, trimodal, etc.). Plotting data in histograms is a useful and easy way to visualize distribution.

✓ The Gaussian curve is defined as a symmetrical curve representing the normal distribution. It is a continuous function which approximates the exact binomial distribution of events.

[Figure: The Gaussian “bell-shaped curve”. The peak of the distribution should be centered on the MEAN (best estimate of the true value), and the width of the curve is related to the STANDARD DEVIATION of the data (a measure of its spread or precision).]

Reference Interval Studies
Laboratory test results are used primarily for medical diagnosis. They are also used for assessment of physiologic function and for therapy management of patients. When
interpreting laboratory results, physicians compare the measured test results of a patient with
a reference interval.

Reference intervals include all the data points that define the range of observations, usually to define a specified percentage of values for a population. These are the usual values for a healthy population, commonly the central 95% of the population of interest. Reference intervals are sometimes wrongly referred to as “normal values or ranges”. While all normal ranges are within reference intervals, not all reference intervals are normal ranges. Reference intervals are also sometimes called reference ranges; the former term is preferred because “range” implies absolute minimum and maximum values.

Reference intervals are usually established by the scientific community or the


manufacturers of reagents and new methodologies. Age- and sex-stratified normal values should be present for all populations tested. The clinical laboratory is required by good
laboratory practice and accreditation agencies to either verify or establish reference
intervals for any new tests or significant changes in methodologies.

Establishment of a Reference Interval

✓ A new reference interval is established when there is no existing analyte or methodology in the clinical or reference laboratory with which to conduct comparative studies. It is a costly and labor-intensive study that will involve laboratory resources at all levels and may require from 120 to as many as 700 study individuals. The minimum requirement is 120 subjects from each age and sex group.

Verification of a Reference Interval (Transference)

✓ This is done to confirm the validity of an existing reference interval (commonly


published by manufacturers) for an analyte using the same type of analytic system,
method or instrument. These are the most common reference interval studies
performed in the clinical laboratory and can require as few as 20 study individuals.
When two or fewer values fall outside the suggested interval, the manufacturer’s reference interval is accepted. However, if there are more than two values falling outside the suggested interval, a second testing will be required. The second testing will require another 20 subjects. In the second testing, two or fewer values

falling outside the interval means acceptance of the manufacturer’s established reference interval. More than two values outside the interval requires a complete study using 120 subjects.

Note: Inpatient samples should not be used for reference interval studies that are designed to reflect a population.
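A minimal sketch of the transference (verification) decision rule described above, with a hypothetical published interval and hypothetical results from 20 study individuals.

```python
# Hypothetical manufacturer's (published) reference interval, e.g. mg/dL.
lower_limit, upper_limit = 70, 110

# Hypothetical results from 20 healthy study individuals.
results = [82, 95, 74, 101, 88, 99, 77, 105, 90, 86,
           112, 79, 97, 84, 108, 92, 71, 103, 89, 96]

outside = [x for x in results if x < lower_limit or x > upper_limit]

if len(outside) <= 2:
    print(f"{len(outside)} of 20 outside the interval: accept the published interval.")
else:
    print(f"{len(outside)} of 20 outside the interval: repeat with another 20 subjects;")
    print("if more than two are again outside, a full 120-subject study is required.")
```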

Module 5 Unit 2
Multiple Choice: Choose the best answer.

1. Reference interval transference studies

A. Are used to verify a reference interval.


B. Are used to establish a reference interval.
C. Require as many as 120 normal study subjects.
D. Uses a 68% reference limit for acceptability.

2. A gaussian distribution is usually:


A. Bell shaped
B. Rectangular
C. Uniform
D. Skewed

3. Which measure of location is commonly used for trimodal patient data?


A. Mode
B. Mean
C. Median
D. Range

4. Which is the best way to present observation data showing non-typical distribution?

A. Histogram
B. Gaussian curve
C. Either histogram or gaussian curve
D. Neither histogram nor gaussian curve

5. Which measure of dispersion is most affected by the presence of an outlier?

A. Range
B. Standard Deviation
C. Coefficient of variation
D. Variance

QUALITY MANAGEMENT
Unit 3: Quality Control Charts

Your first laboratory test result


Imagine yourself on your first working day as a Registered Medical
Technologist assigned in the Clinical Chemistry section. Finally, you will be signing an actual
laboratory result which means that you are accepting responsibility for the reliability of said
result. After you run all the requested tests, you come up with results falling within the
reference range and results which are significantly way above the reference intervals. How
confident will you be of signing these varied test results?

Commonly (especially early in our practice of our profession), when results are normal
or within reference interval, we’re comfortable about them and do not worry. However, if
the results are abnormal or beyond the reference interval, we become uncomfortable and
tend to second guess our results. Is our result really accurate? Did we do the right procedure?
Should we repeat it again, just to make sure? What happens when upon repetition of the
test you come up with a significantly different result from your previous result?

Basic statistics and QC Charts in Clinical Chemistry


Statistical analysis of laboratory test data significantly affects our
confidence in the laboratory results which we generate. In chemistry, when control limits are
set at +/- 2SD, the sample results are reported at a 95% confidence level. When control limits
are set at +/- 3SD, the sample results are reported at a 99% confidence level.

The most common method of comparing the values observed for control materials
with their known values is the use of control charts. Control charts are simple graphical
displays in which the observed values are plotted versus the time when the observations
were made. The known values are represented by an acceptable range of values, as

indicated on the chart by lines for upper and lower limits. When plotted points fall within the control limits, this is generally interpreted to mean that the method is performing properly. When a point falls outside the control limits, this may signify a problem with the analytical system or with the controls. The use of QC charts is a graphical method for evaluating whether a process is (or is not) in a “state of statistical control”.

Quality Control Charts

I. Levey-Jennings Control Chart

✓ Control charts were first introduced into the clinical chemistry laboratory by Levey and
Jennings in 1950. They demonstrated how the industrial control procedures
developed by Shewhart could be used with the mean and range of duplicate
measurements from clinical chemical methods. In an alternate chart, single control
values are plotted directly. Today, this single-value chart is commonly known as
Levey-Jennings (LJ) chart, even though Levey and Jennings recommended plotting
the mean and range of replicate measurements.
✓ Allows observation of shifts, trends and Westgard multirule violations.
✓ To use the LJ chart, follow these steps:
1. Analyze samples of the control material by the analytical method to be controlled
on at least 20 different days DURING STABLE METHOD PERFORMANCE. Calculate
the mean and standard deviation for those results.

Note: Many instruments, laboratory information systems, and software packages are available for the automatic computation of control values for QC purposes. It is, however, important to understand how these measures are calculated. So, let’s try doing this manually.

2. Construct a control chart manually on graphing paper. Label the y-axis “control
values” and set the range of concentrations to include the mean and +/-4SD.
Draw horizontal lines for the mean and the upper and lower limits. Set the control
limits as the mean +/-3SD when the number of control observations, n, is 2 or
greater. When n is 1, control limits may be set as the mean +/- 2SD. Label the x-axis
in terms of time, using day, run number, control observation number, or whatever
is most appropriate for recording the relative time of the control observations.
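
A minimal sketch (assuming the matplotlib library is available; the mean and SD values are hypothetical) of how the chart frame described in step 2 could be drawn by software instead of on graphing paper:

import matplotlib.pyplot as plt

mean, sd = 100.0, 2.0      # hypothetical baseline from step 1
days = 31                  # x-axis: one control observation per day

fig, ax = plt.subplots()
ax.set_xlabel("day")
ax.set_ylabel("control values (mg/dL)")
ax.set_xlim(0, days + 1)
ax.set_ylim(mean - 4 * sd, mean + 4 * sd)   # y-axis spans mean +/- 4SD

ax.axhline(mean, color="black")             # mean line
for k in (1, 2, 3):                         # +/-1SD, 2SD, 3SD limit lines
    ax.axhline(mean + k * sd, color="gray", linestyle="--")
    ax.axhline(mean - k * sd, color="gray", linestyle="--")

plt.show()   # daily control values are then plotted on this frame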

3. Introduce control specimens into each analytical run, record the values, and plot
each value on the control chart.
Note that the mean and standard deviation used in labeling the y-axis (under no. 2) of
the QC chart were derived from the control runs of the previous month (for
example, from the controls run during the month of February 2021). The
control specimens indicated under this number (3) refer to the control runs
being done this month of March 2021. If we were to prepare a new QC chart
for the month of April, the control values recorded for March would be used
to compute the mean and standard deviation that label the y-axis of the new
chart, against which the control values of April will be plotted (as long as
there are no changes in method performance, such as the use of a new
batch/lot of control materials).

Note: ranges for patients are referred to as within or beyond reference intervals; ranges
for control specimens are referred to as "in" or "out" of control.

4. When the control values fall within the control limits, interpret the run as being "in
control" and report the patient results. When a single control measurement exceeds
the control limits, stop the method; do not report the patient results. (In actual
practice, for economic reasons, the control and patient samples are not run
simultaneously. Instead, the controls are run first, and only when the run is "in control"
do we proceed to running the patient samples.) Inspect the method to determine the
cause of the errors. Resolve the problem and repeat the entire run, both specimens
and controls (again, if they are not run simultaneously, we only repeat running the
controls). Determine the control status of the new run in the same manner.
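
A minimal sketch (a hypothetical helper, not part of the module) of the decision described in step 4, where the control is run first and patient samples are analyzed only when it is in control:

def control_in_limits(value, mean, sd, k=2.0):
    """Return True when the control value lies within mean +/- k*SD."""
    return (mean - k * sd) <= value <= (mean + k * sd)

mean, sd = 100.0, 2.0        # hypothetical baseline from the previous month
todays_control = 104.7       # today's control result

if control_in_limits(todays_control, mean, sd):
    print("In control: proceed with the patient samples and report results.")
else:
    print("Out of control: troubleshoot, then repeat the control run.")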

To confirm if the instrument is malfunctioning, rerun the standard once more. If
the instrument is determined to be functioning properly, the run of samples is
repeated. If the source of error is NOT due to an instrument malfunction, the
analyst shall RECHECK the reagents, dilutions, data reduction, and other possible
sources of error (such as technical errors).
Ideally, the LJ chart should have control values clustered about the mean
(within +/- 2SD) with little variation in the upward or downward direction.
Imprecision is commonly indicated by a large amount of scatter about the
mean and is usually caused by errors in technique. Inaccuracy may be seen
as a TREND or a SHIFT, which is usually caused by a change in the testing
process, that is, a systematic error (a change in the mean of the control values).
At this point, the Westgard multirule chart is also used to assess the
acceptability or non-acceptability of the control run.

Shift and Trend


✓ Shifts and trends seen in an LJ chart are both indicative of systematic errors.
✓ A shift is the sudden switch of data points to another area of the control chart
away from the mean. It is indicative of an abrupt change in the control mean;
control values distribute themselves on one side of the mean. It commonly occurs due to:
1. Sudden failure or change in the light source
2. Change in reagent formulation
3. Change of reagent lot number
4. Major instrument maintenance
5. Sudden change in incubation temperature
6. Change in room temperature or humidity
7. Failure in the sampling system
8. Failure in the reagent dispense system
9. Inaccurate calibration or recalibration

✓ Trends are characterized by an increase or decrease of control values for 6
consecutive days. There is a systematic drift in one direction away from the
established mean. This shows a gradual loss of reliability in the test system. This is
commonly caused by:
1. Deterioration of the instrument light source
2. Gradual accumulation of debris in the sample/reagent tubing.
3. Gradual accumulation of debris on electrode surfaces.
4. Aging reagents
5. Gradual deterioration of control materials
6. Gradual deterioration of incubation chamber temperature
7. Gradual deterioration of light filter integrity
8. Gradual deterioration of calibration

Westgard Multirule Chart

✓ The multirule procedure developed by Westgard and associates uses a series of
control rules for interpreting control data. The probability of false rejection is kept low
by selecting only those rules whose individual probabilities for false rejection are very
low (0.01 or less). The probability of error detection is improved by selecting those
rules that are particularly sensitive to random and systematic errors.
✓ Rules governing the acceptance or rejection of a control value based on the
expected Gaussian distribution.
✓ Uses a combination of decision criteria, or control rules, to decide whether an
analytical run is in control or out of control. Each rule is named for the type of error
detected and the number of QC samples that show the error, and is expressed as
N_L, where N is the number of control observations and L is the statistical limit (for
example, 1_3s means one control observation exceeding the mean +/- 3SD limit).
✓ The following control rules are used:

1. 1_2s
✓ A single control measurement exceeds +/- 2SD
from the target mean.
✓ This is a warning rule to trigger careful inspection of
the control data by other control rules.
✓ It is indicative of an acceptable random error.

2. 1_3s

✓ One control observation exceeds the mean +/- 3SD control limit.
✓ A rejection rule primarily sensitive to random error.

3. 2_2s

✓ Two consecutive control measurements exceed the same mean +/- 2SD control limit.
✓ A rejection rule sensitive to systematic error.
✓ There are two applications: within-run or across control materials (observing all
control results obtained for the current analytical run; two control levels are
considered at one time) and across runs or within one control material (observing
results of one control level over subsequent runs).

(Illustration: within-run versus across-run application of the 2_2s rule)

4. R_4s

✓ One control measurement in a group exceeds the mean + 2SD and another exceeds
the mean - 2SD. Applied only within the current run so that between-run systematic
errors are not wrongly interpreted as random errors. However, the rule may be
applied "across" materials, meaning that one of the observations can be on the low
material and the other on the high material, as long as they are within the same run.
✓ A rejection rule sensitive to random error.

5. 4_1s

✓ Four consecutive control measurements exceed the same mean + 1SD or the same
mean - 1SD control limit.
✓ A rejection rule sensitive to systematic error.
✓ Applied within and across control materials.

6. 3_1s and 4_1s
✓ May be applied within control materials or across control materials.
✓ May not necessarily require rejection of analytical run, may be eliminated
with calibration or instrument maintenance.

7. 10_x
✓ Ten consecutive control observations falling on one side
of the mean (above or below, with no other requirement
on size of deviations).
✓ A rejection rule that is sensitive to systematic error.

8. 7_x, 8_x, 9_x, 12_x
✓ These rules are violated when there are 7, 8, 9, or 12 control results on the same
side of the mean, regardless of the specific standard deviation band in which they
are located.
✓ Each of these rules has two applications: within control materials and across
control materials. A within-control-material violation indicates systematic bias in
a single area of the method curve, while a violation of the across-control-material
application indicates systematic bias over a broader concentration range.
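
A minimal sketch (hypothetical, and simplified to a single control level; the real multirule procedure is usually applied across two or more control materials) of how some of the rules above can be checked on a sequence of control values:

def westgard_violations(values, mean, sd):
    """Check the newest control value, and its predecessors, against a
    simplified set of Westgard rules for one control level."""
    z = [(v - mean) / sd for v in values]   # deviations in SD units
    violations = []

    if abs(z[-1]) > 2:
        violations.append("1_2s (warning only)")
    if abs(z[-1]) > 3:
        violations.append("1_3s (reject: random error)")
    if len(z) >= 2 and (min(z[-2:]) > 2 or max(z[-2:]) < -2):
        violations.append("2_2s (reject: systematic error)")
    if len(z) >= 2 and max(z[-2:]) > 2 and min(z[-2:]) < -2:
        violations.append("R_4s (reject: random error)")
    if len(z) >= 4 and (min(z[-4:]) > 1 or max(z[-4:]) < -1):
        violations.append("4_1s (reject: systematic error)")
    if len(z) >= 10 and (min(z[-10:]) > 0 or max(z[-10:]) < 0):
        violations.append("10_x (reject: systematic error)")

    return violations

# Example: the last two values both exceed mean + 2SD, so 2_2s is flagged.
print(westgard_violations([100, 101, 99, 105, 105.5], mean=100, sd=2))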

RANDOM errors affect the reproducibility or precision of a test system. They are usually indicated
by 1_3s or R_4s rule violations. They may be due to variations in line voltage, pipettes/dispensers,
contamination, errors in volume dispensed, bubbles in reagent lines, etc.

SYSTEMATIC errors, exhibited by shifts, trends, and bias, affect the accuracy of the test
system. They are usually indicated by 2_2s, 4_1s, and 10_x rule violations. They may be due to
calibration lot changes, temperature changes in the incubator unit, light source deterioration,
reagent lot changes, etc.

✓ When considering whether a control value is within or out of control, the following
illustration summarizes the Westgard rules to be considered in sequence

✓ If a run is out of control, investigate the process and correct the problem. Do not
automatically repeat the control! What do you need to do to investigate the process?
1. Determine the type of error based on your rule violation (random or systematic).
2. Relate the type of error to the potential cause.
3. Inspect the testing process and consider common factors on multi-test systems.
4. Relate causes to recent changes.
5. Verify the solution and document the corrective action.

II. Cumulative Sum (Cusum) Graph

✓ A CUSUM control chart monitors the deviations of individual sample results (X values)
or subgroup averages (X-bar values) from a target value. The target value is the same as
your process aim, that is, where you want the process to operate. The cumulative sum is
the sum of these deviations over time.
✓ Calculates the difference between QC results and the target mean.
✓ Very sensitive in detecting the small, persistent errors that commonly occur in modern,
low-calibration-frequency analyzers.
✓ Gives the earliest indication of systematic errors.
✓ Most common interpretation method: the V-mask.
✓ Requires computer implementation.
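
A minimal sketch (all values hypothetical) of the cumulative-sum idea, accumulating each day's deviation of the control result from the target mean:

target_mean = 100.0
controls = [101, 102, 101.5, 102.5, 101, 103, 102]   # small but persistent shift

cusum = 0.0
for day, value in enumerate(controls, start=1):
    cusum += value - target_mean                      # accumulate the deviation
    print(f"day {day}: value = {value}, cusum = {cusum:+.1f}")

# The steadily growing cusum signals a small systematic error long before any
# single point would exceed the usual +/-2SD or +/-3SD limits.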

III. Youden Plot

✓ Also known as Twin Plot


✓ Youden plots are a graphical technique for analyzing interlab data when each lab
has made two runs on the same product or one run on two different products. The
Youden plot is a simple but effective method for comparing both the within-
laboratory variability and the between-laboratory variability.
✓ Compares results of high and low control sera from different laboratories. Commonly
used in external quality programs to compare the laboratory's performance on paired
samples with other laboratories using the same lots of survey materials.
✓ Plotting mean values: one specimen on the ordinate, the other specimen on the abscissa.
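
A minimal sketch (assuming matplotlib; all paired results are hypothetical) of a Youden plot, where each point represents one laboratory's results on the two survey specimens:

import matplotlib.pyplot as plt

specimen_a = [98, 101, 103, 97, 110, 99]     # low-level survey material
specimen_b = [148, 152, 155, 146, 168, 150]  # high-level survey material

fig, ax = plt.subplots()
ax.scatter(specimen_a, specimen_b)                              # one point per laboratory
ax.axvline(sum(specimen_a) / len(specimen_a), linestyle="--")   # group mean, specimen A
ax.axhline(sum(specimen_b) / len(specimen_b), linestyle="--")   # group mean, specimen B
ax.set_xlabel("specimen A result (abscissa)")
ax.set_ylabel("specimen B result (ordinate)")
ax.set_title("Youden (twin) plot")
plt.show()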

Control of analytical quality using patient data
QC mechanisms based on patient data provide additional information
useful in monitoring the quality of laboratory analyses. These procedures are
often time consuming and generally are not sensitive enough to serve as the only means of
QC. However, many of the control problems detected with these techniques may not be
evident with conventional QC systems.

I. Individual Patient Results


Patient test results are the final product of most laboratory procedures, and
monitoring of these results is the most direct form of QC.

II. Clinical Correlation


In general, the clinician ordering the tests is in a better position to evaluate the
appropriateness of test results. However, clinicians often order a test because they are
uncertain of the exact clinical status of the patient and base their diagnosis heavily on lab
test results. Clinical correlation studies are best done retrospectively, when laboratory test
results are directly related to other evidence, such as surgical findings and response to
therapy. In an individual patient, clinical correlation can identify impossible or highly unlikely
test results that cannot be flagged in the laboratory, such as a normal serum bilirubin in a
highly jaundiced patient. Clinicians should be encouraged to report these discrepancies to
the laboratory, and a mechanism should be available to follow up on these problems.

III. Correlation with other laboratory tests


As in clinical correlation, a single test result can be judged implausible in only a
limited number of situations, but certain combinations of several test results can be
recognized as impossible or very unlikely. If the tests involved in these comparisons are
performed at the same time, it is often possible to identify errors and correct problems
before reporting the test results to the clinicians.

IV. Intralaboratory duplicates


In practice, samples are often divided into two aliquots and analyzed, and the
duplicates used for control purposes. This is a simple QC procedure that does not require
stable control materials; therefore, it is used when stable materials are not available or as a
supplemental procedure when stable control materials are available. When duplicates are
obtained from the same method, this range chart monitors only random error and thus is not
adequate for ensuring accuracy of the analytical method. When duplicates are obtained

from two different laboratory methods, then the range chart actually monitors both random
and systematic errors but cannot separate the two types of errors.
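
A minimal sketch (hypothetical duplicate results and an assumed action limit) of monitoring intralaboratory duplicates by the range, that is, the absolute difference between each pair of aliquots:

pairs = [(4.1, 4.2), (5.0, 4.8), (3.9, 4.5), (6.2, 6.1)]   # duplicate aliquot results
range_limit = 0.4                                           # assumed action limit

for i, (a, b) in enumerate(pairs, start=1):
    delta = abs(a - b)                                      # range of the duplicate pair
    flag = "REVIEW" if delta > range_limit else "ok"
    print(f"specimen {i}: |{a} - {b}| = {delta:.2f}  {flag}")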

V. Delta Checks with previous test results


The practice of comparing laboratory test results with values obtained on previous
specimens from the same patient. Delta check limits are defined based on a 3-day interval
in terms of a percentage change from the initial value, and they vary from one analyte to
another. Delta checks compare current laboratory results to previous results; if the
difference between the two values exceeds predetermined biological limits (within a
predetermined length of time), a technologist is alerted and the discrepancy can be
investigated further.
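
A minimal sketch (the analyte, values, and percentage limit are assumptions for illustration only) of a delta check comparing a patient's current result with the previous one:

def delta_check(current, previous, limit_pct):
    """Return True when the percentage change exceeds the delta check limit."""
    change_pct = abs(current - previous) / previous * 100
    return change_pct > limit_pct

# Example: potassium rising from 4.0 to 6.5 mmol/L exceeds an assumed 20% limit.
if delta_check(current=6.5, previous=4.0, limit_pct=20):
    print("Delta check failed: alert a technologist to investigate.")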

VI. Limit checks


A patient's test results should be reviewed to check that they are within the
physiologic ranges compatible with life. These limit checks are helpful for detecting clerical
errors, such as transposed digits or misplaced decimal points. These warning limits depend
on the test methodology and on the characteristics of the patient population being tested.
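
A minimal sketch (the bounds shown are assumed for illustration, not authoritative physiologic limits) of a limit check that flags results outside ranges compatible with life, such as a mis-keyed value:

# Assumed illustrative bounds in mmol/L; a real laboratory would set its own.
life_compatible = {"sodium": (100, 180), "potassium": (1.0, 10.0)}

def limit_check(analyte, value):
    """Return True when the result lies within the life-compatible range."""
    low, high = life_compatible[analyte]
    return low <= value <= high

print(limit_check("potassium", 45.0))   # False: 4.5 mis-keyed as 45.0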

Module 5 Unit 3
Identify the following:

1. What is the most common method of comparing the values observed for control
materials with their known values?

2. What is commonly done to confirm if the instrument is malfunctioning during an out-of-control result?

3. Which is referred to as the sudden switch of data points to another area of the control
chart away from the mean?

4. Which Westgard control rule is considered an indicator of acceptable random error?

5. What do we call the practice of comparing laboratory test results with values obtained
on previous specimens from the same patient?

