
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/0265-671X.htm

IJQRM
24,4

The house of reliability
M. Braglia, G. Fantoni and M. Frosolini
Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione,
Università di Pisa, Pisa, Italy

Received July 2005; revised September 2006; accepted October 2006

Abstract
Purpose – The purpose of this paper is to provide a structured methodology for performing build-in reliability (BIR) investigation during a new product development cycle.
Design/methodology/approach – The methodology in this paper extends the Quality Function Deployment/House of Quality (QFD/HoQ) concepts to reliability studies. It translates the reliability requisites of customers into functional requirements for the product in a structured manner, based on a Failure Mode and Effect Analysis (FMEA). It also makes it possible to build a completely new operative tool, named the House of Reliability (HoR), that enhances standard analyses by introducing the most significant correlations among failure modes. Using the results from the HoR, a cost-worth analysis can easily be performed, making it possible to analyse and evaluate the economic consequences of a failure.
Findings – The paper finds that the application of the proposed approach allows users to identify and control the design requisites affecting reliability. The methodology enhances the reliability analysis by introducing and managing the correlations among failure modes, splitting Severity into a detailed series of basic severity aspects, and also performing cost/worth assessments.
Practical implications – It is shown that the methodology enables users to analyse failure modes in fine detail, splitting Severity according to the product typology and weighting each Severity criterion according to laws or international standards. Moreover, the methodology is able to consider "domino effects" and thus to estimate the impact of the correlations between the causes of failure. Finally, a cost/worth analysis evaluates the economic consequences of a failure with respect to the costs incurred to improve the final reliability level of the product.
Originality/value – The paper proposes a completely new approach to reliability analysis that is robust, structured and useful in practice. Within an integrated approach, the methodology overcomes some of the well-known limits of standard FMECA: it takes into account multiple, differently weighted criteria, and it analyses the product considering not only the direct consequence of a failure, but also the reaction chain originated by a starting failure.
Keywords Cost analysis, Failure modes and effects analysis, Quality function deployment
Paper type Research paper

Introduction
Today, industries continuously face higher customer expectations for the quality and reliability of products, set against severe cost constraints and the pressure for shorter product-cycle times.
The terms quality and reliability are often used interchangeably. While strongly connected, they are not the same. Indeed, reliability can be defined as the continuation of quality over time: it simply represents the ratio of the time period during which a product meets the expected standards of quality to the overall time of use. The importance of reliability during product design development grows greater each day. In fact, if product quality is now necessary in order to stay in business, in tomorrow's marketplace reliability will be the standard for doing business.

International Journal of Quality & Reliability Management, Vol. 24 No. 4, 2007, pp. 420-440. © Emerald Group Publishing Limited, 0265-671X. DOI 10.1108/02656710710740572
Build-in reliability (BIR) is the philosophy/activity whereby the design requisites affecting reliability must be identified, controlled and enhanced, leading to continuous improvement in product reliability. With BIR, significant effort is placed on the identification of the design actions that have a significant effect on final reliability. However, the task of building in reliability during product development is frequently resource-intensive, since it is necessary to identify the possible failure modes associated with a (new) product. If the product is complex, the number of possible failure modes can be great, rendering the BIR activity rather impractical (Tan, 2003).
What are the existing methods and tools able to support design engineers during the BIR activity? Ahmed (1996) presents an interesting discussion of tools and techniques supporting product reliability improvement, which are classified into:
(1) On-line techniques – techniques used purely for measuring, monitoring and assessing the progress of product reliability improvement, identifying the inhibitors responsible for poor reliability performance so that design engineers can act on this information.
(2) Off-line techniques – techniques used to assure reliability during product design and manufacture. These are methods:
. used as a design activity during the introduction of new products and product uplifts, as a result of customer needs and technological advancement; or
. used for analysing the effects of proposed engineering changes on the reliability of the product.
Although the literature emphasizes the more mathematical modelling and predictive aspects of this subject (i.e. on-line techniques), the author underlines how the application and diffusion of the more rational reliability tools in the manufacturing industry is strongly limited. This is due to:
. the complexity of these approaches;
. the need for historical information, such as failure data, mechanisms, and modes of component failure, that is difficult to obtain; and
. the high specificity of these models to a single, particular case study.
On the contrary, off-line techniques account for fewer research studies but a larger number of industrial applications. Quality Function Deployment (QFD) and Failure
Modes Effects Analysis (FMEA) probably represent the most popular examples of
off-line techniques. These are two technical tools dealing with the same issue, namely
the customer’s satisfaction, each operating from a different point of view. QFD can be
considered as the defender of the customer's needs, while FMEA is the defender of the voice of the engineer (Ginn et al., 1998).
QFD is a methodology that systematically translates the “voice of customer” into
functional requirements for the product. However, the translation and deployment of
reliability requisites from customer expectations to design specifications is neither
clearly defined nor procedurized. Once failure aspects are taken into account, "reliability design" proves conceptually different from "quality design", since it requires a failure analysis as an indispensable part of the design activity. This is probably the main difficulty in applying QFD directly to support reliability studies during product development.
On the other hand, FMECA represents an analytical technique adopted by engineers
to ensure that all the potential failures of a new product have been considered and
analysed in terms of failure modes, related causes, and possible effects on the customer.
Unfortunately, the customers' expectations are not considered during the analysis. In this way, only the manufacturer's point of view is taken into account when the prioritization of reliability efforts is actually decided. The customers' requests, which should be the true "guide" in the build-in reliability implementation during product development, are completely neglected. Yet, as required by the Engineering Quality Improvements Program (EQIP) at Ford Motor Company, during a product development program it is necessary to avoid engineering "out" what the customer did not want, and instead to engineer "in" what the customer wants (Al-Mashari et al., 2005; Henshall, 1995).
Since, if used individually, the two previous methods are inadequate to guide the engineer in designing a new product able to satisfy the customer's reliability requirements with low costs and short development times, in recent years some efforts have been made to propose new approaches supporting the build-in reliability task during product development.
The natural consequence of the above-mentioned concepts is the attempt to introduce an integrated usage of QFD and FMEA. The idea of combining QFD and FMEA is not new at all, and several papers have been written about it. For example, Ginn et al. (1998), Ginn and Zairi (2005), and Al-Mashari et al. (2005) discuss the linkages of QFD with other quality tools (Taguchi methods, value management, etc.) and, first of all, with FMEA. In particular, they support their analysis by describing the approaches followed and suggested by the Ford EQIP training process (Ford Motor Company, 1983; Henshall, 1995). Ginn et al. (1998) go so far as to propose a general methodology for interactions between QFD and FMEA, emphasising their common features. Ginn and Zairi (2005) present a benchmarking exercise involving a large number of QFD practitioners within Ford Motor Company. They find that FMEA is considered by users to be one of the most important quality tools adopted in conjunction with QFD. Al-Mashari et al. (2005) underline how QFD represents, for Ford EQIP, the key tool linking all the other quality tools, through the QFD process itself.
Frequently, research focuses on the QFD/FMEA interface and concerns the development of a more coherent approach for integrating these two customer-driven quality tools. Yang and Kapur (1997) suggest a purely theoretical model based on a combined use of QFD and FMEA to provide a conceptual framework for deploying reliability characteristics into the functional design of a product/component. The customer's reliability voice is fed into the House of Quality (HoQ) of a traditional QFD as functional requirements and transformed into engineering specifications. In a second step, these specifications are used as input to a FMEA. Finally, the results of this FMEA are introduced into a "reliability optimized design matrix", where the effect of each failure mode is placed as a function of the corresponding engineering specification to obtain a reliability-optimised specification. Unfortunately, several aspects of the methodology are not described by the authors, such as how the customers provide the priority rating of their reliability requirements, and how that rating is incorporated into the FMEA.
In Tan (2003) and Tan and Neo (2002), a reliability HoQ makes it possible to identify the potential failure modes that are directly derived from the reliability requirements of the customer. At the same time, a quality and functional HoQ is used to detect the associated effects of failures and the corresponding criticality levels. In a second step, these data are used as inputs to a FMEA that specifies the consequent recommended (design) actions. In this way, the FMEA becomes customer-focused, since the prioritisation of the reliability corrective actions on the product design is effectively guided by the customers' requests.
With the “Small QFD/FMEA” approach (Kosche, 2002), the inputs of a starting QFD
are not the consumer’s requirements but the consumer’s complaints with respect to the
existing similar products available on the market. These complaints are evaluated by
the QFD and linked to the possible causes of failure of a FMEA by means of a
correlation matrix “consumer’s complaint vs. potential failure”. This matrix shows on
one side the criticality of a failure that causes consumer’s complaints and on the other
side all the failures which have to be eliminated to avoid a certain consumer’s
complaint completely. This evaluation makes it possible to focus on crucial items to
achieve improvements that are relevant for the consumer.
In Furakawa (1984) the potential failure modes derived from customers are obtained through Fault Tree Analysis (FTA) instead of FMEA. Unfortunately, customers rarely have sufficient product knowledge and usually have no interest in constructing the fault tree for their suppliers. For these reasons, Trewn and Yang (1997) extended the work of Furakawa (1984), introducing a model where the inclusion of the customer's reliability requirements in the design of a product is achieved through an integrated use of QFD with FTA.
The aim of the present research is not to propose a new possible interface between
QFD and FMEA, but to develop a new, self-standing and operative tool able to bring
the voice of customer and the voice of engineer closer together during a full product
development program. In this paper, we introduce the concept of “House of
Reliability” (HoR) as a new operative tool of a methodology that makes it possible to
achieve the alignment of reliability customer’s requirements and reliability design
choices. The gap between QFD and FMEA approaches is fully bridged through the
use of HoR, which is able to translate the reliability requisites of the customer into
functional requirements for the product in a structured manner, based on a failure
analysis. The HoR can thus be effectively used to identify the reliability characteristics of the product that are perceived as important by the customers, and to relate these customer requirements to identifiable engineering actions. In other words, with this methodology the BIR analysis becomes effectively customer-focused, and hence the prioritization of the corrective actions on the relevant failure modes can be done objectively.
Finally, another important and attractive characteristic of the HoR methodology is presented in the paper. Thanks to its derivation from the QFD and HoQ concepts, the HoR is a tool that can be closely integrated with other product development methodologies, such as Cost/Worth Analysis. Using the results from the HoR, a cost-worth analysis can be performed using the Tanaka method (Tanaka, 1989). This method compares relative cost to the component "worth" generated by the HoR. It yields a graphical representation of those components that should be cost-reduced and those that might be enhanced. By determining the cost-worth ratio of the various parts, engineers are able to focus their efforts and to identify where they could apply reductions or enhancements to the product (i.e. to define the reliability design priorities). In other words, the methodology makes it possible to analyse and evaluate the economic consequences of a failure with respect to the costs incurred to improve the final reliability level of the product. The ability to support a cost/worth analysis must be pinpointed: in fact, the inability to perform an economic evaluation of the recommended corrective actions obtained from a FMEA is considered one of the most important limits of this popular technique.

The "house of reliability"

The house of reliability is built by analogy with the HoQ. This means that the new tool formally follows the structure and shape of the well-known "house of quality", with rooms and a roof, whereas its goals are deeply different.
The HoR is organized as shown in Figure 1, where:
. The "Severity" room, on the left of the matrix, represents the partition of the Severity criterion into sub-criteria and replaces the customers' needs of the HoQ. The HoR pays particular attention to a detailed analysis of Severity, since some authors (Braglia, 2000; Bevilacqua et al., 2000) underline that it represents the fundamental and most critical parameter in FMEA. In general, Severity can be further detailed into two more levels (Figure 2). The second level consists of four classes (safety, quality, maintenance cost and mean time to repair) which impact on the overall customers' satisfaction. In turn, these classes can be further split into differently weighted sub-criteria.
In this customer-focused method, the weights are assessed by the customer and the partition varies according to the analyzed product or process. Clear differences emerge when analyzing the Severity splitting for a process, a product to be maintained, or a replaceable component. Table I presents the main differences between:
. a machine for a continuous process;
. a car; and
. a car latch, respectively.

Figure 1. The House of Reliability

Figure 2. The tree diagram of severity

Table I. Severity criterion split in different cases

                      A machine for a
                      continuous process        A car                     A car latch
Quality loss          Quality/performance       Performance loss          Performance loss
                      loss
Mean time to repair   Production loss;          Mean time to repair
                      delay in delivery time
Maintenance cost      Labor maintenance cost;   Labor maintenance cost;
                      spare parts maintenance   spare parts maintenance
                      cost and inventories      cost and inventories
Safety                Human safety;             Human safety;             Human safety;
                      environmental impact      environmental impact      environmental impact

Evidently, whenever one or more elements of the Severity splitting are not applicable, it is possible to assign null values to their weights. For instance, the criterion "delay in delivery time", which is clearly fundamental in the case of a machine for a continuous process, makes sense neither for a new car nor for a car latch.
. The "Degrees of importance" room is essential to weight the importance of each criterion. In fact, not only does the Severity splitting vary according to the product, but the importance of each Severity criterion can also be advantageously modified according to laws (e.g. environmental laws, automotive OEM prescriptions) or international standards.
. The "Causes" room, located under the "roof", contains the "causes of failure" and substitutes the "technical characteristics" of the HoQ.
. The "Relationship Matrix" represents the link between the "causes of failure" and the severity of their effects. Instead of the well-known symbols and the standard conversion metrics adopted in classic QFD (double circle, circle and triangle), in the HoR a linguistic approach, typical of reliability analysis, is used. Well-known "conversion" tables (Ford Motor Company, 1983; Ben-Daya and Raouf, 1996; Gilchrist, 1993) are used to convert the linguistic judgments concerning the severity of each failure effect into a 1-to-10 numeric scale (Appendix 1, Table III).
. The "Probability" room, located under the "Relationship Matrix", contains the assessed value (again on a scale ranging from 1 to 10) of the probability of each cause of failure occurring. Each value represents a multiplicative coefficient in the computation of the final rating.
. The "Non Detectability" room is located below the "Probability" room. For each cause of failure, the chance of the failure going undetected is expressed on a 1-to-10 scale.
. The "Domino Matrix" constitutes the "roof" of the HoR. It is located just above the "Causes of failure" room and identifies the dependencies between different causes. In other words, it is the room where the "domino effects" occurring between the causes of failure are evaluated (Appendix 2).
. The "Final Score" room occupies the lowest box, beneath the matrix. This room contains the final rating of the HoR and quantifies the reference values for evaluating the risk of each cause of failure. The numerical value obtained for each cause of failure measures the customer's degree of dissatisfaction.
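The rooms listed above map naturally onto a handful of arrays. The following is a minimal sketch in Python: the severity, probability and weight values are invented for illustration, while the three cause names and the 0.7/0.4 domino values echo the valve example discussed later in the paper.

```python
# Illustrative HoR skeleton with n = 2 severity sub-criteria and m = 3
# causes of failure; numeric values are invented for this sketch.
hor = {
    # "Severity" room: the sub-criteria into which Severity is split
    "criteria": ["human safety", "performance loss"],
    # "Degrees of importance": customer-assessed weight of each criterion
    "weights": [5, 3],
    # "Causes" room: the causes of failure (columns of the matrix)
    "causes": ["valve spring break", "vent duct obstruction", "joint leakage"],
    # "Relationship Matrix": severity of each cause under each criterion (1-10)
    "severity": [[8, 2, 4],
                 [3, 6, 7]],
    # "Probability" room: occurrence rating of each cause (1-10)
    "probability": [2, 5, 3],
    # "Non Detectability" room: chance the failure goes undetected (1-10)
    "non_detectability": [4, 3, 6],
    # "Domino Matrix" (roof): domino[i][j] = probability that cause i
    # triggers cause j; the diagonal is zero (no cause "dominoes" on itself)
    "domino": [[0.0, 0.0, 0.7],
               [0.0, 0.0, 0.4],
               [0.0, 0.0, 0.0]],
}

n = len(hor["criteria"])   # rows of the relationship matrix
m = len(hor["causes"])     # columns, and the size of the domino roof
```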
Before describing the evaluation algorithm, it is necessary to underline that the Domino Matrix plays a fundamental role within the HoR and is a key element in the evaluation of the final score. Hence, it is worth emphasizing the most interesting similarities with some key works on the correlation matrix in QFD.
Even if the correlation matrix (the so called “roof”) is often neglected, more than a
few authors revealed its key role in advanced QFD. By means of this particular
structure, Wasserman (1993) took into account the effect of dependencies among
engineering design requirements and normalized the technical importance of design
requirements reducing the weight of correlated characteristics.
Further, Franceschini (2002) suggested using the correlations to eliminate redundancies and duplications of those characteristics that are formally but not
actually different. The underlying idea is that correlated characteristics affect the same
set of customer needs and, therefore, they have to be deeply analyzed in order to
eliminate all possible overlaps.
Fantoni et al. (2004) defined an algorithm for assessing the effect of synergies
between the characteristics/solutions in QFD. The authors propose to use correlations
for managing the effect of synergy between different characteristics.
Georgantzas and Hessel (1995) proposed an analytical approach to assess the effects
of multiple interdependencies, dealing with the problem of second, third and
higher-order effects among QFD needs and characteristics.
An original use of the "roof" is proposed in the HoR. In fact, in order to consider the "domino effects", it is important to estimate the correlations between the causes of failure. The domino effect takes into account how the fault of a component can spread within a system, leading other components to failure. Therefore, the Domino Matrix describes the fault tree of a system and evaluates each cause of failure through all the possible consequences of its effects.
While the correlation matrix in the HoQ contains the symbols "0", "+" and "-" to indicate the absence of correlation, a positive correlation or a negative correlation between technical characteristics, the Domino Matrix in the HoR contains the dependencies between different causes of failure. Therefore, while the correlation between two technical characteristics in the HoQ is symmetric (a single value per pair), for each couple of causes of failure i and j the Domino Matrix in the HoR contains two distinct values: a_ij and a_ji. In fact, a_ij represents the assessed probability that the i-th cause affects the j-th one, whereas a_ji represents the assessed probability that the j-th cause influences the i-th one. Obviously, in general a_ij differs from a_ji.
An appropriate linguistic conversion table, similar to those used within the FMEA, has been prearranged to support the analysts in assessing the value of each coefficient a_ij (Appendix 2, Table AII).

Evaluating the criticality of the causes of failure


Failure mode criticality evaluation (US MIL-STD 1629-A, 1988) is usually executed in two different ways (Bowles and Pelaez, 1995; Pelaez and Bowles, 1994):
. calculating a Criticality Number (CN); or
. developing a Risk Priority Number (RPN).
Because of its intrinsic simplicity, the RPN method is widely adopted and diffused in industrial practice. The RPN criticality evaluation adopts linguistic terms to rank the chance of the failure mode occurring, P, the severity of its failure effect, S, and the chance of the failure going undetected, D, on a numeric scale from 1 to 10. Well-known "conversion" tables (see, for example, Ben-Daya and Raouf, 1996; Gilchrist, 1993; Pelaez and Bowles, 1994) report the typical basis for the linguistic judgment scales used to estimate the three quantities, which are then used to calculate the RPN value in the following manner:

RPN = P × S × D
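As a concrete illustration of this product of ratings (the numeric values below are invented, not taken from the paper), the RPN computation reduces to a few lines:

```python
def rpn(p: int, s: int, d: int) -> int:
    """Risk Priority Number: occurrence P x severity S x non-detection D,
    each expressed on the 1-10 linguistic conversion scale."""
    for name, value in (("P", p), ("S", s), ("D", d)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must lie in 1..10, got {value}")
    return p * s * d

# A failure with low occurrence (3), serious effect (8), poor detection (5)
assert rpn(3, 8, 5) == 120
```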
In the HoR, the risk level of each cause of failure can be determined:
. by considering the priorities assigned by the customer to each criterion of the Severity splitting;
. by using the information contained in the "Relationship" matrix of the HoR;
. by using the values coming from the "Probability" and the "Non Detectability" rooms; and
. by considering the domino effects between the causes of failure.
Consequently, the Critical Risk number (CR) of the HoR is the following:

CR_i = P_i · ND_i · Σ_{j=1..n} w_j · c_{ij} + Σ_{k=1..m} a_{ik} · P_k · ND_k · Σ_{j=1..n} w_j · c_{kj}    (1)

where:
P_i = probability of occurrence of the i-th cause of failure;
ND_i = non-detectability of the i-th cause of failure;
w_j = weight of the j-th criterion into which Severity is split;
c_{ij} = severity of the i-th cause of failure, evaluated according to the j-th criterion;
a_{ik} = 0 for i = k.
The first term of equation (1) simply represents a weighted RPN (with the significant difference that here Severity results from different contributions). The second one, on the other hand, takes into account the first-level domino effects among the causes of failure. Its practical numerical contribution is to increase the Critical Risk number of those causes of failure whose effects may originate other failures (causal chains).
Equation (1) can also be written in the following condensed form:

CR_i = Σ_{k=1..m} a_{ik} · P_k · ND_k · Σ_{j=1..n} w_j · c_{kj}    (2)

where a_{ik} = 1 for i = k.


In this form it is more difficult to identify the two different contributions to the Critical Risk number, since the first-level domino effects are joined into a single addendum by a simple transformation of the a_{ik} coefficients. Indeed, when i = k the coefficient a_{ik} reduces to 1 and the formula gives the "weighted RPN" value that appears in (1), whereas, when i ≠ k, it represents the domino effect between cause i and cause k.
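The condensed form of equation (2) translates directly into a double loop over causes and criteria. The sketch below uses invented ratings; the matrix layout (c[k][j] = severity of cause k under criterion j) is an assumption made for the example:

```python
def critical_risk(p, nd, w, c, a):
    """Critical Risk numbers per the condensed form (equation (2)):
    CR_i = sum_k a[i][k] * p[k] * nd[k] * (sum_j w[j] * c[k][j]),
    with a[i][i] taken as 1 so that the i = k term is the weighted RPN."""
    m, n = len(p), len(w)
    # weighted severity of each cause k: sum_j w[j] * c[k][j]
    ws = [sum(w[j] * c[k][j] for j in range(n)) for k in range(m)]
    return [sum((1.0 if i == k else a[i][k]) * p[k] * nd[k] * ws[k]
                for k in range(m))
            for i in range(m)]

# Illustrative data: two causes of failure, two severity criteria;
# cause 0 can trigger cause 1 with probability 0.5 (domino effect)
p  = [2, 4]                    # occurrence ratings (1-10)
nd = [3, 2]                    # non-detectability ratings (1-10)
w  = [5, 1]                    # weights of the severity criteria
c  = [[8, 1],                  # severities of cause 0 under each criterion
      [2, 6]]                  # severities of cause 1
a  = [[0.0, 0.5],              # domino matrix, zero diagonal
      [0.0, 0.0]]
cr = critical_risk(p, nd, w, c, a)
# cause 0: 2*3*(5*8 + 1*1) + 0.5*4*2*(5*2 + 1*6) = 246 + 64 = 310
# cause 1: 4*2*(5*2 + 1*6) = 128
```

Note how the domino term raises the score of cause 0, which would otherwise be judged only on its direct severity.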
On the other hand, this condensed form offers the interesting opportunity of writing an extremely handy matrix definition. In fact, it is possible to represent the Domino Matrix as a square matrix A whose elements (i, j) are the parameters a_ij. According to Figure 3, equation (2) becomes:

r = (w^T S)(p I_mm)(nd I_mm) A

where:
r = row (1 × m) vector of Critical Risk numbers;
A = m × m matrix of "domino effects", with unit diagonal as in equation (2);
w = column (n × 1) vector of criterion weights;
S = n × m matrix of relationship severities;
p = row (1 × m) vector of probabilities;
nd = row (1 × m) vector of non-detectabilities;
I_mm = m × m identity matrix;
and p I_mm, nd I_mm denote the m × m diagonal matrices built from p and nd.
The algorithm, in its final matrix form, can be easily implemented by using a simple spreadsheet.
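In the same spirit as the spreadsheet implementation mentioned above, the matrix form can be sketched in a few lines of NumPy. The data are invented, and the domino matrix is applied here so that rows index the triggering cause, i.e. CR_i = Σ_k a_ik v_k, matching equation (2):

```python
import numpy as np

def critical_risk_matrix(w, S, p, nd, A):
    """Matrix form of equation (2): v = (w^T S) diag(p) diag(nd) is the
    weighted-RPN vector of the causes, then the domino matrix with unit
    diagonal propagates it: CR_i = sum_k a_ik * v_k."""
    v = (w @ S) * p * nd                  # v_k = P_k * ND_k * sum_j w_j c_kj
    return (A + np.eye(A.shape[0])) @ v   # unit diagonal keeps the RPN term

w  = np.array([5.0, 1.0])                 # weights of n = 2 criteria
S  = np.array([[8.0, 2.0],                # n x m severities: rows = criteria,
               [1.0, 6.0]])               # columns = causes of failure
p  = np.array([2.0, 4.0])                 # occurrence ratings of m = 2 causes
nd = np.array([3.0, 2.0])                 # non-detectability ratings
A  = np.array([[0.0, 0.5],                # domino matrix, zero diagonal
               [0.0, 0.0]])

r = critical_risk_matrix(w, S, p, nd, A)  # one Critical Risk number per cause
```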
Figure 3. Matrix form of the House of Reliability

Because an effect generated by the failure of a component can force other components to fail, in a chain reaction, the existence of "domino effects" of order higher than the first has to be examined. The evaluation of the second-, third- and higher-order effects of the relationships contained within the Domino Matrix can be performed by analogy with Georgantzas' approach in QFD (Georgantzas and Hessel, 1995). Briefly, if a_ij represents an assessment of the probability that the i-th cause affects the j-th one directly, and a_jk that the j-th cause affects the k-th one directly, then the probability that a failure generated by the i-th cause spreads to the k-th cause indirectly (that is, through cause j) can be calculated by multiplying a_ij by a_jk. Hence, A^2 represents the second-order "domino effects". Unfortunately, squaring the Domino Matrix A introduces undesirable non-null values on the diagonal (where no domino effects are expected at all), modifying the overall behavior of the algorithm with respect to its original formulation as given in (1). To avoid this issue, the diagonal of A^2 must be subtracted from it, restoring null values where necessary. Obviously, this must be done for all higher-order matrices, if present.
Therefore, in matrix form, the evaluation of the "domino effects" up to the f-th order is obtained in accordance with the following equation:

r = (w^T S)(p I_mm)(nd I_mm)[A + (A^2 + … + A^f) ∘ (1 − I_mm)]

where ∘ denotes the element-wise product and 1 the m × m all-ones matrix, so that the diagonal of each higher-order power is zeroed.
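The higher-order correction can be sketched by accumulating matrix powers and zeroing their diagonals before adding them to A. The chain below (invented values) follows the reconstruction of the equation above:

```python
import numpy as np

def domino_up_to_order(A, f):
    """Total domino matrix A + (A^2 + ... + A^f), with the diagonal of each
    higher-order power zeroed, since no cause 'dominoes' onto itself."""
    total = A.copy()
    power = A.copy()
    for _ in range(2, f + 1):
        power = power @ A                         # raw q-th power of A
        total += power - np.diag(np.diag(power))  # add it, diagonal zeroed
    return total

# Chain of causes: 0 -> 1 with 0.5 and 1 -> 2 with 0.4; the second-order
# term adds the indirect path 0 -> 2 with probability 0.5 * 0.4
A = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0]])
A_tot = domino_up_to_order(A, f=2)
```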

A case study
To test the proposed methodology, a case study taken from the literature (Dethoor and Groboillot, 1968) and dealing with the criticality analysis of the control system of a digger's bucket is presented, in order to show and highlight the results deriving from the adoption of this new technique and to compare them with those coming from a standard FMEA.
The case study is organized as follows:
. first, the causes of failure are evaluated by using the HoR;
. subsequently, alternative technical solutions are provided and evaluated by deploying the first HoR in a Reliability Function Deployment chart; and
. finally, a cost analysis is executed.

Building the house of reliability

A functional analysis of the system was initially performed. The arm of a scraper can be divided into five main groups (Table II): the bucket, its hydraulic guide, the hydraulic jack, the flexible pipes of the jack, and the control valve for the bucket. For each group, the main functions and the causes of failure have been identified. Severity has been split into seven parameters, whose applicability has to be checked. In the case of the scraper arm, for example, the parameter "quality" does not make sense, because a failure in the arm functionalities cannot cause a decrease in the quality of the dug hole. On the contrary, the other six parameters of the Severity split are evaluated by using a linguistic conversion table (Appendix 1), on a scale ranging from 1 (low) to 9 (high).
Afterwards, the analysis of the possible "domino effects" was carried out (Figure 4). The only relationships highlighted concerned the effects of the break of the valve spring and of the obstruction of the valve vent duct, both of which can cause oil leakage from the joints. The values 0.7 and 0.4 represent the probability judgments that a fault in one of these two parts of the valve negatively affects the joint seal (Appendix 1).
In Figure 5, the complete HoR of the arm is presented. The CR number and the related ranking have been calculated both considering and omitting the "domino effects". The differing results are highlighted in the grayed cells.
The presence of a cause-effect relationship between two causes of failure and "Oil leakage from the joints" increases the risk that they happen, because they influence the entire product reliability both directly and indirectly. In fact, their CR number depends on their direct impact on the severity, but it also inherits a share of the risk due to their propagation through the entire product.

Table II. BOM of the scraper arm, functions of each group and main causes of failure

Group                          Function                  Failure
Hydraulic guide of the bucket  Guide the jack of the     Break down of the jack spring;
                               bucket; guide the piston  piston guide sticks; bad conditions
                                                         of the control lever
Hydraulic jack of the bucket   Guide of the bucket       Oil leakage from the joints; oil
                                                         leakage because of deformed shaft;
                                                         break down of fixing bolts
Flexible pipes of the jack     Oil supplying             Fatigue; break down caused by wear
Control valve for the bucket   Limit the oil pressure    Break down of the valve spring;
                                                         obstructed vent duct
Bucket                         Soil excavation           Break down of the axle
Figure 4. The roof representing the first level "domino effects"

Research of technical solutions to the causes of failure

The following step consisted in the search for technical solutions for all the listed causes of failure. The aim of this specific activity is to determine the potential technical changes in the product/process that can lead to an actual improvement of the system reliability.
This first-level HoR deployment works similarly to a standard QFD, where the causes of failure take the place of the customers' needs and their weights are the final scores obtained from the HoR (Figure 6). It can be called Reliability Function Deployment (RFD) and provides a ranking of the potential alternative technical solutions. During the RFD evaluation (Jayant and Yang, 1997), all the numerical operations and normalization methods proposed in the literature and used in standard QFD can be advantageously re-applied.
In Figure 7, a complete deployment of the RFD is reported. As many as 21 modifications for solving or reducing the causes of failure have been proposed. They vary from specific solutions concerning the materials and types of the springs, to a new layout of the oil pipes, to the adoption of sensors and electronic control devices. Three different scores and rankings are reported:

Figure 5.
The House of Reliability
for the scraper arm

Figure 6.
First deployment chart
(Reliability Function
Deployment)

Figure 7.
First HoR deployment
chart (numerical)
(1) The first is calculated using the traditional method (the Independent Scoring Method proposed by Akao, 1990). Unfortunately, as reported by Franceschini (2002), this approach suffers from some problems if correlations between different technical solutions exist.
(2) The problem of sub-specifics of technical solutions can be overcome by Lyman's (1990) normalization. Each normalized coefficient is obtained as the ratio between the un-normalized value and the sum of the values of its row. The result of this normalization is also reported (second score in Figure 7).
(3) It is important to underline that the previous procedure does not take into account the correlations among the technical characteristics of a product. To reduce the level of dependency between design characteristics, Wasserman's (1993) normalization can be suitably used. By considering the role of the correlation matrix (the roof), Wasserman's algorithm evaluates each solution considering the strength of the dependencies (correlations) between design characteristics (Appendix 3, Figure A1).
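For illustration, the first two scoring schemes in the list above can be sketched in a few lines; the relationship matrix R and the failure-cause weights w below are hypothetical, not the values reported in Figure 7.

```python
# Sketch of the first two scoring schemes above; R and w are
# hypothetical, not the values reported in Figure 7.

# R[i][j]: strength of the link between cause of failure i and
# technical solution j (typical QFD values: 0, 1, 3, 9).
R = [
    [9, 3, 0],
    [1, 9, 3],
    [0, 3, 9],
]
w = [5.0, 3.0, 1.0]  # final HoR scores of the three causes of failure

# (1) Independent Scoring Method (Akao): plain weighted column sums.
independent = [sum(w[i] * R[i][j] for i in range(len(R)))
               for j in range(len(R[0]))]

# (2) Lyman normalization: each coefficient is divided by the sum of
# its row before the weighted column sums are taken.
lyman = [sum(w[i] * R[i][j] / sum(R[i]) for i in range(len(R)))
         for j in range(len(R[0]))]

print(independent)  # -> [48.0, 45.0, 18.0]
print(lyman)
```

With these toy numbers the two rankings already differ slightly, which is exactly why the choice of normalization matters.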

Cost-worth analysis
The integration of HoR and cost/worth analysis makes it possible to answer common questions about BIR: where and how to incorporate cost/benefit considerations during product development. Using the results from HoR Phase II, a cost-worth analysis can be performed with the Tanaka method (Tanaka, 1989). The method compares relative component costs to estimated worth. The premise of this technique is that the ratio of cost to worth should be close to unity for an effective allocation of resources. In the Tanaka method, the relative worth of each component is determined by a qualitative evaluation made by the design team. Here, the authors saw an opportunity to use the output from HoR Phase II to eliminate the more subjective evaluation used in the Tanaka method. Indeed, as for the normalization process, the numerous alternative approaches proposed in the literature for QFD can, suitably modified, be profitably used for the cost analysis as well. Each proposed solution has been analyzed from an economic point of view, considering the costs of materials, management, and design. The last two rows in Figure 7 indicate the total cost per machine of each proposed technical change and its percentage of the total cost of all possible new solutions.
Within the “cost/worth analysis” the importance of each proposed solution comes from the normalized Wasserman score, as shown in Figure 7. Following Tanaka's approach, the cost and worth of each solution are represented on a two-dimensional diagram (Figure 8). Each number indicates the corresponding technical action from the set of the 21 analyzed solutions.
A “target zone”, close to the quadrant diagonal, indicates the optimal area into which the solutions should fall. The diagonal represents the ideal locus where the match between cost and worth is perfect, and upper and lower boundaries are placed around it to define the target zone. Solutions outside the boundaries and above the ideal area have a higher cost (with respect to their worth) and are therefore good candidates for cost reduction, mainly because the company is investing in the reliability of some parts more than the users (can) perceive. Conversely, solutions that fall below the target zone have a prominent importance and play a key role for reliability as perceived by the customers. The company can decide to invest in these areas, which are appreciated and recognized by the customers.
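The target-zone screening just described can be sketched as follows; the cost/worth percentages and the band half-width delta are hypothetical choices, not values taken from Figure 8.

```python
# Hedged sketch of the target-zone screening described above; the
# cost/worth percentages and the band half-width `delta` are
# hypothetical choices, not values taken from Figure 8.

def classify(cost_pct, worth_pct, delta=5.0):
    """Screening verdict for one technical solution (both in per cent)."""
    if cost_pct > worth_pct + delta:
        # Above the target zone: costlier than its perceived worth.
        return "cost-reduction candidate"
    if cost_pct < worth_pct - delta:
        # Below the target zone: worth more to customers than it costs.
        return "investment candidate"
    return "target zone"

solutions = {1: (12.0, 4.0), 2: (6.0, 7.5), 3: (2.0, 11.0)}
for sid, (cost, worth) in solutions.items():
    print(sid, classify(cost, worth))
```

Widening or narrowing delta directly trades off how many solutions the team is asked to re-examine.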
Figure 8. Cost analysis

It is clear from the above analyses that the tool provides a guideline for interventions concerning product reliability. The intervention strategy will be, in general and within a predetermined budget, a mix of actions, which could even reduce the reliability of some components of the system if this decreased reliability does not impact the customers' perceived quality/reliability. It goes without saying that all minimum safety requirements must always be satisfied.
No FMEA analysis or other derived method is able to perform cost/worth analyses such as those described above.

Conclusions
The paper introduces a new way to manage build-in reliability (BIR) aspects in product
development. The authors reviewed current approaches and frameworks to support
reliability design during product development, surveyed current research which uses
QFD and FMECA as BIR design tools, and synthesized these concepts to develop a
new methodology, named “House of Reliability” (HoR).
HoR does not merely represent a different way to interface the QFD and FMECA tools: although conceptually derived from the “House of Quality” approach, it is a new methodology able to manage BIR analysis. HoR proves to be an operative tool that provides a customer-focused foundation for BIR, thanks to a prioritisation of the design actions concerning final product reliability that is effectively derived from the customer needs.
In addition, the paper shows how HoR becomes even more powerful when it is carefully combined with other product development methodologies, such as Cost/Worth Analysis. Using Cost/Worth Analysis, the team of engineers can keep cost metrics and requirements separate from the HoR. By determining the cost-worth ratio of the various parts, the group is able to identify, in reliability terms, where reductions or enhancements could be made to the final product.
More work remains to make the methodology fully robust and complete. For example, is there a more appropriate scale for the target objectives, and how can the effect of a particular design choice be assessed? Mathematical tools such as optimization theory, fuzzy set theory, and utility theory could be applied during the process. Other advanced technologies, such as expert systems or neural networks, could be used in an integrated way to automate the methodology and increase its efficiency in terms of durability and reliability analysis of components. These are areas for continuing research.

References
Ahmed, J.U. (1996), “Modern approaches to product reliability improvement”, International
Journal of Quality and Reliability Management, Vol. 13 No. 3, pp. 27-41.
Akao, Y. (1990), Quality Function Deployment, Productivity Press, Cambridge, MA.
Al-Mashari, M., Zairi, M. and Ginn, D. (2005), “Key enablers for the effective implementation of
QFD: a critical analysis”, Industrial Management and Data Systems, Vol. 105 No. 9,
pp. 1245-60.
Ben-Daya, M. and Raouf, A. (1996), “A revised failure mode and effects analysis model”,
International Journal of Quality and Reliability Management, Vol. 13 No. 1, pp. 43-7.
Bevilacqua, M., Braglia, M. and Gabbrielli, R. (2000), “Monte Carlo simulation approach for a
modified FMECA in a power plant”, Quality and Reliability Engineering International,
Vol. 16, pp. 313-24.
Bowles, J.B. and Pelaez, C.E. (1995), “Fuzzy logic prioritization of failures in a system failure
mode, effects and criticality analysis”, Reliability Engineering and System Safety, Vol. 50,
pp. 203-13.
Braglia, M. (2000), “MAFMA: multi-attribute failure mode analysis”, International Journal of
Quality and Reliability Management, Vol. 17 No. 9, pp. 1017-33.
Dethoor, J.M. and Groboillot, J.L. (1968), La vie des equipements: investissement, renouvellement,
maintenance, Dunod, Paris.
Fantoni, G., Santoni, G., Taviani, C. and Vezzi, F. (2004), “Synergies between engineering solutions in QFD analysis”, Proceedings of the 10th International Conference on Concurrent Enterprising, Escuela Superior de Ingenieros, Seville, June 14-16, pp. 87-94.
Ford Motor Company (1983), Module 7, Customer Focused Engineering, Level 1, QFD Manual,
EQIP Centre, Ford Motor Company Ltd, Boreham Airfield.
Franceschini, F. (2002), Advanced Quality Function Deployment, CRC Press, Boca Raton, FL.
Furakawa, O. (1984), “A methodology for quality goal-seeking and coordination, and a practical
application”, International Journal of Systems Research, Vol. 1, pp. 71-82.
Georgantzas, N.C. and Hessel, M.P. (1995), “The intermediate structure of design for quality”,
International Journal of Quality & Reliability Management, Vol. 12 No. 6, pp. 97-108.
Gilchrist, W. (1993), “Modelling failure modes and effect analysis”, International Journal of
Quality & Reliability Management, Vol. 10 No. 5, pp. 16-23.
Ginn, D.M. and Zairi, M. (2005), “Best practice QFD application: an internal/external benchmarking approach based on Ford Motors’ experience”, International Journal of Quality & Reliability Management, Vol. 22 No. 1, pp. 38-58.
Ginn, D.M., Jones, D.V., Rahnejat, H. and Zairi, M. (1998), “The ‘QFD/FMEA interface’”,
European Journal of Innovation Management, Vol. 1 No. 1, pp. 7-20.
Henshall, E. (1995), “EQIP (Engineering Quality Improvement Programme) at Ford Motor Company”, Proceedings of the 6th European Symposium for Taguchi Methods and QFD, Kenilworth, May, pp. 16-18.
Jayant, T. and Yang, K. (1997), “Reliability function deployment (RFD): a systems approach”, Proceedings of the 9th Symposium on QFD, pp. 55-74.
Kosche, K. (2002), “Small QFD/FMEA approach: using QFD technologies to link customer complaints to failures out of FMEA”, Proceedings of the 8th International Symposium on Quality Function Deployment, Munich, September, pp. 247-54.
Lyman, D. (1990), “Deployment normalisation”, Transactions of the 2nd Symposium on QFD, co-sponsored by ASQC and ASI, Washington, pp. 307-15.
Pelaez, C.E. and Bowles, J.B. (1994), Using Fuzzy Logic for System Criticality Analysis, Proceeding
of the IEEE Annual Reliability and Maintainability Symposium, Anaheim, CA, pp. 449-55.
Tan, C.M. (2003), “Customer-focused build-in reliability: case study”, International Journal of
Quality & Reliability Management, Vol. 20 No. 3, pp. 378-97.
Tan, C.M. and Neo, T.-K. (2002), QFD Implementation in a Discrete Semiconductor Industry,
IEEE Proceedings Annual Reliability and Maintainability Symposium, pp. 484-9.
Tanaka, M. (1989), “Cost planning and control systems in the design phase of a new product”, in
Monden, Y. and Sakurai, M. (Eds), Japanese Management Accounting: A World Class
Approach to Profit, Productivity Press, Boston, MA, pp. 49-71.
Trewn, J. and Yang, K. (1997), Reliability Function Deployment (RFD): a Systems Approach,
Proceedings of 9th Symposium on QFD, pp. 55-74.
US MIL-STD 1629-A (1988), Procedure for Performing a Failure Mode, Effects and Criticality
Analysis, Department of Defense, Washington, DC.
Wasserman, G.S. (1993), “On how to prioritise design requirements during the QFD planning process”, IIE Transactions, Vol. 25 No. 3, pp. 59-65.
Yang, K. and Kapur, K.C. (1997), Customer Driven Reliability: Integration of QFD and Robust
Design, IEEE Proceedings of Annual Reliability and Maintainability Symposium,
pp. 339-45.

Further reading
Teng, S.H. and Ho, S.Y. (1996), “Failure mode and effects analysis: an integrated approach for
product design and process control”, International Journal of Quality & Reliability
Management, Vol. 13 No. 5, pp. 8-26.

Appendix 1
(See Table AI overleaf.)
Table AI. Design FMEA

Suggested severity evaluation
Effect                    | Criteria                                                    | Score
Hazardous without warning | Non-compliance with government regulation without warning   | 10
Hazardous with warning    | Non-compliance with government regulation with warning      | 9
Very high                 | Loss of primary function                                    | 8
High                      | Customers very dissatisfied                                 | 7
Moderate                  | Customers dissatisfied                                      | 6
Low                       | Customers somewhat dissatisfied                             | 5
Very low                  | Defects noticed by most customers (more than 75%)           | 4
Minor                     | Defects noticed by 50% of the customers                     | 3
Very minor                | Defects noticed by discriminating customers (less than 25%) | 2
None                      | No discernible effect                                       | 1

Suggested occurrence evaluation
Prob.                          | Criteria                 | Score
Very high: Persistent failures | >100 per thousand items  | 10
                               | 50 per thousand items    | 9
High: Frequent failures        | 20 per thousand items    | 8
                               | 10 per thousand items    | 7
Moderate: Occasional failures  | 5 per thousand items     | 6
                               | 2 per thousand items     | 5
                               | 1 per thousand items     | 4
Low: Relatively few failures   | 0.5 per thousand items   | 3
                               | 0.1 per thousand items   | 2
Very low: Remote failures      | <0.1 per thousand items  | 1

Suggested non-detection evaluation
Detect.              | Criteria                                             | Score
Absolute uncertainty | Design control cannot detect failures                | 10
Very remote          | Very remote chance to detect failures                | 9
Remote               | Remote chance to detect failures                     | 8
Very low             | Very low chance to detect failures                   | 7
Low                  | Low chance to detect failures                        | 6
Moderate             | Moderate chance to detect failures                   | 5
Moderately high      | Good chance to detect failures                       | 4
High                 | High chance to detect failures                       | 3
Very high            | Very high chance to detect failures                  | 2
Almost certain       | Design control will almost certainly detect failures | 1
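In a conventional design FMEA the three Table AI scores combine multiplicatively into the Risk Priority Number (RPN = Severity × Occurrence × Non-detection); the HoR replaces this plain product with its correlation-corrected CR number. A minimal sketch of the conventional computation, with invented failure modes and scores:

```python
# Minimal sketch: combining the three scores of Table AI into the
# conventional Risk Priority Number, RPN = S * O * D. The failure
# modes and their scores are invented for illustration only.

failure_modes = {
    "Oil leakage from the joints": (7, 6, 4),    # (S, O, D)
    "Breakdown of the valve spring": (8, 3, 5),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}

# Rank failure modes by decreasing risk.
for name in sorted(rpn, key=rpn.get, reverse=True):
    print(name, rpn[name])
```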
Appendix 2

Table AII. Standard linguistic conversion table

Probability of “domino effects” | Linguistic description                                      | Ranking
Very high                       | A cause of failure is the direct cause of another one       | 1.0
                                | A cause negatively affects another one                      | 0.9
High                            | A cause is strongly linked with another one                 | 0.8
                                | A cause is linked with another one                          | 0.7
Moderate                        | A cause is moderately related with another one              | 0.6
                                | Sometimes a cause produces the occurrence of another one    | 0.5
Low                             | A low link can be seen between a cause and another one      | 0.4
                                | A very low link can be seen between a cause and another one | 0.3
Remote                          | A cause infrequently leads to another one                   | 0.2
                                | A cause only rarely affects another one                     | 0.1

Appendix 3
The solution proposed to solve the problem of dependencies is Wasserman's normalization (Wasserman, 1993), which helps the analysts to evaluate each solution considering the strength and the direction (positive or negative) of each correlation. The classical form of the normalization is reported in the following equation:

$$ r^{\mathrm{norm}}_{i,j} = \frac{\sum_{k=1}^{m} r_{i,k}\,\gamma_{k,j}}{\sum_{j=1}^{m}\sum_{k=1}^{m} r_{i,k}\,\gamma_{k,j}} $$

where:
r_{i,j} = coefficient of the relationship matrix;
γ_{k,j} = intensity of the correlation between characteristics k and j (ranging from 0 to 1);
γ_{k,j} = 1 if k = j.
This formula can also be used in a modified form, which is easier to understand and to implement in software applications:

$$ r^{\mathrm{norm}}_{i,j} = \frac{\left(r_{i,1},\ldots,r_{i,m}\right) \cdot \left(\gamma_{j,1},\ldots,\gamma_{j,m}\right)^{T}}{\left(r_{i,1},\ldots,r_{i,m}\right) \cdot \left(\sum_{j=1}^{m}\gamma_{j,1},\ldots,\sum_{j=1}^{m}\gamma_{j,m}\right)^{T}} $$
Figure A1. The Wasserman normalization process

Briefly, this represents the ratio between the scalar product of the i th row of the QFD relationship matrix (in Figure A1) and the j th column of the symmetric roof, and the scalar product between the same row and the (m + 1)th column (i.e. the column of the sums).
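Under the assumption of a symmetric roof matrix, the normalization above can be implemented directly; the relationship matrix R and the correlation (roof) matrix G below are hypothetical.

```python
# Sketch of Wasserman's normalization under the assumption of a
# symmetric roof matrix; R and G below are hypothetical.

def wasserman(R, G):
    """Normalize R (causes x solutions) by the m x m correlations G."""
    m = len(G)
    # The (m+1)-th "column of the sums" of the roof.
    col_sums = [sum(G[j][k] for j in range(m)) for k in range(m)]
    out = []
    for row in R:
        denom = sum(row[k] * col_sums[k] for k in range(m))
        out.append([sum(row[k] * G[k][j] for k in range(m)) / denom
                    for j in range(m)])
    return out

R = [[9, 3, 0],
     [1, 9, 3]]
G = [[1.0, 0.5, 0.0],   # gamma[j][k] = 1 when j == k
     [0.5, 1.0, 0.3],
     [0.0, 0.3, 1.0]]

Rn = wasserman(R, G)
# By construction, every row of the normalized matrix sums to one.
print([round(sum(r), 6) for r in Rn])  # -> [1.0, 1.0]
```

The row sums equal one precisely because the denominator is the numerator summed over j, which is the property that makes the normalized scores directly comparable across solutions.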

Corresponding author
M. Braglia can be contacted at: m.braglia@ing.unipi.it

To purchase reprints of this article please e-mail: [email protected]


Or visit our web site for further details: www.emeraldinsight.com/reprints
