Application of systems thinking to risk management: a review of the literature
Diana White
Management Decision, Vol. 33 No. 10, 1995, pp. 35-45, MCB University Press, 0025-1747


Introduction

Advances in technology in recent years have been accompanied by an increasing number of hazards and
failures. Risk has therefore become an issue of growing
concern[1]. This article reviews the literature on risk from
a systems perspective.
Systems thinking is characterized by its holistic
approach to problem solving while analytical methods
used in risk assessment can be viewed mainly as
reductionist.
Although there is little agreement over a definition of risk, the notion of probability is central to all of the risk assessment techniques identified in the literature; the interpretation of probability, however, depends on whether it is viewed objectively or subjectively.
First, the most cited methods for managing risk, together with other related theories, will be critically examined. The concepts used will then be classified according to their systemic properties. Finally, a systems approach to failure[2] will be briefly described, and it is suggested that the use of this method could provide the risk manager with a valuable tool.
The analytical method
The concept of analysis is usually understood to be a
three-stage process, namely:
(1) taking apart the thing to be understood;
(2) trying to understand the behaviour of the parts
taken separately; and
(3) trying to assemble this understanding into an
understanding of the whole[3].
This mechanistic and reductionist approach to problem
solving underpins most risk assessment methods.
However, as history has shown, failures continue to occur, arising mostly from increasingly complex, ill-structured socio-technical systems.
Systems thinking
In recent years systems thinking has developed as an alternative to mechanistic thinking[4]. This approach is based on four ideas that are seen as characteristics of systems[5]. They are:
(1) emergence;
(2) hierarchy;
(3) communication; and
(4) control.
Checkland defines a system as:
"a set of elements connected together which form a whole, this showing properties which are properties of the whole, rather than properties of its component parts"[5, p. 3].
This is a holistic approach, which tackles problems by examining the context of the system in which they occur[6], and it is particularly relevant to tackling ill-structured, messy problems.
The process of risk assessment
The concept of probability is fundamental to the
assessment of risk, and the use of statistical databases
has become a prerequisite.
A review of the literature suggests that the process of risk assessment is usually broken down into three stages:
(1) risk identification;
(2) risk estimation; and
(3) risk evaluation, as shown in Figure 1.
Figure 1. The process of risk assessment: risk identification (perceiving hazards, identifying failures, recognizing adverse consequences), risk estimation (estimating risk probabilities, describing the risk, quantifying the risk) and risk evaluation (estimating the significance of the risk, judging the acceptability of the risk, comparing risks against benefits) together make up risk assessment.
Numerous techniques and concepts used in risk
assessment have been identified in the literature. There
follows a critical discussion on the most cited methods.
Techniques and concepts used in risk assessment
Failure mode and effects analysis
Failure mode and effects analysis (FMEA) is essentially a
systematic brainstorming session aimed at finding out
what could go wrong with a system or process[7]. To be
effective it requires expertise and sound prior knowledge
of the system under analysis.
The essence of the FMEA technique is to list for each
system component, all possible modes of failure and the
potential effect of each failure on the rest of the system.
Results are usually presented in the form of a table. The most common headings used[8] are:
• Component identification.
• Function.
• Failure mode and cause.
• Failure mode frequency.
• Failure mode effects.
• Detection.
• Corrective measures.
• Severity.
The likelihood of each failure mode can then be classified in terms of a probabilistic value by the use of four severity classes and five likelihood levels[8], as outlined below (a minimal worked sketch follows the list).
(1) Severity classes:
• catastrophic;
• critical;
• marginal; and
• negligible.
(2) Likelihood levels:
• frequent;
• reasonably probable;
• occasional;
• remote; and
• extremely unlikely.
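A minimal sketch of such a worksheet is shown below. The components, failure modes and other entries are invented for illustration, and the severity and likelihood scales are simply mapped onto index values to produce a crude ranking; they are not taken from reference [8].

```python
# FMEA worksheet sketch: invented pump example, not taken from the article.
SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]          # four classes
LIKELIHOOD = ["extremely unlikely", "remote", "occasional",
              "reasonably probable", "frequent"]                            # five levels

def risk_rank(severity: str, likelihood: str) -> int:
    """Combine the two ordinal scales into a single ranking number."""
    return (SEVERITY.index(severity) + 1) * (LIKELIHOOD.index(likelihood) + 1)

worksheet = [
    {"component": "Feed pump", "function": "maintain flow",
     "failure_mode": "seal leak (wear)", "likelihood": "occasional",
     "effect": "loss of containment", "detection": "visual inspection",
     "corrective": "scheduled seal replacement", "severity": "critical"},
    {"component": "Level sensor", "function": "report tank level",
     "failure_mode": "stuck reading (corrosion)", "likelihood": "remote",
     "effect": "overfill", "detection": "cross-check with flow meter",
     "corrective": "duplicate sensor", "severity": "marginal"},
]

# List each component's failure modes and rank them, highest concern first.
for row in sorted(worksheet,
                  key=lambda r: risk_rank(r["severity"], r["likelihood"]),
                  reverse=True):
    print(f'{row["component"]:<12} {row["failure_mode"]:<25} '
          f'rank={risk_rank(row["severity"], row["likelihood"])}')
```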
The concept behind an FMEA is to render the system
capable of being analysed by breaking it down into its
component parts. However, most systems are so complex
that analysts resort to limiting the number of failure
modes for a component or restricting the types of risks to
be considered. This implies that not all risks a system
might face will be discovered[9].
The analysis of each component is usually carried out in
isolation, since if combinations were considered the task
would become intractable. This means that specific
combinations of events or component failures which may
cause a system failure could be overlooked[10].
An FMEA relies on the correctness of the technology or
science on which the model is built. Only perceived
possible risks can be guarded against and operator errors
are seldom considered.
Important qualitative insights can be gained by using
this technique. However an FMEA is a reductionist
procedure which fails to identify interactive combinations
of equipment failures or common cause failures.
Fault tree analysis and event tree analysis
Fault tree analysis (FTA) and event tree analysis (ETA) are systematic methods for encouraging analysts to consider how a particular condition could arise or what may ensue from a particular event, hence allowing causes or outcomes of undesired events to be identified[7,11]. However, for these methods to be effective, analysts need to have a thorough understanding of the system under consideration, the cause-effect process[12] and all possible failure modes[13].
This graphical technique uses logic diagrams for identifying causes (FTA) or consequences (ETA) of potential failures[10]. Event trees move forward from an initiating event to show accident sequences (see Figure 2). Fault trees start from a postulated failure and move backwards to show all possible causes (see Figure 3). Although designed to give qualitative insights, sequences can be quantified to allow probabilities of events (or chains of events) to be obtained[14].
Figure 2. Basic form of event tree: branches lead from an initiating event through success and failure states, and the probability of each sequence is the product of its branch probabilities (for example Pa x Pb x Pc).
Figure 3. Basic form of fault tree: a postulated accident ABC, with P = Pa x Pb x Pc, is traced back to the failure events a, b and c that could combine to cause it.
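Because each branch is treated as independent, the probability of an accident sequence is simply the product of its branch probabilities. A minimal sketch with invented numbers:

```python
# Event-tree sketch: invented probabilities, assuming independent branches
# (the very assumption criticized below, since common-cause failures are missed).
p_initiating = 1e-3   # frequency of the initiating event (per year, say)
p_fail_b     = 0.05   # probability that safeguard B fails on demand
p_fail_c     = 0.10   # probability that safeguard C fails on demand

sequences = {
    "initiating event, B succeeds":          p_initiating * (1 - p_fail_b),
    "initiating event, B fails, C succeeds": p_initiating * p_fail_b * (1 - p_fail_c),
    "initiating event, B fails, C fails":    p_initiating * p_fail_b * p_fail_c,  # accident
}

for name, p in sequences.items():
    print(f"{name}: {p:.2e}")
```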
Failures very seldom have a single cause. Often chains of
events combine to produce a disaster. However, the
approach assumes that each branch of a tree involves
mutually exclusive events which are independent of one
another. The method therefore fails to identify common
mode or common cause failures[10,15]. The technique is
also unsuitable for handling partial failures and time delays[10,12].
Fault and event trees by their nature include a large
component of judgement. Analysts using the technique
must determine the structure of the problem and the
importance of the various branches. In a study in which
participants were asked to judge the completeness
of a tree, respondents failed to detect pruned
branches[16,17].
Fault and event trees are simplified models of systems
which can yield valuable information on process logic.
Because the models are once again reductionist, emergent
properties arising from the whole system would not be
recognized.
Hazard and operability study
A hazard and operability study (HAZOP) is a
brainstorming technique using guide words like none,
more of, less of, other to identify risks. The method
was developed to discover hazards and operability
problems during the design or redesign of systems[18]. It is
mainly used in the oil and chemical industries, where the
technique is applied to line diagrams, line-by-line[19].
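The guide-word idea can be sketched as little more than a cross-product of guide words and process parameters. The parameters below are invented; a real study works through a line diagram with a multidisciplinary team rather than a script.

```python
# HAZOP prompt-generator sketch: guide words from the article, parameters invented.
guide_words = ["none", "more of", "less of", "other"]
parameters = ["flow", "pressure", "temperature", "level"]   # one section of a line diagram

for parameter in parameters:
    for guide_word in guide_words:
        # Each deviation is a prompt for the study team to discuss.
        print(f"{parameter} - {guide_word}: what could cause this deviation, "
              f"and what would the consequences be?")
```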
HAZOP assumes analysts have a complete and detailed
knowledge of the system and procedures; it is therefore
not suitable for bringing fresh minds to look at a
design[18]. Its most serious shortcoming is its failure to
anticipate human errors[19]. Because components are
considered one by one, interdependence between
elements may be missed, and whole system properties
would fail to be detected.
Cost-benefit analysis and risk-benefit analysis
One of the best developed techniques for risk assessment
is cost-benefit analysis (CBA)[20]. This technique is used
for estimating and quantifying risk. The basic premiss is
that alternatives can be selected by systematically
comparing the advantages and disadvantages which
would result from the estimated consequences of a choice.
CBA therefore presumes that there are alternative options
available[21].
The technique involves compiling a balance sheet of expected benefits against expected costs or adverse consequences. Each benefit and consequence is given a monetary value (taking the probability of each condition into account).
The efficiency criterion underlying CBA is a potential Pareto improvement: "a decision is an improvement if those who are better off could potentially share some of their gains with those who are worse off in such a way that everyone would be better off. It does not however require that such compensation actually be made"[22, p. 189]. The basic decision criterion used is net present value.
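A minimal sketch of the net-present-value criterion applied to two invented options is shown below; the cash flows, probabilities and discount rate are all assumptions made for illustration.

```python
# Cost-benefit sketch: probability-weighted net present value of two invented options.
def npv(cash_flows, rate):
    """Discount a list of (year, expected value) pairs back to the present."""
    return sum(value / (1 + rate) ** year for year, value in cash_flows)

RATE = 0.07  # assumed discount rate

options = {
    # Each entry: (year, probability-weighted benefit minus cost), in money units.
    "install extra safeguard": [(0, -500_000),
                                (1, 0.9 * 250_000), (2, 0.9 * 250_000), (3, 0.9 * 250_000)],
    "do nothing":              [(0, 0),
                                (1, -0.02 * 2_000_000), (2, -0.02 * 2_000_000), (3, -0.02 * 2_000_000)],
}

for name, flows in options.items():
    print(f"{name}: NPV = {npv(flows, RATE):,.0f}")

# The potential Pareto criterion then prefers the option with the higher NPV,
# without requiring that losers actually be compensated.
```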
One of the main difficulties with CBA is assigning a monetary value to life. Several procedures are cited, ranging from the estimated present value of an individual's projected earnings (now recognized as inappropriate), through deriving the value of a life from compensation damages awarded by juries, to valuation procedures based on an individual's willingness to pay for some statistical reduction in the risk of death[22].
The concept underpinning risk-benefit analysis (RBA) is that the expected benefits from a proposed technology or activity are balanced against the expected risks[23]. As risks cannot readily be reduced to monetary terms, the analysis usually takes the form of a comparison of benefits with injury or fatality rates. The method assumes that individuals are prepared to undertake increasing risks (up to a certain level) provided the benefits increase proportionately[24]. To be effective, voluntary and involuntary categories of risk must be kept apart when using this procedure[25].
There are many limitations to CBA and RBA, including:
• adverse conditions are assumed in advance[23];
• data on probability of occurrence is required;
• decisions are often based on value judgements;
• prejudiced analysts may push their calculations in the direction of a predetermined outcome[26];
• implications of human error are not included[27];
• uncertainties are given little explicit attention;
• the contexts in which the risks are placed are not considered[28].
Although CBA and RBA are useful tools for quantifying risk, they cannot be described as holistic techniques because, amongst other things, they fail to consider context, uncertainty and the implications of human error.
Human reliability analysis
Human reliability analysis (HRA) is a procedure for identifying events where human interaction could contribute to a potential failure. These events can then be quantified with a ranking based on risk. A large number of techniques are cited in the literature; early methods include THERP (technique for human error rate prediction), Data Store (a data bank of performance data on the use of common controls and displays) and Siegel's simulation (a computer model of tasks)[29].
More recent approaches can be divided into: probability models (TESEO: tecnica empirica stima errori operatori); techniques based on time-reliability relationships (OATS: operator action trees; HCR: human cognitive reliability); techniques based on human judgements (Confusion Matrix; SLIM: success likelihood index methodology; SCHEMA: system for critical human error management and assessment; APJ: absolute probability judgement); and influence diagramming techniques (STAHR: socio-technical assessments of human reliability)[30].
THERP is probably the best known and most widely used technique[30]. It has been modified and updated over the past decade. The technique involves the decomposition of tasks into elements and the use of binary logic diagrams in which the limbs represent decision points (correct or incorrect performance), the probability of each outcome being combined with performance shaping factors (type of equipment used, stress, proficiency, etc.). Probability values are contained in tables and databases in which tasks are decomposed into their component parts. The steps in a THERP analysis are shown diagrammatically in Figure 4[30].
Figure 4. Steps in a THERP analysis: identify the system functions influenced by human error; list and analyse the related human functions; estimate the relevant error probabilities; and estimate the effect of human error on the system.
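The arithmetic behind this decomposition can be sketched as follows. The task elements, nominal error probabilities and performance shaping factors are invented, not taken from THERP's tables, and the sketch ignores the inter-task dependencies that THERP itself attempts to model.

```python
# THERP-style sketch: a task decomposed into elements, each with a nominal human
# error probability (HEP) modified by a performance shaping factor (PSF).
# All numbers are invented for illustration.
task_elements = [
    {"step": "read level gauge",     "nominal_hep": 0.003, "psf": 2.0},   # PSF > 1: stress, poor display
    {"step": "select correct valve", "nominal_hep": 0.001, "psf": 1.0},
    {"step": "close valve fully",    "nominal_hep": 0.002, "psf": 0.5},   # PSF < 1: well practised
]

p_success = 1.0
for element in task_elements:
    hep = min(1.0, element["nominal_hep"] * element["psf"])  # modified error probability
    p_success *= (1 - hep)                                   # treats elements as independent
    print(f'{element["step"]}: HEP = {hep:.4f}')

print(f"Probability the whole task is performed without error: {p_success:.4f}")
```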
THERP makes the basic assumption that human functions can be considered in the same way as other components, that is, functions are decomposed into task elements which can be handled separately. Further assumptions are that the performance of one task has an effect on the performance of concurrent or subsequent tasks[29] and that there are adequate databases of tasks and error probabilities[10].
THERP suffers from the following limitations:
• it is difficult to represent the variability of human behaviour adequately;
• the technique assumes each task segment can be handled separately;
• it is difficult to combine human and equipment reliability values;
• it is difficult to identify inter-task dependencies;
• the technique is not appropriate for continuous tasks;
• the method does not determine the motivation of the individual;
• analysts have a tendency to model only errors that appear in databases[30].
Any decompositional approach to analysing human error
(of which THERP is a well-established example) is based
on reductionist principles and would therefore fail to
detect failures of a systemic nature.
Risk perception techniques
During the evaluation stage of risk assessment, the acceptability of the risk must be considered. Several techniques for measuring people's attitudes to risk have evolved:
• The "revealed preferences" technique uses historical data on past behaviours in relation to risk to establish future choices. This approach therefore assumes that, through trial and error, society has arrived at a near optimal and acceptable balance between risk and benefits associated with any activity[23]. The approach fails to consider how time might change what society finds acceptable.
• The "expressed preferences" approach evaluates perceived risks through direct measurement. The amount of agreement or consensus that prevails over an issue is assessed by the use of surveys[23]. This technique assumes that respondents will always act in the way they say they will on questionnaires when actually faced with a hazardous situation.
• The "implied preferences" method is a compromise between the above two techniques in that it considers what people want based on both past data and what current economic arrangements will allow them to have[31].
• The "psychometric studies" technique uses psychophysical scaling and multivariate analysis techniques to produce quantitative representations or "cognitive maps" of risk attitudes and perceptions[32, p. 4].
• The Delphi method is a procedure for aggregating expert opinions. The technique is characterized by the use of questionnaires which are completed independently by each member of a panel of experts. These are then circulated anonymously between the panel along with rationales and reasons for each opinion. The process is repeated several times and usually produces some convergence of opinion[33].
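The convergence of opinion can be caricatured in a few lines of code. The panel estimates are invented, and the fixed move-towards-the-median rule merely stands in for the reasoned revision of opinions that a real Delphi exercise relies on.

```python
# Delphi-style aggregation sketch: invented panel estimates of an event probability.
# Each round, panellists see the group median and move part-way towards it.
estimates = [0.02, 0.10, 0.05, 0.30, 0.08]   # round-1 estimates from five experts

def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    return ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2

for round_number in range(1, 4):
    group_view = median(estimates)
    print(f"Round {round_number}: median estimate = {group_view:.3f}")
    # Simplistic convergence rule standing in for each expert's considered revision.
    estimates = [e + 0.5 * (group_view - e) for e in estimates]
```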
Risk perception can be viewed from two perspectives:
(1) The expected utility view assumes that humans
take decisions to maximize their profit or other
preferred advantage and that all alternatives and
consequences are known.
(2) The bounded rationality view recognizes that it
is not possible for humans to know or reliably
assess all alternative options when faced with a
complex situation, therefore decisions are made
using the satisficing method.
Risk means different things to different people because
each person holds a unique view of the world. All
individual perceptions of risk can be regarded as equally
valid[34].
Sensitivity analysis
Sensitivity analysis (SA) is used to indicate the robustness of the overall results of an analysis. This procedure is usually based on a spreadsheet model and involves a series of "what if?" questions, giving a percentage change to each key parameter one at a time so that significant and insignificant variables can be identified[35].
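A one-at-a-time sensitivity sketch over an invented spreadsheet-style model might look as follows; the model and its parameter values are assumptions made purely for illustration.

```python
# One-at-a-time sensitivity sketch: vary each key parameter by +/-10 per cent
# and see how the output of an invented spreadsheet-style model responds.
def model(params):
    # Invented model: expected annual loss = frequency * consequence * (1 - mitigation).
    return params["frequency"] * params["consequence"] * (1 - params["mitigation"])

base = {"frequency": 0.05, "consequence": 1_000_000, "mitigation": 0.6}
base_result = model(base)

for name in base:
    for change in (-0.10, +0.10):
        perturbed = dict(base)
        perturbed[name] = base[name] * (1 + change)
        delta = (model(perturbed) - base_result) / base_result
        print(f"{name} {change:+.0%}: output changes by {delta:+.1%}")
```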
The technique is constrained by time, effort and imagination, which leads to systematic over-simplification of problems and the exclusion of many considerations. The method cannot do justice to the profound moral questions posed by many decisions[36].
Hertz-type simulation
This technique uses a computer simulation to model a situation in which a number of key variables could vary simultaneously. The model generates a probability distribution based on the various combinations[37].
Monte Carlo simulation
The Monte Carlo technique can be used to reduce
probability variance. Probability distributions of relevant
input variables are fed into a computer and a simulation
program selects values for each variable on the basis of
their relative likelihood of occurrence[33].
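Both the Hertz-type and Monte Carlo approaches can be sketched with standard library sampling alone; the input distributions below are invented.

```python
import random
import statistics

# Monte Carlo sketch: sample the key variables from invented distributions,
# combine them, and summarize the resulting distribution of annual loss.
random.seed(1)

def one_trial():
    frequency = random.lognormvariate(mu=-3.0, sigma=0.5)                        # events per year
    consequence = random.triangular(low=100_000, high=2_000_000, mode=400_000)   # loss per event
    return frequency * consequence

losses = [one_trial() for _ in range(10_000)]
print(f"mean annual loss: {statistics.mean(losses):,.0f}")
print(f"95th percentile:  {sorted(losses)[int(0.95 * len(losses))]:,.0f}")
```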
Expert systems
Expert systems are computer models which use artificial
intelligence to emulate the reasoning process of an expert.
The knowledge base includes data and rules which must
always be kept up to date[18]. The computer can be
interrogated (using a standard dialogue system) much as
one would approach a human expert[8].
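The rule-plus-facts structure of such a system can be caricatured as follows; the rules and facts are invented and far simpler than any real knowledge base.

```python
# Rule-based sketch of an expert system: invented rules, forward chaining over known facts.
rules = [
    ({"flammable material", "ignition source"}, "fire hazard"),
    ({"fire hazard", "occupied building"},      "evacuation plan required"),
    ({"pressurized vessel", "corrosion"},       "rupture hazard"),
]

facts = {"flammable material", "ignition source", "occupied building"}

changed = True
while changed:                      # keep applying rules until nothing new is inferred
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Inferred:", sorted(facts))
```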
However, expert opinions suffer from the following weaknesses:
• experts may differ in their judgement of safety[25];
• experts could be influenced by political power[38];
• empirical data suggests that experts and lay people have quite different perceptions about the riskiness of various technologies[16];
• expert assessments are susceptible to bias[16];
• it is difficult to transplant a deduction from one context to another.
Expert systems usually present information in a
simplified, unidimensional format. This often hides the
complexity of the subject-matter under consideration[39].
Other decision analysis techniques
Numerous computer models and databases are available which provide probabilistic data and other information on which to base decisions. DENZ and CRUNCH are computer models which can be used to predict the behaviour of dense gases from instantaneous (DENZ) or continuous (CRUNCH) releases. GASP models the spread of a pool of refrigerated liquid[7]. Data on events such as fires, explosions and toxic gas releases can be extracted from MHIDAS (Major Hazards Incident Database)[7,18]. The Dow Index and the Mond Fire, Explosion and Toxicity Index can be used to give a quantitative indication of the relative potential of a hazardous incident associated with a given process[7,40].
Databases are useful for stimulating thought and enquiry but do not produce a broad-ranging or deep enough consideration of what could go wrong[7], and quantitative models are unable to capture holistic or qualitative elements of a problem[22].
Risk assessment and use of techniques
Most of the techniques identified are used across the risk assessment process, as shown in Figure 5.
Figure 5. Map of risk assessment techniques: FMEA, FTA/ETA, HAZOP, HRA, CBA/RBA, sensitivity analysis, Hertz-type simulation, Monte Carlo simulation, expert systems, databases and risk perception techniques are mapped against the stages of risk identification, risk estimation and risk evaluation.
Probability and perspectives
The concept of probability can be viewed from two perspectives. The objective "classical" approach sees risk as a measurable property of the physical world; a risk assessment carried out by an analyst who adopts the objective perspective will therefore use methods based on the classical theory of probability and statistics[33]. This view assumes probabilities to be real properties of real physical systems and requires the values of variables to be drawn solely from available data.
On the other hand, the subjective "Bayesian" perspective sees risk as a product of perceptions. A risk assessment carried out from this perspective will adopt the view of probability often associated with the eighteenth-century mathematician Thomas Bayes, that is, probability is "a number expressing a state of knowledge or degree of belief that depends on the information, experience and theories of the individual who assigns it"[33, p. 209]. This approach therefore requires information (about likely parameter values) that is unrelated to the available data.
Results of an assessment using the classical view depend on a good fit between basic mathematical concepts and the real world. This means that practical situations are often tailored to fit the mathematical models. Results obtained using a subjective perspective will vary depending on the state of information and the analyst's assumptions; different analysts often arrive at different answers to the same problem[12].
Therefore the outcome of a risk assessment is dependent on the approach used and the analyst's world view.
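The practical difference between the two perspectives can be illustrated with a toy failure-probability example; the observed data and the Beta prior below are invented.

```python
# Classical versus Bayesian sketch for a component failure probability.
# Invented data: 2 failures observed in 100 demands.
failures, demands = 2, 100

# Classical (frequentist) view: probability is a property of the physical world,
# estimated solely from the available data.
classical_estimate = failures / demands

# Subjective (Bayesian) view: probability expresses a degree of belief, so a prior
# reflecting the analyst's information is combined with the same data.
# Beta(1, 19) prior: the analyst believes in roughly a 5 per cent failure probability.
prior_alpha, prior_beta = 1, 19
posterior_mean = (prior_alpha + failures) / (prior_alpha + prior_beta + demands)

print(f"classical estimate:      {classical_estimate:.3f}")
print(f"Bayesian posterior mean: {posterior_mean:.3f}")
# A different prior (a different analyst) would give a different answer to the same problem.
```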
Related concepts and theories
Common mode and common cause failures
The literature suggests there is no entirely acceptable
method for dealing with failures caused by outside
influences, component weaknesses or a cascade effect.
A procedure for examining physical, environmental and
procedural interconnection between systems and their
components has been suggested by Bell[10]. However, the
method cannot guarantee anticipating every interaction
or combination of primary events.
Root cause analysis
This is an investigative procedure which takes a total system approach to the causes of accidents. The method involves working back from an accident to consider all possible causes by using the key word "why?".
Root cause analysis recognizes that accidents are defects in the total system and that people are only part of the system, and it assumes multiple causes for accidents[41].
Cultural theory
The cultural theory of risk is based on the pioneering
work of Douglas and Wildavsky[42] who advanced the
notion that the selection of risk was value laden and
culturally constructed and reflected moral, political,
economic and power positions[23]. This hypothesis
suggests that risk is socially constructed, that is
individuals choose between risk-avoiding or risk-
accepting strategies guided by their culture and social
context[43].
Further work on the theory by Adams[44] links risk-
taking decisions with four distinctive world views:
(1) the fatalist;
(2) the hierarchist;
(3) the individualist; and
(4) the egalitarian.
These views act as filters through which the world is
perceived and through which risk-assessment decisions
are made[43]. Therefore the successful management of
risk depends on the cultural and social context in which
the risk is placed[45]. This is a holistic view of risk that
recognizes that every person has their own individual
view of the world, and this view will influence how they
behave.
Risk homeostasis theory
The theory of risk homeostasis was proposed by
Wilde[46-50]. The theory suggests that accidents are a
result of behaviour which attempts to balance an
accepted target level of risk against perceived risk, that is,
if the level of subjective risk perceived is higher or lower
than the level of risk desired, an individual will take
action to eliminate this discrepancy[46]. This theory has
caused widespread debate. Many articles refute the
hypothesis[51-55], although experiments have been done
which support the theory[56]. Others believe the theory is
untestable[57-59]. It is possible that the theory has been
rejected because, if it is proved to be correct, many safety
measures would be rendered pointless. However, the risk
homeostasis theory has had the important effect of
bringing to the forefront the fact that behaviour is not
necessarily a constant and that behaviour modification
has important implications for safety[60, p. 1540].
Risk compensation theory
The risk compensation theory has evolved from the risk
homeostasis theory. This theory proposes that when
individuals balance their target level of risk against
perceived risks, partial compensation rather than
complete homeostasis takes place.
Adams has developed a model of the theory. This assumes that individual risk-taking decisions represent "a balancing act in which perceptions of risk are weighed against propensity to take risks"[44, p. 15]. This is shown diagrammatically below in Figure 6.
Figure 6. Risk-taking model (source: [44]): propensity to take risks and perceived danger feed a balancing behaviour, the outcomes of which are rewards and accidents.
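The contrast between complete homeostasis and partial compensation can be illustrated with a small sketch; the adjustment rule and the numbers are invented for illustration and are not part of Adams' model.

```python
# Risk compensation sketch: behaviour adjusts when perceived risk departs from the
# target level of risk. gain = 1.0 gives complete homeostasis, gain < 1 partial compensation.
def adjust(perceived_risk, target_risk, gain):
    """Return the perceived risk after one behavioural adjustment."""
    return perceived_risk - gain * (perceived_risk - target_risk)

target = 0.4          # accepted target level of risk (arbitrary units)
perceived = 0.1       # a safety measure has lowered perceived risk

for gain, label in [(1.0, "homeostasis"), (0.5, "partial compensation")]:
    print(f"{label}: perceived risk moves from {perceived} to {adjust(perceived, target, gain):.2f}")
```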
This theory has many advocates, including Underwood et al.[61], Asch et al.[62], Janssen[63], Adams[44,64-66], Mathew[67], Davis[68] and Hakamies-Blomqvist[69]. Indeed, many researchers now view risk compensation as a normal part of everyday behaviour[70]. A recent report by a scientific group of experts from 16 countries examined the evidence for risk compensation and concluded that controversy now centred on the interpretation of what causes it, and how complete it is, not on whether it occurs[71, p. 55]. However, others suggest there is weak[72] or conflicting evidence for the theory[73-75].
Risk compensation encourages risk to be viewed as an interactive phenomenon: one person's behaviour has consequences for others. It also reinforces the view that people respond to risk from a subjective perspective.
Classification of techniques and concepts identified in the literature
Most of the techniques identified in the literature assess
risk by dividing systems into their component parts and
considering each part separately. This is an important
step in identifying failures as it ensures that the
elements are all functioning correctly. However, it is at
the assembly stage that this reductionist view breaks
down because of its failure to identify emergent
properties arising from interacting elements and
because it does not consider that the behaviour of
systems is due as much to their external environment as
to their internal mechanisms.
By contrast, many of the related theories take a more holistic view. Figure 7 locates the techniques and concepts in a two-axis framework which distinguishes between holistic and reductionist approaches and between frequent and infrequent use.
Figure 7. Classification of concepts relating to risk: the techniques and concepts reviewed (FMEA, FTA/ETA, HAZOP, CBA/RBA, sensitivity analysis, Hertz-type and Monte Carlo simulation, expert systems, other databases/models, common mode/common cause, risk perception techniques, root cause analysis, cultural theory, risk homeostasis theory and risk compensation theory) are positioned along two axes, reductionist to holistic and frequent to infrequent use.
A systems approach to failures
The framework illustrated in Figure 7 suggests the most
frequently used approaches for managing risk are
reductionist in nature. This approach to problem solving
is characterized by a downward movement which reduces
the problem into smaller and smaller parts.
This can be contrasted with some of the less used concepts, which take a more holistic view of risk. This view considers the system as a whole and is characterized by an upward movement which tackles problems by investigating the system's environment. A systems approach simplifies by taking "multiple partial views of reality"[6, p. 98] (rather like a CAT scan); these views are then built up into a complete representation of the whole.
Table I illustrates the difference between the views.
Table I. Differences between holistic and reductionist thinking
• Method: holistic thinking is systemic; reductionist thinking is systematic.
• Issue tackled by: investigating the problem's environment (holistic); reducing the problem into smaller and smaller parts (reductionist).
• Approach characterized by: an upward movement (holistic); a downward movement (reductionist).
• Simplifies by: taking multiple partial views (holistic); breaking the problem down into its simplest parts (reductionist).
The failures method
The failures method was developed as a way of using
systems thinking and systems concepts to study failure.
It is a holistic approach which aims to describe a failure
and explain why it occurred.
Briefly, the method involves representing the failure
situation as a system, comparing this model with a model
of a system that is capable of purposeful activity without
failure (the formal system model[5, p. 175]). Based on the
outcome of this first comparison further models (known as
paradigms) are selected and used to examine appropriate
aspects of the situation. Iteration is an essential component
of the approach[2]. This is shown in Figure 8.
Figure 8. Brief description of the failures method: structure the relevant aspects of the situation into the form of a system; compare this with the formal system model; compare with further paradigms; and interpret the results.
The formal system model lies at the heart of most systems concepts. It is shown diagrammatically in Figure 9.
Figure 9. The formal system paradigm (source: [2, p. 15]): a system, set within a wider system and an environment, comprises a decision-making subsystem, a performance-monitoring subsystem and a set of subsystems and components that carry out transformations. The wider system formulates the initial design, provides resources, legitimates operations and makes known its expectations, while the system reports back performance information; the environment disturbs the system, and the system attempts to influence the environment.
Further paradigms used in the method include[2, p. 76]:
• control;
• communication;
• FMEA;
• fault trees;
• reliability block diagrams;
• common mode failures;
• cascade failures;
• various human factors paradigms.
Use of the approach helps identify some of the most common characteristics[77] of failure, namely:
• organizational deficiencies;
• ineffective control and communication;
• poor reliability;
• disregard for human factors;
• neglected environmental effects.
The approach has been widely used as an aid to understanding failures, and in quality management[77]. Some of the paradigms used in the method mirror techniques used in risk analysis. It is therefore suggested that the failures method could be incorporated into the risk assessment process.
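As a rough illustration of how such a comparison might be organized, the sketch below checks an invented description of a failure situation against components of the formal system model and flags the gaps; the questions and findings are hypothetical and stand in for the much richer modelling the method actually involves.

```python
# Failures-method sketch: compare a description of a failure situation against
# components of the formal system model and flag the mismatches. All entries invented.
formal_system_checks = {
    "decision-making subsystem present":        "Was there a body that decided on transformations?",
    "performance-monitoring subsystem present": "Was performance information collected and acted on?",
    "resources and legitimacy provided":        "Did the wider system provide resources and authority?",
    "expectations made known":                  "Were expectations communicated to the system?",
    "environment considered":                   "Were disturbances from the environment anticipated?",
}

# Invented findings from a hypothetical failure investigation.
findings = {
    "decision-making subsystem present":        True,
    "performance-monitoring subsystem present": False,
    "resources and legitimacy provided":        True,
    "expectations made known":                  False,
    "environment considered":                   False,
}

for check, question in formal_system_checks.items():
    status = "ok" if findings[check] else "MISMATCH: examine with further paradigms"
    print(f"{check} ({question}) -> {status}")
```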
Conclusion
A review of the literature on risk has revealed many
techniques and concepts for identifying, evaluating and
estimating risks. Most approaches are based on the concept of "divide and conquer". This view has several benefits. It provides a formal structure for collating expertise on different aspects of the problem; it makes explicit the rationale used in reasoning; and it facilitates constructive criticism[22].
However, this reductionist principle fails to consider that accidents and failures are emergent properties arising out of whole systems. That is, it fails to consider how different parts of a system interact, and it underestimates environmental effects.
Concepts and theories which do not appear to be currently used for risk assessment have also been identified. Many of these ideas appear to be more holistic, in that they consider how human behaviour and context can affect the management of risk.
Risk managers face situations that can be described as ill-structured "messes"[78], arising from complex socio-technical systems. Systems methodologies have been developed to deal with precisely this sort of problem[78].
One such method is the failures method, which reflects
many of the techniques used in risk assessment. It is
therefore suggested that the use of systems ideas and the
failures method in particular could be a valuable risk
management tool.
References
1. Wharton, F., Risk management: basic concepts and
general principles, in Ansell, J. and Wharton, F. (Eds),
Risk: Analysis, Assessment and Management, John Wiley
& Sons, Chichester, 1992.
2. Fortune, J., Studying Systems Failures, Open University
Press, Milton Keynes, 1993.
3. Ackoff, R.L. and Jackson, M.C., Creating the Corporate
Future, John Wiley & Sons, Chichester, 1981.
4. Flood, R.L. and Jackson, M.C., Creative Problem Solving:
Total Systems Intervention, John Wiley & Sons,
Chichester, 1993.
5. Checkland, P., Systems Thinking, Systems Practice, John
Wiley & Sons, Chichester, 1990.
6. Martin, J., Working with Systems: Diagnosis, Open
University Press, Milton Keynes, 1991.
7. Clifton, J.J., Hazard prediction, in Keller, A.Z. and
Wilson, H.C. (Eds), Disaster Prevention, Planning and
Limitation, Disaster Prevention and Limitation Unit,
University of Bradford and The British Library, 1990.
8. Kara-Zaitri, C., Applying safety and reliability method-
ology for high risk installations to other areas of life, in
Keller, A.Z. and Wilson, H.C. (Eds), Disaster Prevention,
Planning and Limitation, Disaster Prevention and
Limitation Unit, University of Bradford and The British
Library, 1990.
9. Ansell, J., Reliability: industrial risk assessment, in
Ansell, J. and Wharton, F. (Eds), Risk: Analysis,
Assessment and Management, John Wiley & Sons,
Chichester, 1992.
10. Bell, T., Managing risk in large complex systems
special report, Spectrum, June 1989.
11. Ballard, G.M., Industrial risk: safety by design, in
Ansell, J. and Wharton, F. (Eds), Risk: Analysis,
Assessment and Management, John Wiley & Sons,
Chichester, 1992.
12. Vesely, W.E., Robust risk analysis: the need for it in
nuclear probabilistic risk evaluation, in Waller, R.A. and
Covello, V.T. (Eds), Low Probability High Consequence
Risk Analysis, Plenum Press, New York, NY, 1984.
13. Fischhoff, B., Acceptable Risk, Cambridge University
Press, Cambridge, 1981.
14. Whyte, A.V. and Burton, I. (Eds), Environmental Risk
Assessment, SCOPE Report 15, John Wiley & Sons,
Chichester, 1980.
15. Stewart, I., Risky business, New Scientist, 19 May 1990.
16. Slovic, P., Fischhoff, B. and Lichtenstein, S., Facts and
fears: understanding perceived risk, in Schwing, R.C.
and Albers, W.A. (Eds), How Safe Is Safe Enough?,
Plenum Press, New York, NY, 1980.
17. Lewis, H.W., Technological Risk, W.W. Norton & Com-
pany, New York, NY, 1990.
18. Kletz, T.A., Critical Aspects of Safety and Loss
Prevention, Butterworths, London, 1990.
19. Hambly, E.C. and Hambly, E.A., Risk evaluation and realism, Proceedings of the Institution of Civil Engineers, Civil Engineering, Vol. 102 No. 2, 1994, pp. 64-71.
20. Lave, L.B., Approaches to risk management: a critique,
in Covello, V.T., Menkes, J. and Mumpower, J. (Eds), Risk
Evaluation and Management, Plenum Press, New York,
NY, 1986.
21. Lawless, E.W., Jones, M.V. and Jones, R.M., Methods for
comparing the risks of technologies, in Covello, V.T.,
Menkes, J. and Mumpower, J. (Eds), Risk Evaluation and
Management, Plenum Press, New York, NY, 1986.
22. Merkhofer, M.W., Comparative analysis of formal
decision-making approaches, in Covello, V.T., Menkes, J.
and Mumpower, J. (Eds), Risk Evaluation and Manage-
ment, Plenum Press, New York, NY, 1986.
23. Cutter, S., Living with Risk: Geography of Technological
Hazards, Edward Arnold, London, 1993.
24. Starr, C., Benefit-cost studies in socio-technical sys-
tems, in Perspectives on Benefit-Risk Decision Making,
National Academy of Engineering, Washington, DC,
1972.
25. Irwin, A., Risk and the Control of Technology, Manchester
University Press, Manchester, 1985.
26. Henderson, J., Living with Risk: A Report from the BMA
Professional and Scientific Division, BMA, Wiley Medical
Publication, London, 1987.
27. Lowrance, W.W., The nature of risk, in Schwing, R.C.
and Albers, W.A. (Eds), How Safe Is Safe Enough?
Plenum Press, New York, NY, 1980.
28. Glendon, A.I., Risk cognition, in Singleton, W.T. and
Hovden, J. (Eds), Risk and Decisions, John Wiley & Sons,
Chichester, 1987.
29. Meister, D., Alternate approaches to human reliability
analysis, in Waller, R.A. and Covello, V.T. (Eds), Low
Probability, High Consequence Risk Analysis, Plenum
Press, New York, NY, 1984.
30. Reason, J., Human Error, Cambridge University Press,
Cambridge, 1990.
31. Rowe, W.D., Risk assessment approaches and methods,
in Conrad, J. (Ed.), Technology and Risk Assessment,
Academic Press, London, 1980.
32. Slovic, P., Fischhoff, B. and Lichtenstein, S., The
psychometric study of risk perception, in Covello, V.T.,
Menkes, J. and Mumpower, J. (Eds), Risk Evaluation and
Management, Plenum Press, New York, NY, 1986.
33. Covello, V.T. and Merkhofer, M.W., Risk Assessment Methods: Approaches for Assessing Health and Environmental Risks, Plenum Press, New York, NY, 1993.
34. Smith, K., Environmental Hazards: Assessing Risk and
Reducing Disaster, Routledge, London, 1992.
35. Covello, V.T., Decision analysis and risk management
decision making: issues and methods, Risk Analysis,
Vol. 7 No. 2, 1987, pp. 131-9.
36. Conrad, J., Society and risk assessment: an attempt at
interpretation in society, in Conrad, J. (Ed.), Technology
and Risk Assessment, Academic Press, London, 1980.
37. Ho, S.S.M. and Pike, R.H., The use of risk analysis
techniques in capital investment appraisal, in Ansell, J.
and Wharton, F. (Eds), Risk: Analysis, Assessment and
Management, John Wiley & Sons, Chichester, 1992.
38. Starr, C., Social benefit versus technological risk: what is
our society willing to pay for safety?, Science, Vol. 165,
19 September 1969, pp. 1232-38.
39. Hansson, S.O., Dimensions of risk, Risk Analysis, Vol. 9
No. 1, 1989, pp. 107-12.
40. Perrow, C., Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, NY, 1984.
41. Senecal, P. and Burke, E., Root cause analysis: what took
us so long?, Occupational Hazards, March 1993, pp. 63-5.
42. Douglas, M. and Wildavsky, A., Risk and Culture: An
Essay on the Selection of Technological and Environ-
mental Dangers, University of California Press, Berkeley,
CA, 1982.
43. Thompson, M., Aesthetics of risk: culture or context?, in
Schwing, R.C. and Albers, W.A. (Eds), How Safe Is Safe
Enough?, Plenum Press, New York, NY, 1980.
44. Adams, J.G.U., Risk, UCL Press, London, 1995.
45. Hovden, J. and Larson, T.J., Risk: culture and concepts,
in Singleton, W.T. and Hovden, J. (Eds), Risk and
Decisions, John Wiley & Sons, Chichester, 1987.
46. Wilde, J.S.G., Objective and subjective risk in drivers' response to road conditions: the implications of the
theory of risk homeostasis for accident aetiology and
prevention, paper presented at the seminar on the
Implications of Risk Taking Theories for Traffic Safety,
West Berlin, 1981.
47. Wilde, J.S.G., The theory of risk homeostasis:
implications for safety and health, Risk Analysis, Vol. 2
No. 4, 1982, pp. 209-25.
48. Wilde, J.S.G., Critical issues in risk homeostasis theory,
Risk Analysis, Vol. 2 No. 4, 1982, pp. 249-58.
49. Wilde, J.S.G., Assumptions necessary and unnecessary
to risk homeostasis, Ergonomics, Vol. 28 No. 11, 1985,
pp. 1531-38.
50. Wilde, J.S.G., Notes on the interpretation of traffic
accident data and of risk homeostasis theory: a reply to L.
Evans, Risk Analysis, Vol. 6 No. 1, 1986, pp. 95-101.
51. Graham, J.D., On Wilde's theory of risk homeostasis, Risk Analysis, Vol. 2 No. 4, 1982, pp. 235-37.
52. Lund, A.K. and Zador, P., Mandatory belt use and driver
risk taking, Risk Analysis, Vol. 4 No. 1, 1984, pp. 41-53.
53. Evans, L., Risk homeostasis theory and traffic accident
data, Risk Analysis, Vol. 6 No. 1, 1986, pp. 81-94.
54. Evans, L., Comments on Wilde's notes on risk homeostasis theory and traffic accident data, Risk Analysis, Vol. 6 No. 1, 1986, pp. 103-7.
55. McKenna, F.P., Behavioural compensation and safety,
Journal of Occupational Accidents, Vol. 9, 1987, pp. 107-21.
56. Trankle, U.G. and Gelau, C., Maximization of subjective
expected utility or risk control? Experimental tests of
risk homeostasis theory, Ergonomics, Vol. 35 No. 1, 1992,
pp. 7-23.
57. Adams, J.G.U., Risk homeostasis and the purpose of
safety regulation, Ergonomics, Vol. 31 No. 4, 1988,
pp. 407-28.
58. Hoyes, T.W. and Glendon, A.I., Risk homeostasis: issues
for future research, Safety Science, Vol. 16, 1993, pp. 19-33.
59. Janssen, W. and Tenkink, E., Risk homeostasis theory and
its critics: time for an agreement, TNO Institute of
Perception, The Netherlands, 1987.
60. McKenna, F.P., Evidence and assumptions relevant to
risk homeostasis, Ergonomics, Vol. 28 No. 11, 1985,
pp. 1539-41.
61. Underwood, G., Jiang, C. and Howarth, C.I., Modelling of
safety measure effect and risk compensation, Accident
Analysis and Prevention, Vol. 25 No. 3, 1993, pp. 277-88.
62. Asch, D., Levy, D.T., Shea, D. and Bodenhorn, H., Risk
compensation and the effectiveness of safety belt use
laws: a case study of New Jersey, Policy Sciences, Vol. 24,
1991, pp. 181-97.
63. Janssen, W., Seat-belt wearing and driving behaviour: an
instrumented-vehicle study, Accident Analysis and
Prevention, Vol. 26 No. 2, 1994, pp. 249-61.
64. Adams, J.G.U., Risk and Freedom: The Record of Road
Safety Regulation, Transport Publishing Projects,
London, 1985.
65. Adams, J.G.U., Evaluating the effectiveness of road
safety measures, Traffic Engineering & Control, June
1988, pp. 344-52.
66. Adams, J.G.U., Risk compensation, cultural theory and
the conduct of debates about risks, draft paper for
workshop on risk and rationality, University College
London, 7-8 September 1990.
67. Mathew, D., Helmets: the debate in a nutshell, CTC
Cycle Digest, Spring 1991.
68. Davis, R., Death on the Streets, Leading Edge Press, 1993.
69. Hakamies-Blomqvist, L., Compensation in older drivers
as reflected in their fatal accidents, Accident Analysis
and Prevention, Vol. 26 No. 1, 1994, pp. 107-12.
70. Wright, C.C. and Boyle, A.J., Road accident causation
and engineering treatment: a review of some current
issues, Traffic Engineering & Control, September 1987,
pp. 475-79.
71. Adams, J.G.U., Risk compensation and the problem of
measuring children's independent mobility and safety on
the roads, in Hillman, M. (Ed.), Children, Transport and
the Quality of Life, Policy Studies Institute, London, 1993.
72. Evans, W.N. and Graham, J.D., Risk reduction or risk
compensation? The case of mandatory safety-belt use
laws, Journal of Risk and Uncertainty, 1991, pp. 61-73.
73. Ryan, D.L. and Bridgeman, G.A., Judging the roles of
legislation, education and offsetting behaviour in seat
belt use: a survey and new evidence from Alberta,
Canadian Public Policy, Vol. XVIII No. 1, 1992, pp. 27-46.
74. Rock, S.M., Risk compensation and the Illinois seat belts
use law, Accident Analysis and Prevention, Vol. 25 No. 5,
1993, pp. 537-44.
75. Wong, Y.D. and Nicholson, A., Driver behaviour at
horizontal curves: risk compensation and the margin of
safety, Accident Analysis and Prevention, Vol. 24 No. 4,
1992, pp. 425-36.
76. Bignell, V. and Fortune, J., Understanding Systems
Failures, Manchester University Press, Manchester, 1992.
77. Fortune, J. and Peters, G., Systems analysis of failures as
a quality management tool, British Journal of
Management, Vol. 5, 1994, pp. 205-13.
78. Rosenhead, J. (Ed.), Rational Analysis for a Problematic
World, John Wiley & Sons, Chichester, 1989.
Application questions
(1) Given the phenomena of emergent properties and ill-structured messes, can accidents and failure ever be
effectively managed by expert systems?
(2) Which techniques are used in your organization to manage risk (or to investigate failure)? Which of those described
in this article might be of most benefit if adopted?
Diana White is a Part-time Postgraduate Research Student in the Systems Department, Open University, UK.
