
Accepted Manuscript

Risk assessment and risk management: review of recent advances on their foundation

Terje Aven

PII: S0377-2217(15)01147-9
DOI: 10.1016/j.ejor.2015.12.023
Reference: EOR 13414

To appear in: European Journal of Operational Research

Received date: 14 September 2015
Revised date: 13 December 2015
Accepted date: 14 December 2015

Please cite this article as: Terje Aven, Risk assessment and risk management: review of recent advances on their foundation, European Journal of Operational Research (2015), doi: 10.1016/j.ejor.2015.12.023

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.

Risk assessment and risk management: review of recent advances on their foundation

- The paper reviews recent advances on the foundation of risk assessment and management
- Trends in perspectives and approaches are identified
- The paper points to areas where further developments of the risk field are needed
- Examples of integrative risk research are highlighted


Invited Review

Risk assessment and risk management: review of recent advances on their foundation

Terje Aven, University of Stavanger, Norway
Phone 47832267, fax 4751831750, Email: terje.aven@uis.no
Abstract

Risk assessment and management was established as a scientific field some 30-40 years ago. Principles and methods were developed for how to conceptualise, assess and manage risk. These principles and methods still represent to a large extent the foundation of this field today, but many advances have been made, linked to both the theoretical platform and practical models and procedures. The purpose of the present invited paper is to perform a review of these advances, with a special focus on the fundamental ideas and thinking on which these are based. We have looked for trends in perspectives and approaches, and we also reflect on where further development of the risk field is needed and should be encouraged. The paper is written for readers with different types of background, not only for experts on risk.
Keywords
Risk assessment; Risk management; Foundational issues; Review

1 Introduction
The concept of risk and risk assessments has a long history. More than 2400 years ago the Athenians demonstrated their capacity for assessing risk before making decisions (Bernstein 1996). However, risk assessment and risk management as a scientific field is young, not more than 30-40 years old. From this period we see the first scientific journals, papers and conferences covering fundamental ideas and principles on how to appropriately assess and manage risk. To a large extent, these ideas and principles still form the basis for the field today – they are the building blocks for the risk assessment and management practice we have seen since the 70s and 80s. However, the field has developed considerably since then. New and more sophisticated analysis methods and techniques have been developed, and risk analytical approaches and methods are now used in most societal sectors. As an illustration of this, consider the range of specialty groups of the Society for Risk Analysis (www.sra.org) covering inter alia: Dose Response, Ecological Risk Assessment, Emerging Nanoscale Materials, Engineering & Infrastructure, Exposure Assessment, Microbial Risk Analysis, Occupational Health & Safety, Risk Policy & Law, and Security and Defense. Advances have also been made in fundamental issues for the field in recent years, and they are of special interest as they are generic and have the potential to influence a broad set of applications. These advances are the scope of the present paper.

The risk field has two main tasks, I) to use risk assessments and risk management to study and
treat the risk of specific activities (for example the operation of an offshore installation or an
investment), and II) to perform generic risk research and development, related to concepts,
theories, frameworks, approaches, principles, methods and models to understand, assess,
characterise, communicate and (in a wide sense) manage/govern risk (Aven and Zio 2014, SRA 2015b). The generic part II) provides the concepts and the assessment and management
tools to be used in the specific assessment and management problems of I). Simplified, we
can say that the risk field is about understanding the world (in relation to risk) and how we
can and should understand, assess and manage this world.

The aim of the present paper is to perform a review of recent advances made in the risk field, having a special focus on the fundamental ideas and thinking that form the generic risk research II). The scope of such a review is broad, and it has been a challenge to select works for this review from among the many seminal contributions made over the past 10-15 years. Only works that might reasonably be considered to contribute to the foundations of the field have been included. Priority has been given to works that are judged to be of special contemporary interest and importance, recognising the subjectivity of the selection and a deliberate bias towards rather recent papers and the areas of interest of the author of this manuscript. For reviews and discussions of the early development of the risk field, see Henley and Kumamoto (1981), Covello and Mumpower (1985), Rechard (1999, 2000), Bedford and Cooke (2001), Thompson et al (2005) and Zio (2007b).

The following main topics will be covered: Risk analysis and science; risk conceptualisation; uncertainty in risk assessment; risk management principles and strategies, having a special focus on confronting large/deep uncertainties, surprises and the unforeseen; and the future of risk assessment and management.
Special attention will be devoted to contributions that can be seen as a result of an integrative thinking process, a thinking which by definition reflects a strong "ability to face constructively the tension of opposing ideas and instead of choosing one at the expense of the other, generate a creative resolution of the tension in the form of a new idea that contains elements of the opposing ideas but is superior to each" (Martin 2009, p. 15). As an example, think about the conceptualisation of risk. There are a number of different definitions, which can be said to create tension. However, integrative thinking stimulates the search for perspectives that extend beyond these definitions – it uses the opposing ideas to reach a new level of understanding. The coming review will point to work in this direction and discuss trends we see in the risk research.
2 The risk field and science


Generic risk research II) to a large extent defines the risk science. However, applications of type I) may also be scientific if the work contributes to new insights, for example a better understanding of how to conduct a specific risk assessment method in practice. Rather few publications have been presented on this topic, discussing issues linking science and scientific criteria on the one hand, and risk and the risk fields on the other. Lately, however, several fundamental discussions of this topic have appeared. These have contributed to clarifying the content of the risk field and its scientific basis; see Hansson and Aven (2014), Hollnagel (2014), Hale (2014), Le Coze et al (2014) and Aven (2014). Here are some key points made.

We should distinguish between the risk field characterised by the totality of relevant risk
educational programmes, journals, papers, researchers, research groups and societies, etc. (we
may refer to it as a risk discipline), and the risk field covering the knowledge generation of I)
and II).


This understanding (I and II) is in line with a perspective on science as argued for by Hansson
(2013), stating that science is the practice that provides us with the epistemically most
warranted statements that can be made, at the time being, on subject matters covered by the
community of knowledge disciplines, i.e. on nature, ourselves as human beings, our societies,
our physical constructions, and our thought constructions (Hansson, 2013). By publishing
papers in journals, we are thus contributing to developing the risk science.

The boundaries between the two levels I) and II) are not strict. Level II research and development is to a varying degree generic for the risk field. Some works are truly generic in the sense that they are relevant for all types of applications, but there are many levels of generality. Some research may have a scope which mainly covers some areas of applications, or just one, but which is still fundamental for all types of applications in these areas. For example, a paper can address how to best conceptualise risk in a business context and have rather limited interest outside this area.

Consider as an example the supply chain risk management area, which has quite recently developed from an emerging topic into a growing research field (Fahimnia et al 2015). The work by Fahimnia et al (2015) presents a review of quantitative and analytical models (i.e. mathematical, optimisation and simulation modelling efforts) for managing supply chain risks and points to generative research areas that have provided the field with foundational knowledge, concepts, theories, tools, and techniques. Examples of work of special relevancy here include Blackhurst and Wu (2009), Brandenburg et al (2014), Heckmann et al (2015), Jüttner et al (2003), Peck (2006), Tang and Zhou (2012), Zsidisin (2003) and Zsidisin and Ritchie (2010). These works cover contributions to I) but also II), although they are to a varying degree relevant for other application areas.
As an example of I), consider the analysis in Tang (2006), specifically addressing what are the risks that are most relevant for the supply chain area. Although not looking at a specific system, it is more natural to categorise the analysis in I) than II), as the work has rather limited relevance for areas outside supply chain management. Another example illustrates the spectrum of situations between I) and II). Tang and Musa (2011) highlight that the understanding of what risk is definitely represents a research challenge in supply chain management. Heckmann et al (2015) review common perspectives on risk in supply chain management and outline ideas for how to best conceptualise risk, and clearly this type of research is foundational for the supply chain area, but not for the risk field in general. The work by Heckmann et al (2015) is in line with current generic trends on risk conceptualisation as for example summarised by SRA (2015a,b), with respect to some issues, but not others (see a comment about this in Section 3). This is a challenge for all types of applications: transfer of knowledge and experience is difficult to obtain across areas, and we often see that the different fields develop tailor-made concepts, which are not up-to-date relative to the developments of the generic risk field. This demonstrates the generic risk research's need for a stronger visibility and impact. On the other hand, the restricted work in specific areas can often motivate and be influential for generic risk research. The author of the present paper worked with offshore risk analysis applications, and issues raised there led to generic risk research about risk conceptualisation (Aven 2013a). There is a tension between different types of perspectives and this can stimulate integrative and ground-breaking ideas. For another example of work in the borderline between I) and II), see Goerlandt and Montewka (2015), related to maritime transportation risk. See also Aven and Renn (2015), who discuss the foundation of the risk and uncertainty work of the Intergovernmental Panel on Climate Change (IPCC), which is the principal international authority assessing climate risk. This discussion addresses a specific application and is thus of type I), but it is strongly based on generic risk research II).

Next we will discuss in more detail how science is related to key risk assessment and risk
management activities, in particular the process in which science is used as a base for
decision-making on risk. A key element in this discussion is the concept “knowledge”.

Science, knowledge and decision-making

In Hansson and Aven (2014), a model which partly builds on ideas from Hertz and Thomas (1983) is presented, showing the links between facts and values in risk decision-making; see Figure 1.
Data and information about a phenomenon, gathered through testing and analysis, provide the evidence. These data and information contribute to a knowledge base which is the collection of all "truths" (legitimate truth claims) and beliefs that the relevant group of experts and scientists take as given in further research and analysis in the field. The evidence and the knowledge base are supposed to be free of non-epistemic values. Such values are presumed to be added only in the third stage. Concluding that an activity is safe enough is a judgement based on both science and values. The interpretation of the knowledge base is often quite complicated since it has to be performed against the background of general scientific knowledge. We may have tested a product extensively and studied its mechanism in great detail, but there is no way to exclude very rare occurrences of failures that could materialise 25 years into the future. Although the decision to disregard such possibilities is far from value-free, it cannot in practice be made by laypeople, since it requires deep understanding of the available evidence seen in relation to our general knowledge about the phenomena studied.
This leads us into the risk evaluation step, as shown in Figure 1. This is a step where the knowledge base is evaluated and a summary judgement is reached on the risk and uncertainties involved in the case under investigation. This evaluation has to take the values of the decision-makers into account, and a careful distinction has to be made between the scientific burden of proof – the amount of evidence required to treat an assertion as part of current scientific knowledge – and the practical burden of proof in a particular decision. However, the evaluation is so entwined with scientific issues that it nevertheless has to be performed by scientific experts. Many of the risk assessment reports emanating from various scientific and technical committees perform this function. These committees regularly operate in a "no man's land" between science and policy, and it is no surprise that they often find themselves criticised on value-based grounds.

But the judgements do not stop there: the decision-makers need to see beyond the risk
evaluation; they need to combine the risk information they have received with information
from other sources and on other topics. In Figure 1 we refer to this as the decision-maker’s
review and judgement. It goes clearly beyond the scientific field and will cover value-based
considerations of different types. It may also include policy-related considerations on risk and
safety that were not covered in the expert review. Just like the expert’s review, it is based on a
combination of factual and value-based considerations.


[Figure 1 depicts a chain of stages: Evidence → Knowledge base → Broad risk evaluation → Decision-maker's review → Decision. The first stages are carried out by experts and are mainly fact-based; the later stages belong to the decision maker and are mainly value-based.]

Figure 1: A model for linking the various stages in risk-informed decision-making (based on Hansson and Aven 2014)

Above we have referred to "knowledge" a number of times, but what is its meaning in this context? The new SRA glossary refers to two types of knowledge: "know-how (skill) and know-that of propositional knowledge (justified beliefs). Knowledge is gained through for example scientific methodology and peer-review, experience and testing." (SRA 2015a)
However, studying the scientific literature on knowledge as such, the common perspective is not justified beliefs but justified true beliefs. The SRA (2015a) glossary challenges this definition. Aven (2014) presents some examples for this view, including this one: "A group of experts believe that a system will not be able to withstand a specific load. Their belief is based on data and information, modelling and analysis. But they can be wrong. It is difficult to find a place for a "truth requirement". Who can say in advance what is the truth? Yet the experts have some knowledge about the phenomena. A probability assignment can be made, for example that the system will withstand the load with probability 0.01, and then the knowledge is considered partly reflected in the probability, partly in the background knowledge that this probability is based on". The above knowledge definition of science and the model of Figure 1 work perfectly in case of the "justified belief" interpretation of knowledge, but not for the "justified true belief" interpretation.

From such a view the term 'justified' becomes critical. In line with Hansson (2013), it refers to being the result of a scientific process – meeting some criteria set by the scientific environment for the process considered. For example, in the case of the system load above, these criteria relate to the way the risk assessment is conducted, that the rules of probability are met, etc. Aven and Heide (2009), see also Aven (2011a), provide an in-depth discussion of such criteria. A basic requirement is that the analysis is solid/sound (follows standard protocols for scientific work, like being in compliance with all rules and assumptions made, the basis for all choices being made clear, etc.). In addition, criteria of reliability and validity should be met. The reliability requirement here relates to the extent to which the risk assessment yields the same results when repeating the analysis, and the validity requirement refers to the degree to which the risk assessment describes the specific concepts that one is attempting to describe. Adopting these criteria, the results (beliefs) of the risk assessments can to a varying degree be judged as "justified".

As shown by Aven and Heide (2009) and Aven (2011a), this evaluation depends strongly on the risk perspective adopted. If the reference is the "traditional scientific method", standing on the pillars of accurate estimations and predictions, the criteria of reliability and validity would fail in general, in particular when the uncertainties are large. The problems for the risk assessments in meeting the requirements of the traditional scientific method were discussed as early as 1981 by Alvin M. Weinberg and Robert B. Cumming in their editorials of the first issue of the Risk Analysis journal, in relation to the establishment of the Society for Risk Analysis (Weinberg 1981, Cumming 1981). However, a risk assessment can also be seen as a tool used to represent and describe knowledge and lack of knowledge, and then other criteria need to be used to evaluate reliability and validity, and whether the assessment is a scientific method.
This topic is discussed by Hansson and Aven (2014). They give some examples of useful science-based decision support in line with these ideas:
- characterisations of the robustness of natural, technological, and social systems and their interactions.
- characterisations of uncertainties, and of the robustness of different types of knowledge that are relevant for risk management, and of ways in which some of these uncertainties can be reduced and the knowledge made more robust.
- investigations aimed at uncovering specific weaknesses or lacunae in the knowledge on which risk management is based.
- studies of successes and failures in previous responses to surprising and unforeseen events.
Returning to the concept of integrative thinking introduced in Section 1, we may point to the tension between the ideas that risk assessment fails to meet the criteria of the traditional scientific method, and that it should be a solid and useful method for supporting risk decision-making. The result of a shift in perspective for the risk assessment, from accurate risk estimation to knowledge and lack of knowledge characterisations, can be viewed as a result of such thinking. We will discuss this change in perspective for the risk assessments further in Section 6.
3 Risk conceptualisation
Several attempts have been made to establish broadly accepted definitions of key terms related to concepts fundamental for the risk field; see e.g. Thompson et al (2005). A scientific field or discipline needs to stand solidly on well-defined and universally understood terms and concepts. Nonetheless, experience has shown that to agree on one unified set of definitions is not realistic. This was the point of departure for a thinking process conducted recently by an expert committee of the Society for Risk Analysis (SRA), which resulted in a new glossary for SRA (SRA 2015a). The glossary is founded on the idea that it is still possible to establish authoritative definitions, the key being to allow for different perspectives on fundamental concepts and to make a distinction between overall qualitative definitions and their associated measurements. We will focus here on the risk concept, but the glossary also covers related terms such as probability, vulnerability, robustness and resilience.

Allowing for different perspectives does not mean that all definitions that can be found in the
literature are included in the glossary: the definitions included have to meet some basic
criteria – a rationale – such as being logical, well-defined, understandable, precise, etc. (SRA
2015a).


In the following we summarise the risk definition text from SRA (2015a):

“We consider a future activity (interpreted in a wide sense to also cover, for example, natural
phenomena), for example the operation of a system, and define risk in relation to the
consequences of this activity with respect to something that humans value. The consequences
are often seen in relation to some reference values (planned values, objectives, etc.), and the
focus is normally on negative, undesirable consequences. There is always at least one
outcome that is considered as negative or undesirable.

Overall qualitative definitions of risk:


a) the possibility of an unfortunate occurrence
b) the potential for realisation of unwanted, negative consequences of an event
c) exposure to a proposition (e.g. the occurrence of a loss) of which one is uncertain
d) the consequences of the activity and associated uncertainties
e) uncertainty about and severity of the consequences of an activity with respect to something that humans value
f) the occurrences of some specified consequences of the activity and associated uncertainties
g) the deviation from a reference value and associated uncertainties

US
These definitions express basically the same idea, adding the uncertainty dimension to events
and consequences. ISO defines risk as the effect of uncertainty on objectives (ISO 2009a,b).
AN
It is possible to interpret this definition in different ways; one as a special case of those
considered above. e.g. d) or g).

To describe or measure risk – to make judgements about how large or small the risk is – we use various metrics:

Risk metrics/descriptions (examples)

1. The combination of probability and magnitude/severity of consequences
2. The triplet (si, pi, ci), where si is the ith scenario, pi is the probability of that scenario, and ci is the consequence of the ith scenario, i = 1, 2, ..., N
3. The triplet (C',Q,K), where C' is some specified consequences, Q a measure of uncertainty associated with C' (typically probability) and K the background knowledge that supports C' and Q (which includes a judgement of the strength of this knowledge)
4. Expected consequences (damage, loss), for example computed by:
   i. Expected number of fatalities in a specific period of time or the expected number of fatalities per unit of exposure time
   ii. The product of the probability of the hazard occurring and the probability that the relevant object is exposed given the hazard, and the expected damage given that the hazard occurs and the object is exposed to it (the last term is a vulnerability metric)
   iii. Expected disutility
5. A possibility distribution for the damage (for example a triangular possibility distribution)

The suitability of these metrics/descriptions depends on the situation. None of these examples
can be viewed as risk itself, and the appropriateness of the metric/description can always be
questioned. For example, the expected consequences can be informative for large populations
and individual risk, but not otherwise. For a specific decision situation, a selected set of
metrics has to be determined, meeting the need for decision support."
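
To make metrics 2 and 4 above concrete, the following small Python sketch computes them for an invented set of scenarios; the scenario names, probabilities and consequence numbers are purely illustrative and are not taken from the SRA text.

# Hypothetical illustration of risk metrics 2 and 4 from the list above.
# All scenarios, probabilities and consequence values are invented.
scenarios = [
    ("small gas leak", 0.05, 0),     # (s_i, p_i, c_i): scenario, probability, fatalities
    ("ignited leak",   0.01, 2),
    ("blowout",        0.001, 10),
]

# Metric 2: the triplet (s_i, p_i, c_i) is simply the list of scenarios itself.
for s, p, c in scenarios:
    print(f"scenario={s!r:20} p={p:.3f} consequences={c} fatalities")

# Metric 4i: expected number of fatalities in the period considered
expected_fatalities = sum(p * c for _, p, c in scenarios)
print("Expected number of fatalities:", expected_fatalities)

# Metric 4ii: P(hazard) * P(exposed | hazard) * E[damage | hazard and exposure]
p_hazard, p_exposed_given_hazard, expected_damage_given_both = 0.01, 0.5, 4.0
print("Metric 4ii:", p_hazard * p_exposed_given_hazard * expected_damage_given_both)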


To illustrate the thinking, consider the personnel risk related to potential accidents on an offshore installation. Then, if risk is defined according to d), in line with the recommendations in for example PSA-N (2015) and Aven et al (2014), risk has two dimensions: the consequences of the operation, covering events A such as gas leakages and blowouts, and their effects C for human lives and health; as well as uncertainty U, as we do not know now which events will occur and what the effects will be; we face risk. The risk is referred to as (A,C,U). To describe the risk, as we do in the risk assessment, we are in general terms led to the triplet (C',Q,K), as defined above. We may for example choose to focus on the number of fatalities, and then C' equals this number. It is unknown at the time of the analysis, and we use a measure to express the uncertainty. Probability is the most common tool, but other tools also exist, including imprecise (interval) probability and representations based on the theories of possibility and evidence, as well as qualitative approaches; see Section 4 and Aven et al. (2014), Dubois (2010), Baudrit et al (2006) and Flage et al. (2014). Arguments for seeing beyond expected values and probabilities in defining and describing risk are summarised in Aven (2012, 2015c); see also Section 4. Aven (2012) provides a comprehensive overview of different categories of risk definitions, having also a historical and development trend perspective. It is to be seen as a foundation for the SRA (2015a) glossary.

US
The way we understand and describe risk strongly influences the way risk is analysed and
hence it may have serious implications for risk management and decision-making. There
AN
should be no reason why some of the current perspectives should not be wiped out as they are
simply misguiding the decision-maker in many cases. The best example is the use of expected
loss as a general concept of risk. The uncertainty-founded risk perspectives (e.g. PSA-N
2015, ISO 2009a,b, Aven et al 2014, Aven and Renn 2009) indicate that we should also
M

include the pure probability-based perspectives, as the uncertainties are not sufficiently
revealed for these perspectives; see also discussion in Section 4. By starting from the overall
qualitative risk concept, we acknowledge that any tool we use needs to be treated as a tool. It
ED

always has limitations and these must be given due attention. Through this distinction we will
more easily look for what is missing between the overall concept and the tool. Without a
proper framework clarifying the difference between the overall risk concept and how it is
being measured, it is difficult to know what to look for and make improvements in these tools
PT

(Aven 2012).

The risk concept is addressed in all fields, whether finance, safety engineering, health, transportation, security or supply chain management (Althaus 2005). Its meaning is a topic of concern in all areas. Some areas seem to have found the answer a long time ago, for instance the nuclear industry, which has been founded on the Kaplan and Garrick (1981) definition (the triplet scenarios, consequences and probabilities) for more than three decades; others acknowledge the need for further developments, such as in the supply chain field (Heckmann et al 2015). Heckmann et al (2015) point to the lack of clarity in understanding what the supply chain risk concept means, and search for solutions. A new definition is suggested: "Supply chain risk is the potential loss for a supply chain in terms of its target values of efficiency and effectiveness evoked by uncertain developments of supply chain characteristics whose changes were caused by the occurrence of triggering-events". The authors highlight that "the real challenge in the field of supply chain risk management is still the quantification and modeling of supply chain risk. To this date, supply chain risk management suffers from the lack of a clear and adequate quantitative measure for supply chain risk that respects the characteristics of modern supply chains" (Heckmann et al 2015).


We see a structure resembling the structure of the SRA glossary, with a broad qualitative
concept and metrics describing the risk. The supply chain risk is just an example to illustrate
the wide set of applications that relate to risk. Although all areas have special needs, they all
face risk as framed in the set-up of the first paragraph of the SRA (2015a) text above. There is
no need to reinvent the wheel for every new type of application.

To illustrate the many types of issues associated with the challenge of establishing suitable
risk descriptions and metrics, an example from finance, business and operational research will
be provided. It is beyond the scope of the present paper to provide a comprehensive all-
inclusive overview of contributions of this type.

In finance, business and operational research there is considerable work related to risk metrics, covering both moment-based and quantile-based metrics. The former category covers for example expected loss functions and expected square loss, and the latter category, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR); see e.g. Natarajan et al (2009). Research is conducted to analyse their properties and explore how successful they are in providing informative risk descriptions in a decision-making context, under various conditions, for example for a portfolio of projects or securities, and varying degrees of uncertainty related to the parameters of the probability models; see e.g. Natarajan et al (2009), Shapiro (2013), Brandtner (2013) and Mitra et al (2015). As these references show, the works often have a rigorous mathematical and probabilistic basis, with strong pillars taken from economic theory such as the expected utility theory.
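
To illustrate the quantile-based metrics mentioned here, the following Python sketch estimates VaR and CVaR from a simulated loss sample; the loss distribution and its parameters are invented for the purpose of illustration and do not come from the cited works.

import numpy as np

rng = np.random.default_rng(1)
# Simulated portfolio losses (illustrative only; positive values are losses)
losses = rng.normal(loc=1.0, scale=2.0, size=100_000)

alpha = 0.95
var = np.quantile(losses, alpha)        # Value-at-Risk: the alpha-quantile of the loss distribution
cvar = losses[losses >= var].mean()     # Conditional Value-at-Risk: expected loss beyond VaR

print(f"VaR_{alpha}  = {var:.2f}")
print(f"CVaR_{alpha} = {cvar:.2f}")
print(f"Expected loss = {losses.mean():.2f}")   # moment-based metric, for comparison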

4 Uncertainty in risk assessments


Uncertainty is a key concept in risk conceptualisation and risk assessments, as shown in Section 3. How to understand and deal with the uncertainties has been intensively discussed in the literature, from the early stages of risk assessment in the 70s and 80s until today. Still the topic is a central one. Flage et al (2014) provide a recent perspective on concerns, challenges and directions of development for representing and expressing uncertainty in risk assessment. Probabilistic analysis is the predominant method used to handle the uncertainties involved in risk analysis, both aleatory (representing variation) and epistemic (due to lack of knowledge). For aleatory uncertainty there is broad agreement about using probabilities with a limiting relative frequency interpretation. However, for representing and expressing epistemic uncertainty, the answer is not so straightforward. Bayesian subjective probability approaches are the most common, but many alternatives have been proposed, including interval probabilities, possibilistic measures, and qualitative methods. Flage et al (2014) examine the problem and identify issues that are foundational for its treatment. See also the discussion note by Dubois (2010).

One of the issues raised relates to when subjective probability is not appropriate. The argument often seen is that if the background knowledge is rather weak, then it will be difficult or impossible to assign a subjective probability with some confidence. However, a subjective probability can always be assigned. The problem is that a specific probability assigned is considered to represent a stronger knowledge than can be justified. Think of a situation where the assigner has no knowledge about a quantity x beyond the following: the quantity x is in the interval [0,1] and the most likely value of x is ½. From this knowledge alone there is no way of representing a specific probability distribution; rather we are led to the use of possibility theory; see Aven et al (2014, p. 46). Forcing the analyst to assign one probability distribution would mean the need to add some unavailable information. We are led to bounds of probability distributions.

Aven (2010) adds another perspective to this discussion. The key point is not only to
represent the available knowledge but also to use probability to express the beliefs of the
experts. It is acknowledged that these beliefs are subjective, but they nevertheless support the
decision-making. From this view it is not either or; probability and the alternative approaches
supplement each other. This issue is also discussed by Dubois (2010).

The experience of the present author is that advocates of non-probabilistic approaches, such as possibility theory and evidence theory, often lack an understanding of the subjective probability concept. If the concept is known, the interpretation often relates to a betting interpretation, which is controversial (Aven 2013a). For a summary of arguments for why this interpretation should be avoided and replaced by a direct comparison approach, see Lindley (2006, p. 38) and Aven (2013a). This latter interpretation is as follows: the probability P(A) = 0.1 (say) means that the assessor compares his/her uncertainty (degree of belief) about the occurrence of the event A with the standard of drawing at random a specific ball from an urn that contains 10 balls (Lindley 2006).

If subjective probabilities are used to express the uncertainties, we also need to reflect on the knowledge that supports the probabilities. Think of a decision-making context where some risk analysts produce some probabilistic risk metrics; in one case the background knowledge is strong, in the other, weaker, but the probabilities and metrics are the same. To meet this challenge one can look for alternative approaches such as possibility theory and evidence theory, but it is also possible to think differently, to try to express qualitatively the strength of this knowledge to inform the decision-makers. The results are then summarised in not only probabilities P but the pair (P,SoK), where SoK provides some qualitative measures of the strength of the knowledge supporting P. Work along these lines is reported in, for example, Flage and Aven (2009) and Aven (2014), with criteria related to aspects like justification of assumptions made, amount of reliable and relevant data/information, agreement among experts and understanding of the phenomena involved.
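
As a concrete illustration of reporting the pair (P,SoK), the following Python sketch scores the four types of criteria listed above on a three-level scale and attaches the resulting qualitative judgement to a probability; the scoring levels and the aggregation rule are hypothetical choices made for this example, not the scheme of Flage and Aven (2009).

# Hypothetical sketch of the (P, SoK) reporting idea; scores and rules are invented.
from dataclasses import dataclass

CRITERIA = ("assumption_justification", "data_amount_relevance",
            "expert_agreement", "phenomena_understanding")

@dataclass
class RiskMetric:
    probability: float            # the assigned subjective probability P
    scores: dict                  # criterion -> "weak" | "medium" | "strong"

    def strength_of_knowledge(self) -> str:
        levels = {"weak": 0, "medium": 1, "strong": 2}
        avg = sum(levels[self.scores[c]] for c in CRITERIA) / len(CRITERIA)
        if any(self.scores[c] == "weak" for c in CRITERIA):
            return "weak"         # one weak criterion pulls the overall judgement down
        return "strong" if avg > 1.5 else "medium"

leak = RiskMetric(probability=0.01,
                  scores={"assumption_justification": "medium",
                          "data_amount_relevance": "strong",
                          "expert_agreement": "strong",
                          "phenomena_understanding": "strong"})
print("Report (P, SoK):", (leak.probability, leak.strength_of_knowledge()))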

Similar and related criteria are used in the so-called NUSAP system (NUSAP: Numeral, Unit, Spread, Assessment, and Pedigree) (Funtowicz and Ravetz 1990, 1993, Kloprogge et al 2005, 2011, Laes et al 2011, van der Sluijs et al 2005a, 2005b), originally designed for the purpose of analysis and diagnosis of uncertainty in science for policy by performing a critical appraisal of the knowledge base behind the relevant scientific information.

See also discussion by Spiegelhalter and Riesch (2014), who provide forms of expression of
uncertainty within five levels: event, parameter and model uncertainty – and two extra-model
levels concerning acknowledged and unknown inadequacies in the modelling process,
including possible disagreements about the framing of the problem.

For interval probabilities, founded for example on possibility theory and evidence theory, it is
also meaningful and relevant to consider the background knowledge and the strength of this
knowledge. Normally the background knowledge in the case of intervals would be stronger
than in the case of specific probability assignments, but they would be less informative in the
sense of communicating the judgements of the experts making the assignments.

11
ACCEPTED MANUSCRIPT

As commented by the authors of SRA (2015b), many researchers today are more relaxed than
previously about using non-probabilistic representations of uncertainty. The basic idea is that
probability is considered the main tool, but other approaches and methods may be used and
useful when credible probabilities cannot easily be determined or agreed upon. For situations
characterised by large and “deep” uncertainties, there seems to be broad acceptance of the
need for seeing beyond probability. As we have seen above, this does not necessarily mean
the use of possibility theory or evidence theory. The combination of probability and
qualitative approaches represents an interesting alternative direction of research. Again we see
elements of integrative thinking, using the tension between different perspectives for
representing and expressing uncertainties to obtain something new and more wide-ranging
and hopefully better.

A central area of uncertainty in risk assessment is uncertainty importance analysis. The challenge is to identify the most critical and essential contributors to output uncertainties and risk. Considerable work has been conducted in this area; see e.g. Borgonovo (2006, 2007, 2015), Baraldi et al (2009) and Aven and Nøkland (2010). In Aven and Nøkland (2010) a rethinking of the rationale for the uncertainty importance measures is provided. It is questioned what information they give compared to the traditional importance measures such as the improvement potential and the Birnbaum measure. A new type of combined sets of measures is introduced, based on an integration of a traditional importance measure and a related uncertainty importance measure. Baraldi et al (2009) have a similar scope, investigating how uncertainties can influence the traditional importance measures, and how one can reflect the uncertainties in the ranking of the components or basic events.
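
For readers less familiar with the traditional measures referred to here, the following Python sketch computes the Birnbaum measure for a small invented series-parallel system; the structure function and the reliability numbers are hypothetical. An uncertainty importance analysis would, in addition, examine how epistemic uncertainty about the component reliabilities propagates to the system-level result.

# Hypothetical sketch of the Birnbaum importance measure for an invented system:
# component 1 in series with a parallel pair (2, 3).
# I_B(i) = h(1_i, p) - h(0_i, p), where h is the system reliability function.

def system_reliability(p1, p2, p3):
    # series of component 1 with the parallel group {2, 3}
    return p1 * (1.0 - (1.0 - p2) * (1.0 - p3))

p = {"1": 0.95, "2": 0.80, "3": 0.70}   # illustrative component reliabilities

def birnbaum(i):
    hi = dict(p); lo = dict(p)
    hi[i], lo[i] = 1.0, 0.0
    return system_reliability(hi["1"], hi["2"], hi["3"]) - \
           system_reliability(lo["1"], lo["2"], lo["3"])

for i in p:
    print(f"Birnbaum importance of component {i}: {birnbaum(i):.3f}")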

Models play an important role in risk assessments, and considerable attention has been devoted to the issue of model uncertainty over the years and also recently. Nevertheless, there has been some lack of clarity in the risk field regarding what this concept means; compare, for example, Reinert and Apostolakis (2006), Park et al (2010), Droguett and Mosleh (2013, 2014) and Aven and Zio (2013). According to Aven and Zio (2013), model uncertainty is to be interpreted as uncertainty about the model error, defined by g(x)-y, where y is the quantity we would like to assess and g(x) is a model of y having some parameters x. Different approaches for assessing this uncertainty can then be used, including subjective probabilities. This set-up is discussed in more detail in Bjerga et al (2014).
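
A minimal Python sketch of this set-up is given below; the model g, the input value and the subjective probability distribution assigned to the model error are all invented for illustration and are not taken from the cited works.

# Hypothetical sketch of the model error set-up g(x) - y described above.
import numpy as np

def g(x):
    """Simple model of the unknown quantity y, e.g. a linear response to a load x (assumed form)."""
    return 2.0 * x + 1.0

x = 3.0
model_output = g(x)

# Epistemic uncertainty about the model error e = g(x) - y, expressed here by a
# subjective probability distribution (assumed normal with mean 0.5 and sd 1.0).
rng = np.random.default_rng(0)
error_samples = rng.normal(loc=0.5, scale=1.0, size=100_000)

# Implied uncertainty description for y itself: y = g(x) - e
y_samples = model_output - error_samples
print("Model output g(x):", model_output)
print("P(model error > 1):", np.mean(error_samples > 1.0))
print("90% interval for y:", np.quantile(y_samples, [0.05, 0.95]).round(2))
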
5 Risk management principles and strategies


Before looking into recent developments in fundamental risk management principles and strategies, it is useful to review two well-established pillars of risk management: a) the main risk management strategies available and b) the structure of the risk management process.

For a), three major strategies are commonly used to manage risk: risk-informed, cautionary/
precautionary and discursive strategies (Renn 2008, SRA 2015b). The cautionary/
precautionary strategy is also referred to as a strategy of robustness and resilience. In most
cases the appropriate strategy would be a mixture of these three strategies.

The risk-informed strategy refers to the treatment of risk ‒ avoidance, reduction, transfer and retention ‒ using risk assessments in an absolute or relative way. The cautionary/precautionary strategy highlights features like containment, the development of substitutes, safety factors, redundancy in designing safety devices, as well as strengthening of the immune system, diversification of the means for approaching identical or similar ends, design of systems with flexible response options and the improvement of conditions for emergency management and system adaptation. An important aspect here is the ability to adequately read signals and the precursors of serious events. All risk regulations are based on some level of such principles to meet the uncertainties, risks and the potential for surprises. The discursive strategy uses measures to build confidence and trustworthiness, through reduction of uncertainties and ambiguities, clarifications of facts, involvement of affected people, deliberation and accountability (Renn 2008, SRA 2015b).

For b), the process can be broken down into the following steps, in line with what one finds in standards such as ISO 31000 and most risk analysis textbooks (e.g. Meyer and Reniers 2013, Zio 2007a, Aven 2015a):

i. Establish context, which means for example to define the purpose of the risk management activities, and specify goals and criteria.
ii. Identify situations and events (hazards/threats/opportunities) that can affect the activity considered and objectives defined. Many methods have been developed for this task, including checklists, HAZOP and FMEA.
iii. Conduct cause and consequence analysis of these events, using techniques such as fault tree analysis, event tree analysis and Bayesian networks.
iv. Make judgements of the likelihood of the events and their consequences, and establish a risk description or characterisation.
v. Evaluate risk, to judge the significance of the risk.
vi. Risk treatment.

In addition, implementation issues related to the risk management process need to be mentioned; see for example ISO (2009b), Banks and Dunn (2003) and Teng et al (2012, 2013).
The risk assessments provide decision support in choosing between alternatives, the acceptance of activities and products, the implementation of risk-reducing measures, etc. The generation of the risk information is often supplemented with decision analysis tools such as cost-benefit analysis, cost-effectiveness analysis and multi-attribute analysis. All these methods have in common that they are systematic approaches for organising the pros and cons of a decision alternative, but they differ with respect to the extent to which one is willing to make the factors in the problem explicitly comparable. Independent of the tool, there is always a need for a managerial review and judgement, which sees beyond the results of the analysis and adds considerations linked to the knowledge and lack of knowledge on which the assessments are based, as well as issues not captured by the analysis, as was discussed in Section 2. The degree of "completeness" of an analysis depends on the quality of the analysis and applied cut-offs (SRA 2015b).
For a review of some alternative recent decision analytical approaches, see Gilboa and
Marinacci (2013). This reference provides some interesting historical and philosophical
reflections on the foundations of the Bayesian and the expected utility based perspectives to
decision-making under uncertainty.

From this basis, considerable work on risk management principles and strategies has been
conducted in recent years. A pioneering work was carried out by Klinke and Renn (2002),
who offered a new classification of risk types and management strategies. The scheme
includes seven aspects of uncertainty and the extent of damage, e.g. delay effects and the
geographical dispersion of the damage, and the potential of mobilisation the risk may have. For each risk type, a set of risk management strategies is defined. The work integrates the
three major strategies for managing risk as discussed above: risk-informed,
cautionary/precautionary and discursive strategies (Aven and Cox 2015).

Risk management is closely related to policy and policy analysis. A policy can be defined as a
principle or plan to guide decisions and achieve desirable outcomes, and the term applies to
international organisations, governments, private sector organizations and groups, as well as
individuals. The development and operation of policies are often structured by the following
stages inspired by decision theory (e.g. Althaus et al 2007):

1. Problem identification – the recognition of an issue that demands further attention
2. Generating alternatives, analysis
3. Processing, covering aspects like policy instrumentation development, consulting, deliberation and coordination
4. Decision-making
5. Implementation
6. Evaluation (assessing the effectiveness of the policy)

Linking stage 6 with 1, the process is referred to as the policy cycle. It has similar elements as we find in the quality and project management field for ensuring continuous improvement – plan, do, study and act. The above steps i) – vi) for the risk analysis process can also be structured in line with this cycle. The risk field provides input to the elements of the policy process for example by:

- Conceptualisation and characterisation of the problem/issue, covering aspects like objectives, criteria, risk, uncertainties, knowledge and priorities.
- Structuring the problem by clarifying and highlighting key principles (e.g. the precautionary principle) and dilemmas, such as the balance between development and value creation on one side and protection on the other.
- Statistical data analysis to identify those hazards/threats that contribute the most to risk, and in this way guide the decision-making on where to most effectively reduce the risk.
- Risk assessments and in particular Quantified Risk Assessment (QRA) of alternative potential developments (for example technological arrangements and systems), to be able to compare the risk for these alternatives and relate them to possible criteria, and other concerns such as costs.
- Risk perception and related studies, providing insights about how different actors perceive the risk and what concerns they have regarding the risk and the potential consequences.
Precautionary principle
Few policies for risk management have created more controversy than the precautionary principle, and it is still being discussed; see for example Aven (2011b), Cox (2011), Lofstedt (2003), Sunstein (2005), Peterson (2006), Renn (2008) and Aldred (2013). Two common interpretations are (SRA 2015a):

- a principle expressing that if the consequences of an activity could be serious and subject to scientific uncertainties then precautionary measures should be taken or the activity should not be carried out
- a principle expressing that regulatory actions may be taken in situations where potentially hazardous agents might induce harm to humans or the environment, even if conclusive evidence about the potential harmful effects is not (yet) available.

Acknowledging the ideas of Figure 1, the principle has a rationale, as no method – quantitative risk analysis, cost-benefit analysis or decision theory – can prescribe what the
best risk management policy is in the face of scientific uncertainties. However, it does not
provide precise guidance on when it is applicable, as the judgement of what constitute
scientific uncertainties is subject to value judgements. If for example the scientific uncertainty
is related to the difficulty of establishing a prediction model for the consequences (Aven
2011b), subjective judgements are needed to decide when this is actually the case.

Much of the debate on this principle is due to different understandings of the fundamentals of the risk field, for example related to risk and uncertainties. If one studies the above references, it is evident that the risk field needs a stronger conceptual unity. From the perspective of the present author, a key point is the difference between the cautionary and precautionary principles (Aven 2011b). The former principle is broader than the precautionary principle, stating that if the consequences of an activity could be serious and subject to uncertainties, then cautionary measures should be taken or the activity should not be carried out; i.e. faced with risk, we should take action. This principle is used for all industries. For example, in the Norwegian oil and gas industry there is a requirement that the living quarters of an installation should be protected by fireproof panels of a certain quality for walls facing process and drilling areas. There are no scientific uncertainties in this case; the phenomena are well understood, yet measures are implemented which can be seen as justified on the basis of the cautionary principle. One knows that such fires can occur, and then people should be protected if they occur. Of course, the decision may not be so straightforward in other cases if the costs are very large. A risk assessment could then provide useful decision support, and, in line with the ideas on risk described in Section 3, weights should also be placed on the uncertainties. At the final stage the decision-makers need to find a balance between the costs and the benefits gained, including the weight to be given to the cautionary principle.

In view of the discussion in Section 2 (Figure 1) and the first part of this Section 5, proper risk management relies both on being risk-informed and on cautious (robust/resilient) policies. One of these pillars alone is not enough.

Robustness
CE

Considerable work has been conducted in recent years related to robustness in a context of risk and uncertainties; see e.g. Hites et al (2006), Baker et al (2008), Roy (2010), Klibi et al (2010), Joshi and Lambert (2011), Ben-Haim (2012), Fertis et al (2012), Gabrel et al (2014), and Malek et al (2015). Roy (2010) provides a review of research related to robustness. He uses the term 'robust' as an adjective referring to a capacity for withstanding "vague approximations" and/or "zones of ignorance" in order to prevent undesirable impacts, notably the degradation of the properties to be maintained. In this view, the research dealing with robustness seeks to ensure this capacity as much as possible. Robustness is related to a process that responds to a concern: the need for a capacity for resistance or self-protection. Gabrel et al (2014) present a review of recent developments in the field of robust optimisation, seeking to find the best policies when parameters are uncertain or ambiguous. Ben-Haim (2012) provides some overall perspectives on tools and concepts of optimisation in decision-making, design, and planning, related to risk. The author argues that, in decisions under uncertainty, what should be optimised is robustness rather than performance; the strategy of satisficing rather than optimising. Joshi and Lambert (2011) present an example of a "robust management strategy" using diversification of engineering infrastructure investments, and Klibi et al (2010) discuss robustness of supply chain networks under uncertainty.

Gabrel et al (2014) underline some of the challenges of robust optimisation. They write:

At a high level, the manager must determine what it means for him to have a robust solution: is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters? or whose objective value must be guaranteed? or whose distance to optimality must be guaranteed? The main paradigm relies on worst-case analysis: a solution is evaluated using the realization of the uncertainty that is most unfavorable. The way to compute the worst case is also open to debate: should it use a finite number of scenarios, such as historical data, or continuous, convex uncertainty sets, such as polyhedra or ellipsoids? The answers to these questions will determine the formulation and the type of the robust counterpart. Issues of overconservatism are paramount in robust optimization, where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty, i.e., neither too small nor too large. (Gabrel et al 2014)

Aven and Hiriart (2013) illustrate some of these points. Using a simple investment model, it is demonstrated that there are a number of ways the robust analysis can be carried out – none can be argued to be more natural and better than the others. This points to the need for a cautious policy in making conclusions on what is the best decision, with reference to one particular robustness scheme. It is concluded that there is a necessity to see the robustness analyses as nothing more than decision support tools that need to be followed up with a managerial review and judgement. It is underlined that such analyses should be supplemented with sensitivity analyses showing the optimal investment levels for various parameter values, followed by qualitative analyses providing arguments supporting the different parameter values.
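
The point that different robustness schemes can point to different conclusions can be illustrated with a toy Python example; the two alternatives, the scenario payoffs and the scenario probabilities are invented and have no connection to the model in Aven and Hiriart (2013).

# Toy illustration (invented numbers): the ranking of decision alternatives can
# change with the criterion chosen, so such analyses are decision support only.
alternatives = {                       # payoff of each investment level under three scenarios
    "low investment":  [4.0, 4.0, 3.5],
    "high investment": [9.0, 5.0, 0.0],
}
probabilities = [0.5, 0.3, 0.2]        # assumed scenario probabilities (epistemic, debatable)

for name, payoffs in alternatives.items():
    worst = min(payoffs)                                          # worst-case (maximin) criterion
    expected = sum(p * v for p, v in zip(probabilities, payoffs)) # expected-value criterion
    print(f"{name:16} worst-case={worst:4.1f} expected={expected:4.2f}")

# Here "low investment" wins on the worst-case criterion while "high investment"
# wins on expected payoff; sensitivity analysis over the scenario set and the
# probabilities is needed before drawing any conclusion.
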
Resilience
Resilience-type strategies play a key role in meeting risk, uncertainties, and potential surprises. The level of resilience for a system or organisation is linked to the ability to sustain or restore its basic functionality following a stressor. A resilient system has the ability to (Hollnagel et al 2006):
- respond to regular and irregular threats in a robust yet flexible (adaptive) manner,
- monitor what is going on, including its own performance,
- anticipate risk events and opportunities,
- learn from experience.

Through a mix of alertness, quick detection, and early response, the failures can be avoided.
Considerable work has been conducted on this topic in recent years; see for example Weick
and Sutcliffe (2007), Lundberg and Johansson (2015), Sahebjamnia et al (2015), Patterson
and Wears (2015), Righi et al (2015) and Bergström et al (2015). The Weick and Sutcliffe
reference addresses the concept of collective mindfulness, linked to High Reliability
Organisations (HROs), with its five principles: preoccupation with failure, reluctance to
simplify, sensitivity to operations, commitment to resilience and deference to expertise. There
is a vast amount of literature (see e.g. Weick et al 1999, Le Coze 2013, Hopkins 2014)
providing arguments for organisations to organise their efforts in line with these principles in order to obtain high performance (high reliability) and effectively manage risks, the unforeseen
unforeseen and potential surprises.

According to Righi et al (2015), resilience engineering supports studies in risk assessments, identification and classification of resilience, training and accident analysis, and it should be seen in relation to the theory of complex systems. Bergström et al (2015) confirm this and argue that "resilience engineering scholars typically motivate the need for their studies by referring to the inherent complexities of modern socio-technical systems; complexities that make these systems inherently risky. The object of resilience then becomes the capacity to adapt to such emerging risks in order to guarantee the success of the inherently risky system" (Bergström et al 2015). Although resilience is a generic term, it is most used in the safety domain, whereas robustness is most commonly referred to in business and operational research contexts.

Traditional risk assessments are based on causal chains and event analysis, failure reporting and risk assessments, calculating historical data-based probabilities. This approach has strong limitations in analysing complex systems, as it treats the system as being composed of components with linear interactions, using methods like fault trees and event trees, and has mainly a historical failure data perspective. These problems are addressed in resilience engineering, which argues for more appropriate models and methods for such systems; see e.g. Hollnagel et al (2006). Alternative methods have been developed, of which FRAM and STAMP are among the most well-known (Hollnagel 2004, Leveson 2004, 2011). At first glance, resilience engineering seems to be in conflict with risk management as it rejects the traditional risk assessments, but there is no need for such a conflict. With sufficiently broad risk management frameworks, the resilience dimension is a part of risk management, as was highlighted at the beginning of this section, and discussed for example by Steen and Aven (2011) and Aven (2015b). The latter reference relates to the antifragility concept of Taleb (2012), which builds on and extends the resilience concept. The key message of Taleb is that to obtain top performance over time one has to acknowledge and even "love" some level of variation, uncertainty and risk. Taleb (2012, p. 4-5) proposes "to stand current approaches to prediction, prognostication, and risk management on their heads". However, as discussed above, there is no conflict here if risk and risk management are sufficiently framed and conceptualised. Proper risk management needs to incorporate these ways of thought, which relate risk to performance and improvement processes over time.

Large/deep uncertainties
The above analysis covers in particular situations characterised by large or deep uncertainties, such as preparing for climate change and managing emerging diseases. What policies and decision-making schemes should be implemented in such cases? Traditional statistical methods and techniques are not suitable, as relevant supporting models cannot easily be justified and relevant data are missing. The answer, as discussed above, lies in cautionary/precautionary, robust and resilient approaches and methods.

Cox (2012) reviews and discusses such approaches and methods to meet deep uncertainties. He argues that the robust and adaptive methods provide genuine breakthroughs for improving predictions and decisions in such cases. Ten tools that “can help us to better understand deep uncertainty and make decisions even when correct models are unknown” are looked into: (subjective) expected utility theory; multiple priors, models or scenarios, robust control, robust decisions; robust optimisation; average models; resampling; adaptive boosting; Bayesian model averaging; low regret online detection; reinforcement learning; and model-free reinforcement learning. These tools are founded on two strategies: “finding robust decisions that work acceptably well for many models (those in the uncertainty set); and adaptive risk management, or learning what to do by well-designed and analysed trial and error” (Cox 2012).
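The first of these strategies can be illustrated with a small sketch. The Python code below (hypothetical models, decisions and numbers, not taken from Cox 2012) screens a few candidate risk-reducing decisions against an uncertainty set of candidate probability models and reports the worst-case expected cost of each, which is the kind of robustness check the quoted strategy describes.

# A minimal sketch of robustness screening across an uncertainty set of models,
# rather than optimising against a single "correct" model. All values are
# hypothetical and chosen only for illustration.
candidate_models = {            # each candidate model gives an assumed event probability
    "optimistic": 0.01,
    "baseline": 0.05,
    "pessimistic": 0.20,
}
decisions = {                   # (upfront cost, loss if the event occurs)
    "do nothing": (0.0, 100.0),
    "partial barrier": (3.0, 40.0),
    "full barrier": (8.0, 10.0),
}

def expected_total_cost(decision, p):
    cost, loss = decisions[decision]
    return cost + p * loss

for d in decisions:
    per_model = {m: expected_total_cost(d, p) for m, p in candidate_models.items()}
    worst = max(per_model.values())
    print(f"{d:16s} worst-case expected cost = {worst:5.1f}  {per_model}")
# A robust choice is one whose worst case is acceptable across all candidate
# models; the adaptive strategy would instead start with one option and revise
# it as monitoring data arrive.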

Adaptive analysis is based on the acknowledgement that one best decision cannot be made; rather, a set of alternatives should be dynamically tracked to gain information and knowledge about the effects of different courses of action. On an overarching level, the basic process is straightforward: one chooses an action based on broad considerations of risk and other aspects, monitors the effect, and adjusts the action based on the monitored results (Linkov et al 2006). In this way we may also avoid extreme events. See also Pettersen (2013), who discusses abductive thinking, which is closely linked to adaptive analysis.
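A minimal sketch of such an adaptive loop is given below, assuming a simple Beta-Bernoulli updating of the event probability and an illustrative action threshold (neither taken from Linkov et al 2006): an action is chosen, monitoring results are used to update the assessment, and the action is adjusted when the updated estimate crosses the threshold.

# A minimal sketch of the choose-monitor-adjust process described above, not a
# specific published method. All numbers are illustrative assumptions.
import random

random.seed(1)
alpha, beta = 1.0, 9.0          # prior belief: event probability around 0.1
true_p = 0.25                   # unknown "real world" rate used to simulate monitoring
action = "baseline controls"

for period in range(1, 11):
    events = sum(random.random() < true_p for _ in range(20))   # monitoring results
    alpha, beta = alpha + events, beta + (20 - events)          # update the belief
    p_hat = alpha / (alpha + beta)
    # adjust the course of action in light of the monitored results
    if p_hat > 0.15 and action == "baseline controls":
        action = "strengthened controls"
    elif p_hat < 0.05 and action == "strengthened controls":
        action = "baseline controls"
    print(f"period {period:2d}: estimated p = {p_hat:.3f}, action = {action}")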

Aven (2013b) provides reflections on some of the foundational pillars on which the work by Cox (2012) is based, including the meaning of the concept of deep uncertainty. He also provides some perspectives on the boundaries and limitations of analytical approaches for supporting decision-making in the case of such uncertainties, highlighting the need for managerial review and judgement, as was discussed in Section 2.

For some alternative perspectives on how to meet deep uncertainties, see Karvetski and
Lambert (2012) and Lambert et al. (2012), who seek to turn the conventional robustness
discussion away from its urgency to know which action is most robust and towards
identifying which are the uncertainties that matter most, which matter least, which present
opportunities, and which present threats, and why. See also Hamilton et al (2015).

Surprises and black swans
Taleb (2007) made the black swan metaphor well known, and it is widely used today. His work has inspired many authors, also on foundational issues (e.g. Chichilnisky 2013, Feduzi and Runde 2014, Masys 2012, Aven 2015a), and recently there has been a lively discussion about the meaning of the black swan metaphor and its use in risk management; see Haugen and Vinnem (2015) and Aven (2015d,e). The metaphor has created a huge interest in risk, in particular among lay persons. It has also increased the focus in the professional risk analysis community on risk, knowledge and surprises. Different types of black swans have been defined and measures to meet them discussed (e.g. Paté-Cornell 2012, Aven and Krohn 2014, Aven 2015d). But it is just a metaphor and cannot replace the need for conceptual precision linked to terms such as ‘risk’, ‘probability’ and ‘knowledge’. As highlighted by Aven (2015b), the basic idea of addressing black swans is to obtain a stronger focus on issues not covered by the traditional risk perspectives, which highlight historical data, probabilities and expected values (the world of Mediocristan in Taleb’s terminology). Surprises do occur relative to the beliefs determined by these measures and concepts. More focus is needed on the world outside Mediocristan, what Taleb refers to as Extremistan. Approaches to meet the potential surprises and black swans include improved risk assessments that better capture the knowledge dimension, and adaptive and resilient (antifragile) thinking and analysis, as discussed in the references mentioned in this paragraph.
Risk criteria
Risk management is about balancing different concerns, such as profits, safety and reputation. In general one considers a set of alternatives, evaluates their pros and cons, and makes a decision that best meets the decision-makers’ values and priorities. In this process, it is common to introduce constraints, in particular related to safety aspects, to simplify the overall judgements and ensure a minimum level of performance in specific areas, and to avoid having to consider too many variables at the same time.

Such constraints are often referred to as risk criteria, risk acceptance criteria and tolerability criteria; see e.g. Rodrigues et al (2014) and Vanem (2012). For example, in Norway the petroleum regulations state that the operator has a duty to formulate risk acceptance criteria relating to major accidents and to the environment. This practice is in line with the internal control principle, which states that the operator has the full responsibility for identifying the hazards and seeing that they are controlled. This practice is, however, debated, and in a recent paper Abrahamsen and Aven (2012) argue that it should be reconsidered. It is shown that if risk acceptance criteria are to be introduced as a risk management tool, they should be formulated by the authorities, as is the common practice in many countries and industries, for example in the UK. Risk acceptance criteria formulated by the industry would not in general serve the interest of society as a whole. The main reason is that an operator's activity will usually cause negative externalities to society (an externality is an economically significant effect due to the activities of an agent/firm that does not influence the agent's/firm's production, but which influences other agents' decisions). The increased losses for society imply that society wants to adopt stricter risk acceptance criteria than those an operator finds optimal in its private optimization problem. Expected utility theory, which is the backbone of most economic thinking, is used as a basis for the discussion.
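The externality argument can be illustrated numerically. The sketch below, with an assumed risk-reduction function and assumed loss figures (not taken from Abrahamsen and Aven 2012), compares the safety investment that minimises the operator's private expected cost with the investment that minimises the total expected cost to society; the societal optimum implies a stricter accepted accident probability.

# A minimal numerical sketch of the externality argument, with illustrative
# numbers only: the operator optimises its private expected cost, while society
# also bears an external loss, so the societal optimum requires more investment.
import numpy as np

safety_levels = np.linspace(0.0, 20.0, 201)     # candidate safety investments s

def accident_prob(s):
    return 0.05 * np.exp(-0.15 * s)             # assumed risk-reduction effect

private_loss = 200.0                            # loss borne by the operator
external_loss = 300.0                           # additional loss borne by society

operator_cost = safety_levels + accident_prob(safety_levels) * private_loss
societal_cost = safety_levels + accident_prob(safety_levels) * (private_loss + external_loss)

s_operator = safety_levels[np.argmin(operator_cost)]
s_society = safety_levels[np.argmin(societal_cost)]
print(f"operator's optimum: s = {s_operator:.1f}, accepted p = {accident_prob(s_operator):.4f}")
print(f"societal optimum:   s = {s_society:.1f}, accepted p = {accident_prob(s_society):.4f}")
# The societal optimum implies a higher investment and hence a lower accident
# probability than the operator would accept in its private optimisation.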

The critique of the use of such criteria also covers other aspects; see e.g. Aven (2015a). Firstly, tolerability or acceptance levels expressed through probability ignore important aspects of risk, as discussed in Sections 3 and 4. A key point is that the strength of the knowledge on which the probability judgements are based is not reflected in the probabilities used for comparison with these levels. Secondly, the use of such criteria can easily lead to the wrong focus, namely meeting the criteria rather than finding the best possible solutions and measures, taking into account the limitations of the analysis, uncertainties not reflected by the analysis, and other concerns important for the decision-making. As strongly highlighted by for example Apostolakis (2004), a risk decision should be risk-informed, not risk-based. There is always a need for managerial review and judgement, as indicated by Figure 1.
The ALARP principle (ALARP: As Low As Reasonably Practicable) is a commonly adopted risk-reduction principle, based on both risk-informed and cautionary/precautionary thinking. The principle is founded on the idea of gross disproportion and states that a risk-reducing measure shall be implemented unless it can be demonstrated that the costs are in gross disproportion to the benefits gained. The principle’s practical implementation is still a matter of discussion and research; see e.g. Ale et al (2015), French et al (2005), Melchers (2001), Vinnem et al (2006) and Jones-Lee and Aven (2011). It is tempting to use cost-benefit analysis, calculating expected net present values or expected costs per expected saved life, to verify the gross disproportion criterion. This is commonly done, but should be done with care, as these cost-benefit criteria do not adequately reflect the uncertainty component of risk (Aven and Abrahamsen 2007). Consequently, uncertainty assessments extending beyond the cost-benefit analyses are needed, supplemented with broader processes, as discussed for example by Aven and Vinnem (2007).
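The following sketch shows the type of calculation involved, using assumed figures and an assumed gross disproportion factor for illustration only: the expected cost per expected life saved is compared with a reference value scaled by the factor. As noted above, such a check should only inform, not replace, the broader judgement.

# A minimal sketch of the cost-benefit check often used in ALARP demonstrations.
# All figures and the factor are assumptions for illustration; the check does not
# reflect the strength of knowledge behind the probabilities used.
measure_cost = 4.0e6                 # cost of the risk-reducing measure (assumed)
expected_lives_saved = 0.8           # expected fatalities averted over the period (assumed)
value_of_statistical_life = 3.0e6    # assumed reference value
gross_disproportion_factor = 3.0     # assumed factor expressing "gross" disproportion

cost_per_life_saved = measure_cost / expected_lives_saved
threshold = gross_disproportion_factor * value_of_statistical_life

print(f"expected cost per expected life saved: {cost_per_life_saved:,.0f}")
print(f"threshold (factor x reference value):  {threshold:,.0f}")
if cost_per_life_saved <= threshold:
    print("costs not grossly disproportionate -> implement the measure")
else:
    print("costs grossly disproportionate -> gross disproportion may be argued")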

Integrative perspectives
Again we can see aspects of integrative thinking, the tension being caused by the different perspectives of traditional risk analysis, resilience and antifragility, leading to broader risk management frameworks incorporating all these elements. Several frameworks have been developed with such an aim, e.g. the risk frameworks of Renn (2008) and Aven and Krohn (2014). The former approach has a governance perspective and combines scientific evidence with economic considerations as well as social concerns and societal values. The latter framework builds on risk thinking as described in Section 3, focuses on knowledge building, transfer of experience and learning, and adds theories and practical insights from other fields specifically addressing the knowledge dimension. Three areas are given main attention: firstly, the collective mindfulness concept linked to High Reliability Organisations (HROs), with its five principles mentioned above. The second area relates to the quality discourse, with its focus on variation, system thinking and continuous improvements (Bergman and Klefsjö 2003, Deming 2000), while the third includes the concept of antifragility (Taleb 2012).

6 The future of risk assessment and management

The future of risk assessment and risk management is discussed in SRA (2015b) and Aven
and Zio (2014); see also reviews and reflections by Venkatasubramanian (2011), Pasman and
Reniers (2014) and Khan et al (2015).

A key challenge relates to the development of the risk field, as outlined in Section 2, with a focus on knowledge and lack-of-knowledge characterisations, instead of accurate risk estimation and prediction, to meet situations of large uncertainties. Today, risk assessments are well established in situations with considerable data and clearly defined boundaries for their use. Statistical and probabilistic tools have been developed and provide useful decision support for many types of applications. However, risk decisions are, to an increasing extent, about situations characterised by large uncertainties and emergence. Such situations call for different types of approaches and methods, and it is a main challenge for the risk field to develop suitable frameworks and tools for this purpose (SRA 2015b). There is a general research focus on dynamic risk assessment and management rather than static or traditional risk assessment.

The concept of emerging risk has gained increasing attention in recent years. Flage and Aven (2015) perform an in-depth analysis of the emerging risk concept and in particular its relation to black swan types of events through the known/unknown distinction. According to this work, we face emerging risk related to an activity when the background knowledge is weak but contains indications/justified beliefs that a new type of event (new in the context of that activity) could occur in the future and potentially have severe consequences for something humans value. The weak background knowledge inter alia results in difficulty specifying consequences and possibly also in fully specifying the event itself, i.e. in difficulty specifying scenarios.

We need to further develop risk assessments that are able to capture these challenges linked to
the knowledge dimension and the time dynamics. A pure probabilistic approach, for example
a Bayesian analysis, would not be feasible as the background knowledge – the basis for the
probability models and assignments – would be poor. There is a need to balance different risk
management strategies in an adaptive manner, including cautionary strategies and attention to
signals and warnings.
There is also a need for substantial research and development to obtain adequate modelling
and analysis methods – beyond the “traditional” ones – to “handle” different types of systems. Examples include critical infrastructures (e.g. electrical grids, transportation networks, etc.),
which are complex systems and often interdependent, i.e. “systems of systems”. Another
example is security-type applications, where qualitative assessments are often performed on
the basis of judgements of actors’ intentions and capacities, without reference to a probability
scale. There seems to be a huge potential for significant improvements in the way security is
assessed by developing frameworks that integrate the standard security approaches and ways
of assessing and treating uncertainty. The paper by Aven (2013c) provides an example of work in this direction.

Societal risk decision-making is more and more challenging; it is characterised by many and diverse stakeholders. Some of the challenges and research issues that need to be focused on here relate to, inter alia (SRA 2015b, Aven and Zio 2014):

- “how the outcomes of the risk and uncertainties assessment should be best described, visualised and communicated, for their informative use in the above described process of societal decision-making involving multiple and diverse stakeholders,
- how issues of risk acceptability need to be seen in relation to the measurement tools used to make judgements about risk acceptability, accounting for the value generating processes at the societal level,
- how the managerial review and judgement should be defined in this context.

Key issues that we need to address are:


- In intergenerational decision-making situations, what are the available frameworks and perspectives to be taken? What are other options? When are different frameworks more appropriate than others? How do we capture the key knowledge issues and uncertainties of the present and future? What duty of care do we owe to future generations?
- How can we describe and represent the results of risk assessments in a way that is useful to decision-makers, which clearly presents the assumptions made and their justification with respect to the knowledge upon which the assessment is based?
- How can we display risk information without misrepresenting what we know and do not know?
- How can we accurately represent and account for uncertainties in a way that properly justifies confidence in the risk results?
- How can we state how good expert judgements are, and how can we improve them?
- In the analysis of near-misses, how should we structure the multi-dimensional space of causal proximity among different scenarios in order to measure “how near is a miss to an actual accident”?”

The above list covers issues ranging from important features of risk assessment to overall
aspects concerning risk management and governance. It can obviously be extended. One
example to add is the link between sustainability and risk, which is an emerging research
topic; see e.g. Fahimnia et al (2015) and Giannakis and Papadopoulos (2015).
7 Conclusions
Risk assessment and risk management are established as a scientific field and provide
important contributions in supporting decision-making in practice. Basic principles, theories
and methods exist and are developing. This review paper has placed its focus on recent work
and advances covering the fundamental ideas and thinking on which the risk field is based.


Having evaluated a considerable number of papers in this area, the following main
conclusions are drawn:
1. The scientific foundation of risk assessment and risk management is still somewhat
shaky on some issues, in the sense that both theoretical work and practice rely on
perspectives and principles that could seriously misguide decision-makers. Examples
include the general conception of risk as an expected value or a probability
distribution.
2. In recent years several attempts at integrative research have been conducted, establishing broader perspectives on the conceptualisation, assessment and management of risk. The present author sees this way of thinking as essential for developing the risk field and obtaining a strong unifying scientific platform for this field. These perspectives relate to:
- Concepts and terms, like risk, vulnerability, probability, etc.
- The emphasis on knowledge and lack of knowledge descriptions and characterisations in risk assessments
- The way uncertainty is treated in risk assessments
- The way the risk thinking is combined with principles and methods of robustness and resilience
- The acknowledgement of managerial review and judgement in risk management

3. There are signs of a revitalisation of the interest in foundational issues in risk assessment and management, which is welcomed and necessary for meeting the challenges the risk field now faces, related to societal problems and complex technological and emerging risks.

It is hoped that the present review and discussion can inspire more researchers to take part in this work, building a stronger platform for risk assessment and management, meeting current and future challenges, in particular related to situations of large/deep uncertainties and emerging risks. The risk field needs more researchers who have the passion and enthusiasm to bring this field to the next level.

Acknowledgements
The author is grateful to four anonymous reviewers for their useful comments and suggestions on the original version of this paper. The work has been partly funded by the Norwegian Research Council as a part of the Petromaks 2 program (grant number 228335/E30). The support is gratefully acknowledged.

References
Abrahamsen, E. and Aven, T. (2012) Why risk acceptance criteria need to be defined by the
authorities and not the industry. Reliability Engineering and System Safety, 105, 47-50.
Aldred, J. (2013) Justifying precautionary policies: Incommensurability and uncertainty.
Ecological Economics, 96, 132-140.


Ale, B.J.M., Hartford, D.N.D. and Slater, D. (2015) ALARP and CBA all in the same game.
Safety Science, 76, 90-100.
Althaus, C.E. (2005) A disciplinary perspective on the epistemological status of risk. Risk
Analysis, 25(3), 567-88.
Althaus, C., Bridgman, P. and Davis, G. (2007) The Australian Policy Guidance, 4th ed.
Sydney: Allen & Unwin.
Apostolakis, G.E. (2004) How useful is quantitative risk assessment? Risk Analysis, 24, 515-
520.
Aven, T. (2010) On the need for restricting the probabilistic analysis in risk assessments to
variability. Risk Analysis, 30(3), 354-360. With discussion 381-384.
Aven, T. (2011a) Quantitative Risk Assessment. The Scientific Platform. Cambridge:

Cambridge University Press.
Aven, T. (2011b) On different types of uncertainties in the context of the precautionary principle. Risk

Analysis, 31(10), 1515-1525. With discussion 1538-1542.

Aven, T. (2012) The risk concept – historical and recent development trends. Reliability

Engineering and System Safety, 99, 33-44.
Aven, T. (2013a) How to define and interpret a probability in a risk and safety setting.
Discussion paper, with general introduction by Associate Editor, Genserik Reniers. Safety
Science, 51, 223-231.
Aven, T. (2013b) On how to deal with deep uncertainties in a risk assessment and
management context. Risk Analysis, 33(12), 2082-91.
Aven, T. (2013c) Probabilities and background knowledge as a tool to reflect uncertainties in
relation to intentional acts. Reliability Engineering and System Safety, 119, 229-234.
Aven, T. (2014) What is safety science? Safety Science, 67, 15-20.
Aven, T. (2015a) Risk Analysis, 2nd ed. Chichester: Wiley.
Aven, T. (2015b) The concept of antifragility and its implications for the practice of risk
analysis. Risk Analysis, 35(3), 476-83.
Aven, T. (2015c) On the allegations that small risks are treated out of proportion to their
importance. Reliability Engineering and System Safety, 140, 116-121. Open access.
Aven, T. (2015d) Implications of black swans to the foundations and practice of risk
assessment and management. Reliability Engineering and System Safety, 134, 83-91. Open
Access.
Aven, T. (2015e) Comments to the short communication by Jan Erik Vinnem and Stein
Haugen titled “Perspectives on risk and the unforeseen”. Reliability Engineering and
System Safety, 137, 69-75.


Aven, T. and Abrahamsen, E.B. (2007) On the use of cost-benefit analysis in ALARP processes.
International Journal of Performability Engineering, 3, 345-353.
Aven, T., Baraldi, P., Flage, R. and Zio, E. (2014) Uncertainties in Risk Assessments. Chichester: Wiley.

Aven, T. and Cox, T. (2015) Virtual issue foundations of risk analysis.


http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1539-
6924/homepage/special_issue__foundations_of_risk_analysis.htm. Accessed 19 August 2015.
Aven, T. and Heide, B. (2009) Reliability and validity of risk analysis. Reliability Engineering and
System Safety, 94, 1862-1868.

Aven, T. and Hiriart, Y. (2013) Robust optimization in relation to a basic safety investment model with
imprecise probabilities. Safety Science, 55, 188-194.


Aven, T. and Krohn, B.S. (2014) A new perspective on how to understand, assess and manage
risk and the unforeseen. Reliability Engineering and System Safety, 121, 1-10.
Aven, T. and Nøkland, T.E. (2010) On the use of uncertainty importance measures in
reliability and risk analysis. Reliability Engineering and System Safety, 95, 127-133.
Aven, T. and Renn, O. (2009) On risk defined as an event where the outcome is uncertain.
Journal of Risk Research, 12, 1-11.
Aven, T. and Renn, O. (2015) An evaluation of the treatment of risk and uncertainties in the
IPCC reports on climate change. Risk Analysis, 35(4), 701-712. * Open access.
Aven, T. and Vinnem, J.E. (2007) Risk Management. NY: Springer Verlag.
Aven, T. and Zio, E. (2013) Model output uncertainty in risk assessment. International
Journal of Performability Engineering, 9(5), 475-486.

Aven, T. and Zio, E. (2014) Foundational issues in risk analysis. Risk Analysis, 34(7), 1164-
1172.

Baker, J.W., Schubert, M. and Faber, M.H. (2008) On the assessment of robustness.
Structural Safety, 30(3), 253-267.

Banks, E. and Dunn, R. (2003) Practical Risk Management. Chichester: Wiley.
Baraldi, P., Zio, E. and Compare, M. (2009) A method for ranking components importance in
presence of epistemic uncertainties. Journal of Loss Prevention in the Process Industries,
22, 582-592.

Baudrit, C., Guyonnet, D., Dubois, D. (2006) Joint propagation and exploitation of
probabilistic and possibilistic information in risk assessment. IEEE Transactions on Fuzzy
Systems, 14, 593-608.
Bedford, T. and Cooke, R. (2001) Probabilistic Risk Analysis. Cambridge: Cambridge
University Press.
Ben-Haim, Y. (2012) Doing our best: optimization and the management of risk. Risk
Analysis, 32(8), 1326-31.
Bergman, B. and Klefsjö, B. (2003) Quality, 2nd ed. Lund, Sweden: Studentlitteratur.
Bergström, J., van Winsen, R. and Henriqson, E. (2015) On the rationale of resilience in the
domain of safety: A literature review. Reliability Engineering and System Safety, 141,
131-141.
Bernstein, P.L. (1996) Against the Gods: The Remarkable Story of Risk. New York: John
Wiley & Sons.
Bjerga, T., Aven, T. and Zio, E. (2014) An illustration of the use of an approach for treating
model uncertainties in risk assessment. Reliability Engineering and System Safety, 134,
75-82.
Blackhurst, J. and Wu, T. (2009) Managing Supply Chain Risk and Vulnerability: Tools and
Methods for Supply Chain Decision Makers. New York: Springer Publishing.
Borgonovo, E. (2006) Measuring uncertainty importance: Investigation and comparison of
alternative approaches. Risk Analysis, 26(5), 1349-1362.
Borgonovo, E. (2007) A new uncertainty importance measure. Reliability Engineering and


System Safety, 92(6), 771-784.
Borgonovo, E. and Plischke, E. (2015) Sensitivity analysis: A review of recent advances.
European Journal of Operational Research, 000, 1-19.
Brandenburg, M., Govindan, K., Sarkis, J. and Seuring, S. (2014) Quantitative models for
sustainable supply chain management: Developments and directions. European Journal of
Operational Research, 233, 299-312.
Brandtner, M. (2013) Conditional value-at-risk, spectral risk measures and (non-)
diversification in portfolio selection problems – A comparison with mean–variance
analysis. Journal of Banking and Finance, 37, 5526-5537.


Chichilnisky, G. (2013) The foundations of statistics with black swans. Mathematical Social
Sciences, 59, 184-192.
Covello, V. T. and Mumpower, J. (1985) Risk analysis and risk management: An historical perspective.
Risk Analysis, 5(2), 103-119.

Cox, L.A.T. (2012) Confronting deep uncertainties in risk analysis. Risk Analysis, 32, 1607-
1629.
Cox, T. (2011) Clarifying types of uncertainty: When are models accurate, and uncertainties
small? Risk Analysis, 31, 1530-33.
Cumming, R.B. (1981) Is risk assessment a science? Risk Analysis, 1, 1-3.

Deming, W.E. (2000) The New Economics, 2nd ed. Cambridge, MA: MIT CAES.

Droguett, E.L. and Mosleh, A. (2013) Integrated treatment of model and parameter
uncertainties through a Bayesian approach. Journal of Risk and Reliability, 227(1), 41-54.

Droguett, E.L. and Mosleh, A. (2014) Bayesian treatment of model uncertainty for partially
applicable models. Risk Analysis, 34(2), 252-270.

Dubois, D. (2010) Representation, propagation and decision issues in risk analysis under
incomplete probabilistic information. Risk Analysis, 30, 361-368.
Fahimnia, B., Tang, C.S., Davarzani, H. and Sarkis, J. (2015) Quantitative models for managing supply chain risks: A review. European Journal of Operational Research, 247, 1-15.

Feduzi, A. and Runde, J. (2014) Uncovering unknown unknowns: Towards a Baconian approach to management decision-making. Organizational Behavior and Human Decision Processes, 124, 268-283.
Fertis, A., Baes, M. and Lüthi, H-J. (2012) Robust risk management. European Journal of
Operational Research, 222, 663-672.
Flage, R. and Aven, T. (2009) Expressing and communicating uncertainty in relation to
quantitative risk analysis (QRA). Reliability and Risk Analysis: Theory and Applications,
2(13), 9-18.
Flage, R. and Aven, T. (2015) Emerging risk – conceptual definition and a relation to black
swan types of events. Reliability Engineering and System Safety, 144, 61-67.
Flage, R., Aven, T., Baraldi, P., and Zio, E. (2014) Concerns, challenges and directions of development
for the issue of representing uncertainty in risk assessment. Risk Analysis, 34(7), 1196-1207.
French, S., Bedford, T. and Atherton, E. (2005) Supporting ALARP decision making by cost benefit
analysis and multiattribute utility theory. Journal of Risk Research, 8(3), 2017-223.
Funtowicz, S.O. and Ravetz, J.R. (1990) Uncertainty and Quality in Science for Policy. Dordrecht:
Kluwer Academic Publishers.

Funtowicz, S.O. and Ravetz, J.R. (1993) Science for the post-normal age. Futures, 25, 735-755.
Gabrel, V., Murat, C. and Thiele, A. (2014) Recent advances in robust optimization: An overview.
European Journal of Operational Research, 235, 471-483.

Giannakis, M. and Papadopoulos, T. (2015) Supply chain sustainability: A risk management approach.
International Journal of Production Economics, doi:10.1016/j.ijpe.2015.06.032.

Gilboa, I. and Marinacci, M. (2013) Ambiguity and the Bayesian paradigm. In D. Acemoglu, M.
Arellano, & E. Dekel (Eds.), Advances in Economics and Econometrics: Theory and Applications.
Cambridge: Cambridge University Press.


Goerlandt, F. and Montewka, J. (2015) Maritime transportation risk analysis: Review and
analysis in light of some foundational issues. Reliability Engineering and System Safety, 138, 115-
134.

Hale, A. (2014) Foundations of safety science: A postscript. Safety Science, 67, 64-69.

Hamilton, M.C., Lambert, J.H. and Valverde, J. (2015) Climate and related uncertainties influencing
research and development priorities. Journal of Risk and Uncertainty in Engineering Systems. Part A:
Civil Engineering, 1(2). 10.1061/AJRUA6.0000814, 04015005.

Hansson, S.O. (2013) Defining pseudoscience and science. In Pigliucci, M., Boudry, M. Philosophy of
Pseudoscience. Chicago: University of Chicago Press, pp. 61-77.

Hansson, S.O. and Aven, T. (2014) Is risk analysis scientific? Risk Analysis, 34(7), 1173-1183.

Haugen, S. and Vinnem, J.E. (2015) Perspectives on risk and the unforeseen. Reliability
Engineering and System Safety, 137, 1-5.

Heckmann, I., Comes, T. and Nickel, S. (2015) A critical review on supply chain risk—
Definition, measure and modeling. Omega, 52, 119-132.
Henley, E.J. and Kumamoto, H. (1981) Reliability Engineering and Risk Assessment.
London: Prentice Hall.
Hertz, D.B. and Thomas, H. (1983) Risk Analysis and its Applications. Chichester: Wiley.
Hites, R., De Smet, Y., Risse, N., Salazar-Neumann, M. and Vincke, P. (2006) About the
applicability of MCDA to some robustness problems. European Journal of Operational
Research, 174, 322-332.
Hollnagel, E. (2004) Barriers and Accident Prevention. Aldershot, UK: Ashgate.
Hollnagel, E. (2014) Is safety a subject for science? Safety Science, 67, 21-24.
Hollnagel, E., Woods, D. and Leveson, N. (2006) Resilience Engineering: Concepts and
Precepts. UK: Ashgate.
Hopkins, A. (2014) Issues in safety science. Safety Science, 67, 6-14.
ISO (2009a) Risk Management – Vocabulary. Guide 73:2009.
ISO (2009b) Risk Management – Principles and Guidelines. ISO 31000:2009.


Jones-Lee, M. and Aven, T. (2011) ALARP—What does it really mean? Reliability
Engineering and System Safety, 96, 877-882.
Joshi, N.N. and Lambert, J. H. (2011) Diversification of infrastructure projects for emergent
and unknown non‐systematic risks. Journal of Risk Research, 14, 717-33.
Jüttner, U., Peck, H. and Christopher, M. (2003) Supply chain risk management: Outlining an
agenda for future research. International Journal of Logistics Research and Applications, 6,
197-210.
Kaplan, S. and Garrick, B.J. (1981) On the quantitative definition of risk. Risk Analysis, 1,
11-27.
Karvetski, C.W. and Lambert, J.H. (2012) Evaluating deep uncertainties in strategic priority-
setting with an application to facility energy investments. Systems Engineering, 15(4),
483-493.
Khan, F., Rathnayaka, S. and Ahmed, S. (2015) Methods and models in process safety and
risk management: Past, present and future. Process Safety and Environmental Protection,
98, 116-147.
Klibi, W., Martel, A. and Guitouni, A. (2010) The design of robust value-creating supply
chain networks: A critical review. European Journal of Operational Research, 203(2), 283-
293.
Klinke, A. and Renn, O. (2002) A new approach to risk evaluation and management: risk-
based precaution-based and discourse-based strategies. Risk Analysis, 22(6), 1071-94.


Kloprogge, P., van der Sluijs, J. and Petersen, A. (2005) A Method for the Analysis of
Assumptions in Assessments. Bilthoven, The Netherlands: Netherlands Environmental
Assessment Agency (MNP).
Kloprogge, P., van der Sluijs, J.P. and Petersen, A.C. (2011) A method for the analysis of
assumptions in model-based environmental assessments. Environmental Modelling and
Software, 26, 289-301.
Laes, E., Meskens, G., and van der Sluijs, J.P. (2011) On the contribution of external cost
calculations to energy system governance: The case of a potential large-scale nuclear
accident. Energy Policy, 39, 5664-5673.
Lambert, J., Karvetski, C., Spencer, D., Sotirin, B., Liberi, D., Zaghloul, H., Koogler, J.,
Hunter, S., Goran, W., Ditmer, R. and Linkov, I. (2012) Prioritizing infrastructure

investments in Afghanistan with multiagency stakeholders and deep uncertainty of
emergent conditions. Journal of Infrastructure Systems, 18(2), 155-166.

Le Coze, J-C. (2013) Outlines of a sensitising model for industrial safety assessment. Safety
Science, 51, 187-201.

Le Coze, J-C., Pettersen, K. and Reiman, T. (2014) The foundations of safety science. Safety
Science, 67, 1-5.
Leveson, N. (2004) A new accident model for engineering safer systems. Safety Science,
42(4), 237-270.

Leveson, N. (2011) Engineering a Safer World. Cambridge: The MIT Press.
Lindley, D.V. (2006) Understanding Uncertainty. Hoboken, NJ: Wiley.
Linkov, I., Satterstrom, F., Kiker, G., Batchelor, C., Bridges, T. and Ferguson, E. (2006) From
comparative risk assessment to multi-criteria decision analysis and adaptive management:
Recent developments and applications. Environment International, 32, 1072-1093.
Lofstedt, R.E. (2003) The precautionary principle: risk, regulation and politics. Process
Safety and Environmental Protection, 81(1), 36-43
Lundberg, J. and Johansson, B.J.E (2015) Systemic resilience model. Reliability Engineering
and System Safety 141, 22-32.
Malek, R., Baxter, B. and Hsiao, C. (2015) A decision-based perspective on assessing system
robustness. Procedia Computer Science, 44, 619-629.


Martin, R. (2009) The Opposable Mind. Boston: Harvard Business Press.
Masys, A.J. (2012) Black swans to grey swans: revealing the uncertainty. Disaster Prevention
and Management, 21(3) 320-335.
Melchers, R. (2001) On the ALARP approach to risk management. Reliability Engineering


and System Safety, 71(2), 201-208.
Meyer, T. and Reniers, G. (2013) Engineering Risk Management. Berlin: De Gruyter
Graduate.
Mitra, S., Karathanasopoulos, A., Sermpinis, G., Christian, D. and Hood, J. (2015)
Operational risk: Emerging markets, sectors and measurement. European Journal of
Operational Research, 241, 122-132.


Natarajan, K., Pachamanova, D. and Sim, M. (2009) Constructing risk measures from
uncertainty sets. Operations Research, 57(5), 1129-41.
Park, I., Amarchinta, H.K. and Grandhi, R.V. (2010) A Bayesian approach for quantification
of model uncertainty. Reliability Engineering and System Safety, 95, 777-785.
Pasman, H. and Reniers, G. (2014) Past, present and future of Quantitative Risk Assessment
(QRA) and the incentive it obtained from Land-Use Planning (LUP). Journal of Loss
Prevention in the Process Industries, 28, 2-9.
Paté-Cornell, M.E. (2012) On black swans and perfect storms: risk analysis and management
when statistics are not enough. Risk Analysis, 32(11), 1823-1833.
Patterson, M.D. and Wears, R.L. (2015) Resilience and precarious success. Reliability
Engineering and System Safety, 141, 45-53.


Peterson, M. (2006) The precautionary principle is incoherent. Risk Analysis 26(3), 595-601.
Peck, H. (2006) Reconciling supply chain vulnerability, risk and supply chain management.
International Journal of Logistics Research and Applications, 9, 127-142.
Pettersen, K.A. (2013) Acknowledging the role of abductive thinking: A way out of
proceduralization for safety management and oversight? In Bieder, C. and Bourrier, M.
(eds.) Trapping Safety into Rules. How Desirable or Avoidable is Proceduralization?
Surrey, UK: Ashgate.
PSA-N (2015) Petroleum Safety Authority Norway.
http://www.psa.no/framework/category408.html#_Toc407544820. Accessed 26 August
2015.
Rechard R.P. (1999) Historical relationship between performance assessment for radioactive

waste disposal and other types of risk assessment. Risk Analysis, 19(5), 763-807.
Rechard R.P. (2000) Historical background on performance assessment for the waste isolation

pilot plant. Reliability Engineering and System Safety, 69(3), 5-46.
Reinert, J.M. and Apostolakis, G. (2006) Including model uncertainty in risk-informed

decision making. Annals of Nuclear Energy, 33, 354-369.
Renn, O. (2008) Risk Governance: Coping with Uncertainty in a Complex World. London:
Earthscan.
Righi, W.A., Saurin, T.A. and Wachs, P. (2015) A systematic literature review of resilience engineering: Research areas and a research agenda proposal. Reliability Engineering and System Safety, 141, 142-152.
Rodrigues, M.A., Arezes, P. and Leão, S.P. (2014) Risk criteria in occupational environments:
AN
critical overview and discussion. Procedia - Social and Behavioral Sciences, 109, 257-262.
Roy, B. (2010) Robustness in operational research and decision aiding: A multi-faceted issue.
European Journal of Operational Research, 200, 629-638.
Sahebjamnia, N., Torabi, S.A. and Mansouri, S.A. (2015) Innovative applications of O.R.
M

integrated business continuity and disaster recovery planning: Towards organizational


resilience. European Journal of Operational Research, 242, 261-273.
Shapiro, A. (2013) On Kusuoka representation of law invariant risk measures. Mathematics of
Operations Research, 38(1), 142-152.


Spiegelhalter, D.J. and Riesch, H. (2014) Don't know, can't know: embracing deeper
uncertainties when analysing risks. Philosophical Transactions of the Royal Society A,
369, 4730-4750.
SRA (2015a) Glossary Society for Risk Analysis, www.sra.com/resources. Accessed 14


August 2015.
SRA (2015b) Foundations of risk analysis, discussion document, www.sra.com/resources.
Accessed 14 August 2015.


Steen, R. and Aven, T. (2011) A risk perspective suitable for resilience engineering. Safety
Science, 49, 292-297.
Sunstein, C., 2005. Laws of Fear. Cambridge: C.U.P.


Taleb, N.N. (2007) The Black Swan: The Impact of the Highly Improbable. London: Penguin.
Taleb, N.N. (2012) Anti Fragile. London: Penguin.
Taleb, N.N. (2013) http://www.fooledbyrandomness.com/DerivTBS.htm. Accessed 19
December 2013.
Tang, C.S. (2006) Perspectives in supply chain risk management. International Journal of
Production Economics, 103 (2006), 451-488.
Tang, C.S. and Zhou, X. (2012) Research advances in environmentally and socially sus-
tainable operations. European Journal of Operational Research, 223, 585-594.
Tang, O. and Nurmaya Musa, S. (2011) Identifying risk issues and research advancements in
supply chain risk management. International Journal of Production Economics, 133, 25-34.


Teng, K., Thekdi, S.A. and Lambert, J.H. (2013) Risk and safety program performance
evaluation and business process modeling. IEEE Transactions on Systems, Man, and
Cybernetics: Part A. 42(6), 1504-1513.
Teng, K., Thekdi, S.A. and Lambert, J.H. (2012) Identification and evaluation of priorities in
the business process of a risk or safety organization. Reliability Engineering and System
Safety. 99, 74-86.
Thompson, K.M., Deisler Jr., P.H. and Schwing, R.C. (2005) Interdisciplinary vision: The
first 25 years of the Society for Risk Analysis (SRA), 1980-2005. Risk Analysis, 25, 1333-
86.
Van der Merwe, L. (2008) Scenario-based strategy in practice: a framework. Advances in
Developing Human Resources, 10(2), 216-39.

van der Sluijs, J., Craye, M., Futowicz, S., Kloprogge, P., Ravetz, J. and Risbey, J. (2005a)
Combining quantitative and qualitative measures of uncertainty in model-based

environmental assessment. Risk Analysis, 25(2), 481-492.
van der Sluijs, J., Craye, M., Funtowicz, S., Kloprogge, P., Ravetz, J. and Risbey, J. (2005b)

Experiences with the NUSAP system for multidimensional uncertainty assessment in
model based foresight studies. Water Science and Technology, 52(6), 133-144.
Vanem, E. (2012) Ethics and fundamental principles of risk acceptance criteria. Safety
Science, 50, 958-967.

Venkatasubramanian, V. (2011) Systemic failures: challenges and opportunities in risk
management in complex systems. AIChE Journal, 57(1), 2-9.
Vinnem, J.E., Witsø, E.S. and Kristensen, V. (2006) Use of ALARP evaluations and risk
acceptance criteria for risk informed decision-making in the Norwegian offshore petroleum
industry. Safety and Reliability for Managing Risk – Guedes Soares & Zio (eds Taylor &
Francis Group, London, ISBN 0-415-41620-5, 2006.
Weick, K.E. and Sutcliffe, K.M. (2007) Managing the Unexpected: Resilient Performance in
an Age of Uncertainty. 2nd ed. San Francisco, CA: John Wiley and Sons Inc.
Weick, K.E., Sutcliffe, K.M. and Obstfeld, D. (1999) Organizing for high reliability:
processes of collective mindfulness. Research in Organizational Behavior, 2, 13-81.
Weinberg, A.M. (1981) Reflections on risk assessment. Risk Analysis, 1, 5-7.


Zio, E. (2007a) An Introduction to the Basics of Reliability and Risk Analysis. Singapore:
World Scientific Publishing.
Zio, E. (2007b) Reliability engineering: Old problems and new challenges. Reliability
Engineering and System Safety, 94(2), 125-141.


Zsidisin, G.A. (2003) A grounded definition of supply risk. Journal of Purchasing and Supply
Management, 9, 217-24.
Zsidisin, G.A. and Ritchie, R. (2010) Supply Chain Risk: A Handbook of Assessment,
Management, and Performance. New York: Springer Publishing.