Systems Theory

An Introduction To The Foundational Concepts


Overview

This book is an overview of the foundational concepts within systems theory; in particular, it is
focused on conveying what we call the systems paradigm, that is, the basic overarching principles
that are common to all areas of systems thinking and theory. Systems thinking has been defined as
an approach that attempts to balance holistic and analytical reasoning. In systems theory, it is
argued that the only way to fully understand something is to understand the parts in relation to the
whole. Systems thinking concerns an understanding of a system by examining the linkages and
interactions between the elements that compose the entire system. By taking the overall system as
well as its parts into account, this paradigm offers us fresh insight that is not accessible through the
more traditional reductionist approach.


This book explores the foundations of systems theory: the process of reasoning called synthesis and
its counterpart, analysis. The central theme throughout the book will be understanding these two
basic processes of reasoning and how they relate to each other, thus enabling the student to
become more effective in their reasoning and modeling.

In the first section of the book, we start off by taking an overview of the systems paradigm. We will
talk about how systems thinking helps us to gain an awareness of our processes of reasoning,
their assumptions, strengths, and limitations. We will try to understand what paradigms in general
are, before going on to talk about theories and the development of formal models.

In the second section, we explore the two basic approaches of holism and reductionism and their
counterparts synthesis and analysis, which are the two processes of reasoning that form the
foundations of systems thinking. In this section, we give a clear distinction between the two
different approaches, how they interrelate and the consequences of using each approach. The
third section covers the theme of nonlinear causality, a recurring theme across all of the systems
sciences. A major distinction between the analytical and synthetic approaches is that between linear
and nonlinear causality. In this section, we explore each and how they give very different
conceptions to our understanding of cause and effect.

In the next section, we explore the relational paradigm, a way of looking at the world in terms of the
connections between things, the networked patterns they form and how these shape
and define the overall system. We go on to talk about the importance of interdependence and
integration within systems thinking. The final section of the book is dedicated to process
thinking. Systems theory sees the world in terms of constant change and macro-level processes
that shape events through what are called systems archetypes. Likewise, we will talk about the key
structural processes of differentiation and integration that drive evolution and change within all forms
of systems.

This book is designed for anyone with an interest in systems thinking and theory and should be
accessible to all. By the end of the book readers will have gained a new way of looking at the
world, what we call the systems paradigm, that can offer fresh insight and a new approach to
looking at virtually any domain of interest.
Systems Paradigm

In this section, we will be giving an overview of what we call the systems paradigm, which is the
fundamental set of concepts that support systems theory and constitute this particular way of
looking at the world. In order to try and create a clear understanding, we will do this by contrasting
it with the more traditional paradigm taken within modern science. This section will also work as a
high-level overview of what will be covered during the rest of the book. We will be touching on
all of the main ideas here so as to get a sense of how they fit together, and then in later modules
digging further into each aspect in more detail.

A paradigm is a model, perspective, or set of ideas that form a worldview underlying the theories
and methodology of a particular domain. The systems paradigm is then a coherent set of basic
concepts and axioms that form the worldview or perspective underlying systems theory and
thinking. All reasoning and scientific inquiry rest upon a set of assumptions about fundamental
philosophical questions. Before any kind of constructive inquiry into the world around us can be
conducted, a number of basic philosophical questions need to be answered, including basic
ontological questions (What is the nature of being? How does causality work?) and basic
epistemological questions (How do we know something, and how can we prove that we know it?).
Every coherent body of knowledge needs to provide some kind of answer to these questions,
which will then form the basis of the conceptual framework shaping how we see the world when
using it, and ultimately the kind of answers that can be derived within that paradigm.

For example, premodern European culture provided answers to these questions based on a fusion
of the work of classical Greek philosophy with the Christian Bible. This worldview of medieval
thought created a hierarchical ontology of things that existed based upon their perceived proximity
to God. Those things, such as angels, that were regarded as being closest to the perfect being of
God were placed at the top of the hierarchy with humans below this, animals below them, and so
on all the way down to inert matter that was perceived as being closest to hell. Within this
worldview, epistemological authority or validity was derived from tradition. If something was endorsed by
some preexisting authority, such as the Bible, the monarch, or Aristotle's writings, then it was deemed
valid. This is a very simplified, schematic representation, but it helps for illustration
purposes.

With the rise of the modern era approximately five hundred years ago, a whole new set of
philosophical answers was formulated to these fundamental questions, answers that still today form the
foundation of our scientific framework. During a period of intense metaphysical questioning in the
seventeenth century, a number of great thinkers, such as Francis Bacon and Rene Descartes, laid the
philosophical foundations of our modern world. These thinkers asked and answered the most
fundamental questions about the nature of reality and knowledge, answers which were further fleshed out in
the eighteenth century. They fundamentally rejected the idea that knowledge of the world around
us should be derived from scripture, theology or authority. These natural philosophers posited that
the world is physical in nature. That knowledge about the world can be accumulated through
empirical observation. And that, as Galileo Galilei famously said of the book of nature: "It
is written in mathematical language, and its characters are triangles, circles, and other geometric
figures, without which it is impossible to humanly understand a word; without these one is
wandering in a dark labyrinth."

Thus a new paradigm and method for deriving knowledge about our world was formed: the
scientific method, in which empirical data could be collected, hypotheses developed and experiments
done to validate or invalidate those hypotheses. In such a way new knowledge could be
accumulated, and it was believed that this knowledge should be put to the use of human
betterment: to curing disease, growing more crops, building bridges, etc.
This new paradigm was most powerfully expressed in the work of Sir Isaac Newton, who did
nothing less than describe a full mathematical and scientific framework for how physical systems
interact. Its central idea was that of matter in motion causing events, that the world is
governed by discrete components of matter interacting in a cause-and-effect fashion. Newton's
absolute space-time coordinate system is the framework for a fixed, orderly, predictable and
deterministic universe where all events are driven by the linear interaction of discrete components
of matter, creating a mechanistic vision of the world, what is called the Newtonian paradigm.
In a recent publication entitled Science and Culture, Maurizio Iaccarino writes about the
development of this new scientific paradigm as follows:

"Science has had an increasingly strong influence on European culture. In the nineteenth century,
the buzzword for science was 'order'. Scientists had discovered that the movement of the stars is
predictable, and that all terrestrial and celestial phenomena follow the same scientific laws like
clockwork. They believed, according to the Galilean vision, that the book of nature is written in the
language of mathematics, with characters represented by geometric objects. The mission of
science was to discover the laws of nature and thereby explain all natural phenomena. This faith in
science gave rise to the philosophical movement called positivism, which led to a widespread trust
in science and technology and influenced social theory.”

This paradigm supporting modern science went largely unquestioned until the beginning of the
20th Century when quantum physics and general relativity showed its basic assumptions to be
limited. By the start of the second half of the century, a new paradigm was gradually being
formulated. This new paradigm we can call the systems paradigm, and it has a number of central
concepts or axioms to it that work to counterbalance the traditional assumptions of our modern
age.

The traditional paradigm of the modern era has posited a strong dichotomy between the subjective
and objective. Ever since Rene Descartes' formulation of a philosophy based on a mind and
body dichotomy, modern science has been strongly focused on the objective material world.
Questions concerning the subjective dimension of the observer have been largely excluded in
favor of a quantitative analysis of objective material components and linear interactions. This has
resulted, from the beginning, in a strong divide between science and more subjective
interpretations of the world, such as religion or many forms of philosophy.
The systems paradigm breaks down this barrier, positing that the subjective dimension of the
individual interpreter should be of equal importance to our understanding of the world. Systems
thinking sees any knowledge of the world as a product of an interaction between the conceptual
system used by the individual or community and the objective phenomena being observed. Thus, to
gain a fuller understanding of the world, one must both question and develop the subjective
framework being used as well as what is being studied.

To do this, it is important that the assumptions, paradigm, and models used in an inquiry are made
fully explicit so that everyone can examine the assumptions and bias that may distort the process.
Systems thinking places great emphasis on recognizing and asking: How do I see the world? How
do other people see the world? How do those models and assumptions that we all hold shape and
create our interactions and the world around us? One of the central tenets of systems thinking can
be summed up in the simple statement "We have met the enemy and he is us." That is to say a
recognition that how we see the world creates how we act in the world, which creates the world
around us, which then feeds back to present us with challenges, all of which are the product of our
subjective interpretation. Thus, for systems thinking, there needs to be as much emphasis on the
subjective models and assumptions as on the objective inquiry that we are engaged in.

Systems thinking is based upon a very different process of reasoning from our traditional scientific
paradigm. The Newtonian paradigm is understood to be primarily reductionist, which means that
whole systems are seen to be reducible to an account of their constituent parts. Reductionism, as
a fundamental assumption, leads to a process of inquiry called analysis. Analysis is a process of
inquiry that proceeds by breaking systems down and trying to understand the whole in terms of the
properties and interactions of the elementary parts in isolation from their environment. The
analytical method has been central to modern science and responsible for most of its successes,
through isolating systems, decomposing them, and searching for linear interactions of cause and
effect.

Systems thinking is characterized as being what is called holistic, meaning that it always refers to
the whole system or environment as the most appropriate frame of reference for understanding
something. In order to understand some component or system, we must understand the context
that it is a part of, its interaction with other systems and its functioning within the whole
environment. The process of reasoning that follows from this is called synthesis. Synthesis is the
opposite of analysis in that it means putting things together. Synthesis is the method of inquiry
used within the holistic approach, whereby we look at the relations between things and how, when
we combine them, we get the emergence of new levels of organization.

A central part of the analytical paradigm is the idea of linear causality. The primary endeavor of
modern science has been to control for external variables, to isolate one or two input variables that
are thought to cause some effect within the system. These cause and effect relations are encoded
in equations and thought to describe how the system behaves. The central aim of the analytical
paradigm is to ignore weaker influences from the environment and develop a model that is based
on what are considered to be key observations, which are the primary driving variables causing
change within the system's state. Linear causality follows a sequential order, where a direct link
between cause and effect can be drawn, with there being a clear beginning and a clear end in
time, effects can then be traced back to one or a limited number of causes. This paradigm is
extended to general reasoning where people see events as a product of some linear interaction,
from A > B > C.

Systems thinking, in contrast, is focused on nonlinear causality, where multiple factors affect an
outcome as they work together synergistically in a networked fashion to generate a combined
result that is greater, or less, than the sum of their effects in isolation. A central idea here is that of
emergence. With emergence, an event may not have any direct cause; instead, within the
systems paradigm, many events are seen as emergent phenomena, not caused by any one
thing but emerging out of the interaction between many things in a horizontal,
parallel or networked fashion to generate the outcome. Equally, systems thinking looks for circular
or mutual causality, how two things affect each other and how every effect feeds back to its source
over time.
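To make the contrast concrete, the short Python sketch below (not from the original text; the update rule and numbers are invented purely for illustration) contrasts a linear chain of cause and effect, where the outcome traces back to a single cause, with a loop in which each effect feeds back to its source over successive periods.

```python
# A minimal, hypothetical sketch contrasting linear and circular causality.

def linear_chain(a):
    """Linear causality: A -> B -> C, the effect traces back to one cause."""
    b = 2 * a          # B is caused directly by A
    c = b + 1          # C is caused directly by B
    return c

def feedback_loop(x, steps=5):
    """Circular causality: the effect feeds back to its source each period."""
    history = [x]
    for _ in range(steps):
        x = x + 0.5 * x * (1 - x)   # the next state depends on the current state itself
        history.append(round(x, 3))
    return history

print(linear_chain(3))        # a single traceable outcome: 7
print(feedback_loop(0.1))     # a trajectory shaped by repeated feedback
```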

Systems theory fundamentally rests on a relational or interactional view of the world. That is to say
that the connections between parts are explicitly given ontological precedence over the parts
themselves. In the general sense, a system means a configuration of parts connected by a web of
interdependent relationships. When talking about any system, the emphasis is typically on
connections and interdependencies as the defining feature of the organization. The relational
paradigm of systems thinking emphasizes how connections, interdependencies, and context shape
the component parts of the system and not vice versa, which is the more traditional assumption.
The traditional analytical paradigm is fundamentally component-based: analysis is focused on the
properties of "things" in isolation and how those things cause change through direct interactions.

This perspective leaves us with a vision of the individual or component as an actor effecting change
within its environment; it downplays the influence of connections and context in affecting and
shaping the individual parts. Systems thinking helps instead to focus on how the network of
connections around the individual parts affect and shape the system as a whole. This perspective
becomes of particular relevance when the system comes to have a high degree of connectivity.
The analytical approach is typically based on excluding a changing environment, as described by
the term "ceteris paribus" which means all other things remaining constant. This static nature to the
environmental variables makes it possible to repeat an experiment, to isolate and detect stable
cause and effect interactions that are believed to be the drivers of change, which leads to
predictable outcomes as long as it is possible to hold the environment constant.

This is in contrast to the systems thinking paradigm, which is process orientated in nature; it takes a
dynamic vision of the world where everything is seen to be fundamentally in change. This change
is typically not perceived to be initiated by any of the particular parts of the organization but,
instead, is driven by processes that have inherent patterns that drive and shape the individual
events that constitute them. Environments are seen to be complex and constantly evolving as a
product of many interacting variables, leading to the emergence of new forms of organization over
time. Processes of change can be driven by macro-level feedback loops that create certain
dynamics and patterns of change on the macro level, what are called system archetypes.
Due to feedback and emergence, change within the systems paradigm is seen as an evolutionary
process rather than a mechanistic process. New phenomena emerge that could not have been
predicted a priori due to the nonlinear interaction between parts, between the parts and their
environment and because of past events feeding back on themselves; all of which make the future
essentially unpredictable in nature.
Systems Thinking Awareness

Systems thinking is, in its most generalized sense, a way of seeing the world. Before anything, it
places great emphasis on the question of how we see the world, that is to say, our subjective
interpretation of events. Human beings are not infinitely knowing creatures; we have a limited set
of sensory and cognitive capabilities with which to interpret events. This limitation manifests itself in
the fact that our attention is always limited, our reasoning is biased, and we are continuously using
many assumptions to rapidly draw conclusions. From the systems thinking perspective, it is seen as
paramount to gain an awareness of those limitations and biases within our reasoning and models.
At the heart of systems thinking is a recognition of this subjectivity. That is, the recognition that the
world does not appear to us in some purely objective form; in fact, our conceptual system
structures, defines, and interprets every piece of information and endeavor we undertake, whether
in science, management, engineering or everyday life. It is precisely because of this, systems
thinking would posit, that any serious endeavor needs first to understand the structure and makeup
of the paradigm that is being used.

This is in contrast to the more analytical approach, which holds that the world is largely objective: it
simply exists, and we just need to go and discover how it works. Here, little reference is made to
the assumptions and overall paradigm used to understand the world. The main emphasis is simply
on building models with which to understand some objective reality.
Systems thinking would put forward the idea that the subjective dimension of how we interpret
events is just as important as objective inquiry. That if we do not understand our subjective
processes of reasoning, we have no way of knowing if they are valid or invalid. Whether what we
"know" is based upon a coherent and sound set of assumptions or is in fact based upon a weak or
even misleading set of assumptions.

Systems thinking then puts a much stronger emphasis on self-awareness, where awareness is the
ability to know directly and to perceive, to feel, or to be conscious of events, objects, thoughts,
emotions, or sensory patterns. Awareness of one's own way of seeing the world and the process
through which we reason is seen as a prerequisite to effective cognitive capacities. In this sense
we can understand systems thinking as a form of metalanguage in that one of its explicit aims is to
help individuals to understand their processes of reasoning and how their actions, and the world
those actions create, follow directly from how they reason and see the world.
David Bohm, the famous 20th-century theoretical physicist, talks about this as such: "the reason that
we don’t see our problems is that the means by which we try to solve them are the source. That
may seem strange to someone who has first heard it because our whole culture prides itself on
thought as its highest achievement and the achievements of thought, I am
not trying to say are negligible. There are very great achievements in technology and various other
ways, in culture. But there's another side to it... one of the obvious things wrong with it is
fragmentation, thought is breaking things up into bits which should not be broken up. We can see
this going on, you see the world is broken up into nations, yet the world is all one, and you see with
the nation we have the boundary of the nation, we have established the boundary of a nation, now
that is invented by thought. If you go to the edge of the nation there is nothing there particularly."

This quote illustrates well how our way of thinking creates the world around us and ultimately how
it creates the problems that we encounter. Before anything, it is in understanding those processes
of reasoning and the paradigm that conditions our way of seeing things that we have the greatest
chance to make a difference, gain a deeper understanding of the world around us and a greater
capacity to act effectively within it.
Cognition is a very demanding exercise, and thus it makes sense for us to use preconceptions and
assumptions to limit the demand on this energy-expensive exercise. These tools, of preconception
and assumption-based reasoning, are an important form of abstraction, but we need to be able to
use them instead of them using us. To be an effective thinker is to understand the dynamics of the
conceptual system that we are using; this gives us the capacity to use it in a professional
manner to generate knowledge, instead of it using us as we simply react to our preconceptions.
Our tacit assumption is that we are in control of our thought processes and how we see the world.
The Enlightenment gave us the conception of the rational individual, the idea that humans are
endowed with the capability of abstract thought, that modern humans are rational and calculating,
that we use our intellectual capability to act in a purposeful way.

However, over the past thirty years or so, as this idea of the rational individual has come under
scrutiny within economics and the social sciences, it has proven limited in scope. Humans are
capable of abstract reasoning, but this is typically not what people do. For most people, it is not
particularly enjoyable and often an overly demanding exercise. More often, instead, we use all sorts of
automatic inference processes based upon assumptions, so as not to have to reason. In this
process, we are not active agents of our own thought but instead are guided by our assumptions.

To be an effective thinker, one needs to hold an awareness of one's set of assumptions and beliefs and
understand the paradigm that one is using. One needs to be aware of the assumptions that one is
using and be able to adjust them when needed. To use concepts and processes of reasoning like a
professional uses her tools, not letting those tools use us, as is so often the case. David Bohm
again puts this well when he says "there is this feeling when you are thinking something, it does
nothing except inform you of the way things are and then you choose to do something. That is the
way people are talking, but the way you think determines the way you are going to do it, and then
you don’t notice a result comes back but you don’t see it as a result of what you have done, even
less do you see it as a result of how you were thinking. Unless the thinking changes it won't be
correct... unless we see the source of it, it will never change we need some kind of awareness of
what thought is doing let's put it that way, that seems clear, but which we don’t have generally
speaking.”

Systems thinking recognizes that it is, before anything, how we think that creates the world we live
in. Thus systems thinking is primarily concerned with the metacognitive skills that are required to
understand, appraise and use thinking effectively and constructively. So this is where we start
with the systems thinking journey: asking how do I see the world? What is my set of
assumptions, beliefs, and values? And then to go out into the world and notice how the way that
you see the world affects how you act in the world, the ways in which your worldview influences
and decides what you are going to pay attention to, and also decides what you're not going to pay
attention to, thus shaping your perception all day, every day.
What is a Paradigm?

This book is about the two basic paradigms used within science and general thought, so before we
go much further, it should be of value to us to define more clearly what we mean by the term
paradigm. Although the term paradigm has been with us since ancient times, it was reintroduced to
popular discourse by Thomas Kuhn in the 1960s through his study of how scientific assumptions
and theories change over time. Since then the concept of a paradigm has also been used in
numerous non-scientific contexts to describe the fundamental model or perception of events used
within that domain. In this more general sense, the term paradigm means the cognitive framework,
pattern or model from which people operate.

A paradigm is a framework containing all of the basic assumptions, ways of thinking, and
methodology that are commonly accepted by members of a particular domain or scientific
community. In science and philosophy, a paradigm forms a distinct set of concepts or thought
patterns, including theories, research methods, postulates, and standards for what constitutes
legitimate contributions to a field. In this more general sense, a paradigm is analogous to a
conceptual system or schema.

A conceptual framework may be defined as a network or a plan of interrelated concepts that
together provide a comprehensive understanding of some phenomenon or domain of interest. In the
general sense, conceptual frameworks form a group of concepts that are broadly defined and
systematically organized to provide a focus and a tool for the integration and interpretation of
information. A conceptual framework forms the conceptual or schematic structure of a theory. As
such it may be understood as the fundamental design pattern of a theory or worldview. In the
general sense, conceptual systems can be understood as analogous to ontologies or schemas in
that in their basic structure, they consist of a set of concepts and relationships between those
concepts through which information can be structured and interpreted.

A concept here may be understood as an abstract representation of some phenomenon. On a
cognitive level, concepts are understood as a specific well-structured pattern of organization that
forms a coherent whole which defines the idea as distinct from other patterns and categories. A
conceptual framework though is not merely a collection of concepts but, rather, a construct in
which each concept plays an integral role. According to Miles and Huberman (1994), a conceptual
framework “lays out the key factors, constructs, or variables, and presumes relationships among
them.” A set of concepts and relations between those concepts form an ontology. Conceptual
frameworks possess ontologies.

A conceptual framework when looked at through the lens of cognitive science and psychology
forms what is called a schema. In its most general sense, the word schema means a
representation of a plan or theory in the form of an outline or model. In cognitive science and
psychology, a schema describes an organized pattern of thought or behavior that structures
categories of information and the relationships among them.

A schema is a mental conceptual structure that informs a person about what to expect from a
variety of experiences and situations. Schemata are developed based on information provided by
life experiences and are then stored in memory for future usage. Our mind creates and uses
schemata as a shortcut to make future encounters with similar situations easier to navigate. A
schema helps people organize and make sense of information. For example, one may have
developed a conceptual framework or schema that all French people are good at cooking.
Because of this schema, one organizes one’s future perception around it and will more readily look
for information that supports this view while discarding information that disagrees with it.
Formal Models

One of the great achievements of human creativity has been the development of abstract
representations of the world around us. Indeed, this capacity for abstract thought is a characteristic
of modern humans, and it has much of its origins in the development of language as a way of
symbolizing and communicating abstract concepts. This capacity for abstract modeling forms the
foundations of advanced civilization and has extended the human ability to much better
understand the world, communicate that understanding between each other and to collaborate
around those shared models.

A model is an abstract, typically compact, representation of some phenomena that enables us to
conceptualize and communicate its basic structure and dynamics in a coherent form. A central
characteristic of models and modeling of all kinds is the use of abstraction, whereby various levels
of detail are removed from the original empirical phenomena in order to create a compact schema
or diagram of the system under consideration. Models help us organize and structure information,
clarify our reasoning, communicate, solve problems and predict events.

Abstraction is the process of considering something independently of its associations or attributes.
The use of abstraction involves removing successive layers of detail from a representation in order
to capture only the essential features that are generic to all entities of that kind and independent of
their specific form. Abstraction involves the use of inductive reasoning, that is to say, identifying
common attributes across a variety of instances of some entity and the formation of a generic model
that captures the fundamental aspects of it without reference to any specific instance or context. An
algebraic equation, an architect’s master plan of a building, or a nonfigurative painting would be
good examples of abstract representations. They all have in common the aim of capturing and
communicating only the most essential features of the system they are representing by removing
the specific, concrete details of their form or function.

All models are simplified representations of reality. Models require the compression of information
or data to generate a simplified form. This simplification process is essential for grasping the whole
of a system. Most systems in the real world are too complex for us to grasp as a whole due to their
many diverse components, interactions, and scale. In practice, we can typically only interact with
and experience a small subset of a system. Abstraction helps us to synthesize our many
experiences of some entity into a coherent impression of the whole. For example, one may have a
model as to what a nation like Brazil is like, but one could only ever, in reality, interact with a very
limited subset of the whole organization. Thus, it is the model that actually enables us to, in some
way, grasp the whole system, but only ever through the use of abstraction that creates a simplified
representation of the actual real world phenomena.

A central part of modeling is the use of encapsulation. Encapsulation means to cover or surround
something in order to show or express the main idea or quality of it in a concise fashion.
Encapsulation is a central part of modeling and designing systems in that it enables abstraction.
Through encapsulation, the internal workings of any component part of a system can be concealed
in order for it to reveal only the most essential properties and functionality required for its
interaction with other components. In such a way encapsulation abstracts away the internal detail
and complexity of a subsystem to enable the effective designing, functioning or vision of the whole
system.
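As a rough illustration, the Python sketch below (hypothetical, not taken from the text) hides the internal workings of a storage component behind a small interface; the rest of the system interacts with it only through that interface, which is the essence of encapsulation described above.

```python
# A minimal, hypothetical sketch of encapsulation: internal detail is concealed,
# only the essential interface is exposed to the rest of the system.

class StorageComponent:
    def __init__(self):
        self._blocks = {}            # internal workings, hidden from other components

    def write(self, key, data):      # other components interact only through this...
        self._blocks[key] = data

    def read(self, key):             # ...and this; how storage works inside is abstracted away
        return self._blocks.get(key)

store = StorageComponent()
store.write("notes", "systems thinking")
print(store.read("notes"))
```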

Abstraction involves the induction of ideas or the synthesis of particular facts into one general
theory about something. It is the opposite of reification, which means to make something real,
bringing something into being, or the making of something concrete. Whereas abstraction removes
the specific, reification involves specification, which is the analysis or breaking down of a general
idea or abstraction into concrete, specific facts.
For any abstraction to have real application it must go through a process of reification and
specification in which the detail of the abstraction is specified in order to create a real instance of
that form, as all real instances must have specific attributes. This process may also be called
instantiation. For example, if you wanted to build a house, an abstract model designed to describe
what a house is would not suffice; you would need one that specified all the details of your particular
house, and your particular house would then be an instance of the generic class of all houses.
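The house example can be sketched in code: the class below plays the role of the abstraction, and creating an object from it is the reification or instantiation. The attribute names and values are hypothetical, chosen only for illustration.

```python
# A sketch of the house example: the class is the abstraction, an instance is its
# reification, with the specific attributes that any real instance must have.

class House:
    """Generic model: captures what any house has, without specific values."""
    def __init__(self, floors, rooms, address):
        self.floors = floors
        self.rooms = rooms
        self.address = address

# Instantiation: specifying the details turns the abstraction into a concrete case.
my_house = House(floors=2, rooms=5, address="12 Example Street")
print(my_house.rooms)
```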

The process of modeling through abstraction invariably requires that one makes certain
assumptions and often approximations. For example, in his models, Newton assumed that mass
is constant, whereas Einstein considered mass as being variable. A model only ever
represents some subset of all possible phenomena and thus has to make certain assumptions
about other elements and systems outside of our focus of interest. Effective models make explicit
the assumptions that they entail, the conditions under which those assumptions will hold and the
conditions under which they will not, and are prepared to relinquish the
validity of the model under those conditions. For example, many of the models that management
use in order to deliver their strategy or take a product to market are only applicable under relatively
normal market conditions. Management strategy is typically only expected to account for events
that are less than a few standard deviations from the norm. However, extreme events do happen
and in such circumstances, the management team would have to relinquish their model and
recalibrate their strategy. Being aware of the assumptions that support a model is greatly
advantageous, as it enables the user to know when to apply it and when not, and it equally offers
the possibility of switching to other frameworks when required.
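A minimal sketch of this idea, under invented assumptions (a toy forecasting rule and an arbitrary three-standard-deviation threshold, neither of which comes from the text), is to state a model's assumptions explicitly in the code and refuse to apply the model when they are violated:

```python
# A hypothetical sketch: the model declares its own conditions of validity.

def forecast_demand(last_demand, shock_in_std_devs):
    """A toy forecasting model that is only valid under 'normal' market conditions."""
    ASSUMED_NORMAL_RANGE = 3.0                  # assumption made explicit (invented threshold)
    if abs(shock_in_std_devs) > ASSUMED_NORMAL_RANGE:
        raise ValueError("Assumption violated: extreme event, relinquish this model.")
    return last_demand * (1 + 0.02 * shock_in_std_devs)

print(forecast_demand(100, 1.5))    # within the model's stated assumptions
# forecast_demand(100, 5.0)         # would raise: the model no longer applies
```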

The effectiveness of a model may be defined along a number of different parameters. For
example, how solid are its foundations? That is to say, the assumptions that it is based upon, are
they truly self-evident solid assumptions or are they contingent upon certain conditions that may
not always hold? How well does the model allow us to grasp the whole and identify its core
attributes? Does it manage to synthesize all of the different perspectives on it? How faithful is it to
the empirical phenomena? Does it stand up to empirical testing? Moreover, can it predict future
events?

Models are often evaluated first and foremost by their consistency with empirical data; any model
inconsistent with reproducible observations must be modified or rejected. A model should have the
capacity to explain past observations and predict future observations to some extent. For example,
one perceived limitation in our standard economic and financial models is that they seem incapable
of predicting financial crises, even though they work to a certain degree during “normal” economic
conditions.

To be effective, a model must capture diverse information, perspectives, and views on a particular
phenomenon. For example, if we take some complex entity like a city, there will be multiple
perspectives on how to interpret it, social, technological, economic, demographic, etc. An effective
model must be able to integrate all of these diverse perspectives in some way, to give us a vision
of the whole system and a basic understanding of its primary constituent parts and relations. As
such, on the one hand, an effective model is a balancing act between simplicity, through
abstraction and synthesis, and on the other hand, breadth or scope in order to include all of the
different views and possible instances of the system.

The scope that a model covers is an important metric in its evaluation. Models like that of general
relativity are highly valued due to their relevance to any physical system from the molecular to the
universal level. To be effective, a model should contain within it and extend to all instances and
applications of the system it is trying to represent. Although models, almost by definition, must be
compact representations, they must also be inclusive, entailing within their core form the capacity to
derive any possible instance of, or state of, the phenomena under their description.
Models are powerful tools in that they enable us to conceptualize large systems that are beyond
our immediate faculties, but they do this by removing certain details. If a model is not built
properly, that is to say, if all relevant information is not represented in it in a compact form,
it represents only a partial account of how the world is. When we then use such models to
operate in the world, the results may be, at best, only partially successful and, at worst, a
potentially dangerous situation. In fact, we now have a term for these poorly built mathematical
models: weapons of math destruction.

Models constrain our perception; they contextualize, frame and condition what we see and don’t
see. Models that are poorly built and do not include all relevant perspectives and information can
blind us to what would otherwise be obvious information. In a very literal manner, models remove
us from our senses in that they are conceptually based; if they are not adequately built, they can
create nonsensical results.

One good example of this would be models used in economics that try to describe human actors
as rational agents. These models are constructed in a particular way so as to be able to model
human economic activity mathematically. In the everyday world, it is quite apparent that people do
not always consider all actions in a rational fashion: we take many shortcuts, have personal biases,
copy others, use heuristics, etc., none of which are rational. But none of this can be captured using
formal mathematical models, and thus it is excluded from standard economic theory. Of course, the
world does not change simply because it does not fit into our modeling frameworks. Whenever
there is a mismatch between a model and the empirical world, it is ultimately the model that has
to give way and break down. We may go on using limited models because that is all we have, but
there will clearly be consequences. Thus, it is always desirable that we be explicit about the
limitations of a model and work towards developing models whose foundations are
more robust, whose scope is broader, or which match empirical data more accurately.
Holism and Reductionism

Holism and reductionism represent two paradigms or worldviews within science and philosophy
that provide fundamentally different accounts as to how best to view, interpret and reason about the
world around us. Reductionism places an emphasis on the constituent parts of a system, while
holism places an emphasis on the whole system. While reductionism breaks an entity down so as
to reason about the entire system with reference to its elementary parts, holism tries to understand
something in reference to the whole system or environment that it is a part of.

Reductionism is the practice of analyzing and describing a complex phenomenon in terms of
elementary parts that exist on a simpler or more fundamental level. Reductionism attempts to
create a unified description of the world through reducing it to a set of elementary components
from which any phenomenon can be explained as some combination of these parts. The aim of
reductionism is an explanation showing how the higher level features of a whole system arise from
the elementary parts. Thus, the higher level features of a system can be largely ignored within the
inquiry, allowing us to focus on the lower level parts that constitute it. Reductionism implies an
assumption that all higher level phenomena can be understood as some combination of lower level
phenomena.

For example, a reductionist approach to interpreting biological entities like cells might take such
entities to be reducible to collections of physicochemical elements like atoms and molecules. It
would then focus on understanding these particles and how they combine to give the high-level
functions and behavior of the cell, instead of focusing on the features of the cell itself. Equally, a
reductionist approach to cognition would attempt to reduce higher level cognitive phenomena such
as awareness, emotions, and concepts to the basic physical building blocks of the brain: neurons
and synapses. The theory of methodological individualism within the social sciences would be
another example of a reductionist approach in that it requires that causal accounts of social
phenomena be explicable through how they result from the motives and actions of individual
agents. The common theme among different reductionist positions consists, above all, of their
emphasizing that complex phenomena should be explained by statements about phenomena of a
simpler nature.

Holism refers to any approach that emphasizes the whole, rather than the constituent parts of a
system. Holistic accounts of the world look for how an entity forms part of some larger whole and is
defined by its relations and functioning within that broader system. What all holistic approaches
have in common includes the principle that the whole has priority over its parts and the assumption
that properties of the whole cannot be explained by the properties of its parts—the idea of
emergence. Within this paradigm, the ultimate sources of knowledge are seen to derive not from
elementary component parts but, instead, from a reference to the system’s broader context. Given
that something can only be properly understood within its context, to gain a fuller understanding of
something requires gaining a greater understanding of the environment or context.

Holism posits that a system’s behavior and properties should be viewed as a whole, not as a
collection of parts. This often includes the view that systems function as wholes and that their
functioning cannot be fully understood solely in terms of their component parts. For example, social
psychology looks at the behavior of individuals in a social context because group behavior like
conformity cannot be fully understood by looking at the individual in isolation, but instead is best
understood by looking at the individual in the context of a whole social group. Likewise, many
phenomena, such as the wetness of water, only emerge when hydrogen and oxygen atoms are
combined to give water. Neither element possesses such a characteristic in isolation and
thus, we can only talk about the wetness of water when looking at the component parts as a whole.
When contrasted, reductionism and holism lead to a number of fundamentally different perspectives
on basic questions about causality, objectivity, structure, dynamics, etc. While holism puts forward
a top-down view of causality and a dynamic process orientated view of the world that is subjective
in nature, reductionism provides a more static, bottom-up and objective perspective.

The reductionist approach typically adopts an objectivist stance. The central tenets of objectivism
are that reality exists independently of consciousness and that human beings have direct contact
with reality through sensory perception. In its general sense, Objectivism is the position that there
is a single ‘real world’ in which human actors are embedded. That these ‘real world’ properties and
organization are transparent to our perception and cognition, and that it is our – or the scientist’s –
task to ‘know’ this objective world through empirical inquiry, to achieve a one-to-one
correspondence between our conceptual representation of the world and the actual objective world
that exists.

Holistic approaches typically hold the idea that the individual – or the scientist – is not a passive
observer of an external universe; that there is no ‘objective truth,’ but that the individual is in a
reciprocal, participatory relationship with nature, and that the observer’s contribution to the process
is valuable. Due to this recognition of the subjective dimension of the observer’s interpretation,
holistic approaches are more inclined to begin by examining the subjective interpretations of the
observer, recognizing the need for an effective paradigm before an effective evaluation or model
can be formed. The recognition that the holistic approach gives to the subjective interpretation of
the observer opens the door for the idea that there may be multiple valid, or at least valuable,
explanations. Thus, while reductionism is inclined to search for the one right answer, holism tries to
understand a phenomenon by gaining as many perspectives on it as possible and then
synthesizing those perspectives into a more complete understanding.

Reductionism and holism reflect two different perspectives on the nature of causality. Reductionism
strongly reflects a particular conception of causality: it leads to the idea of upward
causation, seeing higher level phenomena as being caused by lower level entities. Phenomena that
can be explained fully in terms of relations between other more fundamental phenomena are called
epiphenomena. Typically, there is an implication that the epiphenomenon exerts no causal effect
on the fundamental phenomena that explain it. The epiphenomena are often said to be “nothing
but” the outcome of the workings of the fundamental phenomena.

As a result, according to this view, causation at higher levels of existence is always in some sense
derivative, an epiphenomenon caused by lower level interactions. Reductionism then follows a
strong organizational pattern of upward causation. Within the reductionist paradigm, upward
causality appears to be the only plausible “scientific” explanation for phenomena. When the direction
of causal influence extends from macro-levels of organization down to micro-levels of organization,
this may be termed downward causation. Holistic accounts are primarily interested in the workings
of how the function and structure of an entity are defined by the broader system or whole that it is a
part of. As such, it places a strong emphasis on downward causation, how the whole macro level
exerts a downward causal influence on the formation of the parts. Downward causation can be understood
as an inverse of the reductionist principle: the behavior, structure, and functionality of the elements
in the system are determined by the behavior of the whole. Here, determination moves downward
instead of upward.

One readily identifiable example of this would be the constraints and effects a society, as a whole,
has on its individual members, thus exerting a downward effect. Downward causation can be seen within
societies, where individuals create the culture, institutions, and norms, but then those institutions
feed back to constrain and enable the agents in the social system, so that we get a continuous
dynamic between the macro and the micro, with causality flowing both ways.
A central aim of the reductionist approach is to reduce phenomena to their lowest common
denominator and then define all higher level phenomena in terms of those elementary parts. Thus,
reductionist approaches actively strive to reduce all accounts to a single dimension, defining all
higher level phenomena as deriving from a single low-level dimension. As such reductionism can
be said to be mono-dimensional in its structure. Because holistic accounts are grounded in the
concept of emergence—whereby new and qualitatively different phenomena and patterns emerge
as we put parts together—it places great emphasis on the multidimensionality of phenomena that
exhibit any degree of complexity. A holistic approach suggests that there are different levels of
explanation and that, at each level, there are emergent properties that cannot be reduced to those
of a lower level. For example, whereas a reductionist approach may try to understand a patient’s
mental disorder as purely a chemical imbalance in the brain, prescribing drugs to treat it, a
holistic approach would more likely look at different physiological, cognitive and sociocultural
factors to deal with the condition at various levels.

Holistic accounts are typically process orientated in nature, whereas reductionist accounts are
more focused on the static properties of elementary parts. Within a reductionist scientific analysis,
variables in the environment are kept constant. This allows researchers to repeat experiments in
exactly the same way and to detect stable behavior in the variables that are being observed, which
in turn leads to predictable outcomes. A central part of the analytical approach is the use of the
concept “ceteris paribus” meaning “other things equal.” Variables within the environment are
artificially held constant to isolate and perceive the linear effect on a limited number of variables
under observation. Thus, the use of reductionism within various domains often involves an attempt
to be able to maintain variables within the environment constant so as to be able to control a given
system through a limited number of variables.

One of the guiding rules of holism, in contrast, would be “panta rhei” meaning “everything flows.”
The idea that everything changes is derived from the Greek philosopher Heraclitus’ observation
that one cannot step into the same river twice. Whereas reductionism breaks a process down into
static parts, the holistic paradigm is focused on maintaining whole processes and is fundamentally
concerned with how things change through the processes that act on them.
Analysis and Synthesis

The terms analysis and synthesis stem from the Greek words meaning "to take apart" and "to put
together," respectively. Analysis can be defined as the procedure by which we break down a
complex whole into parts. While synthesis means the combining of constituent elements into a
single or unified entity. Synthesis and analysis represent two fundamentally different processes of
reasoning, but both are required to perform a full process of inquiry, as analysis helps one to
understand the parts while synthesis helps to understand the whole of a system. Analysis is the
traditional method of reasoning taken within modern science whereby we try to gain an
understanding of a system by breaking it down into its constituent elements. On the other hand,
synthesis, which is the foundation of systems thinking, works in the reverse direction, trying to
gain an understanding of an entity through the context of its relations within a whole that it is part
of.

Analysis is the process of breaking down or reducing systems to their constituent parts and then
describing the whole system primarily as simply the sum of these constituent elements. Analysis is
often described in terms of a three-step process. Firstly, we take something and break it down into
its constituent elements. This is deeply intuitive to us: when we wish to understand how a car, bird
or business works, the first thing we do is isolate it, taking it into a garage or lab and decomposing
it into its constituent parts. Analysis is based on the premise that our basic unit of interest should be
the individual parts of a system. Secondly, once we have broken the system down into its most
elementary components, we analyze these individual components in isolation to describe their
properties and functioning. Lastly, we recombine these components into the original system, which
can now be described in terms of the properties of its individual elements and their direct
interactions.
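These three steps can be sketched for a property where analysis works well, namely one that is simply additive across parts; the components and figures below are invented purely for illustration.

```python
# A hypothetical sketch of the three analytical steps on an additive property.

car = {"engine": 180.0, "chassis": 300.0, "wheels": 60.0}   # step 1: break the whole into parts

def part_mass(name, parts):
    """Step 2: analyze each component in isolation."""
    return parts[name]

total_mass = sum(part_mass(name, car) for name in car)      # step 3: recombine the parts
print(total_mass)   # here the whole really is nothing more than the sum of its parts
```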

Modern science is based upon the analytical method that involves the breaking down of complex
systems into components that can be analyzed in isolation and modeled using linear equations.
The analytical approach is the fundamental method behind modern science, and by extension our
modern understanding of the world, and it has proven highly successful in many ways from
understanding atoms and DNA to designing the modern corporation and nation state. However, as
successful as it has been, it also has inherent limitations to it. Because we understand systems by
breaking the parts down and isolating them, the reductionist paradigm systematically and
inherently demotes the relationships between these components. Thus, within this process of
analysis the whole system is implicitly thought to be nothing more than the sum of its parts.
Analysis works well when there is a low level of interconnectivity and interdependencies within the
system we are modeling. Although this may be true for some systems, it is certainly not always the
case. Many of the systems we are interested in describing have a high level of interconnectivity
and interdependency, examples being ecosystems, computer networks and many types of social
systems.

More complex systems are primarily defined by the relations within the system and not the static
properties of their elements. We can and often do continue to use analysis to try and describe
them but the reductionist approach is not designed for this, and thus we need to change our basic
paradigm to one that is more focused on these relations as opposed to the components. This is
where synthesis and systems thinking find their application. Synthesis means the combination of
components or elements to form a connected whole. It is a process of reasoning that describes an
entity through the context of its relations and functioning within the whole system that it is a part of.
Systems thinking is based upon this process of reasoning called synthesis, and it is also referred to
as being what is called holistic, meaning that it is characterized by the belief that the parts of
something are intimately interconnected and explicable only by reference to the whole. Thus
synthesis focuses on the relations between the elements, that is to say, the way those elements
are put together or arranged into a functioning entirety. And as with analysis, one can also identify
a few key stages in this process of reasoning.

The first step in the process is to identify the system that our object of interest is a part of.
Examples of this might be a bird being part of a broader ecosystem or a person being part of a
greater culture. Next, we try to gain a broad outline of how this whole system functions. So for
example, a hard drive is part of a computer, and to properly understand it we need to have some
understanding of the whole computer. Lastly, we try to understand how the parts are
interconnected and arranged to function as an entirety. By completing this process we can identify
the complex of relations within which our entity is embedded, its place and function within the
whole. And within systems thinking this context is considered the primary frame of reference for
describing something.

The first thing to note is that the methods of synthesis and analysis are not mutually exclusive.
They should both be a part of any well-developed model, but each will have particular relevance
depending upon the type of system we are dealing with. Thus, it should come as no surprise that
physics is the home of the analytical approach, dealing as it often does with inert, static and
decomposable systems, whereas ecologists, who deal with highly interconnected and dynamic
systems, are much more inclined toward synthetic reasoning. Some of the primary questions we will be
asking to determine the type of system we are dealing with, and thus the appropriate method of
reasoning, are the following: Firstly, is it primarily a component-based system or does it serve some common
function that integrates the various elements? Is it isolated or connected? Is it a linear, deterministic
system or a nonlinear, non-determinate system? And is it static or dynamic?

Firstly, are we dealing with an actual system or simply a set of things? When we wish to talk about
a composite entity, that is to say, a group of things, we can describe it as either a set of objects or a
system, the difference here being that a set is a group of objects that share no common function.
Thus, we call a group of cups on a table a set of cups, as they exist independently of each other.
By contrast, if we take the human body, again it is a composite entity, but this time the elements
serve some common function; we can then call it a system, and we would
need to use systems thinking to properly understand it.

Secondly, how interconnected is the system? Analysis starts with a component-based view of the
world and builds a description based upon the properties of these components. Synthesis, by
contrast, focuses upon the relationships between parts. Thus, from a systems thinking perspective
we are often interested in connectivity, i.e. in answering the question of what is connected to what,
and thus systems thinking is best suited to systems with a high level of interconnectivity. Thirdly, are we
dealing with a linear system or are there feedback loops? Analytical thinking searches for direct
linear relations between the cause of an event and the effect. Thus, we call this linear thinking.
Systems thinking is more inclined to see events as the product of a complex of interacting parts
where relations are often cyclical with feedback loops.

Is the system primarily static or dynamic? Analytical methods often describe entities in terms of
static structures with limited reference to their evolutionary development within time. Systems
thinking takes a more dynamic view of things often contextualizing entities in terms of the
evolutionary forces that have shaped them, and thus seeing the process of development as an
important phenomenon with which to understand the world. Lastly, are we dealing with a system on
the micro level or the macro level? Analysis breaks things down into parts, and thus analytical
thinking typically focuses on analyzing and optimizing subsystems, in the belief that we can improve
the whole system by simply optimizing all of its subcomponents. If we are dealing with a system on
the macro level, what is sometimes called the global level, we need to use synthetic thinking to get
a vision of the whole system and an understanding of how the parts interrelate to achieve global
functionality.
Synthesis

The word synthesis comes from the Greek word meaning “to put together.” Synthesis is a process
of reasoning whereby we put disparate parts together to gain an understanding of the whole.
Synthesis can be considered the opposite process of reasoning from analysis in that analysis
involves taking things apart to understand the components. Synthesis and analysis define two
distinct overarching patterns or structures to a process of inquiry. Whereas reductionism and
holism represent two different paradigms or ways of looking at the world, synthesis and analysis
can be understood as the corresponding processes of reasoning that follow from these sets of
assumptions. Synthesis is the method used within a holistic paradigm. With synthesis, one
understands something by looking at how the parts interrelate to form the whole and how this
whole system functions within its environment.

Because holistic paradigms always refer to the whole in order to understand the parts, synthesis
reasons by looking at how parts are put together to form the whole and how the whole system
interacts with other systems and operates within some environment. This process of combining
different things together to form a greater whole is called emergence. In the process of emergence,
diverse elements interact, self-organize and combine to give rise to some combined entity that
exhibits novel and different properties when taken as a whole. Synthesis tries to understand this
process of emergence and how the interactions between diverse parts can create something new.

Synthesis takes a holistic view, meaning it looks at the whole and is primarily interested in the
overall workings. We regard a system as a whole unit when we treat it as a black box and look at
its overall behavior and function, i.e. what it does or accomplishes. For example, architecture is
considered a holistic discipline as it is primarily concerned with how the whole building works;
rather than prioritizing any single element of the building, the architect must look at how all the parts
interrelate to form the whole system. In contrast, building engineers are required for the analysis
and design of the physical parts to ensure that the building will function.

Synthetic thinking has three steps that can be seen to be the opposite of analysis. To understand
something using synthesis one has to firstly ask “What is this a part of?” Then, identify the whole
context that the system is a part of. For example, to understand a corporation, it is important to
identify the economy; to understand a bicycle, we have to understand people and what they might
use it for etc. Secondly, we have to understand the behavior of the whole. For example, one needs
to gain a basic understanding of the transportation system and the economic system in order to
understand the car and the corporation respectively. Finally, we have to identify the function of the
system we are trying to explain and how it is interrelated with other systems in performing that
function. For example, understanding the role a car plays in the transportation system and the role a
corporation plays in the economic system.

Both analysis and synthesis provide very different insights: while analysis reveals the structure of a
system and how it works, synthetic thinking reveals why it behaves as it does. No amount of
analysis of a French automobile would reveal why the French drive on the right side of the road. Why
this happens is a historically contingent part of the broader evolutionary context within which the car
exists. Whereas an analytical inquiry may give us detailed insight into the internal workings of
something, and thus an understanding of how it functions, it is also argued that reductionist
approaches do not allow us to identify why behaviors happen.

For example, an analytical approach could explain that running away from a large lion was made
possible by our fear centers causing a stress response to better allow us to run fast. However, the
same analytical view cannot say why we were afraid of the lion in the first place. In effect, by being
analytical we may be asking smaller, more specific questions and therefore not addressing the
bigger issue of why we behave as we do. Thus, while reductionism is useful, it can lead to
incomplete explanations.
Synthesis resolves this by referring to the broader context. Synthesis helps us to understand the
meaning of something because the meaning of something is in its functioning within some larger
system. A person finds meaning in their life by playing a part in some larger organization; a mother
playing her role in a family, a person forming part of a sports team or a musician in a band. It is
only in reference to a system’s functioning within a broader environment that we can derive
answers to the question of why and it is only really in taking things apart and analyzing them that
we can get answers to the question of how.

The effectiveness of either method is very much contingent on the context. Some phenomena and
circumstances lend themselves well to analytical reductionism, others not so well. Which paradigm
is most relevant may be understood to be contingent on the degree of complexity of the system we
are dealing with: simple systems are amenable to the reductionist approach, while complex
systems are not, due to their highly interconnected and interdependent nature.

The basic word “synthesis” means putting things together to form some new entity. For example, in
botany, plants perform the core function of photosynthesis, wherein they use the energy of sunlight
to build organic molecules out of simple carbon compounds. Thus, synthesis is
essentially a creative process, and synthetic thinking is designed to create new, out-of-the-box ideas
and solutions. Analysis cannot really tell us anything fundamentally new; it gives us
incremental improvements and optimization, because by simply breaking things down it can
only build upon what already exists. Synthetic thinking, on the other hand, can enable major paradigm
shifts due to its creative, emergent nature.

While analysis gives us differentiation, which is important to achieving efficiency through
specialization, it can lead to fragmentation over time. Synthesis puts things together, and thus
it plays an important role in systems integration; making sure that all the parts are working together
in an integrated fashion and that things do not become too fractured. For example, within science
systems thinking works to try and provide transdisciplinary models and frameworks that interrelate
the different domains of inquiry. The analytical strategy is not able to consider the wholeness of a
program because its primary focus is on individual tasks; systems thinking is required to balance
this and maintain an overall integration, thus ensuring that fragmentation does not occur.

Synthetic thinking refers to the context or environment as the most important frame of reference:
for something to be of value or correct it must be aligned with the context within which it exists. For
example, with an analytical approach one might focus on the optimization of subsystems,
placing this as the highest priority, even if it is at the expense of some other element, system or the
environment. Systems thinking would invariably give precedence to the system’s environment,
positing that nothing can be correct or “right” without being what is best for the overall context.
Thus, synthetic thinking can be an important element within a system to ensure its alignment with
its environment and long-term sustainability.

A corollary to this is that because everything is seen as context dependent, holistic systems are
optimized for adaptation and responding to change within their environment. An analytical
paradigm, by contrast, leads to the idea of determinism and predictability – because the focus is on the internal
workings of closed systems, i.e. everything that will exist already existed in the past and is
determined by it. Systems thinking holds that the behavior of a system cannot be perfectly
predicted, no matter how much data is available. Natural and social systems can produce
surprisingly unexpected behavior, and it is suspected that behavior of such systems might be
computationally irreducible, which means it would not be possible to even approximate the system
state without a full simulation of all the events occurring within it. In the face of this uncertainty,
adaptive capacity is seen as having great value within the holistic paradigm.

Both synthesis and analysis are key to gaining a full understanding of an entity, both have their
achievements and limitations and both are required to form a balanced understanding of the whole
and its parts.
Effective Questions

Someone once found Albert Einstein deep in thought and remarked that he must be struggling to
solve a very difficult problem, to which he replied: “Oh, I am not trying to solve a problem, I am trying to state it,
because once I have properly stated it the answer should follow.” The way that we state a problem
immediately constrains the answers we can possibly derive. People are generally capable of
solving problems once the scope is defined; what receives far less attention is stating the problem
in the first place. The difference here is the difference between seeing the whole and the parts: in
order to formulate an original question appropriately we have to look at the full context, and this
requires synthetic thinking. But once we have formulated the question, we have defined the limits
of our inquiry and can then use analytical thinking to work through the problem.

To illustrate this, think about the mathematics you learned in school, where you are taught how to
solve some algebraic equations, you are then given some very stylized problems that nicely fit into
that algebraic model, you are then tested on how well you solve the equation. What is not really
being taught here is how to take some real world situation and contextualize it into the appropriate
form so that we can then use the machinery of algebra to compute it. In the real world of being an
engineer, scientist or manager this is really the central skill, because computers can solve algebraic
equations almost instantaneously. What computers are not able to do, however, is take a lot of
unstructured information and use abstraction so as to generate a structured model that can be
computed.

The real world differs from school because in life we have to define both the question and the
answer. This means we have to see both the context and the details i.e. have fluency in working
with synthesis and analysis. As the systems thinker and business consultant Russell L. Ackoff is
often quoted saying "The more efficient you are at doing the wrong thing, the wronger you become.
It is much better to do the right thing wronger than the wrong thing righter. If you do the right thing
wrong and correct it, you get better.”

In this quote Ackoff points to an important feature of our world: there is an order of magnitude
more value in forming an appropriate question than in answering an inappropriate one. Too
often we enthusiastically solve problems without asking whether the problems are correctly stated.
Thus any effective process of inquiry must involve both stages of divergent, holistic thinking, to
explore the full context, and convergent, analytical thinking to focus on solving a closed problem.
Causality

Causality describes a relationship that exists between two or more things where a change in one
thing causes a change in another. The essence of causality is a phenomenon being dependent on
some other effect. As such causality is a connection or linkage between states or events through
which one thing – the cause – under certain conditions gives rise to or causes something else –
the effect. Causality forms a fundamental and pervasive part of our perception and interpretation of
the world around us. Equally, our ability to act in the world depends on our grasp of causal
relationships among things; the ways they act and interact. In everyday life humans – and other
animals – rely on the assumptions of causality literally every waking second; in academia, much of
science is the study of systematic cause and effect relations.

Perceived causality involves the process of inductive inference. The basic process of induction is
one of inference from a set of things that have something in common to generalizing what we
observe about them as being true to all instances of that kind. We generalize from a sample of
previous experiences to a whole population. We project past uniformities in our experience onto
future events through expectation. We have seen a ball move off in the opposite direction every
time it has been hit with a bat in the past, so we assume it will be the same in the future and this
assumption through inductive inference creates the perception of there being a causal relationship
between them.

Thus although cause and effect appear a central characteristic of our world, they are in fact simply
associations we make between events. The philosopher David Hume illustrated how there are no
necessary connections between events in this world, in his words “all events seem entirely loose
and separate. One event follows another, but we can never observe any tie between them. They
seem conjoined but never connected.” We perceive cause and effect and then extrapolate from
that to infer cause and effect relations about things that we cannot see, such as black holes, quarks
or events in the future. Of course, when we generalize we are going beyond the evidence; that is,
by definition, what generalizing means. Our conclusion covers things that we have not or cannot
know. The rising and setting of the Sun in the past cannot guarantee it will rise and set again
tomorrow.

Conceptions of causality can be roughly divided into linear and nonlinear. Linear causality is the
idea that cause and effect follow a single direction between events, from A the cause, to B the
effect, and that for every effect there is a single or a limited number of causes. Nonlinear causality is
the idea that causality may follow a bidirectional path from A to B or from B to A, or even both at the
same time, and that there may be an unlimited number of causes for a given effect. With linear
causality there is seen to be a direct link between cause and effect, cause precedes effect in a
sequential pattern. Linear causality has a clear beginning and a clear end. There is one or a limited
number of causes for any given effect; additional linkages of causes or effects create a line or
pattern of domino causality. The central ideas behind linear causality are captured in what is called
the axiom of causality.

The Axiom of Causality is the proposition that everything in the universe has a cause and is thus
an effect of that cause. This means that if a given event occurs, then it is the result of a previous,
related event. If an object is in a particular state, then it is in that state as a consequence of
another object interacting with it previously. Plato states this in the Timaeus when he writes: “In addition,
everything that becomes or changes must do so owing to some cause; for nothing can come to be
without a cause.” The three criteria for establishing linear cause and effect are correspondence,
time precedence, and nonspuriousness.
The first step in establishing linear causality is demonstrating correspondence or association. This
means asking the question: is there a relationship between the independent variable and the
dependent variable? Correspondence or correlation means that the cause and effect occur together within
some unit of analysis. For example, if being exposed to cold makes you more likely to become sick, then
people who are exposed to cold should be ill more often.

Time precedence means that the cause must occur before the effect. If one wants to say that being
well-educated causes one to earn a high salary, then the cause of being educated must precede
the effect of earning a high salary. A spurious or false relationship exists when what appears to be
an association between the two variables is actually caused by a third extraneous variable. This is
captured in the saying “Correlation does not imply causation” a phrase used to emphasize that a
correlation between two variables does not mean that one causes the other. For example, a false
correlation might be drawn between the amount of ice cream sold and the sale of sunglasses.
Again there is a hidden variable of temperature that is causing both to change together without
there being a direct cause and effect relation between them.
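
To make the role of the hidden variable concrete, here is a minimal sketch in Python (my own illustration, not from the text). The variable names and numbers are invented purely for demonstration: temperature drives both ice cream and sunglasses sales, so the two sales figures correlate strongly even though neither causes the other.

    import random
    import statistics

    def pearson(xs, ys):
        # Plain Pearson correlation coefficient.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(1)
    temperature = [random.uniform(5, 35) for _ in range(500)]        # the hidden third variable
    ice_cream = [2 * t + random.gauss(0, 5) for t in temperature]    # driven by temperature
    sunglasses = [3 * t + random.gauss(0, 5) for t in temperature]   # also driven by temperature

    print(pearson(ice_cream, sunglasses))   # high, despite no direct causal link

Holding the hidden variable fixed, for example by comparing only days of equal temperature, would make the apparent relationship between the two sales figures largely disappear, which is how the spuriousness criterion is tested in practice.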

Linear causality is a keystone of the analytical, reductionist approach, and the search for these
linear causes and effects has formed a central part of modern science. This process of inquiry is
conducted by defining a closed system and isolating it, then linear cause and effect interactions are
searched for as an explanation for how elements in the system behave. Within this paradigm
cause and effect relations are seen to move in one direction from the bottom-up, and not the
reverse direction; lower-level phenomena are seen to cause higher level events. For example, in
asking why the body functions as it does we would refer to the internal constituent parts of the
organs and tissues to derive an upward causal relation, instead of looking for a cause within the
system’s environment; which would be a form of downward causation. Linear causality leads to the
conception of determinism, in that it defines a closed system and reduces the number of causes to
a limited set acting in a single direction. Reductionism and linear causality try to reduce the cause
of an effect to a single or a limited number of determinants, and the fewer the component
determinants, the greater the determinism.

Nonlinear causality sees causation flowing in a bidirectional or multidirectional pattern. Nonlinear
causality involves cyclical processes where one thing impacts another which in turn impacts the
first; although this chain of events leading to feedback may be mediated through several events or
over a prolonged period. Nonlinear causality is part of the holistic, synthetic paradigm that looks at
systems within their context or environment. As such it is much more focused on downward
causation, looking at how the environment causes the behavior in the system. The holistic
paradigm posits that effects can be the product of many causes. To gain a full understanding of the
effect we need not drill down to find a single cause but instead look at multiple different factors and
how they interact to give rise to the outcome as an emergent phenomenon.

Whereas in the linear model the relationship between cause and effect is seen to derive from one
of the components affecting another, within the systems thinking approach the relationship
between the parts is seen to create the effect. For example, from this perspective, it is not that one
chemical substance causes another to react in a particular way it is, in fact, the type of relationship
between them that generates the emergence of a particular type of outcome.
With nonlinear causality, cause and effect can flow in both directions through time. However, this
requires some kind of control system to be involved. Purely physical processes result in a
unidirectional flow to causality; from the past to the future. But once there is a control system
involved this can define some future desired state – the goal – and then affect events in the
present based upon the future. For example, whether I spend lots of money now may be
contingent on whether or not I think I will get paid at the end of the week. Thus with nonlinear
causality causes for events may be derived – at least partially – from the future. But this would
appear to be only possible under the condition of goal-orientated behavior, where current events
are controlled by projections surrounding some goal in the future. In a coming section, we will dig
further into nonlinear causality as we talk more about indeterminism, downward causation, and
equifinality.
Linear Causality

Linear causality is a conception of causation as following a single, linear direction, i.e. event A
causes effect B, while B has no demonstrable effect on A. This unidirectional causation may be in
space as well as time, i.e. only incidents in the past can affect current events. Likewise, linear
causality implies a unidirectional flow to causation between the micro and macro levels of a system
where higher level effects are believed to be caused solely by lower level phenomena. Linear
causality can be interpreted in terms of four basic rules, namely unidirectionality, uniqueness,
necessity, and proportionality.

Linear causality interprets events in terms of a unidirectional unfolding of cause and effect as they
flow from the past to the future, as captured in the Axiom of Causality described above: if a given
event occurs, then it is the result of a previous, related event, and if an object is in a particular state,
then it is in that state as a consequence of another object interacting with it previously.
Uniqueness means that the same cause will lead to the
same effect. Uniqueness can be contrasted with two alternatives, one in which several different
causes lead to the same effect and the other in which the same cause may result in a variety of
different effects. Necessity means the same cause will produce the same effect always, without
exception.

Linear causality describes a relationship of proportionality between a given cause and a given
effect that stays constant over time. Small causes always produce small effects; large causes
always produce large effects. Because linear interactions exclude feedback, the intensity of the
effect will tend to be proportional to that of the cause. The magnitude of an effect is proportional to
the magnitude of its cause. In mathematics when two quantities are linearly proportional their
graph is a straight line with the slope representing the constant of proportionality. These rules give
linear causality the important property of additivity. The additivity principle means that if two
components, A and B are the cause of event C, contributing independently to creating that event,
then the total effect produced by this is the sum of their effects in isolation. However, additivity only
applies if the causes are independent of each other.
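
The proportionality and additivity rules can be stated compactly. Below is a brief sketch (my own, with arbitrary numbers) contrasting a linear response, which obeys both rules, with a nonlinear one, which violates additivity.

    def linear_effect(cause, k=2.0):
        return k * cause              # effect strictly proportional to cause

    def nonlinear_effect(cause):
        return cause ** 2             # compounding breaks proportionality

    a, b = 3.0, 4.0
    # Additivity holds for the linear response:
    print(linear_effect(a + b), linear_effect(a) + linear_effect(b))            # 14.0 14.0
    # ...but fails for the nonlinear one:
    print(nonlinear_effect(a + b), nonlinear_effect(a) + nonlinear_effect(b))   # 49.0 25.0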

Linear causality is part of the analytical, reductionist paradigm that searches for a single, or a
limited number, of lower level phenomena that can explain higher level events through the
interaction between discrete component parts. Linear causal thinking starts with the assumption
that events can be described by a limited number of causes. Analysis is conducted by breaking
down the system into component parts. Because effects are seen to derive from a single, or at
least small, number of causes, the specific interaction that we wish to investigate must be isolated
from external interventions from other variables in the environment. Smith (2011) identifies two
requirements for the establishment of linear causality: a stable association between variables and
the successful elimination of external factors. A stable association can be said to exist between two
variables X and Y if an ordered relationship between the two is robust, persistent over time and not
strongly influenced by other external variables.

Linear causal thinking is closely related to reductionism, which tries to explain the whole in terms of
its parts through causal laws. In other words, linear causal thinking focuses on the parts. Searching
for the cause means searching for that part of a system whose operation finally produced the
observed event. Within the reductionist paradigm, the cause is always seen to flow from the lower
levels to the higher levels; this is a tacit assumption typically held within physics: that higher level
biological and behavioral phenomena are derived ultimately from lower level physical causes.
Change in the universe is seen to be a result of the regular application of low-level physical laws.
Determinism

When we combine these factors – a limited number of variables causing an effect, as derived from
physical laws, where only the past can cause the present – the result is a deterministic
vision of the world.
If all events are cause and effect relationships that follow universal rules, then all events past,
present and future are theoretically determinate. This deterministic vision of the universe that is a
function of the linear analytical paradigm may also be called the clockwork universe. A metaphor
for a vision of the world as a big clock, where the parts are analogous to the cogs, interacting in a
linear fashion as determined by the laws of physics.
Limitations

The theory of linear causality, which dominated modern science from the time of Newton to the
early 20th century, has since been shown to be limited in its application. During the 20th century,
quantum theory and chaos theory presented fundamental challenges to the idea of determinism, in
that the state of subatomic particles within quantum physics is understood to be a product of a
probability distribution. When this is extrapolated to the macro level of the cosmos, it can be argued
that the quantum fluctuations near the origin of the universe, which seeded the specific formation of
galaxies over the following billions of years, mean that the universe is not strictly deterministic; its
evolution has been shaped in part by quantum probability distributions near its origin.

Likewise, by the latter half of the 20th century chaos theory had revealed a deep uncertainty in the
development of nonlinear dynamical systems. Added to this, the butterfly effect came to describe a
disproportionality between causes and effects within a broad class of systems. That being said,
linear systems theory remains relevant under certain conditions, but it is no longer
generalizable to all phenomena as previously believed.
Nonlinear Causality

As we talked about in a previous module nonlinear causality is a form of causation where cause
and effect can flow in a bidirectional fashion between two or more elements or systems and where
a single effect may have multiple causes. The essential characteristic of nonlinear causality is the
idea of feedback: a cause can create an effect, but equally, this effect can then feed back to
affect the original cause. Nonlinear causality can be contrasted with linear causality,
where causation flows in a single direction. Nonlinear causation leads to a number of
important outcomes that are not possible under the more simple circumstances of linear
causality.

Nonlinear causality can lead to self-reinforcing or self-amplifying processes through feedback, thus
allowing for disproportionality between initial cause and final effect. The second outcome of
nonlinear causality is the bidirectional flow of causation between the macro and micro levels within
a system, thus enabling downward causation. Thirdly, it can allow for reverse causality in time, in that
future goals can feed back to affect current events. Likewise, nonlinear causation implies the
action of many variables in creating an effect, or vice versa, which can lead to equifinality, the idea
that some end effect may be created or reached through a number of different pathways. Finally, in
contrast to linear causality, which creates a deterministic vision of the world, nonlinear causality may
lead to the indetermination of outcomes.

Feedback processes are a central characteristic of nonlinear causation. Events do not happen in
isolation but instead feed back to affect their source. Thus nonlinear causality can have the
characteristic of being self-perpetuating and self-referential. For example, when looking at some
relationship dynamic between a parent and child we might note that the child is uninterested in her
parents because her parents showed no interest in her, but equally, her parents showed no interest
in her because she appeared uninterested. This is a self-reinforcing loop enabled by nonlinear
causation. Likewise, the formation of hurricanes, financial crises, animal stampedes, or the
development of a culture would be other examples of processes driven by self-reinforcing cause
and effect.

Whereas linear causality is defined by a degree of proportionality between a given cause and its
effect, nonlinear causality driven by feedback can enable a disproportionality between cause and
effect; what is called the butterfly effect. Because of feedback loops, some small event can get
compounded through each causal feedback iteration enabling a rapid change in proportionality
between the input and output. The result of this can be widely divergent outcomes to some
situation depending on only small changes in input values, what is called sensitivity to initial
conditions.
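
A standard, widely used illustration of this sensitivity is the logistic map. The short sketch below (my own example, not from the text) iterates the same feedback rule from two starting values that differ by only 0.000001 and shows that tiny difference being amplified enormously.

    def logistic(x, r=4.0):
        return r * x * (1 - x)        # each new state is fed back from the current one

    x1, x2 = 0.200000, 0.200001       # two almost identical initial conditions
    for _ in range(50):
        x1, x2 = logistic(x1), logistic(x2)

    print(abs(x1 - x2))               # the microscopic initial difference has grown enormously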

Whereas linear causation is predicated on being able to isolate a single or small number of
variables causing a given effect, nonlinear causation does not try to reduce the number of causes
for a given effect, or vice versa, the number of effects derived from a cause. For example, if we took
the linear reductionist paradigm and asked the question of why an airplane flies, this approach
would try to reduce the cause to a limited number of direct physical interactions. In which case, the
answer would be traced back to the dynamics of the airflow around the wing as described by a few
variables within the Navier-Stokes equations of fluid dynamics.

Inversely, a nonlinear causal description may involve many different explanations of this question.
We might say the flight is caused by the pilot directing the plane to its destination. Or the flight is
caused by the fact that it was chartered to fly at a particular time. Or that it is flying because the
company can make a profit from putting on that flight etc. Without any of these factors, the flight
would not be happening.
When this non-reductive, more holistic interpretation of causality is taken, it is possible to see that
some effects are the product of an almost infinite number of interacting factors, and it stops making
sense to talk about a single direct cause. Instead, the language switches to that of emergence: asking
how many different factors interact in a specific fashion to create a given outcome, with emergent
phenomena creating higher level patterns that then feed back to exert an effect on lower level, more
elementary parts.

Whereas linear causality implies that causality flows from the bottom-up but not in the reverse
direction, nonlinear causality and the idea of emergence enable the interpretation of events as both
upwardly caused and downwardly caused, with causation flowing bidirectionally from the micro to
the macro and back again. To illustrate this, we might think about a polar bear and ask why is it
white? The reason it is white one might say is because of its genotype, which is a bottom-up
answer. But if we ask why its genes are such we would discover that they are a product of
evolution, which has selected for the color best suited to that environment. The polar environment
is white, so genes that produce a white bear have been selected for; if the bear lived in the Canadian
forests it would be brown. Thus we see downward causation acting, with the “cause” coming
from the environment to affect the state of the individual.

Likewise, the specific atomic nuclear interaction in the interior of a star at any time is determined by
where that reaction is taking place within the overall star. Thus the overall structure of the system is
affecting the specific phenomenon in a downward fashion. But again the atomic interactions affect
the whole, creating a nonlinear causal relationship between the system’s micro and macro level
with feedback. These patterns have downward causal efficacy in that they can affect which causal
powers of their constituents are activated, and this has significant implications for our conception of
determinism. Both top-down and bottom-up causation can occur at the same time.

Whereas linear causality and upward causality lead to the vision of a deterministic world,
nonlinear causality leads to a greater capacity for indeterminism. Indeterminism is the concept that
events are not caused, or not wholly caused, deterministically by prior incidents. With linear
causality, a past cause creates a current effect in a direct fashion. With nonlinear causality, a cause
may create an effect, but because of the top-down and bottom-up bidirectional flow to causality, the
context and conditions of this low-level interaction are conditioned by higher level phenomena,
meaning the overall outcome is a product of this more complex interaction between low-level
cause and effect and the upper-level organization that sets the context, thus enabling a much
greater opportunity for indeterminism.

The philosopher Robert Van Gulick describes this phenomenon as such: “A given physical constituent
may have many causal powers, but only some subsets of them will be active in a given situation.
The larger context (i.e. the pattern) of which it is a part may affect which of its causal powers get
activated. . . . Thus the whole is not any simple function of its parts, since the whole at least
partially determines what contributions are made by its parts.” Thus with nonlinear causality, the
causes of events are not directly determined by preceding events, but rather emerge out of the
bidirectional exchange between the conditions set by the overall system and the local interactions.

With nonlinear causality, a single cause can have many effects, such as a nerve cell sending out
many impulses, or inversely many causes can have a single effect such as a hurricane being the
product of temperature, pressure, humidity, etc. This leads to the idea of equifinality, which is the
principle that in open systems a given end state can be reached by many potential means. The
term and concept are derived from Hans Driesch, the developmental biologist and later applied by
Ludwig von Bertalanffy, the founder of general systems theory.

Some systems have more than one pathway or process for achieving a given goal. This increases
the likelihood that the system will achieve its goal under varying environments and circumstances. If
one subsystem is damaged, or if environmental circumstances change significantly, the presence
of multiple mechanisms or pathways thereby increases the likelihood that the varying conditions
can be adapted to or overcome.
For example, the human immune system has both a preexisting component and induced antibody
component to respond to foreign “invaders.” Some organisms have multiple pathways and utilize
them in different environmental circumstances. These redundant systems accomplish the same
end but do so with more than one similar or equivalent channel.
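
Equifinality can be sketched with a toy goal-seeking process (my own illustration, with arbitrary numbers): wildly different starting states, following different trajectories, all arrive at the same end state.

    def seek(state, goal=37.0, rate=0.5, steps=60):
        # Close part of the remaining gap to the goal on every step.
        for _ in range(steps):
            state += rate * (goal - state)
        return state

    for start in (-100.0, 0.0, 250.0):
        print(start, "->", round(seek(start), 6))   # every start converges on 37.0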

Our conception of time is closely connected to our understanding of causality. A linear conception
of causality leads to a unidirectionality to cause and effect in time, and this is a fundamental
component in structuring our understanding of past, present, and future. When
considering only matter and energy within a purely physical system this unidirectionality to
causality with respect to time may hold. However, when we introduce information into the model it
now becomes possible for future events to feedback to affect current events, thus enabling reverse
causation.

Information encoding and processing is an essential feature of biological systems that
differentiates them from purely physical systems and allows their departure from deterministic
physical causality. Many entities have control systems that enable them to process information;
examples include animals, people, social institutions and various kinds of technology. With a
control system the structure and initial conditions may not matter; what matters is the goal, and
future goals can determine current actions. Here we can note causality reversing, as it goes from some
projected future event back to affect the present state. The human body acts out this process
virtually every waking second, in that we typically formulate some goal before initiating any action,
such as desiring the future state of eating a meal before acting out the process of cooking. In a purely
physical system the structure and initial conditions determine the outcomes as governed by equations:
past causes produce current effects going forward. But with a control system, information defines the
system’s desired behavior or response, and thus cause and effect are contingent on higher level
information whose cause is derived from some future projection.
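
As a rough sketch of this idea (my own example, not from the text), consider a thermostat-style controller: the action taken now is selected by comparing the present state with a desired future state, the goal, rather than being dictated by past states alone.

    def thermostat_step(current_temp, goal_temp=21.0):
        # The present action is derived from the projected goal state.
        if current_temp < goal_temp - 0.5:
            return "heat on"
        if current_temp > goal_temp + 0.5:
            return "heat off"
        return "hold"

    for temp in (18.0, 21.2, 23.0):
        print(temp, "->", thermostat_step(temp))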
Relational Paradigm

Relational thinking is a way of seeing the world that places greater precedence on the relations or
connections between entities rather than simply looking at those entities as discrete. The main
overarching principle in the relational paradigm is a shift in one’s perception from seeing a fixed
world, made up of things and their properties, to seeing a world that is primarily made of relations
and connectivity. Throughout the sciences, relational theories in general, are frameworks for
understanding physical or social systems in such a way that the elements of interest are only
meaningful in relation to other entities, that is to say through their relations. With the rise of network
theory and computation, relational approaches have developed within many different areas over
the past few decades. There are now relational theories within physics, sociology, psychology,
international relations and many more.

The traditional analytical approach taken within modern science is to understand the world through
focusing on the properties of discrete parts, holding these as ontologically primary i.e. separate
objects or entities are what are seen to be real, and one must have these entities before one can
have relations between them. In a paper entitled “What is Relational Thinking?” Didier Debaise
describes this component-based paradigm as such “A paradigm which has crossed modernity and
which deploys itself, more or less implicitly, at every level of knowledge, in the orientations given to
practices, in the way of relating to experience. This paradigm is that of ‘being–individual.’ One can
say, very schematically, that modernity will have been... a research almost exclusively on the
conditions of existence, the reasons, modalities, and characteristics of the individual, granting it,
implicitly or explicitly, an ontological privilege to the constituted individual.”

Reductionism is an attempt to trace back all phenomena to basic elementary parts and how those
parts generate direct cause and effect interactions. A classical example of this is methodological
individualism, which is the requirement that causal accounts of social phenomena explain how they
result from the motivations and actions of individual agents, at least in principle. The analytical
reductionist paradigm excludes the idea that relations – how two or more things interact – can
themselves be a source of influence, shaping those elementary parts.

The synthetic approach of systems thinking is instead focused on the relations between the parts,
it posits that the relations between parts – and whole networks of relations that form the context –
can and do shape the constituent parts in a two-way reciprocal relation. Within the systems
paradigm, “causes” are not traced back to the properties of component parts but instead are seen
to derive from the relations between things. In particular, how whole networks of interconnections
that form the context or environment can shape the individual parts. For example, two objects may
have a particular color when looked at in separation, but when we place them side-by-side in
relation to each other, the initial objects’ colors may appear different. The properties of the objects’
colors have not changed, but they appear different because a relation has been added to them and
it is this relation that is affecting the perceived properties of the elements. As another example, one
could cite global cities like Singapore or Dubai, which are a product of the context of globalization:
they are enabled and defined by global networks of connection, air traffic, logistics and financial
networks, etc. These global cities are not entirely created by the local context but instead are
defined largely by the global connections that shape them.

One of the most extraordinary examples of this within physics is quantum entanglement, where two
particles become “entangled” with each other, meaning their spin, position or other properties
become linked or interdependent. If one then makes a measurement of one particle, that measurement
instantaneously determines the other particle’s state. There is no manifest interaction between them; instead it is this relationship of
entanglement that alters the properties of the parts. These examples help to illustrate how certain
properties, features, and dynamics only emerge out of the interaction between things, and they are
governed by the nature of those interactions; thus it is important to use a relational paradigm to
understand these phenomena.
The systems paradigm argues for a balance between the analytical approach, focused on parts,
and the synthetic approach focused on relations. An overemphasis on the parts can lead to a
narrow process of reasoning that creates its own limitations. For example, by focusing on the
individual parts of society we derive the conception of the rational individual, i.e. the individual that
is driven solely or primarily by their own internal logic. This understanding of the rational agent then
leads to defining any human action that is not logically reasoned through by the individual – which
is a large section of human behavior – as irrational and unexplainable.

Most humans do not rationally and logically reason through what they choose to do or believe.
Instead, they act based on the context and their connections with others. People adopt a particular
belief or opinion because it fits in with their culture – their connections with others that form their
socio-cultural context – and these choices are not derived from the individual making a logical
decision. Thus, simply trying to understand human behavior as a function of the internally generated
motives of the individual is very much limited. Likewise, trying to describe all phenomena in terms of
connections and context results in an equally limited perception. To gain a complete understanding
of some phenomenon, the relational paradigm must be used to complement and balance
the component-based perspective.

The relational paradigm leads to an inversion of our traditional conception where discrete entities
exist within an inert space creating actions, interactions, and relations. From the relational
perspective, relations are what define how entities act and react; it is the network of connections
around an entity that creates the context for its behavior or form. For example, when looking at a
sculpture, we often assume it is simply the inherent properties of that item that define it. But a
sculpture is made up of what is called positive and negative space. Positive space refers to the
object itself, while negative space is the space around the sculpture that gives it form, context and
through which we interpret it.

This is analogous to the shift in perception brought about by the rise of modern physics. Newtonian
classical physics saw the environment of space and time as essentially absolute, exerting no
influence; it was the object that effected change and interaction. General relativity changed this,
offering a new perspective where space and time are a fabric and events are a product of an
interaction between the object and this space-time fabric. Here again, the interactions and context
are active agents in shaping events and outcomes; we cannot reduce everything to a description
of the parts.

The relational paradigm fundamentally alters our perception of space. The traditional component-based
conception of space is relative to objects and their physical extension, which creates a
three-dimensional Euclidean space; one that is absolute in that it just exists and is not affected by
the changes in components or connections. The relational paradigm alters this perception
of space: relations are defined by exchanges of some kind, and along every connection
something is transferred – information, money, heat, light, trust, etc. How “close” things are to each
other is relative to how easily something can be exchanged through the connection. The easier it is
to exchange something between two elements the “closer” they are, and thus within this paradigm
there is no absolute space; instead, distance is defined by connections and ease of transfer.
The same applies to any form of network of connections, such as a transportation network whose
major hubs will in effect be “closer” than other less well-connected nodes because of their high
degree of connectivity, not because of their distance within some absolute form of space.
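
The idea of distance being defined by connections rather than physical extension can be made concrete with a small, entirely invented network (my own sketch; the cities and edge weights are arbitrary). “Distance” here is the cheapest path of exchange through the network, computed with Dijkstra’s algorithm, so a well-connected hub ends up “closer” than a poorly connected neighbour.

    import heapq

    # Edge weights stand for the cost of exchange along each connection.
    network = {
        "Singapore": {"Dubai": 1, "Sydney": 1},
        "Dubai": {"Singapore": 1, "London": 1},
        "Sydney": {"Singapore": 1, "RemoteTown": 5},
        "London": {"Dubai": 1},
        "RemoteTown": {"Sydney": 5},   # physically near Sydney, but poorly connected
    }

    def relational_distance(source):
        # Dijkstra's shortest path: distance measured by ease of exchange.
        dist = {source: 0}
        queue = [(0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > dist.get(node, float("inf")):
                continue
            for neighbour, cost in network[node].items():
                nd = d + cost
                if nd < dist.get(neighbour, float("inf")):
                    dist[neighbour] = nd
                    heapq.heappush(queue, (nd, neighbour))
        return dist

    print(relational_distance("Singapore"))
    # London, two well-connected hops away, comes out "closer" than RemoteTown.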

The relational paradigm has found application in many areas of science and we will briefly highlight
some of its main applications. In physics, a relational theory is a framework for understanding
physical systems in such a way that the positions and other properties of objects are only
meaningful relative to other objects. In a relational space-time theory, space does not exist unless
there are objects in it; nor does time exist without events. Space can be defined through the
relations among the objects that it contains considering their variations through time.
Leibniz’s relationalism is one example of this, describing space and time as systems of relations
that exist between objects. The alternative spatial theory is an absolute theory, in which space
exists independently of any objects that can be immersed in it.

Relational sociology is a collection of sociological theories that emphasize relationalism over
substantivalism in explanations and interpretations of social phenomena. Pierpaolo Donati – known
as the founder of ‘relational sociology’ – writes about relational sociology as such: “More or less
implicitly, the observer (the social scientist) takes for granted that the concept of relations qua talis
is not of first importance, but must come ‘after’ the terms that it connects. This means that social
relations are viewed as a product of individuals and of social structures, as something that comes
after them. On the contrary, the relational paradigm affirms: In the beginning, there is the relation!”

Relational psychoanalysis is a relatively new and evolving school of psychoanalytic thought
considered by its founders to represent a “paradigm shift” in psychoanalysis. An important
difference between relational theory and traditional psychoanalytic thought is in its theory of
motivation, which would assign primary importance to real interpersonal relations, rather than to
instinctual drives. Freudian theory, with a few exceptions, proposes that human beings are
motivated by drives that are biologically rooted and innate. Relationalists, on the other hand, argue
that the primary motivation of the psyche is to be in relationships with others. Relationalists posit
that personality emerges from the matrix of early formative relationships with parents and other
figures.

Complex interdependence, in international relations, is the idea that states and their fortunes are
inextricably tied together through a complex set of interdependencies that have grown up with the
rise of global interconnectivity on different levels – what is called globalization. Complex
interdependence theory was a major challenge to fundamental assumptions of traditional and
structural realism which focused on military and economic capabilities to explain the behavior of
states. Complex Interdependence, on the contrary, highlighted the emergence of transnational
institutions through global relations and their capacity to affect the nation states.
The relational paradigm has been applied to many other areas and offers a fresh way of looking at
age-old questions that can lead to entirely new, out-of-the-box insights and solutions, in that it
represents a complete shift in our perception toward the nexus of relations and context that surrounds
objects and events.
Interdependence

Interdependence is one of the central concepts within systems theory in that most definitions for a
system involve the idea of interdependency between a set of parts. Indeed the idea of
interdependency is often the central concept that is used to define a system, in that without
interdependency between parts there is no system just a set of independent elements.
Interdependence is a type of connection or relation between elements; relations may be defined as
either dependent, codependent, independent or interdependent. In systems theory,
interdependence is defined in slightly more specific terms than in the everyday usage of the term.
Here it means an interrelationship between autonomous elements through the formation of a
combined, emergent organization. The essence of interdependence involves autonomy,
differentiation, and emergence: two or more autonomous elements coming together, differentiating
their states with respect to each other, to create some combined organization that is greater than
any of the parts, through the process of emergence.

Before there can be any form of interdependence, there must be connectivity between the parts.
Connectivity and interdependence are two distinctly different phenomena. Connectivity defines an
exchange of some kind, whereas interdependence describes some form of dependence within
that exchange. Although they are different ideas, connectivity is a prerequisite to interdependence
and connectivity is a driver of interdependency. There may be other factors that change the degree
of interdependence within a system, but connectivity is a primary factor. As connectivity increases,
this presents more channels or mediums for the development of interdependencies.
If two or more parts of a system are highly interdependent, they must engage in a large number of
interactions. If no interactions occur between the parts of a system, they are not interdependent
and therefore they do not make up a system.

Dependence defines the state of relying on or being controlled by someone or something else. The
flow of dependence in this relation is unidirectional. One element is dependent on the other without
the other element being dependent on the first. For example, in mathematical modeling, there are
dependent and independent variables. Many scientific and mathematical models investigate how
the former depend on the latter. The dependent variables represent the output or outcome whose
variation is being studied. The independent variables represent inputs or causes, i.e. potential
reasons for a change. Planet Earth's relationship with the sun is one of dependence. The Sun can
be said to be independent of the Earth, but Earth's whole state and functioning would change
dramatically without the Sun.
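
A minimal sketch of this distinction (my own example, with arbitrary numbers): the independent variable is the input we choose to vary, while the dependent variable is the outcome whose variation we study.

    def model(x, slope=2.0, intercept=1.0):
        return slope * x + intercept   # y depends on x, not the other way around

    for x in (0, 1, 2, 3):             # vary the independent variable
        y = model(x)                   # observe the dependent variable
        print(x, y)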

Codependency describes a relation where each element is dependent directly on the other to
maintain their state or operation. Codependence often implies a power imbalance between the two
components within the dynamic, as one element becomes dominant over the other. Even though
one part is seen to have manifest power or influence over the other, the dependency still operates
in both directions; the influencer is still dependent in some way on the influenced.
For example, in an absolute monarchy or dictatorship, the socio-political dynamic will often be
presented as one of dependence, where the people are dependent on the sovereign or dictator to
rule them. Power and influence will appear to flow in one direction, with the ruler being
independent to do as they want. But of course this is not the case: without the population, and without
the population being submissive to the ruler, the ruler would not exist. Thus the ruler is equally
dependent on the population, but this only becomes manifest when the people rebel and reveal the
underlying dynamic of codependence.
Within psychology, codependency is characterized by a person belonging to a one-sided
relationship where one person relies on the other to meet some or many of their emotional and
self-esteem needs, or enables the other person to maintain their irresponsible, addictive, or
underachieving behavior. But equally, over time the other person becomes dependent on their role
in supporting that person. Dependence and codependence are essentially the same thing, the
only difference being whether the flow of that dependence is unidirectional or bidirectional.

Independence within a connection relates to a lack of any form of dependence within the relation,
each element may engage in the connection but is not significantly affected by it. Independent
elements act autonomously. The components within an independent connection are not defined,
driven or significantly influenced by the other items in that connection. This type of relation may be
defined as an exchange, which is the act of giving one thing and receiving another. In a connection
between independent agents, exchanges are made based upon the perceived interests and logic
governing each actor. For a connection to be one of independence, the members must be acting
autonomously.

An independent relationship requires both freedom and autonomy on the part of the members engaged in the relation. Freedom defines the ability to act without internal or external constraints. While the term freedom is often understood in terms of the capacity to make external choices, the term autonomy covers not just the actions that someone takes, or appears to have available to them, but also the motives behind their actions and reasoning. For actions to be autonomous they must be a product of an individual's capacity to govern themselves. The coherentist approach to autonomy states that an agent governs their actions, and thus acts autonomously, if and only if their motives cohere with some internal logic, set of rules or beliefs defined by that actor. As long as the agent acts in a way that is in agreement with their own beliefs, desires, and logic, then they can be said to be acting autonomously.

Added to this, the set of instructions under which an actor is operating must also be based on
sound reasoning. Actors do not truly govern themselves unless the motives or mental processes
that produce their actions are responsive to a sufficiently wide range of reasons for or against a
particular belief, value or action. The individual needs to be aware of the reasoning, information,
and motives behind their operations to be acting independently. If these are constrained, then their
autonomy will be likewise constrained. For example, a person who is exposed to only a limited
number of media channels which present only a single perspective on a political subject, cannot be
said to be acting fully independently when it comes to casting their vote, as in such a circumstance
their reasoning has not been a product of a balanced consideration of all the relevant facts.

Interdependence is inherently complex in that it involves a dynamic of both dependence and autonomy on different levels. As the author Stephen Covey wrote, "Interdependence is a choice only independent people can make." Interdependence is what happens when autonomous elements
interact and enable the emergence of an overall system. The concept of interdependence differs
from the reliance in a dependent relationship, where some elements are dependent on others. With
interdependence the parts are not dependent on each other directly; they are, instead, interrelated through enabling the emergence of some overall combined organization.

Interdependence involves the emergence of some overall combined organization, where individuals retain their autonomy with respect to each other but are interdependent with respect to the overall combined organization. Each component has to differentiate its function and state with regard to the other components in order to create a combined overall system. It is then this relationship of interdependence that shapes the constituent elements' properties, and thus change in the parts is not seen to be generated by any elementary part but is instead defined by their interdependence with the whole.
When two or more elements work together to form some combined organization they differentiate
their states with respect to each other in order to create synergies and the emergence of a new
level of organization. This emergent organization is not associated with the individual properties of
any of its parts; thus no part is directly dependent on any other, and all are shaped and formed by the
whole organization. One example of this would be a cooperative enterprise, which is an
autonomous association of people united voluntarily to meet their common economic, social and
cultural aspirations through a jointly owned and democratically controlled business.
Integration

Integration means the bringing together or connecting of things. It is the act of combining or adding
parts to make a unified whole. As such it can be defined as the opposite of disintegration or differentiation, which means to “set apart.” The overall degree of integration of a system can be defined in terms of the integrity of the network of connections between its parts. The systems
paradigm looks at the world in terms of relationships and integration. Systems are integrated
wholes whose properties cannot be reduced to those of smaller units. Instead of concentrating on
basic building blocks or substances, the systems approach emphasizes the principles of
organization; how the components are integrated into whole patterns of organization.

At a low level of connectivity what defines an entity is simply its set of elements. But as the degree
of connectivity is turned up it is the connections between the parts that come to define the whole
organization as a system. Thus what defines a system is the degree of connectivity, exchange, and
interdependency between parts. At a low level of connectivity and integration, the system’s parts
define the relations and the whole. But in integrated systems with a dense network of connections,
this is inverted as the parts come to be shaped by the connections and the whole.

The degree of integration of the connections within a system is important because it defines how
unified that system is. The connections within a system enable the flow of some resource through
those connections; it is this flow of resources through the system that binds it into an
interdependent whole. It is the movement of blood through the network of veins within the body
that is a primary integrating factor to the whole system. Likewise, it is the flow of communications
through a nation’s broadcast media that binds a modern nation state into a single, integrated
sociocultural unit. Social capital likewise can be understood as the number of connections within a
community. A strong community is an integrated network of connections along which resources
flow and which enables the community to experience itself and operate as an entirety.
Every new connection made within the system allows some resource to flow more efficiently, whether this is goods moving within a national transportation network, information flowing more freely around the world through telecommunication links, or resources circulating within an ecosystem through the exchange between creatures. The more the connections, the greater the integration and the more the organization will form a unified system.
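As a minimal sketch of this idea, the hypothetical Python snippet below represents a system as a small undirected network and uses two simple measures, density (the share of possible connections that actually exist) and connectedness, as rough proxies for its degree of integration; the parts and edges are invented purely for illustration.

# A minimal sketch: reading integration off the connectivity of a small network.
# The parts and connections below are invented purely for illustration.
from itertools import combinations

parts = ["A", "B", "C", "D"]
connections = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")}

def density(nodes, edges):
    """Share of all possible pairwise connections that actually exist."""
    possible = len(list(combinations(nodes, 2)))
    return len(edges) / possible if possible else 0.0

def is_connected(nodes, edges):
    """True if every part can be reached from every other part."""
    neighbours = {n: set() for n in nodes}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    seen, stack = set(), [nodes[0]]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(neighbours[n] - seen)
    return seen == set(nodes)

print(density(parts, connections))       # higher density suggests tighter integration
print(is_connected(parts, connections))  # a disconnected network has disintegrated

Removing connections lowers the density and can eventually split the network apart, which mirrors the account of disintegration given below.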

Disintegration may be understood as the opposite of integration, as it defines the breaking up or removal of connections and a reduction in overall integration. As a system becomes disintegrated its relations are reduced, and parts become disconnected. No longer interdependent, the system returns to a simple set of components without unity. From this perspective, when looking at the
difference between a functional community and a dysfunctional urban ghetto, we would note that
there is some integration within the first social network that enables the flow of resources between
members of the community, while the dysfunctional community would represent a disintegrated
network that inhibits the flow of these resources and the overall functionality.

Although disintegration can appear to be solely dysfunctional, it plays a major role in the development of a system; without disintegration there cannot be reintegration. For example, on
the level of the individual, this dynamic process of integration and disintegration is captured in
psychology under the term positive disintegration. Unlike mainstream psychology, this theoretical
framework views psychological tension and anxiety as necessary for growth. These disintegrative
processes are therefore seen as positive, whereas people who fail to go through positive
disintegration may remain in a state of primary integration.
The connections within a system and its overall integration enable system-wide processes to take
place. Through the connections, the parts of a system can become interrelated in performing some
common function. For example, the human digestive system is a set of components that are
integrated through a nexus of connections to perform one overall macro operation of processing
inputted food into nutrients to be circulated. Or for example, when a business is operating as an
integrated system, production processes can take place that span the entire organization. The
system’s functionality may be reduced by some part not functioning properly or through lack of
interoperability between the elements, leading to disintegration. Integration through connectivity
then forms the foundations of the process of emergence. To achieve emergence within a system,
the parts must be integrated so that a global process can take place through those connections. As
another example of this, through globalization – which is the process of international integration –
we are witnessing the rise of global processes, such as production and logistic processes that
require the integration of economies and organizations across the world.

Integrity is a defining factor in the autonomy of a system. Since what defines a system is the pattern of connections between its parts, the greater the interconnections and interdependence between the elements within a system, the more it can function as a coherent, integrated whole, defining its autonomy from its environment. This exchange between the parts enables processes to take place
within the system that are autonomous – to some degree – from other systems and the
environment. Within an integrated community of people, there will be certain processes that take
place making it a functional and autonomous society, with beliefs, social institutions and economic
activity being interrelated to form a coherent society. Dependence may then be understood as the
opposite of autonomy, and thus a lack of self-contained integration. Without appropriate
connections between its parts, the system requires more connections to other entities within its
environment to enable its functional processes to take place.

Integrity is a defining factor in autonomy. The word integrity evolved from the Latin adjective
“integer,” meaning “whole” or “complete.” Integrity is the state of being integrated into a whole. An
individual’s personal integrity, for example, is their capacity to define a set of moral rules and code
that are coherent and to act in agreement with them. Integrity in this sense is generally understood
as a personal choice to hold oneself to consistent moral and ethical standards. Integrity stands in opposition to hypocrisy, where hypocrisy means a lack of integration between one’s stated values and actions. Hypocrisy implies that a party holds within themselves conflicting values and actions, and thus there is a lack of integration. When someone acts based on integrity they act according to
some coherent set of rules, and this enables their autonomy from contingent events that are
governed by a different set of rules. In this integrity of acting consistently under the same set of rules, the individual defines their autonomy and earns trust from others. With integrity, others can trust that the person will continue to operate under the same consistent set of rules in the future; in this way others feel they know how the person will act and can count on them to act based upon those rules.

Integration and disintegration form a dynamic process through which a system develops to become
part of larger systems and environments, as the integration of a system on one level must become,
at least partially, disintegrated to promote integration on other levels. Integration represents a
unique set of interrelations between a group of parts that defines them as in some way
autonomous from other systems and their environment. But for a system to interoperate with other
systems and form part of a more extensive system, some of the connections in the system will
become compromised or redundant. For example, as a traditional community becomes integrated
into a modern nation state, some of the social, economic or cultural connections will be replaced by those forming with the larger society, thus at the same time integrating the smaller subsystem into the larger organization of the parent society while also working to disintegrate local connections within the community. This new set of links then works to reduce the integrity of the original system, in that the links are governed by a different protocol and set of rules, as defined by the larger
organization. Likewise, it works to reduce the original system’s autonomy in that it now becomes
more dependent on the broader system, in some way.
Process Thinking

Process thinking is a way of interpreting events in terms of the processes of change that create
them. It focuses on the nonlinear dynamics of change over time that create certain patterns out of
which events emerge. Process thinking involves considering phenomena dynamically – concerning
movement, activity, events, change and temporal evolution. Ludwig von Bertalanffy noted how
systems theory related to Heraclitus’s perception of “panta rhei” meaning “everything flows.” He
wrote that from the Heraclitean and systems view, “structure is a result of function and the
organism resembles a flame rather than a crystal.”

Such an approach to the understanding of phenomena draws its inspiration from process
philosophy. Process philosophy is based on the premise that being is dynamic and that the
dynamic nature of being should be the primary focus of any comprehensive account of how the world works. This paradigm draws on a tradition of thinkers from Heraclitus to twentieth-century process philosophers such as William James, Henri Bergson, Alfred North Whitehead and beyond, all of whom, in various ways, viewed reality in terms of ceaseless process, flux, and transformation
rather than as a stable world of unchanging entities.

Process thinking is a central part of the systems paradigm. Systems thinking is process orientated,
where the world is perceived in terms of processes of change, instead of static events. It adopts a
more process-based ontology, meaning that within the systems paradigm objects are not seen to
create change through direct, cause and effect, discrete interactions, but instead processes are
seen to have internal patterns that generate and condition events. This is an inversion of our
traditional conception that sees objects as having precedence over processes of change.
Process thinking can be contrasted with a more static way of thinking that sees events as
generated by linear cause and effect relations between a system’s component parts. Even though
we experience our world as continuously changing, the modern analytical paradigm has long
emphasized describing reality as an assembly of static events whose dynamic features are taken
to be ontologically secondary and a derivative of the interaction between elementary parts. The
analytical process of reasoning that breaks systems down to understand their internal parts leads
to a detailed description of a system’s constituent components and a static understanding of its
structural properties.

Systems thinking is focused on open systems within the context of their environment. A key
consideration is how systems change with respect to the changes within their environment. This leads to the idea of adaptation and evolution, where changes in the environment feed back to affect
the system which must then adapt to those changes. In this way, the system can be continuously
evolving to meet the changes within its environment. The analytical approach, in contrast, is
focused on closed systems with limited regard for the system within its environment. A closed
linear system can only change by generating different configurations of its internal parts. With a
limited amount of interacting parts, there is a finite number of possible future states. In a closed
linear system there is limited possibility for emergence and thus the future resembles the past, the
future can be modeled and understood as some permutation of the past. With a small number of
interacting elements in a closed system, sooner or later the system will have cycled through every
possible configuration of its internal parts and then the future will involve revisiting previously
experienced states.
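As a minimal sketch of this point, the hypothetical snippet below steps a small closed, deterministic system through its update rule; because the number of possible configurations is finite, the trajectory must eventually revisit an earlier state and from then on simply repeat. The state space and update rule are arbitrary, illustrative choices.

# A minimal sketch: a closed, deterministic system with a finite state space
# must eventually revisit an earlier state and then cycle forever.
def step(state):
    a, b = state
    return (b, (a + b) % 5)  # only 25 possible (a, b) configurations exist

state = (1, 1)
first_seen = {}
t = 0
while state not in first_seen:
    first_seen[state] = t
    state = step(state)
    t += 1

print(f"state {state} first seen at t={first_seen[state]}, revisited at t={t}")
# From this point on, the future is a repetition of previously experienced states.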

Linear thinking sees events happening through cause and effect interactions. One thing causes
another as a discrete event. Nonlinear closed-loop thinking skills lead one to see causality as an
ongoing process, rather than a one-time event. Events feed back on themselves, meaning that the history of past events matters as it feeds into and shapes current and future events within overarching processes of change. Over time these feedback loops form recurring patterns, what are called system archetypes.
As such, how things come to be constituted, reproduced, adapted and defined through ongoing
processes is seen to be central; issues are always seen to be relative to time and framed in terms
of patterns of behavior over time. Events then, are not seen simply as the product of discrete
cause and effect interactions at any given time, but are a product also of larger patterns – the
archetypes – that condition the context within which those interactions take place. Process thinking
encourages people to use the historical trajectory for stimulating and guiding inquiry into underlying
relationships that produce events.
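As a minimal sketch of such a pattern, the hypothetical snippet below simulates one commonly cited system archetype, "limits to growth", as a simple difference equation: a reinforcing feedback loop drives growth while a balancing loop tied to a carrying capacity gradually slows it. The growth rate and capacity values are invented purely for illustration.

# A minimal sketch of the "limits to growth" archetype as a difference equation.
# A reinforcing loop (growth proportional to the current level) is coupled with
# a balancing loop (growth is damped as the level nears a carrying capacity).
# The parameter values below are arbitrary, illustrative choices.
growth_rate = 0.5
capacity = 100.0
level = 1.0

for t in range(20):
    reinforcing = growth_rate * level     # pushes the level up
    balancing = 1.0 - level / capacity    # approaches zero near the limit
    level += reinforcing * balancing
    print(f"t={t:2d}  level={level:6.2f}")

# Early on the trajectory looks exponential; later the balancing loop dominates
# and the level plateaus, giving the characteristic S-shaped pattern of behavior over time.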

The idea of synergies and nonlinearity makes possible a conception of emergence, the idea that
interactions between the parts may create something new. Emergence describes a process of
development whereby many parts interact in a nonlinear fashion to create something that is more
than the sum of their parts. In fact, it typically produces novel, unpredictable and unexpected
phenomena. The internet revolution is a good example: one could not have fully anticipated that connecting all these computers together would give rise to the emergence of social networking, the app economy, cloud computing and all the innovations built on top of this.

Emergence is a process of becoming; the emphasis within the systems paradigm is on the process through which new entities become formed, rather than on analysis of the structure of what already exists. Linear systems – such as a pendulum – are not in a state of becoming; they have a finite number of interacting components that cycle through a predetermined set of states, and by understanding the structure we can understand the states the system will exhibit. However, more
complex nonlinear systems – such as a bird – go through a constant process of becoming whose
endpoint is not determined yet, but which sets the context for current events. The analytical
paradigm is based on a substance metaphysics, which goes back to the pre-Socratic Greek philosopher Parmenides. Substance metaphysicians claim that the primary units of reality (called “substances”) must be static: they must be what they are at any instant in time. In contrast, process philosophy sees becoming, as well as ways of occurring, as central to
any inquiry.
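As a brief worked contrast, the idealized small-angle pendulum illustrates how a linear system's future is fixed by its structure. Treating this as a standard textbook sketch (ignoring damping and large-angle effects, and assuming release from rest at an initial angle \theta_0), the equation of motion and its solution are:

\ddot{\theta} + \frac{g}{\ell}\,\theta = 0
\quad\Longrightarrow\quad
\theta(t) = \theta_0 \cos\!\left(\sqrt{\tfrac{g}{\ell}}\; t\right)

Knowing the structure (the length \ell, gravity g and the initial angle \theta_0) is knowing every state the pendulum will ever exhibit; nothing new can emerge. A bird, by contrast, is an open nonlinear system whose future states cannot be read off from its present structure in this way.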
Integration and differentiation

Integration and differentiation represent two different stages during the evolution of a system.
Differentiation is the process whereby an integrated system becomes divided up into more
specialized, well-defined parts. Integration is the process whereby diverse elements become
combined or synthesized into a whole system. The process of evolution involves a dynamic
interplay between systems differentiation, where new different elements are created, and systems
integration where those elements that are best suited to the whole system are selected by the
environment as the system becomes reintegrated.

Differentiation means the process of becoming or making something different. Differentiation involves a process of disintegration, the dividing up of an integrated system into more specialized
subsystems. Through the process of differentiation what was originally a homogenous system
becomes heterogeneous as its constituent subsystems come to form their own identity and
structural features that are distinct from each other. Differentiation enables specialization. The
formation of separate individual subsystems enables those components to focus more intensely on a particular function or activity, thus allowing them to become more efficient at this activity than if they had to perform a large number of diverse activities. System differentiation is likewise a
structural technique for solving the temporal problems of a system situated in a complex
environment.

System differentiation is understood within systems theory as a way of responding to and dealing
with the complexity of the system’s environment. This is accomplished through the creation of
subsystems in an effort to copy within a system the difference between it and the environment. The
differentiation process is a means of increasing the complexity of a system since each subsystem
can make different connections with other subsystems. It allows for more variation within the
system to respond to variation in the environment.

The development of modern societies into many different specialized institutions from what were
largely homogeneous organizations within pre-modern societies is an often cited example of
differentiation. One of the central ideas of the systems sociologist Niklas Luhmann was that
modern society is differentiated into various self-referential functional subsystems which operate
according to their own particular logic without being subordinated to a central unit. They are open for exchange with each other but also interdependent, while being largely responsible for their own functioning and development. Biological differentiation is the process by which cells or parts of an organism change during development to serve a particular function. For example, the cells of an animal in its early embryonic phase are identical at first but develop through differentiation into specific tissues, such as bone, muscle, and skin.

Differentiation within a material is any process in which a mixture of materials separates out
partially or completely into its constituent parts, as in the cooling and solidification of a magma into
two or more different rock types or in the gradual separation of the originally
homogeneous earth into crust, mantle, and core. Likewise within economics, product differentiation
is a marketing process that showcases the differences between products. Differentiation looks to
make a product more attractive by contrasting its unique qualities with other competing products.
Integration

Systems integration involves the interrelating and recombining of differentiated parts.
Systems integration is the composition of a whole functioning system by assembling elements in a
way that allows them to work together to achieve an intended purpose. In engineering, for
example, system integration is defined as the process of bringing together the component
subsystems into one system and ensuring that the subsystems function together as a whole
system.
Integration requires the development of new layers of abstraction that can accommodate the
diversity of the differentiated components it is designed to interrelate. Integration involves the
development of a generic layer to integrate the differentiated elements. For example, to enable the
development of a new level of socioeconomic organization in the form of globalization it is required
that we develop some language through which the different societies can interoperate.
This integration process through which a new, more abstract, level of organization combines the
elements also works to reduce their autonomy, in order to align them with the functioning of the
whole system. For example, the formation of the political entity of the European Union enables
interoperability between the differentiated countries but likewise constrains their national
governments.
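As a minimal software analogy (not drawn from the text, and purely illustrative), the Python sketch below shows such a generic integration layer: two differentiated components expose different native interfaces, and a thin shared abstraction lets them interoperate as one system, at the cost of constraining each to the common protocol. All class and method names are hypothetical.

# A minimal, illustrative analogy of an integration layer in software.
# Two differentiated components with different native interfaces are wrapped
# behind one generic abstraction so they can interoperate as a single system.
from abc import ABC, abstractmethod

class LegacyBillingSystem:                # differentiated component A (hypothetical)
    def charge_in_cents(self, cents):
        return f"charged {cents} cents"

class ModernPaymentsAPI:                  # differentiated component B (hypothetical)
    def pay(self, euros):
        return f"paid {euros:.2f} EUR"

class PaymentPort(ABC):                   # the generic, integrating layer of abstraction
    @abstractmethod
    def pay_euros(self, euros): ...

class LegacyAdapter(PaymentPort):
    def __init__(self, legacy):
        self.legacy = legacy
    def pay_euros(self, euros):
        return self.legacy.charge_in_cents(int(euros * 100))

class ModernAdapter(PaymentPort):
    def __init__(self, api):
        self.api = api
    def pay_euros(self, euros):
        return self.api.pay(euros)

# The combined system now works through the shared abstraction, which both enables
# interoperability and constrains each component to the common protocol.
for port in (LegacyAdapter(LegacyBillingSystem()), ModernAdapter(ModernPaymentsAPI())):
    print(port.pay_euros(9.99))

The design trade-off mirrors the EU example above: the shared layer enables the parts to work together while reducing each part's autonomy over how it is invoked.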

In the paper “A Complexity Drain on Cells in the Evolution of Multicellularity” Daniel McShea
describes this process in living organisms as such: “In evolution, as higher-level entities arise from
associations of lower-level organisms, and as these [higher level] entities acquire the ability to
feed, reproduce, defend themselves, and so on, the lower-level organisms will tend to lose much of
their internal complexity.” Integration both constrains and enables differentiation. Cells in
multicellular organisms can rely on the whole organism adapting to changing conditions via a
multicellular response. Thus the individual cells can do away with the rarely used functions to
become more specialized and differentiated within the whole organism. Social insects demonstrate the same sort of tradeoff: “individuals of highly social ant species are less complex than individuals from [socially] simple ant species.” But it is the complex social interaction – the integration –
between them that allows them to perform, as a whole, more effectively.

Differentiation and integration enable and create each other; integration can only take place when there are different parts. Likewise, differentiation can only occur through integration into a larger organization, because elements that remain autonomous are required to perform multiple functions
to maintain themselves within their environment. Differentiation thus actually becomes the basis for
any form of unity, since the unity of any (sub)system can only be based on its difference from its
environments. A central part of an evolutionary process of development is a dialectic interplay between a system’s macro level – integration into the whole – and micro level – differentiation of the parts. It is this dynamic interaction between the processes of integration and differentiation that drives the evolutionary development of the system in a dialectic form. Homogeneous systems divide, define their differences, compete and then reintegrate into a more complex whole.
Paradigm Shift

Systems theory is often seen by its proponents as a paradigm shift within modern science and
general thought, as it offers a whole new way of looking at the world. As we talked about in a
previous module the term paradigm describes a set of assumptions, concepts, values, and
practices that constitute a way of viewing the world for the community that shares them, especially
in an intellectual discipline. In this module we are going to talk about paradigm shifts: major evolutionary changes within a whole conceptual framework.

In 1962 Thomas S. Kuhn published a book called The Structure of Scientific Revolutions. The idea for the book arose while he was studying for his Ph.D. He read through the history of science and noticed that the standard account of that history did not do justice to its complexity. In this standard account, science was portrayed as a slow and steady progression towards ever greater knowledge, with current discoveries simply being layered on top of past ones in a somewhat linear accumulation over time. Kuhn went on to develop an alternative nonlinear model of the development of scientific knowledge
where normal periods of stable development are punctuated with revolutionary transformation as
the paradigm fails, and new theories are developed, compete and ultimately come to replace the
old framework.

Kuhn noticed that within a particular scientific domain, even when the science claimed to be based solely on facts, some facts were always seen as being more important than others. Certain
problems and issues arise within a particular domain and before long the area becomes shaped
around those questions, certain assumptions form and frameworks develop to help structure
people's reasoning. Kuhn called this overarching way that we look at the field of inquiry a
paradigm. As new people become indoctrinated within a particular area, they become imbued with
that paradigm and ultimately dependent upon it to structure their thinking about the subject.
As the paradigm becomes established it becomes this shared model that shapes what questions
get asked and how facts are seen. Kuhn put forward the idea that there are three fundamentally
different modes or stages in the development of a science depending largely on the state of the
central paradigm used within that science. These stages are PreScience, where no paradigm yet
exists, Normal Science, where an accepted and established paradigm guides inquiry, and
Revolutionary Science where the central paradigm is called into question and new contenders
compete to form a new central model.

Kuhn noticed that a science is not born fully fledged but that it goes through a prescientific stage of formation. PreScience is characterized by a lack of agreement, constant debate over
fundamentals and thus a lack of paradigm with which to proceed as a unified community.
Prescientific domains are not really scientific at all; they are philosophical in nature as they go
through a process of basic ontological development, i.e. trying to define what exactly it is they are
studying and what is the best approach to trying to tackle it, with wide scope for subjective
interpretation and a diversity of perspectives.

During the normal period of development, researchers operate within an established and accepted
theory and set of methods tackling what are seen to be legitimate questions within that paradigm. A
good example of this would be 18th and 19th century physics where Newtonian mechanics was
the established paradigm that got applied to understanding everything from electricity to heat and
all forms of motion. Kuhn writes in one essay "Normal Science means research firmly based upon
one or more past scientific achievements, achievements that some particular scientific community
acknowledges for a time as supplying the foundation for its further practice.”
Most sciences, most of the time are not in a state of significant change and can thus be said to be
in a normal stage of development marked by incremental improvements on previously established
frameworks. This established paradigm is largely accepted and productive in delivering new insight
into the issues that are taken as being important. Science in this normal stage is a highly
conservative activity, the objective is to solve puzzles with limited alteration to the paradigm.
Normal scientists are not trying to test the paradigm; on the contrary, they accept it unquestioningly.
One famous expression of this normal scientific stage within physics is given by Lord Kelvin in a
speech in 1900 wherein he famously told an assemblage of physicists at the British Association for
the Advancement of Science, "There is nothing new to be discovered in physics now. All that
remains is more and more precise measurement."

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot
be explained by the universally accepted paradigm within which scientific progress has hitherto been made. Over time anomalies are discovered, phenomena that simply cannot be reconciled with the existing paradigm. As more and more anomalies accumulate, a burgeoning sense of crisis envelops the scientific community. Confidence in the normal paradigm breaks down and the
process of normal science grinds to a halt as it moves into a crisis, with this crisis marking the
beginning of a period of revolutionary science.

Finally, out of the struggle to form a new model of understanding, one or more viable candidates emerge. This begins the model revolution step. It is a revolution because the new model is a new paradigm, radically different from the old one, so different that Kuhn declared the two to be incommensurable. Each uses its own set of rules to judge the other, and thus believers in each paradigm find it difficult or even impossible to communicate. Once a single new paradigm is settled
on by a few significant and influential proponents, the paradigm change step begins. Here the field
transitions from the old to the new paradigm while improving the new paradigm to maturity.
Eventually, the old model is sufficiently replaced and becomes the field's new Normal Science. The
cycle then begins all over again, often with a prolonged stable or normal phase before a relatively
short nonlinear revolutionary period.

The term paradigm also means a set of linguistic items that form mutually exclusive choices in
particular syntactic roles. In this sense, a paradigm defines a certain lexicon and syntax for a
particular area that shapes how we see it. For example in the English language we can say “a
book” or “his book” but not “a his book.” This is an example of two paradigms or patterns that are not compatible, which helps to illustrate why paradigm shifts are seen as major structural transformations, because the former and latter patterns are often incompatible. For example, when it comes to encoding information one can use either analog or digital methods, but information cannot be efficiently exchanged between a digital and an analog device. Thus it makes sense to follow just one paradigm at any given time.
A Complexity Labs Publication