Complexity in Chemistry and Beyond: Interplay
Theory and Experiment
NATO Science for Peace and Security Series
This Series presents the results of scientific meetings supported under the NATO
Programme: Science for Peace and Security (SPS).
The NATO SPS Programme supports meetings in the following Key Priority areas:
(1) Defence Against Terrorism; (2) Countering other Threats to Security and (3) NATO,
Partner and Mediterranean Dialogue Country Priorities. The types of meeting supported
are generally “Advanced Study Institutes” and “Advanced Research Workshops”. The
NATO SPS Series collects together the results of these meetings. The meetings are
co-organized by scientists from NATO countries and scientists from NATO’s “Partner” or
“Mediterranean Dialogue” countries. The observations and recommendations made at the
meetings, as well as the contents of the volumes in the Series, reflect those of participants
and contributors only; they should not necessarily be regarded as reflecting NATO views
or policy.
Advanced Study Institutes (ASI) are high-level tutorial courses intended to convey the
latest developments in a subject to an advanced-level audience.
Advanced Research Workshops (ARW) are expert meetings where an intense but
informal exchange of views at the frontiers of a subject aims at identifying directions for
future action.
Following a transformation of the programme in 2006 the Series has been re-named and
re-organised. Recent volumes on topics not related to security, which result from meetings
supported under the programme earlier, may be found in the NATO Science Series.
The Series is published by IOS Press, Amsterdam, and Springer, Dordrecht, in
conjunction with the NATO Emerging Security Challenges Division.
http://www.nato.int/science
http://www.springer.com
http://www.iospress.nl
Craig Hill
Department of Chemistry, Emory University, Atlanta, Georgia, USA
and
Djamaladdin G. Musaev
Department of Chemistry, Emory University, Atlanta, Georgia, USA
Published in Cooperation with NATO Emerging Security Challenges Division
Proceedings of the NATO Advanced Research Workshop on
From Simplicity to Complexity in Chemistry and Beyond:
Interplay Theory and Experiment
Baku, Azerbaijan
28–30 May 2008
Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.
www.springer.com
Didier Astruc Institut des Sciences Moléculaires, UMR CNRS N° 5255, Université
Bordeaux I, Talence Cedex, France
Carles Bo Institute of Chemical Research of Catalonia (ICIQ), Tarragona, Spain
Departament de Quı́mica Fı́sica i Quı́mica Inorgànica, Universitat Rovira i Virgili,
Tarragona, Spain
Lapo Bogani Physikalisches Institut, Universität Stuttgart, Stuttgart, Germany
Ataualpa A.C. Braga Institute of Chemical Research of Catalonia (ICIQ),
Tarragona, Catalonia, Spain
Leroy Cronin Department of Chemistry, University of Glasgow, Glasgow, UK
Andrea Dei LAMM Laboratory, Dipartimento di Chimica, Università di Firenze,
UdR INSTM, Sesto Fiorentino (Firenze), Italy
Ekkehard Diemann Faculty of Chemistry, University of Bielefeld, Bielefeld,
Germany
Vagif Farzaliyev Institute of Chemistry of Additives, Azerbaijan National
Academy of Sciences, Baku, Azerbaijan
Dante Gatteschi Department of Chemistry, University of Florence, INSTM, Polo
Scientifico Universitario, Sesto Fiorentino, Italy
Yurii V. Geletii Department of Chemistry, Emory University, Atlanta, GA, USA
Thorsten Glaser Lehrstuhl für Anorganische Chemie I, Fakultät für Chemie,
Universität Bielefeld, Bielefeld, Germany
Christophe Gourlaouen Institute of Chemical Research of Catalonia (ICIQ),
Tarragona, Catalonia, Spain
Weiwei Guo Department of Chemistry, Emory University, Atlanta, GA, USA
Craig L. Hill Department of Chemistry, Emory University, Atlanta, GA, USA
Klaus Mainzer
1. Interesting in this context is that nonlinear chemical, dissipative mechanisms (distinguished from
those of a physical origin) have been proposed as providing a possible underlying process for some
aspects of biological self-organization and morphogenesis. Nonlinearities during the formation of
microtubular solutions are reported to result in a chemical instability and bifurcation between
pathways leading to macroscopically self-organized states of different morphology (Tabony, J.,
Science, 1994, 264, 245).
K. Mainzer (✉)
Lehrstuhl für Philosophie und Wissenschaftstheorie, Munich Center for Technology in Society
(MCTS), Technische Universität München, Arcisstrasse 21, D-80333 Munich, Germany
e-mail: [email protected]
C. Hill and D.G. Musaev (eds.), Complexity in Chemistry and Beyond: Interplay
Theory and Experiment, NATO Science for Peace and Security Series B: Physics
and Biophysics, DOI 10.1007/978-94-007-5548-2_1,
© Springer Science+Business Media Dordrecht 2012
explainable and deducible from but not reducible to those of lower levels. In this
sense, supramolecular chemistry builds up a supramolecular science whose already
remarkable achievements point to the even greater challenges of complexity in the
human organism, brain, society, and technology.
The theory of nonlinear complex systems [6] has become a successful and widely
used tool for studying problems in the natural sciences—from laser physics,
quantum chaos, and meteorology to molecular modeling in chemistry and computer
simulations of cell growth in biology. In recent years, these tools have been used
also—at least in the form of “scientific metaphors”—to elucidate social, ecological,
and political problems of mankind or aspects of the “working” of the human mind.
What is the secret behind the success of these sophisticated applications? The
theory of nonlinear complex systems is not a special branch of physics, although
some of its mathematical principles were discovered and first successfully applied
within the context of problems posed by physics. Thus, it is not a kind of traditional
“physicalism” which models the dynamics of lasers, ecological populations, or our
social systems by means of similarly structured laws. Rather, nonlinear systems
theory offers a useful and far-reaching justification for simple phenomenological
models specifying only a few relevant parameters relating to the emergence of
macroscopic phenomena via the nonlinear interactions of microscopic elements in
complex systems.
The behaviour of single elements in large composite systems (atoms, molecules,
etc.) with a huge number of degrees of freedom can neither be forecast nor traced back.
Therefore, in statistical mechanics, the deterministic description of single elements
at the microscopic level is replaced by describing the evolution of probabilistic
distributions. At critical threshold values, phase transitions are analyzed in terms of
appropriate macrovariables—or “order parameters”—in combination with terms
describing rapidly fluctuating random forces due to the influence of additional
microvariables.
By now, it is generally accepted that this scenario, worked out originally for
systems in thermal equilibrium, can also be used to describe the emergence of order
in open dissipative systems far from thermal equilibrium (Landau, Prigogine, Thom,
Haken, etc. [6]; for some details see Sect. 1.5). Dissipative self-organization means
basically that the phase transition lies far from thermal equilibrium. Macroscopic
patterns arise in that case according to, say, Haken’s “slaving principle” from
the nonlinear interactions of microscopic elements when the interaction of the
dissipative (“open”) system with its environment reaches some critical value, e.g.,
in the case of the Bénard convection. In a qualitative way, we may say that old
structures become unstable and, finally, break down in response to a change of the
control parameters, while new structures are achieved. In a more mathematical way,
1 Challenges of Complexity in Chemistry and Beyond 5
2. Defects, in general—not only those related to the surface—affect the physical and chemical (e.g.,
catalytical) properties of a solid and play a role in its history. They form the basis of its possible
complex behaviour.
3. Fluctuation—static or nonstatic, equilibrium or nonequilibrium—usually means the deviation of
some quantity from its mean or most probable value. (Fluctuations played a key role in evolution.)
Most of the quantities that might be interesting for study exhibit fluctuations, at least on a
microscopic level. Fluctuations of macroscopic quantities manifest themselves in several ways.
They may limit the precision of measurements of the mean value of the quantity, or vice versa, the
identification of the fluctuation may be limited by the precision of the measurement. They are the
cause of some familiar features of our surroundings, or they may cause spectacular effects, such
as the critical opalescence and they play a key role in the nucleation phase of crystal growth (see
Sect. 1.8). Fluctuations, or the basic principles behind them that are relevant for chemistry, have
never been discussed on a general basis, though they are very common—for example in the form
of some characteristic properties of the very large metal clusters and colloids.
4. During cosmological, chemical, biological, as well as social and cultural evolution, information
increased parallel to the generation of structures of higher complexity. The emergence of relevant
information during the different stages of evolution is comparable with phase transitions during
which structure forms from unordered systems (with concomitant entropy export). Although we
can model certain collective features in natural and social sciences by the complex dynamics of
phase transitions, we have to pay attention to important differences (see Sect. 1.6).
In principle, any piece of information can be encoded by a sequence of zeros and ones, a
so-called {0,1}-sequence. Its (Kolmogorov) complexity can thus be defined as the length of the
minimal {0,1}-sequence in which all that is needed for its reconstruction is included (though,
according to well-known undecidability theorems, there is in general no algorithm to check
whether a given sequence with such a property is of minimal length). According to the broader
definition by C.F. von Weizsäcker, information is a concept intended to provide a scale for
measuring the amount of form encountered in a system, a structural unit, or any other information-
For a real understanding of phase transitions, we have to deal not only with the
structure and function of elementary building blocks, but also with the properties
which emerge in consequence of the complex organization which such simple
entities may collectively yield when interacting cooperatively. And we have to
realize that such emergent high-level properties are properties which—even though
they can be exhibited by complex systems only and cannot be directly observed
in their component parts when taken individually—are still amenable to scientific
investigation.
These facts are generally accepted and easily recognized with respect to crystal-
lographic symmetry; here, the mathematics describing and classifying the emerging
structures (e.g., the 230 space groups) is readily available [11]. But the situation
becomes more difficult when complex biological systems are to be investigated
where no simple mathematical formalism yet exists to classify all global types of
interaction patterns and where molecular complexity plays a key role: The behaviour
of sufficiently large molecules like enzymes in complex systems can, as yet, neither be
predicted computationally nor simply be deduced from that of their (simple
chemical) components.
Consequently, one of the most ambitious fields of research at present, offering
challenging and promising perspectives for the future [2], is to learn experimentally
and interpret theoretically how relevant global interaction patterns and the resulting
high-level properties of complex systems emerge, by using a stepwise procedure
to build ever more complex systems from simple constituents. This approach
is used, in particular, in the field of supramolecular chemistry [12]—a basic
topic of this book—where some intrinsic propensities of material systems are
investigated. By focusing on phenomena like non-covalent interactions or multiple
weak attractive forces (especially in the case of molecular recognition, host/guest
complexation as well as antigene-antibody interactions), (template-directed) self-
assembly, autocatalysis, artificial, and/or natural self-replication, nucleation, and
control of crystal growth, supramolecular chemistry strives to elucidate strategies for
making constructive use of specific large-scale molecular interactions, characteristic
for mesoscopic molecular complexes and nanoscale architectures.
In order to understand more about related potentialities of material systems,
we should systematically examine, in particular, self-assembly processes. A system
of genuine model character [2,7,13], exhibiting a maximum of potentiality or
disposition “within” the relevant solution, contains very simple units with the shape
of Platonic solids [11]—or chemically speaking, simple mononuclear oxoanions
[14]—as building blocks from which an extremely wide spectrum of complex
carrying entity (“Information ist das Maß einer Menge von Form”). There exists, of course, a great
variety of other definitions of information which have been introduced within different theoretical
contexts and which relate to different scientific disciplines. Philosophically speaking, a qualitative
concept is needed which considers information to be a property neither of structure nor of function
alone, but of that inseparable unit called form, which mediates between both.
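Kolmogorov complexity itself is uncomputable, as the undecidability remark above indicates, but any lossless compressor yields an upper bound on it: the compressed length is the size of one particular description from which the sequence can be reconstructed. A minimal illustrative sketch (not part of the original text), using Python's standard zlib module:

```python
import random
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of a compressed encoding: an upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, level=9))

ordered = b"01" * 500                                           # highly regular {0,1}-sequence
random.seed(0)
disordered = bytes(random.getrandbits(8) for _ in range(1000))  # pseudo-random bytes

print(complexity_upper_bound(ordered))      # a few bytes suffice: "repeat '01' 500 times"
print(complexity_upper_bound(disordered))   # close to 1000: no shorter description is found
```

The regular sequence compresses to a handful of bytes, while the pseudo-random one barely shrinks, mirroring the idea that complexity is the length of the shortest description needed for reconstruction.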
the entries simultaneously, in the time of a single access. But if it can only print out
the result at that point, there is no improvement over the classical algorithm. Only
one of the million computational paths would have checked the entry we are looking
for. Thus, there would be a probability of only one in a million that we obtain that
information if we measured the computer’s state. But if that quantum information
is left unmeasured in the computer, a further quantum operation can cause that
information to affect other paths. In this way the information about the desired
entry is spread, through quantum interference, to more paths. It turns out that if the
interference-generating operation is repeated about 1,000 times (in general, √n times),
the information about which entry contains the desired number will be accessible
to measurement with probability 0.5. Therefore repeating the entire algorithm a few
more times will find the desired entry with a probability close to 1.
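The counting argument above (roughly √n amplitude-amplification rounds) can be checked with a toy state-vector simulation of Grover's search. This is an illustrative sketch in plain NumPy, not a real quantum computation; the database size and the marked index are arbitrary choices:

```python
import numpy as np

n = 1024                                  # database size (so sqrt(n) = 32)
marked = 123                              # index of the desired entry (arbitrary)
state = np.full(n, 1 / np.sqrt(n))        # uniform superposition over all entries

theta = np.arcsin(1 / np.sqrt(n))
iterations = int(np.pi / (4 * theta))     # ~ (pi/4) * sqrt(n) rounds

for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

success = state[marked] ** 2              # probability of measuring the marked entry
print(iterations, success)                # 25 iterations; success probability above 0.99
```

After about (π/4)√n rounds the marked amplitude dominates, so a measurement yields the desired entry with probability close to 1, in line with the text's estimate.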
An even more spectacular quantum algorithm was found by Peter Shor [18] for
factorizing large integers efficiently. In order to factorize a number with n decimal
digits, any classical computer is estimated to need a number of steps growing
exponentially with n. The factorization of 1,000-digit numbers by classical means
would take many times as long as the estimated age of the universe. In contrast,
quantum computers could factor 1,000-digit numbers in a fraction of a second.
The execution time would grow only as the cube of the number of digits. Once
a quantum factorization machine is built, all classical cryptographic systems will
become insecure, especially the RSA (Rivest, Shamir and Adleman) algorithm
which is today often used to protect electronic bank accounts [19].
Historically, the potential power of quantum computation was first proclaimed in
a talk by Richard Feynman at the first Conference on the Physics of Computation
at MIT in 1981 [15]. He observed that it appeared to be impossible in general to
simulate the evolution of a quantum system on a classical computer in an efficient
way. The computer simulation of quantum evolution involves an exponential
slowdown in time, compared with the natural evolution. The amount of classical
information required to describe the evolving quantum state is exponentially larger
than that required to describe the corresponding classical system with a similar
accuracy. But, instead of regarding this intractability as an obstacle, Feynman
considered it an opportunity. He explained that if it requires that much computation
to find what will happen in a multi-particle interference experiment, then the act of
setting up such an experiment and measuring the outcome is equivalent to performing a
complex computation.
A quantum computer is a more or less complex network of quantum logical
gates. As the number of quantum gates in a network increases, we quickly run into
serious practical problems. The more interacting qubits are involved, the harder
the computational technology becomes to handle. One of the most important problems
is that of preventing the surrounding environment from being affected by the
interactions that generate quantum superpositions. The more components there are,
the more likely it is that quantum information will spread outside the quantum
computer and be lost into the environment. This process is called decoherence. From
supramolecular chemistry, there is some evidence that decoherence in
complex molecules, such as molecular nano-magnets, might not be such a severe
problem.
A molecular magnet containing vanadium and oxygen atoms has been described
[5] which could act as a carrier of quantum information. It is more than one
nanometer in diameter and has an electronic spin structure in which each of the
vanadium atoms, with net spin ½, couples strongly into three groups of five. The
magnet has a spin-doublet ground state and a spin-triplet excited state. ESR (electron
spin resonance) spectroscopy was used to observe the degree of coherence possible.
The prime source of decoherence is the ever-present nuclear spins associated with
the 15 vanadium nuclei. The experimental results of [5] pinpoint the sources of
decoherence in that molecular system, and so take the first steps toward eliminating
them. The identification of nuclear spin as a serious decoherence issue hints at the
possibility of using zero-spin isotopes in qubit materials. The control of complex
coherent spin states of molecular magnets, in which interactions can be tuned by
well defined chemical changes of the metal cluster ligand spheres, could finally lead
to a way to avoid the roadblock of decoherence.
Independent of its realization with elementary particles, atoms, or molecules,
quantum computing has deep consequences for the computational universality and
computational complexity of nature. Quantum mechanics provides new modes of
computation, including algorithms that perform tasks that no classical computer
can perform at all. One of the most relevant questions within classical computing,
and the central subject of computational complexity is whether a given problem is
easy to solve or not. A basic issue is the time needed to perform the computation,
depending on the size of the input data. According to Church’s thesis, any (classical)
computer is equivalent to and can be simulated by a universal Turing-machine.
Computational time is measured by the number of elementary computational
steps of a universal Turing-machine. Computational problems can be divided
into complexity classes according to their computational time of solution. The
most fundamental one is the class P, which contains all problems that can be
computed by a (deterministic) universal Turing machine in polynomial time, i.e. the
computational time is bounded from above by a polynomial. The class NP contains all
problems that can be solved by a non-deterministic Turing machine in polynomial
time. Non-deterministic machines may guess a computational step at random. It is
obvious by definition that P is a subset of NP. The converse inclusion, however, is
highly non-trivial. The conjecture is that P ≠ NP holds, and great parts of complexity theory
are based on it. Its proof or disproof represents one of the biggest open questions in
theoretical informatics.
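The asymmetry between finding and checking a solution can be made concrete with Boolean satisfiability, the canonical NP problem: verifying a proposed assignment (the "certificate") takes polynomial time, while the only generally known way to find one is exponential search over all assignments. A small illustrative sketch (the formula is an arbitrary example, not from the original text):

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable_index, is_positive).
# This encodes (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
formula = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def verify(assignment, cnf):
    """Polynomial-time check that an assignment satisfies every clause (the 'NP' side)."""
    return all(any(assignment[v] == pos for v, pos in clause) for clause in cnf)

def brute_force(cnf, n_vars):
    """Exponential-time search over all 2^n assignments (no polynomial method is known)."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits, cnf):
            return bits
    return None

solution = brute_force(formula, 3)
print(solution, verify(solution, formula))
```

Checking a candidate touches each clause once, but the search loop grows as 2^n in the number of variables, which is exactly the gap the P versus NP question asks about.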
In the quantum theory of computation, the Turing principle demands that a universal
quantum computer can simulate the behavior of any finite physical system [20].
A stronger result that was conjectured but never proved in the classical case demands
that such simulations can always be performed in a time that is at most a polynomial
function of the time for the physical evolution. That is true in the quantum case.
In the future, quantum computers will prove theorems by methods that neither a
human brain nor any other arbiter will ever be able to check step-by-step, since
if the sequence of propositions corresponding to such a proof were printed out, the
paper would fill the observable universe many times over. In that case, computational
problems would be shifted into lower complexity classes: intractable problems of
classical computability would become practically solvable.
Fig. 1.1 Complexity degrees of 1/f^b noise with white noise (b = 0), pink noise (b = 1), red noise
(b = 2), and black noise (b = 3) [22] (Color figure online)
are uniform over a wide range of frequencies. In this case the spectrum has a
constant value, flat throughout the frequency range. The contributions of periodic
components cannot be distinguished.
But in nonlinear dynamics of complex systems we are mainly interested in
complex series of data that conform to neither of these extremes. They consist
of many superimposed oscillations at different frequencies and amplitudes, with a
spectrum that is approximately proportional to 1/f^b for some b greater than zero. In
that case, the spectrum varies inversely with the frequency. Such signals are called
1/f-noise. Figure 1.1 illustrates examples of signals with spectra of pink noise
(b = 1), red noise (b = 2), and black noise (b = 3). White noise is designated by
b = 0. The degree of irregularity in the signals decreases as b becomes larger.
For b greater than 2 the correlations are persistent, because upwards and
downwards trends tend to maintain themselves. A large excursion in one time
interval is likely to be followed by another large excursion in the next time interval
of the same length. The time series seem to have a long-term memory. With b less
than 2 the correlations are antipersistent in the sense that an upswing now is likely
to be followed shortly by a downturn, and vice versa. When b increases from the
antipersistent to the persistent case, the curves in Fig. 1.1 become increasingly less
jagged.
The spectrum gets progressively smaller as frequency increases. Therefore, large-
amplitude fluctuations are associated with long-wavelength (low-frequency) oscil-
lations, and smaller fluctuations correspond to short-wavelength (high-frequency)
cycles. For nonlinear dynamics, pink noise with b roughly equal to 1 is particularly
interesting, because it characterizes processes between the regular order of black noise
and the complete disorder of white noise. For pink noise the fraction of total variability
in the data between two frequencies f1 < f2 equals the fraction of variability within
the interval from cf1 to cf2 for any positive constant c. Therefore, there must be fewer
large-magnitude fluctuations at lower frequencies than there are small-magnitude
oscillations at high frequencies. As the time series increases in length, more and
more low-frequency but high-magnitude events are uncovered because cycles of
longer periods are included. The longest cycles have periods comparable to the
duration of the sampled data. Like all fractal patterns, small changes of signals are
superimposed on larger ones with self-similarity at all scales.
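The 1/f^b family can be generated numerically by shaping the spectrum of white noise, a standard spectral-synthesis trick: scale each Fourier amplitude by f^(-b/2) so that the power falls off as 1/f^b. The sketch below (illustrative only; length and seed are arbitrary) also quantifies the observation that the signals become less jagged as b grows:

```python
import numpy as np

def one_over_f_noise(n, b, seed=0):
    """Generate a series whose power spectrum falls off as 1/f^b (spectral synthesis)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                    # avoid division by zero at the DC component
    spectrum *= freqs ** (-b / 2)          # power ~ amplitude^2 ~ 1/f^b
    signal = np.fft.irfft(spectrum, n)
    return signal / signal.std()           # normalize to unit variance

for b, name in [(0, "white"), (1, "pink"), (2, "red"), (3, "black")]:
    x = one_over_f_noise(4096, b)
    # larger b -> smoother curve: the average step between successive values shrinks
    print(name, round(float(np.abs(np.diff(x)).mean()), 3))
```

The printed roughness measure decreases monotonically from white to black noise, matching the statement that irregularity decreases as b becomes larger.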
In electronics, 1/f-spectra are known as flicker noise, which differs from the uniform
sound of white noise in that individual signals can be distinguished [23]. The high-
frequency occurrences are hardly noticed, in contrast to the large-magnitude events.
A remarkable application of 1/f-spectra is found in different kinds of music. The
fluctuations of loudness as well as the intervals between successive notes in the
music of Bach have a 1/f -spectrum. Contrary to Bach’s pink-noise music, white-
noise music has only uncorrelated successive values. The brain fails in finding any
pattern in a structureless and irritating sound. On the other hand, black-noise music
seems too predictable and boring, because the persistent signals depend strongly
on past values. Obviously, impressive music finds a balance between order and
disorder, regularity and surprise.
1/f -spectra are typical for processes that organize themselves to a critical state
at which many small interactions can trigger the emergence of a new unpredicted
phenomenon. Earthquakes, atmospheric turbulence, stock market fluctuations, and
physiological processes of organisms are typical examples. Self-organization, emer-
gence, chaos, fractality, and self-similarity are features of complex systems with
nonlinear dynamics [24]. The fact that 1/f -spectra are measures of stochastic noise
emphasizes a deep relationship of information theory and systems theory, again:
all kinds of complex systems can be considered information processing systems. In
the following, distributions of correlated and uncorrelated signals are analyzed in the
theory of probability. White noise is characterized by the normal distribution of the
Gaussian bell curve. Pink noise with a 1/f -spectrum is decisively non-Gaussian. Its
patterns are footprints of complex self-organizing systems.
In complex systems, the behavior of single elements is often completely un-
known and therefore considered a random process. In this case, it is not necessary to
distinguish between chance that occurs because of some hidden order that may exist
and chance that is the result of blind lawfulness. A stochastic process is assumed
to be a succession of unpredictable events. Nevertheless, the whole process can be
characterized by laws and regularities, or with the words of A.N. Kolmogorov, the
founder of modern theory of probability: “The epistemological value of probability
theory is based on the fact that chance phenomena, considered collectively and on
a grand scale, create non-random regularity.” [25] In tossing a coin, for example,
head and tail are each assigned a probability of 1/2 whenever the coin seems to be
balanced. This is because one expects that the event of a head or tail is equally likely
in each flip. Therefore, the average number of heads or tails in a large number of
tosses should be close to 1/2, according to the law of large numbers. This is what
Kolmogorov meant.
The outcomes of a stochastic process can be distributed with different probabil-
ities. Binary outcomes are designated by probabilities p and 1 − p. In the simplest
case of p = 1/2, there is no propensity for one occurrence to take place over another,
and the outcomes are said to be uniformly distributed. For instance, the six faces of
a balanced die are all equally likely to occur in a toss, and so the probability of
each face is 1/6. In this case, a random process is thought of as a succession of
independent and uniformly distributed outcomes. In order to turn this intuition into
a more precise statement, we consider coin-tossing with two possible outcomes
labeled zero or one. The number of ones in n trials is denoted by r_n, and the sample
average r_n/n represents the fraction of ones in n trials. Then, according to the law of
large numbers, the probability of the event that r_n/n is within some fixed distance of
1/2 will tend to one as n increases without bound.
The distribution of values of samples clusters about 1/2 with a dispersion
that appears roughly bell-shaped. The bell-shaped Gaussian curve illustrates
Kolmogorov's statement that lawfulness emerges when large ensembles of random
events are considered. The same general bell shape appears for several games with
different average outcome like playing with coins, throwing dice, or dealing cards.
Some bells may be squatter, and some narrower. But each has the same mathematical
Gaussian formula to describe it, requiring just two numbers to differentiate it
from any other: the mean or average, and the variance or standard deviation,
expressing how widely the bell spreads.
Given independence and finite variance of the random variables involved,
the central limit theorem holds: the probability distribution of sums gradually converges
to the Gaussian shape. If the conditions of independence and finite variance of the
random variables are not satisfied, other limit theorems must be considered. The
study of limit theorems uses the concept of the basin of attraction of a probability
distribution. All the probability density functions define a functional space. The
Gaussian probability function is a fixed point attractor of stochastic processes
in that functional space. The set of probability density functions that fulfill the
requirements of the central limit theorem with independence and finite variance
of random variables constitutes the basin of attraction of the Gaussian distribution.
The Gaussian attractor is the most important attractor in the functional space, but
other attractors also exist.
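Convergence to the Gaussian fixed point can be illustrated by standardizing sums of independent uniform random variables, which satisfy both conditions of the central limit theorem. An illustrative sketch (the choices of k and sample size are arbitrary):

```python
import random
import statistics

random.seed(1)

def standardized_sum(k):
    """Sum of k independent Uniform(0,1) variables, standardized to mean 0, variance 1."""
    s = sum(random.random() for _ in range(k))
    mean, var = k * 0.5, k / 12.0          # exact mean and variance of the sum
    return (s - mean) / var ** 0.5

samples = [standardized_sum(30) for _ in range(20_000)]

# Gaussian signature: sample mean near 0, standard deviation near 1,
# and roughly 68% of the mass within one standard deviation.
print(round(statistics.mean(samples), 2), round(statistics.stdev(samples), 2))
within = sum(abs(x) <= 1 for x in samples) / len(samples)
print(round(within, 2))
```

Although each summand is flat-distributed, the standardized sums already trace out the bell curve for modest k, showing that the uniform distribution lies in the Gaussian basin of attraction.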
Gaussian (and Cauchy) distributions are examples of stable distributions. A
stable distribution has the property that it does not change its functional form. The
French mathematician Paul Lévy (1886–1971) determined the entire class of stable
Obviously, the theory of complex systems and their phase transitions offers a
successful formalism to model the emergence of order in Nature. The question arises
how to select, interpret, and quantify the appropriate variables of complex models
in the social sciences. In this case, the possibility to test the complex dynamics
of the model is restricted: In general, we cannot experiment with human society.
Yet, computer simulations with varying parameters may deliver useful scenarios to
recognize global trends of a society under all sorts of conditions.
Evidently, human society is a complex multi-component system composed
of diverse elements. It is an open system because there exist not only internal
interactions through materials and information exchange (“ideas”) between the
1 Challenges of Complexity in Chemistry and Beyond 17
individual members of a society, but also an interchange with the external environ-
ment, nature, and civilization. At the microscopic level (e.g., micro-sociology and
micro-economy), the individual “local” states of human behaviour are characterized
by different attitudes. Changes of society are related to changes in attitudes of its
members. Global change of behaviour is modeled by introducing macrovariables in
terms of attitudes of social groups (compare reference [22], chapter 8) [26].
In social sciences, one distinguishes strictly between biological evolution and the
history of human society. The reason is that the development of nations, markets,
and cultures is assumed to be guided by the intentional behaviour of humans, i.e.,
human decisions based on intentions, values, etc. From a microscopic viewpoint
we may, of course, observe single individuals contributing with their activities
to the collective macrostate of the society representing cultural, political, and
economic order (and, hopefully, determined by the value of corresponding “order
parameters”).
Yet the macrostates of a society are, of course, not simply averages over its parts.
Their order parameters strongly influence the individuals of the society by orienting
(“enslaving”) their activities and by activating or deactivating their attitudes and
capabilities. This kind of feedback is typical for complex dynamical systems.
If the control parameters of the environmental conditions attain certain critical
values due to internal or external interactions, the macrovariables may move into
an unstable domain out of which highly divergent alternative paths are possible.
Tiny unpredictable microfluctuations (e.g., actions of very few influential people,
scientific discoveries, new technologies) may decide which of the diverging paths
society will follow.
A particular measurement problem of sociology arises from the fact that sociol-
ogists observing and recording the collective behaviour of society are themselves
members of the social system they observe. Sociologists strive to define and to
record quantitatively measurable parameters of collective behaviour, using all sorts
of “objective”, that is, empirical and quantitative methods. But, while the world of
macroscopic physical phenomena will certainly not be changed in a scientifically
relevant way by the fact that it is being explored and investigated, this does not
necessarily hold true for social systems—a further justification for the obvious fact
that scientific procedures used in classical physics are not simply transferable to
the study of human social behaviour. This well-known sociological phenomenon
of “self-observation in a society” confirms the complex dynamics of a society, i.e.,
the nonlinear feedback between individual activities at the microscopic level and its
global macroscopic order states.
While systems in physics and chemistry are often taken for granted and are
considered to be arbitrarily delimitable units of consideration, social systems cannot
even be defined (and much less analyzed and studied) without simultaneously
considering their environment and delineating their boundaries from their internal
dynamics as well as from their interactions with all those features that do not pertain
to the system. The problems which obviously arise in this context are carefully
analyzed by N. Luhmann in his well-known system theory approach [27]. Problems
of a similar nature arise when considering biological processes.
We can never analyze exactly how every relevant factor interrelates to spread
energy or to energize spreads. But in both fields, the broad pattern of probability
describing the whole system can be seen.
Bachelier introduced a stochastic model by looking at the bond market as a fair
game. In tossing a coin, the probability of heads or tails remains 1/2 on each
toss, regardless of what happened on the prior toss. In that sense, tossing
coins is said to have no memory. Even during long runs of heads or tails, at each toss
the run is as likely to end as to continue. In the thick of the trading, price changes
can certainly look that way. Bachelier assumed that the market had already taken
account of all relevant information, and that prices were in equilibrium with supply
matched to demand, and seller paired with buyer. Unless some new information
came along to change that balance, one would have no reason to expect any change
in price. The next move would as likely be up as down.
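The memorylessness of the fair game can be verified empirically. This sketch (an illustrative simulation; the sample size and seed are arbitrary choices) checks that even immediately after a run of three heads, the next toss is still heads with probability 1/2:

```python
import random

random.seed(2)
flips = [random.random() < 0.5 for _ in range(200000)]  # True = heads

# "No memory": condition on the previous three tosses all being heads and
# estimate the probability that the next toss is heads as well.
after_run = [flips[i] for i in range(3, len(flips))
             if flips[i - 3] and flips[i - 2] and flips[i - 1]]
p = sum(after_run) / len(after_run)
print(round(p, 3))  # close to 0.5: the run is as likely to end as to continue
```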
In order to illustrate this smooth distribution, Bachelier plotted all of a bond’s
price-changes over a month or year onto a graph. In the case of independent and
identically distributed price-changes, they spread out in the well-known bell-curve
shape of a normal (“Gaussian”) distribution: the many small changes clustered in
the center of the bell, and the few big changes at the edges. Bachelier assumed that
price changes behave like the random walk of molecules in a Brownian motion.
Long before Bachelier and Einstein, the Scottish botanist Robert Brown had studied
the erratic way that tiny pollen grains jiggled about in a sample of water. Einstein
explained it by molecular interactions and developed equations very similar to
Bachelier’s equation of bond-price probability, although Einstein never knew that.
It is a remarkable coincidence that the movement of security prices, the motion
of molecules, and the diffusion of heat are described by mathematically analogous
models. In short, Bachelier's model rests on three hypotheses: (1) statistical
independence (“each change in price appears independently from the last”), (2)
statistical stationarity of price changes, and (3) normal distribution (“price
changes follow the proportions of the Gaussian bell curve”).
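The three hypotheses can be checked on a simulated Bachelier-style series. This sketch (an illustrative simulation with arbitrary sample size and seed, not from the text) generates i.i.d. Gaussian price changes and applies one rough diagnostic per hypothesis:

```python
import random
import statistics

random.seed(3)
# Bachelier-style model: independent, identically distributed Gaussian
# price changes (hypotheses 1-3 above).
changes = [random.gauss(0.0, 1.0) for _ in range(100000)]
mu = statistics.mean(changes)
var = statistics.pvariance(changes)

# (1) Independence: the lag-1 sample autocorrelation should be near zero.
acf1 = sum((changes[i] - mu) * (changes[i + 1] - mu)
           for i in range(len(changes) - 1)) / (len(changes) * var)

# (2) Stationarity: both halves of the series show a similar spread.
sd_a = statistics.stdev(changes[:50000])
sd_b = statistics.stdev(changes[50000:])

# (3) Normality (rough check): about 68% of changes lie within one
# standard deviation of the mean.
within = sum(abs(c - mu) <= var ** 0.5 for c in changes) / len(changes)

print(round(acf1, 3), round(sd_a, 3), round(sd_b, 3), round(within, 3))
```

Real market data, as discussed next, typically fail these diagnostics: returns show dependence in their volatility, non-stationary spread, and far more extreme moves than the Gaussian allows.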
But the Dow charts demonstrate that the index changes of financial markets have
no smooth distribution of a Gaussian bell curve (compare references [24] and [22],
chapter 7.4). Price fluctuations of real markets are not mild, but wild. This means
that stocks are riskier than the normal distribution would suggest. With the bell
curve in mind, stock portfolios may be put together incorrectly, risk management
fails, and trading strategies are misguided. Furthermore, the Dow chart shows that,
with globalization increasing, we will see more crises. Therefore, our whole focus
must be on the extremes now.
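The difference between "mild" and "wild" fluctuations can be made concrete by counting extreme moves. This sketch (an illustrative comparison, using the Cauchy law as a stand-in for a heavy-tailed stable distribution; sample size and threshold are arbitrary choices) counts moves larger than 4 units under each model:

```python
import math
import random

random.seed(4)
n = 100000

# "Mild" Gaussian fluctuations versus "wild" Cauchy (heavy-tailed stable)
# fluctuations: count draws exceeding 4 units in absolute value.
mild = sum(abs(random.gauss(0.0, 1.0)) > 4 for _ in range(n))
wild = sum(abs(math.tan(math.pi * (random.random() - 0.5))) > 4 for _ in range(n))

# Under the bell curve such extremes are vanishingly rare (probability about
# 6e-5); under the heavy-tailed law they occur in roughly 15% of draws.
print(mild, wild)
```

A risk model calibrated to the bell curve therefore treats crashes as practically impossible events that a heavy-tailed model expects routinely, which is why the focus must shift to the extremes.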
On a qualitative level, financial markets seem to be similar to turbulence in
nature. Wind is an example of natural turbulence which can be studied in a wind
tunnel. When the rotor at the tunnel’s head spins slowly, the wind inside blows
smoothly, and the currents glide in long, steady lines, planes, and curves. Then,
as the rotor accelerates, the wind inside the tunnel picks up speed and energy. It
suddenly breaks into sharp and intermittent gusts. Eddies form, and a cascade of
whirlpools, scaled from great to small, spontaneously appears. The same emergence
of patterns and attractors can be studied in the fluid dynamics of water.