FROM BEING TO BECOMING
Time and Complexity in the Physical Sciences

Ilya Prigogine
Artists: John and Jean Foster. Compositor: Santype International Limited. Printer and binder: The Maple-Vail Book Manufacturing Group.
To my colleagues and friends in Brussels and Austin whose collaboration has made this work possible
No part of this book may be reproduced by any mechanical, photographic, or electronic process, or in the form of a phonographic recording, nor may it be stored in a retrieval system, transmitted, or otherwise copied for public or private use, without written permission from the publisher.
Printed in the United States of America
CONTENTS
PREFACE xi

Chapter 1  Introduction: Time in Physics 1
    The Dynamical Description and Its Limits 1
    The Second Law of Thermodynamics 5
    Molecular Description of Irreversible Processes
    Time and Dynamics 12

Part I  The Physics of Being 15

Chapter 2  Classical Dynamics 19
    Introduction 19
    Hamiltonian Equations of Motion and Ensemble Theory 21
    Operators 27
    Equilibrium Ensembles 29
    Integrable Systems 29
    Ergodic Systems 33
    Dynamical Systems neither Integrable nor Ergodic 38
    Weak Stability 43

Chapter 3  Quantum Mechanics 47
    Introduction 47
    Operators and Complementarity 49
    Quantization Rules 51
    Time Change in Quantum Mechanics 56
    Ensemble Theory in Quantum Mechanics 59
    Schrödinger and Heisenberg Representations 62
    Equilibrium Ensembles 64
    The Measurement Problem 65
    Decay of Unstable Particles 67
    Is Quantum Mechanics Complete? 70

Part II  The Physics of Becoming 73

Chapter 4  Thermodynamics 77
    Entropy and Boltzmann's Order Principle 77
    Linear Nonequilibrium Thermodynamics 84
    Thermodynamic Stability Theory 90
    Application to Chemical Reactions 94

Chapter 5  Self-Organization 103
    Stability, Bifurcation, and Catastrophes 103
    Bifurcations: The Brusselator 109
    A Solvable Model for Bifurcation 116
    Coherent Structures in Chemistry and Biology 120
    Ecology 123
    Concluding Remarks 126

Chapter 6  Nonequilibrium Fluctuations 131
    The Breakdown of the Law of Large Numbers 131
    Chemical Games 135
    Nonequilibrium Phase Transitions 139
    Critical Fluctuations in Nonequilibrium Systems 142
    Oscillations and Time Symmetry Breaking 143
    Limits to Complexity 145
    Effect of Environmental Noise 147
    Concluding Remarks 154

Part III  The Bridge from Being to Becoming

Chapter 7  Kinetic Theory

Chapter 8  The Microscopic Theory of Irreversible Processes 179
    Irreversibility and the Extension of the Formalism of Classical and Quantum Mechanics 179
    A New Transformation Theory 181
    Construction of the Entropy Operator and the Transformation Theory: The Baker Transformation 187
    Entropy Operator and the Poincaré Catastrophe 191
    Microscopic Interpretation of the Second Law of Thermodynamics: Collective Modes 194
    Particles and Dissipation: A Non-Hamiltonian Microworld 197

Chapter 9  The Laws of Change 201
    Einstein's Dilemma 201
    Time and Change 204
    Time and Entropy as Operators 206
    Levels of Description 210
    Past and Future 212
    An Open World 214

APPENDIXES 217
    A. Time and Entropy Operators for the Baker Transformation 219
    B. Resonances and Kinetic Description 232
    C. Entropy, Measurement, and the Superposition Principle in Quantum Mechanics 241
        Pure States and Mixtures 241
        Entropy Operator and Generator of Motion 242
        The Entropy Superoperator 245
    D. Coherence and Randomness in Quantum Theory 249
        Operators and Superoperators 249
        Classical Commutation Rules 251
        Quantum Commutation Rules 252
        Concluding Remarks 254

REFERENCES 257

SUBJECT INDEX 267
PREFACE
Come, press me tenderly upon your breast,
But not too hard, for fear the glass might break.
This is the way things are: the World
scarcely suffices for the natural,
but the artificial needs to be confined.
GOETHE, Faust, Part II
This book is about time. I would like to have named it Time, the Forgotten Dimension, although such a title might surprise some readers. Is not time incorporated from the start in dynamics, in the study of motion? Is not time the very point of concern of the special theory of relativity? This is certainly true. However, in the dynamical description, be it classical or quantum, time enters only in a quite restricted way, in the sense that these equations are invariant with respect to time inversion, t → −t. Although a specific type of interaction, the so-called superweak interaction, seems to violate this time symmetry, the violation plays no role in the problems that are the subject of this book. As early as 1754, d'Alembert noted that time appears in dynamics as a mere "geometrical parameter" (d'Alembert 1754). And Lagrange, more than a hundred years before the work of Einstein and Minkowski, went so far as to call dynamics a four-dimensional geometry (Lagrange 1796). In this view, future and past play the same role. The world lines, the trajectories, followed by the atoms or particles that make up our universe can be traced toward the future or toward the past.

This static view of the world is rooted in the origin of Western science (Sambursky 1963). The Milesian school, of which Thales was one of the most illustrious proponents, introduced the idea of a primordial matter closely related to the concept of conservation of matter. For Thales, a single substance (such as water) forms the primordial matter; all changes
in physical phenomena, such as growth and decay, must therefore be mere illusions.

Physicists and chemists know that a description in which past and future play the same role does not apply to all phenomena. Everybody observes that two liquids put into the same vessel generally diffuse toward some homogeneous mixture. In this experiment, the direction of time is essential. We observe progressive homogenization, and the one-sidedness of time becomes evident in the fact that we do not observe spontaneous phase separation of the two mixed liquids. But for a long time such phenomena were excluded from the fundamental description of physics. All time-oriented processes were considered to be the effect of special, "improbable" initial conditions.

At the beginning of this century, this static view was almost unanimously accepted by the scientific community, as will be seen in Chapter 1. But we have since been moving away from it. A dynamical view in which time plays an essential role prevails in nearly all fields of science. The concept of evolution seems to be central to our understanding of the physical universe. It emerged with full force in the nineteenth century. It is remarkable that it appeared almost simultaneously in physics, biology, and sociology, although with quite different specific meanings. In physics it was introduced through the second law of thermodynamics, the celebrated law of increase of entropy, which is one of the main subjects of this book. In the classical view, the second law expressed the increase of molecular disorder; as expressed by Boltzmann, thermodynamic equilibrium corresponds to the state of maximum "probability." However, in biology and sociology, the basic meaning of evolution was just the opposite, describing instead transformations to higher levels of complexity.

How can we relate these various meanings of time: time as motion, as in dynamics; time related to irreversibility, as in thermodynamics; time as history, as in biology and sociology? It is evident that this is not an easy matter. Yet we are living in a single universe. To reach a coherent view of the world of which we are a part, we must find some way to pass from one description to another.
A basic aim of this book is to convey to the reader my conviction that we are in a period of scientific revolution, one in which the very position and meaning of the scientific approach are undergoing reappraisal, a period not unlike the birth of the scientific approach in ancient Greece or of its renaissance in the time of Galileo. Many interesting and fundamental discoveries have broadened our scientific horizon. To cite only a few: quarks in elementary particle physics; strange objects like pulsars in the sky; the amazing progress of molecular biology. These are landmarks of our times, which are especially rich in important discoveries. However, when I speak of a scientific revolution, I have in mind something different, something perhaps more subtle.

Since the beginning of Western science, we have believed in the "simplicity" of the microscopic: molecules, atoms, elementary particles. Irreversibility and evolution appear, then, as illusions related to the complexity of collective behavior of intrinsically simple objects. This conception, historically one of the driving forces of Western science, can hardly be maintained today. The elementary particles that we know are complex objects that can be produced and can decay. If there is simplicity somewhere in physics and chemistry, it is not in the microscopic models. It lies more in idealized macroscopic representations, such as those of simple motions like the harmonic oscillator or the two-body problem. However, if we use such models to describe the behavior of large systems or very small ones, this simplicity is lost. Once we no longer believe in the simplicity of the microscopic, we must reappraise the role of time.

We come, then, to the main thesis of this book, which can be formulated as follows. First, irreversible processes are as real as reversible ones; they do not correspond to supplementary approximations that we of necessity superpose upon time-reversible laws. Second, irreversible processes play a fundamental constructive role in the physical world; they are at the basis of important coherent processes that appear with particular clarity on the biological level. Third, irreversibility is deeply rooted in dynamics. One may say that irreversibility starts where the basic concepts of classical or quantum mechanics (such as trajectories or wave functions) cease to be observables. Irreversibility corresponds not to some supplementary approximation introduced into the laws of dynamics but to an embedding of dynamics within a vaster formalism. Thus, as will be shown, there is a microscopic formulation that extends beyond the conventional formulations of classical and quantum mechanics and explicitly displays the role of irreversible processes.
This formulation leads to a unified picture that enables us to relate many aspects of our observations of physical systems to biological ones. The intention is not to "reduce" physics and biology to a single scheme, but to clearly define the various levels of description and to present conditions that permit us to pass from one level to another.

The role of geometrical representations in classical physics is well known. Classical physics is based on Euclidean geometry, and modern developments in relativity and other fields are closely related to extensions of geometrical concepts. But take the other extreme: the field theory used by embryologists to describe the complex phenomena of morphogenesis. It is a striking experience, especially for a nonbiologist, to attend a movie describing the development of, for example, the chicken embryo. We see the progressive organization of a biological space in which every event proceeds at a moment and in a region that make it possible for the process to be coordinated as a whole. This space is functional, not geometrical. The standard geometrical space, the Euclidean space, is invariant with respect to translations or rotations. This is not so in the biological space. In this space the events are processes localized in space and time and not merely trajectories. We are quite close to the Aristotelian view of the cosmos (see Sambursky 1963), which contrasted the world of divine and eternal trajectories with the world of sublunar nature, the description of which was clearly influenced by biological observations.

    The glory, doubtless, of the heavenly bodies fills us with more delight than the contemplation of these lowly things; for the sun and stars are born not, neither do they decay, but are eternal and divine. But the heavens are high and afar off, and of celestial things the knowledge that our senses give is scanty and dim. The living creatures, on the other hand, are at our door, and if we so desire it we may gain ample and certain knowledge of each and all. We take pleasure in the beauty of a statue, shall not the living fill us with delight; and all the more if in the spirit of philosophy we search for causes and recognize the evidence of design. Then will nature's purpose and her deep-seated laws be everywhere revealed, all tending in her multitudinous work to one form or another of the Beautiful.

    Aristotle, quoted in Haraway, 1976.

Although the application of Aristotle's biological views to physics has had disastrous consequences, the modern theory of bifurcations and instabilities allows us to see that the two concepts, the geometrical world and the organized, functional world, are not incompatible. This realization will, I think, have a lasting influence.
Belief in the "simplicity" of the microscopic level now belongs to the past. But there is a second reason why I am convinced that we are in the middle of a scientific revolution. The classical, often called "Galilean," view of science was to regard the world as an "object," to try to describe the physical world as if it were being seen from the outside as an object of analysis to which we d o not belong. This attitude has been immensely successful in the past. But we have reached the limit of this Galilean view (Koyre 1968). To progress further, we must have a better understanding of our position, the point of view from which we start our description of the physical universe. This does not mean that we must revert to a subjectivistic view of science, but in a sense we must relate knowing to characteristic features of life. Jacques Monod has called living systems "these strange objects," and they are very strange indeed compared with the "nonliving" world (Monod 1970). Thus, one of my objectives is to try to disentangle a few general features of these objects. In molecular biology there has been fundamental progress without which this discussion would not have been possible. But I wish to emphasize other aspects: namely, that living organisms are far-from-equilibrium objects separated by instabilities from the world ofequilibrium and that living organisms are necessarily "large," macroscopic objects requiring a coherent state of matter in order to produce the complex biomolecules that make the perpetuation of life possible. These general characteristics must be incorporated in the answer to the question, What is the meaning of our description of the physical world: that is, from what point of view d o we describe it? The answer can only be that we start at a macroscopic level, and all the results of our measurements, even those of the microscopic world, at some point refer back to the macroscopic level. As Bohr emphasized, primitive concepts exist; these concepts are not known a priori, but every description must be shown to be compatible with their existence (Bohr 1948). This introduces an element of self-consistency into our description of the physical world. For example, living systems have a sense of the direction of time. Experimentation on even the simplest monocellular organisms reveals that this is so. This direction of time is one of these "primitive
concepts." No science-whether of reversible time behavior, as in dynamics, or of irreversible processes-would be possible without it. Therefore one of the most interesting aspects of the theory of dissipative structures developed in Chapters 4 and 5 is that we can now find the roots of this direction of time at the basis of physics and chemistry. In turn, this finding justifies in a self-consistent way the sense of time that we have attributed to ourselves. The concept of time is much more complex than we thought. Time associated with motion was only the first aspect that could be incorporated consistently into the framework of theoretical structures such as classical or quantum mechanics. We can go further. One of the most striking new results to be described in this book is the appearance of a "second time," a time deeply rooted in fluctuations on the microscopic, dynamical level. This new time is no longer a simple parameter, as in classical or quantum mechanics; rather it is an operator, like those describing quantities in quantum mechanics. Why we need operators to describe the unexpected complexity of the microscopic level is one of the most interesting aspects of the development to be considered in this book.
The recent evolution of science may lead to a better integration of the scientific outlook in the framework of Western culture. There is no doubt that the development of science has, in spite of all its successes, also led to some form of cultural stress (Snow 1964). The existence of "two cultures" is due not only to a lack of mutual curiosity, but also, at least partly, to the fact that the scientific approach has had so little to say about problems, such as time and change, pertaining to literature and art. Although this book does not address problems related to philosophy and human sciences, they are examined by my colleague Isabelle Stengers and myself in another book, La nouvelle alliance (Gallimard, 1979), soon to be translated into English. It is interesting to note that there is a strong current both in Europe and in the United States to bring the philosophical and the scientific themes closer together. For example, consider the work of Serres, Moscovici, Morin, and others in France and the recent, provocative article by Robert Brustein, "Drama in the Age of Einstein," published in the New York Times on August 7, 1977, in which the role of causality in literature is reappraised.
It is probably not an exaggeration to say that Western civilization is time centered. Is this perhaps related to a basic characteristic of the point of view taken in both the Old and the New Testaments? It was inevitable that the "timeless" conception of classical physics would clash with the metaphysical conceptions of the Western world. It is not by accident that the entire history of philosophy from Kant through Whitehead was either an attempt to eliminate this difficulty through the introduction of another reality (e.g., the noumenal world of Kant) or a new mode of description in which time and freedom, rather than determinism, would play a fundamental role.

Be that as it may, time and change are essential in problems of biology and in sociocultural evolution. In fact, a fascinating aspect of cultural and social changes, in contrast with biological evolution, is the relatively short time in which they take place. Therefore, in a sense, anyone interested in cultural and social matters must consider, in one way or another, the problem of time and the laws of change; perhaps, inversely, anyone interested in the problem of time cannot avoid taking an interest in the cultural and social changes of our time as well.

Classical physics, even extended by quantum mechanics and relativity, gave us relatively poor models of time evolution. The deterministic laws of physics, which were at one point the only acceptable laws, today seem like gross simplifications, nearly a caricature of evolution. Both in classical and in quantum mechanics, it seemed that, if the state of a system at a given time were "known" with sufficient accuracy, the future (as well as the past) could at least be predicted in principle. Of course, this is a purely conceptual problem; in fact, we know that we cannot even predict whether it will rain in, say, one month from now. Yet this theoretical framework seems to indicate that in some sense the present already "contains" the past and the future. We shall see that this is not so. The future is not included in the past. Even in physics, as in sociology, only various possible "scenarios" can be predicted. But it is for this very reason that we are participating in a fascinating adventure in which, in the words of Niels Bohr, we are "both spectators and actors."
The level at which this book has been written is intermediate. Thus, a reader must be familiar with the basic tools of theoretical physics and
chemistry. I hope, however, that by adopting this level I can present to a large group of readers a simple introduction to a field that seems to me to have wide implications.

The book is organized in the following way. An introductory chapter is followed by a short survey of what may be called the physics of "being" (classical and quantum mechanics). I emphasize mainly the limits of classical and quantum mechanics to convey to the reader my conviction that, far from being closed, these fields are in a state of rapid development. Indeed, it is only when the simplest problems are considered that our understanding is satisfactory. Unfortunately, many of the popular concepts of the structure of science have as their bases undue extrapolations from these simple situations. Attention is then turned to the physics of "becoming," to thermodynamics in its modern form, to self-organization, and to the role of fluctuations. Three chapters deal with the methods now available for building a bridge from being to becoming; they involve kinetic theory and its recent extensions. Only Chapter 8 includes more technical considerations. Readers who do not have the necessary background may turn directly to Chapter 9, in which the main conclusions obtained in Chapter 8 are summarized.

Perhaps the most important conclusion is that irreversibility starts where classical and quantum mechanics end. This does not mean that classical and quantum mechanics become wrong; rather, they then correspond to idealizations that extend beyond the conceptual possibilities of observation. Trajectories or wave functions have a physical content only if we can give them an observable context, but this is no longer the case when irreversibility becomes part of the physical picture. Thus, the book presents a panorama of problems that may serve as an introduction to a deeper understanding of time and change.

All references to the literature are given at the end of the book. Among them are key references in which the interested reader may find further developments; others are original publications of special interest in the context of this book. The selection is admittedly rather arbitrary, and I apologize to the reader for any omissions. Of special relevance is the book by Gregoire Nicolis and myself titled Self-Organization in Nonequilibrium Systems (Wiley-Interscience, 1977).

In the preface to the 1959 edition of his Logic of Scientific Discovery, Karl Popper wrote: "There is at least one philosophic problem in which all thinking men are interested. It is the problem of cosmology: the
problem of understanding the world, including ourselves, and our knowledge, as part of the world." The aim of this book is to show that recent developments in physics and chemistry have made a contribution to the problem so beautifully spelled out by Popper. As in all significant scientific developments, there is an element of surprise. We expect new insights mainly from the study of elementary particles and from the solutions to cosmological problems. The surprising new feature is that the concept of irreversibility on the intermediate, macroscopic level leads to a revision of the basic tools of physics and chemistry, such as classical and quantum mechanics. Irreversibility introduces unexpected features that, when properly understood, give the clue to the transition from being to becoming.

Since the origin of Western science, the problem of time has been a challenge. It was closely associated with the Newtonian revolution, and it was the inspiration for Boltzmann's work. The challenge is still with us, but perhaps we are now closer to a more synthetic point of view, which is likely to generate new developments in the future.
I am deeply indebted to my co-workers in Brussels and in Austin for the essential role they have played in helping to formulate and to develop the ideas on which this book is based. Although I cannot thank all of them individually here, I would like to express my gratitude to Dr. Alkis Grecos, Dr. Robert Herman, and Miss Isabelle Stengers for their constructive criticism. I owe special thanks to Dr. Marie Theodosopulu, Dr. Jagdish Mehra, and Dr. Gregoire Nicolis for their constant help in the preparation of the manuscript.
October 1979
Ilya Prigogine
Chapter 1

INTRODUCTION: TIME IN PHYSICS
Whatever the future of theoretical physics may be, "elementary" particles seem to be of such great complexity that the adage concerning "the simplicity of the microscopic" no longer holds. The change in our point of view is equally valid in astrophysics. Whereas the founders of Western astronomy stressed the regularity and eternal character of celestial motions, such a qualification now applies, at best, to very few, limited aspects, such as planetary motion. Instead of finding stability and harmony, wherever we look we discover evolutionary processes leading to diversification and increasing complexity. This shift in our vision of the physical world leads us to investigate branches of mathematics and theoretical physics that are likely to be of interest in the new context.

For Aristotle, physics was the science of processes, of changes that occur in nature (Ross 1955). However, for Galileo and the other founders of modern physics, the only change that could be expressed in precise mathematical terms was acceleration, the variation in the state of motion. This led finally to the fundamental equation of classical mechanics, which relates acceleration to force, F:

\[ m \frac{d^2 x}{dt^2} = F \qquad (1.1) \]
FIGURE 1.1
World lines indicating the time evolution of the coordinate X(t) corresponding to different initial conditions: (A) evolution forward in time; (B) evolution backward in time.
Henceforth physical time was identified with the time, t, that appears in the classical equations of motion. We could view the physical world as a collection of trajectories, such as Figure 1.1 shows for a "one-dimensional" universe. A trajectory represents the position X(t) of a test particle as a function of time. The important feature is that dynamics makes no distinction between the future and the past. Equation 1.1 is invariant with respect to the time inversion t → −t: both motions A, "forward" in time, and B, "backward" in time, are possible. However, unless the direction of time is introduced, evolutionary processes cannot be described in any nontrivial way. It is therefore not astonishing that Alexandre Koyré (1968) referred to dynamical motion as "a motion unrelated to time or, more strangely, a motion which proceeds in an intemporal time, a notion as paradoxical as that of a change without change."
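The time-inversion symmetry of equation 1.1 can be made concrete in a few lines of numerics. The following sketch is an added illustration, not from the original text: a unit-mass particle with a linear restoring force is integrated forward with a time-reversible scheme; flipping the sign of the momentum then makes the trajectory retrace itself back to its starting point, "backward" motion being just as legitimate as "forward" motion.

```python
import numpy as np

def leapfrog(q, p, dt, nsteps):
    """Integrate dq/dt = p, dp/dt = -q (equation 1.1 for a unit-mass particle
    with a linear restoring force) by the leapfrog scheme, which is exactly
    time reversible up to rounding errors."""
    for _ in range(nsteps):
        p -= 0.5 * dt * q   # half kick
        q += dt * p         # drift
        p -= 0.5 * dt * q   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, dt=1e-3, nsteps=50_000)   # motion "forward" in time
q, p = leapfrog(q, -p, dt=1e-3, nsteps=50_000)    # invert momenta: t -> -t
print(abs(q - q0), abs(-p - p0))                  # both tiny: the past is recovered
```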
Again, of the changes that occur in nature, classical physics retained only motion. Consequently, as Henri Bergson (L'Évolution créatrice, 1907; see Bergson, 1963) and others emphasized, everything is given in classical physics: change is nothing but a denial of becoming, and time is only a parameter, unaffected by the transformation that it describes. The image of a stable world, a world that escapes the process of becoming, has remained until now the very ideal of theoretical physics. The dynamics of Isaac Newton, completed by his great successors such as Pierre Laplace, Joseph Lagrange, and Sir William Hamilton, seemed to form a closed universal system, capable of yielding the answer to any question asked. Almost by definition, a question to which dynamics had no answer was dismissed as a pseudoproblem. Dynamics thus seemed to give man access to ultimate reality. In this vision, the rest (including man) appeared only as a kind of illusion, devoid of fundamental significance. It thus became the principal aim of physics to identify the microscopic level to which we could apply dynamics; this microscopic realm could then serve as the basis for explaining all observable phenomena. Here classical physics met the program of the Greek atomists, as stated by Democritus: "Only the atoms and the void."

Today we know that Newtonian dynamics describes only part of our physical experience; it applies to objects on our own scale, whose masses are measured in grams or tons and whose velocities are much smaller than that of light. We know that the validity of classical dynamics is limited
by the universal constants, the most important of which are h, Planck's constant, whose value in the cgs system is of the order of 6 × 10⁻²⁷ erg sec, and c, the velocity of light (3 × 10¹⁰ cm/sec). As the scales of very small objects (atoms, "elementary" particles) or of hyperdense objects (such as neutron stars or black holes) are approached, new phenomena occur. To deal with such phenomena, Newtonian dynamics is replaced by quantum mechanics (which takes into account the finite value of h) and by relativistic dynamics (which includes c). However, these new forms of dynamics, by themselves quite revolutionary, have inherited the idea of Newtonian physics: a static universe, a universe of being without becoming.

Before further discussion of these concepts, we must ask whether physics can really be identified with some form of dynamics. This question must be qualified. Science is not a closed subject. Examples are the recent discoveries in the field of elementary particles that show how much our theoretical understanding lags behind the available experimental data. But, first, a comment on the role of classical and quantum mechanics in molecular physics, which is the best understood. Can we describe at least qualitatively the main properties of matter in terms of only classical or quantum mechanics? Let us consider in succession certain typical properties of matter. As regards spectroscopic properties, such as emission or absorption of light, there is no doubt that quantum mechanics has been immensely successful in predicting the position of the absorption and emission lines. But with respect to other properties of matter (e.g., the specific heat), we have to go beyond dynamics proper. How does it happen that heating a mole of gaseous hydrogen from, say, 0°C to 100°C always requires the same amount of energy if performed at constant volume (or at constant pressure)? Answering this question requires not only knowledge of the structure of the molecules (which can be described by classical or quantum mechanics), but also the assumption that, whatever their histories, any two samples of hydrogen will reach the same "macroscopic" state after some time. We thus perceive a link with the second law of thermodynamics, which is summarized in the next section and which plays an essential role throughout this book.

The role of nondynamical elements becomes even greater when nonequilibrium properties, such as viscosity and diffusion, are included. To calculate such coefficients, we must introduce some form of kinetic theory or a formalism involving a "master equation" (see Chapter 7). The details of the calculation are not important. The main point is that, in addition to the tools provided by classical or quantum dynamics, we need supplementary tools, which will be described briefly before investigating their position with respect to dynamics. Here we encounter the main subject of this book: the role of time in the description of the physical universe.
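The hydrogen example above can be made rough and concrete; the numbers here are added for illustration and are not in the original text. Treating hydrogen near room temperature as an ideal diatomic gas with \(C_V \approx \tfrac{5}{2}R\),

\[ Q = \int_{273\,\mathrm{K}}^{373\,\mathrm{K}} C_V \, dT \approx \tfrac{5}{2} \times 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 100\ \mathrm{K} \approx 2.1\ \mathrm{kJ\ per\ mole}, \]

whatever the sample's previous history. It is this reproducibility, not the particular value, that calls for an assumption beyond pure dynamics.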
The Second Law of Thermodynamics

As already mentioned, dynamics describes processes in which the direction of time does not matter. Clearly, there are other situations in which this direction does indeed play an essential role. If we heat part of a macroscopic body and then isolate this body thermally, we observe that the temperature gradually becomes uniform. In such processes, then, time displays an obvious "one-sidedness." Engineers and physical chemists have given such processes extensive study since the end of the eighteenth century. The second law of thermodynamics as formulated by Rudolf Clausius (see Planck, 1930) strikingly summarizes their characteristic features. Clausius considered isolated systems, which exchange neither energy nor matter with the outside world. The second law then implies the existence of a function S, the entropy, which increases monotonically until it reaches its maximum value at the state of thermodynamic equilibrium:

\[ \frac{dS}{dt} \ge 0 \qquad (1.2) \]
This formulation can be easily extended to systems that exchange energy and matter with the outside world (see Figure 1.2). We must distinguish two terms in the entropy change, dS: the first, d_eS, is the transfer of entropy across the boundaries of the system; the second, d_iS, is the entropy produced within the system. According to the second law, the entropy production inside the system is positive:

\[ dS = d_e S + d_i S, \qquad d_i S \ge 0 \qquad (1.3) \]
FIGURE 1.2
An open system in which d_iS represents entropy production and d_eS represents entropy exchange between system and environment.
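A standard textbook illustration of this bookkeeping (added here; it is not part of the original text): let a quantity of heat dQ flow from a hotter body at temperature T_1 to a colder one at T_2 inside an isolated composite system. Then

\[ d_i S = dQ \left( \frac{1}{T_2} - \frac{1}{T_1} \right) > 0 \quad \text{whenever } T_1 > T_2, \]

while for each subsystem taken separately the change is a pure exchange term, d_eS = ±dQ/T. The positivity of d_iS is what singles out the hot-to-cold direction.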
FIGURE 1.3
The concept of asymptotic stability: if a perturbation leads to point P, the system will respond through an evolution leading back to the equilibrium point O.
It is in this formulation that the basic distinction between reversible and irreversible processes becomes essential. Only irreversible processes contribute to entropy production. Examples of irreversible processes are chemical reactions, heat conduction, and diffusion. On the other hand, reversible processes may correspond to wave propagation in the limit in which the absorption of the wave is neglected. The second law of thermodynamics, then, states that irreversible processes lead to a kind of one-sidedness of time. The positive time direction is associated with the increase of entropy.

Let us emphasize how strongly and specifically the one-sidedness of time appears in the second law. It postulates the existence of a function having quite specific properties, such that in an isolated system it can only increase in time. Such functions play an important role in the modern theory of stability initiated by Aleksander Lyapounov's classic work. (References can be found in Nicolis and Prigogine, 1977.)

There are other instances of the one-sidedness of time. For example, in the superweak interaction, the equations of dynamics do not admit the inversion t → −t. But these are weaker forms of one-sidedness; they can be accommodated in the framework of the dynamical description and do not correspond to irreversible processes as introduced by the second law.

Because we shall concentrate on processes that lead to Lyapounov functions, this concept must be examined in more detail. Consider a system whose evolution is described by some variables X_i, which may, for example, represent concentrations of chemical species. The evolution of such a system may be given by rate equations of the form

\[ \frac{dX_i}{dt} = F_i(X_1, \ldots, X_n) \qquad (1.4) \]
in which F_i is the overall production rate of the component X_i; there is one equation for each component (examples are given in Chapters 4 and 5). Suppose that, for X_i = 0, all the reaction rates vanish. This is then an equilibrium point for the system. We may now ask, If we start with nonvanishing values of the concentrations X_i, will this system evolve toward the equilibrium point X_i = 0? In today's terminology, is the state X_i = 0 an attractor? Lyapounov functions enable us to tackle this problem. We consider a function of the concentrations, V = V(X_1, ..., X_n), and we suppose that it is positive throughout the region of interest and vanishes at X = 0.* We then consider how V(X_1, ..., X_n) varies as the concentrations X_i evolve. The time derivative of this function as the concentrations evolve according to the rate equations (1.4) is:

\[ \frac{dV}{dt} = \sum_i \frac{\partial V}{\partial X_i} \frac{dX_i}{dt} \qquad (1.5) \]

Lyapounov's theorem asserts that the equilibrium state will be an attractor if dV/dt, the time derivative of V, has the opposite sign of V; that is, if the derivative in our example is negative. The geometrical meaning of this condition is evident; see Figure 1.3. For isolated systems, the second law of thermodynamics states that a Lyapounov function exists and that, for such systems, thermodynamic equilibrium is an attractor of nonequilibrium states.

* In general, a Lyapounov function may also be negative definite, but its first derivative must be positive definite (see, e.g., equation 4.28).

This important point can be illustrated by a simple problem in heat conductivity. The time change of temperature T is described by the classical Fourier equation:

\[ \frac{\partial T}{\partial t} = \kappa \frac{\partial^2 T}{\partial x^2} \qquad (1.6) \]
in which κ is the heat conductivity (κ > 0). A Lyapounov function for this problem can easily be found. We can take, for example,

\[ \Theta(T) = \int \left( \frac{\partial T}{\partial x} \right)^2 dx \qquad (1.7) \]

whose time derivative, for a thermally isolated system, is

\[ \frac{d\Theta}{dt} = -2\kappa \int \left( \frac{\partial^2 T}{\partial x^2} \right)^2 dx \le 0 \qquad (1.8) \]
and the Lyapounov function Θ(T) indeed decreases to its minimum value when thermal equilibrium is reached. Inversely, the uniform temperature distribution is an attractor for initial nonuniform distributions.

Max Planck emphasized, quite rightly, that the second law of thermodynamics distinguishes between various types of states in nature, some of which act as attractors for others. Irreversibility is the expression of this attraction (Planck 1930). Such a description of nature is clearly very different from the dynamical description: two different initial temperature distributions reach the same uniform distribution in time (see Figure 1.4). The system possesses an intrinsic "forgetting" mechanism. How different this is from the dynamical "world line" view, in which the system always follows a given trajectory. There is a theorem in dynamics showing that two trajectories can never cross; at most they may meet asymptotically (for t → ∞) at singular points.

Let us now briefly consider how irreversible processes can be described in terms of molecular events.
FIGURE 1.4
Approach to thermal equilibrium. Different initial distributions, such as T_1 and T_2, lead to the same temperature distribution.
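A minimal numerical sketch of this approach to equilibrium (an added illustration; the discretization details are my assumptions, not from the book): equation 1.6 is integrated on an insulated rod, and the Lyapounov functional Θ of equation 1.7 is monitored as it decreases.

```python
import numpy as np

# Explicit finite-difference integration of the Fourier equation (1.6),
# dT/dt = kappa * d2T/dx2, on an insulated rod (an isolated system).
kappa, nx = 1.0, 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / kappa               # below the stability limit 0.5*dx^2/kappa
x = np.linspace(0.0, 1.0, nx)
T = 1.0 + 0.5 * np.sin(3 * np.pi * x)  # an arbitrary nonuniform initial profile

def theta(T):
    """Lyapounov functional (1.7): the integral of (dT/dx)^2 over the rod."""
    return float(np.sum(np.gradient(T, dx) ** 2) * dx)

for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step * dt:.4f}   Theta = {theta(T):.6f}")  # decreases monotonically
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0], Tn[-1] = Tn[1], Tn[-2]      # zero-gradient (insulated) boundaries
    T = Tn
```

Θ tends to zero as the profile flattens, and different initial profiles end in the same uniform state, as in Figure 1.4.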
Molecular Description of Irreversible Processes

FIGURE 1.5
Two different distributions of molecules between two compartments: (A) N = N_1 = 12, N_2 = 0; (B) N_1 = N_2 = 6. After a sufficient lapse of time, distribution B represents the most probable configuration, the analogue of thermodynamic equilibrium.

FIGURE 1.6
A one-dimensional random walk.

Let us first ask what an increase in entropy means in terms of the molecules involved. To find an answer, we must explore the microscopic meaning of entropy. Ludwig Boltzmann, the first to note that entropy is a measure of molecular disorder, concluded that the law of entropy increase is simply a law of increasing disorganization. Consider, for example, a container partitioned into two equal volumes (see Figure 1.5). The number of ways, P, in which N molecules can be divided into two groups, N_1 and N_2, is given by the simple combinatorial formula
\[ P = \frac{N!}{N_1! \, N_2!} \qquad (1.9) \]

in which N! = N(N − 1)(N − 2) ⋯ 3·2·1. The quantity P is called the number of complexions (see Landau and Lifschitz, 1968).

Starting from any initial values of N_1 and N_2, we may perform a simple experiment, a "game" proposed by Paul and Tatiana Ehrenfest to illustrate Boltzmann's ideas (for more details see Eigen and Winkler, 1975). We choose a particle at random and agree that, when chosen, it will change its compartment. As could be expected, after a sufficiently long time an equilibrium is reached in which, except for small fluctuations, there is an equal number of molecules in the two compartments (N_1 ≈ N_2 ≈ N/2). It can easily be seen that this situation corresponds to the maximum value of P and that in the course of the evolution P increases. Thus Boltzmann identified the number of complexions, P, with the entropy through the relation

\[ S = k \log P \qquad (1.10) \]

in which k is Boltzmann's universal constant: an entropy increase expresses growing molecular disorder, as indicated by the increasing number of complexions. In such an evolution, the initial conditions are "forgotten." Whenever one compartment is at first favored with more particles than the other compartment, this lack of symmetry is always destroyed in time. If P is associated with the "probability" of a state as measured by the number of complexions, then the increase of entropy corresponds to the evolution toward the "most probable" state. We shall return to this interpretation later.

It was through the molecular interpretation of irreversibility that the concept of probability first entered theoretical physics. This was a decisive step in the history of modern physics. We can push such probability arguments still further to obtain quantitative formulations that describe how irreversible processes evolve with time. Consider, for example, the well-known random walk problem, an idealized but nevertheless successful model for Brownian motion. In the simplest example, a one-dimensional random walk, a molecule makes a one-step transition at regular time intervals (see Figure 1.6). With the molecule initially at the origin, we ask for the probability of finding it at point m after N steps. If the probability that the molecule proceeds forward or backward is assumed to be one-half, we find that

\[ W(m, N) = \frac{N!}{\left[\frac{1}{2}(N + m)\right]! \left[\frac{1}{2}(N - m)\right]!} \left(\frac{1}{2}\right)^N \qquad (1.11) \]

Thus, to arrive at point m after N steps, some ½(N + m) steps must be taken to the right and some ½(N − m) to the left. Equation 1.11 gives the number of such distinct sequences multiplied by the overall probability of an arbitrary sequence of N steps. (For details, see Chandrasekhar, 1943.) Expanding the factorials, we obtain the asymptotic formula corresponding to a Gaussian distribution:

\[ W(m, N) \approx \left(\frac{2}{\pi N}\right)^{1/2} \exp\left(-\frac{m^2}{2N}\right) \qquad (1.12) \]
Using the notation D = ½nl², in which l is the distance between two sites and n is the number of displacements per unit time, this result can be written:

\[ W(x, t) = \frac{1}{(4\pi D t)^{1/2}} \exp\left(-\frac{x^2}{4Dt}\right) \qquad (1.13) \]
in which x = ml. This is the solution of a one-dimensional diffusion equation identical in form to the Fourier equation (equation 1.6, but with κ replaced by D). Evidently, this is a very simple example; in Chapter 7, consideration is given to more sophisticated techniques for deriving irreversible processes from kinetic theory. Here, however, we may ask the fundamental questions, What is the position of irreversible processes in our description of the physical world? What is the relation of these processes to dynamics?
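The random-walk formulas above are easy to test numerically. The following sketch (an added illustration, not from the book) compares a simulated ensemble of walks with equations 1.11 and 1.12.

```python
import numpy as np
from math import comb, pi, sqrt

rng = np.random.default_rng(0)      # arbitrary seed
N, walkers = 1000, 200_000
# Each walk takes N steps of +-1 with probability one-half; if R of them
# are to the right, the final position is m = 2R - N.
m = 2 * rng.binomial(N, 0.5, size=walkers) - N

# Probability of being back at the origin (m = 0), three ways:
exact = comb(N, N // 2) / 2**N      # equation 1.11 with m = 0
gaussian = sqrt(2 / (pi * N))       # asymptotic formula 1.12
empirical = float(np.mean(m == 0))
print(exact, gaussian, empirical)   # all close to 0.025 for N = 1000
```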
Time and Dynamics

In classical and quantum dynamics, the fundamental laws of physics are taken to be symmetrical in time. Thermodynamic irreversibility corresponds to some kind of approximation added to dynamics. An often quoted example was given by Josiah Gibbs (1902): if we put a drop of black ink into water and stir it, the medium will look gray. This process would seem to be irreversible. But if we could follow each molecule, we would recognize that in the microscopic realm the system has remained heterogeneous. Irreversibility would be an illusion caused by the observer's imperfect sense organs. It is true that the system has remained heterogeneous, but the scale of heterogeneity, initially macroscopic, has become microscopic. The view that irreversibility is an illusion has been very influential, and many scientists have tried to tie this illusion to mathematical procedures, such as coarse graining, that would lead to irreversible processes. Others with similar aims have tried to work out the conditions of macroscopic observation. None of these attempts has led to conclusive results.

It is difficult to believe that the observed irreversible processes, such as viscosity, decay of unstable particles, and so forth, are simply illusions
caused by lack of knowledge or by incomplete observation. Because we know the initial conditions even in simple dynamical motion only approximately, future states of motion become more difficult to predict as time increases. Still, it does not seem meaningful to apply the second law of thermodynamics to such systems. Properties like specific heat and compressibility, which are closely associated with the second law, are meaningful for a gas formed by many interacting particles but are meaningless when applied to such simple dynamical systems as the planetary system. Therefore, irreversibility must have some basic connection with the dynamical nature of the system.

The opposite notion has also been considered: perhaps dynamics is incomplete; perhaps it should be expanded to include irreversible processes. This attitude is also difficult to maintain, because for simple types of dynamical systems the predictions of both classical and quantum mechanics have been remarkably well verified. It is enough to mention the success of space travel, which requires very accurate computation of the dynamical trajectories. In recent times it has been repeatedly asked whether quantum mechanics is complete in connection with the so-called measurement problem (to which we return in Chapter 7). It has even been suggested that, to include the irreversibility of the measurement, new terms would have to be added to the Schrödinger equation describing the dynamics of quantum systems (see Chapter 3).

We come here to the very formulation of the subject of this book. Using the philosophical vocabulary, we can relate the "static" dynamical description with being; the thermodynamic description, with its emphasis on irreversibility, can then be related to becoming. The aim of this book, then, is to discuss the relation between the physics of being and the physics of becoming.

Before that relationship can be dealt with, however, the physics of being must be described. This is done by means of a short outline of classical and quantum mechanics, emphasizing their basic concepts and their present limitations. Then the physics of becoming is addressed, with a short presentation of modern thermodynamics, including the basic problem of self-organization. We are then ready to examine our central problem: the transition between being and becoming. To what extent can we present today a logically coherent, though necessarily incomplete, description of the
physical world? Have we reached some unity of knowledge or is science broken into various parts based on contradictory premises? Such questions will lead us to a deeper understanding of the role of time. The problems of unity of science and of time are so intimately connected that we cannot treat one without the other.
Part I

THE PHYSICS OF BEING
EMERGENCE OF ORDER IN FLUID FLOWS

The ordered structures of a storm arise from complex nonlinear interactions in fluid systems far from equilibrium. The photograph above is of the large-scale eddies in Jupiter's atmosphere. Nonlinear interactions also lead to the emergence of paired vortices at the boundary between two layers of fluid flowing at different velocities, as shown in the computer-drawn graphs on the facing page. Lines of equal vorticity have been plotted. Initially the mixing layer is turbulent and has only small-scale structure. By computer simulation, Ralph Metcalfe and James Riley show how small perturbations of the mixing layer evolve into various types of large-scale vortices. These simulations closely match experimental work done on mixing layers. Whether flows dissipate in turbulent chaos or lead to large-scale order depends on the existence and nature of instabilities in the system. The photograph of Jupiter is reproduced through the courtesy of the National Aeronautics and Space Administration and the computer plots through the courtesy of James Riley and Ralph Metcalfe. Further information about the computer simulation is given in a paper titled "Direct Numerical Simulation of a Perturbed, Turbulent Mixing Layer," AIAA-80-0274 (presented at the AIAA 18th Aerospace Sciences Meeting, Pasadena, California, January 14-16, 1980), which can be obtained from Flow Research Company, Kent, Washington 98031.
Chapter 2

CLASSICAL DYNAMICS
Introduction
Classical dynamics is the oldest part of present-day theoretical physics. It might even be said that modern science began when Galileo and Newton formulated dynamics. A number of the greatest scientists of Western civilization, such as Lagrange, Hamilton, and Henri Poincaré, have made decisive contributions to classical dynamics; moreover, classical dynamics was the starting point of the scientific revolutions of the twentieth century, such as relativity and quantum theory. Unfortunately, most college and university textbooks present classical dynamics as if it were a closed subject. We shall see that it is not. In fact, it is a subject in rapid evolution. In the past twenty years, Andrei Kolmogoroff, Vladimir Arnol'd, and Jürgen Moser, among others, have introduced important new insights, and further developments can be expected in the near future (see Moser, 1972).
Classical dynamics has been the prototype of the scientific approach. In French, the term "rational" mechanics is often used, implying that the laws of classical mechanics are the very laws of reason. Among the characteristics attributed to classical dynamics was that of strict determinism. In dynamics a fundamental distinction is made between initial conditions, which may be given arbitrarily, and the equations of motion, from which the system's later (or earlier) dynamic state can be calculated. As will be seen, this belief in strict determinism is justified only when the notion of a well-defined initial state does not correspond to an excessive idealization.

Modern dynamics was born with Johannes Kepler's laws of planetary motion and with Newton's solution of the "two-body" problem. However, dynamics becomes enormously more complicated as soon as we take into account a third body, a second planet, for instance. If the system is sufficiently complex (as in the "three-body" problem), knowledge (of whatever finite precision) of the system's initial state generally does not allow us to predict the behavior of this system over long periods of time. This uncertainty persists even when precision in the determination of the initial state becomes arbitrarily large. It becomes impossible, even in principle, to know whether, for instance, the solar system that we inhabit is stable for all future times. Such considerations considerably limit the usefulness of the concept of trajectories or world lines. We must, then, consider ensembles of world lines compatible with our measurements (see Figure 2.1). But once we leave the consideration of single trajectories, we leave the model of strict determinism. We can make only statistical predictions, forecasting average results.

It is a curious turn of events. For years, the proponents of classical orthodoxy have tried to free quantum mechanics from its statistical aspects (see Chapter 3); Albert Einstein's remark is well known: "God does not play dice." Now we see that, when long periods of time are considered, classical dynamics itself needs statistical methods. More important still, even classical dynamics, perhaps the most elaborated of all theoretical sciences, is not a "closed" science: we can pose meaningful questions to which it yields no answers. Because classical dynamics is the oldest of all theoretical sciences, its development illustrates, in many ways, the dynamics of the evolution of science. We can see the birth of paradigms, their growth and decay.
FIGURE 2.1
Trajectories originating from a finite region in phase space corresponding to the initial state of the system.
Examples of such paradigms are the concepts of integrable and ergodic dynamic systems, which will be described in the next sections of this chapter. Of course, no systematic description of the theoretical basis of classical dynamics can be presented in this chapter; we can only emphasize certain relevant features.
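The practical loss of the trajectory concept described above can be seen in a few lines of numerics. The sketch below is an added illustration; the Chirikov standard map is used here as a convenient stand-in for the complex dynamical systems discussed in this chapter, and is not one of the book's own examples. Two orbits whose initial conditions differ by 10⁻¹⁰ are followed together.

```python
import numpy as np

K = 5.0                                  # kick strength in the chaotic regime
def step(theta, p):
    """One iteration of the Chirikov standard map, an area-preserving map
    often used as a simple model of nonintegrable Hamiltonian dynamics."""
    p = p + K * np.sin(theta)
    theta = (theta + p) % (2 * np.pi)
    return theta, p

a = (1.0, 0.5)
b = (1.0 + 1e-10, 0.5)                   # second orbit, displaced by 1e-10
for n in range(51):
    if n % 10 == 0:
        print(n, abs(b[0] - a[0]))       # separation grows roughly exponentially
    a, b = step(*a), step(*b)
```

After a few dozen iterations the separation saturates at the size of the system: however precisely the initial state is measured, only statistical predictions remain for long times.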
Hamiltonian Equations of Motion and Ensemble Theory

In classical mechanics it is convenient to describe the state of a system of point particles by the coordinates q_1, ..., q_s and momenta p_1, ..., p_s. Of special importance is the energy of the system when expressed in terms of these variables:

\[ H = E_{\mathrm{kin}}(p_1, \ldots, p_s) + V(q_1, \ldots, q_s) \qquad (2.1) \]

in which the first part depends only on the momenta and is the kinetic energy, and the second part depends on the coordinates and is the potential energy (for details, see Goldstein, 1950). The energy expressed in these variables is the Hamiltonian, which plays a central role in classical dynamics. In this discussion, only conservative systems, in which H does not depend explicitly on time, are considered. A simple example would be a one-dimensional harmonic oscillator, for which the Hamiltonian is

\[ H = \frac{p^2}{2m} + \frac{k q^2}{2} \qquad (2.2) \]

in which m is the mass and k is the spring constant related to the frequency ν (or angular velocity ω) of the oscillator by

\[ \omega = 2\pi\nu = \left(\frac{k}{m}\right)^{1/2} \qquad (2.3) \]

In many-body systems, the potential energy is often the sum of two-body interactions, as in gravitational or electrostatic systems. The central point for us is that, once the Hamiltonian H is known, the motion of the system is determined. Indeed, the laws of classical dynamics may be expressed in terms of Hamilton's equations:

\[ \frac{dq_i}{dt} = \frac{\partial H}{\partial p_i}, \qquad \frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i} \qquad (i = 1, \ldots, s) \qquad (2.4) \]

A great achievement of classical dynamics is that its laws can be expressed in terms of a single quantity, the Hamiltonian. Imagine a space of 2s dimensions whose points are determined by the coordinates q_1, ..., p_s. This space is called the phase space. To each mechanical state there corresponds a point P of this space. The position of the initial point P at time t_0, together with the Hamiltonian, completely determines the evolution of the system.

Let us consider an arbitrary function f of q_1, ..., p_s. Employing Hamilton's equations (2.4), its change with time will be given by:

\[ \frac{df}{dt} = \sum_i \left( \frac{\partial f}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial H}{\partial q_i} \right) = [f; H] \qquad (2.5) \]

in which [f; H] is called the Poisson bracket of f with H. The condition for the invariance of f is, therefore,

\[ [f; H] = 0 \qquad (2.6) \]

Clearly,

\[ [H; H] = \sum_i \left( \frac{\partial H}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial H}{\partial p_i} \frac{\partial H}{\partial q_i} \right) = 0 \qquad (2.7) \]

This relation expresses simply the conservation of energy.

To make the connection between dynamics and thermodynamics, it is very useful to introduce, as did Gibbs and Einstein, the idea of a representative ensemble (see Tolman, 1938). Gibbs has defined it as follows: "We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing not merely infinitesimally, but it may be so as to embrace every conceivable combination of configurations and velocities . . . ." The basic idea, therefore, is that instead of considering a single dynamic system, we consider a collection of systems, all corresponding to the same Hamiltonian. The choice of this collection, or ensemble, depends on the conditions imposed on the systems (we may for example consider isolated systems or systems in contact with a thermostat) and on our knowledge of the initial conditions. If the initial conditions are well defined, the ensemble will be sharply concentrated in some region of phase space; if they are poorly defined, the ensemble will be distributed over a wide region in phase space. For Gibbs and Einstein, the ensemble point of view was merely a convenient computational tool for calculating average values when exact initial conditions were not prescribed. As will be seen in this chapter, as well as in Chapter 7, the importance of the ensemble point of view goes much further than originally conceived by Gibbs and Einstein.

FIGURE 2.2
Gibbs ensemble. Systems, whose state is described by the various points, have the same Hamiltonian and are subject to the same constraints, but differ in their initial conditions.

A Gibbsian ensemble of systems can be represented by a "cloud" of points in the phase space (see Figure 2.2). In the limit in which each region contains a large number of points, the "cloud" can be described as a continuous fluid with a density of

\[ \rho(q_1, \ldots, q_s, p_1, \ldots, p_s, t) \]

in phase space. Because the number of points in the ensemble is arbitrary, ρ will be normalized; that is,

\[ \int \rho \; dq_1 \cdots dq_s \, dp_1 \cdots dp_s = 1 \qquad (2.8) \]

Therefore

\[ \rho \; dq_1 \cdots dp_s \qquad (2.9) \]

represents the probability of finding at time t a representative point in the volume element dq_1, ..., dp_s of phase space. The change of density in every volume element of phase space is due to the difference of the flows across its boundaries. The remarkable feature is that the flow in phase space is "incompressible." In other words, the divergence of the flow vanishes. Indeed, using Hamilton's equations (2.4),

\[ \sum_i \left( \frac{\partial}{\partial q_i} \frac{dq_i}{dt} + \frac{\partial}{\partial p_i} \frac{dp_i}{dt} \right) = \sum_i \left( \frac{\partial^2 H}{\partial q_i \, \partial p_i} - \frac{\partial^2 H}{\partial p_i \, \partial q_i} \right) = 0 \qquad (2.10) \]

As a result, the volume in phase space is preserved in time (see Figure 2.3).

FIGURE 2.3
Preservation of volume in phase space.

Using equation 2.10, we obtain a simple equation of motion for the phase-space density ρ. As is shown in all textbooks (Tolman 1938), this is the well-known Liouville equation, which can be written in the form

\[ \frac{\partial \rho}{\partial t} = [H; \rho] \qquad (2.11) \]

in which (as it is in equation 2.5) the bracket is the Poisson bracket of H with ρ. As it is often convenient to use an operator formulation, we simply multiply equation 2.11 by i = √−1 and write

\[ i \frac{\partial \rho}{\partial t} = L \rho \qquad (2.12) \]

in which L is the Liouville operator,

\[ L \rho = i [H; \rho] \qquad (2.13) \]
The concept of an operator is discussed in greater detail in the next section. To simplify the notations, we have considered a single degree of freedom. The multiplication by i is introduced to make L a Hermitian operator like the operators of quantum mechanics studied in Chapter 3. The formal definition of Hermitian operators can be found in any textbook. The definition of an operator in quantum mechanical systems is given in Chapter 3, in the section on operators and complementarity. The basic difference between them lies in the space in which they act: in classical dynamics L acts in the phase space, whereas in quantum mechanics the operators act in the coordinate space or in the momentum space. The Liouville operator has been used extensively in recent work in statistical mechanics (see Prigogine, 1962). Our interest in ensemble theory is obvious. Even if we do not know the exact initial conditions, we may consider the Gibbs density and calculate the average value of any mechanical property A(p, q) such as
Operators

Operators are generally introduced in connection with quantum mechanics. The quantum mechanical aspects are discussed in Chapter 3, but for now it is sufficient merely to emphasize that operators also appear in classical dynamics when the ensemble point of view is adopted. Indeed, the concept of the Liouville operator has already been introduced in equation 2.13. In general, an operator has eigenfunctions and eigenvalues. When an operator acts on one of its eigenfunctions, the result is the eigenfunction multiplied by its associated eigenvalue. Consider, for example, the operator A corresponding to second-order differentiation:

A = d²/dx²    (2.15)

If it acts on an arbitrary function (say x²), the operator changes that function into another one. However, certain functions are left unchanged: for example, consider the "eigenvalue problem"

d²u/dx² = −k²u    (2.16)

with

u = sin kx

and

λ = −k²

in which k is a real number. These are the eigenfunctions and the eigenvalues, respectively, associated with the operator. The eigenvalues may be either discrete or continuous.
To understand this difference, let us reconsider the eigenvalue problem (2.16). So far, boundary conditions have not been introduced, but we now impose the condition that the eigenfunction be zero at the boundaries of the domain, corresponding to x = 0 and x = L. These are the boundary conditions that arise naturally in quantum mechanics. Their physical interpretation is that the particle is trapped inside this domain. It is easy to satisfy these boundary conditions. Indeed, the conditions

u = sin kx = 0  for x = 0, L

lead to

k = nπ/L,  n = 1, 2, 3, …

We see, therefore, that the spacing between two permitted states depends on the size of the domain. Because the spacing is inversely proportional to L², we obtain, in the limit of large systems, what is called a continuous spectrum rather than the discrete spectrum obtained for finite systems. Often one has to consider a slightly more involved limit in which both the volume, V, of the system and the number of particles, N, are infinite, although their ratio remains constant:

N → ∞,  V → ∞,  N/V = constant

This is the thermodynamic limit, which plays an important role in the study of thermodynamic behavior in many-body systems. The distinction between discrete and continuous spectrums is very important for the description of the time evolution of ρ, the density in phase space. If L has a discrete spectrum, the Liouville equation (2.12) leads to a periodic motion. However, the nature of the motion changes drastically if L has a continuous spectrum. We shall come back to this in the section on the decay of unstable particles in Chapter 3. It should be noted here, however, that even a finite classical system may have a continuous spectrum, in contrast with what happens in quantum mechanics.

Equilibrium Ensembles

As mentioned in Chapter 1, the approach to thermodynamic equilibrium is the evolution toward a final state that acts as an attractor for the initial conditions. It is not difficult to guess what this means in terms of the Gibbs distribution function in phase space. Let us consider an ensemble of which all the members have the same energy E. The Gibbs density ρ is zero except possibly on the energy surface defined by the relation

H(p, q) = E
Initially, we could consider an arbitrary distribution over this energy surface. This distribution then evolves in time according to the Liouville equation. The simplest view of what thermodynamic equilibrium means is to assume that at thermodynamic equilibrium the distribution ρ becomes constant on the energy surface. This was Gibbs's basic idea, and he called the corresponding distribution the microcanonical ensemble (Gibbs 1902). Gibbs was able to show that this assumption leads to the laws of equilibrium thermodynamics (see also Chapter 4). Besides the microcanonical ensemble, he introduced other ensembles, such as the canonical ensemble, corresponding to systems in contact with a large energy reservoir at uniform temperature T. This ensemble also leads to the laws of equilibrium thermodynamics and allows a remarkably simple molecular interpretation of such thermodynamic properties as equilibrium entropy. However, such matters will not be dealt with here; instead, attention will be focused on the basic question: What kind of conditions have to be imposed on the dynamics of a system to ensure that the distribution function will approach the microcanonical or the canonical ensemble?
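A toy illustration of such an approach to a uniform distribution (mine, not the text's example): an ensemble of independent rotators with a small spread of frequencies, started in a narrow bunch of angles, spreads until its angular distribution is nearly constant.

```python
import numpy as np

# A toy sketch: rotators with slightly different frequencies, all starting in
# a narrow bunch of angles.  The angular distribution flattens over time -- a
# caricature of a distribution function becoming constant on the (here
# one-dimensional) "energy surface".

rng = np.random.default_rng(0)
n = 100_000
theta0 = rng.normal(0.0, 0.05, n)          # sharply concentrated initial cloud
omega = rng.normal(1.0, 0.1, n)            # spread of frequencies

def nonuniformity(t, bins=50):
    theta = (theta0 + omega * t) % (2 * np.pi)
    counts, _ = np.histogram(theta, bins=bins, range=(0, 2 * np.pi))
    return counts.std() / counts.mean()    # -> 0 for a uniform distribution

for t in (0.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f}   nonuniformity = {nonuniformity(t):.3f}")
```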
Integrable Systems
For most of the nineteenth century, the idea of integrable systems dominated the development of classical dynamics (see Goldstein, 1950).
FIGURE 2.4 Transformation from Cartesian coordinates (p and q) to action and angle variables (J and α, respectively) for the harmonic oscillator.

FIGURE 2.5 Elimination of potential energy (represented in part A by wavy lines) for integrable systems.
The idea is easily illustrated by a harmonic oscillator. Instead of the canonical variables q and p, new variables, J and α, are introduced and defined by

q = (2J/ω)^{1/2} sin α,  p = (2Jω)^{1/2} cos α

This transformation is quite similar to the transformation from Cartesian to polar coordinates; α is called the angle variable, and J, which is the corresponding momentum, the action variable (see Figure 2.4). With these variables, equation 2.2 takes the simple form

H = ωJ    (2.25)

We have performed a canonical transformation in which one form of the Hamiltonian (2.2) has been changed into another (2.25). What has been gained? In the new form, the energy is no longer divided into kinetic and potential energy. Equation 2.25 gives the total energy directly. We can immediately see the usefulness of such transformations for more complicated problems. As long as we have a potential energy, we cannot really attribute an energy to each of the bodies making up the system, because part of the energy is "between" the various bodies. The canonical transformation gives us a new representation enabling us to speak about well-defined bodies or particles, because the potential energy has been eliminated. We then obtain a Hamiltonian of the form

H = H(J₁, …, Jₛ)    (2.26)

which depends only on the action variables. Systems for which we can transform the Hamiltonian in this way (equation 2.2 into 2.25 and, more generally, into the form 2.26) through an appropriate change of variables are by definition the integrable systems of dynamics. For these systems we may therefore "transform away" the potential energy, as represented schematically in Figure 2.5. Does the physical world, such as the one represented by elementary particles and their interactions, correspond to an integrable system? This basic question is discussed in Chapter 3. Another striking feature of the transformation into action and angle variables is that in equation 2.25 the frequency, ω, of the harmonic oscillator is displayed explicitly in the Hamiltonian (it does not have to be derived through the integration of the equations of motion). Similarly, for the general case, we have s frequencies, ω₁, …, ωₛ, each of which is related to the Hamiltonian by

ωᵢ = ∂H/∂Jᵢ    (2.27)
The coordinates corresponding to the action variables Jᵢ are, by definition, the angle variables α₁, …, αₛ. Physical quantities are periodic functions of these angle variables. The form of the Hamiltonian in action variables (equation 2.26) leads to important consequences. The canonical equations are now (see equations 2.4 and 2.27)

dJᵢ/dt = −∂H/∂αᵢ = 0,  dαᵢ/dt = ∂H/∂Jᵢ = ωᵢ    (2.28)
Therefore each action variable is a constant of the motion, and the angle variables are linear functions of time. Throughout the nineteenth century, mathematicians and physicists working on problems in classical dynamics looked for integrable systems, because once the transformation into the Hamiltonian form (equation 2.26) has been found, the integration problem (the solution of the equations of motion) becomes trivial. Thus the scientific community was shocked when Heinrich Bruns first proved (as did Poincaré in more general cases) that the most interesting problems of classical dynamics, starting from the three-body problem (e.g., including the sun, the earth, and the moon), do not lead to integrable systems (Poincaré 1889). In other words, we cannot find a canonical transformation that leads to the Hamiltonian form given in equation 2.26; therefore, we cannot find invariants such as the action variables Jᵢ by a canonical transformation. This was in a sense the point at which the development of classical dynamics ended. Poincaré's basic theorem is discussed later in this chapter in the section titled Dynamical Systems neither Integrable nor Ergodic. For now, it should be noted that, in consideration of the relation of dynamics and thermodynamics, Poincaré's theorem is most fortunate. In general, if physical systems belonged to the class of integrable systems, they could not forget their initial conditions; if the action variables, J₁, …, Jₛ, had prescribed values initially, they would keep them forever, and the distribution function could never become uniform over the microcanonical surface corresponding to a given value E of the energy. Clearly, the final state would depend drastically on the preparation of the system, and concepts such as the approach to equilibrium would lose their meaning.
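For the harmonic oscillator this can be verified directly. The following sketch (assuming unit mass and using the action-angle expressions given above, so that J = H/ω) integrates the equations of motion numerically and confirms that the action stays constant while the angle variable advances, as equations 2.28 require.

```python
import numpy as np

# A quick numerical sketch (unit mass assumed): for the harmonic oscillator,
# the action J = H/w is a constant of the motion while the angle advances
# linearly in time, da/dt = w.

w, dt, steps = 1.7, 1e-4, 200_000
q, p = 1.0, 0.3

for k in range(steps):                 # velocity-Verlet (symplectic) steps
    p -= 0.5 * dt * w**2 * q
    q += dt * p
    p -= 0.5 * dt * w**2 * q
    if k % 50_000 == 0:
        H = 0.5 * p**2 + 0.5 * w**2 * q**2
        J = H / w                      # action variable
        alpha = np.arctan2(w * q, p)   # angle variable
        print(f"step {k:6d}   J = {J:.6f}   alpha = {alpha:+.3f}")
```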
Ergodic Systems

Because of the difficulties in using integrable systems to incorporate the approach to equilibrium, James Clerk Maxwell and Ludwig Boltzmann turned their attention to another type of dynamical system. They introduced what is generally known today as the ergodic hypothesis. In the words of Maxwell, "The only assumption which is necessary for a direct proof of the problem of thermodynamic equilibrium is that the system, if left to itself in the actual state of motion, will sooner or later pass through every phase which is consistent with the equation of energy." Mathematicians have pointed out that a trajectory cannot fill "a surface" and that the statement must be altered to indicate that the system will eventually come arbitrarily close to every point of the energy surface, in accord with the quasi-ergodic hypothesis (see Farquhar, 1964). It is interesting to note that we are dealing here with a prototype of dynamical systems that is just the opposite of the point of view taken in the study of integrable systems: in this prototype, essentially a single trajectory "covers" the energy surface. Ergodic systems have only one invariant instead of the s invariants, J₁, J₂, …, Jₛ, of integrable systems. If we keep in mind that we are generally interested in many-body systems for which s is of the order of Avogadro's number, ≈ 6 × 10²³, the difference is indeed striking. There is no doubt of the existence of ergodic dynamical systems, even of a very simple type. An example of ergodic time evolution is the motion on a two-dimensional unit square corresponding to the equations
dp/dt = α,  dq/dt = 1    (2.29)
These equations are easily solved to give, with periodic boundary conditions,
p(t) = p₀ + αt  (mod 1)    (2.30)
q(t) = q₀ + t  (mod 1)    (2.31)
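A short numerical experiment (an illustration only; the initial point, time step, and grid resolution are arbitrary choices) makes the distinction between the two cases discussed below concrete: for rational α the trajectory revisits a small set of cells of the unit square, whereas for irrational α it keeps filling the square.

```python
import numpy as np

def coverage(a, n_steps=200_000, dt=0.01, grid=50):
    """Fraction of grid cells on the unit square visited by the trajectory."""
    t = dt * np.arange(n_steps)
    p = (0.2 + a * t) % 1.0                  # equation 2.30
    q = (0.7 + t) % 1.0                      # equation 2.31
    cells = set(zip((p * grid).astype(int), (q * grid).astype(int)))
    return len(cells) / grid**2

print("a = 1/3     :", coverage(1 / 3))         # periodic orbit: small fraction
print("a = sqrt(2) :", coverage(np.sqrt(2)))    # quasi-ergodic: close to 1
```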
The basic characteristics of the trajectory depend on the value of α, for which two cases have to be distinguished. If α is a rational number, say α = m/n, the trajectory will be periodic and will repeat itself after a period T = n. The system is then not ergodic. On the other hand, if α is irrational, the trajectory will satisfy the condition of the quasi-ergodic hypothesis: it will come arbitrarily near each point of the unit square. It will "fill" the surface of the square (see Figure 2.6).

FIGURE 2.6 The phase trajectory given by equations 2.30 and 2.31. For α irrational, the trajectory is dense on the unit square.

For later reference it is important to note that, in spite of the ergodicity of the motion, each small region of the phase fluid moves without deformation: a small rectangle Δp Δq preserves not only its size but also its form (dΔp/dt = dΔq/dt = 0 as a consequence of equations 2.29). This is in contrast with other types of motion (see Chapter 7 and Appendix A) in which the movement of the phase fluid leads to violent disturbances. In the equations of motion (2.29), α and 1 are two characteristic frequencies (ω₁ and ω₂); one of them relates to p and the other to q. We may write

ω₁ = α,  ω₂ = 1

Both are constants, as is the frequency of a harmonic oscillator (see equation 2.25). When more than one frequency is included in a problem of dynamics, a basic question is the so-called linear independence of the frequencies. If α is rational, we may find numbers m₁ and m₂, both of which do not vanish, such that

m₁ω₁ + m₂ω₂ = 0    (2.32)

The frequencies are then linearly dependent. On the other hand, if α is irrational, equality 2.32 cannot be satisfied with nonvanishing numbers m₁ and m₂. The frequencies are then linearly independent. About 1930, the work of George Birkhoff, John von Neumann, Heinz Hopf, and others gave definite mathematical form to the ergodic problem in classical mechanics. (For references see Farquhar, 1964, and Balescu, 1975.) We have seen that the flow in phase space preserves volume (or "measure"). This still leaves many possibilities open. In an ergodic system, the phase fluid sweeps the whole available phase space on the microcanonical surface, but, as we have seen, it may do so without altering its shape. Much more complicated types of flow are possible: not only does the phase fluid sweep the entire phase volume, but the initial shape of the element becomes greatly distorted. The initial volume sends out amoebalike arms in all directions in such a way that the distribution becomes uniform over a long period of time, regardless of its initial configuration. Such systems, called mixing systems, were first investigated by Hopf. There is no hope of drawing a simple figure that could correspond to this flow, because two neighboring points, no matter how close together, may diverge. Even if we start with a simply shaped distribution, in time we obtain a "monster," as Benoit Mandelbrot has rightly called objects of this complexity (Mandelbrot 1977). Perhaps a biological analogy can clarify the degree of this complexity: consider, for example, the volume of a lung and the hierarchy of vesicles that it contains. There are flows with even stronger properties than those of mixing, which have been investigated notably by Kolmogoroff and Ya. Sinai (see Balescu, 1975). Of particular interest are the K-flows, whose properties are nearer to those of stochastic systems. In fact, when we go from
ergodic flows to mixing flows and then to K-flows, the motion in phase space becomes more and more unpredictable. We move further and further away from the idea of determinism, which was for so long considered the characteristic feature of classical dynamics. (An example using the baker transformation is treated in Appendix A.) With regard to the spectral properties of L, the distinction between these different types of flow is very simple. For example, ergodicity means that the only solution of

Lφ = 0    (2.33)

is a function of the Hamiltonian alone,

φ = φ(H)    (2.34)

and therefore corresponds to a constant on the microcanonical surface.

FIGURE 2.7 Various types of flow in phase space: (A) nonergodic; (B) ergodic but not mixing; (C) mixing.

By referring to equation 2.13, we can see that equation 2.34 is indeed a solution of equation 2.33, but the characteristic feature of ergodic systems is that it is the only one. Similarly (see, e.g., Lebowitz, 1972), mixing implies the stronger property that L has no discrete eigenvalues other than zero. Finally, K-flows imply that, in addition to mixing, the multiplicity of the solutions (i.e., the number of solutions for a given eigenvalue) is constant. An unexpected result of ergodic theory is that the "unpredictability" or "randomness" of motion is related to such simple properties of the Liouville operator L. In a series of remarkable papers, Sinai (see, e.g., Balescu, 1975) was able to prove that a system of more than two hard spheres in a box is a K-flow (and therefore also mixing and ergodic). Unfortunately, it is not known whether this remains true for other (less singular) laws of interaction. Nevertheless, most physicists shared the opinion that this was only a formal difficulty and that the mechanical basis of the approach to equilibrium observed in physical systems had indeed to be found in the theory of ergodic systems. The view that dynamical systems would be, in general, ergodic was first challenged in a paper by Kolmogoroff (1954). He pointed out that, for large classes of interacting dynamical systems, one could construct periodic orbits confined to a subspace (invariant tori) of the ergodic surface. Other investigations also contributed to weakening our belief in the universality of ergodic systems. For example, an important piece of work
was the one realized by Enrico Fermi, John Pasta, and Stanislaw Ulam (see Balescu, 1975), who made a numerical investigation of the behavior of a coupled chain of anharmonic oscillators. They expected that this system would reach thermal equilibrium rapidly. Instead, they found periodic oscillations in the energy of the various normal modes. The work of Kolmogoroff was extended by Arnol'd and Moser and has led to the so-called KAM theory. Perhaps the most interesting aspect of this new theory is that, independently of ergodicity, dynamical systems may lead to random motion that is somewhat similar to the type of motion occurring in mixing systems or K-flows. Let us consider this important point in more detail.
FIGURE 2.8 From Fractals: Form, Chance and Dimension by Benoit B. Mandelbrot (W. H. Freeman and Company). Copyright © 1977 by Benoit B. Mandelbrot.

FIGURE 2.9 Various types of trajectories: (A) periodic; (B) conditionally periodic; (C) random.
An interesting observation made by Ford and others is that a dynamical system may, depending on circumstances, change from being conditionally periodic to being "random." To analyze this finding, let us start with a Hamiltonian that is the sum of an unperturbed Hamiltonian, H₀, depending only on the canonical momenta, and a perturbation depending both on the canonical momenta and on the canonical coordinates:

H = H₀(J₁, J₂) + V(J₁, J₂, α₁, α₂)    (2.35)
FIGURE 2.10 Whittaker's theory (see text for details).
If the perturbation were absent, J₁ and J₂ would be the action variables corresponding to the problem, and we would have two "unperturbed" frequencies related to the Hamiltonian H₀ and given (as in equation 2.27) by

ω₁ = ∂H₀/∂J₁,  ω₂ = ∂H₀/∂J₂    (2.36)

An essential difference between this example and that of the harmonic oscillator is that, in general, H₀ will not be linear in the J's, and these two frequencies will be action dependent. Let us now examine the effect of the perturbation V in the Hamiltonian (equation 2.35). Because this is in general a periodic function in the angle variables α₁, α₂, we may write it in the general form of a Fourier series. Typically, we may consider a perturbation of the form

V = Σ V_{n₁n₂}(J₁, J₂) e^{i(n₁α₁ + n₂α₂)}    (2.37)

The interesting point is that the solution of the equation of motion through perturbation theory always includes terms of the order

V_{n₁n₂} / (n₁ω₁ + n₂ω₂)    (2.38)

which correspond to ratios of the potential energy divided by sums of the frequencies of the unperturbed system. This leads to "dangerous" behavior when the Fourier coefficient V_{n₁n₂} does not vanish in the presence of a resonance, for which

n₁ω₁ + n₂ω₂ = 0    (2.39)

Expression 2.38 is then undefined, and anomalous behavior has to be expected. As shown by numerical experiments, the occurrence of resonances causes periodic or quasi-periodic behavior to become random behavior (see Figure 2.9). Resonances destroy the simplicity of dynamical motion. They correspond to the transfer of large amounts of energy or momentum from one degree of freedom to the other. In numerical calculations, only a finite number of resonances (for example, two) are generally considered. But it is important to investigate what would happen if the number of resonances were infinite; that is, if there were resonances in every region of the J₁, J₂-plane, no matter how small. This is the case corresponding to Poincaré's theorem on the nonexistence of integrable systems mentioned earlier. The resonances lead to such an irregular motion that invariants of motion other than the Hamiltonian are no longer analytic functions of the action variables. We shall refer to this as the "Poincaré catastrophe," which will play an important role in the later chapters of this book. It is remarkable how prevalent the Poincaré catastrophe is. It appears in most problems of dynamics, starting from the celebrated three-body problem. A good illustration of the physical meaning of Poincaré's fundamental theorem has been provided by Edmund Whittaker's theory of "adelphic integrals" (1937). Consider a trajectory that starts at some point A in the action space J₁, J₂ of Figure 2.10, with the frequencies ω₁, ω₂ at this point. Whittaker was able to solve the problem of motion formally for a large class of Hamiltonians in terms of series expansions, but the type of
series expansion differs crucially according to whether or not the frequencies are rationally independent (or commensurable). Because ω₁, ω₂ are generally continuous functions of the action variables, they will be rationally dependent every time their ratio is a rational number of the form m/n and rationally independent if the ratio is not a rational number. Therefore, the type of motion is different for two points A and B even if they are very near one another, because each rational number is embedded in irrationals and vice versa. This is the basic content of the concept of weak stability already mentioned. It is clear that the Poincaré catastrophe may lead to "random" motion. For integrable systems, a trajectory may be viewed as the "intersection" of invariants of motion. For example, in the case of two degrees of freedom a trajectory would correspond to the intersection of the two surfaces J₁ = C₁ and J₂ = C₂, in which C₁, C₂ are given constants (see equation 2.26). But whenever we have the Poincaré catastrophe, the invariants of motion become nonanalytic, "pathological" functions, as does their intersection (Figure 2.11).

FIGURE 2.11 Trajectory in phase space as the intersection of two invariant surfaces.

It should be noted that the situation is more complex for nonintegrable systems in which the Poincaré catastrophe arises than it is for ergodic (or mixing) systems. In the first case we know that, as a result of the Kolmogoroff, Arnol'd, and Moser theory, in general both periodic motions confined to some part of the available phase space and random motions "covering" the whole phase space exist. Both types of motion may have a positive measure. On the contrary, the confined motions of ergodic (or mixing) systems have measure zero. The consequences of this situation are analyzed in the next section.
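The density of resonances is easy to exhibit numerically. The following sketch (an illustration only; the frequency values and search range are arbitrary) scans integer pairs n₁, n₂ and records the smallest value of |n₁ω₁ + n₂ω₂|: for a rational frequency ratio the resonance condition 2.39 is satisfied exactly, while for an irrational ratio the denominators of expression 2.38 still become arbitrarily small.

```python
import numpy as np

# A minimal sketch of the "small denominators" behind the Poincare
# catastrophe: how close can n1*w1 + n2*w2 come to zero for integers n1, n2?

def smallest_denominator(w1, w2, n_max=200):
    best = np.inf
    for n1 in range(1, n_max + 1):
        n2 = -round(n1 * w1 / w2)        # integer making n1*w1 + n2*w2 small
        if n2 != 0:
            best = min(best, abs(n1 * w1 + n2 * w2))
    return best

print("rational ratio 2 : 3 ->", smallest_denominator(2.0, 3.0))  # exactly 0
print("golden-mean ratio    ->",
      smallest_denominator(1.0, (1 + np.sqrt(5)) / 2))            # small, > 0
```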
Weak Stability

As we have seen, there are at least two types of situations in which dynamical motion introduces random elements. The first corresponds to mixing flow (or flows satisfying stronger conditions, such as K-flows), and the second to what is referred to as the Poincaré catastrophe, in which resonances prevent the "continuation" of the unperturbed invariants of motion when an interaction is introduced. The two situations are quite different: in the first case, the dynamical systems are characterized by a Liouville operator with well-defined spectral properties (such as a continuous spectrum); in the second, it is the decomposition of H (see equation 2.35) into the two parts H₀ and V that is essential. In both cases, however, the character of the motion is such that two trajectories, regardless of how close together their starting points are, may diverge greatly in time. This corresponds to what has often been called instability of motion and is of obvious importance for the long-term behavior of dynamical systems. To contrast this behavior with that found in simple systems, let us consider a simple pendulum, for which the Hamiltonian is
H = p²/(2ml²) − mgl cos θ
in which the first term is the kinetic energy and the second the potential energy in the gravitational field. The coordinate q is replaced by θ, the angle of deflection. Such a pendulum can move in two ways: it can either oscillate around its equilibrium position or rotate around its point of suspension. Rotation is possible only when the energy of the pendulum is large enough. The regions in which one or the other motion is possible can be represented in phase space, as shown in Figure 2.12. The important point for us is that neighboring points in phase space belong, in general, to the same region and hence correspond to the same type of motion, vibration or rotation. Therefore, even with limited information about the initial state of the system, we can decide whether the system will rotate or vibrate.
FIGURE 2.12 Phase space for the rotator. The shaded region corresponds to vibration; the outside region corresponds to rotation.
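In such a simple system the decision can even be automated. A minimal sketch (an illustration only, assuming unit mass, unit length, and unit gravitational acceleration, so that the borderline energy is H = 1, the energy of the upright position):

```python
import numpy as np

# With m = l = g = 1, the pendulum Hamiltonian is H = p^2/2 - cos(theta).
# Energies above that of the upright position (H = 1) give rotation;
# energies below it give vibration.

def motion_type(theta, p):
    H = 0.5 * p**2 - np.cos(theta)
    return "rotation" if H > 1.0 else "vibration"

print(motion_type(0.1, 0.2))    # slow, near the bottom -> vibration
print(motion_type(0.1, 2.5))    # fast -> rotation
```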
This property is lost for systems in which stability is weak. In such systems, one type of motion may occur in every neighborhood of another type of motion (see Figure 2.13). There is then no point in increasing the precision of our observation: the microstructure of the phase space has become extremely complex. This is the reason why statistical arguments enter into every long-term prediction. In such situations, statistical ensembles must be considered. We cannot reduce the "mixture" to a "pure" case corresponding to a single trajectory (which would be represented by a δ-function in phase space). Is this difficulty practical or theoretical in nature? I would support the view that this result has important theoretical and conceptual significance because it forces us to transgress the limits of a purely dynamical description. A similar problem (Is the limitation of the propagation of signals by the velocity of light a practical or a theoretical question?) is answered by the theory of relativity, which shows that our concepts of space and time have to be changed because of this limitation. There is always the temptation to try to describe the physical world as if we were not part of it. We could then conceive of velocities of propagation of arbitrary, even infinite, speed and of the determination of initial conditions with infinite precision. But seeing the world from the outside is not the object of physics. Rather, it is to describe the physical world as it appears to us, who belong to it, through our measurements. In the line of thought inaugurated by the theory of relativity and followed by quantum mechanics, it is a basic objective of theoretical physics to make explicit the general limitations introduced by the measurement processes.

FIGURE 2.13 In systems having weak stability, one type of motion may be found in every neighborhood of another.

But weak stability is only one step toward the incorporation of time and irreversibility into the formal structure of dynamics. As will be seen,
the introduction of entropy or, in general, a Lyapounov function greatly alters this whole formal structure (see Chapters 3 and 7). This is a most unexpected development. We were prepared to see new theoretical structures arise as a result of discoveries in the field of elementary particles or as a result of new insights into the evolution of the universe, but that the concept of thermodynamic irreversibility, which has been with us for one hundred fifty years, should force us to invent new theoretical structures is most surprising. Emphasis should also be placed on the creative role that the problem of irreversibility has played in the history of classical dynamics, and even more so in quantum dynamics (see Chapter 3). The challenge of thermodynamics, which led to ergodic theory and to ensemble theory, has been the starting point of quite remarkable developments. This productive dialogue between the physics of being and the physics of becoming is still going on today, as will be seen in Chapters 7 and 8.
Chapter 3

QUANTUM MECHANICS
Introduction
As demonstrated in Chapter 2, it is only recently that we have begun to grasp the complexity of dynamical description, even in the framework of classical dynamics. Still, classical dynamics attempted to represent some intrinsic reality independent of the mode of description. It was quantum mechanics that shook the Galilean foundations of physics. It destroyed the belief that physical description is realistic in a naive sense, that the language of physics represents the properties of a system independent of the conditions of experimentation and measurement. Quantum mechanics has a very interesting history (Jammer 1966; Mehra 1976, 1979). It started with Planck's attempt to reconcile dynamics with the second law of thermodynamics. Boltzmann had considered this problem for interacting particles (which will be discussed in Chapter 7), whereas Planck thought it would be easier to study the interaction of matter with radiation. He failed in this purpose, but in his attempt he discovered the well-known universal constant h, which bears his name.
For a time, quantum theory remained associated with thermodynamics in the theory of black-body radiation and the theory of specific heat. When Arthur Haas presented what may be considered a precursor of Niels Bohr's theory of electronic orbits in 1908 in Vienna as a part of his dissertation, it was refused on the grounds that quantum theory had nothing to do with dynamics. The situation changed drastically when the extraordinary success of the Bohr-Sommerfeld model of the atom made clear the necessity of building a new dynamics in which Planck's constant could be consistently incorporated. This was accomplished by Louis de Broglie, Werner Heisenberg, Max Born, Paul Dirac, and many others. Because the scope of this book precludes a detailed account of quantum mechanics, the following discussion will focus on the notions necessary for our inquiry: the role of time and irreversibility in physics. The "classical" quantum theory as formulated in the mid-1920s was inspired by the Hamiltonian theory summarized in Chapter 2. Like this Hamiltonian theory, the quantum theory was immensely successful for such simple systems as the rotator, the harmonic oscillator, or the hydrogen atom. However, as in classical dynamics, problems arise when more complicated systems are considered. Can quantum mechanics consistently incorporate the concept of elementary particles? Can it describe decay processes? These are the problems to be emphasized here. They are addressed again in Part III of this book in discussing the bridge from being to becoming. Quantum mechanics is a microscopic theory in the sense that it was introduced with the primary purpose of describing the behavior of atoms and molecules. Thus, it is surprising that it has led to the questioning of the relation between the microworld that we seek to observe and the macroworld to which we ourselves and our measuring devices belong. It can be said that quantum mechanics makes explicit the conflict (which, before its advent, had been implicit) between the dynamical description and the process of measurement (see d'Espagnat, 1976; Jammer, 1974). In classical physics, rigid rods and clocks are often used as models of ideal measurement. They were the main tools used by Einstein in his thought experiments, but there is a supplementary element in measurement, which was emphasized by Bohr. Every measurement is intrinsically irreversible: recording and amplification in measurement are coupled to irreversible
events, such as the absorption or emission of light. (See Rosenfeld, 1965, and George, Prigogine, and Rosenfeld, 1973.) How can dynamics, which treats time as a parameter that has no preferential direction, lead to the element of irreversibility inseparable from measurement? This problem currently attracts a great deal of attention. It is perhaps one of the hottest problems of our time, one in which science and philosophy merge: Can we understand the microscopic world in "isolation"? In fact, we know matter, especially its microscopic properties, only by means of measuring devices, which themselves are macroscopic objects consisting of a large number of atoms or molecules. In a way these devices extend our sense organs. The apparatus can be said to be the mediator between the world that we explore and ourselves. We shall see that the state of a quantum system is determined by the wave function. This wave function satisfies a dynamical equation that is reversible in time, as are the equations of classical dynamics. Therefore, this equation cannot by itself describe the irreversibility of measurement. The novel aspect of quantum mechanics is that we need both reversibility and irreversibility. In a sense, this was already true in classical physics, in which both types of equations were used: for example, Hamilton's equations of dynamics, which are reversible in time, and Fourier's equation for the temperature evolution, which describes an irreversible process. There, however, the problem could be brushed aside by qualifying the heat equation as a phenomenological equation devoid of any fundamental significance. But how do we brush aside the problem of measurement, which is our very link with the physical world?
Operators and Complementarity

The observation that sharp absorption or emission lines exist has been most important in the formulation of quantum mechanics. The only possible interpretation seems to be that a system like an atom or a molecule has discrete energy levels. To reconcile this with the classical ideas, a very important step had to be taken. The Hamiltonian, as introduced in Chapter 2, can take a continuous set of values according to the values of its
arguments, the coordinates and momenta. Therefore, it seemed necessary to replace the Hamiltonian, H, viewed as a continuous function, with a new object, the Hamiltonian regarded as an operator and denoted by H_op. (For an introduction to quantum mechanics, see Landau and Lifschitz, 1960.) The concept of operators in connection with classical mechanics was briefly discussed in Chapter 2. However, the situation is quite different in quantum mechanics. In considering trajectories in classical mechanics, we need only the Hamiltonian as a function of coordinates and momenta (see equation 2.4). However, even in the simplest quantum case, such as the interpretation of the properties of the hydrogen atom, we need the Hamiltonian operator, because we want to interpret the energy levels as the eigenvalues associated with this operator (see equation 2.16). Therefore,

H_op uₙ = Eₙ uₙ    (3.1)
The numbers E₁, E₂, …, Eₙ are the energy levels of the system. Of course, we must have rules by which to change from classical variables to quantum operators. One such rule is

q_op = q,  p_op = (ℏ/i) ∂/∂q    (3.2)
which is to say, without going into detail: "Keep the coordinates as they are and replace momenta by derivatives with respect to coordinates."* In a sense, the transition from functions to operators was forced upon us by spectroscopic experiments that revealed the existence of energy levels. It was a natural step to take, and yet we can only admire people like Max Born, Pascual Jordan, Werner Heisenberg, Erwin Schrödinger, and Paul Dirac, who dared to make this jump. The introduction of operators radically changes our description of nature. Thus, it is quite appropriate to speak of the "quantum revolution." To give an example of these new features, the operators that we have to introduce generally do not commute. This has the following consequences: an eigenfunction of an operator is considered to describe the state of the system in which the physical quantity represented by this operator has a well-defined value (the eigenvalue). Therefore, noncommutativity means, in physical terms, that there can be no state in which, for example, the coordinate q and the momentum p have well-defined values simultaneously. This is the content of the well-known Heisenberg uncertainty relations. This consequence of quantum mechanics is quite unexpected, as it forces us to give up the naive realism of classical physics. We can measure the momentum and the coordinate of a particle. We cannot say that it has well-defined values of coordinate and momentum simultaneously. This conclusion was reached fifty years ago by Heisenberg and Born, among others. It seems as revolutionary today as it did then. In fact, discussions about the meaning of the uncertainty relations have never ceased. Can we not, through the introduction of some supplementary "hidden" variables, restore physical sanity? Until now this has proved to be difficult, if not impossible, and most physicists have given up such attempts. Although the history of this fascinating subject cannot be related here, it is treated quite well in specialized monographs (see Jammer, 1974). Niels Bohr formulated the principle of complementarity based on the existence of physical quantities represented by noncommuting operators (see Bohr, 1928). I hope that he and my late friend Leon Rosenfeld would not have disapproved of the way in which I would like to define this complementarity: the world is richer than it is possible to express in any single language. Music is not exhausted by its successive stylizations, from Bach to Schoenberg. Similarly, we cannot condense into a single description the various aspects of our experience. We must call upon numerous descriptions, irreducible one to the other, but connected to each other by precise rules of translation (technically called transformations). Scientific work consists of selective exploration rather than a discovery of a given reality; it consists of choosing the problem that must be posed. But rather than anticipate some of the conclusions that are presented in Chapter 9, let us resume the discussion of quantum mechanics.

* When there is no possibility of confusion, the subscript "op" will be omitted and H will be used instead of H_op.
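The noncommutativity of q and p can be made concrete with a small numerical sketch (an illustration only, taking ℏ = 1): representing q as multiplication by x and p as −i d/dx on a finite grid, the commutator applied to a smooth test function returns approximately i times that function.

```python
import numpy as np

# A minimal sketch of the canonical commutation relation (hbar = 1):
# (qp - pq) psi = i psi, checked on a grid with central differences.

N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * h)   # central first derivative
Q = np.diag(x).astype(complex)
P = -1j * D

psi = np.exp(-x**2)                                # a smooth test function
comm = (Q @ P - P @ Q) @ psi
err = np.abs(comm[2:-2] - 1j * psi[2:-2]).max()
print("max |(qp - pq)psi - i psi| =", err)         # small, of order h^2
```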
Quantization Rules
Eigenfunctions play very much the same role as basic vectors in vector algebra. As is known from elementary mathematics, an arbitrary vector, say l, can be decomposed into its components along the set of basic vectors (see Figure 3.1). Similarly, we may represent an arbitrary state Ψ of a quantum mechanical system as a superposition of suitable eigenfunctions:

Ψ = Σₙ cₙ uₙ    (3.3)

FIGURE 3.1 Decomposition of (A) a vector, l, into its components and (B) a wave function, Ψ, into eigenfunctions u₀, u₁, …, uₙ.

For reasons that will become apparent in the next section, Ψ is also called the wave function. It is especially convenient to take an orthonormal set of eigenfunctions (the corresponding basic vectors would have length one and be orthogonal to each other):

(uᵢ | uⱼ) = δᵢⱼ = 1 if i = j,  0 if i ≠ j    (3.4)

The notation (uᵢ | uⱼ) indicates the scalar product, which we shall use repeatedly:

(uᵢ | uⱼ) = ∫ uᵢ* uⱼ dq    (3.5)

in which uᵢ* is the complex conjugate of uᵢ. By multiplying equation 3.3 by uᵢ* and using the orthonormality conditions given in equation 3.4, we see immediately that

cᵢ = (uᵢ | Ψ)    (3.6)

The main difference between the elementary vector space (see Figure 3.1A) and the space used in quantum mechanics (see Figure 3.1B) is in the number of dimensions, which is finite in the first case and infinite in the second. In the second case, one speaks of Hilbert space, and the functions uₙ or Ψ are elements (or vectors) of this space. Each element may appear in two ways, at the left or at the right inside the scalar product (equation 3.5). For this reason, Dirac (1958) introduced an elegant notation. The element uₙ may be written either as a bra vector

⟨uₙ |

or as a ket vector

| uₙ⟩

This notation allows us to express in a compact way important properties of the Hilbert space. Suppose that the expansion in equation 3.3 is valid for all elements. Using the bra-ket notation and equation 3.6, we may then write for an arbitrary element |Φ⟩,

|Φ⟩ = Σₙ |uₙ⟩⟨uₙ | Φ⟩

Therefore we obtain the completeness relation

1 = Σₙ |uₙ⟩⟨uₙ|    (3.7)

From this short excursion into the formalism, let us return to physics. The expansion coefficients cₙ, which appear in equation 3.3, have an important physical meaning. If we measure the physical quantity (say, the energy) of which the uₙ are the eigenvectors, the probability of finding the eigenvalue corresponding to uₙ (say, Eₙ) is |cₙ|². The function Ψ, which gives the quantum state, is therefore called a probability amplitude (its
square gives the probability proper). This remarkable physical interpretation of Ψ is due to Born (see Jammer, 1966). It has already been noted that in quantum mechanics physical quantities are represented by operators. However, these operators cannot be arbitrary. The specific class of operators of interest may be defined by associating each operator A with its adjoint A†:

(A†u | v) = (u | Av)    (3.8)

Operators that coincide with their adjoint, A = A†, are the self-adjoint, or Hermitian, operators. Their importance stems from the fact that eigenvalues of self-adjoint or Hermitian operators are real. Moreover, a Hermitian operator leads to an orthonormal set of eigenfunctions satisfying condition 3.4. It is often stated that "observables" are represented in quantum mechanics by Hermitian operators. Are all observables Hermitian operators? This is a complicated question, which is dealt with in Chapter 8. In addition to Hermitian operators, we need a second class of operators, which are associated with changes in coordinates. It is known from elementary geometry that coordinate changes do not alter the value of a scalar product. Therefore, let us consider an operator A that leaves the scalar product (equation 3.5) invariant. This implies

(Au | Av) = (u | v)

and, as a consequence, using equation 3.8,

A†A = 1    (3.11)

By definition, operators satisfying equation 3.11 are called unitary operators. The inverse of the operator A is A⁻¹, such that

A⁻¹A = AA⁻¹ = 1    (3.11′)

Comparing equations 3.11 and 3.11′, we see that the adjoint of a unitary operator is equal to its inverse:

A† = A⁻¹    (3.12)

As in elementary geometry, a similitude transformation must often be performed on operators. A similitude S leads from A to Ā through the relation

Ā = SAS⁻¹    (3.13)

An interesting property is that such similitudes leave invariant all algebraic properties; for example, if

C = AB

then

C̄ = ĀB̄

because, using equation 3.11′,

C̄ = SCS⁻¹ = SA(S⁻¹S)BS⁻¹ = ĀB̄

The similitude (in equation 3.13) may be considered a mere change in coordinates if S is a unitary operator. We are now ready to formulate the problem of quantization as one of finding a suitable coordinate system in which the Hamiltonian takes a simple, diagonal form. This is the Born-Heisenberg-Jordan quantization rule (see Dirac, 1958). We start with the Hamiltonian containing, as in equation 2.1, a kinetic (or unperturbed) contribution, H₀, plus a potential energy (or perturbation), V. We may then look for a similitude, in terms of a unitary operator S, that transforms the initial Hamiltonian into a diagonal one. This is equivalent to the solution of the eigenvalue problem in equation 3.1. Indeed, we may represent H as a matrix, and equation 3.1 shows that, in the representation using its eigenfunctions, H is represented by a diagonal matrix:

(uₙ | H uₘ) = Eₙ δₙₘ
The analogy with the transformation problem of classical mechanics considered in Chapter 2 in the section on integrable systems is striking. The Born-Heisenberg-Jordan quantization rules will be returned to in Chapter 8 in a discussion of how the systems that display irreversible processes can be quantized. For now, let it suffice to note that, as in classical transformation theory, two possible descriptions of a physical system have been applied to integrable systems. Indeed, diagonalization of the Hamiltonian is quite similar to the classical transformation of the Hamiltonian to action variables (equation 2.26). This point can be simply illustrated by a harmonic solid, which corresponds to interacting neighboring atoms or molecules whose relative displacement is so small that it can be described in terms of a potential energy that is quadratic in the displacement, such as that in the harmonic oscillator (equation 2.2). We may describe this system in two ways. The first corresponds to the interaction between neighboring particles in the solid, in which case we have to consider both the kinetic and the potential energy (refer to Figure 2.5A). The second way requires, as in the section on integrable systems in Chapter 2, a canonical transformation to eliminate the potential energy. We may then consider the solid to be a superposition of independent oscillators and calculate the energy levels of each oscillator (refer to Figure 2.5B). Again, we have a choice of descriptions: one in which entities are not well defined (because part of the energy of the solid is "between" the particles) and the other in which they are independent, the "normal modes" of the solid. We return once again to the question, Does our physical world belong to one of these two highly idealized descriptions, or is a third one necessary? This question will be further dealt with later in this chapter in the section on ensemble theory in quantum mechanics.
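As a concrete illustration of this quantization procedure (a sketch only, in units with ℏ = m = ω = 1): applying rule 3.2 on a discrete grid and diagonalizing the resulting Hamiltonian matrix, as in equation 3.1, reproduces the well-known harmonic-oscillator levels Eₙ = n + 1/2.

```python
import numpy as np

# A minimal sketch: keep q, replace p by -i d/dq (rule 3.2), and solve the
# eigenvalue problem 3.1 for H = p^2/2 + q^2/2 on a grid.

N, L = 800, 20.0                      # grid points, box size
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

# p^2 = -d^2/dq^2 by the three-point finite difference
p2 = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
H = 0.5 * p2 + 0.5 * np.diag(x**2)

E = np.sort(np.linalg.eigvalsh(H))
print(E[:5])                          # -> approx [0.5, 1.5, 2.5, 3.5, 4.5]
```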
Time Change in Quantum Mechanics

The analogy that guided the formulation of this new equation was with classical optics: the eigenvalues correspond to the characteristic frequencies associated with wave phenomena. Schrödinger's equation is a wave equation involving the basic dynamical quantity, the Hamiltonian. Its explicit form is

iℏ ∂Ψ/∂t = H_op Ψ    (3.17)

in which i is the imaginary unit √−1 and ℏ is Planck's constant divided by 2π (we shall often take ℏ equal to one, to avoid excessive notation). Note that this equation is not derived in quantum mechanics, but assumed. It can be validated only by comparison with experiment. The Schrödinger equation is a partial differential equation (in that derivatives with respect to coordinates appear in H_op; see the next section), in contrast with Hamilton's equations (2.4). But they do have an element in common: both Hamilton's equations and Schrödinger's equation are of first order in time. Once Ψ is known at some arbitrary time t₀ (together with suitable boundary conditions, such as Ψ → 0 at infinite distances), we may calculate Ψ for arbitrary times, both in the future and in the past. In this sense we recover the deterministic view of classical mechanics, but it now applies to the wave function and not to the trajectory, as in classical mechanics. The discussion in Chapter 2 on the Liouville equation can be applied directly here. It is true that Ψ represents a probability amplitude (as ρ in equation 2.12 represents a probability), but its time evolution has a strictly dynamical character. As in the Liouville equation, there is no simple connection here with a probabilistic process such as Brownian motion. The time evolution is determined by the Hamiltonian. Therefore, in quantum mechanics, the Hamiltonian (more precisely, the Hamiltonian operator) plays a dual role. On the one hand, it determines the energy levels through equation 3.1. On the other, it determines the time evolution of the system. It is also important to notice that the Schrödinger equation is linear. If at a given moment t we have

Ψ(t) = a₁Ψ₁(t) + a₂Ψ₂(t)    (3.18)
then at another arbitrary time t′, earlier or later than t, we also have

Ψ(t′) = a₁Ψ₁(t′) + a₂Ψ₂(t′)    (3.19)
We have seen that Ψ determines the probability of the outcome of experiments and may be appropriately called a probability amplitude. It is also called the wave function, because equation 3.17 has a strong formal similarity with the wave equations of classical physics. It is easy to give the formal solution of the Schrödinger equation (3.17):

Ψ(t) = e^{-iHt} Ψ(0)    (3.20)

This may be verified by taking its derivative. This form is quite similar to equation 2.12′, except that the Liouville operator L is replaced by the Hamiltonian H. Note that e^{-iHt} (or e^{iHt}) is a unitary operator, in agreement with equation 3.12:

(e^{-iHt})† = e^{iHt} = (e^{-iHt})⁻¹

This results from the fact that H is Hermitian. Therefore, in both classical and quantum mechanics, the time evolution is given in terms of a unitary transformation. Time evolution corresponds merely to a change of coordinates! If we use the expansion (in equation 3.3) of Ψ in terms of the eigenfunctions of the Hamiltonian, we obtain from equation 3.20 the explicit relation

Ψ(t) = Σₖ cₖ e^{-iEₖt} uₖ    (3.21)

According to our rule, the probability of finding the system in the state uₖ will be given by

|e^{-iEₖt} cₖ|² = |cₖ|²    (3.22)

The important point is that this probability is time independent. In the representation in which the energy is diagonal, nothing really "happens." The wave function simply "rotates" in the Hilbert space, and the probabilities are constant in time. Quantum mechanics may be applied to systems formed by many particles. Here the concept of indistinguishability plays a very important role. Consider, for example, a collection of N electrons. Ψ will now depend on all the N electrons. A permutation of, say, electrons 1 and 2 should not change the physical situation. Therefore, we have to require (remember that Ψ is a probability amplitude and that probabilities are given by |Ψ|²)

|Ψ(1, 2)|² = |Ψ(2, 1)|²

so that under the permutation the wave function either remains unchanged or changes sign. These two ways correspond to the two basic quantum statistics: the Bose statistics, when the wave function does not change under the permutation of the two particles, and the Fermi statistics, when it does. The type of statistics seems to be quite a fundamental property of matter, because all known elementary particles obey one or the other. Protons, electrons, and so forth are fermions; photons and some unstable particles, like mesons, are bosons. One of the great achievements of quantum mechanics is the discovery of this distinction between fermions and bosons, which shows up at all levels of the structure of matter. The behavior of metals, for example, could not be understood without Fermi statistics, as applied to electrons, and the behavior of liquid helium is a beautiful illustration of Bose statistics. The problem of Bose or Fermi statistics in connection with the decay of quantum states is discussed in the next section.
Using the formalism of quantum mechanics, we can calculate the average value ⟨A⟩ of some dynamic quantity A whose eigenvalues are a₁, a₂, …. By definition, an average value is the sum of all the values a₁, a₂, … that the variable can take, each being multiplied by the corresponding probability:

⟨A⟩ = Σₙ aₙ |cₙ|²

Using the definition of the eigenfunctions uₙ, this can also be written as

⟨A⟩ = (Ψ | AΨ)

The important point is that the average value ⟨A⟩ is quadratic in the probability amplitude. This is in contrast with equation 2.14, which is linear in the Gibbs distribution function ρ. Note also that, in a sense, even a system characterized by a well-defined wave function Ψ already corresponds to an ensemble. Indeed, if we expand Ψ, for example in terms of the eigenfunctions of the Hamiltonian (see equation 3.3), and measure the energy, we may find the eigenvalues E₁, E₂, …, each with the probability |c₁|², |c₂|², …. This seems to be an unavoidable consequence of Born's statistical interpretation of quantum mechanics. As a result, quantum mechanics can only make predictions about "repeated" experiments. In this sense, the situation is similar to that of a classical ensemble of dynamical systems described by a Gibbs ensemble. Yet, in quantum mechanics there is also a clear-cut difference between pure cases and mixtures (see the section titled Hamiltonian Equations of Motion and Ensemble Theory in Chapter 2). To formulate this difference, it is useful to introduce the quantum analogue of the Gibbs distribution function ρ. To do so, we must first introduce a set of complete orthonormal functions |n⟩ such that, as in equations 3.4 and 3.7,

(n | n′) = δₙₙ′,  1 = Σₙ |n⟩⟨n|    (3.27)

We then expand Ψ in terms of the functions |n⟩ and use equation 3.6. We obtain

⟨A⟩ = (Ψ | AΨ) = Σₙ,ₙ′ (Ψ | n)(n | An′)(n′ | Ψ)    (3.28)

In classical mechanics, the averaging operation includes an integration over phase space (see equation 2.14). We now introduce the trace operation, which plays a similar role in quantum mechanics,

tr A = Σₙ (n | An)    (3.29)

and the density operator ρ, defined by

ρ = | Ψ⟩⟨Ψ |    (3.30)

Again, this definition uses Dirac's "bracket" notation (see equation 3.7). Operators act on elements of the Hilbert space. For example, ρ acting on |Φ⟩ will be given, according to definition 3.30, by

ρ | Φ⟩ = | Ψ⟩⟨Ψ | Φ⟩

The reason for introducing definition 3.30 is that we may now obtain for the average ⟨A⟩, as given by equation 3.28, the compact expression

⟨A⟩ = tr(Aρ)    (3.31)

which exactly corresponds to the classical form (2.14), the integration over phase space being replaced by the trace operation. Alternatively, expression 3.31 can be written as

⟨A⟩ = tr(ρA)    (3.31′)
Therefore the diagonal elements of ρ may be viewed as the probabilities of finding the value aₙ of the observable. Note that the trace of ρ is unity, as we have (see equations 3.27 and 3.30)

tr ρ = Σₙ (n | ρn) = Σₙ |(n | Ψ)|² = 1

This is the quantum mechanical analogue of equation 2.9. As in classical mechanics, the interest of the ensemble approach is that we can consider more general situations, for example, corresponding to a weighted superposition of various wave functions. Then equation 3.30 becomes

ρ = Σₖ pₖ | Ψₖ⟩⟨Ψₖ |    (3.32)

in which pₖ represents the weights corresponding to the various wave functions Ψₖ that make up the ensemble. The form of the density operator ρ permits us to make a clear-cut distinction between pure cases, corresponding to a single wave function, and mixtures. In the first case, ρ is represented by equation 3.30; in the second, by equation 3.32. This leads to a simple formal distinction. For pure cases,

ρ² = ρ

and ρ is then an idempotent operator. This is not so for mixtures. The distinction between pure cases and mixtures is necessary to formulate the measurement problem, as will be seen in the later section titled The Measurement Problem. Once we know the time variation (equation 3.20) of the wave function through the solution of the Schrödinger equation, we immediately obtain (from equation 3.30) the time variation of the density ρ:

ρ(t) = e^{-itH} ρ(0) e^{itH}    (3.34)

By taking the derivative, this leads to

i ∂ρ/∂t = Hρ − ρH    (3.35)

This equation is valid both for pure cases and for mixtures. We obtain exactly the same type of formula that we derived in classical mechanics (formula 2.11). The only difference is that instead of the Poisson bracket we now have the commutator of H with ρ. To emphasize the similarity between these two situations, we shall write the evolution equation (3.35) and its formal solution again in the form

i ∂ρ/∂t = Lρ    (3.36)

with

ρ(t) = e^{-iLt} ρ(0)    (3.37)

including the Liouville operator, which now has a new meaning. This will permit us to treat both classical and quantum systems by the same methods in Chapter 7. Let us have another look at the average value of a mechanical quantity and its time variation. We have, using equations 3.31 and 3.34,

⟨A(t)⟩ = tr(A ρ(t)) = tr(A e^{-itH} ρ e^{itH}) = tr((e^{itH} A e^{-itH}) ρ)    (3.38)

because the definition of the trace operator (equation 3.29) implies that (see expression 3.31′)

tr(AB) = tr(BA)

Although operators generally do not commute (see the section on operators and complementarity earlier in this chapter), they do so when implied in the trace operation. We have also written ρ instead of ρ(t = 0). We may therefore obtain the average value ⟨A(t)⟩ in two equivalent
ways. In the first, the density changes in time and A remains constant, whereas, in the second. we consider that the density remains-constant but . -the mechanical quantity A changes according to equat~on /: 3.3
This second description is called the Heisenberg representation. It differs from the Schrodinger representation in that, instead of the mechanical quantities like A being considered time-independent, the wave function Y or p is time-independent. By taking the derivative with respect to time, equation 3.39 leads to (see equations 3.35 and 3.36)
ways between possible excitations in a many-particle system. For this reason, a number of physicists starting with von Neumann himself have tried to define macro-observables that would give an approximate description of dynamics and include the approach to equilibrium. Once again, we encounter the idea that approach to equilibrium and more generally the concept of irreversibility correspond to an approximation of dynamics. It will be seen in Chapter 7 that we can consider this problem quite differently: irreversibility corresponds indeed to an extension of dynamics, possible when supplementary conditions (such as weak stability in classical dynamics) are satisfied.
$i\,\frac{\partial A}{\partial t} = AH - HA$
Note that it is of the same form as the Liouville equation (3.36), except that L is replaced by −L. This will be used in Chapter 7. A similar distinction exists in classical dynamics. Equation 2.5 corresponds to the Heisenberg equation and equation 2.11 to the Schrodinger equation. These two equations differ by the sign of the Poisson bracket operator L as defined in equation 2.13.
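These relations are easy to verify numerically. The following sketch is not part of the original text; it assumes ħ = 1 and an arbitrarily chosen two-level Hamiltonian, and checks that a pure-state density operator is idempotent whereas a mixture is not, and that the Schrodinger and Heisenberg representations give the same average ⟨A(t)⟩:

```python
import numpy as np

# A sketch, not from the text: hbar = 1 and an arbitrary two-level system.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)      # Hermitian Hamiltonian

def U(t):
    # e^{-iHt} via the eigendecomposition H = V diag(E) V^dagger
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# Pure case: rho = |psi><psi| (equation 3.30) is idempotent, rho^2 = rho.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
print(np.allclose(rho_pure @ rho_pure, rho_pure))        # True

# Mixture (equation 3.32): weighted sum of projectors, no longer idempotent.
phi = np.array([1.0, 0.0], dtype=complex)
rho_mix = 0.5 * rho_pure + 0.5 * np.outer(phi, phi.conj())
print(np.allclose(rho_mix @ rho_mix, rho_mix))           # False
print(np.isclose(np.trace(rho_mix).real, 1.0))           # trace stays unity

# Equivalence of the two representations for <A(t)>:
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
t = 0.7
rho_t = U(t) @ rho_mix @ U(t).conj().T                   # Schrodinger: rho moves
A_t = U(t).conj().T @ A @ U(t)                           # Heisenberg: A moves
print(np.isclose(np.trace(A @ rho_t).real,
                 np.trace(A_t @ rho_mix).real))          # True
```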
Equilibrium Ensembles
The concept of equilibrium ensembles that was introduced for classical systems in Chapter 2 may be easily extended to quantum systems. Yet there are interesting differences between classical and quantum dynamical systems. For example, quantum ergodic systems can be shown to imply that the systems are not degenerate (to each eigenvalue of the energy corresponds a single eigenfunction). This result, which was established by von Neumann (see Farquhar, 1964), very much limits the interest of the ergodic approach, because most quantum systems of interest are degenerate. For example, a given energy may be partitioned in many ways between possible excitations in a many-particle system. For this reason, a number of physicists, starting with von Neumann himself, have tried to define macro-observables that would give an approximate description of dynamics and include the approach to equilibrium. Once again, we encounter the idea that approach to equilibrium and, more generally, the concept of irreversibility correspond to an approximation of dynamics. It will be seen in Chapter 7 that we can consider this problem quite differently: irreversibility corresponds indeed to an extension of dynamics, possible when supplementary conditions (such as weak stability in classical dynamics) are satisfied.

The Measurement Problem
Many conceptual problems refer to the very formulation of quantum mechanics. For example: Is the departure from classical causality really unavoidable? Can we not introduce supplementary "hidden" variables so as to make the formalism of quantum mechanics more similar to that of classical mechanics? These questions are beautifully reviewed in a monograph by Bernard d'Espagnat (1976). In spite of the effort expended in attempting to solve such problems, no marked success has been achieved until now. Our attitude will be different: we accept the quantum mechanical formalism, but we ask how far we can extend it without marked modifications. This question arises when the measurement problem mentioned early in this chapter is considered. Suppose that we start with a wave function Ψ and the corresponding density ρ as given by equation 3.30:

$\rho = |\Psi\rangle\langle\Psi|, \qquad \Psi = \sum_n c_n u_n \qquad (3.41)$
By measuring a dynamical quantity, say the energy, of which the $u_n$ are the eigenfunctions, we obtain various eigenvalues $E_1, E_2, \ldots$, with probabilities $|c_n|^2$. But once we have obtained a given eigenvalue, say $E_k$, we know that the system is necessarily in the state $u_k$. At the end of the measurement we have a mixture of the states $u_1, u_2, \ldots, u_k, \ldots$, with probabilities $|c_1|^2, |c_2|^2, \ldots, |c_k|^2, \ldots$. In accordance with equation 3.32, the corresponding density ρ is now

$\rho = \sum_k |c_k|^2\,|u_k\rangle\langle u_k| \qquad (3.42)$
which is quite different from equation 3.41. The transformation from equation 3.41 to equation 3.42, often called the reduction of the wave packet, does not belong to the type of unitary transformations (equation 3.20) described by the solution of the Schrodinger equation. Von Neumann (1955) has expressed this difference in a most elegant way by showing that we may define an "entropy" that increases when we go from a pure state to a mixture. In this way the problem of irreversibility now appears at the very heart of physics. But how is this possible? We have seen that Schrodinger's equation is linear (see equation 3.18). A pure state should therefore remain a pure state. If indeed the "fundamental level" of description is the Schrodinger equation, there is no easy way out. Many suggestions are given in d'Espagnat's book (1976), none quite convincing. The solution proposed by von Neumann himself (1955) and advocated by others, including Eugene Wigner, is that we have to leave the field of physics and invoke the active role of the observer. This is in line with the general philosophy already mentioned that irreversibility is not in nature, but in us. In the present case, it is the perceiving subject, engaged in an act of observation, who decides that a transition between a pure state and a mixture occurs. It is easy to criticize this point of view, but again, how do we introduce irreversibility into a "reversible" world?

FIGURE 3.2
A symmetric potential.
Others go even further; they claim that there is no reduction of the wave packet, at the price that our universe is continuously splitting into a stupendous number of branches as the result of measurementlike interactions! Although such extreme views will not be discussed here, it should be noted that the very existence of such concepts is proof that physicists are not positivists. They are not satisfied with giving rules that simply "work"! We shall return to this problem in Chapter 8. Let us only note here that the distinction between pure states and mixtures, which is formally very clear in quantum mechanics, can in fact lie beyond any finite accuracy of measurement. For example, consider a symmetric potential with two minima such as that represented in Figure 3.2. Suppose that $|u_1\rangle$ corresponds to a wave function centered in region "a," and $|u_2\rangle$ to one centered in region "b." The pure state and the mixture then differ by terms involving the product $|u_1\rangle\langle u_2|$. However, this product can be extremely small when the potential barrier takes macroscopic dimensions. In other words, wave functions may become "unobservables," somewhat like the trajectories in problems involving weak stability considered in Chapter 2. This remark will play an important role in the theory of quantum irreversible processes presented in Chapters 7 through 9.
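The smallness of the cross term can be made concrete with a toy calculation that is not in the original text: if the two localized states are modeled as unit-width Gaussian wave functions centered in the two wells (an illustrative assumption, not the book's potential), their overlap falls off as exp(−d²/4) with the separation d:

```python
import numpy as np

# Toy model (illustrative assumption): the states |u1>, |u2> localized in
# wells "a" and "b" are unit-width Gaussians separated by a distance d.
# For normalized Gaussians exp(-(x -+ d/2)**2 / 2), the exact overlap
# <u1|u2> is exp(-d**2 / 4).
def overlap(d):
    return np.exp(-d**2 / 4.0)

for d in [1.0, 5.0, 10.0, 50.0]:
    print(d, overlap(d))
# Already at d = 50 the overlap is of order 10**-272: the cross term
# |u1><u2| that separates a pure superposition from a mixture lies far
# below any finite accuracy of measurement.
```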
Decay of Unstable Particles

Before discussing the decay of unstable particles, a distinction between "small" and "large" systems should be made clear. The transition from
a discrete to a continuous spectrum was treated in Chapter 2 in the section on operators. A general theorem in quantum mechanics states that quantum mechanical systems confined to a finite volume have a discrete spectrum. To obtain a continuous spectrum, we must therefore go to the limit of an infinite system. This is in contrast with classical systems in which, as noted in Chapter 2, the Liouville operator may already have a continuous spectrum for finite systems. The difference comes from the fact that the classical Liouville operator acts on the phase space, which involves velocities (or momenta) that are always continuous variables, whereas the Hamiltonian operator acts on the coordinate space (or on the momentum space, but not on both; see equations 3.1 and 3.2). A discrete spectrum for H means periodic motion. This is no longer so when the spectrum becomes continuous. Let us therefore see how the transition to a continuous spectrum changes the time evolution. Instead of the sum in equation 3.21, an integral must now be considered. Using the eigenvalue of the energy as the independent variable, we may write this integral in the form

$\Psi(t) = \int_0^\infty dE\;c(E)\,u_E\,e^{-iEt} \qquad (3.43)$
An important point is that this integral has to be taken from a finite value (in this case, a finite value equal to zero) to infinity. Indeed, if the Hamiltonian could take arbitrarily large negative values, the system would be unstable; therefore some lower limit must exist. Instead of the periodic variation represented by equation 3.21, we now obtain a Fourier integral, which may represent a much larger type of variation in time. In principle, this is welcome. We may, for example, apply this formula to the decay of an unstable particle or to the deactivation of an excited atomic level. Then, by introducing appropriate initial conditions, we would like to find an exponential decay for the probability

$|\langle \Psi(0)\,|\,\Psi(t)\rangle|^2 \approx e^{-t/\tau} \qquad (3.44)$
in which τ is the lifetime. This is nearly, but not exactly, so. In fact, the exponential formula 3.44 can never be exact. As a consequence of a celebrated theorem, the Paley-Wiener theorem (1934), a Fourier integral of the form 3.43, in which the integration is taken from a finite value to
infinity, always decays more slowly than an exponential in the limit of long times. In addition, equation 3.43 leads to short-time deviations from the exponential law. It is true that numerous theoretical investigations have shown that the deviations from the exponential are too small to be measured at present. It is important that experimental and theoretical investigations be continued.

The fact that deviations from the exponential law of decay exist leads to serious questions about the meaning of indistinguishability. Suppose that we prepared a beam of unstable particles, say mesons, and let it decay, and that later we prepared another group of mesons. Strictly speaking, these two groups of mesons, prepared at two different times, would have different decay laws, and we could distinguish between the two groups just as we can distinguish between old and young women. This seems somewhat strange. If we have to choose, I believe we should keep indistinguishability as a basic principle. Of course, if we restricted the concept of elementary particles to stable particles, as, for example, Wigner has suggested on many occasions, then this question would not arise. But it seems difficult to restrict the existing schemes for elementary particles to stable ones. It seems fair to say that the scientific community feels more and more that some generalization of quantum mechanics is necessary to incorporate unstable particles.

In fact, the difficulty is even greater. We would like to associate elementary particles with well-defined properties in spite of their interactions. To take a concrete case, consider the interaction of matter with light, of electrons with photons. Suppose that we could diagonalize the corresponding Hamiltonian. We would obtain some "units" similar to the normal modes of a solid, which by definition no longer interact. Certainly these units cannot be the physical electrons or photons that we see around us. These objects interact, and it is precisely because of this interaction that we can study them. But how do we incorporate interacting but well-defined objects into a Hamiltonian description? As mentioned before, in the representation in which the Hamiltonian is diagonal the objects are well defined, but there are no interactions; in the other representations the objects are not well defined. One feels that a way out must be a closer look at what we really have to eliminate and what to keep by a suitable transformation. As will be seen in Chapter 8, this problem is closely related to the basic distinction between reversible and irreversible processes.
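A rough numerical sketch of this long-time behavior (not in the original text) can be obtained by assuming a Lorentzian energy distribution cut off at E = 0, a common illustrative model, with ħ = 1:

```python
import numpy as np

# Illustrative model (not the book's example): a Lorentzian line shape
# truncated at E = 0, since the energy spectrum is bounded from below.
E0, Gamma = 10.0, 0.5            # resonance energy and half-width, hbar = 1
E, dE = np.linspace(0.0, 200.0, 400000, retstep=True)
rho = (Gamma / np.pi) / ((E - E0)**2 + Gamma**2)
rho /= rho.sum() * dE            # normalize the distribution on the grid

def survival(t):
    # p(t) = |integral dE rho(E) exp(-iEt)|**2  (the Fourier integral 3.43)
    amp = (rho * np.exp(-1j * E * t)).sum() * dE
    return abs(amp)**2

for t in [1.0, 5.0, 20.0, 100.0]:
    print(t, survival(t), np.exp(-2 * Gamma * t))
# Intermediate times track the exponential exp(-2*Gamma*t); at long times
# the survival probability decays only as a power law (here ~ 1/t**2),
# more slowly than any exponential, as the Paley-Wiener theorem requires
# for a spectrum bounded from below.
```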
Is Quantum Mechanics Complete?
In light of the discussions presented, I believe that the answer to this question can be safely formulated as "no." Quantum mechanics was directly inspired by the situation in atomic spectroscopy. The period of "rotation" of an electron around the nucleus is of the order of 10⁻¹⁶ second, whereas a typical lifetime of an excited state is of the order of 10⁻⁹ second. Therefore, an excited electron rotates about 10,000,000 times before it falls down to the ground state. As was well understood by Bohr and Heisenberg, it was this fortunate circumstance that made quantum mechanics so successful. But today we can no longer be satisfied with approximations that treat the nonperiodic part of the time evolution as a small, insignificant perturbation effect. Here again, as in the problem of measurement, we are confronted with the concept of irreversibility.

With his remarkable physical insight, Einstein noticed (1917) that quantization in the form used at that time (i.e., in the Bohr-Sommerfeld theory) was valid only for quasi-periodic motions (described in classical mechanics by integrable systems). Certainly fundamental progress has been realized since. Yet the problem remains. We are faced with the very meaning of idealizations in physics. Should we consider quantum mechanics of systems in a finite volume (and therefore with a discrete energy spectrum) to be the basic form of quantum mechanics? Then problems such as decay, lifetimes, and so forth must be considered to be related to supplementary "approximations" involving the limit to infinite systems to obtain a continuous spectrum. Or, on the contrary, should we argue that nobody has ever seen an atom that would not decay when brought into an excited level? The physical "reality" then corresponds to systems with continuous spectra, whereas standard quantum mechanics appears only as a useful idealization, as a simplified limiting case. This is much more in line with the view that elementary particles are expressions of basic fields (such as photons with respect to the electromagnetic field) and that fields are in essence not local, because they extend over macroscopic regions of space and time.

Finally, it is interesting to note that quantum mechanics has introduced statistical features into the basic description of physics. This is most clearly expressed in terms of the Heisenberg uncertainty relations. It
is important to note that no similar uncertainty relation exists for time and energy (i.e., the Hamiltonian operator). As a result of Schrodinger's equation, which relates the time change to $H_{op}$, such an uncertainty relation could be understood as a complementarity between time and change, between being and becoming. But time is just a number (not an operator) in quantum mechanics, as it is in classical mechanics. We shall see that there are circumstances, implying the limit to a continuous spectrum, in which such a supplementary uncertainty relation may be established between the Liouville operator and time, even in classical mechanics. When this is so, time acquires a new supplementary meaning: it becomes associated with an operator. Before we take up this fascinating problem again, let us consider the "complementary" part of physics; that is, the physics of becoming.
Part II

THE PHYSICS OF BECOMING
EMERGENCE OF WAVE STRUCTURES IN FIELDS OF AMOEBAE OF THE CELLULAR SLIME MOLD Dictyostelium discoideum.

When mature slime-mold amoebae have exhausted their food resources and become starved, they secrete cyclic AMP, an attractant that induces their aggregation. The attractant is secreted in brief pulses, initially by only a few amoebae that then become the centers to which other amoebae are attracted. The frequency of these pulses is one every five minutes at first, increasing to one every two minutes as aggregation proceeds. The nine frames shown here were taken at ten-minute intervals. The initial signals, which decay within a few seconds of their creation, are passed on to amoebae nearby, and they in turn pass them on to amoebae further away, and so forth. The pulses are relayed outward from each center at about one millimeter every three minutes. Besides relaying the pulses, the amoebae respond to these signals by moving a short distance toward a signalling center each time a pulse reaches them. This discontinuous movement has been visualized in these photographs by the use of a finely adjusted dark-field optical system that reveals the moving amoebae (which are elongated) as bright bands and the stationary amoebae (which are rounded up) as dark bands. Waves can be either concentric, with a period controlled by pacemaker amoebae, or spiral, with periodicity governed by the refractory interval; that is, the length of time following a response during which the amoebae will not respond to further stimulation. The larger circles shown are approximately ten millimeters in diameter. [Unpublished photographs by P. C. Newell, F. M. Ross, and F. C. Caddick. Further details of the signalling system can be found in "Aggregation and Cell Surface Receptors in Cellular Slime Molds" by P. C. Newell, in Microbial Interactions, Receptors and Recognition, Series B, J. L. Reissig, ed. (Chapman & Hall, 1977), pp. 1-57.]
Chapter 4

THERMODYNAMICS
Chapters 2 and 3 of this book dealt with the physics of time corresponding to reversible phenomena, because both the Hamilton and the Schrodinger equations are invariant with respect to the substitution t → −t. Such situations correspond to what I have called the physics of being. We now turn to the physics of becoming and, specifically, to irreversible processes as described by the second law of thermodynamics. In this chapter and in the two that follow, the point of view is strictly phenomenological. What the relation with dynamics may be will not be investigated; however, methods will be outlined that successfully describe unidirectional time phenomena over a wide range, from simple irreversible processes such as heat conduction to complicated processes involving self-organization. Since its formulation, the second law of thermodynamics has emphasized the unique role of irreversible processes. The title of William Thomson's (Lord Kelvin's) paper, in which he presented the general
formulation of the second law for the first time, was "On the Universal Tendency in Nature to the Dissipation of Mechanical Energy" (Thomson 1852). Clausius also used a cosmological language: "The entropy of the universe tends to a maximum" (Clausius 1865). However, it must be recognized that the formulation of the second law seems to us today to be more a program than a well-defined statement, because no recipe was formulated by either Thomson or Clausius to express the entropy change in terms of observable quantities. This lack of clarity in its formulation was probably one of the reasons why the application of thermodynamics became rapidly restricted to equilibrium, the end state of thermodynamic evolution. For example, the classic work of Gibbs, which was so influential in the history of thermodynamics, carefully avoids every incursion into the field of nonequilibrium processes (Gibbs 1875). Another reason may well have been that irreversible processes are nuisances in many problems: for example, they are obstacles to obtaining the maximum yield in thermal engines. Therefore the aim of engineers constructing thermal engines has been to minimize losses due to irreversible processes. It is only recently that a complete change in perspective has arisen, and we begin to understand the constructive role played by irreversible processes in the physical world.

Of course, the situation corresponding to equilibrium remains the simplest one. It is in this case that the entropy depends on the minimum number of variables. Let us briefly examine some classical arguments. Consider a system that exchanges energy, but not matter, with the outside world. Such a system is called a closed system, in contrast with an open system, which exchanges matter as well as energy with the outside world. Suppose that this closed system is in equilibrium. The entropy production then vanishes. On the other hand, the change of the macroscopic entropy is then defined by the heat received from the outside world. By definition,

$dS = \frac{dQ}{T} \qquad (4.1)$

in which T is a positive quantity called the absolute temperature. Let us combine this relation with the first law of thermodynamics, as it
is valid for such a simple system (for details, see Prigogine 1967):

$dE = dQ - p\,dV \qquad (4.2)$
in which E is the energy, p the pressure, and V the volume. This formula expresses the fact that the energy exchanged by the system with the outside world during a small time interval dt is due to the heat received by the system plus the mechanical work performed at its boundaries. Combining equation 4.1 with equation 4.2, we obtain the total differential of the entropy in the variables E and V:

$dS = \frac{dE}{T} + \frac{p}{T}\,dV \qquad (4.3)$
Gibbs has generalized this formula to include variations in composition. Let us call $n_1, n_2, n_3, \ldots$, the numbers of moles of the various components. We may then write

$dS = \frac{dE}{T} + \frac{p}{T}\,dV - \sum_\gamma \frac{\mu_\gamma}{T}\,dn_\gamma \qquad (4.3')$
The quantities $\mu_\gamma$ are by definition the chemical potentials introduced by Gibbs, and equation 4.3' is called the Gibbs formula for the entropy. The chemical potentials are themselves functions of the thermodynamic variables, such as temperature, pressure, concentration, and so forth. They take an especially simple form for so-called ideal systems,* in which they depend in a logarithmic way on the mole fractions $N_\gamma = n_\gamma / \sum_{\gamma'} n_{\gamma'}$:

$\mu_\gamma = \zeta_\gamma(p, T) + RT \log N_\gamma \qquad (4.4)$
in which R is the gas constant (equal to the product of Boltzmann's constant k and Avogadro's number) and $\zeta_\gamma(p, T)$ is some function of pressure and temperature.
* Examples of ideal systems are dilute solutions and perfect gases.
Instead of entropy, other thermodynamic potentials are often introduced, such as the Helmholtz free energy, defined by

$F = E - TS \qquad (4.5)$
It is then easy to show that the law of increase of entropy, valid for isolated systems, is replaced by the law of decrease of free energy for systems that are maintained at a given temperature. The structure of equation 4.5 reflects a competition between the energy E and the entropy S. At low temperatures the second term is negligible, and the minimum value of F imposes structures corresponding to minimum energy and generally to low entropy. At increasing temperatures, however, the system shifts to structures of higher and higher entropy. Experience confirms these considerations, because at low temperatures we find the solid state, characterized by an ordered structure of low entropy, whereas at higher temperatures we find the gaseous state of high entropy. The formation of certain types of ordered structures in physics is a consequence of the laws of thermodynamics applied to closed systems at thermal equilibrium.

In Chapter 1, the simple interpretation of entropy in terms of complexions, given by Boltzmann, was described. Let us apply this formula to a system whose energy levels are given by $E_1, E_2, E_3, \ldots$. By looking for the occupation numbers that make the number of complexions (equation 1.9) a maximum for given values of the total energy and number of particles, we obtain Boltzmann's basic formula for the probability, $P_i$, of the occupation of a given energy level, $E_i$:

$P_i \sim e^{-E_i/kT} \qquad (4.6)$
in which k is, as in equation 1.10, Boltzmann's constant, T the temperature, and $E_i$ the energy of the chosen level. Suppose that we consider a simplified system with only three energy levels. Then Boltzmann's formula (equation 4.6) tells us the probability of finding a molecule in each of the three states at equilibrium. At very low temperatures, T → 0, the only significant probability is that corresponding to the lowest energy level, and we come to the scheme shown in Figure 4.1, in which virtually all the molecules are in the lowest energy state, $E_1$, because

$e^{-E_1/kT} \gg e^{-E_2/kT} \gg e^{-E_3/kT} \qquad (4.7)$

FIGURE 4.1
Low-temperature distribution: only the lowest energy level is appreciably populated.

At sufficiently high temperatures, on the other hand, the three exponentials become comparable,

$e^{-E_1/kT} \approx e^{-E_2/kT} \approx e^{-E_3/kT} \qquad (4.8)$

and therefore the three states are approximately equally populated (see Figure 4.2).

FIGURE 4.2
High-temperature distribution: excited states as well as the ground state are populated.
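The two limits can be read directly off equation 4.6. A minimal numerical sketch (not in the original; the three energy values and k = 1 are illustrative assumptions):

```python
import numpy as np

# Three illustrative energy levels; Boltzmann's constant k set to 1.
E = np.array([0.0, 1.0, 2.0])

def populations(T):
    # Equation 4.6: P_i proportional to exp(-E_i / kT), normalized to unity
    w = np.exp(-E / T)
    return w / w.sum()

print(populations(0.05))    # low T: virtually all molecules in E_1 (Figure 4.1)
print(populations(100.0))   # high T: the three levels nearly equal (Figure 4.2)
```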
Boltzmann's probability distribution (equation 4.6) gives us the basic principle that governs the structure of equilibrium states. It may appropriately be called Boltzmann's order principle. It is of paramount importance, as it is capable of describing an enormous variety of structures including, for example, some as complex and delicately beautiful as snow crystals (Figure 4.3).

Boltzmann's order principle explains the existence of equilibrium structures. However, the question can be asked: Are they the only type of structures that we see around us? Even in classical physics we have many phenomena in which nonequilibrium may lead to order. When we apply a thermal gradient to a mixture of two different gases, we observe an enrichment of one of the components at the hot wall, whereas the other concentrates at the cold wall. This phenomenon, already observed in the nineteenth century, is called thermal diffusion. In the steady state, the entropy is generally lower than it would be in a uniform mixture. This shows that nonequilibrium may be a source of order. This observation initiated the point of view developed by the Brussels school. (See Prigogine and Glansdorff, 1971, for a historical survey.)
FIGURE 4.4
Mosaic model of a multienzyme reaction. Substrate S₁ is changed by successive modifications into the product P by the action of "captive" enzymes.
FIGURE 4.3
Typical snow crystals. (Courtesy of National Oceanic and Atmospheric Administration photographs.)
The role of irreversible processes becomes much more marked when we turn to biological or social phenomena. Even in the simplest cells, the metabolic function includes several thousand coupled chemical reactions and, as a consequence, requires a delicate mechanism for their coordination and regulation. In other words, we need an extremely sophisticated functional organization. Furthermore, the metabolic reactions require specific catalysts, the enzymes, which are large molecules possessing a spatial organization, and the organism must be capable of synthesizing these substances. A catalyst is a substance that accelerates a certain chemical reaction but is not itself used up in the reaction. Each enzyme, or catalyst, performs one specific task; if we look at the manner in which the cell performs a complex sequence of operations, we find that it is organized along exactly the same lines as a modern assembly line (see Figure 4.4). (See Welch, 1977.)

The overall chemical modification is broken down into successive elementary steps, each of which is catalyzed by a specific enzyme. The initial compound is labeled S₁ in the diagram; at each membrane, an "imprisoned" enzyme performs a given operation on the substance and then sends it on to the next stage. Such an organization is quite clearly not the result of an evolution toward molecular disorder! Biological order is both architectural and functional; furthermore, at the cellular and supercellular levels, it manifests itself by a series of structures and coupled functions of growing complexity and hierarchical character. This is contrary to the concept of evolution as described in the thermodynamics of isolated systems, which leads simply to the state of maximum
number of complexions and, therefore, to "disorder." Do we then have to conclude, as did Roger Caillois (1976), that "Clausius and Darwin cannot both be right," or should we introduce, with Herbert Spencer (1870), some new principle of nature, such as the "instability of the homogeneous" or "a differentiating force, creator of organization"? The unexpected new feature is that nonequilibrium may, as will be seen in this chapter, lead to a new type of structure, the dissipative structures, which are essential in the understanding of coherence and organization in the nonequilibrium world in which we live.
Linear Nonequilibrium Thermodynamics

To make the transformation from equilibrium to nonequilibrium, we must calculate the entropy production explicitly. We can no longer be satisfied with the simple inequality, because we want to relate the entropy production to well-defined physical processes. A simple evaluation of the entropy production becomes possible if we assume that entropy outside equilibrium depends on the same variables, E, V, $n_\gamma$, as it does in equilibrium. (For nonuniform systems we would have to assume that the entropy density depends on the energy density and the local concentrations.) As an example, let us calculate the entropy produced by chemical reactions in closed systems. Consider a reaction such as

$A + B \rightleftharpoons X + Y \qquad (4.10)$

The change due to the reaction in the number of moles of component X in time dt is equal to that of Y and opposite that of A or B:

$dn_X = dn_Y = -dn_A = -dn_B = d\xi \qquad (4.11)$

in which ξ is, by definition, the degree of advancement of the chemical reaction. In general, chemists introduce an integer $\nu_\gamma$ (positive or negative), called the stoichiometric coefficient of component γ, into the chemical reaction; we may then write

$dn_\gamma = \nu_\gamma\,d\xi \qquad (4.12)$

Taking into account this expression as well as the Gibbs formula (4.3'), we immediately obtain

$dS = \frac{dE + p\,dV}{T} + \frac{A}{T}\,d\xi \qquad (4.13)$

in which A is the affinity of the chemical reaction (first introduced by Theophile De Donder, 1936), which is related to the chemical potentials $\mu_\gamma$ by

$A = -\sum_\gamma \nu_\gamma\,\mu_\gamma \qquad (4.14)$

The first term in equation 4.13 corresponds to an entropy flow (see equation 4.1), whereas the second term corresponds to the entropy production:

$d_i S = \frac{A}{T}\,d\xi \ge 0 \qquad (4.15)$

Using definition 4.12, we find that the entropy production per unit time takes the remarkable form

$\frac{d_i S}{dt} = \frac{A}{T}\,\frac{d\xi}{dt} = \frac{A}{T}\,v \ge 0 \qquad (4.16)$

It is a bilinear form in the rate v of the irreversible process (here the chemical reaction) and the corresponding force (here A/T). This type of calculation can be generalized: starting with the Gibbs formula (4.3'), one obtains

$\frac{d_i S}{dt} = \sum_j J_j X_j \ge 0 \qquad (4.17)$

in which $J_j$ represents the rates of the various irreversible processes taking place (chemical reactions, heat flow, diffusion, etc.) and $X_j$ the corresponding generalized forces (affinities, gradients of temperature and of chemical potentials, etc.). This is the basic formula of the macroscopic thermodynamics of irreversible processes.

It should be emphasized that supplementary assumptions have been used to derive the explicit expression (4.17) for the entropy production. The validity of the Gibbs formula (4.3') can only be established in some neighborhood of equilibrium. This neighborhood defines the region of "local" equilibrium. At thermodynamic equilibrium,

$J_j = 0 \qquad \text{and} \qquad X_j = 0 \qquad (4.18)$
for all irreversible processes simultaneously. It is therefore quite natural to assume, at least for conditions near equilibrium, linear homogeneous relations between the flows and the forces. Such a scheme automatically includes empirical laws such as Fourier's law, which says that the flow of heat is proportional to the gradient of temperature, and Fick's law for diffusion, which states that the flow of diffusion is proportional to the gradient of concentration. In this way, we obtain the linear thermodynamics of irreversible processes, characterized by the relations

$J_i = \sum_j L_{ij}\,X_j \qquad (4.19)$
Linear thermodynamics of irreversible processes is dominated by two important results. The first is expressed by the Onsager reciprocity relations (1931), which state that

$L_{ij} = L_{ji} \qquad (4.20)$

When the flow $J_i$, corresponding to the irreversible process i, is influenced by the force $X_j$ of the irreversible process j, then the flow $J_j$ is also influenced by the force $X_i$ through the same coefficient $L_{ij}$. The importance of the Onsager relations resides in their generality. They have been submitted to many experimental tests. Their validity showed, for the first time, that nonequilibrium thermodynamics leads, as does equilibrium thermodynamics, to general results independent of any specific molecular model. The discovery of the reciprocity relations can be considered to have been a turning point in the history of thermodynamics.

Heat conductivity in crystals affords a simple application of Onsager's theorem. According to the reciprocity relations, the heat conductivity tensor must be symmetrical whatever the symmetry of the crystal. This remarkable property had in fact already been established experimentally by Woldemar Voigt in the nineteenth century and corresponds to a special case of the Onsager relations. The proof of Onsager's relations is given in textbooks (see Prigogine 1967). The important point for us is that they correspond to a general property independent of any molecular model. This is the feature that makes them a thermodynamic result.

Another example to which Onsager's theorem applies is a system composed of two vessels connected by means of a capillary or a membrane. A temperature difference is maintained between the two vessels. This system has two forces, say $X_k$ and $X_l$, corresponding to the difference in temperature and chemical potential between the two vessels, and two corresponding flows, $J_k$ and $J_l$. It reaches a state in which the transport of matter vanishes, whereas the transport of energy between the two phases at different temperatures continues; that is, a steady nonequilibrium state. No confusion should arise between such states and equilibrium states characterized by a zero entropy production. According to equation 4.17, entropy production is given by

$\frac{d_i S}{dt} = J_k X_k + J_l X_l \ge 0 \qquad (4.21)$

and the linear relations (4.19) here take the form

$J_k = L_{11} X_k + L_{12} X_l, \qquad J_l = L_{21} X_k + L_{22} X_l \qquad (4.22)$

The vanishing of the transport of matter in the steady state reads

$J_l = 0 \qquad (4.23)$

The coefficients $L_{11}$, $L_{12}$, $L_{21}$, $L_{22}$ are all measurable quantities, and we may therefore verify that, indeed,

$L_{12} = L_{21} \qquad (4.24)$

This example can be used to illustrate the second important property of linear nonequilibrium systems: the theorem of minimum entropy production (Prigogine, 1945; see also Glansdorff and Prigogine, 1971). It is easy to see that equation 4.23, together with equation 4.24, is equivalent to the condition that the entropy production (equation 4.21) is minimum for a given constant $X_k$. Equations 4.21, 4.22, and 4.24 give

$\frac{\partial}{\partial X_l}\left(\frac{d_i S}{dt}\right) = 2(L_{12} X_k + L_{22} X_l) = 2 J_l$
Therefore the vanishing of the mass flow (equation 4.23) is equivalent to the extremum condition
$\frac{\partial}{\partial X_l}\left(\frac{d_i S}{dt}\right) = 0$
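The content of the theorem is easy to check numerically. The sketch below is not in the original text; the coefficients are illustrative values, chosen symmetric in accordance with equation 4.24. Fixing the thermal force and minimizing the entropy production over the other force lands exactly on the state where the flow of matter vanishes:

```python
import numpy as np

# Illustrative symmetric matrix of phenomenological coefficients (eq. 4.24).
L = np.array([[2.0, 0.6],
              [0.6, 1.0]])
Xk = 1.0                                  # fixed force (temperature difference)

def entropy_production(Xl):
    X = np.array([Xk, Xl])
    return X @ L @ X                      # equation 4.21 with equation 4.22

# d(entropy production)/dXl = 2*(L[1,0]*Xk + L[1,1]*Xl) = 2*Jl,
# so the extremum condition is exactly Jl = 0 (equation 4.23):
Xl_star = -L[1, 0] * Xk / L[1, 1]
Jl = L[1, 0] * Xk + L[1, 1] * Xl_star
print(Xl_star, Jl)                        # Jl = 0 at the minimum
print(entropy_production(Xl_star))        # still positive: a steady
                                          # nonequilibrium state, not equilibrium
```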
"
FIGURE 4.5
Spatial pattern of convection cells, viewed from above in a liquid heated from below.
The theorem of minimum entropy production expresses a kind of "inertial" property of nonequilibrium systems. When given boundary conditions prevent the system from reaching thermodynamic equilibrium (i.e., zero entropy production), the system settles down in the state of "least dissipation." It was clear when this theorem was formulated that it was strictly valid only in the neighborhood of equilibrium, and for many years great efforts were made to extend this theorem to systems farther from equilibrium. It came as a great surprise when it was shown that in systems far from equilibrium the thermodynamic behavior could be quite different; in fact, even directly opposite that predicted by the theorem of minimum entropy production.

It is remarkable that this unexpected behavior had already been observed in ordinary situations studied in classical hydrodynamics. The example first analyzed from this point of view is called the Benard instability. (For a detailed discussion of this and other hydrodynamic instabilities, see Chandrasekhar, 1961.) Consider a horizontal layer of fluid between two infinite parallel planes in a constant gravitational field, and let us maintain the lower boundary at temperature T₁ and the higher boundary at temperature T₂, with T₁ > T₂. For a sufficiently large value of the "adverse" gradient (T₁ − T₂)/(T₁ + T₂), the state of rest becomes unstable and convection
starts. Entropy production is then increased, because the convection provides a new mechanism for heat transport (see Figure 4.5). Moreover, the motions of the currents that appear after convection has been established are more highly organized than are the microscopic motions in the state of rest. In fact, large numbers of molecules must move in a coherent fashion over observable distances for a sufficiently long time for there to be a recognizable pattern of flow. Thus we have a good example of the fact that nonequilibrium can be a source of order. As will be seen later in this chapter in the section titled Application to Chemical Reactions, this is true not only for hydrodynamic systems, but also for chemical systems if well-defined conditions imposed upon the kinetic laws are satisfied.

It is interesting to note that Boltzmann's order principle would assign almost zero probability to the occurrence of Benard convection. Whenever new coherent states occur far from equilibrium, the application of probability theory, as implied in the counting of the number of complexions, breaks down. For Benard convection, we may imagine that there are always small convection currents appearing as fluctuations from the average state, but below a certain critical value of the temperature gradient, these fluctuations are damped and disappear. However, above this critical
value, certain fluctuations are amplified and give rise to a macroscopic current. A new molecular order appears that corresponds basically to a giant fluctuation stabilized by the exchange of energy with the outside world. This is the order characterized by the occurrence of what are referred to as "dissipative structures." Before further discussion of the possibility of dissipative structures, a brief review of some aspects of thermodynamic stability theory will yield interesting information about the conditions for their occurrence.
Thermodynamic Stability Theory

The states corresponding to thermodynamic equilibrium, or the steady states corresponding to a minimum of entropy production in linear nonequilibrium thermodynamics, are automatically stable. The concept of Lyapounov functions was introduced in Chapter 1. Entropy production in the range of linear nonequilibrium thermodynamics is just such a function: if a system is perturbed, entropy production will increase, but the system reacts by returning to the state at which its entropy production is lowest.

For a discussion of far-from-equilibrium systems, it is useful to introduce still another Lyapounov function. As we know, equilibrium states in isolated systems are stable, as they correspond to the maximum of entropy. If we perturb a system that is near an equilibrium value $S_e$, we obtain

$S = S_e + \delta S + \tfrac{1}{2}\,\delta^2 S$

However, because the function S is maximum at $S_e$, the first-order term vanishes, and therefore the stability is determined by the sign of the second-order term $\delta^2 S$. Elementary thermodynamics permits us to calculate this important expression explicitly. First consider the perturbation of a single independent variable, the energy E in equation 4.3'. We then have

$\delta S = \frac{\delta E}{T}$

and

$\delta^2 S = -\frac{(\delta E)^2}{C_V T^2}$

in which we have used the fact that the specific heat is defined as

$C_V = \left(\frac{\partial E}{\partial T}\right)_V$

and is a positive quantity. More generally, if we perturb all the variables in equation 4.3', we obtain a quadratic form. The result is given below (the calculations may be found, e.g., in Glansdorff and Prigogine, 1971):

$\delta^2 S = -\frac{1}{T}\left[\frac{C_V}{T}\,(\delta T)^2 + \frac{\rho}{v\chi}\,(\delta v)^2_{N_j} + \sum_{jj'} \mu_{jj'}\,\delta N_j\,\delta N_{j'}\right] < 0 \qquad (4.30)$

Here ρ is the density, v = 1/ρ is the specific volume (the index $N_j$ means that composition is maintained constant in the variation of v), χ is the isothermal compressibility, $N_j$ is the mole fraction of component j, and $\mu_{jj'}$ is the derivative

$\mu_{jj'} = \left(\frac{\partial \mu_j}{\partial N_{j'}}\right)_{p,T}$

The basic stability conditions of classical thermodynamics are

$C_V > 0$ (thermal stability) (4.32)

$\chi > 0$ (mechanical stability) (4.33)

$\sum_{jj'} \mu_{jj'}\,\delta N_j\,\delta N_{j'} > 0$ (stability with respect to diffusion) (4.34)

Each of these conditions has a simple physical meaning. For example, if condition 4.32 were violated, a small fluctuation in temperature would be amplified through the Fourier equation instead of being damped. When these conditions are satisfied, $\delta^2 S$ is a negative definite quantity. Moreover, it can be shown that the time derivative of $\delta^2 S$ is related to the entropy production:

$\frac{1}{2}\,\frac{\partial}{\partial t}\,\delta^2 S = \sum_\rho J_\rho X_\rho = P \ge 0 \qquad (4.35)$

in which P is defined as

$P = \frac{d_i S}{dt} = \sum_\rho J_\rho X_\rho \ge 0 \qquad (4.36)$

As a result of inequalities 4.30 and 4.35, $\delta^2 S$ is a Lyapounov function, and its existence ensures the damping of all fluctuations. That is the reason why a macroscopic description is sufficient for large systems that are near equilibrium. Fluctuations play only a subordinate role: they appear as a negligible correction to the laws for large systems.

Can this stability be extrapolated to systems farther from equilibrium? Does $\delta^2 S$ play the role of a Lyapounov function when we consider larger deviations from equilibrium but still within the framework of macroscopic description? To answer these questions, we calculate the perturbation $\delta^2 S$ for a system in a nonequilibrium state. Inequality 4.30 remains valid in the range of macroscopic description. However, the time derivative of $\delta^2 S$ is no longer related to the total entropy production, as in inequality 4.35, but to the entropy production arising from the perturbation. In other words, we now have, as has been shown by Glansdorff and myself (1971),

$\frac{1}{2}\,\frac{\partial}{\partial t}\,\delta^2 S = \sum_\rho \delta J_\rho\,\delta X_\rho \qquad (4.37)$
FIGURE 4.6
Various steady-state solutions corresponding to reaction 4.39: O corresponds to thermodynamic equilibrium; "th" is the "thermodynamic branch."
The right-hand side is what may be called the excess entropy production. It should be re-emphasized that $\delta J_\rho$ and $\delta X_\rho$ are deviations from the values $J_\rho$ and $X_\rho$ at the stationary state, the stability of which we are testing through a perturbation. Contrary to what happens for systems at equilibrium or near equilibrium, the right-hand side of equation 4.37, corresponding to the excess entropy production, does not in general have a well-defined sign. If, for all t larger than some fixed time $t_0$, in which $t_0$ may be the starting time of the perturbation, we have

$\sum_\rho \delta J_\rho\,\delta X_\rho \ge 0 \qquad (4.38)$
then $\delta^2 S$ is a Lyapounov function and stability is ensured. Note that, in the linear range, the excess entropy production would have the same sign as the entropy production itself and we would obtain the same result as would be obtained with the theorem of minimum entropy production. However, the situation changes in the far-from-equilibrium range. There the form of the chemical kinetics plays an essential role. Examples of the effect of chemical kinetics are presented in the next section. For certain types of chemical kinetics the system may become unstable. This shows that there is an essential difference between the laws for systems at equilibrium and those for systems that are far from equilibrium. The laws of equilibrium are universal. However, far from equilibrium the behavior may become very specific. This is a welcome circumstance, because it permits us to introduce a distinction in the behavior of physical systems that would be incomprehensible in an equilibrium world. Suppose that we consider a chemical reaction of the type

$\{A\} \to \{X\} \to \{F\} \qquad (4.39)$
in which {A} is a set of initial products, {X} a set of intermediate ones, and {F} a set of final products. Chemical reaction equations are generally nonlinear. As a result, we shall obtain many solutions for the intermediate concentrations (see Figure 4.6). Among these solutions, one corresponds to
thermodynamic equilibrium and can be continued into the nonequilibrium range; it will be referred to as the thermodynamic branch. The important new feature is that this thermodynamic branch may become unstable at some critical distance from equilibrium.
Application to Chemical Reactions

Let us apply the preceding formalism to chemical reactions. Condition 4.38 for the existence of a Lyapounov function then becomes

$\sum_\rho \delta v_\rho\,\delta\!\left(\frac{A_\rho}{T}\right) \ge 0 \qquad (4.40)$

in which $\delta v_\rho$ represents the perturbation of the chemical reaction rates and $\delta A_\rho$ the perturbation of the chemical affinities as defined in equation 4.14. Consider the chemical reaction

$X + Y \to C + D \qquad (4.41)$

Because we are mainly interested in far-from-equilibrium situations, we neglect the reverse reaction and write

$v = XY \qquad (4.42)$

for the reaction rate.* According to equations 4.4 and 4.14, the affinity for an ideal system is a logarithmic function of the concentrations. Therefore,

$A = \log\frac{XY}{CD} \qquad (4.43)$

A fluctuation of X about its steady-state value then gives a positive contribution to the excess entropy production. Consider now, on the other hand, the autocatalytic reaction

$X + Y \to 2X$

The reaction rate is still assumed to be given by equation 4.42, but the affinity is now

$A = \log\frac{Y}{X}$

A fluctuation in the concentration X about some steady-state value gives rise to the excess entropy production

$\delta v\,\delta\!\left(\frac{A}{T}\right) = -\frac{Y}{X}\,(\delta X)^2 < 0$

The negative sign does not mean that the perturbed steady state will necessarily become unstable, but it may become so (the positive sign is a sufficient but not a necessary condition for stability). It is, however, a general result that instability of the thermodynamic branch necessarily involves autocatalytic reactions. One is immediately reminded of the fact that most biological reactions depend on feedback mechanisms. In Chapter 5, it will be seen, for example, that the energy-rich molecule adenosine triphosphate (ATP), necessary for the metabolism of living systems, is produced through a succession of reactions in the glycolytic cycle that involve ATP at the start. To produce ATP we need ATP. Another example is cell production: it takes a cell to produce a cell. Thus it becomes very tempting to relate the structure that is so conspicuous in biological systems to a breakdown of the stability of the thermodynamic branch. Structure and function become closely related.

To grasp this important point in a clear way, let us consider some simple schemes of catalytic reactions. For example:

$A + X \to 2X$

$X + Y \to 2Y$

$Y \to E \qquad (4.48)$
* For the sake of simplicity, we will assume that all kinetic and equilibrium constants, as well as RT, are equal to 1; also, we will use X for the concentration of X, C for the concentration of C, and so forth.
The values of the initial product A and final product E are maintained constant in time, so that only two independent variables, X and Y, are left. To simplify, we neglect the reverse reactions. This is a scheme of autocatalytic reactions: the increase in the concentration of X depends on the concentration of X, and the same is true for Y. This model has been widely used in ecological modelling, with X representing, for example, a herbivore that feeds on A, and Y representing a carnivore that propagates at the expense of the herbivore. The model is associated in the literature with the names of Lotka and Volterra (see May, 1974). Let us write the corresponding kinetic laws:

$\frac{dX}{dt} = AX - XY \qquad (4.49)$

$\frac{dY}{dt} = XY - Y \qquad (4.50)$

These equations admit the steady state

$X_0 = 1, \qquad Y_0 = A \qquad (4.51)$
FIGURE 4.7
Periodic solutions of the Lotka-Volterra model obtained for different values of the initial conditions.
To study the stability of this steady state, which corresponds in this case to the thermodynamic branch, we shall use normal mode analysis. We write

$X(t) = X_0 + x\,e^{\omega t}, \qquad Y(t) = Y_0 + y\,e^{\omega t} \qquad (4.52)$
Obviously, stability is related to the sign of the real parts of the roots of the dispersion equation: if, for each solution $\omega_n$ of the dispersion equation,

$\operatorname{Re}\,\omega_n < 0$

the initial state would be stable. We introduce equation 4.52 into kinetic equations 4.49 and 4.50, neglecting higher-order terms in x and y, and then obtain the dispersion equation for ω (which expresses the fact that the determinant for the homogeneous linear equations vanishes). Because we have two components, X and Y, the dispersion equation is of the second order. Its explicit form is

$\omega^2 + A = 0 \qquad (4.54)$

In the Lotka-Volterra case, the real part of ω vanishes, and we obtain

$\omega = \pm\,i\sqrt{A} \qquad (4.56)$
This means that we have so-called marginal stability. The system rotates around the steady state (equation 4.51). The frequency of rotation (equation 4.56) corresponds to the limit of small perturbations. The frequency of oscillations is amplitude dependent and there are an infinite number of periodic orbits around the steady state (see Figure 4.7).
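This marginal stability can be made visible by direct integration of equations 4.49 and 4.50. The sketch below is not part of the original text; the rate constants are set equal to one as assumed above, and the value of A is an arbitrary illustration:

```python
import numpy as np

A = 2.0                   # illustrative value of the maintained concentration

def rhs(state):
    X, Y = state
    return np.array([A * X - X * Y,    # equation 4.49
                     X * Y - Y])       # equation 4.50

def orbit(state, dt=1e-3, steps=60000):
    traj = [state]
    for _ in range(steps):             # fourth-order Runge-Kutta
        k1 = rhs(state); k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2); k4 = rhs(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj.append(state)
    return np.array(traj)

# Two perturbations of the steady state X0 = 1, Y0 = A (equation 4.51):
for start in ([1.1, A], [2.0, A]):
    traj = orbit(np.array(start))
    # Each initial condition stays on its own closed orbit; the amplitude
    # never relaxes to a common value (marginal stability, Figure 4.7).
    print(start, traj[:, 0].min(), traj[:, 0].max())
```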
Let us consider another example, which has been used extensively in recent years because it has remarkable properties that allow one to model a wide range of macroscopic behaviors. It is called the Brusselator, and it corresponds to the following scheme of reactions (for details, see Nicolis and Prigogine, 1977):

$A \to X$

$B + X \to Y + D$

$2X + Y \to 3X$

$X \to E \qquad (4.57)$
The initial and final products (A, B, D, and E) remain constant, whereas the concentrations of the two intermediate components (X and Y) may change in time. Putting the kinetic constants equal to one, we obtain the system of equations

$\frac{dX}{dt} = A + X^2 Y - BX - X \qquad (4.58)$

$\frac{dY}{dt} = BX - X^2 Y \qquad (4.59)$

which admit the steady-state solution

$X_0 = A, \qquad Y_0 = \frac{B}{A} \qquad (4.60)$
FIGURE 4.8
Limit cycle behavior of the Brusselator. The same periodic trajectory is obtained for different initial conditions. The letter S represents the unstable steady state.
Applying the normal mode analysis, as for the Lotka-Volterra example, we obtain the dispersion equation

$\omega^2 + (A^2 + 1 - B)\,\omega + A^2 = 0$
which should be compared with equation 4.54. One finds immediately that the real part of one of the roots becomes positive whenever

$B > 1 + A^2$
Therefore, this scheme, contrary to what happens with the Lotka-Volterra equations, presents a real instability. Numerical calculations, as well as analytical work performed for values of B larger than the critical value, lead to the behavior indicated in Figure 4.8. We now have a limit cycle; that is, any initial point in the X, Y space approaches the same periodic trajectory in time.

It is important to note the very unexpected character of this result. The oscillation frequency now becomes a well-defined function of the physicochemical state of the system, whereas, in the Lotka-Volterra case, the frequency is essentially arbitrary (because it is amplitude dependent). Today many examples of oscillating systems are known, especially in biological systems, and the important feature is that their oscillation frequency is well defined once the state of the system is given. This shows that these systems are beyond the stability of the thermodynamic branch. Chemical oscillations of this type are supercritical phenomena. The molecular mechanism leads to quite fascinating and difficult questions, to which we shall return in Chapter 6.
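The contrast with the Lotka-Volterra case can be checked numerically (again a sketch, not from the original; A and B are illustrative values satisfying B > 1 + A²): trajectories started from very different points settle onto one orbit with a single, well-defined amplitude:

```python
import numpy as np

A, B = 1.0, 3.0          # illustrative values with B > 1 + A**2

def rhs(state):
    X, Y = state
    return np.array([A + X**2 * Y - B * X - X,    # equation 4.58
                     B * X - X**2 * Y])           # equation 4.59

def late_orbit(state, dt=1e-3, steps=80000, keep=20000):
    """Integrate with Runge-Kutta and return the last `keep` points."""
    tail = []
    for i in range(steps):
        k1 = rhs(state); k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2); k4 = rhs(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        if i >= steps - keep:
            tail.append(state)
    return np.array(tail)

for start in ([0.5, 0.5], [3.0, 3.0]):
    tail = late_orbit(np.array(start))
    # Both runs report the same late-time amplitude: a limit cycle fixed by
    # (A, B) alone (Figure 4.8), unlike the Lotka-Volterra orbits.
    print(start, tail[:, 0].min(), tail[:, 0].max())
```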
The limit cycle is not the only possible type of supercritical behavior. Suppose that we consider the exchange of matter between two boxes (referred to as Box 1 and Box 2). Instead of obtaining equations 4.58 and 4.59, we obtain
$\frac{dX_1}{dt} = A + X_1^2 Y_1 - BX_1 - X_1 + D_X(X_2 - X_1)$

$\frac{dY_1}{dt} = BX_1 - X_1^2 Y_1 + D_Y(Y_2 - Y_1)$

$\frac{dX_2}{dt} = A + X_2^2 Y_2 - BX_2 - X_2 + D_X(X_1 - X_2)$

$\frac{dY_2}{dt} = BX_2 - X_2^2 Y_2 + D_Y(Y_1 - Y_2)$
The first two equations refer to Box 1, the last two to Box 2. Numerical calculations show that, under suitable conditions beyond the critical value, the thermodynamic state corresponding to identical concentrations of X and Y in the two boxes, given by equation 4.60,
FIGURE 4.9
A perturbation of Y in Box 2 (Y₂) about the homogeneous state increases, owing to the autocatalytic step, the rate of production of X in that box (X₂). This effect grows until a new state is reached corresponding to spatial symmetry breaking.
becomes unstable. An example of this behavior, as recorded by computer, is given in Figure 4.9. We have here a symmetry-breaking dissipative structure. If a steady state X₁ > X₂ is possible, the symmetrical one corresponding to X₂ > X₁ is also possible. Nothing in the macroscopic equations indicates which state will result. It is important to note that small fluctuations can no longer reverse the configuration. Once established, the symmetry-broken systems are stable. The mathematical theory of these remarkable phenomena is discussed in Chapter 5.

In concluding this chapter, emphasis is placed on three aspects that are always linked in dissipative structures: the function, as expressed by the chemical equations; the space-time structure, which results from the instabilities; and the fluctuations, which trigger the instabilities. The interplay between these three aspects,

function ⇄ structure ⇄ fluctuations

leads to the most unexpected phenomena, including order through fluctuations, which will be analyzed in the next two chapters.
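The growth of the perturbation shown in Figure 4.9 can be reproduced by direct integration of the four equations above. This is a sketch, not the book's computation; A, B, D_X, and D_Y are illustrative values chosen so that the antisymmetric (box-to-box) mode is unstable while the uniform state remains stable against uniform perturbations:

```python
import numpy as np

# Illustrative parameters: B < 1 + A**2 (no homogeneous oscillation), but
# D_Y >> D_X makes the antisymmetric (box 1 vs box 2) mode unstable.
A, B, DX, DY = 2.0, 4.5, 1.0, 10.0

def rhs(s):
    X1, Y1, X2, Y2 = s
    return np.array([
        A + X1**2 * Y1 - B * X1 - X1 + DX * (X2 - X1),
        B * X1 - X1**2 * Y1 + DY * (Y2 - Y1),
        A + X2**2 * Y2 - B * X2 - X2 + DX * (X1 - X2),
        B * X2 - X2**2 * Y2 + DY * (Y1 - Y2),
    ])

# Uniform state (equation 4.60) in both boxes, plus a tiny perturbation of Y2
s, dt = np.array([A, B / A, A, B / A + 1e-4]), 1e-3
for _ in range(200000):                      # fourth-order Runge-Kutta steps
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(s)   # X1 != X2: the fluctuation has been amplified into a stable,
           # symmetry-broken steady state (the mirror state is equally possible)
```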
Chapter 5

SELF-ORGANIZATION
As indicated in the preceding chapter, thermodynamic description takes various forms according to the distance from equilibrium. Of particular importance for us is the fact that, far from equilibrium, chemical systems that include catalytic mechanisms may lead to dissipative structures. As will be shown, these structures are very sensitive to global features such as the size and form of the system, the boundary conditions imposed on its surface, and so forth. All these features influence in a decisive way the type of instabilities that lead to dissipative structures. In some cases, the influence of external conditions may be even stronger; for example, macroscopic fluctuations may lead to new types of instabilities. Far from equilibrium, therefore, an unexpected relation exists between chemical kinetics and the space-time structure of reacting systems. It is true that the interactions, which determine the values of the relevant
kinetic constants and transport coefficients, result from short-range interactions (such as valency forces, hydrogen bonds, and van der Waals forces). However, the solutions of the corresponding equations depend, in addition, on global characteristics. This dependence (which on the thermodynamic branch, near equilibrium, is rather trivial) becomes decisive in chemical systems working under far-from-equilibrium conditions. For example, the occurrence of dissipative structures generally requires that the system's size exceed some critical value, a complex function of the parameters describing the reaction-diffusion processes. Therefore we may say that chemical instabilities involve long-range order through which the system acts as a whole.

This global behavior greatly modifies the very meaning of space and time. Much of geometry and physics is based on a simple concept of space and time, generally associated with Euclid and Galileo. In this view, time is homogeneous: time translations can have no effect on physical events. Similarly, space is homogeneous and isotropic: again, translations and rotations cannot alter the description of the physical world. It is quite remarkable that this simple conception of space and time may be broken by the occurrence of dissipative structures. Once a dissipative structure is formed, the homogeneity of time, as well as of space, may be destroyed. We come much nearer to Aristotle's "biological" view of space-time, which was described briefly in the preface.

The mathematical formulation of these problems requires the study of partial differential equations if diffusion is taken into account. The evolution of component $X_i$ is then given by equations of the form

$\frac{\partial X_i}{\partial t} = v_i(\{X_j\}) + D_i\,\frac{\partial^2 X_i}{\partial r^2} \qquad (5.1)$
FIGURE 5.1
Bifurcation diagram for equation 5.2. Solid line and dots denote stable and unstable branches respectively.
in which the first contribution comes from the chemical reactions and generally has a simple polynomial form (as in Chapter 4 in the section on application to chemical reactions), whereas the second term expresses diffusion along the coordinate r. For simplicity of notation we use a single coordinate r, although, in general, diffusion occurs in three-dimensional geometrical space. These equations must be supplemented by boundary conditions (generally either the concentrations or the flows are given on the boundaries).

The variety of phenomena that may be described by this sort of reaction-diffusion equation is quite amazing. It is therefore interesting to consider the "basic solution" to be the one corresponding to the thermodynamic branch. Other solutions may then be obtained through successive instabilities, which take place as the distance from equilibrium is increased. Such types of instabilities may be studied by means of bifurcation theory (see Nicolis and Prigogine, 1977). In principle, a bifurcation is simply the appearance of a new solution of the equations at some critical value. Suppose, for example, that we have a chemical reaction corresponding to the rate equation (see McNeil and Walls, 1974)

$\frac{dX}{dt} = X(R - X) \qquad (5.2)$
Clearly, for R < 0, the only physically admissible time-independent solution is X = 0. At the point R = 0, we have a bifurcation of a new solution, X = R (see Figure 5.1), and it may be verified by the linear stability method explained in Chapter 4, in the section on application to chemical reactions, that the solution X = 0 then becomes unstable, whereas the solution X = R becomes stable. Generally, we have successive bifurcations as we increase the value of some characteristic parameter p (like B in the Brusselator scheme). Figure 5.2 shows a single solution for the value p₁ but multiple solutions for the value p₂.
FIGURE 5.2
Successive bifurcations: A and A' represent primary bifurcation points from the thermodynamic branch; B and B' represent secondary bifurcation points.

FIGURE 5.3
Trajectories for equations 5.4.
It is interesting that, in a sense, the bifurcation introduces history into physics and chemistry, an element that formerly seemed to be reserved for sciences dealing with biological, social, and cultural phenomena. Suppose that observation shows us that the system whose bifurcation is shown in Figure 5.2 is in state C because of an increase of the value of p. Interpretation of state C implies a knowledge of the history of the system, which had to go through bifurcation points A and B. Every description of a system that has bifurcations will imply both deterministic and probabilistic elements. As will be seen in greater detail in Chapter 6, the system obeys deterministic laws, such as the laws of chemical kinetics, between two bifurcation points, but in the neighborhood of the bifurcation points fluctuations play an essential role and determine the "branch" that the system will follow.

The mathematical theory of bifurcation is generally very complex. It often implies rather tedious expansions, but there are some cases in which exact solutions are available. A very simple situation of this type is provided by Rene Thom's (1975) theory of catastrophes, which can be applied when diffusion is neglected in equation 5.1 and when such equations derive from a potential. It means that they then take the form
$\frac{dX_i}{dt} = -\frac{\partial V}{\partial X_i} \qquad (5.3)$
in which V is a kind of "potential function." This is a rather exceptional case. However, when this condition is satisfied, a general classification of the solutions of equation 5.3 may be undertaken by looking for the points at which there are changes in the stability properties of the steady states. These are the points that Thom called the "ensemble de catastrophes." Another type of system admitting an exact theory of bifurcation is described later in this chapter in the section titled A Solvable Model for Bifurcation. Finally, a general concept, which plays an important role in the theory of self-organization, is that of structural stability. It can be illustrated by a simplified form of the Lotka-Volterra equations corresponding to the prey-predator competition:

$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = -x \qquad (5.4)$
In the (x, y) phase space, an infinite set of closed trajectories surrounds the origin (see Figure 5.3). Compare the solutions of equations 5.4 with those arising from the following equations:

$\frac{dx}{dt} = y + \alpha x, \qquad \frac{dy}{dt} = -x + \alpha y \qquad (5.5)$
In the latter case, even for the smallest value of the parameter α (α < 0), the point x = 0, y = 0 is asymptotically stable, being the end point
FIGURE 5.4
Trajectories for equations 5.5.
toward which all trajectories in phase space converge, as indicated in Figure 5.4. By definition, equations 5.4 are termed "structurally unstable " with respect to " fluctuations " that slightly alter the mechanism of interaction between x and y and introduce terms, however small, of the type shown in equations 5.5. This example may seem somewhat artificial, but consider a chemical scheme describing some polymerization process in which polymers are constructed from molecules A and B, which are pumped into the system. Suppose that the polymer has the following molecular configuration: ABABAB Suppose that the reactions producing this polymer are autocatalytic. If an error occurs and a modified polymer appears such as ABAABBABA . . then it may multiply in the system as a result of the modified autocatalytic mechanism. Manfred Eigen presented interesting models that include such features and showed in idealized cases that the system would evolve toward an optimum stability with respect to the occurrence of errors in the replication of polymers (see Eigen and Winkler, 1975). His model has as its basis the idea of "cross catalysis." Nucleotides produce proteins, which in turn produce nucleotides: Nucleotides
Suppose that we impose the value of the concentrations at the boundary. We look then for solutions of the form (see equation 4.60) X
=
+ X,(t)
+ Y,(t)
nnr sin -, L
n Proteins
L ' J
B Y =A
sin
nnr L
This results in a cyclic network of reactions called a hypercycle. When such networks compete with one another, they display the ability to
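Because the explicit forms of equations 5.4 and 5.5 have not survived in this copy, the following numerical sketch illustrates structural instability with the canonical center-versus-focus example (an assumption standing in for the text's simplified Lotka-Volterra form): for a = 0 every trajectory is closed, whereas any a < 0, however small, sends all trajectories into the origin, as in Figure 5.4.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(t, z, a):
    x, y = z
    # a = 0: closed trajectories around the origin (structurally unstable);
    # a < 0, however small: trajectories spiral into the stable origin
    return [-y + a * x, x + a * y]

for a in (0.0, -0.01):
    sol = solve_ivp(flow, (0.0, 200.0), [1.0, 0.0], args=(a,),
                    rtol=1e-9, atol=1e-12)
    r = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(f"a = {a:+.2f}: distance from origin after t = 200 is {r:.3f}")
```

The qualitative change produced by an arbitrarily small perturbation of the equations is exactly what the notion of structural instability captures.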
Suppose that we impose the value of the concentrations at the boundary. We look then for solutions of the form (see equation 4.60)

$$X = A + X_0(t)\,\sin\frac{n\pi r}{L}, \qquad Y = \frac{B}{A} + Y_0(t)\,\sin\frac{n\pi r}{L} \tag{5.7}$$

in which n is an integer and X₀ and Y₀ are still time dependent. These solutions satisfy the boundary conditions X = A and Y = B/A for r = 0
and r = L. We may then apply the linear stability analysis and obtain a dispersion equation that relates ω to the space dependence given by the integer n in equations 5.7. The results are as follows. The instability may arise in different ways. The dispersion equation may have two roots that are complex conjugates, and at some point the real part of these roots vanishes. This is the situation that leads to a limit cycle, which was studied in Chapter 4; in the literature, it is often called the Hopf bifurcation (Hopf 1942). A second possibility is that we have two real roots, one of which becomes positive at some critical point. That is the situation leading to spatially nonuniform steady states. We may call it the Turing bifurcation, because Alan Turing (1952) was the first to note the possibility of such a bifurcation in chemical kinetics in his classic paper on morphogenesis. The variety of phenomena is even larger because the limit cycle may also be space dependent and lead to chemical waves. Figure 5.5 shows a chemical nonuniform steady state corresponding to a Turing bifurcation, whereas Figure 5.6 shows the simulation of a chemical wave.
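The distinction between the two kinds of instability can be made concrete for the Brusselator. For a perturbation proportional to sin(nπr/L), the linearized equations reduce to a 2 × 2 matrix for each mode n, whose characteristic equation is the dispersion equation. The sketch below (parameter values are our own illustrative choices, not values from the text) classifies each mode and reports the largest real part of ω:

```python
import numpy as np

A, B = 2.0, 4.6             # illustrative Brusselator parameters
D1, D2 = 0.0016, 0.006      # illustrative diffusion coefficients of X and Y
L = 1.0                     # length of the reaction space

for n in range(1, 11):
    k2 = (n * np.pi / L) ** 2
    # Linearization of the Brusselator around (X0, Y0) = (A, B/A)
    # for a perturbation proportional to sin(n pi r / L)
    M = np.array([[B - 1 - D1 * k2, A**2],
                  [-B,             -A**2 - D2 * k2]])
    w = np.linalg.eigvals(M)
    kind = "complex pair (Hopf-type)" if np.iscomplex(w).any() \
        else "real roots (Turing-type)"
    print(f"n = {n:2d}: max Re(omega) = {w.real.max():+.3f}  [{kind}]")
```

With these values, the low modes give stable complex pairs, whereas a band of higher modes acquires a positive real root: a Turing instability leading to a spatially nonuniform steady state.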
FIGURES 5.5 AND 5.6
Chemical nonuniform steady state corresponding to a Turing bifurcation, and computer simulation of a chemical wave, for the trimolecular model; A = 2; B = 5.45.
FIGURE 5.7
Cylindrically symmetric steady-state dissipative structure in two dimensions obtained by computer simulation with parameters D₁ = 1.6 ×, D₂ = 5 ×, A = 2, B = 4.6; circle radius R = 0.2.

FIGURE 5.8
Polar steady-state dissipative structure in two dimensions obtained by computer simulation with parameters D₁ = 3.25 ×, D₂ = 1.62 × 10⁻, A = 2, B = 4.6, R = 0.1.
The realization of one or the other of these coherent phenomena depends on the value of the diffusion coefficients D or, better, on the ratio D/L². When this parameter becomes zero, we obtain a limit cycle, the "chemical clock," whereas inhomogeneous steady states can appear only when D/L² is sufficiently large. Localized structures can also result from this scheme of reactions when the fact that the initial substances A and B (see equation 4.57) must diffuse through the system is taken into account. The wealth of dissipative structures increases greatly when two- or three-dimensional systems are considered. For example, we may then have the appearance of polarity in a hitherto uniform system. Figures 5.7 and 5.8 show the first bifurcation in a two-dimensional circular system, differing in the values of the diffusion constants. In Figure 5.7, the concentration remains radially isotropic, whereas, in Figure 5.8, the appearance of a privileged axis can be observed. This is of interest for the application to morphology, in which one of the first stages corresponds to the appearance of a gradient in a system that was initially in a spherically symmetric state. Successive bifurcations may also be of interest, for example, as illustrated in Figure 5.9.
Before B₁, we have the thermodynamic branch, whereas at B₁ a limit-cycle behavior begins. The thermodynamic branch remains unstable but bifurcates into two new solutions at point B₂; these solutions are also unstable but become stable at subsequent bifurcation points. These two new solutions correspond to chemical waves. One type of wave admits a plane of symmetry (Figure 5.10), whereas the other corresponds to rotating waves (see Figure 5.11). It is quite
FIGURE 5.10
Equal-concentration curves for X in the trimolecular model in a circle of radius R = 0.5861, subject to zero-flux boundary conditions. Solid and dashed lines refer, respectively, to concentrations larger or smaller than the value on the (unstable) steady state: X₀ = 2, A = 2, D₁ = 8 ×, D₂ = 4 ×, B = 5.4. The concentration patterns shown in parts A and B are at different stages of the periodic solution.
FIGURE 5.11
Rotating solution for the trimolecular model arising under the same conditions as those for Figure 5.10 but for an even larger supercritical value of the bifurcation parameter, B = 5.8.
remarkable that this type of situation has indeed been observed experimentally in chemical reactions (see the section titled Coherent Structures in Chemistry and Biology later in this chapter).
A Solvable Model for Bifurcation

Consider the simplified form of the Brusselator in which the reaction A → X in equation 4.57 is suppressed (reaction scheme 5.8). Such a description appears in the theory of dissipative structures for reactions involving enzymes fixed on membranes, where the presence of component X is ensured by diffusion and not through the "source" A. Let us also use fixed boundary conditions. The specific simplifying feature of this scheme is the existence of a "conserved quantity," as can be seen by adding the two equations in reaction 5.8. After eliminating one of the variables and integrating, we obtain the equation valid at the steady state

$$\frac{1}{2}\left(\frac{dw}{dr}\right)^2 + \Phi(w) = K \tag{5.11}$$

Φ(w) is a polynomial in w whose exact form is of no interest here. Note only that Φ(w) = 0 for w = 0. It is very interesting to compare this formula to the Hamiltonian as expressed in equation 2.1 or 2.2, which is written here as

$$H = \frac{1}{2}\left(\frac{dq}{dt}\right)^2 + V(q) \tag{5.13}$$

It can be seen that, to change the Hamiltonian (equation 5.13) into equation 5.11, we must replace the coordinate q by the concentration and the time by the coordinate r. Note also that w = 0 is at the boundary of the system. Consider the systems represented in Figures 5.12 and 5.13. In the situation shown in Figure 5.12, in which Φ(w) has a maximum for w = 0, only the thermodynamic branch can be stable. Suppose that we start at w = 0 and go to the right. Φ(w) becomes negative, which means, according to equation 5.11, that the gradient (dw/dr)² will increase steadily as the distance from the boundary increases. We shall therefore never be able to satisfy the second boundary condition. The situation changes completely when we consider the case in which Φ(w) has a minimum for w = 0. Then, by going to the right, we come to the point of intersection with the horizontal K. At this point, w₀, the gradient dw/dr will vanish, and we can then reach the second boundary by going back to the origin w = 0. In this way we obtain a bifurcating solution with a single extremum. Certainly other, more complicated solutions can be built in the same way. This provides us, I believe, with the simplest effective construction of a bifurcating solution in a reaction-diffusion system.
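The construction can also be carried out numerically. Taking an illustrative polynomial Φ with a minimum at w = 0 (the text leaves Φ unspecified, so this choice is our assumption), one integrates the "pendulum" equation d²w/dr² = −dΦ/dw from the boundary w = 0, with the initial slope fixed by the constant K, and follows the profile until it returns to w = 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

def Phi_prime(w):
    # Illustrative potential Phi(w) = w**2/2 - w**3/3: minimum at w = 0,
    # as in the bifurcating case of Figure 5.13
    return w - w**2

def rhs(r, z):
    w, v = z
    # Mechanical analogue of equation 5.11: d2w/dr2 = -dPhi/dw,
    # with r playing the role of time and w the role of the coordinate
    return [v, -Phi_prime(w)]

def back_to_zero(r, z):
    return z[0]
back_to_zero.terminal = True
back_to_zero.direction = -1

K = 0.1                       # the horizontal line K of Figure 5.13
v0 = np.sqrt(2 * K)           # slope at w = 0, from (1/2) v**2 = K
sol = solve_ivp(rhs, (0.0, 100.0), [1e-6, v0],
                events=back_to_zero, max_step=0.01)
print(f"profile returns to w = 0 at r = {sol.t_events[0][0]:.3f}; "
      f"single extremum w0 = {sol.y[0].max():.3f}")
```

The "half swing" of the pendulum is precisely the bifurcating profile with a single extremum described above.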
FIGURE 5.12
Situation corresponding to no bifurcation.
FIGURE 5.13
Situation corresponding to bifurcation.
It is interesting that the time periodicity of the classical pendulum problem leads to the space periodicity of the bifurcating solutions. A further fascinating analogy between time-periodic and time-independent but spatially nonuniform solutions of nonlinear systems can be drawn by choosing as the bifurcation parameter a characteristic length, L, of the reaction space. As it turns out, if L is small enough, only the spatially homogeneous state will exist and be stable for natural boundary conditions. Above a critical value, L_c1, however, a stable monotonic gradient of the kind shown in Figure 5.8 can emerge and subsist until a second critical value, L′_c1, is reached, whereupon this pattern disappears (Babloyantz and Hiernaux 1975). The existence of this finite length L_c1 for spatial self-organization is to be compared with the emergence of a finite frequency accompanying the bifurcation of a time-periodic solution like a limit cycle (see the solvable model just discussed). If L increases further, at a certain L_c2 (L_c2 > L_c1 but possibly smaller than L′_c1), a second pattern will become available, giving a nonmonotonic concentration profile. Further growth will reveal more and more complex concentration patterns, whose relative stability will depend on the occurrence of secondary and higher bifurcations. The fact that growth and morphology are linked in this picture is reminiscent of some aspects of morphogenesis in early embryonic development. For example, the "imaginal discs" in the early stages of larval development of the fruit fly Drosophila both grow and subdivide into compartments separated by rather sharp boundaries. This problem was recently analyzed by Stuart Kauffman and co-workers (1978) in the context of repeated bifurcations at successively larger lengths, as discussed above. The existence of a second bifurcation parameter, L, in addition to the kinetic bifurcation parameter p (see Figure 5.2) or B (see Figure 5.9) present in most systems, enables some systematic, if preliminary, classification of spatially nonuniform dissipative structures. In diagrams such as Figure 5.14, in which the structure of bifurcation is given in terms of a single parameter, only primary bifurcating branches are represented. In the vicinity of the bifurcation points their behavior is known. In particular, the first branch is stable (if supercritical, that is, if it arises for p > p_c), whereas the others are unstable. Higher bifurcating branches are not shown because they typically arise at a finite distance from the bifurcation points.
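The finite window of lengths over which the first pattern exists, from L_c1 to L′_c1, can be read off the same linearized analysis used above for the Turing bifurcation. A sketch (again with our illustrative parameter values, not values from the text) scans L and reports the interval in which the first nonuniform mode, n = 1, grows:

```python
import numpy as np

A, B = 2.0, 4.6
D1, D2 = 0.0016, 0.006      # illustrative diffusion coefficients

def growth(L, n=1):
    # Largest real part of the two roots of the dispersion equation
    # for mode n in a system of length L
    k2 = (n * np.pi / L) ** 2
    M = np.array([[B - 1 - D1 * k2, A**2],
                  [-B,             -A**2 - D2 * k2]])
    return np.linalg.eigvals(M).real.max()

Ls = np.linspace(0.02, 0.40, 2000)
unstable = Ls[[growth(L) > 0 for L in Ls]]
print(f"mode n = 1 grows for L between L_c1 = {unstable.min():.3f} "
      f"and L'_c1 = {unstable.max():.3f}")
```

Below L_c1 the homogeneous state is stable; above L′_c1 the first pattern gives way, and higher modes take over at larger lengths, in line with the growth-and-compartmentalization picture sketched above.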
Coherent Structures in Chemistry and Biology

The Belousov-Zhabotinskii reaction has been studied by many investigators both experimentally and theoretically. In experimental studies, it plays the same role as does the Brusselator in theoretical studies. According to the circumstances, a wide range of phenomena has been observed; for example, oscillations with a period of the order of a minute in homogeneous mixtures, and wavelike activity. Elucidation of the mechanism of this reaction is largely attributed to Richard Noyes and co-workers (Noyes and Field 1974).
FIGURE 5.14
Successive primary bifurcations from the thermodynamic branch: solid line denotes stable branch; dashed line denotes unstable branch.
The situation changes if bifurcation is followed in the space of both p and L. For certain combinations of p and L, degenerate bifurcation at a double eigenvalue of the linearized operator may occur, in which case bifurcating branches coalesce. Conversely, if p and L change slightly from this state of degeneracy, the bifurcation branches split and may give rise to secondary and higher bifurcations (Erneux and Hiernaux, in press; Golubitsky and Schaeffer 1979). The point is that all these possibilities can be classified completely, as long as one remains near the degenerate bifurcation. The situation begins to look like catastrophe theory, even though one is not dealing in general with systems deriving from a potential.
Thomas Briggs and Warren Rauscher (1973) reported oscillations in a reaction that included hydrogen peroxide, malonic acid, potassium iodate, manganous sulfate, and perchloric acid, which may be viewed as a "mixture" of Belousov-Zhabotinskii and Bray reagents. This reaction was studied systematically by Adolf Pacault and co-workers under open-systems conditions (Pacault, de Kepper, and Hanusse 1975). Finally, Endre Koros (1978) reports a whole family of simple aromatic compounds (phenol, aniline, and their derivatives) which, in reacting with acid bromate, are capable of generating oscillations without the catalytic action of metal ions like cerium or manganese ions. These metal ions are known to play an important role in the Belousov-Zhabotinskii reaction. Although oscillating reactions are rather exceptional in the field of inorganic chemistry, they are observed at all levels of biological organization, from the molecular to the supercellular. Among the most significant biological oscillations are those related to enzyme activity in metabolism, which have a period of the order of a minute, and those related to epigenesis, which have a period of the order of an hour. The best understood example of metabolic oscillation is that which occurs in the glycolytic cycle, a phenomenon of the greatest importance for the energetics of living cells (Goldbeter and Caplan 1976). It consists in the degradation of one molecule of glucose and the overall production of two molecules of ATP by means of a linear sequence of enzyme-catalyzed reactions. It is the cooperative effects involved in the enzyme activity that lead to the catalytic effects responsible for the oscillations. It is quite remarkable that oscillations in the concentrations of all metabolites of the chain are observed for certain rates of glycolytic substrate injection. Even more remarkable is the fact that all glycolytic intermediates oscillate with the same period but with different phases. The enzymes in the reaction play somewhat the same role as a Nicol prism in optical experiments: they lead to a phase shift in the chemical oscillation. The oscillatory aspect of chemical reaction is especially spectacular in the glycolytic cycle because one can follow experimentally the influence of various factors on the period and the phase of the oscillation. Oscillatory reactions of the epigenetic type are also well known. They occur as a consequence of regulatory processes at the cellular level. Proteins are generally stable molecules, whereas catalysis is a very fast process. Thus, it is not unusual for the protein level in a cell to be too
high, in which case other substances in the cell act to suppress the synthesis of the macromolecules. Such feedback gives rise to oscillations and has been studied in detail in, for example, the regulation of the lactose operon in the bacterium Escherichia coli. Other examples of oscillation-producing feedback mechanisms can be found in the aggregation process in slime molds, in reactions involving membrane-bound enzymes, and so forth. The interested reader should consult the relevant literature (for references, see Nicolis and Prigogine, 1977). It seems that most biological mechanisms of action show that life involves far-from-equilibrium conditions beyond the threshold of stability of the thermodynamic branch. It is therefore very tempting to suggest that the origin of life may be related to successive instabilities, somewhat analogous to the successive bifurcations that have led to states of matter of increasing coherence.
Ecology
Let us turn to some aspects of stability theory connected with structural stability (see Prigogine, Herman, and Allen, 1977). To take a simple example, the growth of a population X in a given medium is often expressed by

$$\frac{dX}{dt} = KX(N - X) - dX \tag{5.17}$$
in which K is related to the rate of birth, d is related to mortality, and N is a measure of the capacity of the milieu to support the population. The solution of equation 5.17 is the logistic curve presented in Figure 5.15. This evolution is entirely deterministic: the population ceases to grow when the milieu is saturated. However, it may happen, following events over which the model has no control, that a new species (characterized by other ecological parameters K′, N′, and d′) appears, initially in small quantity, in the same milieu. This ecological fluctuation raises the question of structural stability: the new species may either disappear or supplant the original one. It is easy to show, using linear stability analysis, that the new species will supplant the original
FIGURE 5.15
Logistic curve. See equation 5.17.
FIGURE 5.16
Occupation of an ecological niche by the successive species.
one only if

$$N' - \frac{d'}{K'} > N - \frac{d}{K}$$

that is, only if the saturation value of the population (the steady state X = N − d/K of equation 5.17) is larger for the new species. The occupation of the ecological niche by the successive species then assumes the form indicated in Figure 5.16. This model describes in a quantitatively exact manner the significance of "the survival of the fittest" in the framework of a problem posed in terms of the exploitation of a given ecological niche. A variety of such models may be introduced by taking into account the various possible strategies used by a population for its survival. For example, we may distinguish between species using a wide variety of foods (so-called generalists) and those using a narrow spectrum (so-called specialists). We may also take into account the fact that some populations immobilize a part of their society in "unproductive" functions, such as "soldiers"; this is closely related to the social polymorphism of insects. One could also make use of the concepts of structural stability and order through fluctuation in more complex problems and, at the cost of some drastic simplifications, even in the study of human evolution. As an example, let us consider the problem of urban evolution from this point of view (see Allen, 1977). In terms of the logistic equation (5.17), an urban region is characterized by the increase of its capacity N because of the addition of economic functions. Let S_i^k be the economic function k at point i (say, the "city" i). We must then replace equation 5.17 with an equation of the type

$$\frac{dX_i}{dt} = KX_i\left(N + \sum_k R_k S_i^k - X_i\right) - dX_i$$
in which R_k is a coefficient of proportionality. However, S_i^k itself increases with the population X_i in a complicated manner: it plays an autocatalytic role, but the efficiency of this autocatalysis depends on the need for the product k at point i, as related to the increase of the population, and on the competition from rival production units located at other points. In this model, the appearance of an economic function is comparable to a fluctuation. It will destroy the initial uniformity of the population distribution by creating employment opportunities that concentrate the population at a point. The new employment opportunities will drain demand from neighboring points; intervening in an already urbanized area, they may be destroyed by the competition from similar but better developed or better situated economic functions; they may also develop in coexistence with them, or at the cost of the destruction of one or the other of these functions.
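Returning to the ecological model, the takeover scenario of Figure 5.16 can be simulated directly from equation 5.17 extended to two species exploiting the same niche; the sharing rule N − X − X′ and the parameter values below are our illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: the invader has the larger saturation
# value N - d/K and therefore supplants the resident (Figure 5.16).
N = 10.0
K1, d1 = 1.0, 2.0        # resident: saturation N - d1/K1 = 8
K2, d2 = 1.5, 1.5        # invader:  saturation N - d2/K2 = 9

def niche(t, z):
    X1, X2 = z
    free = N - X1 - X2   # both species draw on the same capacity N
    return [K1 * X1 * free - d1 * X1,
            K2 * X2 * free - d2 * X2]

# Resident at its steady state; invader enters as a tiny fluctuation
sol = solve_ivp(niche, (0.0, 60.0), [8.0, 1e-3], rtol=1e-8)
print(f"resident: {sol.y[0, -1]:.3f}, invader: {sol.y[1, -1]:.3f}")
```

The small "ecological fluctuation" grows, saturates the niche at its own higher level, and drives the original species to extinction, which is the deterministic integration of a stochastically introduced innovation.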
Figure 5.17 illustrates a possible "history" of the urbanization of an initially uniform region, in which four economic functions seek to develop at each point in a network of fifty localities; the various attempts follow one another in a stochastic temporal sequence. The final result depends in a complex manner on the interplay of deterministic economic laws and the probabilistic succession of fluctuations. Although the details of any particular simulation depend on the exact "history" of the region, certain average properties of the structure engendered are roughly conserved. For example, the number and average separation of large centers are approximately the same for systems having the same values of the parameters, even though they undergo different histories. Such a model permits an estimation of the long-term consequences of decisions concerning, for example, transportation and investments, as the effects are passed along the various interaction loops of the system and successive adjustments of the different agents occur. In general, we see that such a model offers a new basis for the understanding of "structure" resulting from the actions (choices) of the many agents of a system, having at least in part mutually dependent criteria of action (utility functions).
Concluding Remarks
The examples studied in the last section are quite removed from the simple systems of classical and quantum mechanics. However, it should be noted that there are no limits to structural stability. Every system may
FIGURE 5.17
A possible "history" of the urbanization of an initially uniform region, in which four economic functions seek t o develop at each point in a network of fifty localities; the various attempts follow each other in a stochastic temporal sequence. A. The distribution of population at time t = 4, o n a lattice of 50 points. At t = 0, each has a population of 67. B. At t = 12, the basic urban structure of the region is emerging, with five fast-growing centers. C. By time t = 20, the structure has solidified, and the largest center exhibits the "urban sprawl" of residential suburbs. D. At t = 34, growth of the urban centers is slow and the "above average growth" is taking place in the interurban zones, resulting in counterurbanization. E. Evolution of the populations of points a, b, and C, indicated in part D, throughout the simulation.
present instabilities when suitable perturbations are introduced. Therefore, there can be no end to history. Ramon Margalef has, in a beautiful presentation, described what he calls the "baroque of the natural world" (Margalef 1976). He means that ecosystems contain many more species than would be "necessary" if biological efficiency alone were an organizing principle. This "overcreativity" of nature emerges naturally from the type of description being suggested here, in which "mutations" and "innovations" occur stochastically and are integrated into the system by the deterministic relations prevailing at the moment. Thus we have, in this perspective, the constant generation of "new types" and "new ideas" that may be incorporated into the structure of the system, causing its continual evolution.
Jade pommel from China's Han period. Its diameter is 4 cm. Private collection. Photograph by R. Kayaert, Brussels.
Chapter 6
NONEQUILIBRIUM FLUCTUATIONS
One reason why quantum mechanics has attracted such great interest is certainly the introduction of a probabilistic element into the description of the microworld. As was seen in Chapter 3, in quantum mechanics physical quantities are represented by operators that do not necessarily commute. This leads to the well-known Heisenberg uncertainty relations. Many people have seen in these relations a proof that, on the microscopic level to which quantum mechanics applies, determinism is violated; this is a statement that needs clarification. As emphasized in Chapter 3, in the section on time change in quantum mechanics, the basic equation of quantum mechanics, the Schrödinger equation, is as deterministic as the classical equations of motion. There is no uncertainty relation involving time and energy in the sense in which Heisenberg's uncertainty relations are valid. Once the wave function is
known at the initial time, we can, according to quantum mechanics, calculate its value at all times, both in the past and in the future. Yet it is true that quantum mechanics introduces a basic probabilistic element into the description of the microworld. However, the macroscopic, thermodynamic description deals generally with averages, and the probabilistic elements introduced by quantum mechanics play no role there. It is therefore of special interest to note that, independently of the uncertainty relations, there are macroscopic systems in which fluctuations and a probabilistic description play an essential role. This can be expected in the neighborhood of bifurcations, where the system has to "choose" one of the possible branches that appear at the bifurcation point. This statistical element will be analyzed in detail in this chapter in order to show that near bifurcations the law of large numbers essentially breaks down. (For an introduction to probability theory, see Feller, 1957.) In general, fluctuations play a minor role in macroscopic physics, appearing only as small corrections that may be neglected if the system is sufficiently large. However, near bifurcations they play a critical role, because there the fluctuation drives the average. This is the very meaning of the concept of order through fluctuations, which was introduced in Chapter 4. It is interesting to note that this leads to unexpected aspects of chemical kinetics. Chemical kinetics is a field that is now about a hundred years old. It has always been formulated in terms of rate equations of the type studied in Chapters 4 and 5. Their physical interpretation is quite simple. Thermal motion causes particles to collide. Most such collisions are elastic: that is, they change the translational kinetic energy (as well as the rotational and vibrational energy if polyatomic molecules are considered) without affecting the electronic structure. However, a fraction of these collisions are reactive and give rise to new chemical species. On the basis of this physical picture, one may expect that the total number of collisions between two species of molecules, say X and Y, will be proportional to their concentrations, as will be the number of inelastic collisions. This idea has dominated the development of chemical kinetics since its formulation. However, how can such chaotic behavior as that depicted by collisions occurring at random ever give rise to coherent structure? Clearly some new feature must be taken into consideration: near instabilities, the distribution of the reactive particles
is no longer random. Until recently this feature was not included in chemical kinetics; further progress in its development may be expected in the next few years. Before addressing the breakdown of the law of large numbers, let us briefly examine what is meant by this law. Consider, for example, a probability distribution of great importance in many fields of science and technology, the Poisson distribution. Suppose that we have a variable X that may take integral values, X = 0, 1, 2, 3, . . . . According to the Poisson distribution, the probability of X is given by

$$p(X) = e^{-\langle X\rangle}\,\frac{\langle X\rangle^{X}}{X!} \tag{6.1}$$
This law is found to be valid in a variety of situations, such as the distribution of telephone calls, waiting times in restaurants, or the fluctuation of the number of particles in a medium of given concentration. In equation 6.1, ⟨X⟩ represents the average value of X. An important feature of the Poisson distribution is that ⟨X⟩ is the only parameter included in the distribution: the probability distribution is entirely determined by its mean. This is not so for the Gaussian distribution (equation 6.2), which contains, in addition to the average ⟨X⟩, the dispersion σ:

$$p(X) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{(X - \langle X\rangle)^2}{2\sigma^2}\right) \tag{6.2}$$
From the probability distribution function, one can easily obtain the "variance," which gives the dispersion about the mean:

$$\langle \delta X^2\rangle = \langle (X - \langle X\rangle)^2\rangle \tag{6.3}$$
The characteristic feature of the Poisson distribution is that the dispersion is equal to the average itself:
$$\langle \delta X^2\rangle = \langle X\rangle \tag{6.4}$$
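Both properties, equation 6.4 and the square-root law that follows from it, are easy to check by direct sampling (a minimal sketch in Python):

```python
import numpy as np

rng = np.random.default_rng(0)
for mean in (10, 1_000, 100_000):
    X = rng.poisson(mean, size=200_000)
    print(f"<X> = {mean:>7}: variance/mean = {X.var() / X.mean():.3f}, "
          f"relative fluctuation = {X.std() / X.mean():.5f} "
          f"(law: {mean ** -0.5:.5f})")
```

The ratio of variance to mean stays close to one for any mean, whereas the relative fluctuation shrinks as the inverse square root of the mean, exactly the behavior discussed next.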
Let us consider a situation in which X is an extensive quantity proportional to the number of particles N (in a given volume) or to the volume
V itself. We then obtain for the relative fluctuations the well-known square-root law:

$$\frac{\sqrt{\langle \delta X^2\rangle}}{\langle X\rangle} = \frac{1}{\sqrt{\langle X\rangle}} \tag{6.5}$$
The order of magnitude of the relative fluctuation is inversely proportional to the square root of the average. Therefore, for extensive variables of order N we obtain relative deviations of order N^(−1/2). This is the characteristic feature of the law of large numbers. As a result, we may disregard fluctuations in large systems and use a macroscopic description. For other probability distributions, the mean square deviation is no longer equal to the average, as it is in equation 6.4. But whenever the law of large numbers applies, the order of magnitude of the mean square deviation is still the same, and we have
$$\frac{\langle \delta X^2\rangle}{V} \quad \text{finite for} \quad V \to \infty \tag{6.6}$$
We may also introduce into equation 6.2 a variable x that is "intensive"; that is, one that does not increase with the size of the system (such as pressure, concentration, or temperature). Taking into account equation 6.6, the Gaussian distribution for such an intensive variable becomes

$$p(x) \propto \exp\!\left(-\,V\,\frac{(x - \langle x\rangle)^2}{2c}\right), \qquad c = \frac{\langle \delta X^2\rangle}{V} \tag{6.7}$$
This shows that the most probable deviation of an intensive variable from its mean will be of the order of 1/V^(1/2) and will therefore become small when the system is large. Conversely, large fluctuations of intensive variables can occur only in small systems. These remarks will be illustrated by examples to be considered later. We shall see how, near a bifurcation point, nature always finds some clever way to avoid the consequences of the law of large numbers through an appropriate nucleation process.

Chemical Games

To include fluctuations, we have to leave the macroscopic level. However, to turn to classical or quantum mechanics is practically out of the question: every chemical reaction would then become an involved many-body problem. It is therefore useful to consider an intermediate level of description, of somewhat the same type as that considered in Chapter 1 in the discussion of the random-walk problem. The basic idea is that of the existence of well-defined transition probabilities per unit time. Consider again the probability W(k, t) of finding the Brownian particle at a place k at time t, and let us introduce the transition probability w_{kℓ}, which gives us the probability (per unit time) for a transition between the two "states" k and ℓ. We may then express the time change of W(k, t) in terms of a competition between gain terms, related to the transitions ℓ → k, and loss terms, related to the transitions k → ℓ. We then obtain the basic equation

$$\frac{dW(k, t)}{dt} = \sum_{\ell}\bigl[w_{\ell k}\,W(\ell, t) - w_{k\ell}\,W(k, t)\bigr] \tag{6.8}$$
In the Brownian motion problem, k would correspond to the position on the lattice, and w_{kℓ} would differ from zero only if k and ℓ differ by one unit. But equation 6.8 is much more general. It is in fact the basic equation for Markov processes, which play a prominent role in the modern theory of probability (see Barucha-Reid, 1960). A characteristic feature of Markov processes is that the transition probabilities w_{kℓ} involve only the states k and ℓ: the transition probability from k to ℓ does not depend on which states were involved before the occupation of state k. In this sense the system has no memory. Markov processes have been used to describe many physical situations and can also be used to model chemical reactions. For example, let us consider a simple scheme in which a component X is formed from a substance A and converted into a product E:

$$A \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; X \;\underset{k_{-2}}{\overset{k_2}{\rightleftharpoons}}\; E \tag{6.9}$$
The macroscopic kinetic equations are of the type introduced in Chapters 4 and 5 (we write here the kinetic constants explicitly):

$$\frac{dX}{dt} = k_1 A + k_{-2} E - (k_{-1} + k_2)\,X \tag{6.10}$$
We suppose, as before, that the concentrations of A and E are given. The steady state corresponding to equation 6.10 is

$$X_0 = \frac{k_1 A + k_{-2} E}{k_{-1} + k_2} \tag{6.11}$$
In this standard macroscopic description, fluctuations are neglected. To study their effect, we introduce a probability distribution W(A, X, E, t) and apply the general expression 6.8. The result is

$$\begin{aligned}\frac{dW(A, X, E, t)}{dt} ={}& k_1\bigl[(A+1)\,W(A+1, X-1, E, t) - A\,W(A, X, E, t)\bigr]\\ &+ k_{-1}\bigl[(X+1)\,W(A-1, X+1, E, t) - X\,W(A, X, E, t)\bigr]\\ &+ k_2\bigl[(X+1)\,W(A, X+1, E-1, t) - X\,W(A, X, E, t)\bigr]\\ &+ k_{-2}\bigl[(E+1)\,W(A, X-1, E+1, t) - E\,W(A, X, E, t)\bigr]\end{aligned} \tag{6.12}$$
This equation can be solved both for equilibrium and for nonequilibrium states. The result is a Poisson distribution with the macroscopic expression 6.11 as the mean for X. This is quite satisfactory and seems so natural that for some time we believed that this result could be extended to all chemical reactions whatever their mechanism. But then a new, unexpected element entered. If we consider more general chemical reactions, the corresponding transition probabilities become nonlinear. For example, using the same argument as before, the transition probability corresponding to A + X → 2X is proportional to (A + 1)(X − 1), the product of the numbers of particles of A and X before the inelastic collision. So the corresponding Markov equations also become nonlinear. It can be said that a distinct characteristic of chemical games is their nonlinearity, as contrasted with the linear behavior of random walks, for which the transition probabilities are constant. To our surprise, this new feature leads to deviations from the Poisson distribution. This unexpected result was proved by Gregoire Nicolis and myself (1971; see also Nicolis and Prigogine, 1977) and has aroused much interest. These deviations are very important from the point of view of the validity of macroscopic kinetic theory. We shall see that macroscopic chemical equations are valid only when deviations from the Poisson distribution may be neglected. As an example, suppose that we have the chemical reaction 2X → E with rate constant k. From the Markov equation 6.8, we may derive the time change of the average concentration of X. Not unexpectedly, this leads to

$$\frac{d\langle X\rangle}{dt} = -k\,\langle X(X-1)\rangle = -k\bigl[\langle X\rangle^2 + \bigl(\langle\delta X^2\rangle - \langle X\rangle\bigr)\bigr] \tag{6.13}$$
Indeed, we have to choose two molecules in succession from the X molecules present. Note that the last
term would vanish for a Poisson distribution, according to equation 6.4, which would mean that the behavior is governed by the macroscopic chemical equation. This result is quite general. We see that deviations from the Poisson distribution play an essential role in the transition from the microscopic to the macroscopic level. Normally we may neglect them. For example, in expression 6.13, the first term must have the same order of magnitude as ⟨X⟩; that is, it must be proportional to the volume. The second is then independent of the volume. Therefore, in the limit of large volumes, it may be neglected. But, if the deviation from the Poisson behavior becomes proportional not to the volume itself, as predicted by the law of large numbers, but to a higher power of the volume, then the whole macroscopic chemical description breaks down. It is interesting to observe that, in a sense, chemical kinetics is very much a mean-field theory, as are many other theories of classical physics and chemistry, such as the theory of the equation of state (Van der Waals's theory), the theory of magnetism (the Weiss field), and so forth. We also know from classical physics that such mean-field theories lead to consistent results except near phase transitions. The theory initiated by Leo Kadanoff, Jack Swift, and Kenneth Wilson, among others, has as its basis the clever idea of studying the long-range fluctuations that appear near critical points of phase transitions (see Stanley, 1971). The scale of the fluctuations becomes so large that molecular details no longer matter. The situation is rather similar here. We would hope to find conditions ensuring the existence of nonequilibrium phase transitions for macroscopic systems by imposing length-scale invariance on the master equation and taking the thermodynamic limit (i.e., the limit as both the number of particles and the volume tend to infinity while the density remains finite). From these conditions we should also be able to evaluate explicitly the way the variance of the fluctuations behaves near the transition. For nonequilibrium systems, this program has so far been carried out only for a simple model of the master equation, namely, the Fokker-Planck equation (Dewel, Walgraef, and Borckmans 1977). Further work is in progress. Let us now consider in detail a simple example in which the law of large numbers is violated.
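As a numerical aside, the deviation appearing in equation 6.13 can be probed by direct stochastic simulation of the step 2X → E. The following Gillespie-type sketch (rate constant, initial number of particles, and observation time are our illustrative choices) samples many realizations and compares the variance with the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
k, X0, t_end = 1.0, 50, 0.1     # illustrative rate and system size

def run():
    # Exact stochastic simulation of 2X -> E (Gillespie algorithm)
    t, X = 0.0, X0
    while X > 1:
        a = k * X * (X - 1)      # propensity of a reactive pair encounter
        t += rng.exponential(1.0 / a)
        if t >= t_end:
            break
        X -= 2                   # each reactive event removes two X
    return X

finals = np.array([run() for _ in range(20_000)])
m, v = finals.mean(), finals.var()
print(f"<X> = {m:.3f}, <dX^2> - <X> = {v - m:+.3f} "
      "(would be zero for a Poisson distribution)")
```

The nonzero difference between variance and mean is precisely the correction term of equation 6.13, which the macroscopic rate equation ignores.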
Nonequilibrium Phase Transitions

Friedrich Schlögl has studied the following chemical sequence (see Schlögl, 1971, 1972; Nicolis and Prigogine, 1978):

$$A + 2X \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; 3X, \qquad X \;\underset{k_{-2}}{\overset{k_2}{\rightleftharpoons}}\; B \tag{6.15}$$

in which the concentrations of A and B are maintained constant.
Following our usual prescriptions and a suitable scaling of the variables, we can easily obtain the macroscopic kinetic equation in terms of two parameters δ and δ′:

$$\frac{dx}{dt} = -x^3 - \delta x + (\delta' - \delta) \tag{6.18}$$

whose steady states are the real roots of

$$x^3 + \delta x = \delta' - \delta \tag{6.19}$$
FIGURE 6.1
Behavior of the solutions of equation 6.19 in terms of the parameters δ and δ′: in the region marked "3 solutions" there are three steady states; C represents the line of coexistence of multiple steady states.
It is interesting that this third-order equation is isomorphic with the one familiar from equilibrium phase transitions described by the Van der Waals theory. When we follow the evolution of the system along the line δ = δ′ (see Figure 6.1), we see that equation 6.19 has only one root, x = 0, for δ positive, whereas there are three roots, x = 0 and x± = ±(−δ)^(1/2), for δ negative (remember that x, being a concentration, must be real). This model is sufficiently simple for the exact mean square deviation to be obtained, as well as its behavior as δ approaches zero (see Nicolis and Turner, 1977a and b).
Both these quantities tend to infinity as δ approaches zero, indicating a breakdown of the law of large numbers in the sense defined by equation 6.6. This breakdown becomes especially evident at the points at which the system can jump from the root x₊ to the root x₋, just as in an ordinary phase transition when a liquid phase becomes a vapor phase. At this point the variance is of the order of V²; that is,

$$\frac{\langle \delta X^2\rangle}{V^2} \quad \text{finite as} \quad V \to \infty \tag{6.21}$$

In other words, near nonequilibrium phase transitions there is no longer a consistent macroscopic description: fluctuations are as important as average values. One can show that in the multiple-steady-state region the probability function P(x) itself undergoes an extreme change in the limit V → ∞. For any finite V, P(x) is a double-humped distribution with peaks centered on the macroscopic stable states x₊ and x₋. For V → ∞, each of the two humps collapses to a delta function (Nicolis and Turner, 1977a and b). Therefore a stationary probability is obtained of the form

$$P_{\text{st}}(x) = C_+\,\delta(x - x_+) + C_-\,\delta(x - x_-) \tag{6.22}$$
in which x is the intensive variable related to X, x = X/V. The weights C₊ and C₋ sum to unity and are otherwise determined explicitly from the master equation. Both δ(x − x₊) and δ(x − x₋) satisfy the master equation independently for V → ∞. Their "mixture" (equation 6.22), on the other hand, gives the thermodynamic limit of the steady-state probability distribution evaluated first for finite system size. The analogy with equilibrium phase transitions of the Ising-model type is striking: if x₊ and x₋ were the values of total magnetization, then equation 6.22 would describe an Ising magnet at the zero (equilibrium) magnetization state. On the other hand, the "pure states," δ(x − x₊) and δ(x − x₋), would describe two magnetized states sustained for arbitrarily long times if appropriate boundary conditions are applied on the surface of the system. The conclusion is not so astonishing as it at first seems. The very concept of macroscopic values, in a sense, loses its meaning. Macroscopic values are generally identified with the "most probable" values, which, if fluctuations may be neglected, become identical with average values. Here, however, we have near the phase transition two most probable values, neither of which corresponds to the average value, and fluctuations between these two "macroscopic" values become very important.
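As a numerical check, the region of multiple steady states in Figure 6.1 can be reproduced by counting the real roots of equation 6.19 over the (δ, δ′) plane (the grid below is an illustrative choice):

```python
import numpy as np

def real_roots(delta, delta_p):
    # Count the real roots of eq. 6.19: x**3 + delta*x = delta' - delta
    r = np.roots([1.0, 0.0, delta, -(delta_p - delta)])
    return int(np.sum(np.abs(r.imag) < 1e-9))

for delta in (-1.0, -0.5, 0.5):
    row = "".join("3" if real_roots(delta, dp) == 3 else "."
                  for dp in np.linspace(-2.0, 2.0, 61))
    print(f"delta = {delta:+.1f}  {row}")
```

For negative δ, a band of δ′ values with three coexisting roots appears, the coexistence region of Figure 6.1; for positive δ the root is unique everywhere.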
FIGURE 6.2
Spatial correlation function G_ij of box i with box j, plotted as a function of the distance between the boxes. The boxes are governed by the trimolecular model with A = 2, B = 3, and with linkages between the boxes as described in the text. These values of the parameters leave the system below the critical point for the formation of a spatial dissipative structure.

FIGURE 6.3
Critical behavior of the spatial correlation function for the same parameter values as those in Figure 6.2, but B = 4.
Near the critical point, long-range correlations thus arise from short-range chemical interactions. Chaos gives rise to order. What is the role of the number of particles in this process? That is an essential question to be addressed next, using the example of chemical oscillations.
Oscillations and Time Symmetry Breaking

The preceding considerations can also be applied to the problem of oscillating chemical reactions. From the molecular point of view, the existence of oscillations is very unexpected.
One might first think that it would be easier to obtain a coherent oscillating process with a few particles, say 50, than with as many as Avogadro's number, 10²³, which are generally involved in macroscopic experiments. But computer experiments show that it is just the opposite. It is only in the limit of the number of particles N → ∞ that we tend to "long-range" temporal order. To understand this result at least qualitatively, let us consider the analogy with phase transitions. If we cool a paramagnetic substance down to a temperature called the Curie point, the system undergoes a behavioral change and becomes ferromagnetic. Above the Curie point, all directions play the same role; below it, there is a privileged direction corresponding to the direction of magnetization. Nothing in the macroscopic equation determines which direction the magnetization will take. In principle, all directions are equally likely. If the ferromagnet contained a finite number of particles, this privileged direction would not be maintained in time: it would rotate. However, in an infinite system, no fluctuation whatsoever can shift the direction of the ferromagnet. The long-range order is established once and for all. The situation is very similar in oscillating chemical reactions. It can be shown that, when the system switches to a limit cycle, the stationary probability distribution also undergoes a structural change: it switches from a single-humped form to a craterlike surface centered on the limit cycle. As in equation 6.22, the crater gets sharper as V increases and, in the limit V → ∞, it becomes singular. In addition to this, however, a family of time-dependent solutions of the master equation appears. For any finite V, these solutions lead to damped oscillations, so that the only long-time solution remains the steady-state one. Intuitively, this means that the phase of the motion on the limit cycle, which plays the same role as the direction of magnetization, is determined by the initial conditions. If the system is finite, fluctuations will progressively take over and destroy phase coherence. On the other hand, computer simulations suggest that, as V increases, the time-dependent modes are less and less damped. We can therefore expect to obtain, in the limit V → ∞, a whole family of time-dependent solutions of the master equation rotating along the limit cycle (Nicolis and Malek-Mansour 1978). Again, in our intuitive picture, this would mean that in an infinite system phase coherence can be maintained for arbitrarily long times, just as a privileged initial magnetization can be
sustained in a ferromagnet. In this sense, therefore, the appearance of a periodic reaction is a time-symmetry-breaking process, exactly as ferromagnetism is a space-symmetry-breaking one. The same observations could be made for time-independent but space-dependent dissipative structures. In other words, it is only if the chemical equations are exactly valid (i.e., in the limit of large numbers, when the law of large numbers applies) that we may have coherent nonequilibrium structures. The size of the system is thus an additional element, beyond the far-from-equilibrium conditions employed in Chapter 4. If life is indeed associated with coherent structures, and everything supports this view, it must be a macroscopic phenomenon based on the interaction of a large number of degrees of freedom. It is true that some molecules, such as nucleic acids, play a dominant role, but they can be generated only in a coherent medium involving a large number of degrees of freedom.
Limits to Complexity
The methods outlined in this chapter may be applied to many situations. One of the interesting features of this approach is that it shows that the laws of fluctuation depend markedly on the scale. The situation becomes quite similar to that in the classical theory of the nucleation of a liquid drop in a supersaturated vapor. A droplet smaller than a critical size (called the size of an "embryo") is unstable, whereas a droplet larger than this size grows and changes the vapor into a liquid (see Figure 6.4). Such a nucleation effect also appears in the formation of an arbitrary dissipative structure (see Nicolis and Prigogine, 1977). We may write a master equation of the type of equation 6.8 (equation 6.23) that takes into account both the effect of the chemical reaction inside a volume ΔV and the migration of the particles through exchange with the outside world. The form of this equation is very simple. When the average
FIGURE 6.4 Nucleation of a liquid droplet in supersaturated vapor: (A) a droplet smaller than the critical size; (B) a droplet larger than the critical size.
⟨X²⟩ in volume ΔV is calculated, one obtains from equation 6.23 the sum of two terms, represented schematically as

$$\frac{d\langle X^2\rangle_{\Delta V}}{dt} = \bigl(\text{chemical effects inside } \Delta V\bigr) + \mathcal{D}\,\bigl(\text{transfer to and from the outside world}\bigr) \tag{6.24}$$
The degree of the characteristic equation obtained from the stability analysis is equal to the number of interacting species. Therefore, in a complex medium, such as a tropical forest or a modern civilization, the degree of such an equation would be very high indeed. Consequently, the chances of having at least one positive root leading to instability increase. How then is it possible that complex systems exist at all? I believe that the theory summarized here gives the beginning of an answer. The coefficient 𝒟 in equation 6.24 measures the degree of coupling between the system and its surroundings. We may expect that in systems that are very complex, in the sense that there are many interacting species or components, this coefficient will be large, as will be the size of the fluctuation that could start the instability. Therefore we reach the conclusion that a sufficiently complex system is generally in a metastable state. The value of the threshold depends both on the system's parameters and on the external conditions. The problem of limits to complexity is thus not one-sided. It is interesting to note that, in recent numerical simulations of nucleation, this role of communication (e.g., through diffusion in nucleation) has been implemented.
Effect of Environmental Noise

The traditional approach to environmental fluctuations originated with Paul Langevin's analysis of the Brownian motion problem. In this view, the rate function, say v(x), describing the macroscopic evolution of an observable quantity x gives only part of the instantaneous rate of change of x. Because of fluctuations of the surroundings, the system also experiences a random force F(x, t). Thus, considering x to be a fluctuating quantity, we write

$$\frac{dx}{dt} = v(x) + F(x, t) \tag{6.25}$$
If, as in Brownian motion, F reflects the effect of intermolecular interactions, its successive values must be uncorrelated both in time and in space. Because of this, the variance of the fluctuations obtained agrees with the central limit theorem. On the other hand, in a nonequilibrium environment, fluctuations can modify the macroscopic behavior of the system dramatically. It seems that, for this behavior to occur, the external noise must act multiplicatively rather than additively; that is, it must be coupled with a function of the state variable x that vanishes if x itself vanishes. To illustrate this point, consider a modified Schlögl model (see equation 6.15), for which the macroscopic rate equation takes the form

$$\frac{dx}{dt} = -x^3 + \gamma x^2 - x \tag{6.28}$$
FIGURE 6.5
Stationary solutions x_s of equation 6.28 versus γ: the solid curve indicates stable solutions; the dashed curve, unstable ones.
At γ = 2, a stable steady-state solution and an unstable one emerge, as shown in Figure 6.5. In addition, x = 0 is always a solution, and it is stable under infinitesimal perturbations. We now consider γ to be a random variable. The simplest assumption is that it corresponds to a Gaussian white noise, just as in Brownian motion problems. We set

$$\gamma_t = \bar{\gamma} + \sigma\,\xi_t$$

in which γ̄ is the mean value, σ² the variance, and ξ_t a Gaussian white noise.
Instead of writing equation 6.28, we now write a stochastic differential equation (Arnold 1973), which is a suitable generalization of the Langevin equation (equation 6.25), free of some ambiguities inherent in the usual formulation of the latter. This equation couples the noise with the second power, x², of the state variable. It can be connected to a master equation of the Fokker-Planck type, from which the stationary probability distribution can be computed. The result is that in this distribution the transition point γ = 2 of the phenomenological description disappears: the process certainly reaches zero and subsequently remains there. In the experimental work on the effects of noise mentioned near the beginning of this section (Kawakubo, Kabashima, and Tsuchiya 1978), the arrangement is very similar to that expressed in equation 6.28, except
that the noise is coupled with a linear term and equation 6.28 includes a constant input term. As it turns out, for small values of the variance σ², the system (a parametric oscillator circuit) exhibits limit-cycle behavior. However, if the variance exceeds a threshold, the oscillatory behavior disappears and the system falls into a steady-state regime.
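The qualitative effect of multiplicative noise on equation 6.28 can be illustrated by a direct Euler-Maruyama simulation of the stochastic differential equation dx = (−x³ + γ̄x² − x) dt + σx² dW (the interpretation of the noise term, the time step, and the values of γ̄ and σ are our assumptions): weak noise leaves the system near the upper branch, whereas sufficiently strong noise drives it to the absorbing state x = 0, in line with the disappearance of the transition point described above.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_bar = 2.2          # mean of gamma; > 2, so two nonzero states exist
dt = 1e-3

def captured(sigma, t_end=100.0):
    # Euler-Maruyama integration starting on the upper branch x ~ 1.558
    x = 1.558
    for _ in range(int(t_end / dt)):
        dw = np.sqrt(dt) * rng.standard_normal()
        x += (-x**3 + gamma_bar * x**2 - x) * dt + sigma * x**2 * dw
        x = min(max(x, 0.0), 10.0)   # crude guard for the explicit scheme
        if x < 0.05:
            return True              # effectively absorbed by x = 0
    return False

for sigma in (0.1, 0.8):
    hits = sum(captured(sigma) for _ in range(20))
    print(f"sigma = {sigma}: {hits}/20 runs captured by x = 0 "
          "within t = 100")
```

The contrast between the two noise levels is the stochastic analogue of the threshold behavior observed in the parametric-oscillator experiment.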
Concluding Remarks
We have now outlined the main elements of the physics of becoming. Many unexpected results have been reported, extending the range of thermodynamics. Classical thermodynamics was associated, as mentioned, with the forgetting of initial conditions and the destruction of structures. We have seen, however, that there is another macroscopic region in which, within the framework of thermodynamics, structures may spontaneously appear. The role of determinism in macroscopic physics must be reappraised. Near instabilities, there are large fluctuations that lead to a breakdown of the usual laws of probability theory. A new view of chemical kinetics has emerged. As a consequence of these developments, classical chemical kinetics appears as a mean-field theory; to describe the appearance of coherent structures, the formation of order from chaos, we must introduce a new, more refined description of the temporal sequences that lead to the time evolution of the system. However, the stabilization of dissipative structures requires a large number of degrees of freedom. This is the reason that a deterministic description prevails between successive bifurcations. Both the physics of being and the physics of becoming have taken on new dimensions in the past several years. Can the two points of view be unified in some way? After all, we are living in a single world whose aspects, however diverse they seem at first, must have some relation. This is the subject of Part III.

Part III
Schlieren photographs of roll cells. The cells grow larger as the concentration gradient is decreased by active transport across the capillary split. (Panels: Phase I, surface and primary roll cell; Phase II, primary and secondary roll cells; scale, 1 mm; time marked in seconds, up to 600.)
Chapter 7
KINETIC THEORY
Introduction
The relation between the two basic fields of theoretical physics, dynamics and thermodynamics, is probably the most challenging problem to be treated in this book. It has been a subject of discussion since the formulation of thermodynamics one hundred fifty years ago, and thousands of papers have been written about it. The relation touches upon the meaning of time and is, therefore, of crucial importance. We cannot expect an easy solution to the problem because, if that were possible, it would have been solved long ago. I shall give qualitative arguments to justify my conviction that we have now found a way of avoiding what seemed for so long to have been insurmountable obstacles. However, because no "proofs" are given here, the interested reader should consult a monograph in preparation on this subject (Prigogine, forthcoming).*
* More details may also be found in the appendixes to this book.
We shall start with kinetic theory, and especially with Boltzmann's H-theorem, which must be considered a milestone on the road to understanding the microscopic meaning of entropy (for a presentation of classical kinetic theory, see Chapman and Cowling, 1970). Why was Boltzmann so fascinated with the second law? What attracted him to such an extent that he devoted virtually his entire career to understanding and interpreting it? In Populäre Schriften (1905), he wrote: "If one would ask me which name we should give to this century, I would answer without hesitation that this is the century of Darwin." Boltzmann was deeply attracted by the idea of evolution, and his ambition was to become the "Darwin" of the evolution of matter. Boltzmann's approach had astounding successes. It has left a deep imprint on the history of physics: the discovery of the quantum by Planck was an outcome of Boltzmann's approach. I fully share the enthusiasm with which Schrödinger wrote in 1929 that "His [Boltzmann's] line of thought may be called my first love in science. No other has ever thus enraptured me or will ever do so again." Yet it must be recognized that there are serious difficulties with Boltzmann's approach. It proved very difficult to apply his approach except to gases at low concentration. Although modern kinetic theory has been quite successful in discussing some aspects of transport theory involving viscosity, heat conductivity, and so forth, it sheds no light on the microscopic meaning of entropy in dense systems. Even for gases at low concentration, as we shall see, Boltzmann's definition of entropy applies only for certain initial conditions. It is because of such difficulties that Gibbs and Einstein worked out a much more general approach in terms of ensemble theory, which was described in Chapters 2 and 3. Their approach was, however, essentially limited to systems in equilibrium. The complete title of Gibbs's classic memoir is Elementary Principles in Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics (Gibbs 1902). This work on (equilibrium) thermodynamics is far from Boltzmann's ambition to derive a mechanical theory of the evolution of matter. Because of the lack of success in attempts to apply ensemble theory to nonequilibrium situations (see the sections on Gibbs's entropy and the Poincaré-Misra theorem later in this chapter), the idea that supplementary approximations must be introduced to deal with nonequilibrium became prevalent. Gibbs's well-known example of the mixing of ink with water was mentioned in Chapter 1. However, this idea
of supplementary "coarse graining" has not been successful (though it appealed to many physicists) because it proved in the end to be as difficult to provide a precise prescription for coarse graining as it is to solve the problem of the microscopic meaning of irreversibility itself. Today we understand the nature of these difficulties a little better and as a result we may follow a path designed to avoid them. First, it should be emphasized that Boltzmann's approach goes beyond dynamics; it uses a remarkable mixture of dynamical and probabilistic concepts. In fact Boltzmann's kinetic equation is the forerunner of the Markov chains that were used to model chemical equations in Chapter 6. In his L e ~ o n sde thermodynamique, Poincare discussed in detail the relation of the second law with classical dynamics. Yet he did not even quote Boltzmann! Moreover, his conclusion is categorical: thermodynamics and dynamics are incompatible. He based his conclusion on a short paper he had published earlier (1889) in which he proved that, in the framework of Hamiltonian dynamics, there can be no function of coordinates and momenta that would have the properties of a Lyapounov function (see the section titled The Poincare-Misra Theorem later in this chapter and that titled The Second Law of Thermodynamics in Chapter 1). As Misra has shown recently, this conclusion remains valid even in the framework of ensemble theory. The importance of the Poincare-Misra theorem is that it leaves us only two alternatives. We can conclude with Poincare that there is no dynamical interpretation of the second law. Then, irreversibility comes from supplementary phenomenological or subjectivistic assumptions, from "mistakes." But how then can we account for the wealth of important results and concepts that derive from the second law?* In a sense living beings, we ourselves, are then " mistakes." Fortunately, there is a second alternative. Poincare tried to associate entropy with a function of correlations and momenta, but this attempt also failed. Can we not retain the idea of introducing a microscopic entropy such that macroscopic entropy is an appropriate average of the microscopic entropy, thus realizing Poincart's program in a different
* Refer to the discussions in Chapters 4 and 5 that stress the importance of dissipative structures for biological problems. How can we account for these results if the second law is an approximation?
way? Quantum mechanics has accustomed us to associating operators with physical quantities. Moreover, we have seen that in the ensemble approach (see the section dealing with ensemble theory in Chapter 2) the time evolution is described by the Liouville operator.* It therefore becomes very tempting to try to realize Poincaré's program in terms of an operator associated with the microscopic entropy (or with Lyapounov functions). At first this seems a strange idea, or at least a purely formal device. An attempt will be made here to show that this is not so, and that the idea of introducing a microscopic entropy operator is, on the contrary, a very simple and natural one. It should be remembered that the idea of an energy operator (the Hamiltonian operator H_op referred to in Chapter 3) means that we cannot associate a well-defined value of energy with an arbitrary wave function unless that function happens to be an eigenfunction of H_op. Similarly, the idea of an entropy operator would mean that the relation between the distribution function ρ and entropy is more subtle than formerly considered. Again, in general we could not associate a well-defined value of entropy with the distribution function (or with a function of ρ) unless it happens to be an eigenfunction of this operator. As will be seen, this more refined relation between the density ρ and entropy is in line with the idea of randomness on the microscopic level, as introduced into classical mechanics by the concept of weak stability (see Chapter 2). We can expect, therefore, that the construction of this operator will be possible only if the basic concepts of classical (or quantum) mechanics, such as trajectories or wave functions, correspond to unobservable idealizations. Whenever it is possible to introduce such a microscopic entropy operator, classical dynamics becomes an algebra of noncommuting operators (somewhat like quantum mechanics). It is certainly a great surprise that such a fundamental change in the structure of dynamics can be forced on us by the concept of irreversibility. Basically the same conclusions apply to quantum mechanics, whose consequent fundamental change in structure will be described briefly in Chapter 8 and in Appendix C.
* We have already seen that the use of operators becomes natural whenever we give up the idea of trajectory (see also Appendixes A and B). Certainly the idea of operators is not restricted to quantum mechanics.
In short, the usual formulation of classical (or quantum) mechanics has become "embedded" in a larger theoretical structure, which also allows the description of irreversible processes. It is very gratifying that irreversibility does not correspond to some approximation added to the laws of dynamics but to an enlargement of their theoretical structure. In this framework, there is a new type of complementarity between the dynamical description and entropy. It can be expected that this complementarity exists only if the dynamical system is sufficiently "complex." Nobody would expect a thermodynamic type of behavior for a simple harmonic oscillator. In this chapter, Boltzmann's approach is discussed and the Poincare-Misra theorem presented. The construction of a new form of classical or quantum dynamics that explicitly displays irreversible processes will be presented in Chapter 8.
A few years before the publication in 1872 of Boltzmann's fundamental paper, "Further Studies on Thermal Equilibrium between Gas Molecules," Maxwell had already studied the evolution of the velocity distribution function, f(r, v, t), which gives the number of particles having at time t the position r and the velocity v (Maxwell 1867). (In terms of the general distribution function ρ, as defined in equation 2.8, f is obtained by integrating over all coordinates and momenta except those of a single molecule.) Maxwell gave convincing arguments that, for long times in low-concentration gases, this velocity distribution should tend to the Gaussian form

$$f \sim \exp\!\left(-\frac{m v^{2}}{2kT}\right) \qquad (7.1)$$

in which m is the mass of the molecules and T the (absolute) temperature (see equation 4.1). This is the well-known Maxwell velocity distribution. Boltzmann's aim was to discover a molecular mechanism that would
ensure the validity of Maxwell's velocity distribution for long times. His starting point was to deal with large systems composed of many particles. He considered it natural that in such systems, as in social and biological situations, attention would be called not to individual particles but to the evolution of groups of particles, so that the concepts of probability could be used quite freely. He decomposed the time variation of the velocity distribution into two terms, one due to the motion of the particles, the other due to binary collisions:
$$\frac{\partial f}{\partial t} = \left(\frac{\partial f}{\partial t}\right)_{\text{flow}} + \left(\frac{\partial f}{\partial t}\right)_{\text{coll}} \qquad (7.2)$$

There is no difficulty in making the flow term explicit. We must simply introduce the Hamiltonian for free particles, H = p²/2m, and apply equation 2.11. We then obtain

$$\left(\frac{\partial f}{\partial t}\right)_{\text{flow}} = -\,\mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} \qquad (7.3)$$

in which v = p/m is the velocity. However, the evaluation of the collision term does present a problem. Boltzmann used a plausibility argument very similar to the type of arguments introduced in the theory of Markov chains, which were described in Chapter 5. Historically, however, Boltzmann's theory preceded the theory of Markov chains. As was done in equation 6.8, Boltzmann decomposed the time change due to collisions into a gain term, in which a particle with velocity v appears at point r (that means in some element of volume around point r), and a loss term, in which such a molecule disappears because of collisions. Therefore we have the scheme

$$v_1',\, v' \;\longrightarrow\; v_1,\, v \qquad \text{(gain)}$$
$$v_1,\, v \;\longrightarrow\; v_1',\, v' \qquad \text{(loss)}$$

The frequency of these collisions is proportional to the number of molecules that have velocities v', v₁' (or v, v₁); that is, f(v')f(v₁') [or f(v)f(v₁)]. After a few elementary calculations, this gives the contribution for the collision term (see Chapman and Cowling, 1970):

$$\left(\frac{\partial f}{\partial t}\right)_{\text{coll}} = \int d\omega\, d\mathbf{v}_1\, \sigma\,\bigl(f'f_1' - f f_1\bigr) \qquad (7.5)$$

The integration is performed both on the geometrical factors that determine the collision cross section σ and over the velocity v₁ of one of the molecules in the collision. Adding equations 7.3 and 7.5, we obtain Boltzmann's celebrated integro-differential equation for the velocity distribution:

$$\frac{\partial f}{\partial t} = -\,\mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} + \int d\omega\, d\mathbf{v}_1\, \sigma\,\bigl(f'f_1' - f f_1\bigr) \qquad (7.6)$$

After this equation has been obtained, we can introduce Boltzmann's H-quantity,

$$H = \int d\mathbf{v}\, f \log f \qquad (7.7)$$

and prove that

$$\frac{dH}{dt} = -\frac{1}{4}\int d\omega\, d\mathbf{v}\, d\mathbf{v}_1\, \sigma\,\log\frac{f'f_1'}{f f_1}\,\bigl(f'f_1' - f f_1\bigr) \;\le\; 0 \qquad (7.8)$$
We therefore obtain a Lyapounov function. However, the basic difference between this Lyapounov function and the one considered in Chapter 1 in the section on the second law of thermodynamics is that it is now expressed in terms of the velocity distribution and not in terms of macroscopic quantities such as temperature. The Lyapounov function reaches its minimum when the condition

$$\log f' + \log f_1' = \log f + \log f_1 \qquad (7.10)$$

is satisfied; log f must then be a collision invariant. As is known from classical mechanics, there are five collision invariants, which are the number of particles, the three Cartesian momenta of the particles, and the kinetic energy. These five quantities are conserved in a collision. Therefore log f must be a linear expression of these quantities, and, disregarding the momenta, which are important only if there is motion as a whole, we immediately arrive at the Maxwell distribution (formula 7.1), in which, indeed, log f is a linear function of the kinetic energy mv²/2.

Boltzmann's kinetic equation is a very complicated one because it contains the product of the unknown distribution functions under the integral. For systems near equilibrium, we may write
$$f = f^{(0)}\,(1 + \phi)$$
in which f⁽⁰⁾ is the Maxwell distribution and φ is considered a small quantity. We then obtain a linear equation for φ, which has proved to be extremely useful in transport theory. An even cruder approximation of Boltzmann's equation is to replace the whole collision term by a linear relaxation term, and to write
$$\frac{\partial f}{\partial t} = -\,\mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} - \frac{f - f^{(0)}}{\tau} \qquad (7.12)$$

FIGURE 7.1 Evolution of H with time. (After Bellemans and Orban, 1967.)
in which τ is an average relaxation time that gives an order of magnitude of the time interval necessary to reach the Maxwell distribution. Boltzmann's equation has given rise to many other kinetic equations that are valid under rather similar conditions (collisions between excitations in solids, plasmas, etc.). More recently, extensions to dense systems have been suggested. However, these generalized equations for dense media do not admit a Lyapounov function, and the connection with the second law is lost. The procedure for using Boltzmann's approach can be summarized as follows:

Dynamics
↓
Kinetic equation ("Markov process")
↓
Entropy (through H)
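To make the relaxation-time approximation concrete, here is a minimal numerical sketch (not from the book; the velocity grid, the initial "top hat" distribution, and all parameter values are illustrative choices). It integrates equation 7.12 for a spatially uniform gas, with f⁽⁰⁾ chosen as the Maxwell distribution having the same particle number and kinetic energy as f, and checks that the H-quantity of equation 7.7 decreases monotonically toward its Maxwellian value.

```python
import numpy as np

# Relaxation-time model (eq. 7.12 without the flow term): df/dt = -(f - f0)/tau.
m, tau, dt = 1.0, 1.0, 0.01
v = np.linspace(-8.0, 8.0, 801)

# Far-from-Maxwellian initial state: a flat "top hat" in velocity.
f = np.where(np.abs(v) < 2.0, 1.0, 1e-30)
f /= np.trapz(f, v)

# Maxwell distribution f0 (eq. 7.1) with the same number and kinetic energy;
# matching these collision invariants is what guarantees dH/dt <= 0.
kT = m * np.trapz(f * v**2, v)                  # one-dimensional equipartition
f0 = np.sqrt(m / (2 * np.pi * kT)) * np.exp(-m * v**2 / (2 * kT))

def H(g):
    return np.trapz(g * np.log(g), v)           # Boltzmann's H-quantity (eq. 7.7)

for step in range(801):
    if step % 200 == 0:
        print(f"t = {step * dt:4.1f}   H = {H(f):+.5f}")
    f += -dt * (f - f0) / tau
print(f"Maxwellian value:  H = {H(f0):+.5f}")
```

The monotonic decrease holds here because f⁽⁰⁾ shares the invariants of f; with an arbitrary reference distribution, H would not need to decrease at every instant.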
In recent years there have been many numerical calculations to verify Boltzmann's predictions. The H-quantity has been calculated on computers, for example, for two-dimensional hard spheres (hard disks), starting with disks on lattice sites with isotropic velocity distribution (Bellemans and Orban 1967). The results, which are given in Figure 7.1, confirm Boltzmann's prediction. Boltzmann's theory has also been used to calculate transport properties (viscosity and thermal conductivity). This is the great achievement of methods devised by Sydney Chapman and David Enskog for the solution of Boltzmann's equation. In this case, too, agreement has been quite satisfactory (see Chapman and Cowling, 1970, and Hirschfelder, Curtiss, and Bird, 1954). Why does Boltzmann's method work? The first aspect to consider is the assumption of molecular chaos. As was discussed in Chapter 5 in the section on classical chemical kinetics, Boltzmann calculated the average
number of collisions, neglecting fluctuations. But this is not the only important element. If we compare Boltzmann's equation with the Liouville equation (2.12), we see that in Boltzmann's equation the symmetry of the Liouville equation is broken. If we change L → −L and t → −t in the Liouville equation, this equation remains invariant. We can change L → −L by changing the momentum (or the velocity) p → −p. This is a consequence of equation 2.13. By looking at Boltzmann's kinetic equation, or the simple version given in equation 7.12, we see that the flow term changes sign when v is replaced by −v, but the collision term remains invariant: this term is even with respect to velocity inversion. This is also true for the original Boltzmann equation. Therefore the symmetry of the collision term violates the "L-t" symmetry of the Liouville equation. A characteristic feature of Boltzmann's equation is that it possesses a new type of symmetry, one that does not appear in the Liouville equation, in either classical or quantum mechanics. In brief, the time evolution contains both odd and even terms in L. This is very important. Only the collision term (which is even in L) contributes to the evolution of the Lyapounov function H.

We may say that Boltzmann's equation transposes the basic thermodynamic distinction between reversible and irreversible processes into the microscopic (or, more accurately, kinetic) description. The flow term corresponds to a reversible process and the collision term to an irreversible process. Thus, there is a close correspondence between the thermodynamic description and Boltzmann's description, but unfortunately this correspondence is not "deduced" from dynamics; it is postulated from the start (i.e., equation 7.2).

A surprising feature of Boltzmann's theorem is its universal character. The interaction between the molecules may be quite varied: we may consider hard spheres, repulsive central forces decreasing according to some power law, or both repulsive and attractive forces. Yet, independently of the microscopic interactions, the H-quantity has a universal form. We shall return to the interpretation of this remarkable feature in the next chapter. Let us turn now to some of the difficulties related to Boltzmann's treatment of kinetic theory.
One difficulty is that in practice we never know the exact initial conditions and therefore the trajectory. Yet the transition from the distribution function in phase space to the trajectory ordinarily corresponds to a well-defined process of successive approximations. However, for systems exhibiting "weak stability," there is no such process of successive approximations, and the concept of a trajectory corresponds to an idealization beyond that which can be obtained from experiments, regardless of their accuracy.

Another serious objection is based on Joseph Loschmidt's reversibility paradox: because the laws of mechanics are symmetrical with respect to the inversion t → −t, to each process there corresponds a time-reversed process. This also seems to be in contradiction with the existence of irreversible processes. Is Loschmidt's paradox at all justified? It is easy to test it by means of a computer experiment. Andre Bellemans and John Orban (1967) have calculated Boltzmann's H-quantity for two-dimensional hard spheres (hard disks). They start with disks on lattice sites with an isotropic velocity distribution. The results are shown in Figure 7.2. We see that, indeed, the entropy (that is, minus H) first decreases after the velocity inversion. The system deviates from equilibrium over a period ranging from fifty to sixty collisions (which would correspond in a low-concentration gas to about seconds). The situation is similar for spin-echo experiments and plasma-echo experiments. Over limited periods, anti-Boltzmannian behavior in this sense can be observed.

All this shows that Boltzmann's equation is not always applicable. Paul and Tatiana Ehrenfest made the remark that Boltzmann's equation cannot be correct both before and after the inversion of velocities (see Ehrenfest and Ehrenfest, 1911). Boltzmann's view was that, in some sense, physical situations for which the kinetic equation (equation 7.6) is valid would be overwhelmingly more frequent. It is difficult to accept this view, because today we can realize both computer and laboratory experiments in which his kinetic equation is not valid, at least over limited periods. What inference can be drawn from the fact that there are situations for which the kinetic equation is valid and others for which it is not? Does this fact express a limitation of Boltzmann's statistical interpretation of entropy or a failure of the second law for some class of initial conditions? The physical situation is quite clear: velocity inversion creates
FIGURE 7.2 Evolution of H with time for a system of 100 disks when velocities are inverted after 50 collisions (open circles) and 100 collisions (solid circles). (After Bellemans and Orban, 1967.)
correlations between particles that may be of macroscopic range.* Particles that collided at time t must collide again at time 2t₀ − t. These anomalous correlations can be expected to disappear during the period from t₀ to 2t₀, after which the system returns to "normal" behavior. In brief, entropy production in the interval 0 to t₀ can be understood as associated with the "Maxwellianization" of the velocity distribution, whereas in the period t₀ to 2t₀ it should be associated with the decay of the anomalous correlations. Thus the failure of Boltzmann's approach to cope with such situations can be easily understood: we need a statistical expression of entropy that depends explicitly on correlations. Let us briefly consider how the H-quantity would evolve if we were able to construct a Lyapounov function that also contained the correlations (see Prigogine et al., 1973).
* These "anomalous" correlations also have the property that they exist prior to collisions, whereas the normal correlations are produced by the collisions.
A natural candidate is the quadratic functional

$$\Omega = \int \rho^{2}\; dp\, dq \qquad (7.13)$$

in which we integrate over the phase space. In quantum mechanics the equivalent quantity, in agreement with equations 3.29 and 3.31', would be

$$\Omega = \operatorname{tr}\,\rho^{\dagger}\rho \qquad (7.14)$$
FIGURE 7.3 Time behavior of Ω in the velocity-inversion experiment. The velocities are inverted at time t₀.
We may associate the diagonal terms ⟨n|ρ|n⟩ with probabilities (i.e., equation 3.31″) and the off-diagonal ones with correlations. A Lyapounov function of the form 7.13 or 7.14 would indeed incorporate correlations and would go beyond Boltzmann's approach, which deals only with probabilities. We may add that the existence of a Lyapounov function of type 7.14 would be eminently reasonable, because the minimum of Ω, taking into account equation 7.15, would be reached when all diagonal elements of ρ are equal (and their sum equal to one) and all off-diagonal elements vanish. This is the situation described by equal probabilities and random phases. We would then have a situation quite similar to the microcanonical ensemble, considered in Chapter 2, in which all states have the same probability on the energy surface.

What would happen if we carried out the velocity-inversion experiment using expression 7.14? The result that we could expect is represented in Figure 7.3. (For a detailed discussion, see Prigogine et al., 1973.) Suppose that we start with only diagonal elements in the density matrix (which corresponds to the initial condition of no correlations). We then proceed until time t₀. In this time interval, we have an evolution quite similar to that described by Boltzmann's equation (see Figure 7.2), and Ω
decreases as a result of collisions. At t₀ we carry out a velocity inversion. This corresponds to introducing off-diagonal elements into the density matrix, because such elements correspond to correlations. Therefore, at this point Ω will increase (see expression 7.14); from t₀ to 2t₀, it will again decrease as the anomalous correlations die out. At time 2t₀ the system is in the same state as it was at time t₀. In other words, we have restored the initial state at the expense of an "entropy production" that is positive during the whole time evolution of the system. There is no longer any period corresponding to "antithermodynamic behavior" of the system. The increase at time t₀ is not in contradiction with this statement: at this time the system is not closed; the velocity inversion corresponds to a flow of entropy (or of "information"), which leads to the increase. We can contrast this behavior with that of the Boltzmann H-quantity, in which the "thermodynamic" evolution from 0 to t₀ is followed by an antithermodynamic one from t₀ to 2t₀ (see Figure 7.2). In summary, we may say that we have realized a cycle of rejuvenation, but, as in real life, rejuvenation exacts a price. Here, this price is the overall entropy production in the period from 0 to 2t₀. Can we really construct a function, such as Ω, that takes into account correlations? That is the basic question.
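A small numerical toy model (not from the book; the Hamiltonian, the dimension, and the initial probabilities are arbitrary illustrative choices) may help to show why the off-diagonal elements matter. Under a unitary evolution, a Boltzmann-like quantity built from the diagonal elements (the probabilities) alone need not behave monotonically, while the full quadratic functional tr ρ†ρ of expression 7.14 remains rigorously constant, a point to which the next section returns.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hmat = (A + A.conj().T) / 2                  # a random Hermitian "Hamiltonian"

p = np.array([0.9, 0.05, 0.02, 0.015, 0.01, 0.005])
rho0 = np.diag(p).astype(complex)            # initial state: no correlations

for t in np.linspace(0.0, 2.0, 5):
    U = expm(-1j * Hmat * t)
    rho = U @ rho0 @ U.conj().T              # unitary (Liouville) evolution
    probs = np.real(np.diag(rho))            # diagonal elements: probabilities
    H_diag = float(np.sum(probs * np.log(probs)))    # Boltzmann-like H
    omega = float(np.real(np.trace(rho @ rho)))      # tr rho^dagger rho (7.14)
    print(f"t = {t:4.2f}   H(diagonal) = {H_diag:+.4f}   tr rho^2 = {omega:.6f}")
```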
Gibbs Entropy
As just pointed out, we would like to construct a Lyapounov function such as expression 7.13 or 7.14. Let us see if this can be done using the Liouville equation. The calculation is especially simple for classical systems, because we can then obtain (using equation 2.13)

$$\frac{d\Omega}{dt} = 0 \qquad (7.16)$$

which can be easily verified by partial integration. This result is independent of the special functional (expression 7.13). We could also have considered

$$\Omega = \int \rho \log \rho\; dp\, dq \qquad (7.16')$$

or any other "convex" functional of ρ. The attempt to avoid the difficulties in Boltzmann's scheme by considering the complete distribution function ρ instead of the velocity distribution f therefore fails. That is the reason why Gibbs proceeded, as mentioned in Chapter 1, to a "subjectivistic view of irreversibility" as an illusion due to the imperfection of the sensory organs of the observer. (For a recent interpretation of this view, see Uhlenbeck in Mehra, 1973.) However, from the point of view adopted in this book, the negative result expressed in equation 7.16 can hardly be a surprise: ensemble theory differs from dynamics in the fact that "ignorance" of initial conditions is incorporated into the distribution function ρ. But this cannot be the sole reason why irreversibility, as expressed by a Lyapounov function, can be constructed. Certainly supplementary conditions, such as weak stability, are necessary. In addition, it should not be expected that the Liouville equation (2.12) will lead to a universal Lyapounov function, be it expression 7.13 or 7.16'.

Let us consider, instead of the Liouville equation, a system of ordinary linear differential equations,

$$\frac{dx_i}{dt} = \sum_j A_{ij}\, x_j$$

Here, too, we may ask whether a Lyapounov function associated with this system of equations exists. This question is discussed in all textbooks dealing with the Lyapounov method for the study of the stability of the solutions of differential equations (Minorski 1962). Generally, one considers a quadratic form such as

$$\Omega = \sum_{ij} B_{ij}\, x_i\, x_j$$

If Ω is indeed a Lyapounov function, the elements of B will in general depend on the coefficients A of the differential equations. Similarly, we have to expect that, if there is a Lyapounov function associated with dynamics (i.e., with the Liouville equation), it should be a functional of the dynamical processes involved (included in the operator L). A universal form can emerge only at a later stage, through a suitable change of coordinates. Before considering these questions, let us introduce the Poincare-Misra theorem.
The Poincare-Misra Theorem

We now consider the case corresponding to an equilibrium ensemble (see the section titled Operators in Chapter 2): ρ(0) = microcanonical ensemble = constant, which we normalize to one. Then by definition

$$\Omega(t) = \int \rho^{*}(t)\,\rho(t)\; dp\, dq = \Omega(0)$$

in which we have used equation 2.12' and the fact that L is a Hermitian operator (see equation 2.13). We recover the fact that Ω is time independent. We now look for a more general form such as

$$\Omega(t) = \int \rho^{*}(t)\, M\, \rho(t)\; dp\, dq \qquad (7.18)$$

with

$$\rho(t) = e^{-iLt}\,\rho(0)$$

To make expression 7.18 a Lyapounov function, we suppose that the time derivative D of M is negative (or zero):

$$D = \frac{dM}{dt} = -\,i\,(LM - ML) \le 0 \qquad (7.20)$$

It is now an easy matter to show that requirement 7.20 cannot be satisfied unless D = 0 everywhere; but then Ω is not a Lyapounov function if M is a function of coordinates and momenta. Let us consider the time derivative of Ω:

$$\frac{d\Omega}{dt} = \int \rho^{*}(t)\, D\, \rho(t)\; dp\, dq \qquad (7.25)$$

All this is valid when M (and D) are operators or ordinary functions of coordinates and momenta. However, in the latter case we can go one step further and replace ρ(0) in equation 7.25 by its value, which is a constant and which we take equal to one. Then equation 7.25 reduces to

$$\frac{d\Omega}{dt} = \int D\; dp\, dq$$

Because the equilibrium ensemble is stationary, this integral must vanish. But because of expression 7.20 this then implies that D = 0 everywhere on the microcanonical surface, and Ω cannot be a Lyapounov functional. This proof can be extended to general convex functionals. We therefore return to Poincare's conclusion: the microscopic entropy (or Lyapounov functional) cannot be an ordinary function of the phase variables. If it exists at all, it can only be an operator. Then equation 7.25 can indeed be satisfied by requiring only that Dρ(0), for ρ(0) = constant, is an eigenfunction of D corresponding to a vanishing eigenvalue. But then the introduction of irreversibility requires a generalization of the conceptual framework of dynamics!
A New Complementarity
What we have shown is not only that a functional of the form 7.13 cannot be used to define a Lyapounov functional (this is a direct consequence of the Liouville equation) but that more general functionals, such as those of the form 7.18, are also ruled out if the quantity M corresponding to the "microscopic entropy" is a function of coordinates and momenta. It should be emphasized that an appeal to special, "improbable" initial conditions would not help. We assume the validity of the second law as expressed by a Lyapounov function. We might introduce a weaker statement by giving up the monotonous increase of entropy. But then we are lost, for the distinction between reversible and irreversible processes would have to be replaced by some new one, which at present we cannot even formulate in a consistent way.

Therefore, it seems that we are back to the difficulties mentioned in Chapter 1. Must we regard irreversibility as an approximation or as a property that we, the observers, introduce into a reversible world? Fortunately, this is not an unavoidable consequence of the Poincare-Misra theorem. As already explained, since the advent of quantum mechanics we have become accustomed to introducing into physics a new type of object, operators (see Chapters 2 and 3). Therefore it is tempting to consider the Lyapounov functional of the form 7.17, but with M defined as a microscopic "entropy operator" that does not commute with the Liouville operator L. The commutator

$$-\,i\,(LM - ML) = D \le 0$$
FIGURE 7.4 Three possible transitions of a dynamical system: (A) transition between an initial phase-space region X at time t₀ and either of two regions Y and Z at a later time; (B) a single type of transition from X to Y; (C) distribution of phase fluid initially concentrated in region X on a long filament Y.
then defines the "microscopic entropy production." But this leads to a new form of complementarity. The concept of complementarity was introduced in Chapter 3. We have seen that in quantum mechanics position and momenta are represented by noncommuting operators (Heisenberg's uncertainty relations). This may be viewed as an example of Bohr's complementarity principle: there are observables in quantum mechanics whose numerical value cannot be determined simultaneously. Thus, we also have a new form of I complementarity-one between the dynamical and the thermodynamic descriptions. The possibility of such a complementarity was explicitly mentioned by Bohr and is confirmed by the approach taken here. Either we consider eigenfunctions of the Liouville operator to determine the dynamical evolution of the system or we consider eigenfunctions of M,
but there are no common eigenfunctions of the two noncommuting operators L and M.

What does M regarded as an operator mean? First of all, it means that there are supplementary properties not included in the dynamical description. Even if we know the eigenfunctions and the eigenvalues of L, we cannot assign a well-defined value to M. Such supplementary properties can come only from some type of randomness in the motion. We have already seen in Chapter 2 that there is a hierarchy of dynamical systems with stronger and stronger stochastic properties. We have seen that in ergodic systems the motion may be quite smooth (see the section titled Ergodic Systems in Chapter 2), but this is not so when stronger conditions are introduced. Consider a dynamical system that is initially (at time t₀) in region X of phase space. Suppose that at time t₀ + τ it is found either in region Y or in region Z (see Figure 7.4A). In other words, if we know that at t₀ the system is in region X, we can only calculate the probability that it will be in either Y or Z at time t₀ + τ. This does not prove that there is some "basic randomness" associated with the motion. To investigate this point, we decrease the size of region X, in which case one of two things may happen: either for some sufficiently small size of the initial region all parts will later be in the "same" region, say Y (see Figure 7.4B), or the situation shown in Figure 7.4A persists whatever the size of region X. The second case corresponds precisely to the "weak stability" condition: each region, whatever its size, contains
different types of trajectories, and the transition to a single trajectory becomes ambiguous. Our example is somewhat oversimplified: our requirements are satisfied if each phase element is sufficiently "distorted" with the passage of time. For example, in Figure 7.4C, the phase fluid initially concentrated in region X is distributed after some time on a long filament Y. Again, the concept of a trajectory becomes ambiguous if this distortion persists regardless of the size of region X. It is in such situations that we may expect a microscopic entropy operator to exist. As will be seen in Chapter 8, this expectation is verified: the operator M can indeed be constructed for systems that present either mixing (or a stronger condition) or a Poincare catastrophe.

In spite of the basic difference in our arguments, the conception of irreversibility arrived at here is, in its essence, quite similar to that put forward by Boltzmann. Irreversibility is the manifestation on a macroscopic scale of "randomness" on a microscopic scale. For examples such as the one just discussed (i.e., that illustrated in Figure 7.4), we may go even further and associate with the system a new type of time, an operator time T closely related to M. Because this T is an operator, it has as eigenvalues the possible ages a system may have (see also Appendix A). A given initial distribution ρ can generally be decomposed into members having different ages and evolving differently. This is probably the most intriguing conclusion to be drawn in this book: although in physics time was always a mere label associated with trajectories or wave packets, here time emerges with a completely new meaning associated with evolution. We shall return to this idea in Chapters 8 and 9.

It should be emphasized that, although for Boltzmann irreversibility was a consequence of molecular chaos "superimposed" on the equations of dynamics, we pursue a purely dynamical approach. Both randomness and irreversibility are consequences of the structure of the equations of motion. For example, in classical mechanics we have
Dynamical characteristics (mixing, Poincare catastrophe)
        → Randomness
        → Irreversibility (M operator)
Contrary to what Boltzmann attempted to show, there is no "deduction" of irreversibility from randomness; they are only cousins! Chapter 8 will deal first with the consequences of the existence of both the operator M and a Lyapounov function. The construction of the latter will then be discussed briefly, followed by a few examples.
Chapter 8

THE MICROSCOPIC THEORY OF IRREVERSIBLE PROCESSES

Irreversibility and the Extension of the Formalism of Classical and Quantum Mechanics
We have seen in Chapter 7 that the "minimum assumption" necessary for introducing irreversibility into classical mechanics is to enlarge the concept of classical observables: instead of functions of coordinates and momenta, an operator M has been introduced. This means that classical dynamics no longer consists of the study of orbits; rather, it becomes the study of the time evolution of distribution functions. The situation is somewhat similar in quantum mechanics. There is no way of introducing an operator such as M in the framework of the reversible evolution of wave functions as described by the Schrodinger equation (3.17) (see Appendix C). Therefore we must, as in classical mechanics, turn to ensemble theory (see Chapter 3) and use the quantum version of the Liouville theorem
Note: This chapter is the most technical one of this book. For the convenience of the reader, a nontechnical summary is presented in Chapter 9.
(3.36). Moreover, in quantum mechanics we must make a distinction between operators, which act on wave functions, and "superoperators," which act on operators (or matrices). For example, the Liouville operator L acts on the density matrix ρ (see equations 3.35 and 3.36) and is therefore a superoperator. The entropy operator M in quantum mechanics is also a superoperator because it acts on the density matrix ρ. But it differs in a fundamental way from the Liouville operator L because of the difference between pure states and mixtures introduced in Chapter 3 (see equations 3.30 and 3.32). As described in detail in Appendix C, L is a "factorizable" superoperator, which means that, when acting on a ρ corresponding to a pure state (i.e., to a well-defined wave function), it leaves the system in a pure state, that is, in a state with a well-defined wave function. This is in agreement with the Schrodinger equation (3.17), according to which a wave function evolves into another wave function in time. On the other hand, M is not factorizable; it does not preserve the difference between pure states and mixtures. In other words, the distinction between pure states and mixtures is lost in systems in which irreversible processes described by a Lyapounov function arise. This does not mean that Schrodinger's equation becomes wrong, any more than Hamilton's equations do in classical mechanics, but the distinction between pure states and mixtures (or between wave functions and density matrices) is no longer observable.

Whenever M can be introduced, we may proceed as in classical mechanics. As usual, the integration over phase space is replaced by the trace operation (see equation 3.32), and expression 7.18 then becomes

$$\Omega(t) = \operatorname{tr}\,\rho^{\dagger}(t)\, M\, \rho(t) \;\ge\; 0 \qquad (8.1)$$
with

$$\frac{d\Omega}{dt} \le 0$$

In keeping with the notation used in earlier publications (see, e.g., Prigogine et al., 1978), we write Ω for this quantity rather than H. Again, it is not always possible to find an operator M such that the two preceding inequalities are satisfied. If the Hamiltonian has a discrete spectrum, the motion of the wave function (or of ρ) is periodic. Therefore a necessary condition is the existence of a continuous spectrum.

It is beyond the scope of this book to describe in detail the microscopic theory of irreversible processes. The objective here is simply to assist the reader in grasping the physical meaning of the concepts involved. First, we shall establish the connection between the existence of a Lyapounov function such as expression 8.1 and Boltzmann's approach, and then consider some applications in qualitative terms. We have also seen in Chapter 3 that conventional quantum mechanics has led to unsolved problems that are widely discussed today. These problems can be seen from a new perspective once irreversibility is consistently incorporated in the dynamical description.
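The statement above that L is a factorizable superoperator can be checked directly in a small example (a sketch for an assumed two-level system, not a construction from the book): writing the density matrix as a vector, L becomes an ordinary matrix built from Kronecker products, and the unitary evolution it generates carries a pure state into a pure state, as witnessed by the invariance of tr ρ².

```python
import numpy as np
from scipy.linalg import expm

# Liouville superoperator on vec(rho) (row-major vectorization):
# L rho = H rho - rho H  ->  L = kron(H, I) - kron(I, H.T)
H = np.array([[1.0, 0.3],
              [0.3, -1.0]])                        # a Hermitian 2x2 Hamiltonian
I = np.eye(2)
L = np.kron(H, I) - np.kron(I, H.T)

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).astype(complex)    # a pure state, tr rho^2 = 1

t = 0.7
rho_t = (expm(-1j * L * t) @ rho.reshape(-1)).reshape(2, 2)

print("tr rho   =", np.trace(rho_t).real)          # conserved: 1
print("tr rho^2 =", np.trace(rho_t @ rho_t).real)  # conserved: 1 (still pure)
```

A non-factorizable superoperator such as M has no such product structure, and purity would then no longer be preserved.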
Inserting definitions 8.2 and 8.3 into expression 8.1, we get, using the definition of Hermiticity (see definitions 3.11 and 3.34),

$$\Omega = \operatorname{tr}\,\tilde\rho^{\dagger}\,\tilde\rho \qquad (8.4)$$

This is a very interesting result, because expression 8.4 is of the same type as the one that we were trying to derive from expression 7.14 to describe the velocity-inversion experiment. But we see that this form of the Lyapounov function can exist only in a new representation obtained from the preceding one by transformation 8.5. Any explicit reference to the operator M in expression 8.1 has disappeared through the transformation. The definition of a Lyapounov function is not unique. When expression 8.4 is a Lyapounov function, all convex functionals of ρ̃ such as

$$\Omega = \operatorname{tr}\,\tilde\rho\,\log\tilde\rho$$

are also Lyapounov functions (see Appendix A, in which ρ̃ is shown to satisfy a Markov process). We are dealing with an expression that, like Boltzmann's H-quantity (expression 7.7), depends only on the statistical description of the system. Once we know the state of the system as given by ρ̃, we may evaluate Ω. The particular state that leads to the minimum of Ω acts as an attractor for the other states. There is therefore a close relation between the existence of the operator M and the transformation theory involving the operator Λ (see definition 8.5).

Let us now reconsider the formal properties of the transformation from expression 8.1 to expression 8.4 (for details, see Prigogine, forthcoming). First we write the equations of motion in the new representation. Taking into account definition 8.5, we obtain

$$i\,\frac{\partial\tilde\rho}{\partial t} = \Phi\,\tilde\rho \qquad (8.6)$$

with

$$\Phi = \Lambda^{-1}\, L\, \Lambda \qquad (8.7)$$

The new equation of motion is related to the original one by a similitude (see equation 3.13). But we expect that a transformation that permits the inclusion of "irreversibility" must be more than a mere change of coordinates expressed by a unitary transformation. To clarify this point, we will use the solution of the equations of motion (equation 3.36) and replace the expressions in 8.1 by the corresponding explicit inequalities. We then use expression 8.5 to make the transformation to the new representation and obtain for the entropy production (expression 8.9)

$$-\frac{d\Omega}{dt} = \operatorname{tr}\,\tilde\rho^{\dagger}\,\bigl[\,i\,(\Phi - \Phi^{\dagger})\,\bigr]\,\tilde\rho \;\ge\; 0 \qquad (8.11)$$

This implies that the difference between Φ and its Hermitian adjoint Φ† does not vanish:

$$\Phi - \Phi^{\dagger} \neq 0$$

Therefore we note the important conclusion that the new operator of motion that appears in the transformed Liouville equation (8.6) can no longer be Hermitian, as the Liouville operator L was. This shows that we must leave the usual class of unitary transformations (expression 3.11) and proceed to an extension of the symmetry of quantum mechanical operators. Fortunately, it is easy to determine the class of transformations that we must consider now. Average values can be calculated in both the old and the new representations. The result should be the same. In other words, we require that

$$\langle A\rangle = \operatorname{tr}\,A^{\dagger}\rho = \operatorname{tr}\,\tilde A^{\dagger}\tilde\rho \qquad (8.12)$$

with a suitably transformed observable Ã. In this sense, the two representations of quantum mechanics should indeed be equivalent (if they were not, at least one of them would make incorrect predictions; no experimental information available at present points in this direction).

Moreover, our interest lies in transformations that depend explicitly on the Liouville operator. This is indeed the physical motivation of the theory. We have seen in Chapter 7 that the Boltzmann-type equations have a broken L-t symmetry. We want to realize this new symmetry through our transformation. This can be done only by considering an L-dependent transformation Λ(L). The density ρ and the observables have the same equations of motion, except that L is replaced by −L (see equations 3.36 and 3.40). We therefore require that, for an observable A,

$$\tilde A = \Lambda^{-1}(-L)\, A$$

Therefore,

$$\Lambda^{-1}(L) = \Lambda^{\dagger}(-L) \qquad (8.15)$$

We shall call

$$\Lambda^{*}(L) \equiv \Lambda^{\dagger}(-L) \qquad (8.16)$$

the "star-Hermitian" operator associated with Λ ("star" always means the inversion L → −L followed by Hermitian conjugation). Then equation 8.15 shows that, for star-unitary transformations, the inverse of the transformation is equal to its star-Hermitian conjugate. This condition replaces, in the present development, the condition usually imposed on transformations in quantum mechanics, namely that the operators be unitary. If Λ is independent of L, then it is simply a unitary transformation, but this case is of no interest here.

It is not astonishing that we find a nonunitary transformation law. Unitary transformations are very much like changes in coordinates, which do not affect the physics of the problem: whatever the coordinate system, the physics of the system remains unaltered. But now we are dealing with a quite different problem. We want to go from one type of description, the dynamical one, to another, the "thermodynamic" one. This is why we need the drastic type of change in representation expressed by the new transformation law (equation 8.15); transformations that satisfy it are called star-unitary.*

* There is an interesting analogy with the quantum statistics that are distinguished by +1 or −1 in the distribution functions. Here also the condition of equivalence (equation 8.12) leads to two classes of transformations: Λ†(L) = Λ⁻¹(±L). The choice of + leads to conventional unitary transformations, whereas − leads to representations displaying irreversible processes.

As already explained, equation 8.12 can always be satisfied by unitary transformations (they are recovered if we consider Λ independent of L). The remarkable feature is that, in addition, there is a well-defined class of nonunitary transformations that satisfies the equivalence condition and leads to a new form of the equations of motion. Let us now reconsider equation 8.7. The new dynamical operator Φ is obtained through a similitude from L, but this similitude is in terms of a star-unitary (not a unitary!) operator. Using the facts that L is Hermitian and that equations 8.15 and 8.16 hold, we find that the operator of motion is star-Hermitian. This is most welcome! Indeed, to be star-Hermitian, an operator may be either Hermitian and even under L-inversion (i.e., it does not change sign when L is replaced by −L) or anti-Hermitian and odd (odd means that it changes sign when L is replaced by −L). Therefore, a star-Hermitian operator of motion can generally be written as the sum

$$\Phi = \Phi^{e} + \Phi^{o}$$

The superscripts "e" and "o" refer to the even and the odd part of the new time evolution operator Φ. The condition of dissipativity (expression 8.11), which expresses the existence of a Lyapounov function Ω, now becomes a condition on the even part alone:

$$i\,\Phi^{e} \;\ge\; 0 \qquad (8.20)$$

It is the even part that gives the "entropy production." Thus we have obtained a new form of microscopic equation (as was the Liouville equation in classical or quantum mechanics), but our new form explicitly displays a part that can be associated with a Lyapounov function. In other words, the equation

$$i\,\frac{\partial\tilde\rho}{\partial t} = \bigl(\Phi^{o} + \Phi^{e}\bigr)\,\tilde\rho \qquad (8.21)$$

contains a reversible part and an irreversible part. The macroscopic thermodynamic distinction between reversible and irreversible processes has now been incorporated into the microscopic description. What is so satisfactory here is that the symmetry obtained in equation 8.21 is exactly the Boltzmann symmetry: as we have seen in the Boltzmann-type equation, the collision part is even in L and the flow part odd. The physical meaning is also similar. The even term contains all the processes that contribute to the increase of the Lyapounov function and drive the system to equilibrium. This includes scattering, production and decay of particles, damping, and so forth.

The step made through the nonunitary transformation is quite crucial. We go from the dynamical description in terms of trajectories or wave packets to a description in terms of processes. It is amazing how the various elements of this approach conspire to achieve a picture that unifies dynamics and thermodynamics. Once we have postulated the existence of the Lyapounov function (expression 8.1), the existence of a representation of dynamics with the characteristic broken "L-t symmetry" follows immediately. The chain is as follows:

Lyapounov function (operator M)
↓
star-unitary transformation Λ
↓
star-Hermitian operator of motion (with broken symmetry)

In simple cases (such as dilute systems or weakly interacting systems), the new equations of motion have a simple probabilistic interpretation in terms of Markov chains, again in line with Boltzmann's intuition (see Prigogine et al., 1973, and Appendix A). But in our discussion (see Chapter 7) dynamics comes first; the physical interpretation, including its probabilistic aspect, can only be a consequence of the transformation theory.

A posteriori, it is difficult to imagine how the conflict between "being and becoming" could have been resolved in a different way. In the nineteenth century, there was a profusion of controversy between "energeticists" and "atomists," the former claiming that the second law destroys the mechanical conception of the universe, the latter that the second law could be reconciled with dynamics at the price of some "additional assumptions" such as probabilistic arguments. What this means exactly can now be seen more clearly. The "price" is not small, because it involves a far-reaching modification of the structure of dynamics.
Construction of the Entropy Operator and the Transformation Theory: The Baker Transformation
So far we have considered only the formal properties of M and its relation to transformation theory. Let us now look briefly into the construction of M and of the transformation operator Λ. This in itself is a vast subject and can be dealt with here only in general terms, to indicate the methods that must be used (see also Appendixes A and C). First, we shall consider classical dynamics.
Then, as repeatedly mentioned, we must consider the two different situations that lead to the type of "weak stability" in which we can expect a Lyapounov function to exist (see Chapter 2). For ergodic systems, Misra (1978) has shown that mixing is a necessary condition and K-flow a sufficient condition for the existence of the microscopic entropy operator M. As noted in Chapter 2, this classification of dynamical systems is based on the spectral properties of the Liouville operator: mixing implies that L has no discrete eigenvalue other than zero, and K-flow implies that all eigenvalues of L have the same multiplicity. Note that ergodicity alone is not sufficient; the Liouville operator L must have no discrete eigenvalue other than zero, which corresponds to equilibrium (see Chapter 2), so that there are no periodic motions.

Misra has shown that in the case of a K-flow a Hermitian operator T conjugate to L can be associated with it, such that their commutator is constant:

$$-\,i\,[L, T] = -\,i\,(LT - TL) = 1 \qquad (8.22)$$

in which 1 is the unit operator. A plausibility argument follows (for the proof, see Misra, 1978, and, for an example of the construction, see Appendix A). For a K-flow, we may go to a representation in which the operator L is represented by a number, say λ. We then find an operator T, which in the same representation will be given by the derivative i(d/dλ). That our approach introduces a new complementarity between dynamics and thermodynamics is especially apparent here, because the relation given by equation 8.22 is formally similar to the relation between momentum and coordinate in quantum theory that follows from equation 3.2.
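This plausibility argument can be verified in one line (a sketch; λ is treated as a continuous spectral variable, and the overall sign of the constant depends on the convention chosen): with L acting as multiplication by λ and T = i(d/dλ), we have, for any smooth f(λ),

$$(LT - TL)\,f \;=\; \lambda\,\bigl(i f'\bigr) \;-\; i\,(\lambda f)' \;=\; i\lambda f' - i f - i\lambda f' \;=\; -\,i f,$$

so that $i(LT - TL)\,f = f$: the commutator of L and T is a constant multiple of the identity, exactly as for momentum and coordinate in quantum mechanics.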
The Liouville operator L corresponds formally to a time derivative (see equation 2.12). Therefore, the conjugate operator T corresponds to a "time," in the sense that in a suitable representation it satisfies the commutation relation (equation 8.22) with L. In other words, we can add to dynamics an operator T representing a fluctuating time, in accord with the general comments made in Chapter 7.

A simple example is supplied by the baker transformation, so called because it evokes the image of kneading dough. (This transformation, or mapping, is described in greater detail in Appendix A.) Consider the unit square shown in Figure 8.1A. The coordinates x, y are defined modulo one: that is, all points that do not lie in the unit square are moved into it by adding integers to or subtracting them from their coordinates. For example, (x, y) = (1.4, 2.3) is brought into the unit square as (0.4, 0.3). The transformation is performed at regular time intervals (it is a discrete transformation):

$$(x, y) \;\to\; \begin{cases} \bigl(2x,\; \tfrac{1}{2}y\bigr) \bmod 1, & 0 \le x < \tfrac{1}{2} \\[4pt] \bigl(2x - 1,\; \tfrac{1}{2}(y + 1)\bigr) \bmod 1, & \tfrac{1}{2} \le x < 1 \end{cases}$$

FIGURE 8.1 Baker transformation. First, the unit square (A) is flattened into a ½ × 2 rectangle (B). Then the rectangle is cut and reassembled to form a new square (C), in which the shaded and unshaded areas are split into four separate regions rather than the two shown in part A.

The mapping has a simple geometrical meaning. If at time t₀ the phase point is at (x, y), then at time t₀ + τ it is at the point obtained by flattening the square into a ½ × 2 rectangle and then cutting and reassembling it to form a new square, as shown in Figure 8.1B and C. Although this is not a Hamiltonian dynamical transformation, it can be used to illustrate many aspects of Hamiltonian flows because it is measure-preserving. The baker transformation leads precisely to the situation described in Chapter 7 in the section on a new complementarity: each finite region is split by the transformation into separate regions.
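The mixing character of the map is easy to see numerically (a minimal sketch; the cell size, the number of points, and the bin count are arbitrary choices): a cloud of phase points initially confined to a small cell spreads, after a handful of iterations, over every cell of any finite-resolution partition of the square.

```python
import numpy as np

def baker(x, y):
    # One application of the baker transformation on the unit square.
    left = x < 0.5
    xn = np.where(left, 2.0 * x, 2.0 * x - 1.0)
    yn = np.where(left, 0.5 * y, 0.5 * (y + 1.0))
    return xn % 1.0, yn % 1.0

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 0.1, n)       # phase fluid concentrated in a 0.1 x 0.1 cell
y = rng.uniform(0.0, 0.1, n)

bins = 16                          # the finite precision of observation
for step in range(13):
    counts, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    print(f"step {step:2d}: occupied cells = {int((counts > 0).sum()):3d} / {bins * bins}")
    x, y = baker(x, y)
```

At the chosen resolution the distribution soon becomes indistinguishable from the uniform one; refining the partition only delays, and never prevents, this approach to apparent equilibrium, which is the sense of "microcanonical distribution" used below.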
The operator T in this case has a simple meaning: all of its eigenvalues are the integers from −∞ to +∞. The corresponding eigenfunctions correspond to space distributions that are generated from some standard distribution in a given number of steps. For example, the eigenfunction corresponding to the eigenvalue 20 is generated by 20 applications of the baker transformation from the distribution corresponding to the eigenvalue 0. A distribution (more precisely, the excess with respect to the uniform equilibrium distribution) may have a well-defined age; it is then by definition an eigenfunction of T. In general, a distribution has no well-defined age but may be expanded in a series of functions having well-defined ages. We may then speak of the average age and of the "fluctuation" of the age. The analogy with quantum mechanics is striking. More details can be found in Appendix A.

Once T is known, we can take for M an operator that is a decreasing function of T. We then obtain a Lyapounov function (or an H-quantity) that takes its minimum value at microcanonical equilibrium. The meaning of the microcanonical distribution is very simple here: whatever the precision of an observation (assuming only that it is finite), successive applications of the baker transformation lead to a distribution that is uniform (the inhomogeneity lies below the scale of observation). It is quite remarkable that in such a simple case we may indeed introduce a Lyapounov functional that varies monotonically until the uniform distribution defined in this sense is reached. No thermodynamic limit to a large system is needed.

Moreover, we may, starting from M, introduce a nonunitary transformation Λ to obtain a universal Lyapounov function. We write, in agreement with definition 8.5,

$$\tilde\rho = \Lambda^{-1}(L)\,\rho$$

with

$$\Lambda^{-1}(L) = \Lambda(-L)$$

By inverting L, we obtain the inverse transformation. Such transformations are well known in physics. For example, the Lorentz transformation in special relativity belongs to this class (when we invert the relative velocity between two observers, we obtain the inverse transformation). We can now see what the L-dependence of Λ(L) means. The transformation Λ depends on T, which itself is related to L through the commutation rule (equation 8.22). The inversion of L therefore also means the inversion of T:

$$\Lambda(-L) = \Lambda(-T) \qquad (8.27)$$

That ρ̃ has all the properties of a distribution function (notably, it is positive) is verified in Appendix A.* The important point is that, to obtain a universal Lyapounov functional, we need to perform a change of variables, a rescaling involving the dynamical properties of the system. Let us now turn to the second case in which we expect to find weak stability: the Poincare catastrophe.
* Moreover, as shown in Appendix A (at least for the class of dynamical systems studied there), we may choose Λ in such a way that ρ̃ satisfies a Markov chain equation. This shows that a statistical scheme may be similar to a dynamical scheme. In other words, the transition from a deterministic to a probabilistic description involves no loss of information.
To deal with the Poincare catastrophe, we start from the decomposition of the Hamiltonian into an unperturbed part and a perturbation, H = H₀ + λV, and introduce a projection operator P, together with its complement Q = 1 − P; the precise choice of P is made below. We can now decompose L, or its resolvent (L − z)⁻¹, using these operators. Simple manipulations lead to the identity

$$\Psi(z) = PLP - PLQ\,\frac{1}{QLQ - z}\,QLP$$

in which Ψ(z) is the so-called collision operator. It plays a central role in this approach. The behavior of Ψ(z) for z → 0 is of special interest because it determines the asymptotic behavior of the distribution function [i.e., the limit ρ(t) for t → ∞]. More precisely, it can be shown that traditional kinetic equations such as the Boltzmann equation (or its quantum form, the Pauli equation) can be deduced from the so-called master equation for the N-particle velocity distribution ρ₀, written in the form

$$i\,\frac{\partial \rho_0}{\partial t} = \Psi(0)\,\rho_0 \qquad (8.34)$$

in which Ψ(0) is the limit of Ψ(z) for z → 0. The existence of kinetic equations is therefore closely related to the nonvanishing of the limit Ψ(0) of the z-dependent collision operator Ψ(z).

The remarkable feature is that Ψ(0) also appears in the theory of the dynamical invariants in connection with Poincare's theorem. Suppose that the projection operator P projects on the space of the invariants corresponding to the unperturbed motion due to H₀. When we introduce the perturbation V, we hope to "continue" such an invariant into a new one, say φ, which satisfies condition 2.33 (Lφ = 0) and which we now expect to have both a P and a Q part:

$$\varphi = P\varphi + Q\varphi$$

However, using the definition of Ψ(z), it may be shown that this is possible only if the condition

$$\Psi(0)\,P\varphi = 0 \qquad (8.35)$$

is satisfied (see, e.g., Prigogine and Grecos, 1977). If Ψ(0) vanishes, equation 8.35 can evidently always be satisfied and the invariants of H₀ can be extended into invariants of H. On the other hand, when we have what was called in Chapter 2 the "Poincare catastrophe," the invariants of H₀ cannot be extended into invariants of H (except H itself, or functions of H), and this implies that Ψ(0) is different from zero.

The fact that Ψ(0) appears both in kinetic equations of the Boltzmann type (equation 8.34) and in the theory of the extension of invariants (equation 8.35) is most important. It shows that Boltzmann's kinetic equations originate not in ergodic properties (or stronger properties such as mixing or K-flows) but in the Poincare catastrophe. This could have been expected. Ergodic theory deals with the Liouville operator only as a whole. No decomposition into a part corresponding to free motion (due to the unperturbed Hamiltonian H₀) and to collisions (due to the interaction V) ever appears. There are, however, limiting situations, such as that of hard spheres, in which the potential V is singular (it is either zero or infinite!). Such cases would require that we start with ergodic theory to derive kinetic equations. In spite of much effort, this whole problem is still in a quite preliminary state. It is also interesting that the condition

$$\Psi(0) \neq 0$$
can be satisfied in very simple systems. Consider, for example, the Hamiltonian

$$H = \omega J + \lambda V \sin\alpha$$

in which ω is an unperturbed frequency and J the action variable for the unperturbed motion (λ = 0). We calculate Ψ(0) (see Appendix B) and observe that it vanishes for each finite value of λ. However, if we take the limit λ → ∞ (or ω → 0) first, then Ψ(0) does not vanish. This is not surprising: if ω = 0, it is α that becomes a new "action" variable (or constant of motion) and lies in the space orthogonal to the projection operator P. More details are given in Appendix B.

The nonvanishing of Ψ(0) is, however, only a necessary and not a sufficient condition for the construction of the operators M or Λ. We need stronger conditions, which are related to the behavior of the dispersion equation associated with the collision operator Ψ(z).
FIGURE 8.2 Excess density as a function of distance.
This equation must admit complex roots. A special method, called "subdynamics," has been developed to deal with this problem (see, e.g., Prigogine and Grecos, 1977). A brief example is given in the next section. In conclusion, it should be emphasized that the construction of the Lyapounov operator M or of the nonunitary transformation Λ does not presuppose a single mechanism on the level of the dynamical equations. Various mechanisms may be involved, the important element being that they lead to a complexity on the microscopic level such that the basic concepts involved in the trajectory or the wave function must be superseded by a statistical ensemble.
In large systems, certain collective motions decay only over macroscopic time scales. These are the so-called hydrodynamic modes, which correspond to the evolution of conserved quantities such as the number of particles, momentum, and energy (Forster 1975). This point can be illustrated by means of a system whose density is nonuniform. The excess density is represented in Figure 8.2. Because a particle cannot disappear (there are no chemical reactions), uniformity will be reached through a slow process of diffusion. The simple Brownian-motion model presented in Chapter 1 indicates that the average of the square of the displacement is proportional to time:

$$\overline{x^{2}} \sim D\,t \qquad (8.37)$$

in which D is the diffusion coefficient. We expect that the inhomogeneity will disappear when the distance travelled by the particles is of the order of the wavelength λ of the perturbation. As a result, from equation 8.37, the order of magnitude of the time necessary to destroy the density fluctuation will be

$$t_{\text{relax}} \sim \frac{\lambda^{2}}{D}$$

Therefore, this time becomes large when the wavelength increases. This type of process is among those retained in classical hydrodynamics. They are collective processes, because they involve a large number of particles (whenever the wavelength is macroscopic). These collective processes include both reversible and irreversible processes, such as wave propagation and damping. Therefore equations such as equation 8.21 are quite appropriate, because they separate these two parts.
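The λ² scaling can be read off directly from the decay of a single Fourier mode of the diffusion equation (a short sketch with an arbitrary diffusion coefficient): an excess density of wavelength λ decays as exp(−Dk²t) with k = 2π/λ, so its relaxation time grows quadratically with the wavelength.

```python
import numpy as np

D = 1.0                                   # diffusion coefficient (arbitrary units)
for lam in (1.0, 2.0, 4.0, 8.0):
    k = 2.0 * np.pi / lam                 # wave number of the excess density
    tau = 1.0 / (D * k**2)                # e-folding time of exp(-D k^2 t)
    print(f"wavelength = {lam:4.1f}   relaxation time = {tau:8.4f}")
```

Doubling the wavelength multiplies the relaxation time by four, which is why the long-wavelength collective modes dominate the slow approach to equilibrium.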
To construct the entropy operator and the transformation operator, we must introduce, as in the preceding section, the collision operator Ψ(z), but we must retain only the long-time modes in the dispersion equations. This has been done recently by Mary Theodosopulu and Alkis Grecos (1978), who have shown that the Lyapounov function (expression 8.1) then becomes precisely the macroscopic entropy, the Lyapounov function given in equation 4.30 (see Theodosopulu, Grecos, and Prigogine, 1978). Moreover, the moments of the equation of motion (8.21) are the microscopic analogs of the macroscopic hydrodynamic equations. This is most satisfactory: a bridge between microscopic and macroscopic physics has been achieved. The microscopic Lyapounov function introduced into the dynamical description acquires in this case a direct macroscopic meaning. The only assumptions necessary to obtain the linearized equations of hydrodynamics are short-range forces and small deviations from equilibrium. Similar results have long been known for dilute gases, starting from the Boltzmann equation. The interesting point is that, in agreement with the expected generality of the second law, nonequilibrium thermodynamics, at least in the linear range, can now be derived from a statistical theory independently of any assumption concerning the density of the system.

Important problems still remain unsolved. We do not yet know if the second law applies to gravitational interactions. Is the second law valid only for a given (or "slowly" varying) gravitational state? Can we include gravitation? We are at the frontier of our knowledge, but it is hoped that, as we begin to understand irreversibility in a more precise way, as a symmetry-breaking mechanism, we will soon be able to make some progress.

We now turn to a basic problem in which the formalism that has been introduced is likely to have interesting applications. As already mentioned, every measurement process includes an element of irreversibility. The measurement must increase the entropy. Thus, it can be seen that the dynamics of the apparatus must be such as to admit the operator M. But we have seen that this requires the concept of weak stability, and that in this case the trajectory in the dynamical sense is no longer an observable, because we cannot extrapolate it from our limited knowledge of phase space. The complementarity between dynamics and thermodynamics appears here in an especially striking way: either a Lyapounov function exists, in which case the system is not a "pure" dynamical one described by well-defined trajectories but only by statistical distribution functions; or no
Lyapounov function exists, and the system is described by trajectories. Nevertheless, as indicated in Appendix C, the main conclusion remains true: quantum systems for which the microscopic entropy operator M may be defined are such that the distinction between pure states and mixtures is lost.
We now look for a A such that condition 8.20 is satisfied and that in addition equation 8.41 can be written as
FIGURE 8.3
Three descriptions of a system: (A and B) the two Hamiltonian views; (C) description in terms of processes.
But this cannot be enough; there are families of star-unitary transformations-all of which satisfy condition 8.20. Which one to choose is a problem quite similar to that of the Born-Heisenberg-Jordan quantization rules mentioned in Chapter 3. The latter can be solved by considering all unitary transformations and choosing the one that leads to a diagonal form of the Hamiltonian operator. Here, too, we need a quantum rule, but a new one for choosing between the star-unitary transformations. How such a rule may be formulated will be described next; as could be expected, it will be in terms of superoperators. Remember that the Liouville operator corresponds to a commutator (see equation 3.35).
$$L\rho = \frac{1}{\hbar}\,(H\rho - \rho H), \qquad \mathscr{H}\rho = \frac{1}{2}\,(H\rho + \rho H)$$

The two quantities L and ℋ are superoperators (remember that usual operators act on wave functions, whereas L and ℋ act on operators). The average value of the energy can be written in terms of the new quantity ℋ as (see equation 3.38)

$$\langle H\rangle = \operatorname{tr} H\rho = \tfrac{1}{2}\operatorname{tr}(H\rho + \rho H) = \operatorname{tr}\mathscr{H}\rho$$
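The distinction between operators and superoperators is easy to see concretely. The following small numerical sketch (in Python with NumPy; the matrices and names are illustrative, not taken from the text) applies the commutator and anticommutator superoperators to a 2 × 2 density matrix and checks the identity above (with ħ = 1):

```python
import numpy as np

# A Hermitian "Hamiltonian" and a density matrix (both illustrative).
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])   # trace 1, positive definite

# Superoperators act on operators, not on wave functions:
L_rho = H @ rho - rho @ H                 # commutator superoperator (hbar = 1)
H_script_rho = 0.5 * (H @ rho + rho @ H)  # anticommutator superoperator

# <H> = tr(H rho) = tr of the anticommutator superoperator applied to rho
print(np.trace(H @ rho))        # 1.42
print(np.trace(H_script_rho))   # 1.42 (the same)
```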
We now apply our transformation Λ to both L and ℋ. In addition to obtaining equation 8.7, we obtain
When this is so, the E_i may be regarded as the energy levels associated with the system. We then have a most satisfactory description of the system: it evolves in accord with the second law (inequality 8.20), and yet the particles can be characterized by well-defined energies.

The method that we have used can be summarized as follows. In conventional quantum mechanics, both the energy levels (see equation 3.16) and the time evolution (equation 3.17) are determined by the same quantity, the Hamiltonian operator H_op. This is a kind of remarkable "degeneracy" characteristic of quantum mechanics. However, after the Λ transformation, the superoperator formalism allows us to obtain two different operators: Φ for the time evolution (see expression 8.2), and ℋ̃ for the determination of the energy levels. In this way, this degeneracy is lifted for systems for which a star-unitary transformation Λ leading to a Lyapounov representation can be defined. The method is quite new (Prigogine and George 1978; George et al. 1978). It has been applied successfully to a very simple model (the "Friedrichs model"), but its generality has yet to be investigated. The reasons for mentioning it here are that it avoids the technical difficulties mentioned in Chapter 3 and that we obtain strictly exponential decay (the lifetime τ is a matrix element of Φ).

But, in addition, it is the whole concept of "elementary particles" that is at stake! The classical order was: particles first, the second law later, being before becoming! It is possible that this is no longer so when we come to the level of elementary particles and that here we must first introduce the second law before being able to define the entities. Does this mean becoming before being? Certainly this would be a radical departure from the classical way of thought. But, after all, an elementary particle, contrary to its name, is not an object that is "given"; we must construct it, and in this construction it is not unlikely that becoming, the participation of the particles in the evolution of the physical world, may play an essential role.
Chapter 9

THE LAWS OF CHANGE
Einstein's Dilemma
I am writing this chapter in 1979, Einstein's centennial year. Nobody has made greater contributions to the statistical theory of matter, and more specifically to the theory of fluctuations, than Einstein. Through the inversion of Boltzmann's formula (1.10), Einstein derived the probability of a macroscopic state in terms of the entropy associated with it. This step has proved to be decisive for the whole macroscopic theory of fluctuations (of special interest near critical points). Einstein's relation is a basic element in the proof of the Onsager reciprocity relations (equation 4.20). Einstein's description of Brownian motion, as summarized in Chapter 1, was one of the first examples of "random processes." Its interest is far from being exhausted even today. The modeling of chemical reactions by Markov chains, described in Chapter 6, is an extension of the same line of thought.
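For the reader who wants the formulas behind this step, the following display (a standard compact restatement added here for clarity, not the book's numbered equations; k is Boltzmann's constant and W the number of complexions) shows the inversion: Boltzmann relates entropy to probability, and Einstein reads the relation backward as the probability of a fluctuation with entropy change ΔS:

$$S = k \ln W \quad\Longrightarrow\quad P \;\sim\; e^{\Delta S / k}$$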
BELOUSOV-ZHABOTINSKII REACTION: CHEMICAL SCROLL WAVES
Chemical waves develop when the Belousov-Zhabotinskii reagent is allowed to stand in a dish. The waves can appear spontaneously or be initiated by touching the surface with a filament, as in the photographs above. The small circles are bubbles of carbon dioxide produced by the reaction (see the section on coherent structures in chemistry and biology in Chapter 5). After the initial photograph was taken, subsequent ones were taken at 0.5, 1.0, 4.5, 5.5, 6.5, and 8.0 seconds. Photographs by Fritz Goro.
Finally, it was Einstein who first recognized the general meaning of Planck's constant h as leading to the wave-particle duality. Einstein was concerned with electromagnetic radiation, but about twenty years later de Broglie extended Einstein's relations to matter. The work of Heisenberg, Schrodinger, and others put these ideas into a mathematical framework. But, if matter is both wave and particle, the idea of the trajectory of classical determinism is lost. As a result, only statistical predictions can be made by quantum theory (see Chapter 3 and Appendix D). To the end of his life Einstein remained opposed to the idea that such statistical considerations correspond to objective features of nature. In his well-known letter to Max Born (see Einstein, 1969), he wrote:
You believe in the God who plays dice, and I in complete law and order in a world which objectively exists, and which I, in a wildly speculative way, am trying to capture. I firmly believe, but I hope that someone will discover a more realistic way, or rather a more tangible basis than it has been my lot to do. Even the great initial success of the quantum theory does not make me believe in the fundamental dice game, although I am well aware that your younger colleagues interpret this as a consequence of senility.
Why did Einstein take such a strong view concerning time and randomness? Why did he prefer intellectual isolation to any compromise in these matters? Among the most moving documents of Einstein's life is the collection of letters that he exchanged with his old friend Michele Besso (Einstein 1972). Einstein was usually very reticent about himself, but Besso was a very special case. They had known each other from an early age in Zurich, when Einstein was seventeen and Besso twenty-three. Besso took care of Einstein's first wife and their children in Zurich when Einstein was working in Berlin. Although the affection between Besso and Einstein remained deep, their interests diverged with the years. Besso became more and more involved in literature and philosophy, in the very meaning of human existence. He knew that, to obtain a response from Einstein, he had to include problems of a scientific nature, but his interest was more and more elsewhere. Their friendship lasted their whole lives, Besso
having died only a few months earlier than Einstein in 1955. It is mainly the last part of the correspondence, between 1940 and 1955, that is of interest to us here. There Besso returned again and again to the problem of time. What is irreversibility? How does it relate to the basic laws of physics? And patiently Einstein answered again and again: irreversibility is an illusion, a subjective impression, coming from exceptional initial conditions. Besso remained dissatisfied. His last scientific paper was a contribution to the Archives des Sciences published in Geneva. At the age of eighty, he presented an attempt to reconcile general relativity and the irreversibility of time. Einstein was not happy with this attempt. "You are on a gliding ground," he wrote. "There is no irreversibility in the basic laws of physics. You have to accept the idea that subjective time, with its emphasis on the now, has no objective meaning." When Besso passed away, Einstein wrote a moving letter to his widow and son: "Michele has preceded me a little in leaving this strange world. This is not important. For us who are convinced physicists, the distinction between past, present, and future is only an illusion, however persistent."

Einstein believed in the god of Spinoza, a god identified with nature, a god of supreme rationality. In this conception there is no place for free creation, for contingency, for human freedom. Any contingency, any randomness that seems to exist is only apparent. If we think that our actions are free, this is only because we are ignorant of their true causes.

Where do we stand today? I believe that the main progress that has been accomplished is that we begin to see that probability is not necessarily associated with ignorance, that the distance between deterministic and probabilistic descriptions is less great than most contemporaries of Einstein, and Einstein himself, believed. Poincare (1914) had already pointed out that, when we throw dice and use probabilities to predict the outcome, we do not mean that the concept of trajectories does not apply. Rather, the type of system is such that in each interval of initial conditions, as small as we want, the same number of trajectories go to each side of the dice. This is a simple version of the problem of dynamic instability that has been discussed repeatedly (see Chapters 2, 3, 7, and 8). Before returning to it once again, let us take an overview of the laws of change that have been described.
Time and Change

In his book The Nature of the Physical World (1958, p. 75), Arthur Eddington introduced a distinction between "primary laws," controlling the behavior of single particles, and "secondary laws," such as the principle of the increase of entropy, which would be applicable only to collections of atoms or molecules. Eddington fully recognized the importance of entropy. He wrote (p. 103): "From the point of view of philosophy of science the conception associated with entropy must, I think, be ranked as the great contribution of the nineteenth century to scientific thought. It marked a reaction from the view that everything to which science need pay attention is discovered by a microscopic dissection of objects." How can "primary" laws coexist with "secondary" ones? "One would not be surprised," Eddington wrote (p. 98), "if in the reconstruction of the scheme of physics, which the quantum theory is now pressing on us, secondary laws become the basis and primary laws are discarded."

Certainly quantum theory plays a role, because it forces us to give up the idea of classical trajectories. But from the point of view of the relation with the second law, the concept of instability, which has been repeatedly discussed, seems to be of fundamental importance. The structure of the equations of motion with "randomness" on the microscopic level then emerges as irreversibility on the macroscopic level. In this sense, the meaning of irreversibility was already anticipated by Poincare (1921), who wrote:

In conclusion, using ordinary language, the law of conservation of energy (or the principle of Clausius) can have only one significance, which is that there is a property common to all the possibilities; but on the deterministic hypothesis there is only a single possibility, and the law has no longer any meaning. On the indeterministic hypothesis, on the other hand, it would have a meaning, even if it were taken in an absolute sense; it would appear as a limitation imposed upon freedom. But these words remind me that I am digressing and am on the point of leaving the domains of mathematics and physics.

Poincare's confidence in a basic deterministic description was too firmly established for him to consider seriously a statistical description of nature. The situation is quite different for us. Many years after the foregoing passage was written, our confidence in the deterministic description of nature has been shaken, both at the microscopic level and at the macroscopic one. We no longer recoil in horror from such bold conclusions! Moreover, we see that in a sense our point of view reconciles the conclusions of Boltzmann and Poincare. Boltzmann, the daring revolutionary physicist, whose thought was based on an extraordinary physical intuition, guessed the type of equation that could describe the evolution of matter on the microscopic level and still display irreversible processes. Poincare, with his deep mathematical insight, could not be satisfied with only intuitive arguments, but he clearly saw the only direction in which a solution could be found. It is my belief that the methods summarized in this book (see Chapters 7 and 8 and the Appendixes) constitute the link between Boltzmann's great intuitive work and Poincare's requirement of mathematization. This mathematization leads us to a new concept of time and irreversibility, to which we now turn.
Time and Entropy as Operators

Much of Chapter 7 dealt with some of the most significant attempts made in the past to define entropy on the microscopic level, with emphasis on Boltzmann's fundamental contribution to this subject, culminating in his discovery of the H-function (equation 7.7). However, quite apart from other remarks, Boltzmann's H-theorem could not, in line with observations presented by Poincare, be considered to be "derived" from dynamics. Boltzmann's kinetic equation, on which the derivation of the H-theorem is based, does not share the symmetry of classical dynamics (see the section titled Boltzmann's Kinetic Theory in Chapter 7 and the section titled A New Transformation Theory in Chapter 8). In spite of its historical importance, it can at most be considered a phenomenological model. Ensemble theory does not lead us further, even when extended by associating entropy with a microscopic phase function (in classical mechanics) or a Hermitian operator (in quantum mechanics). These negative conclusions are described in the sections titled Gibbs Entropy and The Poincare-Misra Theorem in Chapter 7. This left us with very few possibilities, short of accepting the view that
irreversibility results from mistakes or from supplementary approximations added to classical or quantum mechanics. However, another, radically different approach has now emerged: the idea of associating macroscopic entropy (or a Lyapounov function) with a microscopic entropy operator called M.* This is a momentous step: we were accustomed, in classical mechanics, to considering observables to be functions of coordinates and momenta. Yet the introduction of the Liouville operator L in both classical and quantum ensemble theory (see Chapters 2 and 3) has prepared us for this new step, which is of a quite different nature. Indeed, ensemble theory was considered to be an "approximation," whereas the "basic" theory was in terms of trajectories or wave functions. With the introduction of the operator M, the situation becomes quite different. It is the description in terms of bundles of trajectories, or distribution functions, that becomes basic; no further reduction to individual trajectories or wave functions can be performed. The physical meaning of entropy and time as operators is discussed in Chapters 7 and 8, as well as in Appendixes A and C. (See especially the introduction and the section titled An Extended Complementarity Principle in Chapter 7, and the sections titled Irreversibility and the Formalism of Classical and Quantum Mechanics, The Construction of the Entropy Operator and the Transformation Theory, and Particles and Dissipation: A Non-Hamiltonian Microworld in Chapter 8.)

Because operators were first introduced in physics through quantum mechanics, there remains in the minds of most scientists a close relation between quantization involving Planck's constant h and the appearance of operators. The association of operators with physical quantities has, however, a broader meaning, quite independent of quantization. It means basically that for some reason the classical description in terms of trajectories has to be given up, either because of instability and randomness on the microscopic level (see Appendix A) or because of quantum "correlations" (see Appendix D).

* From a historical point of view, it is interesting that the nonunitary transformation Λ, which leads from the usual Liouville equation to the kinetic equations (see the section titled A New Transformation Theory in Chapter 8), was found first. It was only recently realized that this means that a supplementary operator M exists in the original (Hamiltonian) representation and that, therefore, in this sense the usual dynamical description was not complete.
FIGURE 9.1
FIGURE 9.2
For classical mechanics we may present the situation in the following way. The usual description (Figure 9.1A) is in terms of trajectories or orbits generated by Hamilton's equations (2.4). The other description (Figure 9.1B) is in terms of distribution functions (2.8), their motion being determined by the Liouville operator. These two descriptions can be different only if we cannot at each moment go from one description to the other. The physical reasons for this are discussed in the section on weak stability in Chapter 2. Experiments performed with an arbitrary but finite accuracy lead us only to the identification of some finite region of phase space where the system may be located. The question is then whether we can perform, at least in principle, a transition-limiting process, as indicated schematically in Figure 9.2, from this region to a point P, to a δ-function corresponding to a well-defined orbit. This transition-limiting process is related to the question of weak stability discussed in Chapter 2. It becomes impossible to perform when we have a variety of trajectories in each region of phase space, however small. Then the microscopic description becomes so "complex" that we cannot go beyond it in terms of distribution functions. At present, we know of two types of dynamical systems for which this is so: systems with sufficiently strong mixing properties and systems presenting the Poincare catastrophe (see Chapters 2 and 7 and Appendixes A and B). In fact, almost "all" dynamical systems, with the exception of a few "school" examples, belong to these categories. We shall return to this question in the next section.

One might think that such "natural limits" of classical or quantum physics would lead to a decrease of their predictive power. In my opinion, the reverse is true. We can now make statements about the evolution of distribution functions that go beyond what can be said about individual trajectories. New concepts appear.
Among these new concepts, some of the most interesting are the microscopic entropy operator M and the time operator T. Here we are dealing with a second time, an internal time, quite different from the time that in classical or quantum mechanics simply labels trajectories or wave functions. We have seen that this operator time satisfies a new uncertainty relation with the Liouville operator L (see equation 8.22 and Appendixes A and C). We may define the averages ⟨T⟩, ⟨T²⟩ through the bilinear forms
$$\langle T\rangle = \operatorname{tr}\,\rho^{\dagger}T\rho, \qquad \langle T^{2}\rangle = \operatorname{tr}\,\rho^{\dagger}T^{2}\rho$$
Interestingly enough, the " ordinary " time-the label of dynamics-then becomes an average over the new operator time. This is in fact a consequence of the uncertainty relation (8.22), which implies that
$$\frac{d\langle T\rangle}{dt} = \frac{d}{dt}\,\operatorname{tr}\bigl[(e^{-iLt}\rho)^{\dagger}\,T\,e^{-iLt}\rho\bigr] = \operatorname{tr}\,\rho^{\dagger}\rho = \text{constant}$$
With an appropriate normalization we may take this constant equal to one. We see therefore that
$$dt = d\langle T\rangle \tag{9.3}$$
In other words, macroscopic time is simply the average over the new operator time. In this perspective, the usual time concept is recovered only when T becomes a trivial operator such that (in classical mechanics)
Then "age " is independent of the form of the distribution in phase space.
On the contrary, the new concept implies that age depends on the distribution itself and is therefore no longer an external paramcter, a simple label as in the conventional formulation (see Appendix A). We see how deeply the new approach modifies our traditional view of time, which emerges now as a kind of average over individual times" of the ensemble.
"
Levels of Description

For a long time, the absolute predictability of classical mechanics, or the physics of being, was considered to be an essential element of the scientific picture of the physical world. It is quite remarkable that over the three centuries of modern science (it seems indeed legitimate to consider 1685, the year Newton presented his Principia to the Royal Society, as the birth date of modern science) the scientific picture has shifted toward a new, more subtle conception in which both deterministic and stochastic features play an essential role. Let us consider only the statistical formulation of the second law of thermodynamics by Boltzmann, in which the concept of probability played an essential role for the first time. We then have quantum mechanics, which preserves determinism but in the framework of a theory that deals with wave functions having a probabilistic content. In this way, probabilities appeared for the first time in the basic, microscopic description. This evolution is still continuing. We find essential stochastic elements not only in the theory of bifurcations on the macroscopic level (see Chapter 5), but also in the microscopic description as provided even by classical mechanics (see Chapters 7 and 8). As we have seen, these new elements lead finally to new concepts of time and entropy, the consequences of which have yet to be explored.

It is remarkable that classical dynamics, statistical mechanics, and quantum theory can all be discussed starting from the ensemble point of view introduced by Einstein and Gibbs. When the transition from an ensemble to a single trajectory can no longer be performed, we obtain different theoretical structures. Classical dynamics has been discussed herein from this point of view, especially the transition to statistical mechanics as a result of weak stability. It has also been mentioned that the existence of the universal constant ℏ introduces correlations in phase space and prevents the transition from ensembles to single trajectories (further details are given in Appendixes C and D). The results are depicted in the following scheme:

    Classical dynamics (t ↔ −t)   --ℏ (coherence)-->   Quantum theory (t ↔ −t)
       (trajectories)                                     (wave functions)
            |                                                  |
       instability                                        instability
            ↓                                                  ↓
            Statistical theory-irreversibility (t ↛ −t)
                              |
                              ↓
            Macroscopic physics (limited by bifurcations)

We begin to be able to coordinate the various levels of description repeatedly discussed in this book. However, a few words of caution are necessary. For example, we may transform the deterministic description (in terms of a Liouville equation) into a Markov chain (see Appendix A) for a class of strongly unstable systems to which the baker transformation belongs. Is this so in more general situations involving a weaker form of instability, such as the mixing property considered in Chapter 2? Another example is quantum theory (see Appendix C). Quantum mechanical instability theory is still in its infancy. Supplementary classifications and new points of view are likely to
emerge in the future. Yet our present scheme is not empty and brings some unifying features into the structure of theoretical physics.

A few comments seem appropriate here on the dynamical complexity associated with instability. In classical dynamics, some simple situations that are time reversible (t ↔ −t) can at least be conceived of. Whenever chemical processes (and a fortiori biological processes) are considered, this becomes impossible, because chemical reactions are always associated, nearly by definition, with irreversible processes. Moreover, measurements, which extend our sensory perceptions, necessarily involve some element of irreversibility. Therefore, the two formulations of the laws of nature (one for which t ↔ −t and the other for which t ↛ −t) are equally fundamental. We need both. It is true that we may consider the world of trajectories (or of wave functions) to be the fundamental one. From this perspective, new formulations are obtained when supplementary assumptions are introduced. But we can also consider irreversibility to be a basic element of our description of the physical world. From this perspective, the world of trajectories and wave functions corresponds, on the contrary, to idealizations of great importance, but idealizations that lack essential elements and cannot be studied in isolation. We have arrived at a kind of self-consistent picture, which will now be described in a little more detail.
Past and Future

Once we can add a Lyapounov function to the dynamics, future and past can be distinguished, exactly as in macroscopic thermodynamics, in which the future is associated with a larger entropy. But again some caution is necessary. We may construct a Lyapounov function that increases monotonously with the "flow" of time or another one that decreases. In more technical terms, the transition from the situation represented in Figure 9.1A, which corresponds to a dynamical group, to the one represented in Figure 9.1B, which corresponds to a semigroup, can be performed in two ways: in one description equilibrium is reached in the "future," and in the other in the "past." In other words, the time symmetry of dynamics can be broken in two ways; however, how to distinguish between them is a difficult question.

As emphasized in the preceding section, life even in its simplest form presupposes a distinction between past and future. Monocellular organisms such as amoebas move from media poor in nutrients to media rich in nutrients. Even such organisms anticipate the future through signals received from their environment. When we study the time-reversible laws of dynamics, we make a distinction between past and future: between, say, predicting the position of the moon and calculating what its position was in the past. The distinction between past and future is a kind of primitive concept that in a sense precedes scientific activity. We may, however, include this primitive concept in a self-consistent scheme, as shown in the following diagram:

    Observer (distinction between future and past)
        ↓
    Dynamics
        ↓
    Irreversibility
        ↓
    Dissipative structures
        ↳ (back to the observer)

We start with the observer, a living organism who makes the distinction between the future and the past, and we end with dissipative structures, which contain, as we have seen, a "historical dimension." Therefore, we can now recognize ourselves as a kind of evolved form of dissipative structure and justify in an "objective" way the distinction between the future and the past that was introduced at the start. Again, there is in this view no level of description that we can consider to be the fundamental one. The description of coherent structures is no less "fundamental" than is the behavior of simple dynamical systems. Note that the transition from one level to the other involves "symmetry breaking": the existence of irreversible processes on the microscopic level as described through kinetic equations violates the symmetry of the canonical equations (see Chapter 8), and dissipative structures may in turn break the symmetries of space-time.
The very possibility of such a self-consistent scheme implies the existence of nonequilibrium processes and therefore a picture of a physical universe that for some cosmological reasons provides the necessary type of environment. Although the distinction between reversible and irreversible processes is a problem of dynamics and does not involve cosmological arguments, the possibility of life, the activity of the observer, cannot be dissociated from the cosmological environment in which we happen to be. However, the questions, What is irreversibility on the cosmic scale? Can we introduce an entropy operator in the framework of a dynamical description in which gravitation plays an essential role? are formidable ones. I prefer to confess my ignorance.
An Open World
The basis of the vision of classical physics was the conviction that the future is determined by the present, and therefore a careful study of the present permits the unveiling of the future. At no time, however, was this more than a theoretical possibility. Yet in some sense this unlimited predictability was an essential element of the scientific picture of the physical world. We may perhaps even call it the founding myth of classical science. The situation is greatly changed today. It is remarkable that this change results basically from our better understanding of the limitations of measurement processes because of the necessity to take into account the role of the observer. This is a recurrent theme in most of the basic ideas that originated with the development of physics in the twentieth century. It was already present in Einstein's analysis of space-time (1905) in which the limitation of the speed of propagation of signals to velocities smaller than the velocity of light in a vacuum plays such an essential role. It is certainly not logically inconsistent to suppose that signals may be transmitted with infinite speed, but this Galilean space-time concept seems to conflict with a whole host of experimental information that has been gathered through the years. The incorporation of the limitation of our way of acting on nature has been an essential element of progress.
The role of the observer in quantum mechanics has been a recurrent theme in the scientific literature of the past fifty years. Whatever the future developments are, this role is essential. The naive realism of classical physics, which assumed that properties of matter were "there" independently of the experimental device, had to be revised. Again, the developments described in this book point in a similar direction. Theoretical reversibility arises from the use of idealizations in classical or quantum mechanics that go beyond the possibilities of measurements performed with any finite precision. The irreversibility that we observe is a feature of theories that take proper account of the nature and limitations of observation.

At the origin of thermodynamics we find "negative" statements expressing the impossibility of certain transformations. In many textbooks, the second law of thermodynamics is expressed as the postulate that it is impossible to transform heat into work using a single thermostat. This negative statement belongs to the macroscopic world; in a sense, we have followed its meaning down to the microscopic level, where it becomes, as we have seen, a statement about the observability of the basic conceptual entities of classical or quantum mechanics. As in relativity, a negative statement is not the end of the story: it leads in turn to new theoretical structures.

Have we lost essential elements of classical science in this recent evolution? The increased limitation of deterministic laws means that we go from a universe that is closed, in which all is given, to a new one that is open to fluctuations, to innovations. For most of the founders of classical science, even for Einstein, science was an attempt to go beyond the world of appearances, to reach a timeless world of supreme rationality: the world of Spinoza. But perhaps there is a more subtle form of reality that involves both laws and games, time and eternity. Our century is a century of explorations: new forms of art, of music, of literature, and new forms of science. Now, nearly at the end of this century, we still cannot predict where this new chapter of human history will lead, but what is certain at this point is that it has generated a new dialogue between nature and man.
APPENDIXES
Appendix A

TIME AND ENTROPY OPERATORS FOR THE BAKER TRANSFORMATION
The following discussion is an attempt to explain how the time operator T (see equation 8.22) and a microscopic entropy operator M may be associated with the baker transformation introduced in Chapter 8. The results given here summarize a recent paper by Misra, Prigogine, and Courbage,¹ in which all proofs, as well as various generalizations of the results to other systems, can be found. Other aspects related to the baker transformation that are not treated here can be found in important papers by Lebowitz, Ornstein, and others.²,³,⁴

Phase space Ω will be the unit square in the plane. As shown in Figure 8.11, the baker transformation B sends the point ω = (p, q) of Ω into Bω, with
$$B\omega = \begin{cases} (2p,\ q/2) & 0 \le p < \tfrac12 \\ (2p - 1,\ (q+1)/2) & \tfrac12 \le p \le 1 \end{cases} \tag{A.1}$$
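Because transformation B is entirely explicit, its action is easy to explore numerically. The following Python sketch (mine, not from the text; the function name and the sample values are illustrative) iterates B and anticipates the binary-digit shift property discussed below:

```python
import random

def baker(p, q):
    """One step of the baker transformation B on the unit square."""
    if p < 0.5:
        return 2 * p, q / 2
    return 2 * p - 1, (q + 1) / 2

# Fragmentation: start with points filling the half square 0 <= q < 1/2
# and watch repeated application of B cut it into thinner and thinner strips
# (plot `points` at each step to see the strips of Figures A.1 and A.2).
points = [(random.random(), random.random() / 2) for _ in range(1000)]
for step in range(5):
    points = [baker(p, q) for p, q in points]

# Shift property (anticipating the binary representation introduced below):
# p = 0.1101... and q = 0.0101... in binary; one step of B moves the leading
# digit of p onto the front of q, so B acts as a shift on the digit sequence.
p, q = 0.8125, 0.3125
print(baker(p, q))   # (0.625, 0.65625) = (0.101..., 0.10101...) in binary
```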
Transformation B describes a discrete process that takes place at regular time intervals and tends to progressively fragment an arbitrary given
surface element. As an example, let us apply transformation B to the half square 0 ≤ q < ½. The result is shown in Figure A.1. When the baker transformation is repeated many times, the initial half square is broken into smaller and smaller rectangles, as shown in Figure A.2. After some time, the fragmentation becomes so fine that, whatever the precision of observation (supposed only to be finite), the distribution will appear uniform. At this stage the system has reached its equilibrium (microcanonical) distribution.

The baker transformation admits a remarkable representation as a "Bernoulli shift."⁶ To understand this relation, we write the coordinate p in its binary expansion,

$$p = \sum_{i=1}^{\infty} \frac{u_i}{2^i} \tag{A.2}$$

and similarly for q (with digits u_i, i ≤ 0). Here the u_i take the values 0 or 1. A point ω in Ω is therefore represented by the double sequence {u_i} with i = 0, ±1, ±2, .... Using specific examples, one can easily verify that to Bω there corresponds the sequence {u′_i} in which u′_i = u_{i+1}. We see clearly that the baker transformation induces a shift in the sequence. It is mainly for this reason that one speaks of a "Bernoulli shift."

Consider a simple orthonormal basis for all square integrable functions on the "phase space." Let X be the function defined on {0, 1} by

$$X(0) = -1, \qquad X(1) = +1$$

and set X_n(ω) = X(u_n). The value of X_n(ω) at each point of Ω therefore depends solely on the nth digit in the binary expansion of the coordinates p, q. In addition, we define for each finite set of integers (n₁, n₂, ..., n_N) = n the product function X_n(ω):
$$X_{\mathbf n}(\omega) = X_{n_1}(\omega)\,X_{n_2}(\omega)\cdots X_{n_N}(\omega) \tag{A.3}$$

in which X_∅(ω) = 1 corresponds to the microcanonical ensemble. One can verify that this set of functions indeed forms an orthonormal basis. This means, as currently used in quantum mechanics (see the section on quantization rules in Chapter 3), that

$$(X_{\mathbf n},\ X_{\mathbf n'}) = \delta_{\mathbf n,\mathbf n'} \tag{A.4}$$

in which δ_{n,n′} is equal to 1 when n = n′ (i.e., n₁ = n′₁, ..., and n_N = n′_N) and is otherwise equal to 0. Also, the functions X_n(ω) together with X_∅ form a complete set: every (square integrable) function on Ω can be expanded in terms of a suitable combination of these functions. In the following, we shall use the scalar product of two square integrable functions f₁, f₂ defined by

$$(f_1,\ f_2) = \int_\Omega f_1^*(\omega)\, f_2(\omega)\, d\omega$$

The baker transformation can also be expressed in terms of an operator U acting on functions ψ(ω) (as explained in textbooks, see Arnold and Avez,⁵ U is a unitary operator):

$$(U\psi)(\omega) = \psi(B\omega)$$

As a result we have, using equation A.3,

$$U X_{\mathbf n} = X_{\mathbf n + 1}$$

in which n + 1 is the set of integers (n₁ + 1, n₂ + 1, ..., n_N + 1). The baker transformation therefore leads to a simple shift in the basis functions.

We now introduce the characteristic function φ_Δ of a domain Δ in Ω. This is the function that takes the value 1 on Δ and vanishes elsewhere in Ω. We can express such characteristic functions in terms of the basis X_n already introduced. As an example, let us consider X₁(ω). By definition, X₁(ω) = X(u₁) is a function taking the value −1 when u₁ = 0 (i.e., for 0 ≤ p < ½) and +1 when u₁ = 1 (i.e., for ½ ≤ p < 1). Therefore X₁(ω) takes the value −1 on the left half of the square, Δ₁⁰, and +1 on the right half, Δ₁¹. It is now easy to write the characteristic functions of the "atoms" of this partition (Δ₁⁰, Δ₁¹) of the square; for example, using equation A.3 it is easy to check that (see also Figures A.3 and A.6)

$$\phi_{\Delta_1^0} = \tfrac12\,(1 - X_1), \qquad \phi_{\Delta_1^1} = \tfrac12\,(1 + X_1) \tag{A.5}$$

Similar expressions are valid for characteristic functions corresponding to finer partitions; further examples are reproduced in Figure A.3.

We shall now examine the evolution of an arbitrary domain in Ω and link this evolution to the idea of "weak stability" discussed repeatedly in this book (see the section on weak stability in Chapter 2). We may obtain the characteristic function of the transformed domain B⁻¹Δ from equation A.5, using the operator U; we therefore have

$$\phi_{B^{-1}\Delta} = U\phi_\Delta$$

For example, the domain Δ₁¹ is transformed after n applications of B⁻¹ into B⁻ⁿΔ₁¹ = Δ₁₊ₙ¹ (refer to Figure A.3 for the shapes of these domains and their characteristic functions in terms of the basis X_i). For the general case, we may consider an arbitrarily small "atom" of Ω of width Δp = (½)ⁿ and height Δq = (½)ᵐ. The set of these 2ⁿ × 2ᵐ "atoms" forms a partition P_{n,m} of Ω. We may write the characteristic functions of such "atoms," Δ_{n,m}, in terms of the X_i as follows:
$$\phi_{\Delta_{n,m}} = \prod_i \tfrac12\,(1 \pm X_i)$$

with one factor for each of the digits that fix the atom. Such atoms can be chosen as small as we want by increasing n and m. The interesting point is that after (m + 1) applications of B⁻¹ the atom Δ_{n,m} is split into two atoms; this stems from equation A.9.
The two "atoms" so obtained are symmetric and separated by 2"'" subdivisions as shown in Figure A.4. The same result can be obtained by successive applications of B instead of B- '. We see therefore that, even if at the initial time the system is found in some arbitrary small region of the phase space, it will evolve in time to distinct domains separated in phase space and we can only estimate the probabilities of finding the system in these various domains. In other
FIGURE A.3
Examples showing the shapes of the surfaces Δᵢʲ and their intersections Δ_{i,i'}^{j,j'}, in which the surface Δᵢʲ is the set of points ω such that uᵢ = j, and Δ_{i,i'}^{j,j'} is the set of points ω such that uᵢ = j, u_{i'} = j′, and so forth. Also given are the values of Xᵢ(ω), i = 0, 1, 2, on these surfaces and their characteristic functions φ.
FIGURE A.4
Splitting of the atom Δ_{n,m}.
words, each region (however small) contains different types of "trajectories" leading to the various domains. This is the very definition of weak stability.

After these preliminary considerations, let us now introduce the basic operator T corresponding to "age" or to an "internal" time. By definition, it satisfies (for continuous transformations) the "uncertainty relation" (see equation 8.22)

$$i\,[L,\ T] = I$$

The unitary transformation U associated with the discrete baker transformation may be written formally as

$$U = e^{-iL\tau}$$

in which τ corresponds to the time interval between two transformations (we may take τ = 1). The uncertainty relation for L then induces the corresponding relation for the discrete case,

$$[T,\ U] = \tau\,U \tag{A.11}$$

In the case of the baker transformation, it is easy to construct the explicit expression of T (for more details, see Misra, Prigogine, and Courbage¹). We have seen that U, when applied to the basis functions {X_n}, shifts X_n into X_{n+1}. It is therefore not surprising that the X_n are the eigenvectors of the conjugate operator T. Moreover, for each X_n the corresponding eigenvalue is the maximum of the n_i (recall that n is a finite set of integers n₁, ..., n_N). For example, the eigenvalue corresponding to X_n is n, that corresponding to X₁X₂ is 2, and so forth. As a result, T has the spectral form

$$T = \sum_{n=-\infty}^{+\infty} n\,E_n \tag{A.12}$$

Here E_n is a projection on the subspace, in the orthocomplement of the microcanonical ensemble, generated by the functions X_n, X_iX_n (i < n), X_iX_jX_n (i, j < n), and so forth.

The eigenvalues of T (the numerical values of the operator age) are all the integers from −∞ to +∞. This has a simple physical meaning. If we consider, for example, an eigenfunction corresponding to age 2, such as X₂, the application of U transforms it into X₃, which is an eigenfunction corresponding to age 3, and so forth. Not all distributions have a well-defined age: for example, a superposition of X₁ and X₂ has no well-defined age. But we can always introduce an "average age" for a distribution ρ, or more precisely for the excess ρ̃ = ρ − 1 of ρ with respect to the equilibrium distribution ρ = 1. (Note that X₁, being negative in a part of Ω, is not a distribution function, whereas 1 + X₁, being nonnegative everywhere, is a distribution function.) In conformity with the quantum mechanical definitions, the "average age" of ρ (or ρ̃) will be given by

$$\langle T\rangle = \frac{(\tilde\rho,\ T\tilde\rho)}{(\tilde\rho,\ \tilde\rho)} = \sum_{n=-\infty}^{+\infty} n\,p_n, \qquad p_n = \frac{(E_n\tilde\rho,\ E_n\tilde\rho)}{(\tilde\rho,\ \tilde\rho)} \tag{A.14}$$

We see, for example, that the "age" of ρ_Δ = 1 + X₁ is well defined: from equation A.14 we get ⟨T⟩ = 1. The coefficient p_n represents the probability of finding the system at age n. Indeed, we may speak of probability because we have

$$p_n \ge 0, \qquad \sum_{n=-\infty}^{+\infty} p_n = 1 \tag{A.19}$$

The similarity with the rules of quantum mechanics is striking. However, there is a simple explanation for this: the uncertainty relation (equation A.11) between the time evolution and the age T. We may also define the fluctuation in age and other characteristics of the statistical distribution of ages.

The microscopic entropy operator M follows directly. It is the sum of two terms. One is closely related to the eigenfunctions of T: the corresponding eigenprojectors are the E_n, and its eigenvalues form a decreasing sequence λ_n² of real numbers tending to zero for n → +∞ and to 1 for n → −∞. In addition, M contains the projection operator P₀ on the microcanonical ensemble:

$$M = P_0 + \sum_{n=-\infty}^{+\infty} \lambda_n^2\,E_n \tag{A.21}$$
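The operators U and T act on a concrete function space, so the shift relation underlying the age eigenfunctions can be checked numerically. In the following Python sketch (illustrative, not from the paper; the indexing convention for the digits u_i is an assumption consistent with the shift described earlier), we verify that X_n evaluated after one baker step equals X_{n+1} at the original point:

```python
import random

def baker(p, q):
    if p < 0.5:
        return 2 * p, q / 2
    return 2 * p - 1, (q + 1) / 2

def X(n, p, q):
    """Basis function X_n(omega): +1 or -1 according to the digit u_n.
    Positive n indexes binary digits of p; n <= 0 indexes digits of q
    (an assumed convention matching the doubly infinite sequence {u_i})."""
    digit = int(p * 2 ** n) % 2 if n >= 1 else int(q * 2 ** (1 - n)) % 2
    return 2 * digit - 1

# U X_n = X_{n+1}: evaluating X_n after one baker step is the same as
# evaluating X_{n+1} before it, so the "age" of an eigenfunction grows by one.
for _ in range(5):
    p, q = random.random(), random.random()
    print(X(2, *baker(p, q)) == X(3, p, q))   # True
```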
FIGURE A.5
The behavior of Ω(Uⁿρ) for a normalized state ρ.
There is a whole set of entropy operators, according to the choice of the sequence {λ_n²}. Let us verify that definition A.21 indeed leads to the correct behavior of the Lyapounov function Ω(ρ), defined as

$$\Omega(\rho) = (\rho,\ M\rho)$$

The basic property is the monotonous time variation of Ω (it is a matter of definition to require that it increase or decrease monotonously with time). Each step decreases the value of the Lyapounov variable (Figure A.5). The monotonous variation of Ω is the consequence of the existence of the time operator T, which itself results from the "weak stability" of the dynamic transformation. No appeal to probabilistic considerations is necessary.

Several important results and properties follow. First, the nonunitary transformation Λ introduced in the text can be explicitly constructed. It corresponds to the square root of M and has the spectral form
$$\Lambda = P_0 + \sum_{n=-\infty}^{+\infty} \lambda_n\,E_n$$
This transformation induces a contraction semigroup corresponding to the generator Θ = ΛUΛ⁻¹. Indeed, we now have a monotonously decreasing norm (see equation A.23).
It can be shown that this transformation takes a state into a state (it preserves positivity and normality). Another important property of this transformation is that it delocalizes the distribution. This means that, if ρ has a nonvanishing value only in a region Δ, then Λρ has a nonvanishing density almost everywhere. In the Λ picture, the approach to equilibrium corresponds to the cancellation of the local excess with respect to equilibrium in each region. We can see this property in the simple example of the distribution ρ = 1 − X₁ (Figure A.6). This distribution has the value 2 on the left half of the square and vanishes on the right, whereas Λρ = 1 − λ₁X₁ is positive everywhere, because λ₁ is strictly positive and less than one; here −λ₁X₁ is the excess with respect to the equilibrium distribution, and it tends to zero under the repeated action of the semigroup.

Other concepts could also be introduced, such as "measures" associated with partitions of the square, but such details will not be dealt with here. This simple example shows how geometry, dynamics, and irreversibility can be linked in the framework of an extension of classical dynamics involving supplementary observables, such as T or M, represented by suitable operators. We also see how the requirement of a universal Lyapounov function can be satisfied using nonunitary transformations (the "square root" of M), which themselves depend on the dynamics of the system. The analogy with the basic ideas of general relativity (the use of geometrical concepts to express the laws of dynamics in a simple way) is striking.
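The claim that the fragmentation eventually looks uniform at any finite precision of observation can also be illustrated numerically. The sketch below (illustrative; the grid size and the coarse-grained H-function are my choices, not constructions from the paper) follows an ensemble initially confined to half the square and prints a coarse-grained H-function, which falls to zero once the strips produced by the baker transformation become finer than the observation grid:

```python
import math, random

def baker(p, q):
    if p < 0.5:
        return 2 * p, q / 2
    return 2 * p - 1, (q + 1) / 2

N = 8                       # observation accuracy: an N x N grid of cells
pts = [(random.random() / 2, random.random()) for _ in range(200000)]

for step in range(10):
    counts = [0] * (N * N)
    for p, q in pts:
        counts[int(p * N) * N + int(q * N)] += 1
    # Coarse-grained H-function: sum of f ln(f / f_eq) over occupied cells.
    h = sum(c / len(pts) * math.log(N * N * c / len(pts)) for c in counts if c)
    print(step, round(h, 3))            # starts near ln 2, drops to ~0
    pts = [baker(p, q) for p, q in pts]
```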
References
1. B. Misra, I. Prigogine, and M. Courbage, Proceedings of the National Academy of Sciences, U.S.A. 76 (1979): 3607; Physica 98A (1979): 1.
2. J. L. Lebowitz, Proceedings of I.U.P.A.P. Conference on Statistical Mechanics (Chicago, 1971).
3. D. S. Ornstein, Advances in Mathematics 4 (1970): 337.
4. J. G. Sinai, Theory of Dynamical Systems, vol. 1 (Denmark: Aarhus University, 1970).
5. V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (New York: Benjamin, 1968).
6. P. Shields, The Theory of Bernoulli Shifts (Chicago: University of Chicago Press, 1973).
Appendix B

RESONANCES AND KINETIC DESCRIPTION

As noted in Chapter 8 (see the sections titled Construction of the Entropy Operator and the Transformation Theory and Entropy Operator and the Poincare Catastrophe), the microscopic operator M can exist in two cases: one corresponds to systems with strong mixing properties (an example is given in Appendix A); the other corresponds to the "Poincare catastrophe." In the second case, an important role is played by the collision operator ψ(z), defined in 8.33. In this appendix, a case will be presented in which the collision operator can be calculated explicitly. Although the example to be considered is somewhat schematic, it shows the meaning of resonances, so important in Poincare's approach (see Chapter 3), and their relation to the limit of ψ as z → 0, which corresponds to long-time behavior and which is the basic quantity appearing in kinetic theory. To see this more clearly, the section on the entropy operator and the Poincare catastrophe in Chapter 8 must be developed further.
The decomposition of the resolvent of L can be written in terms of the resolvent of QLQ:

$$(L - z)^{-1} = [P + \mathscr{C}(z)]\,[\Psi(z) - z]^{-1}\,[P + \mathscr{D}(z)] + \mathscr{P}(z) \tag{B.1}$$

in which

$$\mathscr{C}(z) = -(QLQ - z)^{-1}QLP, \qquad \mathscr{D}(z) = -PLQ\,(QLQ - z)^{-1}, \qquad \mathscr{P}(z) = Q\,(QLQ - z)^{-1}Q$$

and

$$\Psi(z) = PLP + \psi(z), \qquad \psi(z) = PLQ\,\mathscr{C}(z)$$

with ψ the collision operator, defined in 8.33.
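The decomposition B.1 is, at bottom, the block-matrix (Schur complement) identity for the inverse of a partitioned operator. The following NumPy sketch (an illustration under my conventions, not the book's derivation) checks that the P block of the resolvent equals the inverse of Ψ(z) − z built from the collision operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                                  # total dimension, P-subspace size
L = rng.normal(size=(n, n))
L = (L + L.T) / 2                            # a Hermitian stand-in for L
z = 0.3 + 0.7j

L11 = L[:k, :k]                              # PLP block
L12, L21 = L[:k, k:], L[k:, :k]              # PLQ and QLP blocks
QLQ = L[k:, k:]

# Collision operator psi(z) = PLQ C(z), with C(z) = -(QLQ - z)^(-1) QLP.
psi = -L12 @ np.linalg.inv(QLQ - z * np.eye(n - k)) @ L21

R = np.linalg.inv(L - z * np.eye(n))         # full resolvent (L - z)^(-1)
print(np.allclose(R[:k, :k],                 # its P block equals ...
                  np.linalg.inv(L11 + psi - z * np.eye(k))))  # (Psi(z)-z)^(-1): True
```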
The creation operator, 𝒞(z), "creates" correlations (vectors in the Q subspace) out of the vacuum (vectors in the P subspace). Similarly, 𝒟 and 𝒫 are the destruction and propagation operators.¹ Laplace inversion of equation B.1 gives the solution of the Liouville equation in the time variable:
$$|\rho(t)\rangle = \frac{1}{2\pi i}\int_{B} dz\; e^{-izt}\,(L - z)^{-1}\,|\rho(0)\rangle \tag{B.6}$$

in which ⟨J α|ρ(t)⟩ is the density-in-phase and B is a line parallel to the real z axis, directed from right to left and situated above all singularities of the integrand. The P projection of equation B.6 obeys the exact non-Markovian master equation, obtained by Resibois and myself some years ago:²
in which G and F are the inverse Laplace transforms of ψ and 𝒟. As can be seen in equation B.7, the evolution of P|ρ(t)⟩ depends on the values of P|ρ(t)⟩ at all earlier times, as well as on the initial correlations. However,
in the long-time limit t → ∞, equation B.7 can, under suitable conditions, be replaced by the following Markovian kinetic equation, obtained by Resibois and myself:
$$i\,\frac{\partial}{\partial t}\,P|\rho(t)\rangle = \psi(i0^+)\,\Omega\,P|\rho(t)\rangle \tag{B.8}$$

which is equation 8.34 corrected by the operator Ω. The nature of ψ(i0⁺) and of 𝒞(i0⁺) places restrictions on the properties of the dynamical invariants, as was seen in equation 8.35. Further, by pointing out that

$$\lim_{t\to\infty}\frac{1}{t}\int_0^t d\tau\, e^{-i\tau L} = E = \lim_{z\to i0^+} z\,(z - L)^{-1}$$
in which E is the projector onto the null space of L (the subspace of the dynamical invariants), Stey showed how the invariants, the infinite-time averages, and the value of the limit of ψ as z → i0⁺ are related.³,⁴ He showed that, when [z − ψ(z)]⁻¹ possesses a simple pole as singularity at z = 0, and

$$P|g\rangle = \lim_{t\to\infty}\frac{1}{t}\int_0^t d\tau\; P\,e^{-i\tau L}\,P|f\rangle \tag{B.10}$$
FIGURE B.1
The nonzeroness of ψ(i0⁺) and the relation between the initial states, P|f⟩, and the P projection of the corresponding infinite-time averages, P|g⟩.
then to P|g⟩ there corresponds only one P|f⟩ if and only if ψ(z) → 0 as z → i0⁺. Figure B.1 is a schematic representation of the relation between the initial value of the above time average, P|f⟩, and its final value, P|g⟩, as a function of ψ(i0⁺). Equivalent to this result is: ψ(i0⁺) = 0 if and only if zero is the only P-subspace vector orthogonal to each invariant.³

Consider the example studied by Stey³ that was mentioned earlier. The system has one degree of freedom, and the state (J, α) in action-angle variables evolves according to Hamilton's equations,

$$\dot J = -\frac{\partial H}{\partial \alpha}, \qquad \dot\alpha = \frac{\partial H}{\partial J} \tag{B.11}$$

and the Hamiltonian is taken as

$$H = \omega J + 2V\cos\alpha \tag{B.12}$$
From the solution of equation B.11, we see that two types of flow are possible. First, when ω ≠ 0, each bounded region of phase fluid in the (J, α) plane undergoes a periodic deformation while performing an oscillatory motion in a band parallel to the α axis (see Figure B.2). The second kind of motion appears in the limit ω → 0, in which each region flows parallel to the J axis and stretches itself out continuously along the line of flow, becoming infinitely long as t → ∞ (see Figure B.3). In the limit ω → 0, the distance between points in the (J, α) plane diverges as t → ∞. To analyse this system from the point of view of the Liouville equation (2.12), we first look at the matrix elements of the Liouville operator (2.13). By definition of L, we have
The delta function in equation B.15 expresses the restriction of transitions to those on the energy surface. Laplace inversion of equation B.15 gives:
FIGURE B.2
Oscillatory flow in phase space (ω ≠ 0).
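The two flow regimes of Figures B.2 and B.3 follow directly from the exact solution of the equations of motion. In the Python sketch below, the explicit form H = ωJ + 2V cos α is the assumed Hamiltonian stated above (consistent with β = 2V/ω used later), so the formulas are illustrative rather than quoted:

```python
import math

def flow(J, alpha, w, V, t):
    """Exact solution (J(t), alpha(t)) of Hamilton's equations B.11 for the
    assumed Hamiltonian H = w*J + 2*V*cos(alpha)."""
    if w != 0.0:
        # alpha rotates uniformly; J oscillates with recurrence period 2*pi/w
        return (J - (2 * V / w) * (math.cos(alpha + w * t) - math.cos(alpha)),
                alpha + w * t)
    # w = 0: alpha is frozen; J drifts at a rate set by alpha, so a region of
    # phase fluid stretches out indefinitely parallel to the J axis.
    return J + 2 * V * t * math.sin(alpha), alpha

print(flow(0.0, 0.1, 1.0, 0.5, 2 * math.pi))  # (~0.0, 0.1 + 2*pi): periodic return
print(flow(0.0, 0.1, 0.0, 0.5, 100.0))        # (~9.98, 0.1): unbounded drift
```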
The series of delta functions in equation B.16 describes the motion of the angle variable in phase space. To study the distribution of J values over a statistical ensemble, we introduce the P, Q decomposition of equations 8.29 and B.l and define P through its matrix elements:
$$\langle J\,\alpha\,|\,P\,|\,J'\,\alpha'\rangle = \frac{1}{2\pi}\,\delta(J - J') \tag{B.17}$$
⟨J α₁|P|ρ(t)⟩ thus comprises the density-in-action variables. An explicit evaluation of the collision operator in this case has been given:³ PLP = 0 here, Ψ = ψ, and
FIGURE B.3
Nonoscillatory unbounded flow in phase space (ω = 0).
$$\psi_\kappa(z) = \cdots \tag{B.18}$$

By straightforward diagonalization of L, we obtain (see Stey³) the matrix elements of the resolvent of L; for ω ≠ 0, each Fourier component n of the angle dependence contributes a resonance denominator,

$$\frac{e^{in(\alpha - \alpha')}}{n\omega - z}, \qquad n = 0, \pm 1, \pm 2, \ldots \tag{B.19}$$

with coefficients involving the Bessel functions J_ν(βκ), in which β = 2V/ω and J_ν(x) is the Bessel function of the first kind. Looking at the small-z behavior of the collision-operator eigenvalues, we obtain

$$\psi_\kappa(z) \to 0$$
as z → i0⁺ and ω ≠ 0, provided that βκ is not equal to a zero of J₀(x). The Fourier transform g̃(κ) of the infinite-time average of a state ⟨J α|Pρ(t)⟩, whose initial value has the Fourier transform f̃(κ), is, from equations 8.32, B.18, and B.19, given by
Thus, the one-to-one correspondence between f̃(κ) and g̃(κ) mentioned earlier no longer holds: for all suitable initial states the infinite-time average is the same, namely zero. This is of course in complete agreement with the nonzero value of the asymptotic collision operator of equation B.24. Moreover, in the ω → 0 limit, equation B.22 tends to
which is equation B.10 in this case. It can be seen by explicit calculation of the resolvent of QLQ that the conditions associated with the results of equation B.10 are satisfied here. Indeed, equation B.21 shows immediately that there is a one-to-one correspondence between g̃(κ) and f̃(κ) if and only if J₀(βκ) ≠ 0, which is so if and only if ψ(i0⁺) = 0, as explicitly seen above. The solution of equation B.7 can be obtained by inverse Laplace transformation, once 𝒟(z), [z − ψ(z)]⁻¹, and the resolvent of QLQ are computed. It can be written in the form
$$\langle J,\ \alpha_1\,|\,P\,|\,\rho(t)\rangle = \cdots \tag{B.26}$$
Upper bounds on the rate of relaxation, which is now possible, have been obtained by application of the Hölder inequality, for all q > 2 and ω = 0,
in which ⟨J₁ α₁|Pρ(0)⟩ = h(J₁, α₁) and h is 2π-periodic in α₁. Thus, we see that, when ω ≠ 0 and ψ(i0⁺) = 0, the above solution is T-periodic in time, in which T = 2π/ω is the Poincare recurrence time. On the other hand, in the limit ω → 0, the Poincare recurrence time is pushed to infinity, and equation B.19 shows that

$$\lim_{\omega\to 0}\psi_\kappa(z) = \cdots$$

in which C is a constant, and so

$$\lim_{z\to i0^+}\,\lim_{\omega\to 0}\psi_\kappa(z) = -2i\,|V\kappa| \tag{B.24}$$

This result suggests that we recall an important point concerning the flow of a region of phase fluid in this model: each region must return to its initial shape after a period, 2π/ω. When ω = 0, each region of nonzero area, no matter how small, becomes infinitely extended parallel to the J axis as t → ∞. Thus, we see that, when ψ(i0⁺) ≠ 0 here, the idea of a deterministic trajectory in phase space can be useful operationally in the long-time limit only when the initial state of the system is known exactly. In this connection, the description by a kinetic equation therefore becomes of special interest. The analogy with mixing systems should also be stressed.

A comparison of the solution of the kinetic equation (B.8) with the exact long-time decay of the right-hand member of equation B.26 has been made. Taking an initial distribution infinitely peaked as a function of J, and limiting ourselves to the simplest approximation, Ω = 1 for Ω in equation B.8, we have equation 8.34, which, using the value given in equation B.24, gives³
From equation B.26, we have the following result:
We see that equations B.29 and B.30 are in good agreement for t → ∞ and that the initial distribution, although infinitely peaked as a function of J, leads to a uniform distribution of action as t → ∞. This does not occur when the initial α distribution is infinitely peaked about a certain value: that is, when a trajectory is considered.
References
1. I. Prigogine, Non-equilibrium Statistical Mechanics (New York: Interscience, 1962); I. Prigogine, C. George, F. Henin, and L. Rosenfeld, Chem. Scripta 4 (1973): 5.
2. I. Prigogine and P. Resibois, Physica 27 (1961): 629; I. Prigogine and A. Grecos, Proceedings of the International School of Physics "Enrico Fermi" (1977).
3. G. C. Stey, Physics Letters 69A (1978): 151.
4. I. Prigogine, A. Grecos, and C. George, Proceedings of the National Academy of Sciences 73 (1976): 1802.
Appendix C

ENTROPY, MEASUREMENT, AND THE SUPERPOSITION PRINCIPLE IN QUANTUM MECHANICS

Pure States and Mixtures

As noted in Chapter 3, a fundamental distinction is made in quantum mechanics between pure states (wave functions) and mixtures, represented by density matrices. Pure states occupy a privileged position in quantum mechanics, somewhat analogous to that of orbits in classical mechanics. As indicated by the Schrodinger equation (see equations 3.17 and 3.18), pure states are transformed into other pure states during the time evolution. Moreover, observables are defined as Hermitian operators mapping vectors of the Hilbert space into itself. These operators also preserve pure states. The basic laws of quantum mechanics can thus be formulated without ever invoking the density-matrix description of states corresponding to mixtures. The use of that description is considered to be only a matter of practical convenience or approximation. The situation is similar to that in classical dynamics, in which the basic element corresponding to the pure state is the orbit or the trajectory of a dynamical system (see, in particular, Chapters 2 and 7).
In Chapter 3, the question was asked: Is quantum mechanics complete? We have seen that one of the reasons for asking this question, in spite of the striking successes of quantum mechanics in the past fifty years, is the difficulty of incorporating the measurement process (see the section titled The Measurement Problem in Chapter 3). We have seen that the measurement process transforms a pure state into a mixture and therefore cannot be described by the Schrodinger equation, which transforms a pure state into another pure state. In spite of much discussion (see the beautiful account by d'Espagnat¹), this problem is far from being solved. According to d'Espagnat (p. 161), "The problem [of measurement] is considered as non-existent or trivial by an impressive body of theoretical physicists and as presenting almost insurmountable difficulties by a somewhat lesser but steadily growing number of their colleagues." I do not wish to take too strong a position in this controversy because, for the present purpose, the measurement process is simply an illustration of the problem of irreversibility in quantum mechanics. Whatever position one takes, the fundamental distinction between pure states and mixtures and the privileged position of the pure states in the theory must be given up. Thus, the problem is to provide a fundamental justification for this loss of distinction. It is a remarkable fact that the introduction of the entropy operator M (see the section on irreversibility and the formalism of classical and quantum mechanics in Chapter 8) as a fundamental object of the theory entails just this loss of distinction between pure states and mixtures. The object of this appendix is to sketch a proof of this statement. For more details, the reader is referred to a soon-to-be-published paper by Misra, Courbage, and myself,² on which the present appendix is based.
Entropy Operator and Generator of Motion

Why must we go beyond the standard formulation of quantum mechanics, in which the Hamiltonian operator is the generator of motion according to the Schrodinger equation (3.17)? Suppose that we have (see equation 7.27; for convenience, the sign of D has been changed)

$$i\,[H,\ M] = D \ge 0 \tag{C.1}$$
We can then view D as the microscopic entropy production operator. It seems natural to suppose that the measurements of M and D are mutually compatible. As is well known, this implies
$$[M,\ D] = 0 \tag{C.2}$$
Equation C.2 can be considered a "sufficient condition" for all that follows. It could be weakened, but it is not necessary to go into greater detail here. The basic reason that conditions C.1 and C.2 cannot be satisfied by an operator M is that the Hamiltonian operator H plays a dual role in quantum mechanics (see the section on particles and dissipation in Chapter 8). In addition to generating the time evolution, it represents the energy of the system. Hence it must be bounded from below.
To see the incompatibility of the positivity of the Hamiltonian H with conditions C.1 and C.2, consider the identity
$$\frac{d}{dt}\,\bigl(e^{-iMt}\psi,\ H\,e^{-iMt}\psi\bigr) = -i\,\bigl(e^{-iMt}\psi,\ [H,\ M]\,e^{-iMt}\psi\bigr) = -(\psi,\ D\psi) \tag{C.4}$$
The last equality follows from the fact that M and D commute, so that $e^{iMt}\,D\,e^{-iMt} = D$. Integration of both sides of equation C.4 from 0 to t now yields
$$\bigl(e^{-iMt}\psi,\ H\,e^{-iMt}\psi\bigr) = (\psi,\ H\psi) - t\,(\psi,\ D\psi) \tag{C.5}$$

Because H is bounded from below, the left-hand side cannot decrease without limit as t → ∞; hence (ψ, Dψ) must vanish for every ψ, and no nontrivial entropy production operator D is compatible with this framework.
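As a side remark not in the original: in a finite-dimensional Hilbert space the incompatibility is immediate, because the trace of a commutator vanishes,

$$\operatorname{tr} D = \operatorname{tr}\bigl(i[H,\ M]\bigr) = i\,\bigl[\operatorname{tr}(HM) - \operatorname{tr}(MH)\bigr] = 0,$$

and a nonnegative operator with zero trace is zero. The integration argument of equations C.4 and C.5 plays the role of this trace argument for realistic, infinite-dimensional systems, where the trace need not exist.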
APPENDIX C
Let us therefore investigate the existence of M in conjunction with the time evolution as generated by the Liouville operator. because
An important advantage of using the Liouvillian formulation of quantum dynamics is that the generator L of the time-translation group is no longer physically constrained to be bounded from below. In fact, if the spectrum of H extends from 0 to + m, the spectrum of L is the entire real line. The possibility of defining M as a "superoperator" (see the section titled Irreversibility and the Formalism of Classical and Quantum Mechanics in Chapter 8) that satisfies the relations
However, if a self-adjoint operator T satisfying equation C.6 exists, an entropy operator M satisfying equations C.l and C.2 can be obtained by simply taking M to be a monotonic function of T:
and [ M , D]= 0 is thus not excluded by the argument given earlier. As in classical mechanics, supplementary conditions (see the section on ergodic systems in Chapter 2) must be imposed; M cannot exist in both of the following cases:
The impossibility of defining the entropy operator M , the nonexistence of a t e e operator in quantum mechanics, and the problem of interpreting and justifying the time-energy uncertainty relation are thus linked together. Their common origin is the fact that in the usual formulation of quantum mechanics the generator H of the time-translation group is identical with the energy operator of the system. To be able to define the entropy operator M, it is thus necessary to overcome this degeneracy. The simplest way of achieving this is to go to the so-called Liouvillian formulation of (quantum) dynamics (see the section titled Shrodinger and Heisenberg Representations in Chapter 3). The basic object in this forrnulation is the group describing the time evolution of the density operators. As noted in Chapter 3, the generator of the time-translation group is now the Liouvillian operator L defined by the equation (see equations 3.35 and 3.36)
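The construction of M from T can be checked by a short computation (a sketch added for illustration; the functional calculus is applied formally, since the operators involved are unbounded). Let T obey equation C.6 and let f be smooth and monotonically increasing:

\[
M = f(T):\qquad i[L, M] \;=\; i[L, f(T)] \;=\; f'(T)\,\bigl(i[L, T]\bigr) \;=\; f'(T) \;\ge\; 0 ,
\]
\[
D = f'(T), \qquad [M, D] = [\,f(T),\, f'(T)\,] = 0 .
\]

Both conditions are satisfied at once, because M and D are functions of the single operator T.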
A general superoperator can be built from factorizable superoperators of the form introduced in equation C.9,

    (A₁ × A₂)ρ = A₁ρA₂    (C.9)

in which A₁ and A₂ are usually self-adjoint operators. A first remark is that, if M were factorizable, it could be written in the simpler form

    Mρ = AρA    (C.10)

using general properties such as the preservation of Hermiticity (see note 2). Such a factorizable operator would preserve pure states; it would in fact simply transform |ψ⟩ into A|ψ⟩ (see equation 3.30). The nonfactorizability of M is therefore a very important property. In fact, if Mρ is given by equation C.10, it is not difficult (see note 2) to verify that the commutation relations (C.8) for M lead to a relation for A, equation C.11, in which c is a real number. The three cases that arise (corresponding to c = 0, c > 0, and c < 0) can be ruled out separately as follows:
1. c = 0. The preceding argument now shows that [H, A] = D₁ = 0. Together with equation C.10, this implies [L, M] = 0: the entropy production associated with M vanishes identically, and M is trivial.

2, 3. c > 0 or c < 0. These cases can be studied by means of a formal analogy with the second section of this appendix: the positive operator D₁ plays the role of H, A plays the role of M, and cA² that of D. We may therefore directly conclude that A² = 0, and hence M = 0 as given by equation C.10.

The preceding considerations thus lead us to the following conclusions. For infinite quantal systems, there exists the possibility of enlarging the algebra of observables to include an operator M representing nonequilibrium entropy.
The operator M can be defined, however, only as a nonfactorizable superoperator. The inclusion of an entropy operator (necessarily nonfactorizable) among the observables thus requires that the pure states lose their privileged position in the theory and that the pure and mixed states be treated on an equal basis. Physically, this means that, for systems having entropy as an observable, the distinction between pure and mixed states ceases to be operationally meaningful and there are limitations on the possibility of realizing coherent superpositions of quantum states (see the section titled The Measurement Problem in Chapter 3).

Evidently, this conclusion, which has been reached as a logical consequence of our theory of the entropy operator, should be further elucidated by an analysis of the physical reason for the loss of the distinction between pure and mixed states. The situation for classical systems has been discussed repeatedly (see Chapters 3, 7, 8, and 9). We have seen that there are at present two mechanisms known to lead to instability of motion, which in turn prohibits the "observation" of well-defined trajectories. It can be expected that the physical reason for the loss of the distinction between pure and mixed states of quantal systems with an entropy operator is some suitable quantum analogue of the instability discussed earlier. Again, as in classical mechanics, there may be more than one mechanism involved. One may be the existence of quantum analogues of classical systems with strong mixing properties; the other, the existence of the Poincare catastrophe for quantum systems (see the section on the entropy operator and the Poincare catastrophe in Chapter 8). One simple example for classical systems in which the asymptotic collision operator ψ(z) for z → 0 does not vanish was discussed in Appendix B. Similar situations exist in the quantum case and play an essential role in the derivation of kinetic equations (see equation 8.34). A rigorous mathematical formulation of the quantum instability mechanism is work for the future.

Nevertheless, it is satisfying that the second principle of thermodynamics, when interpreted as a dynamical principle in terms of the existence of the operator M, requires us to give up the distinction between pure and mixed states in precisely the situation in which this distinction is expected to be physically unobservable.
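The role of nonfactorizability can be seen in one line (a sketch; A is a Hilbert-space operator as in equation C.10, taken self-adjoint):

\[
(A \times A)\,|\psi\rangle\langle\psi| \;=\; A\,|\psi\rangle\langle\psi|\,A \;=\; |A\psi\rangle\langle A\psi| ,
\]

so a factorizable superoperator maps one-dimensional projectors into one-dimensional projectors: pure states remain pure, and only a nonfactorizable M can connect pure states with mixtures.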
References
1. B. d'Espagnat, Conceptual Foundations of Quantum Mechanics, 2d ed. (Menlo Park, California: Benjamin, 1976).
2. B. Misra, I. Prigogine, and M. Courbage, Proceedings of the National Academy of Sciences, in press.
3. See M. Jammer, The Philosophy of Quantum Mechanics (New York: Wiley-Interscience, 1974), p. 141.
APPENDIX D

Coherence and Randomness in Quantum Theory

In Chapter 9, emphasis was on the important role of instability in the foundations of statistical physics. Appendix A demonstrated the possibility of obtaining stochastic (Markov) processes, starting with deterministic dynamics, through an appropriate nonunitary "change of representation" that does not entail any loss of information. It is possible to define this change of representation provided the dynamics of the system has a suitably high degree of instability. This proves that a probabilistic theory can still be "complete" and "objective." The viewpoint adopted in Appendix A is that, when a (classical) dynamical system is sufficiently unstable, we can no longer speak of trajectories and we are forced to deal with a basically different approach: the evolution of distribution functions (or bundles of trajectories) in phase space. The transition from distribution functions to a single point in phase space cannot be performed under such conditions (see the section on time and change in Chapter 9).
In quantum theory, coordinates and momenta retain their meaning, and measurements can delimit, in the appropriate phase space, a region in which the system is located. The question may then be asked: Are there other circumstances, such as those related to the formulation of quantum theory, in which the transition from phase-space distribution functions to individual trajectories is also impossible? Usually another attitude is adopted: the transition to a single trajectory is performed before the question of the relation between classical and quantum mechanics is raised. However, the concepts of a classical trajectory and of a quantum wave function are so different that it is difficult to compare them in a meaningful way.

The type of problem met with here is quite different from that encountered in classical theory. There we deal with unstable, "disordered" systems; in fact, so disordered that it becomes possible to define Lyapounov functions closely related to entropy. In contrast, the transition from classical to quantum mechanics does not affect the basic reversibility of classical dynamics (see Chapter 3). Moreover, as mentioned in Chapter 3 in the section on the decay of unstable particles, all finite systems of quantum mechanics have a discrete energy spectrum and therefore a purely periodic motion. Quantum theory leads in this sense to a more "coherent" behavior of motion than does classical theory. This can be considered a strong physical argument against any attempt to understand quantum theory in terms of "hidden" variables or in terms of traditional stochastic models. On the contrary, this increased coherence seems to indicate that quantum theory should correspond to an "overdetermined" classical theory. Alternatively, it seems that quantum effects lead to correlations between neighboring classical trajectories in phase space. This is what the old Bohr-Sommerfeld image of phase cells of area h expresses in an intuitive way.

To express this idea in a new, precise way,¹ we must reintroduce a basic distinction between operators and superoperators.² This distinction was already discussed in Chapter 8 in the section titled Irreversibility and the Formalism of Classical and Quantum Mechanics. It has also been mentioned (equation 3.35 and Appendix C) that the Liouville operator is a factorizable superoperator. By definition, a factorizable superoperator F may be written as A₁ × A₂ (see equation C.9) with the meaning
    (A₁ × A₂)ρ = A₁ρA₂    (D.1)

In particular, the quantum Liouville operator takes the factorizable form

    L = (1/ℏ)(H × I - I × H)    (D.2)
The factorizability of quantum superoperators is a fundamental property, without classical analogue. For example, the classical Liouville operator L_cl is also a superoperator, because it acts on the distribution function (which is a function of the two sets of variables q and p and is therefore the analogue of a continuous matrix). However, L_cl is expressed in terms of a Poisson bracket (see equation 2.13) and is not factorizable. Establishing a simple correspondence between classical and quantum superoperators will afford a source of insight into the structure of quantum mechanics.
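To make the contrast concrete, the classical generator may be written out explicitly (a sketch added for illustration; the sign convention is the one assumed here and should be checked against equation 2.13):

\[
L_{cl}\,\rho \;=\; -\,i\{H, \rho\} \;=\; -\,i\left(\frac{\partial H}{\partial p}\frac{\partial \rho}{\partial q} \;-\; \frac{\partial H}{\partial q}\frac{\partial \rho}{\partial p}\right).
\]

Every term couples multiplication by a function of q and p with a differentiation of ρ. A factorizable superoperator built from classical multiplication operators would, by contrast, reduce to a simple multiplication, since phase-space functions commute; it could never reproduce the derivatives appearing in L_cl, which is the intuitive content of the statement that L_cl is not factorizable.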
In a classical system with a single degree of freedom, four basic superoperators (two of which are multiplication superoperators) may be introduced:

    Q,   P,   i ∂/∂P,   -i ∂/∂Q    (D.3)
To emphasize that we consider them to be superoperators acting on distribution functions, we use uppercase letters. The multiplication by i is to obtain Hermitian superoperators. Obviously these four quantities satisfy two independent noncommutation rules: one for P and i(∂/∂P), the other for Q and -i(∂/∂Q) (see the section on operators and complementarity in Chapter 3). In contrast, classical trajectory theory is entirely built in terms of functions of Q and P and does not admit any noncommutation relation. Quantum theory therefore occupies an intermediate position, because it leads to a single noncommutation relation between the quantum mechanical operators q_op, p_op. In this sense quantum mechanics is "more deterministic" than classical ensemble theory and less so than classical trajectory theory.

What is the meaning of the classical noncommutation rules? This question is studied in detail in a recent paper by Cl. George and myself.¹ As could be expected from the analogy with quantum mechanics, the following correspondence is obtained:
    Superoperators       Eigendistributions
    Q                    well-defined value of Q
    P                    well-defined value of P
    i ∂/∂P               uniform distribution in P
    -i ∂/∂Q              uniform distribution in Q    (D.4)

The classical noncommutation rules therefore have a simple meaning: for example, a distribution function cannot simultaneously correspond to a well-defined value of Q and be independent of Q. The classical uncertainty relations thus express only a "logical" inconsistency. However, nothing prevents us from having, as an example, a distribution function that corresponds to well-defined values of both Q and P and thus to a classical trajectory.

Let us now introduce the four factorizable superoperators of quantum mechanics, expressed in terms of the operators q_op, p_op that satisfy the Heisenberg uncertainty relation:

    q_op × I,   I × q_op,   p_op × I,   I × p_op    (D.5)

There is a remarkable isomorphism between the classical superoperators (D.3) and the quantum superoperators (D.5). The commutation rules are identical, and we may write the correspondence

    (1/2)(q_op × I + I × q_op) ↔ Q        (1/ℏ)(q_op × I - I × q_op) ↔ i ∂/∂P
    (1/2)(p_op × I + I × p_op) ↔ P        (1/ℏ)(p_op × I - I × p_op) ↔ -i ∂/∂Q    (D.6)

This correspondence permits us to attribute "similar" physical meaning to these sets of quantities. But that means that, using linear combinations and definition D.1, we have the correspondence

    q_op × I ↔ Q + (ℏ/2)(i ∂/∂P)        I × q_op ↔ Q - (ℏ/2)(i ∂/∂P)
    p_op × I ↔ P + (ℏ/2)(-i ∂/∂Q)       I × p_op ↔ P - (ℏ/2)(-i ∂/∂Q)    (D.7)
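The isomorphism of the commutation rules can be verified directly from definition D.1; the following display is a sketch of one such check, added here for illustration (the remaining pairs work in the same way):

\[
[\,q_{op} \times I,\; p_{op} \times I\,]\rho = [q_{op}, p_{op}]\,\rho = i\hbar\,\rho ,
\qquad
[\,I \times q_{op},\; I \times p_{op}\,]\rho = \rho\,[p_{op}, q_{op}] = -\,i\hbar\,\rho ,
\]

so that

\[
\Bigl[\tfrac{1}{2}\bigl(q_{op}\times I + I\times q_{op}\bigr),\; \hbar^{-1}\bigl(p_{op}\times I - I\times p_{op}\bigr)\Bigr]
\;=\; \frac{1}{2\hbar}\,(i\hbar + i\hbar) \;=\; i ,
\]

exactly the commutator of the classical pair Q and -i ∂/∂Q.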
This result seems most interesting. The Hilbert-space operators p_op, q_op cannot be expressed in terms of the quantities P, Q defined on a trajectory. They also involve superoperators acting on distribution functions. We now see clearly why the pure state of classical mechanics can no longer be realized in the Hilbert space: the coupling of the classical superoperators through the universal constant ℏ prevents the realization of eigenensembles corresponding to well-defined values of both Q and P. If we were to try to go from a continuous distribution function to a single point (a δ-function), the derivatives in expression D.7 would tend to infinity and we would obtain states of infinite energy. This precisely expresses the idea of correlations in phase space induced by ℏ.
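The divergence can be quantified by a standard uncertainty estimate (added for illustration, not part of the original argument): for any quantum state,

\[
\langle (\Delta p)^2 \rangle \;\ge\; \frac{\hbar^2}{4\,\langle (\Delta q)^2 \rangle} ,
\]

so that squeezing the ensemble toward a sharp value of Q, that is, letting ⟨(Δq)²⟩ → 0, makes the kinetic energy grow without bound, in agreement with the behavior of the derivative terms in expression D.7.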
We see that the ensemble point of view, which has often been advocated in the past,³ clarifies the position of quantum mechanics with respect to classical theory. It is not the appearance of noncommuting operators that is characteristic of quantum theory. This feature can always be incorporated in classical ensemble theory. The new and unique feature is the reduction of the four basic superoperators (expression D.5) to the two combinations given in expression D.7. This is possible only through the existence of a universal constant having the physical dimensions of an action (momentum × coordinate). As a result, the concepts of momentum and coordinate in the Hilbert space are no longer independent, and quantum theory appears as an overdetermined classical theory in which the motion of neighboring points cannot be prescribed independently. Although there will never be a "classical" theory of quantum mechanics, a close analogy to the physical situation would be the classical motion of a string, in which the motion of neighboring points can also no longer be prescribed independently; if we did prescribe it, this would lead to violent distortions of the string and to states of arbitrarily high energy.
Concluding Remarks

As mentioned earlier in this appendix, we may also introduce noncommuting operators and a classical complementarity principle into the framework of classical ensemble theory. However, this principle has a trivial meaning: we cannot make contradictory statements about the distribution function ρ. The new feature in quantum mechanics is that the type of ensembles that can be constructed is limited by ℏ. In addition, we can no longer take the limit to single trajectories, and therefore the complementarity principle acquires a fundamental status in quantum mechanics.

It should be emphasized that at no point have we appealed to perturbations due to the observer or to other subjectivistic elements in this approach to quantum theory. As in statistical mechanics, the transition from ensembles to trajectories is prevented by a change in the structure of the phase space. In statistical mechanics, it is the instability of motion that plays the critical role (see Chapter 9 and Appendixes A and C). Here the structure of the dynamic operators describing quantum ensembles leads to a theory that is both complete and probabilistic.

In conclusion, the difficult questions that were at the core of the celebrated Einstein-Bohr debate on the foundations of quantum theory (see The Philosophy of Quantum Mechanics³) begin to take new shapes: it is indeed possible to consider probabilistic theories that are complete and objective. Far from being the expression of ignorance, the probabilistic element may be the expression of new, fundamental features in the structure of the dynamic theory.

References

1. This discussion closely follows a recent paper by C. George and I. Prigogine, Physica 99A (1979): 369.
2. I. Prigogine, Cl. George, F. Henin, and L. Rosenfeld, Chemica Scripta 4 (1973): 5-32.
3. For references to the work of Wigner, Moyal, Bopp, and others, see M. Jammer, The Philosophy of Quantum Mechanics (New York: Wiley, 1974), which includes an extensive bibliography.
REFERENCES
Allen, P. M. 1976. Proc. Natl. Acad. Sci. U.S. 73(3):665.
Allen, P. M., Deneubourg, J. L., Sanglier, M., Boon, F., and de Palma, A. 1977. Dynamic urban models. Reports to the Department of Transportation, under contracts TSC-1185 and TSC-1460.
Allen, P. M., and Sanglier, M. 1978. J. Soc. Biol. Struct. 1:265-280.
Arnold, L. 1973. Stochastic differential equations. New York: Wiley-Interscience.
Arnold, L., Horsthemke, W., and Lefever, R. 1978. Z. Physik B29:367.
Babloyantz, A., and Hiernaux, J. 1975. Bull. Math. Biol. 37:637.
Balescu, R. 1975. Equilibrium and non-equilibrium statistical mechanics. New York: Wiley-Interscience.
Balescu, R., and Brenig, L. 1971. Relativistic covariance of non-equilibrium statistical mechanics. Physica 54:504-521.
Barucha-Reid, A. T. 1960. Elements of the theory of Markov processes and their applications. New York: McGraw-Hill.
Bellemans, A., and Orban, J. 1967. Phys. Letters 24A:620.
Bergson, H. 1963. L'évolution créatrice. In Oeuvres, Éditions du Centenaire. Paris: PUF.
Bergson, H. 1972. Durée et simultanéité. In Mélanges. Paris: PUF.
Bohr, N. 1928. Atti Congr. Intern. Fis. Como, 1927, vol. 2. [Also Nature suppl. (1928) 121:78.]
Bohr, N. 1948. Dialectica 2:312.
Boltzmann, L. 1872. Wien. Ber. 66:275.
Boltzmann, L. 1905. Populäre Schriften. Leipzig. (English translation published in 1974 by Reidel, Dordrecht/Boston.)
Bray, W. 1921. J. Am. Chem. Soc. 43:1262.
Briggs, T., and Rauscher, W. 1973. J. Chem. Educ. 50:496.
Caillois, R. 1976. Avant-propos à la dissymétrie. In Cohérences aventureuses. Paris: Gallimard.
Chandrasekhar, S. 1943. Rev. Mod. Phys. 15(1).
Chandrasekhar, S. 1961. Hydrodynamic and hydromagnetic stability. Oxford: Clarendon.
Chapman, S., and Cowling, T. G. 1970. Kinetic theory of non-uniform gases. 3d ed. Cambridge University Press.
Clausius, R. 1865. Ann. Phys. 125:353.
Currie, D. G., Jordan, T. F., and Sudarshan, E. C. G. 1963. Rev. Mod. Phys.
d'Alembert, J. 1754. Dimension, article in l'Encyclopédie, vol. IV.
De Donder, Th. 1936. L'affinité. Revised edition by P. Van Rysselberghe. Paris: Gauthier-Villars.
d'Espagnat, B. 1976. Conceptual foundations of quantum mechanics. 2d ed. Reading, Massachusetts: Benjamin.
Dewel, G., Walgraef, D., and Borckmans, P. 1977. Z. Physik B28:235.
Dirac, P. A. M. 1958. The principles of quantum mechanics. 4th ed. Oxford: Clarendon. (1st ed., 1930.)
Ehrenfest, P., and Ehrenfest, T. 1911. Begriffliche Grundlagen der statistischen Auffassung in der Mechanik. Encykl. Math. Wiss. 4:4. (English translation, The conceptual foundations of statistical mechanics, published in 1959 by Cornell University Press, Ithaca.)
Eigen, M., and Schuster, P. 1978. Naturwissenschaften 65:341.
Eigen, M., and Winkler, R. 1975. Das Spiel. München: Piper.
Einstein, A. 1917. Zum Quantensatz von Sommerfeld und Epstein. Verhandl. Deut. Phys. Ges. 19:82-92.
Einstein, A., and Besso, M. 1972. Correspondance 1903-1955. Paris: Hermann.
Einstein, A., and Born, M. 1969. Correspondance 1916-1955. Paris: Seuil. (See letter of September 7, 1944.)
Einstein, A., Lorentz, H. A., Weyl, H., and Minkowski, H. 1923. The principle of relativity. London: Methuen. (Dover edition.)
Erneux, T., and Hiernaux, J. In press.
Farquhar, I. E. 1964. Ergodic theory in statistical mechanics. New York: Interscience.
Feller, W. 1957. An introduction to probability theory and its applications, vol. 1. New York: Wiley.
Forster, D. 1975. Hydrodynamic fluctuations, broken symmetry, and correlation functions. New York: Benjamin.
George, Cl., Henin, F., Mayné, F., and Prigogine, I. 1978. New quantum rules for dissipative systems. Hadronic J. 1:520-573.
George, Cl., Prigogine, I., and Rosenfeld, L. 1973. The macroscopic level of quantum mechanics. Kon. Danske Videns. Selsk. Mat.-fys. Medd. 38:12.
Gibbs, J. W. 1875-78. On the equilibrium of heterogeneous substances. Trans. Connecticut Acad. 3:108-248; 343-524. (See Collected Papers. New Haven: Yale University Press.)
Gibbs, J. W. 1902. Elementary principles in statistical mechanics. New Haven: Yale University Press. (Dover reprint.)
Glansdorff, P., and Prigogine, I. 1971. Thermodynamic theory of structure, stability and fluctuations. New York: Wiley-Interscience.
Goldbeter, A., and Caplan, R. 1976. Ann. Rev. Biophys. Bioeng. 5:449.
Goldstein, H. 1950. Classical mechanics. Reading, Massachusetts: Addison-Wesley.
Golubitsky, M., and Schaeffer, D. 1979. An analysis of imperfect bifurcation. Proceedings of the Conference on Bifurcation Theory and Application in Scientific Disciplines. Ann. N.Y. Acad. Sci. 316:127-133.
Grecos, A., and Theodosopulu, M. 1976. On the theory of dissipative processes in quantum systems. Acta Phys. Polon. A50:749-765.
Haraway, D. J. 1976. Crystals, fabrics, and fields. New Haven: Yale University Press.
Heisenberg, W. 1925. Z. Physik 33:879.
Henon, M., and Heiles, C. 1964. Astron. J. 69:73.
Hirschfelder, J. O., Curtiss, C. F., and Bird, R. B. 1954. Molecular theory of gases and liquids. New York: Wiley.
Hopf, E. 1942. Ber. Math. Phys. Akad. Wiss. (Leipzig) 94:1.
Horsthemke, W., and Malek-Mansour, M. 1976. Z. Physik B24:307.
Jammer, M. 1966. Conceptual development of quantum mechanics. New York: McGraw-Hill.
Jammer, M. 1974. The philosophy of quantum mechanics. New York: Wiley.
Kauffmann, S., Shymko, R., and Trabert, K. 1978. Science 199:259.
Kawakubo, T., Kabashima, S., and Tsuchiya, Y. 1978. Progr. Theo. Phys. 64:150.
Kolmogoroff, A. N. 1954. Dokl. Akad. Nauk USSR 98:527.
Körös, E. 1978. In Far from equilibrium, ed. A. Pacault and C. Vidal. Berlin: Springer Verlag.
Koyré, A. 1968. Études newtoniennes. Paris: Gallimard.
Lagrange, J. L. 1796. Théorie des fonctions analytiques. Paris: Imprimerie de la République.
Landau, L., and Lifschitz, E. M. 1960. Quantum mechanics. Oxford: Pergamon.
Landau, L., and Lifschitz, E. M. 1968. Statistical physics. 2d ed. Reading, Massachusetts: Addison-Wesley.
Leclerc, Ivor. 1958. Whitehead's metaphysics. London: Allen & Unwin.
Lefever, R., Herschkowitz-Kaufman, M., and Turner, J. W. 1977. Phys. Letters 60A:389.
Lemarchand, H., and Nicolis, G. 1976. Physica 82A:521.
McNeil, K. J., and Walls, D. F. 1974. J. Statist. Phys. 10:439.
Margalef, R. 1976. In Séminaire d'écologie quantitative (third session of E4, Venice).
Maxwell, J. C. 1867. Phil. Trans. Roy. Soc. 157:49.
May, R. M. 1974. Model ecosystems. Princeton, New Jersey: Princeton University Press.
Mehra, J., ed. 1973. The physicist's conception of nature. Dordrecht/Boston: Reidel.
Mehra, J. 1976. The birth of quantum mechanics. Conseil Européen pour la Recherche Nucléaire, 76-10.
Mehra, J. 1979. The historical development of quantum theory: The discovery of quantum mechanics. New York: Wiley-Interscience.
Minorski, N. 1962. Nonlinear oscillations. Princeton, New Jersey: Van Nostrand.
Misra, B. 1978. Proc. Natl. Acad. Sci. U.S. 75:1629.
Misra, B., and Courbage, M. In press.
Monod, J. 1970. Le hasard et la nécessité. Paris: Seuil. (English translation, Chance and necessity, published in 1972 by Collins, London.)
Morin, E. 1977. La méthode. Paris: Seuil.
Moscovici, S. 1977. Essai sur l'histoire humaine de la nature. Collection Champs Philosophique. Paris: Flammarion.
Moser, J. 1974. Stable and random motions in dynamical systems. Princeton, New Jersey: Princeton University Press.
Nicolis, J., and Benrubi, M. 1976. J. Theo. Biol. 58:76.
Nicolis, G., and Malek-Mansour, M. 1978. Progr. Theo. Phys. suppl. 64:249-268.
Nicolis, G., and Prigogine, I. 1971. Proc. Natl. Acad. Sci. U.S. 68:2102.
Nicolis, G., and Prigogine, I. 1977. Self-organization in non-equilibrium systems. New York: Wiley.
Nicolis, G., and Prigogine, I. In press. Non-equilibrium phase transitions. Sci. Am.
Nicolis, G., and Turner, J. 1977a. Ann. N.Y. Acad. Sci. 316:251.
Nicolis, G., and Turner, J. 1977b. Physica 89A:326.
Noyes, R. M., and Field, R. J. 1974. Ann. Rev. Phys. Chem. 25:95.
Onsager, L. 1931a. Phys. Rev. 37:405.
Onsager, L. 1931b. Phys. Rev. 38:2265.
Pacault, A., de Kepper, P., and Hanusse, P. 1975. C. R. Acad. Sci. (Paris) 280C:197.
Paley, R., and Wiener, N. 1934. Fourier transforms in the complex domain. Providence, Rhode Island: American Mathematical Society.
Planck, M. 1930. Vorlesungen über Thermodynamik. Leipzig. (English translation, Dover.)
Poincaré, H. 1889. C. R. Acad. Sci. (Paris) 108:550.
Poincaré, H. 1893a. Le mécanisme et l'expérience. Rev. Métaphys. 1:537.
Poincaré, H. 1893b. Les méthodes nouvelles de la mécanique céleste. Paris: Gauthier-Villars. (Dover edition, 1957.)
Poincaré, H. 1914. Science et méthode. Paris: Flammarion.
Poincaré, H. 1921. Science and hypothesis. In The foundations of science. New York: The Science Press.
Popper, K. 1972. Logic of scientific discovery. London: Hutchinson.
Prigogine, I. 1945. Acad. Roy. Belg., Bull. Classe Sci. 31:600.
Prigogine, I. 1962a. Introduction to nonequilibrium thermodynamics. New York: Wiley-Interscience.
Prigogine, I. 1962b. Nonequilibrium statistical mechanics. New York: Wiley.
Prigogine, I. 1975. Physique et métaphysique. In Connaissance scientifique et philosophie. Publication no. 4 of the Bicentennial, Royal Academy of Belgium.
Prigogine, I. Forthcoming. The microscopic theory of irreversible processes. New York: Wiley.
Prigogine, I., and George, C. 1977. New quantization rules for dissipative systems. Intern. J. Quantum Chem. 12(suppl. 1):177-184.
Prigogine, I., George, C., Henin, F., and Rosenfeld, L. 1973. A unified formulation of dynamics and thermodynamics. Chem. Scripta 4:5-32.
Prigogine, I., and Glansdorff, P. 1971. Acad. Roy. Belg., Bull. Classe Sci. 59:672-702.
Prigogine, I., and Grecos, A. 1979. Topics in nonequilibrium statistical mechanics. In Problems in the foundations of physics. Varenna: International School of Physics "Enrico Fermi."
Prigogine, I., Herman, R., and Allen, P. 1977. The evolution of complexity and the laws of nature. In Goals in a global community: A report to the Club of Rome, vol. 1, ed. E. Laszlo and J. Bierman. Oxford: Pergamon.
Prigogine, I., Mayné, F., George, C., and De Haan, M. 1977. Microscopic theory of irreversible processes. Proc. Natl. Acad. Sci. U.S. 74:4152-4156.
Prigogine, I., and Stengers, I. 1977. The new alliance, parts 1 and 2. Scientia 112:319-332; 643-653.
Prigogine, I., and Stengers, I. 1979. La nouvelle alliance. Paris: Gallimard.
Prigogine, I., and Stengers, I. Forthcoming. Science and metascience. New York: Doubleday.
Rice, S., Freed, K. F., and Light, J. C., eds. 1972. Statistical mechanics: New concepts, new problems, new applications. Chicago: University of Chicago Press.
Rosenfeld, L. 1965. Progr. Theoret. Phys. Suppl. Commemoration issue, p. 222.
Ross, W. D. 1955. Aristotle's physics. Oxford: Clarendon.
Sambursky, S. 1963. The physical world of the Greeks. Translated from the Hebrew by M. Dagut. London: Routledge and Kegan Paul.
Schlögl, F. 1971. Z. Physik 248:446.
Schlögl, F. 1972. Z. Physik 253:147.
Schrödinger, E. 1929. Inaugural lecture (Antrittsrede), 4 July 1929. (English translation in Science, theory, and man, published in 1957 by Dover.)
Serres, M. 1977. La naissance de la physique dans le texte de Lucrèce: Fleuves et turbulences. Paris: Minuit.
Sharma, K., and Noyes, R. 1976. J. Am. Chem. Soc. 98:4345.
Snow, C. P. 1964. The two cultures and a second look. Cambridge University Press.
Spencer, H. 1870. First principles. London: Kegan Paul.
Stanley, H. E. 1971. Introduction to phase transitions and critical phenomena. Oxford University Press.
Theodosopulu, M., Grecos, A., and Prigogine, I. 1978. Proc. Natl. Acad. Sci. U.S. 75:1632.
Theodosopulu, M., and Grecos, A. 1979. Physica 95A:35.
Thom, R. 1975. Structural stability and morphogenesis. Reading, Massachusetts: Benjamin.
Thomson, W. 1852. Phil. Mag. 4:304.
Tolman, R. C. 1938. The principles of statistical mechanics. Oxford University Press.
Turing, A. M. 1952. Phil. Trans. Roy. Soc. London, Ser. B 237:37.
Von Neumann, J. 1955. Mathematical foundations of quantum mechanics. Princeton, New Jersey: Princeton University Press.
Welch, R. 1977. Progr. Biophys. Mol. Biol. 32:103-191.
Whittaker, E. T. 1937. A treatise on the analytical dynamics of particles and rigid bodies. 4th ed. Cambridge University Press. (Reprint, 1965.)