Jump to content

Ising model: Difference between revisions

From Wikipedia, the free encyclopedia
Content deleted Content added
m link [TT]aylor series
Citation bot (talk | contribs)
Added bibcode. | Use this bot. Report bugs. | Suggested by Abductive | Category:Articles with excessive see also sections from November 2024 | #UCB_Category 1/4
 
(39 intermediate revisions by 24 users not shown)
Line 2: Line 2:
{{Statistical mechanics|cTopic=Models}}
{{Statistical mechanics|cTopic=Models}}


The '''Ising model''' ({{IPA-de|ˈiːzɪŋ}}) (or '''Lenz-Ising model''' or '''Ising-Lenz model'''), named after the physicists [[Ernst Ising]] and [[Wilhelm Lenz]], is a [[mathematical models in physics|mathematical model]] of [[ferromagnetism]] in [[statistical mechanics]]. The model consists of [[discrete variables]] that represent [[Nuclear magnetic moment|magnetic dipole moments of atomic "spins"]] that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a [[lattice (group)|lattice]] (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of [[phase transition]]s as a simplified model of reality. The two-dimensional [[square-lattice Ising model]] is one of the simplest statistical models to show a [[phase transition]].<ref>See {{harvtxt|Gallavotti|1999}}, Chapters VI-VII.</ref>
The '''Ising model''' (or '''Lenz–Ising model'''), named after the physicists [[Ernst Ising]] and [[Wilhelm Lenz]], is a [[mathematical models in physics|mathematical model]] of [[ferromagnetism]] in [[statistical mechanics]]. The model consists of [[discrete variables]] that represent [[Nuclear magnetic moment|magnetic dipole moments of atomic "spins"]] that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a [[lattice (group)|lattice]] (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of [[phase transition]]s as a simplified model of reality. The two-dimensional [[square-lattice Ising model]] is one of the simplest statistical models to show a [[phase transition]].<ref>See {{harvtxt|Gallavotti|1999}}, Chapters VI-VII.</ref>


The Ising model was invented by the physicist {{harvs|txt|authorlink=Wilhelm Lenz|first=Wilhelm|last=Lenz|year=1920}}, who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by {{harvtxt|Ising|1925}} alone in his 1924 thesis;<ref>[http://www.hs-augsburg.de/~harsch/anglica/Chronology/20thC/Ising/isi_fm00.html Ernst Ising, ''Contribution to the Theory of Ferromagnetism'']</ref> it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by {{harvs|txt|authorlink=Lars Onsager|first=Lars |last=Onsager|year=1944}}. It is usually solved by a [[transfer-matrix method]], although there exist different approaches, more related to [[quantum field theory]].
The Ising model was invented by the physicist {{harvs|txt|authorlink=Wilhelm Lenz|first=Wilhelm|last=Lenz|year=1920}}, who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by {{harvtxt|Ising|1925}} alone in his 1924 thesis;<ref>[http://www.hs-augsburg.de/~harsch/anglica/Chronology/20thC/Ising/isi_fm00.html Ernst Ising, ''Contribution to the Theory of Ferromagnetism'']</ref> it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by {{harvs|txt|authorlink=Lars Onsager|first=Lars |last=Onsager|year=1944}}. It is usually solved by a [[Transfer-matrix method (statistical mechanics)|transfer-matrix method]], although there exist different approaches, more related to [[quantum field theory]].


In dimensions greater than four, the phase transition of the Ising model is described by [[mean-field theory]]. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent {{harvtxt|Barth|1981}} model for closed Cayley trees of arbitrary branching ratio, and thereby, arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behavior, along with non-vanishing long-range and nearest-neighbor spin-spin correlations, deemed relevant to large neural networks as one of its possible {{pslink|Ising model|applications|nopage=y}}.
In dimensions greater than four, the phase transition of the Ising model is described by [[mean-field theory]]. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent {{harvtxt|Barth|1981}} model for closed Cayley trees of arbitrary branching ratio, and thereby, arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behavior, along with non-vanishing long-range and nearest-neighbor spin-spin correlations, deemed relevant to large neural networks as one of its possible {{pslink|Ising model|applications|nopage=y}}.
Line 15: Line 15:
For any two adjacent sites <math>i, j\in\Lambda</math> there is an ''interaction'' <math>J_{ij}</math>. Also a site <math>j\in\Lambda</math> has an ''external magnetic field'' <math>h_j</math> interacting with it. The ''energy'' of a configuration <math>{\sigma}</math> is given by the [[Hamiltonian function]]
For any two adjacent sites <math>i, j\in\Lambda</math> there is an ''interaction'' <math>J_{ij}</math>. Also a site <math>j\in\Lambda</math> has an ''external magnetic field'' <math>h_j</math> interacting with it. The ''energy'' of a configuration <math>{\sigma}</math> is given by the [[Hamiltonian function]]


: <math>H(\sigma) = -\sum_{\langle ij\rangle} J_{ij} \sigma_i \sigma_j - \mu \sum_j h_j \sigma_j,</math>
<math display="block">H(\sigma) = -\sum_{\langle ij\rangle} J_{ij} \sigma_i \sigma_j - \mu \sum_j h_j \sigma_j,</math>


where the first sum is over pairs of adjacent spins (every pair is counted once). The notation <math>\langle ij\rangle</math> indicates that sites <math>i</math> and <math>j</math> are nearest neighbors. The [[magnetic moment]] is given by <math>\mu</math>. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally.<ref>See {{harvtxt|Baierlein|1999}}, Chapter 16.</ref> The ''configuration probability'' is given by the [[Boltzmann distribution]] with [[inverse temperature]] <math>\beta\geq0</math>:
where the first sum is over pairs of adjacent spins (every pair is counted once). The notation <math>\langle ij\rangle</math> indicates that sites <math>i</math> and <math>j</math> are nearest neighbors. The [[magnetic moment]] is given by <math>\mu</math>. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally.<ref>See {{harvtxt|Baierlein|1999}}, Chapter 16.</ref> The ''configuration probability'' is given by the [[Boltzmann distribution]] with [[inverse temperature]] <math>\beta\geq0</math>:


: <math>P_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_\beta},</math>
<math display="block">P_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_\beta},</math>


where <math>\beta = (k_{\rm B} T)^{-1}</math>, and the normalization constant
where <math>\beta = 1 / (k_\text{B} T)</math>, and the normalization constant


: <math>Z_\beta = \sum_\sigma e^{-\beta H(\sigma)}</math>
<math display="block">Z_\beta = \sum_\sigma e^{-\beta H(\sigma)}</math>


is the [[partition function (statistical mechanics)|partition function]]. For a function <math>f</math> of the spins ("observable"), one denotes by
is the [[partition function (statistical mechanics)|partition function]]. For a function <math>f</math> of the spins ("observable"), one denotes by


: <math>\langle f \rangle_\beta = \sum_\sigma f(\sigma) P_\beta(\sigma)</math>
<math display="block">\langle f \rangle_\beta = \sum_\sigma f(\sigma) P_\beta(\sigma)</math>


the expectation (mean) value of <math>f</math>.
the expectation (mean) value of <math>f</math>.
Line 35: Line 35:
===Discussion===
===Discussion===
The minus sign on each term of the Hamiltonian function <math>H(\sigma)</math> is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for a pair ''i'',&nbsp;''j''
The minus sign on each term of the Hamiltonian function <math>H(\sigma)</math> is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for a pair ''i'',&nbsp;''j''
{{unbulleted list | style = padding-left: 1.5em
: <math>J_{ij} > 0</math>, the interaction is called [[ferromagnetic]],
: <math>J_{ij} < 0</math>, the interaction is called [[antiferromagnetic]],
| <math>J_{ij} > 0</math>, the interaction is called [[ferromagnetic]],
: <math>J_{ij} = 0</math>, the spins are ''noninteracting''.
| <math>J_{ij} < 0</math>, the interaction is called [[antiferromagnetic]],
| <math>J_{ij} = 0</math>, the spins are ''noninteracting''.

}}
The system is called ferromagnetic or antiferromagnetic if all interactions are ferromagnetic or all are antiferromagnetic. The original Ising models were ferromagnetic, and it is still often assumed that "Ising model" means a ferromagnetic Ising model.
The system is called ferromagnetic or antiferromagnetic if all interactions are ferromagnetic or all are antiferromagnetic. The original Ising models were ferromagnetic, and it is still often assumed that "Ising model" means a ferromagnetic Ising model.


Line 44: Line 45:


The sign convention of ''H''(σ) also explains how a spin site ''j'' interacts with the external field. Namely, the spin site wants to line up with the external field. If:
The sign convention of ''H''(σ) also explains how a spin site ''j'' interacts with the external field. Namely, the spin site wants to line up with the external field. If:
{{unbulleted list | style = padding-left: 1.5em
: <math>h_j > 0</math>, the spin site ''j'' desires to line up in the positive direction,
: <math>h_j < 0</math>, the spin site ''j'' desires to line up in the negative direction,
| <math>h_j > 0</math>, the spin site ''j'' desires to line up in the positive direction,
: <math>h_j = 0</math>, there is no external influence on the spin site.
| <math>h_j < 0</math>, the spin site ''j'' desires to line up in the negative direction,
| <math>h_j = 0</math>, there is no external influence on the spin site.

}}
===Simplifications===
===Simplifications===
Ising models are often examined without an external field interacting with the lattice, that is, ''h''&nbsp;=&nbsp;0 for all ''j'' in the lattice Λ. Using this simplification, the Hamiltonian becomes
Ising models are often examined without an external field interacting with the lattice, that is, ''h''&nbsp;=&nbsp;0 for all ''j'' in the lattice Λ. Using this simplification, the Hamiltonian becomes


: <math>H(\sigma) = -\sum_{\langle i~j\rangle} J_{ij} \sigma_i \sigma_j.</math>
<math display="block">H(\sigma) = -\sum_{\langle i~j\rangle} J_{ij} \sigma_i \sigma_j.</math>


When the external field is zero everywhere, ''h''&nbsp;=&nbsp;0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry.
When the external field is zero everywhere, ''h''&nbsp;=&nbsp;0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry.
Line 57: Line 59:
Another common simplification is to assume that all of the nearest neighbors ⟨''ij''⟩ have the same interaction strength. Then we can set ''J<sub>ij</sub>'' = ''J'' for all pairs ''i'',&nbsp;''j'' in Λ. In this case the Hamiltonian is further simplified to
Another common simplification is to assume that all of the nearest neighbors ⟨''ij''⟩ have the same interaction strength. Then we can set ''J<sub>ij</sub>'' = ''J'' for all pairs ''i'',&nbsp;''j'' in Λ. In this case the Hamiltonian is further simplified to


: <math>H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j.</math>
<math display="block">H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j.</math>


===Connection to [[Graph (discrete mathematics)|graph]] [[maximum cut]]===
===Connection to [[Graph (discrete mathematics)|graph]] [[maximum cut]]===
Line 68: Line 70:
Here each vertex i of the graph is a spin site that takes a spin value <math>\sigma_i = \pm 1 </math>. A given spin configuration <math>\sigma</math> partitions the set of vertices <math>V(G)</math> into two <math>\sigma</math>-depended subsets, those with spin up <math>V^+</math> and those with spin down <math>V^-</math>. We denote by <math>\delta(V^+)</math> the <math>\sigma</math>-depended set of edges that connects the two complementary vertex subsets <math>V^+</math> and <math>V^-</math>. The ''size'' <math>\left|\delta(V^+)\right|</math> of the cut <math>\delta(V^+)</math> to [[bipartite graph|bipartite]] the weighted undirected graph G can be defined as
Here each vertex i of the graph is a spin site that takes a spin value <math>\sigma_i = \pm 1 </math>. A given spin configuration <math>\sigma</math> partitions the set of vertices <math>V(G)</math> into two <math>\sigma</math>-depended subsets, those with spin up <math>V^+</math> and those with spin down <math>V^-</math>. We denote by <math>\delta(V^+)</math> the <math>\sigma</math>-depended set of edges that connects the two complementary vertex subsets <math>V^+</math> and <math>V^-</math>. The ''size'' <math>\left|\delta(V^+)\right|</math> of the cut <math>\delta(V^+)</math> to [[bipartite graph|bipartite]] the weighted undirected graph G can be defined as


<math>\left|\delta(V^+)\right|=\frac12\sum_{ij\in \delta(V^+)} W_{ij}</math>,
<math display="block">\left|\delta(V^+)\right|=\frac12\sum_{ij\in \delta(V^+)} W_{ij},</math>


where <math>W_{ij}</math> denotes a weight of the edge <math>ij</math> and the scaling 1/2 is introduced to compensate for double counting the same weights <math>W_{ij}=W_{ji}</math>.
where <math>W_{ij}</math> denotes a weight of the edge <math>ij</math> and the scaling 1/2 is introduced to compensate for double counting the same weights <math>W_{ij}=W_{ji}</math>.
Line 74: Line 76:
The identities
The identities


<math>\begin{align}
<math display="block">\begin{align}
H(\sigma) &= -\sum_{ij\in E(V^+)} J_{ij} - \sum_{ij\in E(V^-)} J_{ij} + \sum_{ij\in \delta(V^+)} J_{ij} \\
H(\sigma) &= -\sum_{ij\in E(V^+)} J_{ij} - \sum_{ij\in E(V^-)} J_{ij} + \sum_{ij\in \delta(V^+)} J_{ij} \\
&= - \sum_{ij \in E(G)} J_{ij} + 2 \sum_{ij\in \delta(V^+)} J_{ij},
&= - \sum_{ij \in E(G)} J_{ij} + 2 \sum_{ij\in \delta(V^+)} J_{ij},
Line 82: Line 84:
<ref name=":0">{{Cite journal|last1=Barahona|first1=Francisco|last2=Grötschel|first2=Martin|last3=Jünger|first3=Michael|last4=Reinelt|first4=Gerhard|date=1988|title=An Application of Combinatorial Optimization to Statistical Physics and Circuit Layout Design|journal=Operations Research|volume=36|issue=3|pages=493–513|issn=0030-364X|jstor=170992|doi=10.1287/opre.36.3.493}}</ref> maximizing the cut size <math>\left|\delta(V^+)\right|</math>, which is related to the Ising Hamiltonian as follows,
<ref name=":0">{{Cite journal|last1=Barahona|first1=Francisco|last2=Grötschel|first2=Martin|last3=Jünger|first3=Michael|last4=Reinelt|first4=Gerhard|date=1988|title=An Application of Combinatorial Optimization to Statistical Physics and Circuit Layout Design|journal=Operations Research|volume=36|issue=3|pages=493–513|issn=0030-364X|jstor=170992|doi=10.1287/opre.36.3.493}}</ref> maximizing the cut size <math>\left|\delta(V^+)\right|</math>, which is related to the Ising Hamiltonian as follows,


<math>H(\sigma) = \sum_{ij \in E(G)} W_{ij} - 4 \left|\delta(V^+)\right|.</math>
<math display="block">H(\sigma) = \sum_{ij \in E(G)} W_{ij} - 4 \left|\delta(V^+)\right|.</math>


===Questions===
===Questions===
Line 97: Line 99:


=== No phase transition in one dimension ===
=== No phase transition in one dimension ===
In his 1924 PhD thesis, Ising solved the model for the ''d''&nbsp;=&nbsp;1 case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no [[phase transition]].<ref>{{Cite journal |url=http://users-phys.au.dk/fogedby/statphysII/no-PT-in-1D.pdf |title=Solving the 3d Ising Model with the Conformal Bootstrap II. C -Minimization and Precise Critical Exponents |journal=Journal of Statistical Physics |volume=157 |issue=4–5 |pages=869–914 |last1=El-Showk |first1=Sheer |last2=Paulos |first2=Miguel F. |last3=Poland |first3=David |last4=Rychkov |first4=Slava |last5=Simmons-Duffin |first5=David |last6=Vichi |first6=Alessandro |year=2014 |doi=10.1007/s10955-014-1042-7 |arxiv=1403.4545 |access-date=2013-04-21 |archive-url=https://web.archive.org/web/20140407154639/http://users-phys.au.dk/fogedby/statphysII/no-PT-in-1D.pdf |archive-date=2014-04-07 |url-status=dead |bibcode=2014JSP...157..869E|s2cid=119627708 }}</ref> Namely, for any positive β, the correlations ⟨σ<sub>''i''</sub>σ<sub>''j''</sub>⟩ decay exponentially in |''i''&nbsp;−&nbsp;''j''|:
In his 1924 PhD thesis, Ising solved the model for the ''d''&nbsp;=&nbsp;1 case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no [[phase transition]].<ref>{{Cite journal |url=http://users-phys.au.dk/fogedby/statphysII/no-PT-in-1D.pdf |title=Solving the 3d Ising Model with the Conformal Bootstrap II. C -Minimization and Precise Critical Exponents |journal=Journal of Statistical Physics |volume=157 |issue=4–5 |pages=869–914 |last1=El-Showk |first1=Sheer |last2=Paulos |first2=Miguel F. |last3=Poland |first3=David |last4=Rychkov |first4=Slava |last5=Simmons-Duffin |first5=David |last6=Vichi |first6=Alessandro |year=2014 |doi=10.1007/s10955-014-1042-7 |arxiv=1403.4545 |access-date=2013-04-21 |archive-url=https://web.archive.org/web/20140407154639/http://users-phys.au.dk/fogedby/statphysII/no-PT-in-1D.pdf |archive-date=2014-04-07 |url-status=dead |bibcode=2014JSP...157..869E | s2cid=119627708 }}</ref> Namely, for any positive β, the correlations ⟨σ<sub>''i''</sub>σ<sub>''j''</sub>⟩ decay exponentially in |''i''&nbsp;−&nbsp;''j''|:
<math display="block">\langle \sigma_i \sigma_j \rangle_\beta \leq C \exp\left(-c(\beta) |i - j|\right),</math>

: <math>\langle \sigma_i \sigma_j \rangle_\beta \leq C \exp\big(-c(\beta) |i - j|\big),</math>


and the system is disordered. On the basis of this result, he incorrectly concluded {{Citation needed|date=November 2022}} that this model does not exhibit phase behaviour in any dimension.
and the system is disordered. On the basis of this result, he incorrectly concluded {{Citation needed|date=November 2022}} that this model does not exhibit phase behaviour in any dimension.
Line 106: Line 107:
The Ising model undergoes a [[phase transition]] between an [[ordered phase|ordered]] and a [[disordered phase]] in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order:
The Ising model undergoes a [[phase transition]] between an [[ordered phase|ordered]] and a [[disordered phase]] in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order:


: <math>\langle \sigma_i \sigma_j \rangle_\beta \geq c(\beta) > 0.</math>
<math display="block">\langle \sigma_i \sigma_j \rangle_\beta \geq c(\beta) > 0.</math>


This was first proven by [[Rudolf Peierls]] in 1936,<ref>{{Cite journal |doi=10.1017/S0305004100019174 |title=On Ising's model of ferromagnetism |journal=Mathematical Proceedings of the Cambridge Philosophical Society |volume=32 |issue=3 |pages=477 |year=1936 |last1=Peierls |first1=R. |last2=Born |first2=M. |bibcode=1936PCPS...32..477P|s2cid=122630492 }}</ref> using what is now called a '''Peierls argument'''.
This was first proven by [[Rudolf Peierls]] in 1936,<ref>{{Cite journal |doi=10.1017/S0305004100019174 |title=On Ising's model of ferromagnetism |journal=Mathematical Proceedings of the Cambridge Philosophical Society |volume=32 |issue=3 |pages=477 |year=1936 |last1=Peierls |first1=R. |last2=Born |first2=M. |bibcode=1936PCPS...32..477P|s2cid=122630492 }}</ref> using what is now called a '''Peierls argument'''.
Line 113: Line 114:


=== Correlation inequalities ===
=== Correlation inequalities ===
A number of [[Correlation inequality|correlation inequalities]] has been derived rigorously for the Ising spin correlations (for general lattice structures), which enabled mathematicians to study the Ising model both on and off criticality.
A number of [[Correlation inequality|correlation inequalities]] have been derived rigorously for the Ising spin correlations (for general lattice structures), which have enabled mathematicians to study the Ising model both on and off criticality.


==== Griffiths inequality ====
==== Griffiths inequality ====
Line 119: Line 120:
Given any subset of spins <math>\sigma_A</math> and <math>\sigma_B</math> on the lattice, the following inequality holds,
Given any subset of spins <math>\sigma_A</math> and <math>\sigma_B</math> on the lattice, the following inequality holds,


<math>\langle \sigma_A \sigma_B \rangle \geq \langle \sigma_A \rangle \langle \sigma_B \rangle</math>,
<math display="block">\langle \sigma_A \sigma_B \rangle \geq \langle \sigma_A \rangle \langle \sigma_B \rangle,</math>

where <math> \langle \sigma_A \rangle = \langle \prod_{j \in A} \sigma_j \rangle </math>.

With <math> B = \empty </math>, the special case <math> \langle \sigma_A \rangle \ge 0 </math> results.


which means that spins are positively correlated on the Ising ferromagnet. An immediate application of this is that the magnetization of any set of spins <math>\langle \sigma_A \rangle</math> is increasing with respect to any set of coupling constants <math>J_B</math>.
This means that spins are positively correlated on the Ising ferromagnet. An immediate application of this is that the magnetization of any set of spins <math>\langle \sigma_A \rangle</math> is increasing with respect to any set of coupling constants <math>J_B</math>.


==== Simon-Lieb inequality ====
==== Simon-Lieb inequality ====
The Simon-Lieb inequality<ref>{{Cite journal |last=Simon |first=Barry |date=1980-10-01 |title=Correlation inequalities and the decay of correlations in ferromagnets |url=https://doi.org/10.1007/BF01982711 |journal=Communications in Mathematical Physics |language=en |volume=77 |issue=2 |pages=111–126 |doi=10.1007/BF01982711 |bibcode=1980CMaPh..77..111S |s2cid=17543488 |issn=1432-0916}}</ref> states that for any set <math>S</math> disconnecting <math>x</math> from <math>y</math> (e.g. the boundary of a box with <math>x</math> being inside the box and <math>y</math> being outside),
The Simon-Lieb inequality<ref>{{Cite journal |last=Simon |first=Barry |date=1980-10-01 |title=Correlation inequalities and the decay of correlations in ferromagnets |url=https://doi.org/10.1007/BF01982711 |journal=Communications in Mathematical Physics |language=en |volume=77 |issue=2 |pages=111–126 |doi=10.1007/BF01982711 |bibcode=1980CMaPh..77..111S |s2cid=17543488 |issn=1432-0916}}</ref> states that for any set <math>S</math> disconnecting <math>x</math> from <math>y</math> (e.g. the boundary of a box with <math>x</math> being inside the box and <math>y</math> being outside),


<math>\langle \sigma_x \sigma_y \rangle \leq \sum_{z\in S} \langle \sigma_x \sigma_z \rangle \langle \sigma_z \sigma_y \rangle</math>.
<math display="block">\langle \sigma_x \sigma_y \rangle \leq \sum_{z\in S} \langle \sigma_x \sigma_z \rangle \langle \sigma_z \sigma_y \rangle.</math>


This inequality can be used to establish the sharpness of phase transition for the Ising model.<ref>{{Cite journal |last1=Duminil-Copin |first1=Hugo |last2=Tassion |first2=Vincent |date=2016-04-01 |title=A New Proof of the Sharpness of the Phase Transition for Bernoulli Percolation and the Ising Model |url=https://doi.org/10.1007/s00220-015-2480-z |journal=Communications in Mathematical Physics |language=en |volume=343 |issue=2 |pages=725–745 |doi=10.1007/s00220-015-2480-z |arxiv=1502.03050 |bibcode=2016CMaPh.343..725D |s2cid=119330137 |issn=1432-0916}}</ref>
This inequality can be used to establish the sharpness of phase transition for the Ising model.<ref>{{Cite journal |last1=Duminil-Copin |first1=Hugo |last2=Tassion |first2=Vincent |date=2016-04-01 |title=A New Proof of the Sharpness of the Phase Transition for Bernoulli Percolation and the Ising Model |url=https://doi.org/10.1007/s00220-015-2480-z |journal=Communications in Mathematical Physics |language=en |volume=343 |issue=2 |pages=725–745 |doi=10.1007/s00220-015-2480-z |arxiv=1502.03050 |bibcode=2016CMaPh.343..725D |s2cid=119330137 |issn=1432-0916}}</ref>
Line 132: Line 137:
==== FKG inequality ====
==== FKG inequality ====
{{Main|FKG inequality}}
{{Main|FKG inequality}}
This inequality is proven first for a type of [[Random cluster model|positively-correlated percolation model]], of which includes a representation of the Ising model. It is used to determine the critical temperatures of planar [[Potts model]] using percolation arguments (which includes the Ising model as a special case).<ref>{{Cite journal |last1=Beffara |first1=Vincent |last2=Duminil-Copin |first2=Hugo |date=2012-08-01 |title=The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1 |url=https://doi.org/10.1007/s00440-011-0353-8 |journal=Probability Theory and Related Fields |language=en |volume=153 |issue=3 |pages=511–542 |doi=10.1007/s00440-011-0353-8 |s2cid=55391558 |issn=1432-2064}}</ref>
This inequality is proven first for a type of [[Random cluster model|positively-correlated percolation model]], of which includes a representation of the Ising model. It is used to determine the critical temperatures of planar [[Potts model]] using percolation arguments (which includes the Ising model as a special case).<ref>{{Cite journal |last1=Beffara |first1=Vincent |last2=Duminil-Copin |first2=Hugo |date=2012-08-01 |title=The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1 |journal=Probability Theory and Related Fields |language=en |volume=153 |issue=3 |pages=511–542 |doi=10.1007/s00440-011-0353-8 |s2cid=55391558 |issn=1432-2064|doi-access=free }}</ref>


==Historical significance==
==Historical significance==
Line 159: Line 164:
A quantitative measure of the excess is the '''magnetization''', which is the average value of the spin:
A quantitative measure of the excess is the '''magnetization''', which is the average value of the spin:


: <math>M = \frac{1}{N} \sum_{i=1}^N \sigma_i.</math>
<math display="block">M = \frac{1}{N} \sum_{i=1}^N \sigma_i.</math>


A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.
A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.
Line 172: Line 177:


The number of paths of length ''L'' on a square lattice in ''d'' dimensions is
The number of paths of length ''L'' on a square lattice in ''d'' dimensions is
: <math>N(L) = (2d)^L,</math>
<math display="block">N(L) = (2d)^L,</math>
since there are 2''d'' choices for where to go at each step.
since there are 2''d'' choices for where to go at each step.


A bound on the total correlation is given by the contribution to the correlation by summing over all paths linking two points, which is bounded above by the sum over all paths of length ''L'' divided by
A bound on the total correlation is given by the contribution to the correlation by summing over all paths linking two points, which is bounded above by the sum over all paths of length ''L'' divided by
: <math>\sum_L (2d)^L \varepsilon^L,</math>
<math display="block">\sum_L (2d)^L \varepsilon^L,</math>
which goes to zero when ε is small.
which goes to zero when ε is small.


Line 182: Line 187:


The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter ''L'', the area is somewhere between (''L''&nbsp;−&nbsp;2)/2 (the straight line) and (''L''/4)<sup>2</sup> (the square box). The probability cost for introducing a droplet has the factor ''e''<sup>−β''L''</sup>, but this contributes to the partition function multiplied by the total number of droplets with perimeter ''L'', which is less than the total number of paths of length ''L'':
The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter ''L'', the area is somewhere between (''L''&nbsp;−&nbsp;2)/2 (the straight line) and (''L''/4)<sup>2</sup> (the square box). The probability cost for introducing a droplet has the factor ''e''<sup>−β''L''</sup>, but this contributes to the partition function multiplied by the total number of droplets with perimeter ''L'', which is less than the total number of paths of length ''L'':
: <math>N(L) < 4^{2L}.</math>
<math display="block">N(L) < 4^{2L}.</math>
So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by
So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by
: <math>\sum_L L^2 4^{2L} e^{-4\beta L},</math>
<math display="block">\sum_L L^2 4^{2L} e^{-4\beta L},</math>


which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from&nbsp;−1.
which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from&nbsp;−1.
Line 198: Line 203:
After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.
After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.


==Applications==
==Monte Carlo methods for numerical simulation==

===Magnetism===
The original motivation for the model was the phenomenon of [[ferromagnetism]]. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.

In the 19th century, it was thought that magnetic fields are due to currents in matter, and [[André-Marie Ampère|Ampère]] postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by [[Joseph Larmor|Larmor]]. In order to have ferromagnetism, the atoms must have permanent [[magnetic moment]]s which are not due to the motion of classical charges.

Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electron spins all oriented in the same direction. It was natural to ask how the electrons' spins all know which direction to point in, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electron spins could be oriented in the same direction using only local forces.

===Lattice gas===
The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, while the statistics of the positions only depends on the potential energy, the thermodynamics of the gas only depends on the potential energy for each configuration of atoms.

A coarse model is to make space-time a lattice and imagine that each position either contains an atom or it doesn't. The space of configuration is that of independent bits ''B<sub>i</sub>'', where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by −4''JB''<sub>''i''</sub>''B''<sub>''j''</sub> for each occupied neighboring pair.

The density of the atoms can be controlled by adding a [[chemical potential]], which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm – the energy. The extra energy of a configuration with ''N'' atoms is changed by ''μN''. The probability cost of one more atom is a factor of exp(−''βμ'').

So the energy of the lattice gas is:
<math display="block">E = - \frac{1}{2} \sum_{\langle i,j \rangle} 4 J B_i B_j + \sum_i \mu B_i</math>

Rewriting the bits in terms of spins, <math>B_i = (S_i + 1)/2. </math>
<math display="block">E = - \frac{1}{2} \sum_{\langle i,j \rangle} J S_i S_j - \frac{1}{2} \sum_i (4 J - \mu) S_i</math>

For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field ''h'' = (''zJ''&nbsp;−&nbsp;''μ'')/2, where ''z'' is the number of neighbors.

In biological systems, modified versions of the lattice gas model have been used to understand a range of binding behaviors. These include the binding of ligands to receptors in the cell surface,<ref>{{Cite journal|last1=Shi|first1=Y.|last2=Duke|first2=T.|date=1998-11-01|title=Cooperative model of bacteril sensing|journal=Physical Review E|language=en|volume=58|issue=5|pages=6399–6406|doi=10.1103/PhysRevE.58.6399|arxiv=physics/9901052|bibcode=1998PhRvE..58.6399S|s2cid=18854281}}</ref> the binding of [[chemotaxis]] proteins to the flagellar motor,<ref>{{Cite journal|last1=Bai|first1=Fan|last2=Branch|first2=Richard W.|last3=Nicolau|first3=Dan V.|last4=Pilizota|first4=Teuta|last5=Steel|first5=Bradley C.|last6=Maini|first6=Philip K.|last7=Berry|first7=Richard M.|date=2010-02-05|title=Conformational Spread as a Mechanism for Cooperativity in the Bacterial Flagellar Switch|journal=Science|language=en|volume=327|issue=5966|pages=685–689|doi=10.1126/science.1182105|issn=0036-8075|pmid=20133571|bibcode = 2010Sci...327..685B |s2cid=206523521}}</ref> and the condensation of DNA.<ref>{{Cite journal|last1=Vtyurina|first1=Natalia N.|last2=Dulin|first2=David|last3=Docter|first3=Margreet W.|last4=Meyer|first4=Anne S.|last5=Dekker|first5=Nynke H.|last6=Abbondanzieri|first6=Elio A.|date=2016-04-18|title=Hysteresis in DNA compaction by Dps is described by an Ising model|journal=Proceedings of the National Academy of Sciences|language=en|pages=4982–7|doi=10.1073/pnas.1521241113|issn=0027-8424|pmid=27091987|pmc=4983820|volume=113|issue=18|bibcode=2016PNAS..113.4982V|doi-access=free}}</ref>

===Neuroscience===
The activity of [[neuron]]s in the brain can be modelled statistically. Each neuron at any time is either active + or inactive&nbsp;−. The active neurons are those that send an [[action potential]] down the axon in any given time window, and the inactive ones are those that do not.

Following the general approach of Jaynes,<ref>{{Citation| author=Jaynes, E. T.| title= Information Theory and Statistical Mechanics | journal= Physical Review| volume = 106 | pages= 620–630 | year= 1957| doi=10.1103/PhysRev.106.620| postscript=.|bibcode = 1957PhRv..106..620J| issue=4 | s2cid= 17870175 }}</ref><ref>{{Citation| author= Jaynes, Edwin T.| title = Information Theory and Statistical Mechanics II |journal = Physical Review |volume =108 | pages = 171–190 | year = 1957| doi= 10.1103/PhysRev.108.171| postscript= .|bibcode = 1957PhRv..108..171J| issue= 2 }}</ref> a later interpretation of Schneidman, Berry, Segev and Bialek,<ref>{{Citation|author1=Elad Schneidman |author2=Michael J. Berry |author3=Ronen Segev |author4=William Bialek | title= Weak pairwise correlations imply strongly correlated network states in a neural population| journal=Nature| volume= 440 | pages= 1007–1012| year=2006| doi= 10.1038/nature04701| pmid= 16625187| issue= 7087| pmc= 1785327| postscript= .|arxiv = q-bio/0512013 |bibcode = 2006Natur.440.1007S |title-link=neural population }}</ref>
is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the [[principle of maximum entropy]]. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a [[Lagrange multiplier]] for each neuron:
<math display="block">E = - \sum_i h_i S_i</math>
But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pair-wise lagrange multipliers:
<math display="block">E= - \tfrac{1}{2} \sum_{ij} J_{ij} S_i S_j - \sum_i h_i S_i</math>
where <math>J_{ij}</math> are not restricted to neighbors. Note that this generalization of Ising model is sometimes called the quadratic exponential binary distribution in statistics.
This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value. Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.

===Spin glasses===
With the Ising model the so-called [[spin glasses]] can also be described, by the usual Hamiltonian <math display="inline">H=-\frac{1}{2}\,\sum J_{i,k}\,S_i\,S_k,</math> where the ''S''-variables describe the Ising spins, while the ''J<sub>i,k</sub>'' are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability ''p'' and ferromagnetic bonds with probability 1&nbsp;−&nbsp;''p'' (also known as the random-bond Ising model). These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When ''p''&nbsp;=&nbsp;0 we have the original Ising model. This system deserves interest in its own; particularly one has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has been also attracted by the related bond and site dilute Ising model, especially in two dimensions, leading to intriguing critical behavior.<ref>{{Citation|author= J-S Wang, [[Walter Selke|W Selke]], VB Andreichenko, and VS Dotsenko| title= The critical behaviour of the two-dimensional dilute model|journal= Physica A|volume= 164| issue= 2| pages= 221–239 |year= 1990|doi=10.1016/0378-4371(90)90196-Y|bibcode = 1990PhyA..164..221W }}</ref>

=== Artificial neural network ===
{{Main|Hopfield network}}
Ising model was instrumental in the development of the [[Hopfield network]]. The original Ising model is a model for equilibrium. [[Roy J. Glauber]] in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium ([[Glauber dynamics]]), adding in the component of time.<ref name=":222">{{cite journal |last1=Glauber |first1=Roy J. |date=February 1963 |title=Roy J. Glauber "Time-Dependent Statistics of the Ising Model" |url=https://aip.scitation.org/doi/abs/10.1063/1.1703954 |journal=Journal of Mathematical Physics |volume=4 |issue=2 |pages=294–307 |doi=10.1063/1.1703954 |access-date=2021-03-21}}</ref> (Kaoru Nakano, 1971)<ref name="Nakano1971">{{cite book |last1=Nakano |first1=Kaoru |title=Pattern Recognition and Machine Learning |date=1971 |isbn=978-1-4615-7568-9 |pages=172–186 |chapter=Learning Process in a Model of Associative Memory |doi=10.1007/978-1-4615-7566-5_15}}</ref><ref name="Nakano1972">{{cite journal |last1=Nakano |first1=Kaoru |date=1972 |title=Associatron-A Model of Associative Memory |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=SMC-2 |issue=3 |pages=380–388 |doi=10.1109/TSMC.1972.4309133}}</ref> and ([[Shun'ichi Amari]], 1972),<ref name="Amari19722">{{cite journal |last1=Amari |first1=Shun-Ichi |date=1972 |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements |journal=IEEE Transactions |volume=C |issue=21 |pages=1197–1206}}</ref> proposed to modify the weights of an Ising model by [[Hebbian theory|Hebbian learning]] rule as a model of associative memory. The same idea was published by ({{ill|William A. Little (physicist)|lt=William A. Little|de|William A. Little}}, 1974),<ref name="little74">{{cite journal |last=Little |first=W. A. |year=1974 |title=The Existence of Persistent States in the Brain |journal=Mathematical Biosciences |volume=19 |issue=1–2 |pages=101–120 |doi=10.1016/0025-5564(74)90031-5}}</ref> who was cited by Hopfield in his 1982 paper.

The [[Spin glass#Sherrington–Kirkpatrick model|Sherrington–Kirkpatrick model]] of spin glass, published in 1975,<ref>{{Cite journal |last1=Sherrington |first1=David |last2=Kirkpatrick |first2=Scott |date=1975-12-29 |title=Solvable Model of a Spin-Glass |url=https://link.aps.org/doi/10.1103/PhysRevLett.35.1792 |journal=Physical Review Letters |volume=35 |issue=26 |pages=1792–1796 |bibcode=1975PhRvL..35.1792S |doi=10.1103/PhysRevLett.35.1792 |issn=0031-9007}}</ref> is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.<ref name="Hopfield1982">{{cite journal |last1=Hopfield |first1=J. J. |date=1982 |title=Neural networks and physical systems with emergent collective computational abilities |journal=Proceedings of the National Academy of Sciences |volume=79 |issue=8 |pages=2554–2558 |bibcode=1982PNAS...79.2554H |doi=10.1073/pnas.79.8.2554 |pmc=346238 |pmid=6953413 |doi-access=free}}</ref> In a 1984 paper he extended this to continuous activation functions.<ref name=":03">{{cite journal |last1=Hopfield |first1=J. J. |date=1984 |title=Neurons with graded response have collective computational properties like those of two-state neurons |journal=Proceedings of the National Academy of Sciences |volume=81 |issue=10 |pages=3088–3092 |bibcode=1984PNAS...81.3088H |doi=10.1073/pnas.81.10.3088 |pmc=345226 |pmid=6587342 |doi-access=free}}</ref> It became a standard model for the study of neural networks through statistical mechanics.<ref>{{Cite book |last1=Engel |first1=A. |title=Statistical mechanics of learning |last2=Broeck |first2=C. van den |date=2001 |publisher=Cambridge University Press |isbn=978-0-521-77307-2 |location=Cambridge, UK ; New York, NY}}</ref><ref>{{Cite journal |last1=Seung |first1=H. S. |last2=Sompolinsky |first2=H. |last3=Tishby |first3=N. |date=1992-04-01 |title=Statistical mechanics of learning from examples |url=https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.6056 |journal=Physical Review A |volume=45 |issue=8 |pages=6056–6091 |bibcode=1992PhRvA..45.6056S |doi=10.1103/PhysRevA.45.6056 |pmid=9907706}}</ref>

===Sea ice===
The [[melt pond]] can be modelled by the Ising model; sea ice topography data bears rather heavily on the results. The state variable is binary for a simple 2D approximation, being either water or ice.<ref>{{cite arXiv|author= Yi-Ping Ma|author2= Ivan Sudakov|author3= Courtenay Strong|author4= Kenneth Golden|title= Ising model for melt ponds on Arctic sea ice|year= 2017|class= physics.ao-ph|eprint=1408.2487v3}}</ref>

===Cayley tree topologies and large neural networks===

[[File:Cayley Tree Branch with Branching Ratio = 2.jpg|thumb|An Open Cayley Tree or Branch with Branching Ratio = 2 and k Generations]]

In order to investigate an Ising model with potential relevance for large (e.g. with <math>10^4</math> or <math>10^5</math> interactions per node) neural nets, at the suggestion of Krizan in 1979, {{harvtxt|Barth|1981}} obtained the exact analytical expression for the free energy of the Ising model on the closed Cayley tree (with an arbitrarily large branching ratio) for a zero-external magnetic field (in the thermodynamic limit) by applying the methodologies of {{harvtxt|Glasser|1970}} and {{harvtxt|Jellito|1979}}

<math display="block">-\beta f = \ln 2 + \frac{2\gamma}{(\gamma+1)} \ln (\cosh J) + \frac{\gamma(\gamma-1)}{(\gamma+1)} \sum_{i=2}^z\frac{1}{\gamma^i}\ln J_i (\tau) </math>

[[File:Closed Cayley Tree with Branching Ratio = 4.jpg |thumb|A Closed Cayley Tree with Branching Ratio = 4. (Only sites for generations k, k-1, and k=1(overlapping as one row) are shown for the joined trees)]] where <math>\gamma</math> is an arbitrary branching ratio (greater than or equal to 2), <math>t \equiv \tanh J</math>, <math>\tau \equiv t^2</math>, <math>J \equiv \beta\epsilon</math> (with <math>\epsilon</math> representing the nearest-neighbor interaction energy) and there are k (→ ∞ in the thermodynamic limit) generations in each of the tree branches (forming the closed tree architecture as shown in the given closed Cayley tree diagram.) The sum in the last term can be shown to converge uniformly and rapidly (i.e. for z → ∞, it remains finite) yielding a continuous and monotonous function, establishing that, for <math>\gamma</math> greater than or equal to 2, the free energy is a continuous function of temperature T. Further analysis of the free energy indicates that it exhibits an unusual discontinuous first derivative at the critical temperature ({{harvtxt|Krizan|Barth|Glasser|1983}}, {{harvtxt|Glasser|Goldberg|1983}}.)

The spin-spin correlation between sites (in general, m and n) on the tree was found to have a transition point when considered at the vertices (e.g. A and Ā, its reflection), their respective neighboring sites (such as B and its reflection), and between sites adjacent to the top and bottom extreme vertices of the two trees (e.g. A and B), as may be determined from
<math display="block">\langle s_m s_n \rangle = {Z_N}^{-1}(0,T) [\cosh J]^{N_b} 2^N \sum_{l=1}^z g_{mn}(l) t^l</math>
where <math>N_b</math> is equal to the number of bonds, <math>g_{mn}(l)t^l</math> is the number of graphs counted for odd vertices with even intermediate sites (see cited methodologies and references for detailed calculations), <math>2^N</math> is the multiplicity resulting from two-valued spin possibilities and the partition function <math>{Z_N}</math> is derived from <math>\sum_{\{s\}}e^{-\beta H}</math>. (Note: <math>s_i </math> is consistent with the referenced literature in this section and is equivalent to <math>S_i</math> or <math>\sigma_i</math> utilized above and in earlier sections; it is valued at <math>\pm 1 </math>.) The critical temperature <math>T_C</math> is given by
<math display="block">T_C = \frac{2\epsilon}{k_\text{B}[\ln(\sqrt \gamma+1) - \ln(\sqrt \gamma-1)]}.</math>

The critical temperature for this model is only determined by the branching ratio <math>\gamma</math> and the site-to-site interaction energy <math>\epsilon</math>, a fact which may have direct implications associated with neural structure vs. its function (in that it relates the energies of interaction and branching ratio to its transitional behavior.) For example, a relationship between the transition behavior of activities of neural networks between sleeping and wakeful states (which may correlate with a spin-spin type of phase transition) in terms of changes in neural interconnectivity (<math>\gamma</math>) and/or neighbor-to-neighbor interactions (<math>\epsilon</math>), over time, is just one possible avenue suggested for further experimental investigation into such a phenomenon. In any case, for this Ising model it was established, that “the stability of the long-range correlation increases with increasing <math>\gamma</math> or increasing <math>\epsilon</math>.”

For this topology, the spin-spin correlation was found to be zero between the extreme vertices and the central sites at which the two trees (or branches) are joined (i.e. between A and individually C, D, or E.) This behavior is explained to be due to the fact that, as k increases, the number of links increases exponentially (between the extreme vertices) and so even though the contribution to spin correlations decrease exponentially, the correlation between sites such as the extreme vertex (A) in one tree and the extreme vertex in the joined tree (Ā) remains finite (above the critical temperature.) In addition, A and B also exhibit a non-vanishing correlation (as do their reflections) thus lending itself to, for B level sites (with A level), being considered “clusters” which tend to exhibit synchronization of firing.

Based upon a review of other classical network models as a comparison, the Ising model on a closed Cayley tree was determined to be the first classical statistical mechanical model to demonstrate both local and long-range sites with non-vanishing spin-spin correlations, while at the same time exhibiting intermediate sites with zero correlation, which indeed was a relevant matter for large neural networks at the time of its consideration. The model's behavior is also of relevance for any other divergent-convergent tree physical (or biological) system exhibiting a closed Cayley tree topology with an Ising-type of interaction. This topology should not be ignored since its behavior for Ising models has been solved exactly, and presumably nature will have found a way of taking advantage of such simple symmetries at many levels of its designs.

{{harvtxt|Barth|1981}} early on noted the possibility of interrelationships between (1) the classical large neural network model (with similar coupled divergent-convergent topologies) with (2) an underlying statistical quantum mechanical model (independent of topology and with persistence in fundamental quantum states):

{{Blockquote|The most significant result obtained from the closed Cayley tree model involves the occurrence of long-range correlation in the absence of intermediate-range correlation. This result has not been demonstrated by other classical models. The failure of the classical view of impulse transmission to account for this phenomenon has been cited by numerous investigators (Ricciiardi and Umezawa, 1967, Hokkyo 1972, Stuart, Takahashi and Umezawa 1978, 1979) as significant enough to warrant radically new assumptions on a very fundamental level and have suggested the existence of quantum cooperative modes within the brain…In addition, it is interesting to note that the (modeling) of…Goldstone particles or bosons (as per Umezawa, et al)…within the brain, demonstrates the long-range correlation of quantum numbers preserved in the ground state…In the closed Cayley tree model ground states of pairs of sites, as well as the state variable of individual sites, (can) exhibit long-range correlation.|author=|title=|source=}}

It was a natural and common belief among early neurophysicists (e.g. Umezawa, Krizan, Barth, etc.) that classical neural models (including those with statistical mechanical aspects) will one day have to be integrated with quantum physics (with quantum statistical aspects), similar perhaps to how the domain of chemistry has historically integrated itself into quantum physics via quantum chemistry.

Several additional statistical mechanical problems of interest remain to be solved for the closed Cayley tree, including the time-dependent case and the external field situation, as well as theoretical efforts aimed at understanding interrelationships with underlying quantum constituents and their physics.

== Numerical simulation ==
[[File:Ising quench b10.gif|framed|right|Quench of an Ising system on a two-dimensional square lattice (500&nbsp;×&nbsp;500) with inverse temperature ''β''&nbsp;=&nbsp;10, starting from a random configuration]]
[[File:Ising quench b10.gif|framed|right|Quench of an Ising system on a two-dimensional square lattice (500&nbsp;×&nbsp;500) with inverse temperature ''β''&nbsp;=&nbsp;10, starting from a random configuration]]


===Definitions===
The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with
The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with
: ''L'' = |Λ|: the total number of sites on the lattice,
: ''L'' = |Λ|: the total number of sites on the lattice,
Line 209: Line 291:
Since every spin site has ±1 spin, there are ''2''<sup>''L''</sup> different states that are possible.<ref name = "Newman">{{cite book |last1=Newman |first1=M.E.J. |last2=Barkema |first2=G.T. |title=Monte Carlo Methods in Statistical Physics |publisher=Clarendon Press |year=1999 |isbn=9780198517979 }}</ref> This motivates the reason for the Ising model to be simulated using [[Monte Carlo methods]].<ref name="Newman" />
Since every spin site has ±1 spin, there are ''2''<sup>''L''</sup> different states that are possible.<ref name = "Newman">{{cite book |last1=Newman |first1=M.E.J. |last2=Barkema |first2=G.T. |title=Monte Carlo Methods in Statistical Physics |publisher=Clarendon Press |year=1999 |isbn=9780198517979 }}</ref> This motivates the reason for the Ising model to be simulated using [[Monte Carlo methods]].<ref name="Newman" />


The [[Hamiltonian mechanics|Hamiltonian]] that is commonly used to represent the energy of the model when using Monte Carlo methods is
The [[Hamiltonian mechanics|Hamiltonian]] that is commonly used to represent the energy of the model when using Monte Carlo methods is:
: <math>H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j - h \sum_j \sigma_j.</math>
Furthermore, the Hamiltonian is further simplified by assuming zero external field ''h'', since many questions that are posed to be solved using the model can be answered in absence of an external field. This leads us to the following energy equation for state σ:
: <math>H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j.</math>
Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.<ref name="Newman" />


<math display="block">H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j - h \sum_j \sigma_j.</math>
===Metropolis algorithm===


Furthermore, the Hamiltonian is further simplified by assuming zero external field ''h'', since many questions that are posed to be solved using the model can be answered in absence of an external field. This leads us to the following energy equation for state σ:
====Overview====
The [[Metropolis–Hastings algorithm]] is the most commonly used Monte Carlo algorithm to calculate Ising model estimations.<ref name="Newman" /> The algorithm first chooses ''selection probabilities'' ''g''(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities ''A''(μ, ν) so that [[detailed balance]] is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes [[ferromagnetic]], meaning all of the sites point in the same direction.<ref name="Newman" />


<math display="block">H(\sigma) = -J \sum_{\langle i~j\rangle} \sigma_i \sigma_j.</math>
When implementing the algorithm, one must ensure that ''g''(μ, ν) is selected such that [[ergodicity]] is met. In [[thermal equilibrium]] a system's energy only fluctuates within a small range.<ref name="Newman" /> This is the motivation behind the concept of '''single-spin-flip dynamics''',<ref name="pre0">{{cite journal|url= http://journals.aps.org/pre/abstract/10.1103/PhysRevE.90.032141|title= M. Suzen "Effective ergodicity in single-spin-flip dynamics"|journal= Physical Review E|date= 29 September 2014|volume= 90|issue= 3|page= 032141|doi= 10.1103/PhysRevE.90.032141|language=en-US|access-date=2022-08-09|last1= Süzen|first1= Mehmet|pmid= 25314429|arxiv= 1405.4497|s2cid= 118355454}}</ref> which states that in each transition, we will only change one of the spin sites on the lattice.<ref name="Newman" /> Furthermore, by using single- spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time.


Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.<ref name="Newman" />
The maximum amount of change between the energy of the present state, ''H''<sub>μ</sub> and any possible new state's energy ''H''<sub>ν</sub> (using single-spin-flip dynamics) is 2''J'' between the spin we choose to "flip" to move to the new state and that spin's neighbor.<ref name="Newman" /> Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4''J''.


=== Metropolis algorithm ===
Let ''c'' represent the '''lattice coordination number'''; the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to [[periodic boundary conditions]].<ref name="Newman" /> It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.


The [[Metropolis–Hastings algorithm]] is the most commonly used Monte Carlo algorithm to calculate Ising model estimations.<ref name="Newman" /> The algorithm first chooses ''selection probabilities'' ''g''(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities ''A''(μ, ν) so that [[detailed balance]] is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes [[ferromagnetic]], meaning all of the sites point in the same direction.<ref name="Newman" />
Open-source packages implementing these algorithms are available.<ref>{{Cite web|title=For example, SquareIsingModel.jl (in Julia).|website=[[GitHub]] |date=28 June 2022 |url=https://github.com/cossio/SquareIsingModel.jl}}</ref>


When implementing the algorithm, one must ensure that ''g''(μ, ν) is selected such that [[ergodicity]] is met. In [[thermal equilibrium]] a system's energy only fluctuates within a small range.<ref name="Newman" /> This is the motivation behind the concept of '''single-spin-flip dynamics''',<ref name="pre0">{{cite journal|url= http://journals.aps.org/pre/abstract/10.1103/PhysRevE.90.032141|title= M. Suzen "Effective ergodicity in single-spin-flip dynamics"|journal= Physical Review E|date= 29 September 2014|volume= 90|issue= 3|page= 032141|doi= 10.1103/PhysRevE.90.032141|language=en-US|access-date=2022-08-09|last1= Süzen|first1= Mehmet|pmid= 25314429|arxiv= 1405.4497|bibcode= 2014PhRvE..90c2141S|s2cid= 118355454}}</ref> which states that in each transition, we will only change one of the spin sites on the lattice.<ref name="Newman" /> Furthermore, by using single- spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time. The maximum amount of change between the energy of the present state, ''H''<sub>μ</sub> and any possible new state's energy ''H''<sub>ν</sub> (using single-spin-flip dynamics) is 2''J'' between the spin we choose to "flip" to move to the new state and that spin's neighbor.<ref name="Newman" /> Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4''J''. Let ''c'' represent the ''lattice coordination number''; the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to [[periodic boundary conditions]].<ref name="Newman" /> It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.
====Specification====
Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following.


Since there are ''L'' total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of ''L'' new states ν from our present state μ. The algorithm assumes that the selection probabilities are equal to the ''L'' states: ''g''(μ, ν) = 1/''L''. [[Detailed balance]] tells us that the following equation must hold:
Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following. Since there are ''L'' total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of ''L'' new states ν from our present state μ. The algorithm assumes that the selection probabilities are equal to the ''L'' states: ''g''(μ, ν) = 1/''L''. [[Detailed balance]] tells us that the following equation must hold:


: <math>\frac{P(\mu, \nu)}{P(\nu, \mu)} =
<math display="block">\frac{P(\mu, \nu)}{P(\nu, \mu)} =
\frac{g(\mu, \nu) A(\mu, \nu)}{g(\nu, \mu) A(\nu, \mu)} =
\frac{g(\mu, \nu) A(\mu, \nu)}{g(\nu, \mu) A(\nu, \mu)} =
\frac{A(\mu, \nu)}{A(\nu, \mu)} =
\frac{A(\mu, \nu)}{A(\nu, \mu)} =
Line 242: Line 318:
Thus, we want to select the acceptance probability for our algorithm to satisfy
Thus, we want to select the acceptance probability for our algorithm to satisfy


: <math>\frac{A(\mu, \nu)}{A(\nu, \mu)} = e^{-\beta(H_\nu - H_\mu)}.</math>
<math display="block">\frac{A(\mu, \nu)}{A(\nu, \mu)} = e^{-\beta(H_\nu - H_\mu)}.</math>


If ''H''<sub>ν</sub> > ''H''<sub>μ</sub>, then ''A''(ν, μ) > ''A''(μ, ν). Metropolis sets the larger of ''A''(μ,&nbsp;ν) or ''A''(ν,&nbsp;μ) to be 1. By this reasoning the acceptance algorithm is:<ref name="Newman" />
If ''H''<sub>ν</sub> > ''H''<sub>μ</sub>, then ''A''(ν, μ) > ''A''(μ, ν). Metropolis sets the larger of ''A''(μ,&nbsp;ν) or ''A''(ν,&nbsp;μ) to be 1. By this reasoning the acceptance algorithm is:<ref name="Newman" />


: <math>A(\mu, \nu) = \begin{cases}
<math display="block">A(\mu, \nu) = \begin{cases}
e^{-\beta(H_\nu - H_\mu)}, & \text{if } H_\nu - H_\mu > 0, \\
e^{-\beta(H_\nu - H_\mu)}, & \text{if } H_\nu - H_\mu > 0, \\
1 & \text{otherwise}.
1 & \text{otherwise}.
Line 252: Line 328:


The basic form of the algorithm is as follows:
The basic form of the algorithm is as follows:

# Pick a spin site using selection probability ''g''(μ,&nbsp;ν) and calculate the contribution to the energy involving this spin.
# Pick a spin site using selection probability ''g''(μ,&nbsp;ν) and calculate the contribution to the energy involving this spin.
# Flip the value of the spin and calculate the new contribution.
# Flip the value of the spin and calculate the new contribution.
Line 260: Line 337:
The change in energy ''H''<sub>ν</sub>&nbsp;−&nbsp;''H''<sub>μ</sub> only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a pick from the distribution.
The change in energy ''H''<sub>ν</sub>&nbsp;−&nbsp;''H''<sub>μ</sub> only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a pick from the distribution.


===Viewing the Ising model as a Markov chain===
=== As a Markov chain ===

It is possible to view the Ising model as a [[Markov chain]], as the immediate probability ''P''<sub>β</sub>(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a [[Markov chain Monte Carlo]] simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly ''L'' other states, where each transition corresponds to flipping a single spin site to the opposite value.<ref>{{cite journal |last=Teif |first=Vladimir B.|title=General transfer matrix formalism to calculate DNA-protein-drug binding in gene regulation |journal=Nucleic Acids Res. |year=2007 |volume=35 |issue=11 |pages=e80 |doi=10.1093/nar/gkm268 |pmid=17526526 |pmc=1920246}}</ref> Furthermore, since the energy equation ''H''<sub>σ</sub> change only depends on the nearest-neighbor interaction strength ''J'', the Ising model and its variants such the [[Sznajd model]] can be seen as a form of a [[Contact process (mathematics)#Voter model|voter model]] for opinion dynamics.
It is possible to view the Ising model as a [[Markov chain]], as the immediate probability ''P''<sub>β</sub>(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a [[Markov chain Monte Carlo]] simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly ''L'' other states, where each transition corresponds to flipping a single spin site to the opposite value.<ref>{{cite journal |last=Teif |first=Vladimir B.|title=General transfer matrix formalism to calculate DNA-protein-drug binding in gene regulation |journal=Nucleic Acids Res. |year=2007 |volume=35 |issue=11 |pages=e80 |doi=10.1093/nar/gkm268 |pmid=17526526 |pmc=1920246}}</ref> Furthermore, since the energy equation ''H''<sub>σ</sub> change only depends on the nearest-neighbor interaction strength ''J'', the Ising model and its variants such the [[Sznajd model]] can be seen as a form of a [[Contact process (mathematics)#Voter model|voter model]] for opinion dynamics.


==One dimension==
== Solutions ==
=== One dimension ===

The thermodynamic limit exists as long as the interaction decay is <math>J_{ij} \sim |i - j|^{-\alpha}</math> with α > 1.<ref name="Ruelle">{{cite book |first=David |last=Ruelle |title=Statistical Mechanics: Rigorous Results |url=https://books.google.com/books?id=2HPVCgAAQBAJ&pg=PR4 |date=1999 |publisher=World Scientific |isbn=978-981-4495-00-4 |orig-year=1969}}</ref>
The thermodynamic limit exists as long as the interaction decay is <math>J_{ij} \sim |i - j|^{-\alpha}</math> with α > 1.<ref name="Ruelle">{{cite book |first=David |last=Ruelle |title=Statistical Mechanics: Rigorous Results |url=https://books.google.com/books?id=2HPVCgAAQBAJ&pg=PR4 |date=1999 |publisher=World Scientific |isbn=978-981-4495-00-4 |orig-year=1969}}</ref>


Line 271: Line 351:
* In the case of ''nearest neighbor'' interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamics parameters, and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite, and the truncated two-point spin correlation does not decay (remains constant). Therefore, ''T'' = 0 is the critical temperature of this case. Scaling formulas are satisfied.<ref>{{citation | last1=Baxter | first1=Rodney J. | title=Exactly solved models in statistical mechanics | url=http://tpsrv.anu.edu.au/Members/baxter/book | url-status=dead | publisher=Academic Press Inc. [Harcourt Brace Jovanovich Publishers] | location=London | isbn=978-0-12-083180-7 | mr=690578 | year=1982 | access-date=2009-10-25 | archive-date=2012-03-20 | archive-url=https://web.archive.org/web/20120320064257/http://tpsrv.anu.edu.au/Members/baxter/book }}</ref>
* In the case of ''nearest neighbor'' interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamics parameters, and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite, and the truncated two-point spin correlation does not decay (remains constant). Therefore, ''T'' = 0 is the critical temperature of this case. Scaling formulas are satisfied.<ref>{{citation | last1=Baxter | first1=Rodney J. | title=Exactly solved models in statistical mechanics | url=http://tpsrv.anu.edu.au/Members/baxter/book | url-status=dead | publisher=Academic Press Inc. [Harcourt Brace Jovanovich Publishers] | location=London | isbn=978-0-12-083180-7 | mr=690578 | year=1982 | access-date=2009-10-25 | archive-date=2012-03-20 | archive-url=https://web.archive.org/web/20120320064257/http://tpsrv.anu.edu.au/Members/baxter/book }}</ref>


===Ising's exact solution===
==== Ising's exact solution ====

In the nearest neighbor case (with periodic or free boundary conditions) an exact solution is available. The Hamiltonian of the one-dimensional Ising model on a lattice of ''L'' sites with periodic boundary conditions is
In the nearest neighbor case (with periodic or free boundary conditions) an exact solution is available. The Hamiltonian of the one-dimensional Ising model on a lattice of ''L'' sites with free boundary conditions is
: <math>H(\sigma) = -J \sum_{i=1,\ldots,L-1} \sigma_i \sigma_{i+1} - h \sum_i \sigma_i,</math>
<math display="block">H(\sigma) = -J \sum_{i=1,\ldots,L-1} \sigma_i \sigma_{i+1} - h \sum_i \sigma_i,</math>
where ''J'' and ''h'' can be any number, since in this simplified case ''J'' is a constant representing the interaction strength between the nearest neighbors and ''h'' is the constant external magnetic field applied to lattice sites. Then the
where ''J'' and ''h'' can be any number, since in this simplified case ''J'' is a constant representing the interaction strength between the nearest neighbors and ''h'' is the constant external magnetic field applied to lattice sites. Then the
[[Thermodynamic free energy|free energy]] is
[[Thermodynamic free energy|free energy]] is
: <math>f(\beta, h) = -\lim_{L \to \infty} \frac{1}{\beta L} \ln Z(\beta) = -\frac{1}{\beta} \ln\left(e^{\beta J} \cosh \beta h + \sqrt{e^{2\beta J}(\sinh\beta h)^2 + e^{-2\beta J}}\right),
<math display="block">f(\beta, h) = -\lim_{L \to \infty} \frac{1}{\beta L} \ln Z(\beta) = -\frac{1}{\beta} \ln\left(e^{\beta J} \cosh \beta h + \sqrt{e^{2\beta J}(\sinh\beta h)^2 + e^{-2\beta J}}\right),
</math>
</math>
and the spin-spin correlation (i.e. the covariance) is
and the spin-spin correlation (i.e. the covariance) is
: <math>\langle\sigma_i \sigma_j\rangle - \langle\sigma_i\rangle \langle\sigma_j\rangle = C(\beta) e^{-c(\beta)|i - j|},</math>
<math display="block">\langle\sigma_i \sigma_j\rangle - \langle\sigma_i\rangle \langle\sigma_j\rangle = C(\beta) e^{-c(\beta)|i - j|},</math>
where ''C''(β) and ''c''(β) are positive functions for ''T'' > 0. For ''T'' → 0, though, the inverse correlation length ''c''(β) vanishes.
where ''C''(β) and ''c''(β) are positive functions for ''T'' > 0. For ''T'' → 0, though, the inverse correlation length ''c''(β) vanishes.


====Proof====
===== Proof =====

The proof of this result is a simple computation.
The proof of this result is a simple computation.


If ''h'' = 0, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when
If ''h'' = 0, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when
: <math>H(\sigma) = -J(\sigma_1 \sigma_2 + \cdots + \sigma_{L-1} \sigma_L).</math>
<math display="block">H(\sigma) = -J \left(\sigma_1 \sigma_2 + \cdots + \sigma_{L-1} \sigma_L\right).</math>
Then the model factorizes under the change of variables
Then the model factorizes under the change of variables
: <math>\sigma'_j = \sigma_j \sigma_{j-1}, \quad j \ge 2.</math>
<math display="block">\sigma'_j = \sigma_j \sigma_{j-1}, \quad j \ge 2.</math>


This gives
This gives
: <math>Z(\beta) = \sum_{\sigma_1,\ldots, \sigma_L} e^{\beta J \sigma_1 \sigma_2} e^{\beta J \sigma_2 \sigma_3} \cdots e^{\beta J \sigma_{L-1} \sigma_L} = 2 \prod_{j=2}^L \sum_{\sigma'_j} e^{\beta J\sigma'_j} = 2 \left[e^{\beta J} + e^{-\beta J}\right]^{L-1}. </math>
<math display="block">Z(\beta) = \sum_{\sigma_1,\ldots, \sigma_L} e^{\beta J \sigma_1 \sigma_2} e^{\beta J \sigma_2 \sigma_3} \cdots e^{\beta J \sigma_{L-1} \sigma_L} = 2 \prod_{j=2}^L \sum_{\sigma'_j} e^{\beta J\sigma'_j} = 2 \left[e^{\beta J} + e^{-\beta J}\right]^{L-1}. </math>


Therefore, the free energy is
Therefore, the free energy is


: <math>f(\beta, 0) = -\frac{1}{\beta} \ln\left[e^{\beta J} + e^{-\beta J}\right].</math>
<math display="block">f(\beta, 0) = -\frac{1}{\beta} \ln\left[e^{\beta J} + e^{-\beta J}\right].</math>


With the same change of variables
With the same change of variables


: <math>\langle\sigma_j\sigma_{j+N}\rangle = \left[\frac{e^{\beta J} - e^{-\beta J}}{e^{\beta J} + e^{-\beta J}}\right]^N,</math>
<math display="block">\langle\sigma_j\sigma_{j+N}\rangle = \left[\frac{e^{\beta J} - e^{-\beta J}}{e^{\beta J} + e^{-\beta J}}\right]^N,</math>


hence it decays exponentially as soon as ''T'' ≠ 0; but for ''T'' = 0, i.e. in the limit β → ∞ there is no decay.
hence it decays exponentially as soon as ''T'' ≠ 0; but for ''T'' = 0, i.e. in the limit β → ∞ there is no decay.


If ''h'' ≠ 0 we need the transfer matrix method. For the periodic boundary conditions case is the following. The partition function is
If ''h'' ≠ 0 we need the transfer matrix method. For the periodic boundary conditions case is the following. The partition function is
: <math>Z(\beta) = \sum_{\sigma_1,\ldots,\sigma_L} e^{\beta h \sigma_1} e^{\beta J\sigma_1\sigma_2} e^{\beta h \sigma_2} e^{\beta J\sigma_2\sigma_3} \cdots e^{\beta h \sigma_L} e^{\beta J\sigma_L\sigma_1} = \sum_{\sigma_1,\ldots,\sigma_L} V_{\sigma_1,\sigma_2} V_{\sigma_2,\sigma_3} \cdots V_{\sigma_L,\sigma_1}.</math>
<math display="block">Z(\beta) = \sum_{\sigma_1,\ldots,\sigma_L} e^{\beta h \sigma_1} e^{\beta J\sigma_1\sigma_2} e^{\beta h \sigma_2} e^{\beta J\sigma_2\sigma_3} \cdots e^{\beta h \sigma_L} e^{\beta J\sigma_L\sigma_1} = \sum_{\sigma_1,\ldots,\sigma_L} V_{\sigma_1,\sigma_2} V_{\sigma_2,\sigma_3} \cdots V_{\sigma_L,\sigma_1}.</math>
The coefficients <math>V_{\sigma, \sigma'}</math> can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is
The coefficients <math>V_{\sigma, \sigma'}</math> can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is
: <math>V_{\sigma, \sigma'} = e^{\frac{\beta h}{2} \sigma} e^{\beta J\sigma\sigma'} e^{\frac{\beta h}{2} \sigma'}</math>
<math display="block">V_{\sigma, \sigma'} = e^{\frac{\beta h}{2} \sigma} e^{\beta J\sigma\sigma'} e^{\frac{\beta h}{2} \sigma'}</math>
or
or
: <math>V = \begin{bmatrix}
<math display="block">V = \begin{bmatrix}
e^{\beta(h+J)} & e^{-\beta J} \\
e^{\beta(h+J)} & e^{-\beta J} \\
e^{-\beta J} & e^{-\beta(h-J)}
e^{-\beta J} & e^{-\beta(h-J)}
\end{bmatrix}.</math>
\end{bmatrix}.</math>
In matrix formalism
In matrix formalism
: <math>Z(\beta) = \operatorname{Tr} \left(V^L\right) = \lambda_1^L + \lambda_2^L = \lambda_1^L \left[1 + \left(\frac{\lambda_2}{\lambda_1}\right)^L\right],</math>
<math display="block">Z(\beta) = \operatorname{Tr} \left(V^L\right) = \lambda_1^L + \lambda_2^L = \lambda_1^L \left[1 + \left(\frac{\lambda_2}{\lambda_1}\right)^L\right],</math>
where λ<sub>1</sub> is the highest eigenvalue of ''V'', while λ<sub>2</sub> is the other eigenvalue:
where λ<sub>1</sub> is the highest eigenvalue of ''V'', while λ<sub>2</sub> is the other eigenvalue:
: <math>\lambda_1 = e^{\beta J} \cosh \beta h + \sqrt{e^{2\beta J} (\sinh\beta h)^2 + e^{-2\beta J}},</math>
<math display="block">\lambda_1 = e^{\beta J} \cosh \beta h + \sqrt{e^{2\beta J} (\sinh\beta h)^2 + e^{-2\beta J}},</math>
and |λ<sub>2</sub>| < λ<sub>1</sub>. This gives the formula of the free energy.
and |λ<sub>2</sub>| < λ<sub>1</sub>. This gives the formula of the free energy.


====Comments====
===== Comments =====

The energy of the lowest state is −''JL'', when all the spins are the same. For any other configuration, the extra energy is equal to 2''J'' times the number of sign changes that are encountered when scanning the configuration from left to right.
The energy of the lowest state is −''JL'', when all the spins are the same. For any other configuration, the extra energy is equal to 2''J'' times the number of sign changes that are encountered when scanning the configuration from left to right.


If we designate the number of sign changes in a configuration as ''k'', the difference in energy from the lowest energy state is 2''k''. Since the energy is additive in the number of flips, the probability ''p'' of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:
If we designate the number of sign changes in a configuration as ''k'', the difference in energy from the lowest energy state is 2''k''. Since the energy is additive in the number of flips, the probability ''p'' of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:


: <math>\frac{p}{1 - p} = e^{-2\beta J}.</math>
<math display="block">\frac{p}{1 - p} = e^{-2\beta J}.</math>


The problem is reduced to independent biased [[coin toss]]es. This essentially completes the mathematical description.
The problem is reduced to independent biased [[coin toss]]es. This essentially completes the mathematical description.
Line 329: Line 412:
From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2β). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to ''p'', so the correlations fall off exponentially.
From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2β). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to ''p'', so the correlations fall off exponentially.


: <math>\langle S_i S_j \rangle \propto e^{-p|i-j|}.</math>
<math display="block">\langle S_i S_j \rangle \propto e^{-p|i-j|}.</math>


The [[partition function (statistical mechanics)|partition function]] is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:
The [[partition function (statistical mechanics)|partition function]] is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:


: <math>Z = \sum_{\text{configs}} e^{\sum_k S_k} = \prod_k (1 + p ) = (1 + p)^L.</math>
<math display="block">Z = \sum_{\text{configs}} e^{\sum_k S_k} = \prod_k (1 + p ) = (1 + p)^L.</math>


The logarithm divided by ''L'' is the free energy density:
The logarithm divided by ''L'' is the free energy density:


: <math>\beta f = \log(1 + p) = \log\left(1 + \frac{e^{-2\beta J}}{1 + e^{-2\beta J}}\right),</math>
<math display="block">\beta f = \log(1 + p) = \log\left(1 + \frac{e^{-2\beta J}}{1 + e^{-2\beta J}}\right),</math>


which is [[Analytic function|analytic]] away from β = ∞. A sign of a [[phase transition]] is a non-analytic free energy, so the one-dimensional model does not have a phase transition.
which is [[Analytic function|analytic]] away from β = ∞. A sign of a [[phase transition]] is a non-analytic free energy, so the one-dimensional model does not have a phase transition.


===One-dimensional solution with transverse field===
==== One-dimensional solution with transverse field ====


To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective Pauli matrices. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The [[transverse-field Ising model|transverse-field]] Hamiltonian is given by
To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective [[Pauli matrices]]. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The [[transverse-field Ising model|transverse-field]] Hamiltonian is given by


: <math>H(\sigma) = -J \sum_{i=1,\ldots,L} \sigma_i^z \sigma_{i+1}^z - h \sum_i \sigma_i^x.</math>
<math display="block">H(\sigma) = -J \sum_{i=1,\ldots,L} \sigma_i^z \sigma_{i+1}^z - h \sum_i \sigma_i^x.</math>


The transverse-field model experiences a phase transition between an ordered and disordered regime at ''J''&nbsp;~&nbsp;''h''. This can be shown by a mapping of Pauli matrices
The transverse-field model experiences a phase transition between an ordered and disordered regime at ''J''&nbsp;~&nbsp;''h''. This can be shown by a mapping of Pauli matrices


: <math>\sigma_n^z = \prod_{i=1}^n T_i^x,</math>
<math display="block">\sigma_n^z = \prod_{i=1}^n T_i^x,</math>


: <math>\sigma_n^x = T_n^z T_{n+1}^z.</math>
<math display="block">\sigma_n^x = T_n^z T_{n+1}^z.</math>


Upon rewriting the Hamiltonian in terms of this change-of-basis matrices, we obtain
Upon rewriting the Hamiltonian in terms of this change-of-basis matrices, we obtain


: <math>H(\sigma) = -h \sum_{i=1,\ldots,L} T_i^z T_{i+1}^z - J \sum_i T_i^x.</math>
<math display="block">H(\sigma) = -h \sum_{i=1,\ldots,L} T_i^z T_{i+1}^z - J \sum_i T_i^x.</math>


Since the roles of ''h'' and ''J'' are switched, the Hamiltonian undergoes a transition at ''J'' = ''h''.<ref name="Chakra">{{cite book |last1=Suzuki |first1= Sei |last2= Inoue |first2= Jun-ichi |last3= Chakrabarti |first3= Bikas K. |title=Quantum Ising Phases and Transitions in Transverse Ising Models |publisher=Springer |year=2012 |doi=10.1007/978-3-642-33039-1 |isbn=978-3-642-33038-4 |url= https://cds.cern.ch/record/1513030}}</ref>
Since the roles of ''h'' and ''J'' are switched, the Hamiltonian undergoes a transition at ''J'' = ''h''.<ref name="Chakra">{{cite book |last1=Suzuki |first1= Sei |last2= Inoue |first2= Jun-ichi |last3= Chakrabarti |first3= Bikas K. |title=Quantum Ising Phases and Transitions in Transverse Ising Models |publisher=Springer |year=2012 |doi=10.1007/978-3-642-33039-1 |isbn=978-3-642-33038-4 |url= https://cds.cern.ch/record/1513030}}</ref>


=== Renormalization ===
==== Renormalization ====

When there no external field, we can derive a functional equation that <math>f(\beta, 0) = f(\beta)</math> satisfies using renormalization.<ref>{{Cite journal |last1=Maris |first1=Humphrey J. |last2=Kadanoff |first2=Leo P. |date=June 1978 |title=Teaching the renormalization group |url=https://pubs.aip.org/aapt/ajp/article/46/6/652-657/1045608 |journal=American Journal of Physics |language=en |volume=46 |issue=6 |pages=652–657 |doi=10.1119/1.11224 |issn=0002-9505}}</ref> Specifically, let <math>Z_N(\beta, J)</math> be the partition function with <math>N</math> sites. Now we have:<math display="block">Z_N(\beta, J) = \sum_{\sigma} e^{K \sigma_2(\sigma_1 + \sigma_3)}e^{K \sigma_4(\sigma_3 + \sigma_5)}\cdots</math>where <math>K := \beta J</math>. We sum over each of <math>\sigma_2, \sigma_4, \cdots</math>, to obtain<math display="block">Z_N(\beta, J) = \sum_{\sigma} (2\cosh(K(\sigma_1 + \sigma_3))) \cdot (2\cosh(K(\sigma_3 + \sigma_5))) \cdots</math>Now, since the cosh function is even, we can solve <math>Ae^{K'\sigma_1\sigma_3} = 2\cosh(K(\sigma_1+\sigma_3))</math> as <math>A = 2\sqrt{\cosh(2K)}, K' = \frac 12 \ln\cosh(2K)</math>. Now we have a self-similarity relation:<math display="block">\frac 1N \ln Z_N(K) = \frac 12 \ln\left(2\sqrt{\cosh(2K)}\right) + \frac 12 \frac{1}{N/2} \ln Z_{N/2}(K')</math>Taking the limit, we obtain<math display="block">f(\beta) = \frac 12 \ln\left(2\sqrt{\cosh(2K)}\right) + \frac 12 f(\beta')</math>where <math>\beta' J = \frac 12 \ln\cosh(2\beta J)</math>.
When there is no external field, we can derive a functional equation that <math>f(\beta, 0) = f(\beta)</math> satisfies using renormalization.<ref>{{Cite journal |last1=Maris |first1=Humphrey J. |last2=Kadanoff |first2=Leo P. |date=June 1978 |title=Teaching the renormalization group |url=https://pubs.aip.org/aapt/ajp/article/46/6/652-657/1045608 |journal=American Journal of Physics |language=en |volume=46 |issue=6 |pages=652–657 |doi=10.1119/1.11224 |bibcode=1978AmJPh..46..652M |issn=0002-9505}}</ref> Specifically, let <math>Z_N(\beta, J)</math> be the partition function with <math>N</math> sites. Now we have:<math display="block">Z_N(\beta, J) = \sum_{\sigma} e^{K \sigma_2(\sigma_1 + \sigma_3)}e^{K \sigma_4(\sigma_3 + \sigma_5)}\cdots</math>where <math>K := \beta J</math>. We sum over each of <math>\sigma_2, \sigma_4, \cdots</math>, to obtain<math display="block">Z_N(\beta, J) = \sum_{\sigma} (2\cosh(K(\sigma_1 + \sigma_3))) \cdot (2\cosh(K(\sigma_3 + \sigma_5))) \cdots</math>Now, since the cosh function is even, we can solve <math>Ae^{K'\sigma_1\sigma_3} = 2\cosh(K(\sigma_1+\sigma_3))</math> as <math display="inline">A = 2\sqrt{\cosh(2K)}, K' = \frac 12 \ln\cosh(2K)</math>. Now we have a self-similarity relation:<math display="block">\frac 1N \ln Z_N(K) = \frac 12 \ln\left(2\sqrt{\cosh(2K)}\right) + \frac 12 \frac{1}{N/2} \ln Z_{N/2}(K')</math>Taking the limit, we obtain<math display="block">f(\beta) = \frac 12 \ln\left(2\sqrt{\cosh(2K)}\right) + \frac 12 f(\beta')</math>where <math>\beta' J = \frac 12 \ln\cosh(2\beta J)</math>.


When <math>\beta</math> is small, we have <math>f(\beta)\approx \ln 2</math>, so we can numerically evaluate <math>f(\beta)</math> by iterating the functional equation until <math>K</math> is small.
When <math>\beta</math> is small, we have <math>f(\beta)\approx \ln 2</math>, so we can numerically evaluate <math>f(\beta)</math> by iterating the functional equation until <math>K</math> is small.


==Two dimensions==
=== Two dimensions ===

* In the ferromagnetic case there is a phase transition. At low temperature, the [[Peierls argument]] proves positive magnetization for the nearest neighbor case and then, by the [[Griffiths inequality]], also when longer range interactions are added. Meanwhile, at high temperature, the [[cluster expansion]] gives analyticity of the thermodynamic functions.
* In the ferromagnetic case there is a phase transition. At low temperature, the [[Peierls argument]] proves positive magnetization for the nearest neighbor case and then, by the [[Griffiths inequality]], also when longer range interactions are added. Meanwhile, at high temperature, the [[cluster expansion]] gives analyticity of the thermodynamic functions.
* In the nearest-neighbor case, the free energy was exactly computed by Onsager, through the equivalence of the model with free fermions on lattice. The spin-spin correlation functions were computed by McCoy and Wu.
* In the nearest-neighbor case, the free energy was exactly computed by Onsager, through the equivalence of the model with free fermions on lattice. The spin-spin correlation functions were computed by McCoy and Wu.


===Onsager's exact solution===
==== Onsager's exact solution ====
{{main|Square lattice Ising model}}
{{main|Square lattice Ising model}}
{{harvtxt|Onsager|1944}} obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field <math>h=0</math> in the thermodynamic limit as a function of temperature and the horizontal and vertical interaction energies <math>J_1</math> and <math>J_2</math>, respectively
{{harvtxt|Onsager|1944}} obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field <math>h=0</math> in the thermodynamic limit as a function of temperature and the horizontal and vertical interaction energies <math>J_1</math> and <math>J_2</math>, respectively


:<math> -\beta f = \ln 2 + \frac{1}{8\pi^2}\int_0^{2\pi}d\theta_1\int_0^{2\pi}d\theta_2 \ln[\cosh(2\beta J_1)\cosh(2\beta J_2) -\sinh(2\beta J_1)\cos(\theta_1)-\sinh(2\beta J_2)\cos(\theta_2)]. </math>
<math display="block"> -\beta f = \ln 2 + \frac{1}{8\pi^2}\int_0^{2\pi}d\theta_1\int_0^{2\pi}d\theta_2 \ln[\cosh(2\beta J_1)\cosh(2\beta J_2) -\sinh(2\beta J_1)\cos(\theta_1)-\sinh(2\beta J_2)\cos(\theta_2)]. </math>


From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature <math>T_c</math> which solves the equation
From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature <math>T_c</math> which solves the equation


:<math> \sinh\left(\frac{2J_1}{kT_c}\right)\sinh\left(\frac{2J_2}{kT_c}\right) = 1. </math>
<math display="block"> \sinh\left(\frac{2J_1}{kT_c}\right)\sinh\left(\frac{2J_2}{kT_c}\right) = 1. </math>


In the isotropic case when the horizontal and vertical interaction energies are equal <math>J_1=J_2=J</math>, the critical temperature <math>T_c</math> occurs at the following point
In the isotropic case when the horizontal and vertical interaction energies are equal <math>J_1=J_2=J</math>, the critical temperature <math>T_c</math> occurs at the following point


:<math> T_c = \frac{2J}{k\ln(1+\sqrt{2})} = (2.269185\cdots)\frac{J}{k}</math>
<math display="block"> T_c = \frac{2J}{k\ln(1+\sqrt{2})} = (2.269185\cdots)\frac{J}{k}</math>


When the interaction energies <math>J_1</math>, <math>J_2</math> are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bi-partite, it is invariant under this change when the magnetic field <math>h=0</math>, so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bi-partite, the ferromagnetic and antiferromagnetic Ising model behave notably differently. Specifically, around a triangle, it is impossible to make all 3 spin-pairs antiparallel, so the antiferromagnetic Ising model cannot reach the minimal energy state. This is an example of [[geometric frustration]].
When the interaction energies <math>J_1</math>, <math>J_2</math> are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bi-partite, it is invariant under this change when the magnetic field <math>h=0</math>, so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bi-partite, the ferromagnetic and antiferromagnetic Ising model behave notably differently. Specifically, around a triangle, it is impossible to make all 3 spin-pairs antiparallel, so the antiferromagnetic Ising model cannot reach the minimal energy state. This is an example of [[geometric frustration]].


====Transfer matrix====
===== Transfer matrix =====

Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function
Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function


:<math>\sum_{\{S\}} \exp\biggl(\sum_{ij} S_{i,j} \left( S_{i,j+1} + S_{i+1,j} \right)\biggr).</math>
<math display="block">\sum_{\{S\}} \exp\biggl(\sum_{ij} S_{i,j} \left( S_{i,j+1} + S_{i+1,j} \right)\biggr).</math>


Think of the ''i'' direction as ''space'', and the ''j'' direction as ''time''. This is an independent sum over all the values that the spins can take at each time slice. This is a type of [[path integral formulation|path integral]], it is the sum over all spin histories.
Think of the ''i'' direction as ''space'', and the ''j'' direction as ''time''. This is an independent sum over all the values that the spins can take at each time slice. This is a type of [[path integral formulation|path integral]], it is the sum over all spin histories.


A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time ''t'' and time ''t'' + Δ''t'':
A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time ''t'' and time ''t'' + Δ''t'':
:<math> U = e^{i H \Delta t}</math>
<math display="block"> U = e^{i H \Delta t}</math>


The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with.
The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with.


:<math> U^N = (e^{i H \Delta t})^N = \int DX e^{iL}</math>
<math display="block"> U^N = (e^{i H \Delta t})^N = \int DX e^{iL}</math>


where ''N'' is the number of time slices. The sum over all paths is given by a product of matrices, each matrix element is the transition probability from one slice to the next.
where ''N'' is the number of time slices. The sum over all paths is given by a product of matrices, each matrix element is the transition probability from one slice to the next.


Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at time 1. This defines the [[Transfer-matrix method|transfer matrix]]:
Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at time 1. This defines the [[Transfer-matrix method (statistical mechanics)|transfer matrix]]:
:<math>T_{C_1 C_2}.</math>
<math display="block">T_{C_1 C_2}.</math>


The configuration in each slice is a one-dimensional collection of spins. At each time slice, ''T'' has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are ''C''<sub>1</sub> and ''C''<sub>2</sub>, and they are all one-dimensional spin configurations. We can think of the vector space that ''T'' acts on as all complex linear combinations of these. Using quantum mechanical notation:
The configuration in each slice is a one-dimensional collection of spins. At each time slice, ''T'' has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are ''C''<sub>1</sub> and ''C''<sub>2</sub>, and they are all one-dimensional spin configurations. We can think of the vector space that ''T'' acts on as all complex linear combinations of these. Using quantum mechanical notation:
:<math>|A\rangle = \sum_S A(S) |S\rangle</math>
<math display="block">|A\rangle = \sum_S A(S) |S\rangle</math>


where each basis vector <math>|S\rangle</math> is a spin configuration of a one-dimensional Ising model.
where each basis vector <math>|S\rangle</math> is a spin configuration of a one-dimensional Ising model.


Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the [[Trace (linear algebra)|sum]] over all histories which come back to the original configuration after ''N'' steps:
Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the [[Trace (linear algebra)|sum]] over all histories which come back to the original configuration after ''N'' steps:
:<math>Z= \mathrm{tr}(T^N).</math>
<math display="block">Z= \mathrm{tr}(T^N).</math>


Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix ''T'', we can find ''Z''.
Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix ''T'', we can find ''Z''.


====''T'' in terms of Pauli matrices====
===== ''T'' in terms of Pauli matrices =====

The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i:
The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i:


:<math>\sigma^x_i.</math>
<math display="block">\sigma^x_i.</math>


In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.
In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.
Line 422: Line 509:
Define a second operator which multiplies the basis vector by +1 and −1 according to the spin at position ''i'':
Define a second operator which multiplies the basis vector by +1 and −1 according to the spin at position ''i'':


:<math>\sigma^z_i.</math>
<math display="block">\sigma^z_i.</math>


''T'' can be written in terms of these:
''T'' can be written in terms of these:


:<math>\sum_i A \sigma^x_i + B \sigma^z_i \sigma^z_{i+1}</math>
<math display="block">\sum_i A \sigma^x_i + B \sigma^z_i \sigma^z_{i+1}</math>


where ''A'' and ''B'' are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position ''i'' has flipped.
where ''A'' and ''B'' are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position ''i'' has flipped.


====Spin flip creation and annihilation operators====
===== Spin flip creation and annihilation operators =====

Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σ<sup>''z''</sup> term in ''T'' counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:
Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σ<sup>''z''</sup> term in ''T'' counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:


:<math> \sum C \psi^\dagger_i \psi_i. \,</math>
<math display="block"> \sum C \psi^\dagger_i \psi_i. \,</math>


The first term flips a spin, so depending on the basis state it either:
The first term flips a spin, so depending on the basis state it either:
Line 442: Line 530:


Writing this out in terms of creation and annihilation operators:
Writing this out in terms of creation and annihilation operators:
:<math> \sigma^x_i = D {\psi^\dagger}_i \psi_{i+1} + D^* {\psi^\dagger}_i \psi_{i-1} + C\psi_i \psi_{i+1} + C^* {\psi^\dagger}_i {\psi^\dagger}_{i+1}.</math>
<math display="block"> \sigma^x_i = D {\psi^\dagger}_i \psi_{i+1} + D^* {\psi^\dagger}_i \psi_{i-1} + C\psi_i \psi_{i+1} + C^* {\psi^\dagger}_i {\psi^\dagger}_{i+1}.</math>


Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the ''T'' matrix can be diagonalized by Fourier transforms.
Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the ''T'' matrix can be diagonalized by Fourier transforms.
Line 448: Line 536:
Carrying out the diagonalization produces the Onsager free energy.
Carrying out the diagonalization produces the Onsager free energy.


====Onsager's formula for spontaneous magnetization====
===== Onsager's formula for spontaneous magnetization =====

Onsager famously announced the following expression for the [[spontaneous magnetization]] ''M'' of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof<ref name="Montroll 1963 pages=308-309"/>
Onsager famously announced the following expression for the [[spontaneous magnetization]] ''M'' of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof<ref name="Montroll 1963 pages=308-309"/>

:<math>M = \left(1 - \left[\sinh 2\beta J_1 \sinh 2\beta J_2\right]^{-2}\right)^{\frac{1}{8}}</math>
<math display="block">M = \left(1 - \left[\sinh 2\beta J_1 \sinh 2\beta J_2\right]^{-2}\right)^{\frac{1}{8}}</math>

where <math>J_1</math> and <math>J_2</math> are horizontal and vertical interaction energies.
where <math>J_1</math> and <math>J_2</math> are horizontal and vertical interaction energies.


A complete derivation was only given in 1951 by {{harvtxt|Yang|1952}} using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward<ref name="Montroll 1963 pages=308-309"/> using [[Gábor Szegő|Szegő]]'s [[Szegő limit theorems|limit formula]] for [[Toeplitz determinant]]s by treating the magnetization as the limit of correlation functions.
A complete derivation was only given in 1951 by {{harvtxt|Yang|1952}} using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward<ref name="Montroll 1963 pages=308-309"/> using [[Gábor Szegő|Szegő]]'s [[Szegő limit theorems|limit formula]] for [[Toeplitz determinant]]s by treating the magnetization as the limit of correlation functions.


===Minimal model===
==== Minimal model ====
{{main|Two-dimensional critical Ising model}}
{{main|Two-dimensional critical Ising model}}


At the critical point, the two-dimensional Ising model is a [[two-dimensional conformal field theory]]. The spin and energy correlation functions are described by a [[Minimal model (physics)|minimal model]], which has been exactly solved.
At the critical point, the two-dimensional Ising model is a [[two-dimensional conformal field theory]]. The spin and energy correlation functions are described by a [[Minimal model (physics)|minimal model]], which has been exactly solved.


== Three dimensions ==
=== Three dimensions ===
In three as in two dimensions, the most studied case of the Ising model is the translation-invariant model on a cubic lattice with nearest-neighbor coupling in the zero magnetic field. Many theoreticians searched for an analytical three-dimensional solution for many decades, which would be analogous to Onsager's solution in the two-dimensional case.'''<ref>{{Cite web|last=Wood|first=Charlie|title=The Cartoon Picture of Magnets That Has Transformed Science|url=https://www.quantamagazine.org/the-cartoon-picture-of-magnets-that-has-transformed-science-20200624/|access-date=2020-06-26|website=Quanta Magazine|date=24 June 2020|language=en}}</ref>''' <ref>{{Cite web |title=Ken Wilson recalls how Murray Gell-Mann suggested that he solve the three-dimensional Ising model |url=https://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/renormalization/Wilson/index.htm}}</ref> Such a solution has not been found until now, although there is no proof that it may not exist.


In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic strings by [[Alexander Markovich Polyakov|Alexander Polyakov]] and [[Vladimir Dotsenko]]. This construction has been carried on the lattice, and the [[continuum limit]], conjecturally describing the critical point, is unknown.
In three as in two dimensions, the most studied case of the Ising model is the translation-invariant model on a cubic lattice with nearest-neighbor coupling in the zero magnetic field. Many theoreticians searched for an analytical three-dimensional solution for many decades, which would be analogous to Onsager's solution in the two-dimensional case.'''<ref>{{Cite web|last=Wood|first=Charlie|title=The Cartoon Picture of Magnets That Has Transformed Science|url=https://www.quantamagazine.org/the-cartoon-picture-of-magnets-that-has-transformed-science-20200624/|access-date=2020-06-26|website=Quanta Magazine|date=24 June 2020|language=en}}</ref>''' <ref>{{Cite web |title=Ken Wilson recalls how Murray Gell-Mann suggested that he solve the three-dimensional Ising model |url=https://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/renormalization/Wilson/index.htm}}</ref> Such a solution has not been found until now, although there is no proof that it may not exist. In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic strings by [[Alexander Markovich Polyakov|Alexander Polyakov]] and [[Vladimir Dotsenko]]. This construction has been carried on the lattice, and the [[continuum limit]], conjecturally describing the critical point, is unknown.


In three as in two dimensions, Peierls' argument shows that there is a phase transition. This phase transition is rigorously known to be continuous (in the sense that correlation length diverges and the magnetization goes to zero), and is called the [[Critical point (thermodynamics)|critical point]]. It is believed that the critical point can be described by a renormalization group fixed point of the Wilson-Kadanoff renormalization group transformation. It is also believed that the phase transition can be described by a three-dimensional unitary conformal field theory, as evidenced by [[Metropolis–Hastings algorithm|Monte Carlo]] simulations,<ref>{{Cite journal|last1=Billó|first1=M.|last2=Caselle|first2=M.|last3=Gaiotto|first3=D.|last4=Gliozzi|first4=F.|last5=Meineri|first5=M.|last6=others|date=2013|title=Line defects in the 3d Ising model|journal=JHEP|volume=1307|issue=7|pages=055|arxiv=1304.4110|bibcode=2013JHEP...07..055B|doi=10.1007/JHEP07(2013)055|s2cid=119226610}}</ref><ref>{{Cite journal|last1=Cosme|first1=Catarina|last2=Lopes|first2=J. M. Viana Parente|last3=Penedones|first3=Joao|date=2015|title=Conformal symmetry of the critical 3D Ising model inside a sphere|journal=Journal of High Energy Physics|volume=2015|issue=8|pages=22|arxiv=1503.02011|bibcode=2015JHEP...08..022C|doi=10.1007/JHEP08(2015)022|s2cid=53710971}}</ref> exact diagonalization results in quantum models,<ref>{{Cite journal |last1=Zhu |first1=Wei |last2=Han |first2=Chao |last3=Huffman |first3=Emilie |last4=Hofmann |first4=Johannes S. |last5=He |first5=Yin-Chen |date=2023 |title=Uncovering Conformal Symmetry in the 3D Ising Transition: State-Operator Correspondence from a Quantum Fuzzy Sphere Regularization |journal=Physical Review X |volume=13 |issue=2 |page=021009 |doi=10.1103/PhysRevX.13.021009 |arxiv=2210.13482|bibcode=2023PhRvX..13b1009Z |s2cid=253107625 }}</ref> and quantum field theoretical arguments.<ref>{{Cite journal|last1=Delamotte|first1=Bertrand|last2=Tissier|first2=Matthieu|last3=Wschebor|first3=Nicolás|year=2016|title=Scale invariance implies conformal invariance for the three-dimensional Ising model|journal=Physical Review E|volume=93|issue=12144|pages=012144|arxiv=1501.01776|bibcode=2016PhRvE..93a2144D|doi=10.1103/PhysRevE.93.012144|pmid=26871060|s2cid=14538564}}</ref> Although it is an open problem to establish rigorously the renormalization group picture or the conformal field theory picture, theoretical physicists have used these two methods to compute the [[critical exponents]] of the phase transition, which agree with the experiments and with the Monte Carlo simulations. This conformal field theory describing the three-dimensional Ising critical point is under active investigation using the method of the [[conformal bootstrap]].<ref>{{Cite journal|last1=El-Showk|first1=Sheer|last2=Paulos|first2=Miguel F.|last3=Poland|first3=David|last4=Rychkov|first4=Slava|last5=Simmons-Duffin|first5=David|last6=Vichi|first6=Alessandro|date=2012|title=Solving the 3D Ising Model with the Conformal Bootstrap|journal=Phys. Rev.|volume=D86|issue=2|pages=025022|arxiv=1203.6064|bibcode=2012PhRvD..86b5022E|doi=10.1103/PhysRevD.86.025022|s2cid=39692193}}</ref><ref name="cmin">{{Cite journal|last1=El-Showk|first1=Sheer|last2=Paulos|first2=Miguel F.|last3=Poland|first3=David|last4=Rychkov|first4=Slava|last5=Simmons-Duffin|first5=David|last6=Vichi|first6=Alessandro|date=2014|title=Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents|journal=Journal of Statistical Physics|volume=157|issue=4–5|pages=869–914|arxiv=1403.4545|bibcode=2014JSP...157..869E|doi=10.1007/s10955-014-1042-7|s2cid=119627708}}</ref><ref name="SDPB">{{Cite journal|last=Simmons-Duffin|first=David|date=2015|title=A semidefinite program solver for the conformal bootstrap|journal=Journal of High Energy Physics|volume=2015|issue=6|pages=174|arxiv=1502.02033|bibcode=2015JHEP...06..174S|doi=10.1007/JHEP06(2015)174|issn=1029-8479|s2cid=35625559}}</ref><ref name="Kadanoff">{{cite journal |last=Kadanoff|first=Leo P.|date=April 30, 2014|title=Deep Understanding Achieved on the 3d Ising Model|url=http://www.condmatjournalclub.org/?p=2384|url-status=dead|archive-url=https://web.archive.org/web/20150722062827/http://www.condmatjournalclub.org/?p=2384|archive-date=July 22, 2015|access-date=July 19, 2015|journal=Journal Club for Condensed Matter Physics}}</ref> This method currently yields the most precise information about the structure of the critical theory (see [[Ising critical exponents]]).
=== Phase transition ===
In three as in two dimensions, Peierl's argument shows that there is a phase transition. This phase transition is rigorously known to be continuous (in the sense that correlation length diverges and the magnetization goes to zero), and is called the [[Critical point (thermodynamics)|critical point]]. It is believed that the critical point can be described by a renormalization group fixed point of the Wilson-Kadanoff renormalization group transformation. It is also believed that the phase transition can be described by a three-dimensional unitary conformal field theory, as evidenced by [[Metropolis–Hastings algorithm|Monte Carlo]] simulations,<ref>{{Cite journal|last1=Billó|first1=M.|last2=Caselle|first2=M.|last3=Gaiotto|first3=D.|last4=Gliozzi|first4=F.|last5=Meineri|first5=M.|last6=others|date=2013|title=Line defects in the 3d Ising model|journal=JHEP|volume=1307|issue=7|pages=055|arxiv=1304.4110|bibcode=2013JHEP...07..055B|doi=10.1007/JHEP07(2013)055|s2cid=119226610}}</ref><ref>{{Cite journal|last1=Cosme|first1=Catarina|last2=Lopes|first2=J. M. Viana Parente|last3=Penedones|first3=Joao|date=2015|title=Conformal symmetry of the critical 3D Ising model inside a sphere|journal=Journal of High Energy Physics|volume=2015|issue=8|pages=22|arxiv=1503.02011|bibcode=2015JHEP...08..022C|doi=10.1007/JHEP08(2015)022|s2cid=53710971}}</ref> exact diagonalization results in quantum models,<ref>{{Cite journal |last1=Zhu |first1=Wei |last2=Han |first2=Chao |last3=Huffman |first3=Emilie |last4=Hofmann |first4=Johannes S. |last5=He |first5=Yin-Chen |date=2023 |title=Uncovering Conformal Symmetry in the 3D Ising Transition: State-Operator Correspondence from a Quantum Fuzzy Sphere Regularization |journal=Physical Review X |volume=13 |issue=2 |page=021009 |doi=10.1103/PhysRevX.13.021009 |arxiv=2210.13482}}</ref> and quantum field theoretical arguments.<ref>{{Cite journal|last1=Delamotte|first1=Bertrand|last2=Tissier|first2=Matthieu|last3=Wschebor|first3=Nicolás|year=2016|title=Scale invariance implies conformal invariance for the three-dimensional Ising model|journal=Physical Review E|volume=93|issue=12144|pages=012144|arxiv=1501.01776|bibcode=2016PhRvE..93a2144D|doi=10.1103/PhysRevE.93.012144|pmid=26871060|s2cid=14538564}}</ref> Although it is an open problem to establish rigorously the renormalization group picture or the conformal field theory picture, theoretical physicists have used these two methods to compute the [[critical exponents]] of the phase transition, which agree with the experiments and with the Monte Carlo simulations.


In 2000, [[Sorin Istrail]] of [[Sandia National Laboratories]] proved that the spin glass Ising model on a [[nonplanar]] lattice is [[NP-completeness|NP-complete]]. That is, assuming '''P''' ≠ '''NP,''' the general spin glass Ising model is exactly solvable only in [[Planar graph|planar]] cases, so solutions for dimensions higher than two are also intractable.<ref>{{cite journal |last=Cipra |first=Barry A. |year=2000 |title=The Ising Model Is NP-Complete |url=https://archive.siam.org/pdf/news/654.pdf |journal=SIAM News |volume=33 |issue=6}}</ref> Istrail's result only concerns the spin glass model with spatially varying couplings, and tells nothing about Ising's original ferromagnetic model with equal couplings.
This conformal field theory describing the three-dimensional Ising critical point is under active investigation using the method of the [[conformal bootstrap]].<ref>{{Cite journal|last1=El-Showk|first1=Sheer|last2=Paulos|first2=Miguel F.|last3=Poland|first3=David|last4=Rychkov|first4=Slava|last5=Simmons-Duffin|first5=David|last6=Vichi|first6=Alessandro|date=2012|title=Solving the 3D Ising Model with the Conformal Bootstrap|journal=Phys. Rev.|volume=D86|issue=2|pages=025022|arxiv=1203.6064|bibcode=2012PhRvD..86b5022E|doi=10.1103/PhysRevD.86.025022|s2cid=39692193}}</ref><ref name="cmin">{{Cite journal|last1=El-Showk|first1=Sheer|last2=Paulos|first2=Miguel F.|last3=Poland|first3=David|last4=Rychkov|first4=Slava|last5=Simmons-Duffin|first5=David|last6=Vichi|first6=Alessandro|date=2014|title=Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents|journal=Journal of Statistical Physics|volume=157|issue=4–5|pages=869–914|arxiv=1403.4545|bibcode=2014JSP...157..869E|doi=10.1007/s10955-014-1042-7|s2cid=119627708}}</ref><ref name="SDPB">{{Cite journal|last=Simmons-Duffin|first=David|date=2015|title=A semidefinite program solver for the conformal bootstrap|journal=Journal of High Energy Physics|volume=2015|issue=6|pages=174|arxiv=1502.02033|bibcode=2015JHEP...06..174S|doi=10.1007/JHEP06(2015)174|issn=1029-8479|s2cid=35625559}}</ref><ref name="Kadanoff">{{cite journal |last=Kadanoff|first=Leo P.|date=April 30, 2014|title=Deep Understanding Achieved on the 3d Ising Model|url=http://www.condmatjournalclub.org/?p=2384|url-status=dead|archive-url=https://web.archive.org/web/20150722062827/http://www.condmatjournalclub.org/?p=2384|archive-date=July 22, 2015|access-date=July 19, 2015|journal=Journal Club for Condensed Matter Physics}}</ref> This method currently yields the most precise information about the structure of the critical theory (see [[Ising critical exponents]]).


=== Four dimensions and above ===
=== Istrail's NP-completeness result for the general spin glass model ===
{{unreferenced|section|date=November 2024}}
In 2000, [[Sorin Istrail]] of [[Sandia National Laboratories]] proved that the spin glass Ising model on a [[nonplanar]] lattice is [[NP-completeness|NP-complete]]. That is, assuming '''P''' ≠ '''NP,''' the general spin glass Ising model is exactly solvable only in [[Planar graph|planar]] cases, so solutions for dimensions higher that two are also intractable.<ref>{{cite journal |last=Cipra |first=Barry A. |year=2000 |title=The Ising Model Is NP-Complete |url=https://archive.siam.org/pdf/news/654.pdf |journal=SIAM News |volume=33 |issue=6}}</ref> Istrail's result only concerns the spin glass model with spatially varying couplings, and tells nothing about Ising's original ferromagnetic model with equal couplings.
{{overly detailed|section|date=November 2024}}


==Four dimensions and above==
In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large so as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.
In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large so as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.


===Local field===
==== Local field ====

The field ''H'' is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how high wavelengths are cut off. The details are not too important, since the goal is to find the statistics of ''H'' and not the spins. Once the correlations in ''H'' are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in ''H''.
The field ''H'' is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how high wavelengths are cut off. The details are not too important, since the goal is to find the statistics of ''H'' and not the spins. Once the correlations in ''H'' are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in ''H''.


Line 485: Line 575:
By symmetry in ''H'', only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:
By symmetry in ''H'', only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:


:<math>\beta F = \int d^dx \left[ A H^2 + \sum_{i=1}^d Z_i (\partial_i H)^2 + \lambda H^4 +\cdots \right].</math>
<math display="block">\beta F = \int d^dx \left[ A H^2 + \sum_{i=1}^d Z_i (\partial_i H)^2 + \lambda H^4 +\cdots \right].</math>


On a square lattice, symmetries guarantee that the coefficients ''Z<sub>i</sub>'' of the derivative terms are all equal. But even for an anisotropic Ising model, where the ''Z<sub>i</sub>''{{'}}s in different directions are different, the fluctuations in ''H'' are isotropic in a coordinate system where the different directions of space are rescaled.
On a square lattice, symmetries guarantee that the coefficients ''Z<sub>i</sub>'' of the derivative terms are all equal. But even for an anisotropic Ising model, where the ''Z<sub>i</sub>''{{'}}s in different directions are different, the fluctuations in ''H'' are isotropic in a coordinate system where the different directions of space are rescaled.


On any lattice, the derivative term
On any lattice, the derivative term
:<math>Z_{ij} \, \partial_i H \, \partial_j H </math>
<math display="block">Z_{ij} \, \partial_i H \, \partial_j H </math>
is a positive definite [[quadratic form]], and can be used to ''define'' the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make ''Z<sub>ij</sub>'' = δ<sub>''ij''</sub>. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low order terms. At higher order multicritical points, this [[accidental symmetry]] is lost.
is a positive definite [[quadratic form]], and can be used to ''define'' the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make ''Z<sub>ij</sub>'' = δ<sub>''ij''</sub>. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low order terms. At higher order multicritical points, this [[accidental symmetry]] is lost.


Since β''F'' is a function of a slowly spatially varying field, the probability of any field configuration is (omitting higher-order terms):
Since β''F'' is a function of a slowly spatially varying field, the probability of any field configuration is (omitting higher-order terms):


:<math>P(H) \propto e^{ - \int d^dx \left[ AH^2 + Z |\nabla H|^2 + \lambda H^4 \right]} = e^{-\beta F[H]}. </math>
<math display="block">P(H) \propto e^{ - \int d^dx \left[ AH^2 + Z |\nabla H|^2 + \lambda H^4 \right]} = e^{-\beta F[H]}. </math>


The statistical average of any product of ''H'' terms is equal to:
The statistical average of any product of ''H'' terms is equal to:


:<math>\langle H(x_1) H(x_2)\cdots H(x_n) \rangle = { \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]} H(x_1) H(x_2) \cdots H(x_n) \over \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]} }.</math>
<math display="block">\langle H(x_1) H(x_2)\cdots H(x_n) \rangle = { \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]} H(x_1) H(x_2) \cdots H(x_n) \over \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]} }.</math>


The denominator in this expression is called the ''partition function'':<math display="block">Z = \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]}</math>and the integral over all possible values of ''H'' is a statistical path integral. It integrates exp(β''F'') over all values of ''H'', over all the long wavelength fourier components of the spins. ''F'' is a "Euclidean" Lagrangian for the field ''H''. It is similar to the Lagrangian in of a scalar field in [[quantum field theory]], the difference being that all the derivative terms enter with a positive sign, and there is no overall factor of ''i'' (thus "Euclidean").
The denominator in this expression is called the ''partition function'':<math display="block">Z = \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]}</math>and the integral over all possible values of ''H'' is a statistical path integral. It integrates exp(β''F'') over all values of ''H'', over all the long wavelength fourier components of the spins. ''F'' is a "Euclidean" Lagrangian for the field ''H''. It is similar to the Lagrangian in of a scalar field in [[quantum field theory]], the difference being that all the derivative terms enter with a positive sign, and there is no overall factor of ''i'' (thus "Euclidean").


===Dimensional analysis===
==== Dimensional analysis ====

The form of ''F'' can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of ''H'' needs to be determined.
The form of ''F'' can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of ''H'' needs to be determined.


In the generic case, choosing the scaling law for ''H'' is easy, since the only term that contributes is the first one,
In the generic case, choosing the scaling law for ''H'' is easy, since the only term that contributes is the first one,


:<math>F = \int d^dx \, A H^2.</math>
<math display="block">F = \int d^dx \, A H^2.</math>


This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of ''H'' at any point fluctuates completely independently of the value at any other point.
This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of ''H'' at any point fluctuates completely independently of the value at any other point.
Line 516: Line 607:
To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in ''H'' go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing ''A'' in the system of units where ''H'' does not absorb ''A''. The phase transition can only happen when the subleading terms in ''F'' can contribute, but since the first term dominates at long distances, the coefficient ''A'' must be tuned to zero. This is the location of the critical point:
To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in ''H'' go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing ''A'' in the system of units where ''H'' does not absorb ''A''. The phase transition can only happen when the subleading terms in ''F'' can contribute, but since the first term dominates at long distances, the coefficient ''A'' must be tuned to zero. This is the location of the critical point:


:<math>F= \int d^dx \left[ t H^2 + \lambda H^4 + Z (\nabla H)^2 \right],</math>
<math display="block">F= \int d^dx \left[ t H^2 + \lambda H^4 + Z (\nabla H)^2 \right],</math>


where ''t'' is a parameter which goes through zero at the transition.
where ''t'' is a parameter which goes through zero at the transition.
Line 522: Line 613:
Since ''t'' is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once ''t'' is small, the scale of the field can either be set to fix the coefficient of the ''H''<sup>4</sup> term or the (∇''H'')<sup>2</sup> term to 1.
Since ''t'' is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once ''t'' is small, the scale of the field can either be set to fix the coefficient of the ''H''<sup>4</sup> term or the (∇''H'')<sup>2</sup> term to 1.


===Magnetization===
==== Magnetization ====

To find the magnetization, fix the scaling of ''H'' so that λ is one. Now the field ''H'' has dimension −''d''/4, so that ''H''<sup>4</sup>''d<sup>d</sup>x'' is dimensionless, and ''Z'' has dimension 2&nbsp;−&nbsp;''d''/2. In this scaling, the gradient term is only important at long distances for ''d'' ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.
To find the magnetization, fix the scaling of ''H'' so that λ is one. Now the field ''H'' has dimension −''d''/4, so that ''H''<sup>4</sup>''d<sup>d</sup>x'' is dimensionless, and ''Z'' has dimension 2&nbsp;−&nbsp;''d''/2. In this scaling, the gradient term is only important at long distances for ''d'' ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.


There is one subtle point. The field ''H'' is fluctuating statistically, and the fluctuations can shift the zero point of ''t''. To see how, consider ''H''<sup>4</sup> split in the following way:
There is one subtle point. The field ''H'' is fluctuating statistically, and the fluctuations can shift the zero point of ''t''. To see how, consider ''H''<sup>4</sup> split in the following way:


:<math>H(x)^4 = -\langle H(x)^2\rangle^2 + 2\langle H(x)^2\rangle H(x)^2 + \left(H(x)^2 - \langle H(x)^2\rangle\right)^2</math>
<math display="block">H(x)^4 = -\langle H(x)^2\rangle^2 + 2\langle H(x)^2\rangle H(x)^2 + \left(H(x)^2 - \langle H(x)^2\rangle\right)^2</math>


The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in ''t''. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of ''t'' by dimensional analysis, it is the shifted ''t'' that is important. This was historically very confusing, because the shift in ''t'' at any finite ''λ'' is finite, but near the transition ''t'' is very small. The fractional change in ''t'' is very large, and in units where ''t'' is fixed the shift looks infinite.
The first term is a constant contribution to the free energy, and can be ignored. The second term is a [[Subshift of finite type|finite shift]] in ''t''. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of ''t'' by dimensional analysis, it is the shifted ''t'' that is important. This was historically very confusing, because the shift in ''t'' at any finite ''λ'' is finite, but near the transition ''t'' is very small. The fractional change in ''t'' is very large, and in units where ''t'' is fixed the shift looks infinite.


The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted ''t'',
The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted ''t'',


:<math>{\partial \over \partial H } \left( t H^2 + \lambda H^4 \right ) = 2t H + 4\lambda H^3 = 0</math>
<math display="block">{\partial \over \partial H } \left( t H^2 + \lambda H^4 \right ) = 2t H + 4\lambda H^3 = 0</math>


For ''t'' < 0, the minima are at ''H'' proportional to the square root of ''t''. So Landau's [[catastrophe theory|catastrophe]] argument is correct in dimensions larger than 5. The magnetization exponent in dimensions higher than 5 is equal to the mean-field value.
For ''t'' < 0, the minima are at ''H'' proportional to the square root of ''t''. So Landau's [[catastrophe theory|catastrophe]] argument is correct in dimensions larger than 5. The magnetization exponent in dimensions higher than 5 is equal to the mean-field value.
Line 539: Line 631:
When ''t'' is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.
When ''t'' is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.


===Fluctuations===
==== Fluctuations ====

To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1&nbsp;−&nbsp;''d''/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the ''H''<sup>2</sup> term is 2, while the scale dimension of the ''H''<sup>4</sup> term is 4&nbsp;−&nbsp;''d''. For ''d'' < 4, the ''H''<sup>4</sup> term has positive scale dimension. In dimensions higher than 4 it has negative scale dimensions.
To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1&nbsp;−&nbsp;''d''/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the ''H''<sup>2</sup> term is 2, while the scale dimension of the ''H''<sup>4</sup> term is 4&nbsp;−&nbsp;''d''. For ''d'' < 4, the ''H''<sup>4</sup> term has positive scale dimension. In dimensions higher than 4 it has negative scale dimensions.


Line 546: Line 639:
In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable from as [[Gaussian distribution|Gaussian]] averages:
In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable from as [[Gaussian distribution|Gaussian]] averages:


:<math>\langle S(x)S(y)\rangle \propto \langle H(x)H(y)\rangle = G(x-y) = \int {dk \over (2\pi)^d} { e^{ik(x-y)}\over k^2 + t }</math>
<math display="block">\langle S(x)S(y)\rangle \propto \langle H(x)H(y)\rangle = G(x-y) = \int {dk \over (2\pi)^d} { e^{ik(x-y)}\over k^2 + t }</math>


valid when ''x''&nbsp;−&nbsp;''y'' is large. The function ''G''(''x''&nbsp;−&nbsp;''y'') is the analytic continuation to imaginary time of the [[propagator|Feynman propagator]], since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by [[S-matrix#Wick's theorem|Wick's theorem]]. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partition into pairs of the product of ''G''(''x''&nbsp;−&nbsp;''y'') for each pair.
valid when ''x''&nbsp;−&nbsp;''y'' is large. The function ''G''(''x''&nbsp;−&nbsp;''y'') is the analytic continuation to imaginary time of the [[propagator|Feynman propagator]], since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by [[S-matrix#Wick's theorem|Wick's theorem]]. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partition into pairs of the product of ''G''(''x''&nbsp;−&nbsp;''y'') for each pair.


:<math>\langle S(x_1) S(x_2) \cdots S(x_{2n})\rangle = C^n \sum G(x_{i1},x_{j1}) G(x_{i2},x_{j2}) \ldots G(x_{in},x_{jn})</math>
<math display="block">\langle S(x_1) S(x_2) \cdots S(x_{2n})\rangle = C^n \sum G(x_{i1},x_{j1}) G(x_{i2},x_{j2}) \ldots G(x_{in},x_{jn})</math>


where ''C'' is the proportionality constant. So knowing ''G'' is enough. It determines all the multipoint correlations of the field.
where ''C'' is the proportionality constant. So knowing ''G'' is enough. It determines all the multipoint correlations of the field.


===The critical two-point function===
==== The critical two-point function ====

To determine the form of ''G'', consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:
To determine the form of ''G'', consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:


:<math>\begin{align}
<math display="block">\begin{align}
&&\left(-\nabla_x^2 + t\right) \langle H(x)H(y) \rangle &= 0 \\
&&\left(-\nabla_x^2 + t\right) \langle H(x)H(y) \rangle &= 0 \\
\rightarrow {} && \nabla^2 G(x) + tG(x) &= 0
\rightarrow {} && \nabla^2 G(x) + tG(x) &= 0
Line 566: Line 660:
At the critical point ''t'' = 0, this is [[Laplace's equation]], which can be solved by [[Gaussian surface|Gauss's method]] from electrostatics. Define an electric field analog by
At the critical point ''t'' = 0, this is [[Laplace's equation]], which can be solved by [[Gaussian surface|Gauss's method]] from electrostatics. Define an electric field analog by


:<math>E = \nabla G</math>
<math display="block">E = \nabla G</math>


Away from the origin:
Away from the origin:


:<math>\nabla \cdot E = 0</math>
<math display="block">\nabla \cdot E = 0</math>


since ''G'' is spherically symmetric in ''d'' dimensions, and ''E'' is the radial gradient of ''G''. Integrating over a large ''d''&nbsp;−&nbsp;1 dimensional sphere,
since ''G'' is spherically symmetric in ''d'' dimensions, and ''E'' is the radial gradient of ''G''. Integrating over a large ''d''&nbsp;−&nbsp;1 dimensional sphere,


:<math>\int d^{d-1}S E_r = \mathrm{constant}</math>
<math display="block">\int d^{d-1}S E_r = \mathrm{constant}</math>


This gives:
This gives:


:<math>E = {C \over r^{d-1} }</math>
<math display="block">E = {C \over r^{d-1} }</math>


and ''G'' can be found by integrating with respect to ''r''.
and ''G'' can be found by integrating with respect to ''r''.


:<math>G(r) = {C \over r^{d-2} }</math>
<math display="block">G(r) = {C \over r^{d-2} }</math>


The constant ''C'' fixes the overall normalization of the field.
The constant ''C'' fixes the overall normalization of the field.


===''G''(''r'') away from the critical point===
==== ''G''(''r'') away from the critical point ====

When ''t'' does not equal zero, so that ''H'' is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:
When ''t'' does not equal zero, so that ''H'' is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:


:<math>\nabla^2 G + t G = 0 \to {1 \over r^{d - 1}} {d \over dr} \left( r^{d-1} {dG \over dr} \right) + t G(r) = 0</math>
<math display="block">\nabla^2 G + t G = 0 \to {1 \over r^{d - 1}} {d \over dr} \left( r^{d-1} {dG \over dr} \right) + t G(r) = 0</math>


For ''r'' small compared with <math>\sqrt{t}</math>, the solution diverges exactly the same way as in the critical case, but the long distance behavior is modified.
For ''r'' small compared with <math>\sqrt{t}</math>, the solution diverges exactly the same way as in the critical case, but the long distance behavior is modified.
Line 595: Line 690:
To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:
To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:


:<math>G(x) = \int d\tau {1 \over \left(\sqrt{2\pi\tau}\right)^d} e^{-{x^2 \over 4\tau} - t\tau}</math>
<math display="block">G(x) = \int d\tau {1 \over \left(\sqrt{2\pi\tau}\right)^d} e^{-{x^2 \over 4\tau} - t\tau}</math>


This is ''G'', since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in ''x'', whose Fourier transform is another Gaussian of reciprocal width in ''k''.
This is ''G'', since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in ''x'', whose Fourier transform is another Gaussian of reciprocal width in ''k''.


:<math>G(k) = \int d\tau e^{-(k^2 - t)\tau} = {1 \over k^2 - t}</math>
<math display="block">G(k) = \int d\tau e^{-(k^2 - t)\tau} = {1 \over k^2 - t}</math>


This is the inverse of the operator ∇<sup>2</sup>&nbsp;−&nbsp;''t'' in ''k''-space, acting on the unit function in ''k''-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as ''G'' with the same boundary conditions that determine the strength of the divergence at 0.
This is the inverse of the operator ∇<sup>2</sup>&nbsp;−&nbsp;''t'' in ''k''-space, acting on the unit function in ''k''-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as ''G'' with the same boundary conditions that determine the strength of the divergence at 0.
Line 609: Line 704:
A heuristic approximation for ''G''(''r'') is:
A heuristic approximation for ''G''(''r'') is:


:<math>G(r) \approx { e^{-\sqrt t r} \over r^{d-2}}</math>
<math display="block">G(r) \approx { e^{-\sqrt t r} \over r^{d-2}}</math>


This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of [[Bessel functions]].
This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of [[Bessel functions]].


===Symanzik polymer interpretation===
==== Symanzik polymer interpretation ====

The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the ''H''<sup>4</sup> interaction is 4. The term ''H''<sup>4</sup> can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the ''H''<sup>2</sup> coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.
The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the ''H''<sup>4</sup> interaction is 4. The term ''H''<sup>4</sup> can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the ''H''<sup>2</sup> coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.


The [[fractal dimension]] of an ordinary random walk is 2. The number of balls of size ε required to cover the path increase as ε<sup>−2</sup>. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. [[Kurt Symanzik]] argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.
The [[fractal dimension]] of an ordinary random walk is 2. The number of balls of size ε required to cover the path increase as ε<sup>−2</sup>. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. [[Kurt Symanzik]] argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.


===4&nbsp;−&nbsp;''ε'' dimensions – renormalization group===
==== 4&nbsp;−&nbsp;''ε'' dimensions – renormalization group ====

The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.
The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.


The negative logarithm of the probability of any field configuration ''H'' is the [[Thermodynamic free energy|free energy]] function
The negative logarithm of the probability of any field configuration ''H'' is the [[Thermodynamic free energy|free energy]] function


:<math>F= \int d^4 x \left[ {Z \over 2} |\nabla H|^2 + {t\over 2} H^2 + {\lambda \over 4!} H^4 \right] \,</math>
<math display="block">F= \int d^4 x \left[ {Z \over 2} |\nabla H|^2 + {t\over 2} H^2 + {\lambda \over 4!} H^4 \right] \,</math>


The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a [[Feynman diagram|Feynman expansion]] as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.
The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a [[Feynman diagram|Feynman expansion]] as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.
Line 631: Line 728:
The reason is that there is a cutoff used to define ''H'', and the cutoff defines the shortest wavelength. Fluctuations of ''H'' at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.
The reason is that there is a cutoff used to define ''H'', and the cutoff defines the shortest wavelength. Fluctuations of ''H'' at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.


====Wilson renormalization====
===== Wilson renormalization =====

A quick heuristic way of studying the scaling is to cut off the ''H'' wavenumbers at a point λ. Fourier modes of ''H'' with wavenumbers larger than λ are not allowed to fluctuate. A rescaling of length that make the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.
A quick heuristic way of studying the scaling is to cut off the ''H'' wavenumbers at a point λ. Fourier modes of ''H'' with wavenumbers larger than λ are not allowed to fluctuate. A rescaling of length that make the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.


Line 640: Line 738:
The lowest order effect of integrating out can be calculated from the equations of motion:
The lowest order effect of integrating out can be calculated from the equations of motion:


:<math>\nabla^2 H + t H = - {\lambda \over 6} H^3.</math>
<math display="block">\nabla^2 H + t H = - {\lambda \over 6} H^3.</math>


This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < ''k'' < (1+''b'')Λ, it will be a slightly different identity.
This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < ''k'' < (1+''b'')Λ, it will be a slightly different identity.
Line 646: Line 744:
Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the ''H''<sup>3</sup> term. In a Feynman diagram expansion, the ''H''<sup>3</sup> term in a correlation function inside a correlation has three dangling lines. Joining two of them at large wavenumber ''k'' gives a change ''H''<sup>3</sup> with one dangling line, so proportional to ''H'':
Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the ''H''<sup>3</sup> term. In a Feynman diagram expansion, the ''H''<sup>3</sup> term in a correlation function inside a correlation has three dangling lines. Joining two of them at large wavenumber ''k'' gives a change ''H''<sup>3</sup> with one dangling line, so proportional to ''H'':


:<math>\delta H^3 = 3H \int_{\Lambda<|k|<(1 + b)\Lambda} {d^4k \over (2\pi)^4} {1\over (k^2 + t)}</math>
<math display="block">\delta H^3 = 3H \int_{\Lambda<|k|<(1 + b)\Lambda} {d^4k \over (2\pi)^4} {1\over (k^2 + t)}</math>


The factor of 3 comes from the fact that the loop can be closed in three different ways.
The factor of 3 comes from the fact that the loop can be closed in three different ways.
Line 652: Line 750:
The integral should be split into two parts:
The integral should be split into two parts:


:<math>\int dk {1 \over k^2} - t \int dk { 1\over k^2(k^2 + t)} = A\Lambda^2 b + B b t</math>
<math display="block">\int dk {1 \over k^2} - t \int dk { 1\over k^2(k^2 + t)} = A\Lambda^2 b + B b t</math>


The first part is not proportional to ''t'', and in the equation of motion it can be absorbed by a constant shift in ''t''. It is caused by the fact that the ''H''<sup>3</sup> term has a linear part. Only the second term, which varies from ''t'' to ''t'', contributes to the critical scaling.
The first part is not proportional to ''t'', and in the equation of motion it can be absorbed by a constant shift in ''t''. It is caused by the fact that the ''H''<sup>3</sup> term has a linear part. Only the second term, which varies from ''t'' to ''t'', contributes to the critical scaling.
Line 658: Line 756:
This new linear term adds to the first term on the left hand side, changing ''t'' by an amount proportional to ''t''. The total change in ''t'' is the sum of the term from dimensional analysis and this second term from [[operator product expansion|operator products]]:
This new linear term adds to the first term on the left hand side, changing ''t'' by an amount proportional to ''t''. The total change in ''t'' is the sum of the term from dimensional analysis and this second term from [[operator product expansion|operator products]]:


:<math>\delta t = \left(2 - {B\lambda \over 2} \right)b t</math>
<math display="block">\delta t = \left(2 - {B\lambda \over 2} \right)b t</math>


So ''t'' is rescaled, but its dimension is [[anomalous dimension|anomalous]], it is changed by an amount proportional to the value of λ.
So ''t'' is rescaled, but its dimension is [[anomalous dimension|anomalous]], it is changed by an amount proportional to the value of λ.
Line 664: Line 762:
But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from ''H''<sup>3</sup> splits into three, which quickly joins with one of the other lines from the same vertex. The correction to the vertex is
But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from ''H''<sup>3</sup> splits into three, which quickly joins with one of the other lines from the same vertex. The correction to the vertex is


:<math>\delta \lambda = - {3 \lambda^2 \over 2} \int_k dk {1 \over (k^2 + t)^2} = -{3\lambda^2 \over 2} b</math>
<math display="block">\delta \lambda = - {3 \lambda^2 \over 2} \int_k dk {1 \over (k^2 + t)^2} = -{3\lambda^2 \over 2} b</math>


The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract. So
The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract. So


:<math>\delta \lambda = - 3 B \lambda^2 b</math>
<math display="block">\delta \lambda = - 3 B \lambda^2 b</math>


These two equations together define the renormalization group equations in four dimensions:
These two equations together define the renormalization group equations in four dimensions:


:<math>\begin{align}
<math display="block">\begin{align}
{dt \over t} &= \left(2 - {B\lambda \over 2}\right) b \\
{dt \over t} &= \left(2 - {B\lambda \over 2}\right) b \\
{d\lambda \over \lambda} &= {-3 B \lambda \over 2} b
{d\lambda \over \lambda} &= {-3 B \lambda \over 2} b
Line 678: Line 776:


The coefficient ''B'' is determined by the formula
The coefficient ''B'' is determined by the formula
:<math>B b = \int_{\Lambda<|k|<(1+b)\Lambda} {d^4k\over (2\pi)^4} {1 \over k^4}</math>
<math display="block">B b = \int_{\Lambda<|k|<(1+b)\Lambda} {d^4k\over (2\pi)^4} {1 \over k^4}</math>


and is proportional to the area of a three-dimensional sphere of radius λ, times the width of the integration region ''b''Λ divided by Λ<sup>4</sup>:
and is proportional to the area of a three-dimensional sphere of radius λ, times the width of the integration region ''b''Λ divided by Λ<sup>4</sup>:
:<math>B= (2 \pi^2 \Lambda^3) {1\over (2\pi)^4} { b \Lambda} {1 \over b\Lambda^4} = {1\over 8\pi^2} </math>
<math display="block">B= (2 \pi^2 \Lambda^3) {1\over (2\pi)^4} { b \Lambda} {1 \over b\Lambda^4} = {1\over 8\pi^2} </math>


In other dimensions, the constant ''B'' changes, but the same constant appears both in the ''t'' flow and in the coupling flow. The reason is that the derivative with respect to ''t'' of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the ''t'' is the combinatorial factors from joining and splitting.
In other dimensions, the constant ''B'' changes, but the same constant appears both in the ''t'' flow and in the coupling flow. The reason is that the derivative with respect to ''t'' of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the ''t'' is the combinatorial factors from joining and splitting.


====Wilson–Fisher fixed point====
===== Wilson–Fisher fixed point =====

To investigate three dimensions starting from the four-dimensional theory should be possible, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling does not change very much when the dimension is changed.
To investigate three dimensions starting from the four-dimensional theory should be possible, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling does not change very much when the dimension is changed.


The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it. The prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4&nbsp;−&nbsp;ε defined by:
The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it. The prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4&nbsp;−&nbsp;ε defined by:
:<math> G(x-y) = \int d\tau {1 \over t^{d\over 2}} e^{{x^2 \over 2\tau} + t \tau} </math>
<math display="block"> G(x-y) = \int d\tau {1 \over t^{d\over 2}} e^{{x^2 \over 2\tau} + t \tau} </math>


In dimension 4&nbsp;−&nbsp;ε, the coupling λ has positive scale dimension ε, and this must be added to the flow.
In dimension 4&nbsp;−&nbsp;ε, the coupling λ has positive scale dimension ε, and this must be added to the flow.


:<math>\begin{align}
<math display="block">\begin{align}
{d\lambda \over \lambda} &= \varepsilon - 3 B \lambda \\
{d\lambda \over \lambda} &= \varepsilon - 3 B \lambda \\
{dt \over t} &= 2 - \lambda B
{dt \over t} &= 2 - \lambda B
Line 699: Line 798:


The coefficient ''B'' is dimension dependent, but it will cancel. The fixed point for λ is no longer zero, but at:
The coefficient ''B'' is dimension dependent, but it will cancel. The fixed point for λ is no longer zero, but at:
:<math>\lambda = {\varepsilon \over 3B} </math>
<math display="block">\lambda = {\varepsilon \over 3B} </math>
where the scale dimensions of ''t'' is altered by an amount λ''B'' = ε/3.
where the scale dimensions of ''t'' is altered by an amount λ''B'' = ε/3.


The magnetization exponent is altered proportionately to:
The magnetization exponent is altered proportionately to:
:<math>\tfrac{1}{2} \left( 1 - {\varepsilon \over 3}\right)</math>
<math display="block">\tfrac{1}{2} \left( 1 - {\varepsilon \over 3}\right)</math>


which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.
which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.


===Infinite dimensions – mean field===
==== Infinite dimensions – mean field ====
{{Main|Mean-field theory}}
{{Main|Mean-field theory}}


Line 715: Line 814:


This energy cost gives the ratio of probability ''p'' that the spin is + to the probability 1−''p'' that the spin is&nbsp;−. This ratio is the Boltzmann factor:
This energy cost gives the ratio of probability ''p'' that the spin is + to the probability 1−''p'' that the spin is&nbsp;−. This ratio is the Boltzmann factor:
:<math>{p\over 1-p} = e^{2\beta JH}</math>
<math display="block">{p\over 1-p} = e^{2\beta JH}</math>


so that
so that
:<math>p = {1 \over 1 + e^{-2\beta JH} }</math>
<math display="block">p = {1 \over 1 + e^{-2\beta JH} }</math>


The mean value of the spin is given by averaging 1 and −1 with the weights ''p'' and 1&nbsp;−&nbsp;''p'', so the mean value is 2''p''&nbsp;−&nbsp;1. But this average is the same for all spins, and is therefore equal to ''H''.
The mean value of the spin is given by averaging 1 and −1 with the weights ''p'' and 1&nbsp;−&nbsp;''p'', so the mean value is 2''p''&nbsp;−&nbsp;1. But this average is the same for all spins, and is therefore equal to ''H''.
:<math> H = 2p - 1 = { 1 - e^{-2\beta JH} \over 1 + e^{-2\beta JH}} = \tanh (\beta JH)</math>
<math display="block"> H = 2p - 1 = { 1 - e^{-2\beta JH} \over 1 + e^{-2\beta JH}} = \tanh (\beta JH)</math>


The solutions to this equation are the possible consistent mean fields. For β''J'' < 1 there is only the one solution at ''H'' = 0. For bigger values of β there are three solutions, and the solution at ''H'' = 0 is unstable.
The solutions to this equation are the possible consistent mean fields. For β''J'' < 1 there is only the one solution at ''H'' = 0. For bigger values of β there are three solutions, and the solution at ''H'' = 0 is unstable.
Line 730: Line 829:


For β''J'' = 1 + ε, just below the critical temperature, the value of ''H'' can be calculated from the Taylor expansion of the hyperbolic tangent:
For β''J'' = 1 + ε, just below the critical temperature, the value of ''H'' can be calculated from the Taylor expansion of the hyperbolic tangent:
:<math>H = \tanh(\beta J H) \approx (1+\varepsilon)H - {(1+\varepsilon)^3H^3\over 3}</math>
<math display="block">H = \tanh(\beta J H) \approx (1+\varepsilon)H - {(1+\varepsilon)^3H^3\over 3}</math>


Dividing by ''H'' to discard the unstable solution at ''H'' = 0, the stable solutions are:
Dividing by ''H'' to discard the unstable solution at ''H'' = 0, the stable solutions are:
:<math>H = \sqrt{3\varepsilon}</math>
<math display="block">H = \sqrt{3\varepsilon}</math>


The spontaneous magnetization ''H'' grows near the critical point as the square root of the change in temperature. This is true whenever ''H'' can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led [[Lev Landau|Landau]] to suspect that all Ising type phase transitions in all dimensions should follow this law.
The spontaneous magnetization ''H'' grows near the critical point as the square root of the change in temperature. This is true whenever ''H'' can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led [[Lev Landau|Landau]] to suspect that all Ising type phase transitions in all dimensions should follow this law.


The mean-field exponent is [[Universality (dynamical systems)|universal]] because changes in the character of solutions of analytic equations are always described by [[catastrophe theory|catastrophes]] in the [[Taylor series]], which is a polynomial equation. By symmetry, the equation for ''H'' must only have odd powers of ''H'' on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of ''H'' on the right hand side is 1. Near the transition:
The mean-field exponent is [[Universality (dynamical systems)|universal]] because changes in the character of solutions of analytic equations are always described by [[catastrophe theory|catastrophes]] in the [[Taylor series]], which is a polynomial equation. By symmetry, the equation for ''H'' must only have odd powers of ''H'' on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of ''H'' on the right hand side is 1. Near the transition:
:<math>H = {\partial (\beta F) \over \partial h} = (1+A\varepsilon) H + B H^3 + \cdots</math>
<math display="block">H = {\partial (\beta F) \over \partial h} = (1+A\varepsilon) H + B H^3 + \cdots</math>


Whatever ''A'' and ''B'' are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy β''F'' is either non-analytic or non-generic at the exact β where the transition occurs.
Whatever ''A'' and ''B'' are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy β''F'' is either non-analytic or non-generic at the exact β where the transition occurs.


But the spontaneous magnetization in magnetic systems and the density in gasses near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:
But the spontaneous magnetization in magnetic systems and the density in gasses near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:
:<math>H \propto \varepsilon^{0.308}</math>
<math display="block">H \propto \varepsilon^{0.308}</math>


The exponent is also universal, since it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean-field value. This was a great surprise.
The exponent is also universal, since it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean-field value. This was a great surprise.


This is also true in two dimensions, where
This is also true in two dimensions, where
:<math>H \propto \varepsilon^{0.125}</math>
<math display="block">H \propto \varepsilon^{0.125}</math>


But there it was not a surprise, because it was predicted by [[Lars Onsager|Onsager]].
But there it was not a surprise, because it was predicted by [[Lars Onsager|Onsager]].


===Low dimensions&nbsp;– block spins===
==== Low dimensions&nbsp;– block spins ====

In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.
In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.


Line 759: Line 859:
The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.
The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.


====Migdal–Kadanoff renormalization====
===== Migdal–Kadanoff renormalization =====

Write the two-dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:
Write the two-dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:
:<math>E = \sum_{ij} J_{ij} S_i S_j + \sum J_{ijkl} S_i S_j S_k S_l \ldots.</math>
<math display="block">E = \sum_{ij} J_{ij} S_i S_j + \sum J_{ijkl} S_i S_j S_k S_l \ldots.</math>


By translation invariance, ''J<sub>ij</sub>'' is only a function of i-j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two-dimensional vector ''i''&nbsp;−&nbsp;''j''. The higher order coefficients are also similarly restricted.
By translation invariance, ''J<sub>ij</sub>'' is only a function of i-j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two-dimensional vector ''i''&nbsp;−&nbsp;''j''. The higher order coefficients are also similarly restricted.
Line 767: Line 868:
The renormalization iteration divides the lattice into two parts – even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (''i'',''j''), the odd sites are those with ''i''&nbsp;+&nbsp;''j'' odd and the even sites those with ''i''&nbsp;+&nbsp;''j'' even, and even sites are only connected to odd sites.
The renormalization iteration divides the lattice into two parts – even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (''i'',''j''), the odd sites are those with ''i''&nbsp;+&nbsp;''j'' odd and the even sites those with ''i''&nbsp;+&nbsp;''j'' even, and even sites are only connected to odd sites.


The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances <math>\scriptstyle \sqrt{2}</math> larger.
The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances <math display="inline">\sqrt{2}</math> larger.


Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than the critical temperature, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents when more terms are included.
Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than the critical temperature, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents when more terms are included.
Line 774: Line 875:


To find the change in ''J'', consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:
To find the change in ''J'', consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:
:<math> e^{J (N_+ - N_-)} + e^{J (N_- - N_+)} = 2 \cosh(J[N_+ - N_-])</math>
<math display="block"> e^{J (N_+ - N_-)} + e^{J (N_- - N_+)} = 2 \cosh(J[N_+ - N_-])</math>


where ''N''<sub>±</sub> is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:
where ''N''<sub>±</sub> is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:
:<math> F = \log(\cosh[J(N_+ - N_-)]).</math>
<math display="block"> F = \log(\cosh[J(N_+ - N_-)]).</math>


This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers + and – is:
This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers + and – is:
:<math> \Delta F = \ln(\cosh[4J]).</math>
<math display="block"> \Delta F = \ln(\cosh[4J]).</math>


From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8''J''. The difference in energy between all spins equal and nonstaggered but net zero spin is 4''J''. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6''J''. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:
From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8''J''. The difference in energy between all spins equal and nonstaggered but net zero spin is 4''J''. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6''J''. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:
:<math>3J' = \ln(\cosh[4J]).</math>
<math display="block">3J' = \ln(\cosh[4J]).</math>


For small ''J'', this quickly flows to zero coupling. Large ''J'''s flow to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.
For small ''J'', this quickly flows to zero coupling. Large ''J'''s flow to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.


Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.
Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.

==Applications==

===Magnetism===
The original motivation for the model was the phenomenon of [[ferromagnetism]]. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.

In the 19th century, it was thought that magnetic fields are due to currents in matter, and [[André-Marie Ampère|Ampère]] postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by [[Joseph Larmor|Larmor]]. In order to have ferromagnetism, the atoms must have permanent [[magnetic moment]]s which are not due to the motion of classical charges.

Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electron spins all oriented in the same direction. It was natural to ask how the electrons' spins all know which direction to point in, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electron spins could be oriented in the same direction using only local forces.

===Lattice gas===
The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, while the statistics of the positions only depends on the potential energy, the thermodynamics of the gas only depends on the potential energy for each configuration of atoms.

A coarse model is to make space-time a lattice and imagine that each position either contains an atom or it doesn't. The space of configuration is that of independent bits ''B<sub>i</sub>'', where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by −4''JB''<sub>''i''</sub>''B''<sub>''j''</sub> for each occupied neighboring pair.

The density of the atoms can be controlled by adding a [[chemical potential]], which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm – the energy. The extra energy of a configuration with ''N'' atoms is changed by ''μN''. The probability cost of one more atom is a factor of exp(−''βμ'').

So the energy of the lattice gas is:
:<math>E = - \frac{1}{2} \sum_{\langle i,j \rangle} 4 J B_i B_j + \sum_i \mu B_i</math>

Rewriting the bits in terms of spins, <math>B_i = (S_i + 1)/2. </math>
:<math>E = - \frac{1}{2} \sum_{\langle i,j \rangle} J S_i S_j - \frac{1}{2} \sum_i (4 J - \mu) S_i</math>

For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field ''h'' = (''zJ''&nbsp;−&nbsp;''μ'')/2, where ''z'' is the number of neighbors.

In biological systems, modified versions of the lattice gas model have been used to understand a range of binding behaviors. These include the binding of ligands to receptors in the cell surface,<ref>{{Cite journal|last1=Shi|first1=Y.|last2=Duke|first2=T.|date=1998-11-01|title=Cooperative model of bacteril sensing|journal=Physical Review E|language=en|volume=58|issue=5|pages=6399–6406|doi=10.1103/PhysRevE.58.6399|arxiv=physics/9901052|bibcode=1998PhRvE..58.6399S|s2cid=18854281}}</ref> the binding of chemotaxis proteins to the flagellar motor,<ref>{{Cite journal|last1=Bai|first1=Fan|last2=Branch|first2=Richard W.|last3=Nicolau|first3=Dan V.|last4=Pilizota|first4=Teuta|last5=Steel|first5=Bradley C.|last6=Maini|first6=Philip K.|last7=Berry|first7=Richard M.|date=2010-02-05|title=Conformational Spread as a Mechanism for Cooperativity in the Bacterial Flagellar Switch|journal=Science|language=en|volume=327|issue=5966|pages=685–689|doi=10.1126/science.1182105|issn=0036-8075|pmid=20133571|bibcode = 2010Sci...327..685B |s2cid=206523521|url=https://semanticscholar.org/paper/680aa07b7425c7addc6e02ef49356d31cfb84d48}}</ref> and the condensation of DNA.<ref>{{Cite journal|last1=Vtyurina|first1=Natalia N.|last2=Dulin|first2=David|last3=Docter|first3=Margreet W.|last4=Meyer|first4=Anne S.|last5=Dekker|first5=Nynke H.|last6=Abbondanzieri|first6=Elio A.|date=2016-04-18|title=Hysteresis in DNA compaction by Dps is described by an Ising model|journal=Proceedings of the National Academy of Sciences|language=en|pages=4982–7|doi=10.1073/pnas.1521241113|issn=0027-8424|pmid=27091987|pmc=4983820|volume=113|issue=18|bibcode=2016PNAS..113.4982V|doi-access=free}}</ref>

===Neuroscience===
The activity of [[neuron]]s in the brain can be modelled statistically. Each neuron at any time is either active + or inactive&nbsp;−. The active neurons are those that send an [[action potential]] down the axon in any given time window, and the inactive ones are those that do not. Because the neural activity at any one time is modelled by independent bits, [[J. J. Hopfield|Hopfield]] suggested in 1982 that a dynamical Ising model would provide a [[Hopfield net|first approximation]] to a neural network which is capable of [[learning]].<ref>{{Citation| author= J. J. Hopfield| title = Neural networks and physical systems with emergent collective computational abilities| journal = Proceedings of the National Academy of Sciences of the USA| volume= 79 | pages= 2554–2558| year= 1982| doi = 10.1073/pnas.79.8.2554| pmid = 6953413| issue= 8| pmc= 346238| postscript= .|bibcode = 1982PNAS...79.2554H | doi-access = free}}</ref> This learning [[recurrent neural network]] was published by [[Shun'ichi Amari]] in 1972.<ref name="Amari1972">{{cite journal |last1=Amari |first1=Shun-Ichi |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements|journal= IEEE Transactions |date=1972 |volume=C |issue=21 |pages=1197–1206 }}</ref><ref name=DLhistory>{{cite arXiv|last=Schmidhuber|first=Juergen|date=2022|title=Annotated History of Modern AI and Deep Learning |class=cs.NE|eprint=2212.11279}}</ref>

Following the general approach of Jaynes,<ref>{{Citation| author=Jaynes, E. T.| title= Information Theory and Statistical Mechanics | journal= Physical Review| volume = 106 | pages= 620–630 | year= 1957| doi=10.1103/PhysRev.106.620| postscript=.|bibcode = 1957PhRv..106..620J| issue=4 | s2cid= 17870175 | url= https://semanticscholar.org/paper/08b67692bc037eada8d3d7ce76cc70994e7c8116 }}</ref><ref>{{Citation| author= Jaynes, Edwin T.| title = Information Theory and Statistical Mechanics II |journal = Physical Review |volume =108 | pages = 171–190 | year = 1957| doi= 10.1103/PhysRev.108.171| postscript= .|bibcode = 1957PhRv..108..171J| issue= 2 }}</ref> a later interpretation of Schneidman, Berry, Segev and Bialek,<ref>{{Citation|author1=Elad Schneidman |author2=Michael J. Berry |author3=Ronen Segev |author4=William Bialek | title= Weak pairwise correlations imply strongly correlated network states in a neural population| journal=Nature| volume= 440 | pages= 1007–1012| year=2006| doi= 10.1038/nature04701| pmid= 16625187| issue= 7087| pmc= 1785327| postscript= .|arxiv = q-bio/0512013 |bibcode = 2006Natur.440.1007S |title-link=neural population }}</ref>
is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the [[principle of maximum entropy]]. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a [[Lagrange multiplier]] for each neuron:
:<math>E = - \sum_i h_i S_i</math>
But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pair-wise lagrange multipliers:
:<math>E= - \tfrac{1}{2} \sum_{ij} J_{ij} S_i S_j - \sum_i h_i S_i</math>
where <math>J_{ij}</math> are not restricted to neighbors. Note that this generalization of Ising model is sometimes called the quadratic exponential binary distribution in statistics.
This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value. Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.

===Spin glasses===
With the Ising model the so-called [[spin glasses]] can also be described, by the usual Hamiltonian <math>H=-\frac{1}{2}\,\sum J_{i,k}\,S_i\,S_k,</math> where the ''S''-variables describe the Ising spins, while the ''J<sub>i,k</sub>'' are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability ''p'' and ferromagnetic bonds with probability 1&nbsp;−&nbsp;''p'' (also known as the random-bond Ising model). These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When ''p''&nbsp;=&nbsp;0 we have the original Ising model. This system deserves interest in its own; particularly one has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has been also attracted by the related bond and site dilute Ising model, especially in two dimensions, leading to intriguing critical behavior.<ref>{{Citation|author= J-S Wang, [[Walter Selke|W Selke]], VB Andreichenko, and VS Dotsenko| title= The critical behaviour of the two-dimensional dilute model|journal= Physica A|volume= 164| issue= 2| pages= 221–239 |year= 1990|doi=10.1016/0378-4371(90)90196-Y|bibcode = 1990PhyA..164..221W }}</ref>

===Sea ice===
2D [[melt pond]] approximations can be created using the Ising model; sea ice topography data bears rather heavily on the results. The state variable is binary for a simple 2D approximation, being either water or ice.<ref>{{cite arXiv|author= Yi-Ping Ma|author2= Ivan Sudakov|author3= Courtenay Strong|author4= Kenneth Golden|title= Ising model for melt ponds on Arctic sea ice|year= 2017|class= physics.ao-ph|eprint=1408.2487v3}}</ref>

===Cayley tree topologies and large neural networks===

[[File:Cayley Tree Branch with Branching Ratio = 2.jpg|thumb|An Open Cayley Tree or Branch with Branching Ratio = 2 and k Generations]]

In order to investigate an Ising model with potential relevance for large (e.g. with <math>10^4</math> or <math>10^5</math> interactions per node) neural nets, at the suggestion of Krizan in 1979, {{harvtxt|Barth|1981}} obtained the exact analytical expression for the free energy of the Ising model on the closed Cayley tree (with an arbitrarily large branching ratio) for a zero-external magnetic field (in the thermodynamic limit) by applying the methodologies of {{harvtxt|Glasser|1970}} and {{harvtxt|Jellito|1979}}

<math>-\beta f = \ln 2 + \frac{2\gamma}{(\gamma+1)}\ln (\cosh J) + \frac{\gamma(\gamma-1)}{(\gamma+1)}\sum_{i=2}^z\frac{1}{\gamma^i}\ln J_i (\tau) </math>

[[File:Closed Cayley Tree with Branching Ratio = 4.jpg |thumb|A Closed Cayley Tree with Branching Ratio = 4. (Only sites for generations k, k-1, and k=1(overlapping as one row) are shown for the joined trees)]] where <math>\gamma</math> is an arbitrary branching ratio (greater than or equal to 2), t ≡ <math>tanh J</math>, <math>\tau</math> ≡ <math>t^2</math>, J ≡ <math>\beta\epsilon</math> (with <math>\epsilon</math> representing the nearest-neighbor interaction energy) and there are k (→ ∞ in the thermodynamic limit) generations in each of the tree branches (forming the closed tree architecture as shown in the given closed Cayley tree diagram.) The sum in the last term can be shown to converge uniformly and rapidly (i.e. for z → ∞, it remains finite) yielding a continuous and monotonous function, establishing that, for <math>\gamma</math> greater than or equal to 2, the free energy is a continuous function of temperature T. Further analysis of the free energy indicates that it exhibits an unusual discontinuous first derivative at the critical temperature ({{harvtxt|Krizan|Barth|Glasser|1983}}, {{harvtxt|Glasser|Goldberg|1983}}.)

The spin-spin correlation between sites (in general, m and n) on the tree was found to have a transition point when considered at the vertices (e.g. A and Ā, its reflection), their respective neighboring sites (such as B and its reflection), and between sites adjacent to the top and bottom extreme vertices of the two trees (e.g. A and B), as may be determined from

<math>\langle s_m s_n \rangle = {Z_N}^{-1}(0,T)[cosh J]^{N_b}2^N\sum_{l=1}^z g_{mn}(l)t^l</math>

where <math>N_b</math> is equal to the number of bonds, <math>g_{mn}(l)t^l</math> is the number of graphs counted for odd vertices with even intermediate sites (see cited methodologies and references for detailed calculations), <math>2^N</math> is the multiplicity resulting from two-valued spin possibilities and the partition function <math>{Z_N}</math> is derived from <math>\sum_{\{s\}}e^{-\beta H}</math>. (Note: <math>s_i </math> is consistent with the referenced literature in this section and is equivalent to <math>S_i</math> or <math>\sigma_i</math> utilized above and in earlier sections; it is valued at <math>\pm 1 </math>.) The critical temperature <math>T_C</math> is given by

<math>T_C = \frac{2\epsilon}{k_B[ln(\sqrt \gamma+1) - ln(\sqrt \gamma-1)]}</math>.

The critical temperature for this model is only determined by the branching ratio <math>\gamma</math> and the site-to-site interaction energy <math>\epsilon</math>, a fact which may have direct implications associated with neural structure vs. its function (in that it relates the energies of interaction and branching ratio to its transitional behavior.) For example, a relationship between the transition behavior of activities of neural networks between sleeping and wakeful states (which may correlate with a spin-spin type of phase transition) in terms of changes in neural interconnectivity (<math>\gamma</math>) and/or neighbor-to-neighbor interactions (<math>\epsilon</math>), over time, is just one possible avenue suggested for further experimental investigation into such a phenomenon. In any case, for this Ising model it was established, that “the stability of the long-range correlation increases with increasing <math>\gamma</math> or increasing <math>\epsilon</math>.”

For this topology, the spin-spin correlation was found to be zero between the extreme vertices and the central sites at which the two trees (or branches) are joined (i.e. between A and individually C, D, or E.) This behavior is explained to be due to the fact that, as k increases, the number of links increases exponentially (between the extreme vertices) and so even though the contribution to spin correlations decrease exponentially, the correlation between sites such as the extreme vertex (A) in one tree and the extreme vertex in the joined tree (Ā) remains finite (above the critical temperature.) In addition, A and B also exhibit a non-vanishing correlation (as do their reflections) thus lending itself to, for B level sites (with A level), being considered “clusters” which tend to exhibit synchronization of firing.

Based upon a review of other classical network models as a comparison, the Ising model on a closed Cayley tree was determined to be the first classical statistical mechanical model to demonstrate both local and long-range sites with non-vanishing spin-spin correlations, while at the same time exhibiting intermediate sites with zero correlation, which indeed was a relevant matter for large neural networks at the time of its consideration. The model's behavior is also of relevance for any other divergent-convergent tree physical (or biological) system exhibiting a closed Cayley tree topology with an Ising-type of interaction. This topology should not be ignored since its behavior for Ising models has been solved exactly, and presumably nature will have found a way of taking advantage of such simple symmetries at many levels of its designs.

{{harvtxt|Barth|1981}} early on noted the possibility of interrelationships between (1) the classical large neural network model (with similar coupled divergent-convergent topologies) with (2) an underlying statistical quantum mechanical model (independent of topology and with persistence in fundamental quantum states):

{{Blockquote|The most significant result obtained from the closed Cayley tree model involves the occurrence of long-range correlation in the absence of intermediate-range correlation. This result has not been demonstrated by other classical models. The failure of the classical view of impulse transmission to account for this phenomenon has been cited by numerous investigators (Ricciiardi and Umezawa, 1967, Hokkyo 1972, Stuart, Takahashi and Umezawa 1978, 1979) as significant enough to warrant radically new assumptions on a very fundamental level and have suggested the existence of quantum cooperative modes within the brain…In addition, it is interesting to note that the (modeling) of…Goldstone particles or bosons (as per Umezawa, et al)…within the brain, demonstrates the long-range correlation of quantum numbers preserved in the ground state…In the closed Cayley tree model ground states of pairs of sites, as well as the state variable of individual sites, (can) exhibit long-range correlation.|author=|title=|source=}}

It was a natural and common belief among early neurophysicists (e.g. Umezawa, Krizan, Barth, etc.) that classical neural models (including those with statistical mechanical aspects) will one day have to be integrated with quantum physics (with quantum statistical aspects), similar perhaps to how the domain of chemistry has historically integrated itself into quantum physics via quantum chemistry.

Several additional statistical mechanical problems of interest remain to be solved for the closed Cayley tree, including the time-dependent case and the external field situation, as well as theoretical efforts aimed at understanding interrelationships with underlying quantum constituents and their physics.


==See also==
==See also==
{{too many see alsos|date=November 2024}}
{{div col|colwidth=25em}}
* [[ANNNI model]]
* [[ANNNI model]]
* [[Binder parameter]]
* [[Binder parameter]]
Line 889: Line 915:
* [[XY model]]
* [[XY model]]
* [[Z N model]]
* [[Z N model]]
{{div col end}}


==Footnotes==
==Footnotes==
Line 897: Line 922:
==References==
==References==
{{Refbegin}}
{{Refbegin}}
*{{Citation | last1=Barth | first1=P. F. |author-link1=Peter F. Barth | year=1981 | title= Cooperativity and the Transition Behavior of Large Neural Nets | pages=1–118 | journal= Master of Science Thesis | publisher= University of Vermont | location= Burlington |oclc=8231704 }}
*{{Citation | last1=Baxter | first1=Rodney J. | title=Exactly solved models in statistical mechanics | url=https://physics.anu.edu.au/theophys/baxter_book.php | publisher=Academic Press Inc. [Harcourt Brace Jovanovich Publishers] | location=London | isbn=978-0-12-083180-7 | mr=690578 | year=1982 }}
*{{Citation | last1=Baxter | first1=Rodney J. | title=Exactly solved models in statistical mechanics | url=https://physics.anu.edu.au/theophys/baxter_book.php | publisher=Academic Press Inc. [Harcourt Brace Jovanovich Publishers] | location=London | isbn=978-0-12-083180-7 | mr=690578 | year=1982 }}
* {{springer|author=[[Kurt Binder|K. Binder]]|title=Ising model}}
* {{springer|author=[[Kurt Binder|K. Binder]]|title=Ising model}}
* {{cite journal |doi=10.1103/RevModPhys.39.883|title=History of the Lenz-Ising Model|year=1967|last1=Brush|first1=Stephen G.|journal=Reviews of Modern Physics|volume=39|issue=4|pages=883–893|bibcode=1967RvMP...39..883B}}
* {{cite journal |doi=10.1103/RevModPhys.39.883|title=History of the Lenz-Ising Model|year=1967|last1=Brush|first1=Stephen G.|journal=Reviews of Modern Physics|volume=39|issue=4|pages=883–893| bibcode=1967RvMP...39..883B}}
* {{citation|first=R.|last=Baierlein|title=Thermal Physics|publisher=Cambridge University Press|location=Cambridge|year=1999|isbn=978-0-521-59082-2|url-access=registration|url=https://archive.org/details/thermalphysics00ralp}}
* {{citation|first=R.|last=Baierlein|title=Thermal Physics|publisher=Cambridge University Press | location=Cambridge | year=1999|isbn=978-0-521-59082-2|url-access=registration | url=https://archive.org/details/thermalphysics00ralp}}
* {{citation|first=G.|last=Gallavotti|author-link=Giovanni Gallavotti|title=Statistical mechanics|series=Texts and Monographs in Physics|publisher=Springer-Verlag|location=Berlin|year=1999|isbn=978-3-540-64883-3|mr=1707309|doi=10.1007/978-3-662-03952-6}}
* {{citation|first=G.|last=Gallavotti|author-link=Giovanni Gallavotti|title=Statistical mechanics|series=Texts and Monographs in Physics|publisher=Springer-Verlag|location=Berlin|year=1999|isbn=978-3-540-64883-3| mr=1707309 | doi=10.1007/978-3-662-03952-6}}
* {{citation|first=Kerson|last=Huang|author-link=Kerson Huang|title=Statistical mechanics (2nd edition)|publisher=Wiley|year=1987|isbn=978-0-471-81518-1}}
* {{citation | first=Kerson|last=Huang | author-link=Kerson Huang|title=Statistical mechanics | edition = 2nd | publisher=Wiley | year=1987|isbn=978-0-471-81518-1}}
*{{citation|first=E. |last=Ising|title=Beitrag zur Theorie des Ferromagnetismus|journal= Z. Phys. |volume= 31 |issue=1|year=1925|pages= 253–258|doi=10.1007/BF02980577|bibcode = 1925ZPhy...31..253I |s2cid=122157319}}
*{{citation|first=E. |last=Ising|title=Beitrag zur Theorie des Ferromagnetismus|journal= Z. Phys. |volume= 31 |issue=1|year=1925|pages= 253–258|doi=10.1007/BF02980577|bibcode = 1925ZPhy...31..253I |s2cid=122157319}}
*{{citation|title=Théorie statistique des champs, Volume 1|series=Savoirs actuels ([[CNRS]])|first1=Claude|last1= Itzykson|first2= Jean-Michel|last2= Drouffe
*{{citation|title=Théorie statistique des champs, Volume 1|series=Savoirs actuels ([[CNRS]])|first1=Claude|last1= Itzykson|first2= Jean-Michel|last2= Drouffe | publisher = EDP Sciences Editions|year= 1989|isbn=978-2-86883-360-0}}
*{{citation|title=Statistical field theory, Volume 1: From Brownian motion to renormalization and lattice gauge theory|first1=Claude|last1= Itzykson|first2= Jean-Michel|last2= Drouffe | publisher = Cambridge University Press|year= 1989|isbn=978-0-521-40805-9}}
| publisher = EDP Sciences Editions|year= 1989|isbn=978-2-86883-360-0}}
*{{cite book |last1=Friedli |first1=S. |last2=Velenik |first2=Y. |title=Statistical Mechanics of Lattice Systems: a Concrete Mathematical Introduction |publisher=Cambridge University Press |location=Cambridge |year=2017 | isbn=9781107184824 |url=http://www.unige.ch/math/folks/velenik/smbook/index.html }}
*{{citation|title=Statistical field theory, Volume 1: From Brownian motion to renormalization and lattice gauge theory|first1=Claude|last1= Itzykson|first2= Jean-Michel|last2= Drouffe
| publisher = Cambridge University Press|year= 1989|isbn=978-0-521-40805-9}}
*{{cite book |last1=Friedli |first1=S. |last2=Velenik |first2=Y. |title=Statistical Mechanics of Lattice Systems: a Concrete Mathematical Introduction |publisher=Cambridge University Press |location=Cambridge |year=2017 |isbn=9781107184824 |url=http://www.unige.ch/math/folks/velenik/smbook/index.html }}
* Ross Kindermann and J. Laurie Snell (1980), ''[https://www.ams.org/online_bks/conm1/ Markov Random Fields and Their Applications]''. American Mathematical Society. {{ISBN|0-8218-3381-2}}.
* Ross Kindermann and J. Laurie Snell (1980), ''[https://www.ams.org/online_bks/conm1/ Markov Random Fields and Their Applications]''. American Mathematical Society. {{ISBN|0-8218-3381-2}}.
*[[Hagen Kleinert|Kleinert, H]] (1989), ''Gauge Fields in Condensed Matter'', Vol. I, "Superflow and Vortex Lines", pp.&nbsp;1–742, Vol. II, "Stresses and Defects", pp.&nbsp;743–1456, [https://web.archive.org/web/20100113041810/http://worldscibooks.com/physics/0356.html World Scientific (Singapore)]; Paperback {{ISBN|9971-5-0210-0}} '' (also available online: [http://www.physik.fu-berlin.de/~kleinert/kleiner_reb1/contents1.html Vol. I] and [http://www.physik.fu-berlin.de/~kleinert/kleiner_reb1/contents2.html Vol. II])''
*[[Hagen Kleinert|Kleinert, H]] (1989), ''Gauge Fields in Condensed Matter'', Vol. I, "Superflow and Vortex Lines", pp.&nbsp;1–742, Vol. II, "Stresses and Defects", pp.&nbsp;743–1456, [https://web.archive.org/web/20100113041810/http://worldscibooks.com/physics/0356.html World Scientific (Singapore)]; Paperback {{ISBN|9971-5-0210-0}} '' (also available online: [http://www.physik.fu-berlin.de/~kleinert/kleiner_reb1/contents1.html Vol. I] and [http://www.physik.fu-berlin.de/~kleinert/kleiner_reb1/contents2.html Vol. II])''
*[[Hagen Kleinert|Kleinert, H]] and Schulte-Frohlinde, V (2001), ''Critical Properties of φ<sup>4</sup>-Theories'', [https://web.archive.org/web/20080226151023/http://www.worldscibooks.com/physics/4733.html World Scientific (Singapore)]; Paperback {{ISBN|981-02-4658-7}}'' (also available [http://users.physik.fu-berlin.de/~kleinert/kleinert/?p=booklist&details=6 online])''
*[[Hagen Kleinert|Kleinert, H]] and Schulte-Frohlinde, V (2001), ''Critical Properties of φ<sup>4</sup>-Theories'', [https://web.archive.org/web/20080226151023/http://www.worldscibooks.com/physics/4733.html World Scientific (Singapore)]; Paperback {{ISBN|981-02-4658-7}}'' (also available [http://users.physik.fu-berlin.de/~kleinert/kleinert/?p=booklist&details=6 online])''
* {{Citation | last = Lenz | first = W. | author-link = Wilhelm Lenz | year = 1920 | title = Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern | journal = Physikalische Zeitschrift | volume = 21 | pages = 613–615 | postscript = .}}
* {{Citation | last = Lenz | first = W. | author-link = Wilhelm Lenz | year = 1920 | title = Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern | journal = Physikalische Zeitschrift | volume = 21 | pages = 613–615 }}
* Barry M. McCoy and Tai Tsun Wu (1973), ''The Two-Dimensional Ising Model''. Harvard University Press, Cambridge Massachusetts, {{ISBN|0-674-91440-6}}
* Barry M. McCoy and Tai Tsun Wu (1973), ''The Two-Dimensional Ising Model''. Harvard University Press, Cambridge Massachusetts, {{ISBN|0-674-91440-6}}
*{{Citation | last1=Montroll | first1=Elliott W. | last2=Potts | first2=Renfrey B. | last3=Ward | first3=John C. | author-link3=John Clive Ward | title=Correlations and spontaneous magnetization of the two-dimensional Ising model | url=http://link.aip.org/link/?JMAPAQ%2F4%2F308%2F1 | doi=10.1063/1.1703955 | mr=0148406 | year=1963 | journal=[[Journal of Mathematical Physics]] | issn=0022-2488 | volume=4 | pages=308–322 | bibcode=1963JMP.....4..308M | issue=2 | url-status=dead | archive-url=https://archive.today/20130112095848/http://link.aip.org/link/?JMAPAQ/4/308/1 | archive-date=2013-01-12 | access-date=2009-10-25 }}
*{{Citation | last1=Montroll | first1=Elliott W. | last2=Potts | first2=Renfrey B. | last3=Ward | first3=John C. | author-link3=John Clive Ward | title=Correlations and spontaneous magnetization of the two-dimensional Ising model | url=http://link.aip.org/link/?JMAPAQ%2F4%2F308%2F1 | doi=10.1063/1.1703955 | mr=0148406 | year=1963 | journal=[[Journal of Mathematical Physics]] | issn=0022-2488 | volume=4 | pages=308–322 | bibcode=1963JMP.....4..308M | issue=2 | url-status=dead | archive-url=https://archive.today/20130112095848/http://link.aip.org/link/?JMAPAQ/4/308/1 | archive-date=2013-01-12 | access-date=2009-10-25 }}
*{{Citation | last1=Onsager | first1=Lars | author-link1= Lars Onsager|title=Crystal statistics. I. A two-dimensional model with an order-disorder transition | doi=10.1103/PhysRev.65.117 | mr=0010315 | year=1944 | journal= Physical Review | series = Series II | volume=65 | pages=117–149|bibcode = 1944PhRv...65..117O | issue=3–4 }}
*{{Citation | last1=Onsager | first1=Lars | author-link1= Lars Onsager|title=Crystal statistics. I. A two-dimensional model with an order-disorder transition | doi=10.1103/PhysRev.65.117 | mr=0010315 | year=1944 | journal= Physical Review | series = Series II | volume=65 | pages=117–149|bibcode = 1944PhRv...65..117O | issue=3–4 }}
*{{Citation |last=Onsager |first=Lars |author-link=Lars Onsager|title=Discussion|journal=Supplemento al Nuovo Cimento|volume=6|page=261|year=1949}}
*{{Citation |last=Onsager |first=Lars |author-link=Lars Onsager|title=Discussion|journal=Supplemento al Nuovo Cimento | volume=6|page=261|year=1949}}
* John Palmer (2007), ''Planar Ising Correlations''. Birkhäuser, Boston, {{ISBN|978-0-8176-4248-8}}.
* John Palmer (2007), ''Planar Ising Correlations''. Birkhäuser, Boston, {{ISBN|978-0-8176-4248-8}}.
*{{Citation | last1=Istrail | first1=Sorin | title=Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing | chapter-url=http://www.cs.brown.edu/~sorin/pdfs/Ising-paper.pdf | publisher=ACM | mr=2114521 | year=2000 | chapter=Statistical mechanics, three-dimensionality and NP-completeness. I. Universality of intractability for the partition function of the Ising model across non-planar surfaces (extended abstract) | pages=87–96 | doi=10.1145/335305.335316 | isbn=978-1581131840 | s2cid=7944336 }}
*{{Citation | last1=Istrail | first1=Sorin | title=Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing | chapter-url=http://www.cs.brown.edu/~sorin/pdfs/Ising-paper.pdf | publisher=ACM | mr=2114521 | year=2000 | chapter=Statistical mechanics, three-dimensionality and NP-completeness. I. Universality of intractability for the partition function of the Ising model across non-planar surfaces (extended abstract) | pages=87–96 | doi=10.1145/335305.335316 | isbn=978-1581131840 | s2cid=7944336 }}
Line 922: Line 946:
*{{Citation | last1=Glasser | first1=M. L. | year=1970 | title= Exact Partition Function for the Two-Dimensional Ising Model | journal=American Journal of Physics | volume=38 | issue=8 | pages=1033–1036 | doi=10.1119/1.1976530 | bibcode=1970AmJPh..38.1033G }}
*{{Citation | last1=Glasser | first1=M. L. | year=1970 | title= Exact Partition Function for the Two-Dimensional Ising Model | journal=American Journal of Physics | volume=38 | issue=8 | pages=1033–1036 | doi=10.1119/1.1976530 | bibcode=1970AmJPh..38.1033G }}
* {{Citation | last1=Jellito | first1=R. J. | year=1979 | title= The Ising Model on a Closed Cayley Tree | journal=Physica | volume=99A | issue=1 | pages=268–280 | doi=10.1016/0378-4371(79)90134-1 | bibcode=1979PhyA...99..268J }}
* {{Citation | last1=Jellito | first1=R. J. | year=1979 | title= The Ising Model on a Closed Cayley Tree | journal=Physica | volume=99A | issue=1 | pages=268–280 | doi=10.1016/0378-4371(79)90134-1 | bibcode=1979PhyA...99..268J }}
*{{Citation | last1=Barth | first1=P. F. |author-link1=Peter F. Barth | year=1981 | title= Cooperativity and the Transition Behavior of Large Neural Nets | pages=1–118 | journal= Master of Science Thesis | publisher= University of Vermont | location= Burlington |oclc=8231704 }}
* {{Citation | last1=Krizan | first1=J. E. |last2=Barth | first2=P. F. | author-link2=Peter F. Barth | last3=Glasser | first3=M.L.| year=1983 | title= Exact Phase Transitions for the Ising Model on the Closed Cayley Tree| journal=Physica | volume=119A | pages=230–242 | publisher= North-Holland Publishing Co.| doi=10.1016/0378-4371(83)90157-7 }}
* {{Citation | last1=Krizan | first1=J. E. |last2=Barth | first2=P. F. | author-link2=Peter F. Barth | last3=Glasser | first3=M.L.| year=1983 | title= Exact Phase Transitions for the Ising Model on the Closed Cayley Tree| journal=Physica | volume=119A | pages=230–242 | publisher= North-Holland Publishing Co.| doi=10.1016/0378-4371(83)90157-7 }}
*{{Citation | last1=Glasser | first1=M. L. | last2=Goldberg | first2=M. | year=1983| title= The Ising model on a closed Cayley tree | journal=Physica | volume=117A | issue=2 | pages=670–672 | doi=10.1016/0378-4371(83)90138-3 | bibcode=1983PhyA..117..670G }}
*{{Citation | last1=Glasser | first1=M. L. | last2=Goldberg | first2=M. | year=1983| title= The Ising model on a closed Cayley tree | journal=Physica | volume=117A | issue=2 | pages=670–672 | doi=10.1016/0378-4371(83)90138-3 | bibcode=1983PhyA..117..670G }}
*{{Citation | last1=Süzen | first1=Mehmet | year=2014 | title= Effective ergodicity in single-spin-flip dynamics | journal=Physical Review E | volume=90 | issue=3 | pages=032141 | doi=10.1103/PhysRevE.90.032141 | pmid=25314429 | arxiv=1405.4497 | bibcode=2014PhRvE..90c2141S }}
{{Refend}}
{{Refend}}


Line 933: Line 957:
* [http://physics.ucsc.edu/~peter/ising/ising.html A dynamical 2D Ising java applet by UCSC]
* [http://physics.ucsc.edu/~peter/ising/ising.html A dynamical 2D Ising java applet by UCSC]
* [https://sites.google.com/view/chremos-group/applets/ising-model A dynamical 2D Ising java applet]
* [https://sites.google.com/view/chremos-group/applets/ising-model A dynamical 2D Ising java applet]
* [http://www.physics.uci.edu/~etolleru/IsingApplet/IsingApplet.html A larger/more complicated 2D Ising java applet]
* [http://www.physics.uci.edu/~etolleru/IsingApplet/IsingApplet.html A larger/more complicated 2D Ising java applet] {{Webarchive|url=https://web.archive.org/web/20201125045940/http://www.physics.uci.edu/~etolleru/IsingApplet/IsingApplet.html |date=2020-11-25 }}
* [https://www.complexity-explorables.org/explorables/i-sing-well-tempered/ “I sing well-tempered” The Ising Model: A simple model for critical behavior in a system of spins] by Dirk Brockman, is an interactive simulation that allows users to export the working code to a presentation slide
* [http://demonstrations.wolfram.com/IsingModel/ Ising Model simulation] by Enrique Zeleny, the [[Wolfram Demonstrations Project]]
* [http://demonstrations.wolfram.com/IsingModel/ Ising Model simulation] by Enrique Zeleny, the [[Wolfram Demonstrations Project]]
* [http://ibiblio.org/e-notes/Perc/contents.htm Phase transitions on lattices]
* [http://ibiblio.org/e-notes/Perc/contents.htm Phase transitions on lattices]
Line 951: Line 976:
[[Category:Statistical mechanics]]
[[Category:Statistical mechanics]]
[[Category:Lattice models]]
[[Category:Lattice models]]
[[Category:NP-complete problems]]

Latest revision as of 06:40, 1 December 2024

The Ising model (or Lenz–Ising model), named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic "spins" that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.[1]

The Ising model was invented by the physicist Wilhelm Lenz (1920), who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by Ising (1925) alone in his 1924 thesis;[2] it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exist different approaches, more related to quantum field theory.

In dimensions greater than four, the phase transition of the Ising model is described by mean-field theory. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent Barth (1981) model for closed Cayley trees of arbitrary branching ratio, and thereby, arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behavior, along with non-vanishing long-range and nearest-neighbor spin-spin correlations, deemed relevant to large neural networks as one of its possible applications.

The Ising problem without an external field can be equivalently formulated as a graph maximum cut (Max-Cut) problem that can be solved via combinatorial optimization.

Definition

[edit]

Consider a set of lattice sites, each with a set of adjacent sites (e.g. a graph) forming a -dimensional lattice. For each lattice site there is a discrete variable such that , representing the site's spin. A spin configuration, is an assignment of spin value to each lattice site.

For any two adjacent sites there is an interaction . Also a site has an external magnetic field interacting with it. The energy of a configuration is given by the Hamiltonian function

where the first sum is over pairs of adjacent spins (every pair is counted once). The notation indicates that sites and are nearest neighbors. The magnetic moment is given by . Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally.[3] The configuration probability is given by the Boltzmann distribution with inverse temperature :

where , and the normalization constant

is the partition function. For a function of the spins ("observable"), one denotes by

the expectation (mean) value of .

The configuration probabilities represent the probability that (in equilibrium) the system is in a state with configuration .

Discussion

[edit]

The minus sign on each term of the Hamiltonian function is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for a pair ij

  • , the interaction is called ferromagnetic,
  • , the interaction is called antiferromagnetic,
  • , the spins are noninteracting.

The system is called ferromagnetic or antiferromagnetic if all interactions are ferromagnetic or all are antiferromagnetic. The original Ising models were ferromagnetic, and it is still often assumed that "Ising model" means a ferromagnetic Ising model.

In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs.

The sign convention of H(σ) also explains how a spin site j interacts with the external field. Namely, the spin site wants to line up with the external field. If:

  • , the spin site j desires to line up in the positive direction,
  • , the spin site j desires to line up in the negative direction,
  • , there is no external influence on the spin site.

Simplifications

[edit]

Ising models are often examined without an external field interacting with the lattice, that is, h = 0 for all j in the lattice Λ. Using this simplification, the Hamiltonian becomes

When the external field is zero everywhere, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry.

Another common simplification is to assume that all of the nearest neighbors ⟨ij⟩ have the same interaction strength. Then we can set Jij = J for all pairs ij in Λ. In this case the Hamiltonian is further simplified to

Connection to graph maximum cut

[edit]

A subset S of the vertex set V(G) of a weighted undirected graph G determines a cut of the graph G into S and its complementary subset G\S. The size of the cut is the sum of the weights of the edges between S and G\S. A maximum cut size is at least the size of any other cut, varying S.

For the Ising model without an external field on a graph G, the Hamiltonian becomes the following sum over the graph edges E(G)

.

Here each vertex i of the graph is a spin site that takes a spin value . A given spin configuration partitions the set of vertices into two -depended subsets, those with spin up and those with spin down . We denote by the -depended set of edges that connects the two complementary vertex subsets and . The size of the cut to bipartite the weighted undirected graph G can be defined as

where denotes a weight of the edge and the scaling 1/2 is introduced to compensate for double counting the same weights .

The identities

where the total sum in the first term does not depend on , imply that minimizing in is equivalent to minimizing . Defining the edge weight thus turns the Ising problem without an external field into a graph Max-Cut problem [4] maximizing the cut size , which is related to the Ising Hamiltonian as follows,

Questions

[edit]

A significant number of statistical questions to ask about this model are in the limit of large numbers of spins:

  • In a typical configuration, are most of the spins +1 or −1, or are they split equally?
  • If a spin at any given position i is 1, what is the probability that the spin at position j is also 1?
  • If β is changed, is there a phase transition?
  • On a lattice Λ, what is the fractal dimension of the shape of a large cluster of +1 spins?

Basic properties and history

[edit]
Visualization of the translation-invariant probability measure of the one-dimensional Ising model

The most studied case of the Ising model is the translation-invariant ferromagnetic zero-field model on a d-dimensional lattice, namely, Λ = Zd, Jij = 1, h = 0.

No phase transition in one dimension

[edit]

In his 1924 PhD thesis, Ising solved the model for the d = 1 case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no phase transition.[5] Namely, for any positive β, the correlations ⟨σiσj⟩ decay exponentially in |i − j|:

and the system is disordered. On the basis of this result, he incorrectly concluded [citation needed] that this model does not exhibit phase behaviour in any dimension.

Phase transition and exact solution in two dimensions

[edit]

The Ising model undergoes a phase transition between an ordered and a disordered phase in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order:

This was first proven by Rudolf Peierls in 1936,[6] using what is now called a Peierls argument.

The Ising model on a two-dimensional square lattice with no magnetic field was analytically solved by Lars Onsager (1944). Onsager showed that the correlation functions and free energy of the Ising model are determined by a noninteracting lattice fermion. Onsager announced the formula for the spontaneous magnetization for the 2-dimensional model in 1949 but did not give a derivation. Yang (1952) gave the first published proof of this formula, using a limit formula for Fredholm determinants, proved in 1951 by Szegő in direct response to Onsager's work.[7]

Correlation inequalities

[edit]

A number of correlation inequalities have been derived rigorously for the Ising spin correlations (for general lattice structures), which have enabled mathematicians to study the Ising model both on and off criticality.

Griffiths inequality

[edit]

Given any subset of spins and on the lattice, the following inequality holds,

where .

With , the special case results.

This means that spins are positively correlated on the Ising ferromagnet. An immediate application of this is that the magnetization of any set of spins is increasing with respect to any set of coupling constants .

Simon-Lieb inequality

[edit]

The Simon-Lieb inequality[8] states that for any set disconnecting from (e.g. the boundary of a box with being inside the box and being outside),

This inequality can be used to establish the sharpness of phase transition for the Ising model.[9]

FKG inequality

[edit]

This inequality is proven first for a type of positively-correlated percolation model, of which includes a representation of the Ising model. It is used to determine the critical temperatures of planar Potts model using percolation arguments (which includes the Ising model as a special case).[10]

Historical significance

[edit]

One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials[citation needed], as when ice melts to water or water turns to steam. His idea was that small changes in atomic-scale properties would lead to big changes in the aggregate behavior. Others believed that matter is inherently continuous, not atomic, and that the large-scale properties of matter are not reducible to basic atomic properties.

While the laws of chemical bonding made it clear to nineteenth century chemists that atoms were real, among physicists the debate continued well into the early twentieth century. Atomists, notably James Clerk Maxwell and Ludwig Boltzmann, applied Hamilton's formulation of Newton's laws to large systems, and found that the statistical behavior of the atoms correctly describes room temperature gases. But classical statistical mechanics did not account for all of the properties of liquids and solids, nor of gases at low temperature.

Once modern quantum mechanics was formulated, atomism was no longer in conflict with experiment, but this did not lead to a universal acceptance of statistical mechanics, which went beyond atomism. Josiah Willard Gibbs had given a complete formalism to reproduce the laws of thermodynamics from the laws of mechanics. But many faulty arguments survived from the 19th century, when statistical mechanics was considered dubious. The lapses in intuition mostly stemmed from the fact that the limit of an infinite statistical system has many zero-one laws which are absent in finite systems: an infinitesimal change in a parameter can lead to big differences in the overall, aggregate behavior, as Democritus expected.

No phase transitions in finite volume

[edit]

In the early part of the twentieth century, some believed that the partition function could never describe a phase transition, based on the following argument:

  1. The partition function is a sum of e−βE over all configurations.
  2. The exponential function is everywhere analytic as a function of β.
  3. The sum of analytic functions is an analytic function.

This argument works for a finite sum of exponentials, and correctly establishes that there are no singularities in the free energy of a system of a finite size. For systems which are in the thermodynamic limit (that is, for infinite systems) the infinite sum can lead to singularities. The convergence to the thermodynamic limit is fast, so that the phase behavior is apparent already on a relatively small lattice, even though the singularities are smoothed out by the system's finite size.

This was first established by Rudolf Peierls in the Ising model.

Peierls droplets

[edit]

Shortly after Lenz and Ising constructed the Ising model, Peierls was able to explicitly show that a phase transition occurs in two dimensions.

To do this, he compared the high-temperature and low-temperature limits. At infinite temperature (β = 0) all configurations have equal probability. Each spin is completely independent of any other, and if typical configurations at infinite temperature are plotted so that plus/minus are represented by black and white, they look like television snow. For high, but not infinite temperature, there are small correlations between neighboring positions, the snow tends to clump a little bit, but the screen stays randomly looking, and there is no net excess of black or white.

A quantitative measure of the excess is the magnetization, which is the average value of the spin:

A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.

  1. Every configuration of spins has equal energy to the configuration with all spins flipped.
  2. So for every configuration with magnetization M there is a configuration with magnetization −M with equal probability.
  3. The system should therefore spend equal amounts of time in the configuration with magnetization M as with magnetization −M.
  4. So the average magnetization (over all time) is zero.

As before, this only proves that the average magnetization is zero at any finite volume. For an infinite system, fluctuations might not be able to push the system from a mostly plus state to a mostly minus with a nonzero probability.

For very high temperatures, the magnetization is zero, as it is at infinite temperature. To see this, note that if spin A has only a small correlation ε with spin B, and B is only weakly correlated with C, but C is otherwise independent of A, the amount of correlation of A and C goes like ε2. For two spins separated by distance L, the amount of correlation goes as εL, but if there is more than one path by which the correlations can travel, this amount is enhanced by the number of paths.

The number of paths of length L on a square lattice in d dimensions is since there are 2d choices for where to go at each step.

A bound on the total correlation is given by the contribution to the correlation by summing over all paths linking two points, which is bounded above by the sum over all paths of length L divided by which goes to zero when ε is small.

At low temperatures (β ≫ 1) the configurations are near the lowest-energy configuration, the one where all the spins are plus or all the spins are minus. Peierls asked whether it is statistically possible at low temperature, starting with all the spins minus, to fluctuate to a state where most of the spins are plus. For this to happen, droplets of plus spin must be able to congeal to make the plus state.

The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter L, the area is somewhere between (L − 2)/2 (the straight line) and (L/4)2 (the square box). The probability cost for introducing a droplet has the factor e−βL, but this contributes to the partition function multiplied by the total number of droplets with perimeter L, which is less than the total number of paths of length L: So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by

which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from −1.

So Peierls established that the magnetization in the Ising model eventually defines superselection sectors, separated domains not linked by finite fluctuations.

Kramers–Wannier duality

[edit]

Kramers and Wannier were able to show that the high-temperature expansion and the low-temperature expansion of the model are equal up to an overall rescaling of the free energy. This allowed the phase-transition point in the two-dimensional model to be determined exactly (under the assumption that there is a unique critical point).

Yang–Lee zeros

[edit]

After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.

Applications

[edit]

Magnetism

[edit]

The original motivation for the model was the phenomenon of ferromagnetism. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.

In the 19th century, it was thought that magnetic fields are due to currents in matter, and Ampère postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by Larmor. In order to have ferromagnetism, the atoms must have permanent magnetic moments which are not due to the motion of classical charges.

Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electron spins all oriented in the same direction. It was natural to ask how the electrons' spins all know which direction to point in, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electron spins could be oriented in the same direction using only local forces.

Lattice gas

[edit]

The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, while the statistics of the positions only depends on the potential energy, the thermodynamics of the gas only depends on the potential energy for each configuration of atoms.

A coarse model is to make space-time a lattice and imagine that each position either contains an atom or it doesn't. The space of configuration is that of independent bits Bi, where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by −4JBiBj for each occupied neighboring pair.

The density of the atoms can be controlled by adding a chemical potential, which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm – the energy. The extra energy of a configuration with N atoms is changed by μN. The probability cost of one more atom is a factor of exp(−βμ).

So the energy of the lattice gas is:

Rewriting the bits in terms of spins,

For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field h = (zJ − μ)/2, where z is the number of neighbors.

In biological systems, modified versions of the lattice gas model have been used to understand a range of binding behaviors. These include the binding of ligands to receptors in the cell surface,[11] the binding of chemotaxis proteins to the flagellar motor,[12] and the condensation of DNA.[13]

Neuroscience

[edit]

The activity of neurons in the brain can be modelled statistically. Each neuron at any time is either active + or inactive −. The active neurons are those that send an action potential down the axon in any given time window, and the inactive ones are those that do not.

Following the general approach of Jaynes,[14][15] a later interpretation of Schneidman, Berry, Segev and Bialek,[16] is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the principle of maximum entropy. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a Lagrange multiplier for each neuron: But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pair-wise lagrange multipliers: where are not restricted to neighbors. Note that this generalization of Ising model is sometimes called the quadratic exponential binary distribution in statistics. This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value. Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.

Spin glasses

[edit]

With the Ising model the so-called spin glasses can also be described, by the usual Hamiltonian where the S-variables describe the Ising spins, while the Ji,k are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability p and ferromagnetic bonds with probability 1 − p (also known as the random-bond Ising model). These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When p = 0 we have the original Ising model. This system deserves interest in its own; particularly one has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has been also attracted by the related bond and site dilute Ising model, especially in two dimensions, leading to intriguing critical behavior.[17]

Artificial neural network

[edit]

Ising model was instrumental in the development of the Hopfield network. The original Ising model is a model for equilibrium. Roy J. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium (Glauber dynamics), adding in the component of time.[18] (Kaoru Nakano, 1971)[19][20] and (Shun'ichi Amari, 1972),[21] proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory. The same idea was published by (William A. Little [de], 1974),[22] who was cited by Hopfield in his 1982 paper.

The Sherrington–Kirkpatrick model of spin glass, published in 1975,[23] is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.[24] In a 1984 paper he extended this to continuous activation functions.[25] It became a standard model for the study of neural networks through statistical mechanics.[26][27]

Sea ice

[edit]

The melt pond can be modelled by the Ising model; sea ice topography data bears rather heavily on the results. The state variable is binary for a simple 2D approximation, being either water or ice.[28]

Cayley tree topologies and large neural networks

[edit]
An Open Cayley Tree or Branch with Branching Ratio = 2 and k Generations

In order to investigate an Ising model with potential relevance for large (e.g. with or interactions per node) neural nets, at the suggestion of Krizan in 1979, Barth (1981) obtained the exact analytical expression for the free energy of the Ising model on the closed Cayley tree (with an arbitrarily large branching ratio) for a zero-external magnetic field (in the thermodynamic limit) by applying the methodologies of Glasser (1970) and Jellito (1979)

A Closed Cayley Tree with Branching Ratio = 4. (Only sites for generations k, k-1, and k=1(overlapping as one row) are shown for the joined trees)

where is an arbitrary branching ratio (greater than or equal to 2), , , (with representing the nearest-neighbor interaction energy) and there are k (→ ∞ in the thermodynamic limit) generations in each of the tree branches (forming the closed tree architecture as shown in the given closed Cayley tree diagram.) The sum in the last term can be shown to converge uniformly and rapidly (i.e. for z → ∞, it remains finite) yielding a continuous and monotonous function, establishing that, for greater than or equal to 2, the free energy is a continuous function of temperature T. Further analysis of the free energy indicates that it exhibits an unusual discontinuous first derivative at the critical temperature (Krizan, Barth & Glasser (1983), Glasser & Goldberg (1983).)

The spin-spin correlation between sites (in general, m and n) on the tree was found to have a transition point when considered at the vertices (e.g. A and Ā, its reflection), their respective neighboring sites (such as B and its reflection), and between sites adjacent to the top and bottom extreme vertices of the two trees (e.g. A and B), as may be determined from where is equal to the number of bonds, is the number of graphs counted for odd vertices with even intermediate sites (see cited methodologies and references for detailed calculations), is the multiplicity resulting from two-valued spin possibilities and the partition function is derived from . (Note: is consistent with the referenced literature in this section and is equivalent to or utilized above and in earlier sections; it is valued at .) The critical temperature is given by

The critical temperature for this model is only determined by the branching ratio and the site-to-site interaction energy , a fact which may have direct implications associated with neural structure vs. its function (in that it relates the energies of interaction and branching ratio to its transitional behavior.) For example, a relationship between the transition behavior of activities of neural networks between sleeping and wakeful states (which may correlate with a spin-spin type of phase transition) in terms of changes in neural interconnectivity () and/or neighbor-to-neighbor interactions (), over time, is just one possible avenue suggested for further experimental investigation into such a phenomenon. In any case, for this Ising model it was established, that “the stability of the long-range correlation increases with increasing or increasing .”

For this topology, the spin-spin correlation was found to be zero between the extreme vertices and the central sites at which the two trees (or branches) are joined (i.e. between A and individually C, D, or E.) This behavior is explained to be due to the fact that, as k increases, the number of links increases exponentially (between the extreme vertices) and so even though the contribution to spin correlations decrease exponentially, the correlation between sites such as the extreme vertex (A) in one tree and the extreme vertex in the joined tree (Ā) remains finite (above the critical temperature.) In addition, A and B also exhibit a non-vanishing correlation (as do their reflections) thus lending itself to, for B level sites (with A level), being considered “clusters” which tend to exhibit synchronization of firing.

Based upon a review of other classical network models as a comparison, the Ising model on a closed Cayley tree was determined to be the first classical statistical mechanical model to demonstrate both local and long-range sites with non-vanishing spin-spin correlations, while at the same time exhibiting intermediate sites with zero correlation, which indeed was a relevant matter for large neural networks at the time of its consideration. The model's behavior is also of relevance for any other divergent-convergent tree physical (or biological) system exhibiting a closed Cayley tree topology with an Ising-type of interaction. This topology should not be ignored since its behavior for Ising models has been solved exactly, and presumably nature will have found a way of taking advantage of such simple symmetries at many levels of its designs.

Barth (1981) early on noted the possibility of interrelationships between (1) the classical large neural network model (with similar coupled divergent-convergent topologies) with (2) an underlying statistical quantum mechanical model (independent of topology and with persistence in fundamental quantum states):

The most significant result obtained from the closed Cayley tree model involves the occurrence of long-range correlation in the absence of intermediate-range correlation. This result has not been demonstrated by other classical models. The failure of the classical view of impulse transmission to account for this phenomenon has been cited by numerous investigators (Ricciiardi and Umezawa, 1967, Hokkyo 1972, Stuart, Takahashi and Umezawa 1978, 1979) as significant enough to warrant radically new assumptions on a very fundamental level and have suggested the existence of quantum cooperative modes within the brain…In addition, it is interesting to note that the (modeling) of…Goldstone particles or bosons (as per Umezawa, et al)…within the brain, demonstrates the long-range correlation of quantum numbers preserved in the ground state…In the closed Cayley tree model ground states of pairs of sites, as well as the state variable of individual sites, (can) exhibit long-range correlation.

It was a natural and common belief among early neurophysicists (e.g. Umezawa, Krizan, Barth, etc.) that classical neural models (including those with statistical mechanical aspects) will one day have to be integrated with quantum physics (with quantum statistical aspects), similar perhaps to how the domain of chemistry has historically integrated itself into quantum physics via quantum chemistry.

Several additional statistical mechanical problems of interest remain to be solved for the closed Cayley tree, including the time-dependent case and the external field situation, as well as theoretical efforts aimed at understanding interrelationships with underlying quantum constituents and their physics.

Numerical simulation

[edit]
Quench of an Ising system on a two-dimensional square lattice (500 × 500) with inverse temperature β = 10, starting from a random configuration

The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with

L = |Λ|: the total number of sites on the lattice,
σj ∈ {−1, +1}: an individual spin site on the lattice, j = 1, ..., L,
S ∈ {−1, +1}L: state of the system.

Since every spin site has ±1 spin, there are 2L different states that are possible.[29] This motivates the reason for the Ising model to be simulated using Monte Carlo methods.[29]

The Hamiltonian that is commonly used to represent the energy of the model when using Monte Carlo methods is:

Furthermore, the Hamiltonian is further simplified by assuming zero external field h, since many questions that are posed to be solved using the model can be answered in absence of an external field. This leads us to the following energy equation for state σ:

Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.[29]

Metropolis algorithm

[edit]

The Metropolis–Hastings algorithm is the most commonly used Monte Carlo algorithm to calculate Ising model estimations.[29] The algorithm first chooses selection probabilities g(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities A(μ, ν) so that detailed balance is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes ferromagnetic, meaning all of the sites point in the same direction.[29]

When implementing the algorithm, one must ensure that g(μ, ν) is selected such that ergodicity is met. In thermal equilibrium a system's energy only fluctuates within a small range.[29] This is the motivation behind the concept of single-spin-flip dynamics,[30] which states that in each transition, we will only change one of the spin sites on the lattice.[29] Furthermore, by using single- spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time. The maximum amount of change between the energy of the present state, Hμ and any possible new state's energy Hν (using single-spin-flip dynamics) is 2J between the spin we choose to "flip" to move to the new state and that spin's neighbor.[29] Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4J. Let c represent the lattice coordination number; the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to periodic boundary conditions.[29] It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.

Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following. Since there are L total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of L new states ν from our present state μ. The algorithm assumes that the selection probabilities are equal to the L states: g(μ, ν) = 1/L. Detailed balance tells us that the following equation must hold:

Thus, we want to select the acceptance probability for our algorithm to satisfy

If Hν > Hμ, then A(ν, μ) > A(μ, ν). Metropolis sets the larger of A(μ, ν) or A(ν, μ) to be 1. By this reasoning the acceptance algorithm is:[29]

The basic form of the algorithm is as follows:

  1. Pick a spin site using selection probability g(μ, ν) and calculate the contribution to the energy involving this spin.
  2. Flip the value of the spin and calculate the new contribution.
  3. If the new energy is less, keep the flipped value.
  4. If the new energy is more, only keep with probability
  5. Repeat.

The change in energy Hν − Hμ only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a pick from the distribution.

As a Markov chain

[edit]

It is possible to view the Ising model as a Markov chain, as the immediate probability Pβ(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a Markov chain Monte Carlo simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly L other states, where each transition corresponds to flipping a single spin site to the opposite value.[31] Furthermore, since the energy equation Hσ change only depends on the nearest-neighbor interaction strength J, the Ising model and its variants such the Sznajd model can be seen as a form of a voter model for opinion dynamics.

Solutions

[edit]

One dimension

[edit]

The thermodynamic limit exists as long as the interaction decay is with α > 1.[32]

  • In the case of ferromagnetic interaction with 1 < α < 2, Dyson proved, by comparison with the hierarchical case, that there is phase transition at small enough temperature.[33]
  • In the case of ferromagnetic interaction , Fröhlich and Spencer proved that there is phase transition at small enough temperature (in contrast with the hierarchical case).[34]
  • In the case of interaction with α > 2 (which includes the case of finite-range interactions), there is no phase transition at any positive temperature (i.e. finite β), since the free energy is analytic in the thermodynamic parameters.[32]
  • In the case of nearest neighbor interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamics parameters, and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite, and the truncated two-point spin correlation does not decay (remains constant). Therefore, T = 0 is the critical temperature of this case. Scaling formulas are satisfied.[35]

Ising's exact solution

[edit]

In the nearest neighbor case (with periodic or free boundary conditions) an exact solution is available. The Hamiltonian of the one-dimensional Ising model on a lattice of L sites with free boundary conditions is where J and h can be any number, since in this simplified case J is a constant representing the interaction strength between the nearest neighbors and h is the constant external magnetic field applied to lattice sites. Then the free energy is and the spin-spin correlation (i.e. the covariance) is where C(β) and c(β) are positive functions for T > 0. For T → 0, though, the inverse correlation length c(β) vanishes.

Proof
[edit]

The proof of this result is a simple computation.

If h = 0, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when Then the model factorizes under the change of variables

This gives

Therefore, the free energy is

With the same change of variables

hence it decays exponentially as soon as T ≠ 0; but for T = 0, i.e. in the limit β → ∞ there is no decay.

If h ≠ 0 we need the transfer matrix method. For the periodic boundary conditions case is the following. The partition function is The coefficients can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is or In matrix formalism where λ1 is the highest eigenvalue of V, while λ2 is the other eigenvalue: and |λ2| < λ1. This gives the formula of the free energy.

Comments
[edit]

The energy of the lowest state is −JL, when all the spins are the same. For any other configuration, the extra energy is equal to 2J times the number of sign changes that are encountered when scanning the configuration from left to right.

If we designate the number of sign changes in a configuration as k, the difference in energy from the lowest energy state is 2k. Since the energy is additive in the number of flips, the probability p of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:

The problem is reduced to independent biased coin tosses. This essentially completes the mathematical description.

From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2β). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to p, so the correlations fall off exponentially.

The partition function is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:

The logarithm divided by L is the free energy density:

which is analytic away from β = ∞. A sign of a phase transition is a non-analytic free energy, so the one-dimensional model does not have a phase transition.

One-dimensional solution with transverse field

[edit]

To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective Pauli matrices. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The transverse-field Hamiltonian is given by

The transverse-field model experiences a phase transition between an ordered and disordered regime at J ~ h. This can be shown by a mapping of Pauli matrices

Upon rewriting the Hamiltonian in terms of this change-of-basis matrices, we obtain

Since the roles of h and J are switched, the Hamiltonian undergoes a transition at J = h.[36]

Renormalization

[edit]

When there is no external field, we can derive a functional equation that satisfies using renormalization.[37] Specifically, let be the partition function with sites. Now we have:where . We sum over each of , to obtainNow, since the cosh function is even, we can solve as . Now we have a self-similarity relation:Taking the limit, we obtainwhere .

When is small, we have , so we can numerically evaluate by iterating the functional equation until is small.

Two dimensions

[edit]
  • In the ferromagnetic case there is a phase transition. At low temperature, the Peierls argument proves positive magnetization for the nearest neighbor case and then, by the Griffiths inequality, also when longer range interactions are added. Meanwhile, at high temperature, the cluster expansion gives analyticity of the thermodynamic functions.
  • In the nearest-neighbor case, the free energy was exactly computed by Onsager, through the equivalence of the model with free fermions on lattice. The spin-spin correlation functions were computed by McCoy and Wu.

Onsager's exact solution

[edit]

Onsager (1944) obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field in the thermodynamic limit as a function of temperature and the horizontal and vertical interaction energies and , respectively

From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature which solves the equation

In the isotropic case when the horizontal and vertical interaction energies are equal , the critical temperature occurs at the following point

When the interaction energies , are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bi-partite, it is invariant under this change when the magnetic field , so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bi-partite, the ferromagnetic and antiferromagnetic Ising model behave notably differently. Specifically, around a triangle, it is impossible to make all 3 spin-pairs antiparallel, so the antiferromagnetic Ising model cannot reach the minimal energy state. This is an example of geometric frustration.

Transfer matrix
[edit]

Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function

Think of the i direction as space, and the j direction as time. This is an independent sum over all the values that the spins can take at each time slice. This is a type of path integral, it is the sum over all spin histories.

A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time t and time t + Δt:

The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with.

where N is the number of time slices. The sum over all paths is given by a product of matrices, each matrix element is the transition probability from one slice to the next.

Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at time 1. This defines the transfer matrix:

The configuration in each slice is a one-dimensional collection of spins. At each time slice, T has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are C1 and C2, and they are all one-dimensional spin configurations. We can think of the vector space that T acts on as all complex linear combinations of these. Using quantum mechanical notation:

where each basis vector is a spin configuration of a one-dimensional Ising model.

Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the sum over all histories which come back to the original configuration after N steps:

Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix T, we can find Z.

T in terms of Pauli matrices
[edit]

The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i:

In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.

Define a second operator which multiplies the basis vector by +1 and −1 according to the spin at position i:

T can be written in terms of these:

where A and B are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position i has flipped.

Spin flip creation and annihilation operators
[edit]

Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σz term in T counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:

The first term flips a spin, so depending on the basis state it either:

  1. moves a spin-flip one unit to the right
  2. moves a spin-flip one unit to the left
  3. produces two spin-flips on neighboring sites
  4. destroys two spin-flips on neighboring sites.

Writing this out in terms of creation and annihilation operators:

Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the T matrix can be diagonalized by Fourier transforms.

Carrying out the diagonalization produces the Onsager free energy.

Onsager's formula for spontaneous magnetization
[edit]

Onsager famously announced the following expression for the spontaneous magnetization M of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof[7]

where and are horizontal and vertical interaction energies.

A complete derivation was only given in 1951 by Yang (1952) using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward[7] using Szegő's limit formula for Toeplitz determinants by treating the magnetization as the limit of correlation functions.

Minimal model

[edit]

At the critical point, the two-dimensional Ising model is a two-dimensional conformal field theory. The spin and energy correlation functions are described by a minimal model, which has been exactly solved.

Three dimensions

[edit]

In three as in two dimensions, the most studied case of the Ising model is the translation-invariant model on a cubic lattice with nearest-neighbor coupling in the zero magnetic field. Many theoreticians searched for an analytical three-dimensional solution for many decades, which would be analogous to Onsager's solution in the two-dimensional case.[38] [39] Such a solution has not been found until now, although there is no proof that it may not exist. In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic strings by Alexander Polyakov and Vladimir Dotsenko. This construction has been carried on the lattice, and the continuum limit, conjecturally describing the critical point, is unknown.

In three as in two dimensions, Peierls' argument shows that there is a phase transition. This phase transition is rigorously known to be continuous (in the sense that correlation length diverges and the magnetization goes to zero), and is called the critical point. It is believed that the critical point can be described by a renormalization group fixed point of the Wilson-Kadanoff renormalization group transformation. It is also believed that the phase transition can be described by a three-dimensional unitary conformal field theory, as evidenced by Monte Carlo simulations,[40][41] exact diagonalization results in quantum models,[42] and quantum field theoretical arguments.[43] Although it is an open problem to establish rigorously the renormalization group picture or the conformal field theory picture, theoretical physicists have used these two methods to compute the critical exponents of the phase transition, which agree with the experiments and with the Monte Carlo simulations. This conformal field theory describing the three-dimensional Ising critical point is under active investigation using the method of the conformal bootstrap.[44][45][46][47] This method currently yields the most precise information about the structure of the critical theory (see Ising critical exponents).

In 2000, Sorin Istrail of Sandia National Laboratories proved that the spin glass Ising model on a nonplanar lattice is NP-complete. That is, assuming PNP, the general spin glass Ising model is exactly solvable only in planar cases, so solutions for dimensions higher than two are also intractable.[48] Istrail's result only concerns the spin glass model with spatially varying couplings, and tells nothing about Ising's original ferromagnetic model with equal couplings.

Four dimensions and above

[edit]

In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large so as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.

Local field

[edit]

The field H is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how high wavelengths are cut off. The details are not too important, since the goal is to find the statistics of H and not the spins. Once the correlations in H are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in H.

For any value of the slowly varying field H, the free energy (log-probability) is a local analytic function of H and its gradients. The free energy F(H) is defined to be the sum over all Ising configurations which are consistent with the long wavelength field. Since H is a coarse description, there are many Ising configurations consistent with each value of H, so long as not too much exactness is required for the match.

Since the allowed range of values of the spin in any region only depends on the values of H within one averaging volume from that region, the free energy contribution from each region only depends on the value of H there and in the neighboring regions. So F is a sum over all regions of a local contribution, which only depends on H and its derivatives.

By symmetry in H, only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:

On a square lattice, symmetries guarantee that the coefficients Zi of the derivative terms are all equal. But even for an anisotropic Ising model, where the Zi's in different directions are different, the fluctuations in H are isotropic in a coordinate system where the different directions of space are rescaled.

On any lattice, the derivative term is a positive definite quadratic form, and can be used to define the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make Zij = δij. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low order terms. At higher order multicritical points, this accidental symmetry is lost.

Since βF is a function of a slowly spatially varying field, the probability of any field configuration is (omitting higher-order terms):

The statistical average of any product of H terms is equal to:

The denominator in this expression is called the partition function:and the integral over all possible values of H is a statistical path integral. It integrates exp(βF) over all values of H, over all the long wavelength fourier components of the spins. F is a "Euclidean" Lagrangian for the field H. It is similar to the Lagrangian in of a scalar field in quantum field theory, the difference being that all the derivative terms enter with a positive sign, and there is no overall factor of i (thus "Euclidean").

Dimensional analysis

[edit]

The form of F can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of H needs to be determined.

In the generic case, choosing the scaling law for H is easy, since the only term that contributes is the first one,

This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of H at any point fluctuates completely independently of the value at any other point.

The scale of the field can be redefined to absorb the coefficient A, and then it is clear that A only determines the overall scale of fluctuations. The ultralocal model describes the long wavelength high temperature behavior of the Ising model, since in this limit the fluctuation averages are independent from point to point.

To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in H go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing A in the system of units where H does not absorb A. The phase transition can only happen when the subleading terms in F can contribute, but since the first term dominates at long distances, the coefficient A must be tuned to zero. This is the location of the critical point:

where t is a parameter which goes through zero at the transition.

Since t is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once t is small, the scale of the field can either be set to fix the coefficient of the H4 term or the (∇H)2 term to 1.

Magnetization

[edit]

To find the magnetization, fix the scaling of H so that λ is one. Now the field H has dimension −d/4, so that H4ddx is dimensionless, and Z has dimension 2 − d/2. In this scaling, the gradient term is only important at long distances for d ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.

There is one subtle point. The field H is fluctuating statistically, and the fluctuations can shift the zero point of t. To see how, consider H4 split in the following way:

The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in t. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of t by dimensional analysis, it is the shifted t that is important. This was historically very confusing, because the shift in t at any finite λ is finite, but near the transition t is very small. The fractional change in t is very large, and in units where t is fixed the shift looks infinite.

The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted t,

For t < 0, the minima are at H proportional to the square root of t. So Landau's catastrophe argument is correct in dimensions larger than 5. The magnetization exponent in dimensions higher than 5 is equal to the mean-field value.

When t is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.

Fluctuations

[edit]

To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1 − d/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the H2 term is 2, while the scale dimension of the H4 term is 4 − d. For d < 4, the H4 term has positive scale dimension. In dimensions higher than 4 it has negative scale dimensions.

This is an essential difference. In dimensions higher than 4, fixing the scale of the gradient term means that the coefficient of the H4 term is less and less important at longer and longer wavelengths. The dimension at which nonquadratic contributions begin to contribute is known as the critical dimension. In the Ising model, the critical dimension is 4.

In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable from as Gaussian averages:

valid when x − y is large. The function G(x − y) is the analytic continuation to imaginary time of the Feynman propagator, since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by Wick's theorem. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partition into pairs of the product of G(x − y) for each pair.

where C is the proportionality constant. So knowing G is enough. It determines all the multipoint correlations of the field.

The critical two-point function

[edit]

To determine the form of G, consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:

This is valid at noncoincident points only, since the correlations of H are singular when points collide. H obeys classical equations of motion for the same reason that quantum mechanical operators obey them—its fluctuations are defined by a path integral.

At the critical point t = 0, this is Laplace's equation, which can be solved by Gauss's method from electrostatics. Define an electric field analog by

Away from the origin:

since G is spherically symmetric in d dimensions, and E is the radial gradient of G. Integrating over a large d − 1 dimensional sphere,

This gives:

and G can be found by integrating with respect to r.

The constant C fixes the overall normalization of the field.

G(r) away from the critical point

[edit]

When t does not equal zero, so that H is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:

For r small compared with , the solution diverges exactly the same way as in the critical case, but the long distance behavior is modified.

To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:

This is G, since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k.

This is the inverse of the operator ∇2 − t in k-space, acting on the unit function in k-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as G with the same boundary conditions that determine the strength of the divergence at 0.

The interpretation of the integral representation over the proper time τ is that the two point function is the sum over all random walk paths that link position 0 to position x over time τ. The density of these paths at time τ at position x is Gaussian, but the random walkers disappear at a steady rate proportional to t so that the Gaussian at time τ is diminished in height by a factor that decreases steadily exponentially. In the quantum field theory context, these are the paths of relativistically localized quanta in a formalism that follows the paths of individual particles. In the pure statistical context, these paths still appear by the mathematical correspondence with quantum fields, but their interpretation is less directly physical.

The integral representation immediately shows that G(r) is positive, since it is represented as a weighted sum of positive Gaussians. It also gives the rate of decay at large r, since the proper time for a random walk to reach position τ is r2 and in this time, the Gaussian height has decayed by . The decay factor appropriate for position r is therefore .

A heuristic approximation for G(r) is:

This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of Bessel functions.

Symanzik polymer interpretation

[edit]

The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the H4 interaction is 4. The term H4 can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the H2 coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.

The fractal dimension of an ordinary random walk is 2. The number of balls of size ε required to cover the path increase as ε−2. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. Kurt Symanzik argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.

4 − ε dimensions – renormalization group

[edit]

The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.

The negative logarithm of the probability of any field configuration H is the free energy function

The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a Feynman expansion as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.

Although dimensional analysis shows that both λ and Z are dimensionless, this is misleading. The long wavelength statistical fluctuations are not exactly scale invariant, and only become scale invariant when the interaction strength vanishes.

The reason is that there is a cutoff used to define H, and the cutoff defines the shortest wavelength. Fluctuations of H at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.

Wilson renormalization
[edit]

A quick heuristic way of studying the scaling is to cut off the H wavenumbers at a point λ. Fourier modes of H with wavenumbers larger than λ are not allowed to fluctuate. A rescaling of length that make the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.

To restore the old cutoff, perform a partial integration over all the wavenumbers which used to be forbidden, but are now fluctuating. In Feynman diagrams, integrating over a fluctuating mode at wavenumber k links up lines carrying momentum k in a correlation function in pairs, with a factor of the inverse propagator.

Under rescaling, when the system is shrunk by a factor of (1+b), the t coefficient scales up by a factor (1+b)2 by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and do not change at all.

The lowest order effect of integrating out can be calculated from the equations of motion:

This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < k < (1+b)Λ, it will be a slightly different identity.

Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the H3 term. In a Feynman diagram expansion, the H3 term in a correlation function inside a correlation has three dangling lines. Joining two of them at large wavenumber k gives a change H3 with one dangling line, so proportional to H:

The factor of 3 comes from the fact that the loop can be closed in three different ways.

The integral should be split into two parts:

The first part is not proportional to t, and in the equation of motion it can be absorbed by a constant shift in t. It is caused by the fact that the H3 term has a linear part. Only the second term, which varies from t to t, contributes to the critical scaling.

This new linear term adds to the first term on the left hand side, changing t by an amount proportional to t. The total change in t is the sum of the term from dimensional analysis and this second term from operator products:

So t is rescaled, but its dimension is anomalous, it is changed by an amount proportional to the value of λ.

But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from H3 splits into three, which quickly joins with one of the other lines from the same vertex. The correction to the vertex is

The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract. So

These two equations together define the renormalization group equations in four dimensions:

The coefficient B is determined by the formula

and is proportional to the area of a three-dimensional sphere of radius λ, times the width of the integration region bΛ divided by Λ4:

In other dimensions, the constant B changes, but the same constant appears both in the t flow and in the coupling flow. The reason is that the derivative with respect to t of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the t is the combinatorial factors from joining and splitting.

Wilson–Fisher fixed point
[edit]

To investigate three dimensions starting from the four-dimensional theory should be possible, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling does not change very much when the dimension is changed.

The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it. The prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4 − ε defined by:

In dimension 4 − ε, the coupling λ has positive scale dimension ε, and this must be added to the flow.

The coefficient B is dimension dependent, but it will cancel. The fixed point for λ is no longer zero, but at: where the scale dimensions of t is altered by an amount λB = ε/3.

The magnetization exponent is altered proportionately to:

which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.

Infinite dimensions – mean field

[edit]

The behavior of an Ising model on a fully connected graph may be completely understood by mean-field theory. This type of description is appropriate to very-high-dimensional square lattices, because then each site has a very large number of neighbors.

The idea is that if each spin is connected to a large number of spins, only the average ratio of + spins to − spins is important, since the fluctuations about this mean will be small. The mean field H is the average fraction of spins which are + minus the average fraction of spins which are −. The energy cost of flipping a single spin in the mean field H is ±2JNH. It is convenient to redefine J to absorb the factor N, so that the limit N → ∞ is smooth. In terms of the new J, the energy cost for flipping a spin is ±2JH.

This energy cost gives the ratio of probability p that the spin is + to the probability 1−p that the spin is −. This ratio is the Boltzmann factor:

so that

The mean value of the spin is given by averaging 1 and −1 with the weights p and 1 − p, so the mean value is 2p − 1. But this average is the same for all spins, and is therefore equal to H.

The solutions to this equation are the possible consistent mean fields. For βJ < 1 there is only the one solution at H = 0. For bigger values of β there are three solutions, and the solution at H = 0 is unstable.

The instability means that increasing the mean field above zero a little bit produces a statistical fraction of spins which are + which is bigger than the value of the mean field. So a mean field which fluctuates above zero will produce an even greater mean field, and will eventually settle at the stable solution. This means that for temperatures below the critical value βJ = 1 the mean-field Ising model undergoes a phase transition in the limit of large N.

Above the critical temperature, fluctuations in H are damped because the mean field restores the fluctuation to zero field. Below the critical temperature, the mean field is driven to a new equilibrium value, which is either the positive H or negative H solution to the equation.

For βJ = 1 + ε, just below the critical temperature, the value of H can be calculated from the Taylor expansion of the hyperbolic tangent:

Dividing by H to discard the unstable solution at H = 0, the stable solutions are:

The spontaneous magnetization H grows near the critical point as the square root of the change in temperature. This is true whenever H can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led Landau to suspect that all Ising type phase transitions in all dimensions should follow this law.

The mean-field exponent is universal because changes in the character of solutions of analytic equations are always described by catastrophes in the Taylor series, which is a polynomial equation. By symmetry, the equation for H must only have odd powers of H on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of H on the right hand side is 1. Near the transition:

Whatever A and B are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy βF is either non-analytic or non-generic at the exact β where the transition occurs.

But the spontaneous magnetization in magnetic systems and the density in gasses near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:

The exponent is also universal, since it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean-field value. This was a great surprise.

This is also true in two dimensions, where

But there it was not a surprise, because it was predicted by Onsager.

Low dimensions – block spins

[edit]

In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.

But renormalization can also be productively applied to the spins directly, without passing to an average field. Historically, this approach is due to Leo Kadanoff and predated the perturbative ε expansion.

The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.

Migdal–Kadanoff renormalization
[edit]

Write the two-dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:

By translation invariance, Jij is only a function of i-j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two-dimensional vector i − j. The higher order coefficients are also similarly restricted.

The renormalization iteration divides the lattice into two parts – even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (i,j), the odd sites are those with i + j odd and the even sites those with i + j even, and even sites are only connected to odd sites.

The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances larger.

Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than the critical temperature, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents when more terms are included.

The simplest approximation is to keep only the usual J term, and discard everything else. This will generate a flow in J, analogous to the flow in t at the fixed point of λ in the ε expansion.

To find the change in J, consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:

where N± is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:

This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers + and – is:

From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8J. The difference in energy between all spins equal and nonstaggered but net zero spin is 4J. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6J. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:

For small J, this quickly flows to zero coupling. Large J's flow to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.

Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.

See also

[edit]

Footnotes

[edit]
  1. ^ See Gallavotti (1999), Chapters VI-VII.
  2. ^ Ernst Ising, Contribution to the Theory of Ferromagnetism
  3. ^ See Baierlein (1999), Chapter 16.
  4. ^ Barahona, Francisco; Grötschel, Martin; Jünger, Michael; Reinelt, Gerhard (1988). "An Application of Combinatorial Optimization to Statistical Physics and Circuit Layout Design". Operations Research. 36 (3): 493–513. doi:10.1287/opre.36.3.493. ISSN 0030-364X. JSTOR 170992.
  5. ^ El-Showk, Sheer; Paulos, Miguel F.; Poland, David; Rychkov, Slava; Simmons-Duffin, David; Vichi, Alessandro (2014). "Solving the 3d Ising Model with the Conformal Bootstrap II. C -Minimization and Precise Critical Exponents" (PDF). Journal of Statistical Physics. 157 (4–5): 869–914. arXiv:1403.4545. Bibcode:2014JSP...157..869E. doi:10.1007/s10955-014-1042-7. S2CID 119627708. Archived from the original (PDF) on 2014-04-07. Retrieved 2013-04-21.
  6. ^ Peierls, R.; Born, M. (1936). "On Ising's model of ferromagnetism". Mathematical Proceedings of the Cambridge Philosophical Society. 32 (3): 477. Bibcode:1936PCPS...32..477P. doi:10.1017/S0305004100019174. S2CID 122630492.
  7. ^ a b c Montroll, Potts & Ward 1963, pp. 308–309
  8. ^ Simon, Barry (1980-10-01). "Correlation inequalities and the decay of correlations in ferromagnets". Communications in Mathematical Physics. 77 (2): 111–126. Bibcode:1980CMaPh..77..111S. doi:10.1007/BF01982711. ISSN 1432-0916. S2CID 17543488.
  9. ^ Duminil-Copin, Hugo; Tassion, Vincent (2016-04-01). "A New Proof of the Sharpness of the Phase Transition for Bernoulli Percolation and the Ising Model". Communications in Mathematical Physics. 343 (2): 725–745. arXiv:1502.03050. Bibcode:2016CMaPh.343..725D. doi:10.1007/s00220-015-2480-z. ISSN 1432-0916. S2CID 119330137.
  10. ^ Beffara, Vincent; Duminil-Copin, Hugo (2012-08-01). "The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1". Probability Theory and Related Fields. 153 (3): 511–542. doi:10.1007/s00440-011-0353-8. ISSN 1432-2064. S2CID 55391558.
  11. ^ Shi, Y.; Duke, T. (1998-11-01). "Cooperative model of bacteril sensing". Physical Review E. 58 (5): 6399–6406. arXiv:physics/9901052. Bibcode:1998PhRvE..58.6399S. doi:10.1103/PhysRevE.58.6399. S2CID 18854281.
  12. ^ Bai, Fan; Branch, Richard W.; Nicolau, Dan V.; Pilizota, Teuta; Steel, Bradley C.; Maini, Philip K.; Berry, Richard M. (2010-02-05). "Conformational Spread as a Mechanism for Cooperativity in the Bacterial Flagellar Switch". Science. 327 (5966): 685–689. Bibcode:2010Sci...327..685B. doi:10.1126/science.1182105. ISSN 0036-8075. PMID 20133571. S2CID 206523521.
  13. ^ Vtyurina, Natalia N.; Dulin, David; Docter, Margreet W.; Meyer, Anne S.; Dekker, Nynke H.; Abbondanzieri, Elio A. (2016-04-18). "Hysteresis in DNA compaction by Dps is described by an Ising model". Proceedings of the National Academy of Sciences. 113 (18): 4982–7. Bibcode:2016PNAS..113.4982V. doi:10.1073/pnas.1521241113. ISSN 0027-8424. PMC 4983820. PMID 27091987.
  14. ^ Jaynes, E. T. (1957), "Information Theory and Statistical Mechanics", Physical Review, 106 (4): 620–630, Bibcode:1957PhRv..106..620J, doi:10.1103/PhysRev.106.620, S2CID 17870175.
  15. ^ Jaynes, Edwin T. (1957), "Information Theory and Statistical Mechanics II", Physical Review, 108 (2): 171–190, Bibcode:1957PhRv..108..171J, doi:10.1103/PhysRev.108.171.
  16. ^ Elad Schneidman; Michael J. Berry; Ronen Segev; William Bialek (2006), "Weak pairwise correlations imply strongly correlated network states in a neural population", Nature, 440 (7087): 1007–1012, arXiv:q-bio/0512013, Bibcode:2006Natur.440.1007S, doi:10.1038/nature04701, PMC 1785327, PMID 16625187.
  17. ^ J-S Wang, W Selke, VB Andreichenko, and VS Dotsenko (1990), "The critical behaviour of the two-dimensional dilute model", Physica A, 164 (2): 221–239, Bibcode:1990PhyA..164..221W, doi:10.1016/0378-4371(90)90196-Y{{citation}}: CS1 maint: multiple names: authors list (link)
  18. ^ Glauber, Roy J. (February 1963). "Roy J. Glauber "Time-Dependent Statistics of the Ising Model"". Journal of Mathematical Physics. 4 (2): 294–307. doi:10.1063/1.1703954. Retrieved 2021-03-21.
  19. ^ Nakano, Kaoru (1971). "Learning Process in a Model of Associative Memory". Pattern Recognition and Machine Learning. pp. 172–186. doi:10.1007/978-1-4615-7566-5_15. ISBN 978-1-4615-7568-9.
  20. ^ Nakano, Kaoru (1972). "Associatron-A Model of Associative Memory". IEEE Transactions on Systems, Man, and Cybernetics. SMC-2 (3): 380–388. doi:10.1109/TSMC.1972.4309133.
  21. ^ Amari, Shun-Ichi (1972). "Learning patterns and pattern sequences by self-organizing nets of threshold elements". IEEE Transactions. C (21): 1197–1206.
  22. ^ Little, W. A. (1974). "The Existence of Persistent States in the Brain". Mathematical Biosciences. 19 (1–2): 101–120. doi:10.1016/0025-5564(74)90031-5.
  23. ^ Sherrington, David; Kirkpatrick, Scott (1975-12-29). "Solvable Model of a Spin-Glass". Physical Review Letters. 35 (26): 1792–1796. Bibcode:1975PhRvL..35.1792S. doi:10.1103/PhysRevLett.35.1792. ISSN 0031-9007.
  24. ^ Hopfield, J. J. (1982). "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences. 79 (8): 2554–2558. Bibcode:1982PNAS...79.2554H. doi:10.1073/pnas.79.8.2554. PMC 346238. PMID 6953413.
  25. ^ Hopfield, J. J. (1984). "Neurons with graded response have collective computational properties like those of two-state neurons". Proceedings of the National Academy of Sciences. 81 (10): 3088–3092. Bibcode:1984PNAS...81.3088H. doi:10.1073/pnas.81.10.3088. PMC 345226. PMID 6587342.
  26. ^ Engel, A.; Broeck, C. van den (2001). Statistical mechanics of learning. Cambridge, UK ; New York, NY: Cambridge University Press. ISBN 978-0-521-77307-2.
  27. ^ Seung, H. S.; Sompolinsky, H.; Tishby, N. (1992-04-01). "Statistical mechanics of learning from examples". Physical Review A. 45 (8): 6056–6091. Bibcode:1992PhRvA..45.6056S. doi:10.1103/PhysRevA.45.6056. PMID 9907706.
  28. ^ Yi-Ping Ma; Ivan Sudakov; Courtenay Strong; Kenneth Golden (2017). "Ising model for melt ponds on Arctic sea ice". arXiv:1408.2487v3 [physics.ao-ph].
  29. ^ a b c d e f g h i j Newman, M.E.J.; Barkema, G.T. (1999). Monte Carlo Methods in Statistical Physics. Clarendon Press. ISBN 9780198517979.
  30. ^ Süzen, Mehmet (29 September 2014). "M. Suzen "Effective ergodicity in single-spin-flip dynamics"". Physical Review E. 90 (3): 032141. arXiv:1405.4497. Bibcode:2014PhRvE..90c2141S. doi:10.1103/PhysRevE.90.032141. PMID 25314429. S2CID 118355454. Retrieved 2022-08-09.
  31. ^ Teif, Vladimir B. (2007). "General transfer matrix formalism to calculate DNA-protein-drug binding in gene regulation". Nucleic Acids Res. 35 (11): e80. doi:10.1093/nar/gkm268. PMC 1920246. PMID 17526526.
  32. ^ a b Ruelle, David (1999) [1969]. Statistical Mechanics: Rigorous Results. World Scientific. ISBN 978-981-4495-00-4.
  33. ^ Dyson, F. J. (1969). "Existence of a phase-transition in a one-dimensional Ising ferromagnet". Comm. Math. Phys. 12 (2): 91–107. Bibcode:1969CMaPh..12...91D. doi:10.1007/BF01645907. S2CID 122117175.
  34. ^ Fröhlich, J.; Spencer, T. (1982). "The phase transition in the one-dimensional Ising model with 1/r2 interaction energy". Comm. Math. Phys. 84 (1): 87–101. Bibcode:1982CMaPh..84...87F. doi:10.1007/BF01208373. S2CID 122722140.
  35. ^ Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 0690578, archived from the original on 2012-03-20, retrieved 2009-10-25
  36. ^ Suzuki, Sei; Inoue, Jun-ichi; Chakrabarti, Bikas K. (2012). Quantum Ising Phases and Transitions in Transverse Ising Models. Springer. doi:10.1007/978-3-642-33039-1. ISBN 978-3-642-33038-4.
  37. ^ Maris, Humphrey J.; Kadanoff, Leo P. (June 1978). "Teaching the renormalization group". American Journal of Physics. 46 (6): 652–657. Bibcode:1978AmJPh..46..652M. doi:10.1119/1.11224. ISSN 0002-9505.
  38. ^ Wood, Charlie (24 June 2020). "The Cartoon Picture of Magnets That Has Transformed Science". Quanta Magazine. Retrieved 2020-06-26.
  39. ^ "Ken Wilson recalls how Murray Gell-Mann suggested that he solve the three-dimensional Ising model".
  40. ^ Billó, M.; Caselle, M.; Gaiotto, D.; Gliozzi, F.; Meineri, M.; others (2013). "Line defects in the 3d Ising model". JHEP. 1307 (7): 055. arXiv:1304.4110. Bibcode:2013JHEP...07..055B. doi:10.1007/JHEP07(2013)055. S2CID 119226610.
  41. ^ Cosme, Catarina; Lopes, J. M. Viana Parente; Penedones, Joao (2015). "Conformal symmetry of the critical 3D Ising model inside a sphere". Journal of High Energy Physics. 2015 (8): 22. arXiv:1503.02011. Bibcode:2015JHEP...08..022C. doi:10.1007/JHEP08(2015)022. S2CID 53710971.
  42. ^ Zhu, Wei; Han, Chao; Huffman, Emilie; Hofmann, Johannes S.; He, Yin-Chen (2023). "Uncovering Conformal Symmetry in the 3D Ising Transition: State-Operator Correspondence from a Quantum Fuzzy Sphere Regularization". Physical Review X. 13 (2): 021009. arXiv:2210.13482. Bibcode:2023PhRvX..13b1009Z. doi:10.1103/PhysRevX.13.021009. S2CID 253107625.
  43. ^ Delamotte, Bertrand; Tissier, Matthieu; Wschebor, Nicolás (2016). "Scale invariance implies conformal invariance for the three-dimensional Ising model". Physical Review E. 93 (12144): 012144. arXiv:1501.01776. Bibcode:2016PhRvE..93a2144D. doi:10.1103/PhysRevE.93.012144. PMID 26871060. S2CID 14538564.
  44. ^ El-Showk, Sheer; Paulos, Miguel F.; Poland, David; Rychkov, Slava; Simmons-Duffin, David; Vichi, Alessandro (2012). "Solving the 3D Ising Model with the Conformal Bootstrap". Phys. Rev. D86 (2): 025022. arXiv:1203.6064. Bibcode:2012PhRvD..86b5022E. doi:10.1103/PhysRevD.86.025022. S2CID 39692193.
  45. ^ El-Showk, Sheer; Paulos, Miguel F.; Poland, David; Rychkov, Slava; Simmons-Duffin, David; Vichi, Alessandro (2014). "Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents". Journal of Statistical Physics. 157 (4–5): 869–914. arXiv:1403.4545. Bibcode:2014JSP...157..869E. doi:10.1007/s10955-014-1042-7. S2CID 119627708.
  46. ^ Simmons-Duffin, David (2015). "A semidefinite program solver for the conformal bootstrap". Journal of High Energy Physics. 2015 (6): 174. arXiv:1502.02033. Bibcode:2015JHEP...06..174S. doi:10.1007/JHEP06(2015)174. ISSN 1029-8479. S2CID 35625559.
  47. ^ Kadanoff, Leo P. (April 30, 2014). "Deep Understanding Achieved on the 3d Ising Model". Journal Club for Condensed Matter Physics. Archived from the original on July 22, 2015. Retrieved July 19, 2015.
  48. ^ Cipra, Barry A. (2000). "The Ising Model Is NP-Complete" (PDF). SIAM News. 33 (6).

References

[edit]
[edit]