Comp Notes
William F. Schneider
Contents
1 Introduction
8 First-principles thermodynamics
12 Surfaces
13 Implicit solvation
1 Introduction
1.1 What do we care about?
Things chemistry/materials-related:
• What are the properties of atoms?
• What molecules do they make? What other substances do they make?
• What are the shapes of those molecules? Structures of those solids? Properties of them?
• How do those substances react with each other?
• What are the energies of those reactions?
• What are the rates of those reactions?
• What is the strongest substance?
• How do we make a substance to do. . . .?
• add your own questions. . . .
Things that relate to the chemical properties of substances.
1.2 How are we going to figure these out? With only a computer?
1926 Erwin Schrödinger equation: ĤΨ = EΨ
1929 Paul Dirac, British physicist
The fundamental laws necessary for the mathematical treatment of a large part of physics
and the whole of chemistry are thus completely known, and the difficulty lies only in the
fact that application of these laws leads to equations that are too complex to be solved.
It therefore becomes desirable that approximate practical methods of applying quantum
mechanics should be developed, which can lead to an explanation of the main features
of complex atomic systems without too much computation.
1930’s-1950’s Elaboration, analytical applications
1950’s Computers start to appear for technical applications
1960’s Density functional theory emerges.
1960’s-1970’s Numerical solutions of Schrödinger equation for atoms/molecules—expert users
1980’s “Supercomputer” era—routine use of computational chemistry software becomes possible
Figure 1: Ohio State Cray Y-MP supercomputer, ca. 1989. World’s fastest computer at the time.
333 MFlop top speed, 512 Mb RAM
Observables and their operators:

• Linear momentum: $p_x, \ldots \;\rightarrow\; \hat{p}_x, \ldots = -i\hbar\dfrac{\partial}{\partial x}, \ldots$
• Angular momentum: $l_x, \ldots \;\rightarrow\; \hat{l}_x, \ldots = -i\hbar\left(y\dfrac{\partial}{\partial z} - z\dfrac{\partial}{\partial y}\right), \ldots$
• Kinetic energy: $T \;\rightarrow\; \hat{T} = -\dfrac{\hbar^2}{2m}\nabla^2$
• Potential energy: $V \;\rightarrow\; \hat{V} = V(r,t)$
• Total energy: $E \;\rightarrow\; \hat{H} = -\dfrac{\hbar^2}{2m}\nabla^2 + V(r,t)$
The time-independent, stationary states of the system are solutions to the equation
ĤΨ(r) = EΨ(r)
Postulate 5: (The uncertainty principle.) Operators that do not commute (Â(B̂Ψ) ≠ B̂(ÂΨ))
are called conjugate. Conjugate observables cannot be determined simultaneously to arbitrary
accuracy. For example, the standard deviation in the measured positions and momenta of
particles all described by the same Ψ must satisfy ∆x∆px ≥ ~/2.
$$-\frac{\hbar^2}{2m_e}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\psi(x,y,z) = E\,\psi(x,y,z) \qquad (6)$$
$$\psi(x,y,z) = 0 \quad \text{for } x,y,z \le 0 \text{ or } x,y,z \ge L \qquad (7)$$
A second-order, linear, partial differential equation and a boundary value problem. Solve by separation
of variables: postulate ψ(x, y, z) = X(x)Y(y)Z(z), substitute, and rearrange to get
$$-\frac{\hbar^2}{2m_e}\frac{\partial^2 X(x)}{\partial x^2} = E_x X(x), \qquad X(0) = X(L) = 0 \qquad (9)$$
Seek function that twice differentiated returns itself and satisfies boundary conditions.
$$X(x) = \sin\left(\frac{n_x\pi x}{L}\right), \quad n_x = 1,2,3,\ldots \qquad (10)$$
$$E_{n_x} = \frac{n_x^2\pi^2\hbar^2}{2m_e L^2} \qquad (11)$$
Solutions called eigenfunctions (or wavefunctions) and eigenvalues. Characterized by quantum numbers, one for each degree of freedom. These (and all QM) solutions have certain special properties, including that they are orthonormal and form a complete set.
Normalization
Seek a constant such that the inner eigenfunction product is unity.
$$C^2\int_0^L \sin^2\left(\frac{n_x\pi x}{L}\right)dx = C^2\,L/2 = 1 \;\rightarrow\; C = \pm\sqrt{\frac{2}{L}} \qquad (12)$$
$$X(x) = \pm\sqrt{\frac{2}{L}}\sin\left(\frac{n_x\pi x}{L}\right), \quad n_x = 1,2,3,\ldots \qquad (13)$$
Orthonormal
$$\langle X_{n_x}|X_{n_x'}\rangle = \delta_{n_x,n_x'} \quad \text{(Dirac notation)} \qquad (14)$$
Properties of solutions:
• Symmetry of system introduces degeneracy in solutions
• Energy depends on volume → pressure!
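To make these last two points concrete, here is a minimal Python sketch (not part of the original notes) that evaluates Eq. 11 generalized to a cubic box, E = (n_x² + n_y² + n_z²)π²ħ²/(2m_eL²), in atomic units: degenerate levels appear whenever the quantum numbers can be permuted, and every level scales as 1/L².

# Minimal sketch (not from the notes): particle-in-a-cubic-box levels,
# E = (nx^2 + ny^2 + nz^2) * pi^2 * hbar^2 / (2 m_e L^2), in atomic units
# (hbar = m_e = 1). Permutable quantum numbers give degenerate levels, and
# every level scales as 1/L^2, i.e. the energy depends on the box volume.
from itertools import product
from collections import Counter
import math

def box_levels(L, nmax=3):
    """Return {energy: degeneracy} for quantum numbers 1..nmax per direction."""
    levels = Counter()
    for nx, ny, nz in product(range(1, nmax + 1), repeat=3):
        E = (nx**2 + ny**2 + nz**2) * math.pi**2 / (2.0 * L**2)
        levels[round(E, 10)] += 1
    return dict(sorted(levels.items()))

for L in (1.0, 2.0):   # doubling the box length lowers every level by a factor of 4
    print("L =", L, list(box_levels(L).items())[:4])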
$$\left(-\frac{\hbar^2}{2m_e}\nabla^2 + V(r)\right)\psi(r) = E\,\psi(r) \qquad (19)$$
$$V(r) = -\frac{e^2}{4\pi\epsilon_0}\frac{1}{|r|} \qquad (20)$$
$$\left(-\frac{\hbar^2}{2m_e}\frac{d^2}{dr^2} + \frac{\hbar^2 l(l+1)}{2m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}\right)rR(r) = E\,rR(r)$$
Solutions are a polynomial times an exponential decay. Exponential part called a Slater function. The larger
the exponent, the faster the decay. Degree of the polynomial determined by the principal quantum number
n = 1, 2, . . ..
Energy expression, corresponds to our conventional H atom spectrum
$$E_n = -\frac{1}{n^2}\frac{e^2}{2a_0} = -13.6\ \mathrm{eV}\cdot\frac{1}{n^2}, \quad n = 1,2,\ldots$$
$$\langle r\rangle = \int r\,P_{nl}(r)\,dr = \left(\frac{3}{2}n^2 - \frac{1}{2}l(l+1)\right)a_0$$
Note the darn electron doesn't want to stay in the Coulomb well! The wavefunction extends beyond the
classical region defined by $E_n = V(r_{classical})$. This phenomenon, called tunneling, is a purely
quantum mechanical effect and is pervasive in chemistry, contributing for instance to chemical bonding.
$$\left(-\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\right)rR(r) = E\,rR(r), \qquad 0 < r < \infty$$
Guess something. Must obey appropriate boundary conditions and be a well-behaved function. For
example, a Gaussian:
$$g_\xi(r) = e^{-\xi r^2}$$
$$V(r) = -\frac{e^2}{4\pi\epsilon_0}\frac{1}{r}, \qquad 0 < r < \infty$$
$$\hat{H} = -\frac{\hbar^2}{2m_e}\left\{\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) - \frac{\hat{L}^2}{\hbar^2 r^2}\right\} + V(r)$$
$$\hat{L}^2 = -\hbar^2\left[\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2} + \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)\right]$$
$$\left(-\frac{\hbar^2}{2m_e}\frac{d^2}{dr^2} + \frac{\hbar^2 l(l+1)}{2m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}\right)rR(r) = E\,rR(r)$$
$$R_{nl}(r) = N_{nl}\,e^{-x/2}\,x^l\,L_{nl}(x), \qquad x = \frac{2r}{na_0}$$
$$P_{nl}(r) = r^2 R_{nl}^2$$
$$n = 1,2,\ldots, \qquad l = 0,\ldots,n-1, \qquad m_l = 0,\pm 1,\ldots,\pm l$$
$$N_{nl} = \sqrt{\left(\frac{2}{na_0}\right)^3\frac{(n-l-1)!}{2n(n+l)!}}$$
$$E_n = -\frac{1}{2}\frac{\hbar^2}{m_e a_0^2}\frac{1}{n^2} = -\frac{E_H}{2}\frac{1}{n^2}$$
$$|L| = \hbar\sqrt{l(l+1)}, \qquad L_z = m_l\hbar$$
$$\langle r\rangle = \left(\frac{3}{2}n^2 - \frac{1}{2}l(l+1)\right)\frac{a_0}{Z}$$
Figure 5: Pythonic s (l = 0), p (l = 1), and d (l = 2) spherical harmonics. Color scale from red
to white to blue corresponds to positive to zero to negative sign of wavefunction.
Hmmm, not very good, much higher in energy than true answer of −0.5 Hartree.
Let’s try adding two Gaussians together, with equal weight:
Normalize:
Note appearance of “overlap integral” $S = \langle\tilde{g}_1|\tilde{g}_{0.5}\rangle$, which shows how similar or different the $g_i$ are.
Re-evaluate energy
$$\langle b(r)|\hat{H}|b(r)\rangle = -0.306\ \text{Hartree}$$
Much closer to the truth!
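Those numbers are easy to reproduce. For a normalized Gaussian trial function the expectation value in atomic units reduces to ⟨E⟩(ξ) = 3ξ/2 − 2(2ξ/π)^{1/2}, whose minimum is −4/(3π) ≈ −0.424 Hartree. The sketch below (not from the notes; it simply minimizes this closed form with SciPy) finds that optimum, showing how far even the best single Gaussian remains from −0.5 Hartree and why combining Gaussians helps.

# Minimal sketch (not from the notes): variational energy of a single Gaussian
# trial function g(r) = exp(-xi r^2) for the hydrogen atom, in atomic units.
# For this trial function <E>(xi) = 3*xi/2 - 2*sqrt(2*xi/pi); the best single
# Gaussian gives about -0.424 Ha, well above the exact -0.5 Ha.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(xi):
    kinetic = 1.5 * xi                             # <T> for a normalized s Gaussian
    potential = -2.0 * np.sqrt(2.0 * xi / np.pi)   # -<1/r>
    return kinetic + potential

res = minimize_scalar(energy, bounds=(1e-3, 10.0), method="bounded")
print(f"best exponent xi = {res.x:.4f}, <E> = {res.fun:.4f} Ha")  # ~0.283, ~-0.424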
Could even weight the two Gaussians differently:
$$R_{10}(r) \approx f(r) = \sum_i c_i\,\phi_i(r)$$
$$\langle E\rangle = \frac{\langle f(r)|\hat{H}|f(r)\rangle}{\langle f(r)|f(r)\rangle}$$
Substitute and solve ∂⟨E⟩/∂c₁ = ∂⟨E⟩/∂c₂ = 0. Each gives a linear secular equation in c₁ and c₂:
$$\begin{pmatrix} H_{11} - E & H_{12} - S_{12}E \\ H_{12} - S_{12}E & H_{22} - E \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0$$
where $H_{ij} = \langle\phi_i|\hat{H}|\phi_j\rangle$ is a matrix element and $S_{ij} = \langle\phi_i|\phi_j\rangle$ is an overlap. If $S_{ij} = 0$ for $i \neq j$, the basis is
orthogonal and the problem simplifies. If $S_{ij} \approx 1$, the basis is redundant, not efficient!
Evaluate secular determinant
$$\begin{vmatrix} H_{11} - E & H_{12} - S_{12}E \\ H_{12} - S_{12}E & H_{22} - E \end{vmatrix} = 0$$
Gives a quadratic in E. Yields two solutions, which would be approximations to the 1s and 2s
orbital energies of hydrogen. Back substitution to get coefficients:

1s: $\langle E_{1s}\rangle \ge E_{1s,\mathrm{true}}$, with coefficients $c_1^{1s}$ and $c_2^{1s}$
Note we always get one solution for each basis function. Secular matrix grows as the square of the
number of basis functions, gets expensive to find roots.
If basis is not orthogonal, common to orthogonalize. Find the linear transformation that makes $S_{ij} =
\delta_{ij}$. Evaluate
$$c' = S^{1/2}c \quad\Rightarrow\quad H' = S^{-1/2}HS^{-1/2}$$
$$\begin{pmatrix} H'_{11} - E & H'_{12} \\ H'_{12} & H'_{22} - E \end{pmatrix}\begin{pmatrix} c'_1 \\ c'_2 \end{pmatrix} = 0, \qquad H'c' = Ec'$$
Secular equations reduce to standard linear eigenvalue problem. All the tricks of linear algebra
can be applied to find the eigenvalues (orbital energies) and eigenvectors (wavefunctions). Called
diagonalizing the matrix. Tricks can be used to find the lowest energy roots only.
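As a concrete illustration of this linear-algebra route, the sketch below (with placeholder 2×2 matrix elements, not values from the notes) solves Hc = ESc both directly as a generalized eigenvalue problem and after symmetric orthogonalization with S^(−1/2); the two give identical roots.

# Sketch (illustrative numbers only): solve the secular problem H c = E S c for a
# two-function, non-orthogonal basis, first as a generalized eigenvalue problem
# and then after symmetric (Loewdin) orthogonalization, H' = S^(-1/2) H S^(-1/2).
import numpy as np
from scipy.linalg import eigh

H = np.array([[-0.50, -0.40],    # H_ij = <phi_i|H|phi_j>, placeholder values
              [-0.40, -0.30]])
S = np.array([[1.00, 0.60],      # S_ij = <phi_i|phi_j>, placeholder values
              [0.60, 1.00]])

# Route 1: generalized eigenvalue problem H c = E S c
E, C = eigh(H, S)

# Route 2: orthogonalize, then an ordinary (standard) eigenvalue problem
w, U = np.linalg.eigh(S)
S_inv_half = U @ np.diag(w**-0.5) @ U.T
E2 = np.linalg.eigvalsh(S_inv_half @ H @ S_inv_half)

print(E)    # one root per basis function; the lowest is the variational estimate
print(E2)   # identical eigenvalues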
Same basic idea is used in virtually all calculations on atoms, molecules, . . . . Basis of semi-empirical
and first principles methods.
First summation over all electrons, second gets all electron pairs.
Solutions are many-dimensional functions of the coordinates of all the electrons. Cannot solve this
analytically, although approaches exist (eg quantum Monte Carlo) that can in principle get very
close. Thankfully, though, we can make approximations that work out really well. We’ll look at
three historically important ones.
So-called Hartree product. Can’t be right. It gives the probability of two electrons being in the
same place as some number > 0! Neglects electron correlation. How to apply?
1. Apply variational principle: What’s the best possible set of ψi ? We’ll say best are the set that
give the lowest expectation value of energy.
Have to solve this for all n electrons of an atom/molecule. The "Hartree potential" represents Coulomb
repulsion between electron i and all other electrons, averaged over the positions of those electrons.
Always positive. This is a mean field approximation. Note appearance of "one electron" energies
$\epsilon_i$, the kinetic energy plus repulsion of each electron with all others. Total energy is the sum of these $\epsilon_i$ corrected
to avoid overcounting repulsions:
$$\langle E\rangle = \sum_i\epsilon_i - \frac{1}{2}\sum_i\langle\psi_i|\hat{v}_i^{Hartree}|\psi_i\rangle$$
Presents an obvious difficulty. If we don’t know ψj ahead of time, how can we even construct
Hartree equations, let alone solve them? Hartree offered a numerical solution, in the 1930’s, called
the self-consistent field (SCF) approach:
1. Guess an initial set of ψi , one for each electron (he did this on a grid, and jumping ahead a
bit, allowed each ψi to represent two electrons)
2. Construct Hartree potential for each ψi
3. Solve the n differential equations for n new ψi
4. Compare new to old ψi
5. If the same within a desired tolerance, you are done!
6. If not, return to step 2, using new ψi , and repeat.
Hartree’s father did this by hand for all the atoms of the periodic table, tabulating wavefunctions
and energies for all the electrons in each. See Hartree, Douglas R. The Calculation of Atomic
Structures (1957). For instance, for He, he’d solve one equation, self-consistently, to get one ψ1 ,
and then combine to get Ψ(1, 2) = ψ₁(1)α(1)ψ₁(2)β(2). Tedious! Qualitatively great, quantitatively
not so hot. Mean-field approximation just not so hot.
Nonetheless, basic idea of representing many-body wavefunction in terms of “orbitals,” of setting
up orbital equations, and solving using a self-consistent procedure, remain today at the heart of
virtually all electronic structure calculations. Hurrah Hartree!
Note: It would be very cool to write a simple Python code to illustrate the SCF procedure
for two electrons in an atom. Could be done on a grid or in a basis. See eg
http://www.users.csbsju.edu/~frioux/scf/scf-he.pdf.
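In that spirit, here is a minimal, hedged sketch of the idea for He in a small basis of s-type Gaussians on the nucleus. The exponents are arbitrary illustrative choices, the integrals are the standard closed forms for concentric s Gaussians, and both electrons share one orbital, so each sees the averaged Coulomb repulsion of the other. Iterating to self-consistency should land within a few hundredths of a Hartree of the Hartree-Fock limit, about −2.86 Ha.

# Minimal SCF sketch for the He atom in a basis of s-type Gaussians centered on
# the nucleus (exponents below are arbitrary illustrative choices, not from the
# notes). Integral formulas are the standard closed forms for concentric s
# Gaussians. The loop is the guess -> build potential -> solve -> compare cycle.
import numpy as np
from scipy.linalg import eigh

Z = 2.0
alpha = np.array([0.30, 1.15, 4.50, 20.0])     # illustrative exponents
n = len(alpha)

ab = alpha[:, None] + alpha[None, :]
S = (np.pi / ab) ** 1.5                        # overlap
T = 3.0 * np.outer(alpha, alpha) / ab * S      # kinetic energy
V = -2.0 * np.pi * Z / ab                      # electron-nucleus attraction
h = T + V

# two-electron integrals (pq|rs) for concentric s Gaussians
eri = 2.0 * np.pi ** 2.5 / (ab[:, :, None, None] * ab[None, None, :, :]
                            * np.sqrt(ab[:, :, None, None] + ab[None, None, :, :]))

D = np.zeros((n, n))                           # initial guess: empty density
E_old = 0.0
for cycle in range(50):
    J = np.einsum("pqrs,rs->pq", eri, D)       # Coulomb (Hartree) term
    K = np.einsum("prqs,rs->pq", eri, D)       # exchange term
    F = h + J - 0.5 * K
    eps, C = eigh(F, S)                        # generalized eigenvalue problem
    c = C[:, 0]                                # lowest orbital, doubly occupied
    D = 2.0 * np.outer(c, c)
    E = 0.5 * np.einsum("pq,pq->", D, h + F)
    if abs(E - E_old) < 1e-8:                  # self-consistency reached
        break
    E_old = E

print(f"converged in {cycle} cycles, E = {E:.4f} Ha")  # ~ -2.85, near the HF limit of -2.86 Ha

Note that for a single doubly occupied orbital the exchange term cancels exactly half of the Coulomb term, so this "Hartree-Fock" loop and the simpler Hartree picture described above give the same answer here.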
Here the coordinate x includes both the position and the spin (up or down, α or β) of the electron.
Sorry Hartree. Can fix for He by writing
Hey, gentle reader, check, does this work? Yes! Exchanging the coordinates changes the sign but
keeps everything else the same. Normalizing is easy if we take ψ1 to be normalized and recall that
spin functions are orthogonal:
$$\Psi(x_1,x_2) = \frac{1}{\sqrt{2}}\,\psi_1(r_1)\psi_1(r_2)\left(\alpha(1)\beta(2) - \beta(1)\alpha(2)\right)$$
Note it is impossible to construct an antisymmetric wavefunction in which both electrons have the
same spatial function and the same spin. Two electrons cannot have the same space and spin
variables.
$$\begin{vmatrix}\psi_1(1)\alpha(1) & \psi_2(1)\beta(1)\\ \psi_1(2)\alpha(2) & \psi_2(2)\beta(2)\end{vmatrix} = \psi_1(1)\psi_2(2)\alpha(1)\beta(2) - \psi_2(1)\psi_1(2)\beta(1)\alpha(2)$$
Antisymmetric? What happens when the two electrons have the same spatial coordinate?
$$\begin{vmatrix}\psi_1(1)\alpha(1) & \psi_2(1)\alpha(1)\\ \psi_1(2)\alpha(2) & \psi_2(2)\alpha(2)\end{vmatrix} = \left(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2)\right)\alpha(1)\alpha(2)$$
Yikes! Slater determinant wavefunction results in appearance of the “exchange” operator, which
turns a ψi into a ψj . Exchange operator is not a simple multiplication. Must be solved self-
consistently, and is much harder to do than the simple Hartree expression.
Slight simplification possible, noting that i = j terms cancel out and slightly redefining operators:
$$\left\{\hat{h} + \hat{v}^{Hartree} + \hat{v}_i^{exchange}\right\}\psi_i(r_1) = \epsilon_i\,\psi_i(r_1) \qquad (36)$$
$$\hat{v}^{Hartree}(r_1) = 2\sum_j\int\frac{|\psi_j(r_2)|^2}{|r_2 - r_1|}\,dr_2 \qquad (37)$$
$$\hat{v}_i^{exchange}(r_1)\,\psi_i(r_1) = -\sum_j\psi_j(r_1)\int\frac{\psi_j(r_2)\,\psi_i(r_2)}{|r_2 - r_1|}\,dr_2 \qquad (38)$$
Now “Hartree potential” is the same for all orbitals/electrons. We can define the “charge density”
to be
$$\rho(r) = 2\sum_j|\psi_j(r)|^2$$
(units of charge/unit volume, multiply by e to get a charge). The Hartree potential can be written
$$\hat{v}^{Hartree}(r_1) = \int\frac{\rho(r_2)}{|r_2 - r_1|}\,dr_2$$
4.5 Hartree-Fock-Slater
In 1951 John Slater introduced an approximation to the Hartree-Fock model that turned out to
anticipate a whole new approach to solving the electronic structure problem, called density functional
theory.
Rewrite exchange part as (and shorten “exchange” to “x”):
$$\hat{v}_i^{x}(r_1)\,\psi_i(r_1) = -\sum_j\psi_j(r_1)\int\frac{\psi_j(r_2)\,\psi_i(r_2)}{|r_2 - r_1|}\,dr_2 \qquad (39)$$
$$= -\sum_j\psi_j(r_1)\int\frac{\psi_j(r_2)\,\psi_i(r_2)}{|r_2 - r_1|}\,dr_2\cdot\frac{\psi_i(r_1)\,\psi_i(r_1)}{\psi_i(r_1)\,\psi_i(r_1)} \qquad (40)$$
$$= -\left[\int\frac{\rho_i^x(r_1;r_2)}{|r_2 - r_1|}\,dr_2\right]\psi_i(r_1) \qquad (41)$$
$$\rho_i^x(r_1;r_2) = \sum_j\frac{\psi_i(r_1)\,\psi_j(r_2)\,\psi_j(r_1)\,\psi_i(r_2)}{\psi_i(r_1)\,\psi_i(r_1)} \qquad (42)$$
This looks like the Coulomb expression, but the density is different for each orbital i. The
"exchange density" has units of charge density and enters in the same way as the electron density,
but with a minus sign. Suggests that exchange can be thought of as an electron "hole" around an
electron. This exchange density has some special properties:
1. Every electron at any position r1 has an exchange hole around it equal to one electron of the
same spin as itself:
$$\int\rho_i^x(r_1;r_2)\,dr_2 = 1$$
Thus, the Coulomb repulsion felt by an electron is diminished by an exchange hole that follows
the electron around, exactly canceling out the charge at its current location. It’s not necessarily
spherical and is not the same for all orbitals, but the fact that it has these general properties gives
hope that it can be approximated somehow.
Hey, I have an idea! (Actually, Slater had an idea.) What if we had a homogeneous (density
the same everywhere) gas of electrons, like electrons of a given density ρ in an infinite box? By
symmetry the exchange hole would be spherical, and if it must integrate to 1, then it must have a
radius (factor of 2 comes from fact we are only including electrons of the same spin):
$$R_{hole} = \left(\frac{3}{4\pi\,\rho/2}\right)^{1/3}$$
$$\hat{v}^x = -\frac{1}{2}\int_{sphere}\frac{\rho}{r}\,dr = -\left(\frac{9\pi\rho}{4}\right)^{1/3}$$
Now, let’s assume that an electron in a real system experiences an exchange hole potential at any
point exactly like that of a homogeneous electron gas of the same density at that point. This is the
basis of the Hartree-Fock-Slater model:
$$\hat{v}^{x,HFS}(r_1) = -\frac{3}{2}\left(\frac{3\rho(r_1)}{\pi}\right)^{1/3} \equiv -C\,\rho(r_1)^{1/3}$$
Some ambiguity as to the right value of the constant C, so sometimes just taken as a parameter.
Can now write the Hartree-Fock-Slater equation:
$$\left\{\hat{h} + \hat{v}^{Coulomb}[\rho] + \hat{v}^{x,HFS}[\rho]\right\}\psi_i(r) = \epsilon_i\,\psi_i(r)$$
This is much simpler to solve than Hartree-Fock equation, because the left hand side is the same
for all electrons given a total density ρ(r). Still must be solved iteratively, using the self-consistent
field.
4.5.1 Notes
1. Exchange potential scales with the total number of electrons around: more electrons (like near
a nucleus) means a more compact, denser exchange hole, more electron exchange “screening,”
a greater decrease in potential energy. Further from nucleus, more diffuse exchange hole, less
screening.
2. Screening is not exact, though; it does not exactly cancel self-interaction. Clearest in the one-
electron case: the Coulomb and exchange potentials should then exactly cancel, which they evidently
do not! The HFS energy of an H atom is not exactly −0.5 au!
4.6 Implementations
4.6.1 GAMESS
Hartree-Fock method always paired with basis set methods and implemented in the codes available
at http://webmo.net. Example GAMESS input for Hartree-Fock Ar:
$CONTRL SCFTYP=RHF RUNTYP=ENERGY ISPHER=1
ICHARG=0 MULT=1 COORD=CART $END
$BASIS GBASIS=CCT $END
$DATA
Ar
OH 1
4.7 Performance
One metric is the ability to predict ionization energies.
Koopmans' theorem: The negative of the energy of an occupied orbital (−εᵢ) approximates
the energy to extract an electron from that orbital, i.e. to ionize the system. The
energy of a virtual orbital approximates the energy to add an additional electron to a
system, i.e. the electron affinity. Assumes no relaxation of the orbitals.
4.8 Correlation
If solved to reasonable precision, both the Hartree-Fock and Hartree-Fock-Slater models work pretty
darn well for things like the shapes of molecules, structures of solids, charge distributions, vibrational
frequencies, . . . . Don’t work so well for computing things that involve making and breaking bonds,
like a reaction energy or an activation energy.
Why? All the models discussed here neglect electron correlation, the fact that the potential felt by
an electron is a function of the instantaneous positions of all the other electrons. The contribution of
correlation to absolute energies is not big by proportion, but it is very important to energy differences.
Any “orbital” model cannot capture correlation. It can be introduced systematically and exactly
into H-F models (at great computational expense) and systematically and approximately into DFT
models (at much more modest expense). Hence the popularity of DFT!
Nuclei are much heavier than electrons and move much more slowly. Assume nuclei are fixed in space
("clamped") and electrons move in the static field of those nuclei. Equivalent to assuming that nuclear
kinetic energy is decoupled from electron dynamics. Only change is that
$$\hat{h} = -\frac{1}{2}\nabla^2 - \sum_\alpha\frac{Z_\alpha}{|r - R_\alpha|}$$
Schrödinger equation becomes parametric in nuclear positions; solutions E(Rα, Rβ) define a potential energy surface (PES).
$$E_{PES}(R_\alpha,R_\beta) = E_{Schr} + \frac{1}{2}\sum_{\alpha\neq\beta}\frac{Z_\alpha Z_\beta}{|R_\beta - R_\alpha|}$$
Standard models of today all treat the one-electron and Coulomb pieces exactly and treat the
electron-electron interactions at various levels of approximation.
Method (examples)                              v_exchange         v_correlation      Use
Wave function theory (WFT)
  Hartree                                      self-interaction   neglect            historic
  Hartree-Fock                                 exact              neglect            superseded
  MPn, CC                                      exact              perturbative       state of the art
  CI                                           exact              variational        specialized
Density functional theory (DFT)
  Hartree-Fock-Slater                          [ρ^(4/3)]          neglect            historic
  Local density approximation (LDA)            [ρ^(4/3)]          [ρ]                general purpose, solids
  Generalized gradient approximation (GGA)     [ρ, ∇ρ]            [ρ, ∇ρ]            general purpose, solids/surfaces
  "Improved" GGA (RPBE, BEEF, Mxx)             [ρ, ∇ρ]            [ρ, ∇ρ]            general purpose
  Hybrid (B3LYP, PBE0, HSE06)                  ≈ exact            [ρ, ∇ρ]            general purpose molecules; specialty solids/surfaces
  Meta-GGA                                     [ρ, ∇ρ, ∇²ρ]       [ρ, ∇ρ, ∇²ρ]       developing
The choice of the electronic structure model is the most fundamental approximation in applying
these methods. Determined from experience and need.
Specification in GAMESS (https://www.msg.chem.iastate.edu/GAMESS/GAMESS.html) is a bit ar-
cane. Default is Hartree-Fock. To specify DFT model, use
$CONTRL DFTTYP=xxx $END, with xxx one of SLATER (HFS), SVWN (LDA), PBE (GGA), B3LYP (hybrid), M06 (Minnesota), . . .
Often atom-centered. You’ll see the term “linear combination of atomic orbitals,” LCAO.
Depending on implementation, pieces of fˆ can often be computed just once and reused, eg one-
electron integrals hφµ |ĥ|φν i.
Algorithm:
1. Put your atoms somewhere in space
2. Select a basis
3. Pre-compute what you can
4. Guess some coefficients/density/density matrix
5. Construct secular matrix elements
6. Solve secular matrix equation for C and ε
7. Construct and compare new density to old
8. Update density and repeat, or . . .
9. . . . if less than tolerance, all done!
ALWAYS check to be sure result has converged, to the state you want!
5.6 Examples
5.6.1 Dissociating H2+ example
Compute energy vs distance. Should dissociate to H atom and H+ ion.
Oops, come on, LDA! Illustrates self-interaction problem in LDA. Electron is too eager to be diffuse,
spreads out over both atoms when it should localize on one.
Equilibrium distance? How’s the dissociation state? Bond energy? Truth is about −0.075 Hartree.
LDA has advantage of cancellation of errors between exchange and correlation errors. A good thing!
$$T \approx \frac{p^2}{2m_0} - \frac{p^4}{8m_0^3c^2}$$
Reduces to non-relativistic result when c → ∞. Electrons near core move at speeds close to c,
second term becomes non-negligible and diminishes their energy. Most important for s states that
penetrate closest to nucleus; they shield nucleus better and other valence states rise up in energy.
Electron spin and orbital magnetic moments also couple when l > 0, leads to spin-orbit coupling
that splits p, d, . . . states into j = l ± s states.
Darwin correction corrects s orbitals for electron and nucleus being at the same point; comes from
solution of full Dirac relativistic equation for the atom.
Relativistic effects typically incorporated implicitly, by including in model for core electrons and
thus capturing their effect on the valence. Spin-orbit, if necessary, added after the fact.
5.9.2 Implementations
Non-relativistic and relativistic effective core potentials (ECPs) available for many elements. These
specify the potential felt by the valence electrons due to the core in terms of radial potential functions
and angular projection operators. Typically these have to be combined with basis functions designed
to work with them.
Most common are Hay-Wadt (LANL) and Stevens-Basch-Krause (SBK). Other more modern ones
also available, like Stuttgart.
Essential to all plane-wave codes, like Vasp, but implemented differently. Will touch on later in
class.
• Moments of charge density (dipole, quadrupole), useful for thinking about molecule-molecule
interactions. (Only exactly defined for neutrals!)
• Electrostatic potential, or Coulomb potential created by electrons and nuclei. More refined
way of thinking about "hot spots" on a molecule. Commonly used to parameterize classical
forcefields, by seeking set of atom-centered charges that reproduce calculated electrostatic
potential, as is done with CHELPG. Not uniquely defined.
• Population analyses, which attempt to distribute electrons to individual atoms and possibly
bonds based on decomposition of molecular orbitals. Chemically it is intuitively nice to assign
charge to individual atoms. There is no single “right” way to do this. . . the “charge” on an
atom in a molecule is not uniquely defined! Consider an occupied molecular orbital ψ made
up of two basis functions on two different atoms, α and β:
ψ = cα φα + cβ φβ (44)
$\langle\psi|\psi\rangle = c_\alpha^2 + c_\beta^2 + 2c_\alpha c_\beta\langle\phi_\alpha|\phi_\beta\rangle$ (45)
In Mulliken analysis, $c_\alpha^2$ is the fraction of ψ assigned to the atom of α and $c_\beta^2$ the fraction assigned to the atom
of β. The remainder is the "overlap" population, which is split evenly between the two. Summing over
all occupied orbitals and subtracting nuclear charges gives gross atomic charges (a small numerical sketch follows this list).
In Löwdin analysis, basis functions are pre-orthogonalized, so last term vanishes.
Both approaches very sensitive to choice of basis set. Only sensible to compare within a common
model type across molecules.
• Localized orbitals is notion of creating linear combinations of ψ that satisfy some constraint
for being compact. Leads to orbitals that are more naturally “bonding.”
• Natural orbitals are a rigorous scheme for orthogonalizing and assigning charge. Based on recognition that there is a set of orthogonal orbitals that optimally describe the density. Localizing these gives natural bonding orbitals. See 06Weinhold.pdf.
• Bader analysis is another method, based on a geometric analysis of the total charge density. See Bader, R. F. W. Atoms in Molecules: A Quantum Theory; Oxford University Press: Oxford, 1990.
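Here is the small numerical sketch of the population bookkeeping promised above (placeholder numbers, one basis function per atom): Mulliken gross populations are the diagonal of PS, and Löwdin populations the diagonal of S^(1/2)PS^(1/2), where P is the density matrix.

# Sketch: Mulliken and Loewdin populations for a minimal two-function example
# (one basis function per atom; all numbers are illustrative placeholders).
import numpy as np

S = np.array([[1.0, 0.4],
              [0.4, 1.0]])                # AO overlap matrix (placeholder)
c = np.array([0.62, 0.55])                # one doubly occupied MO: psi = c1*phi1 + c2*phi2
c = c / np.sqrt(c @ S @ c)                # normalize <psi|psi> = 1
P = 2.0 * np.outer(c, c)                  # density matrix for the 2 electrons
Z = np.array([1.0, 1.0])                  # nuclear charges of the two atoms

mulliken = np.diag(P @ S)                 # gross electron populations, sum = 2
w, U = np.linalg.eigh(S)
S_half = U @ np.diag(np.sqrt(w)) @ U.T    # S^(1/2), pre-orthogonalized basis
lowdin = np.diag(S_half @ P @ S_half)

print("Mulliken charges:", Z - mulliken)
print("Loewdin charges: ", Z - lowdin)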
$$E_{PES}(R_\alpha,R_\beta,\ldots) = E_{elec} + \sum_{\alpha=1}^{N}\sum_{\beta=\alpha+1}^{N}\frac{Z_\alpha Z_\beta e^2}{R_{\alpha\beta}} \qquad (46)$$
[Figure: Z-matrix internal coordinates for FCH2CH2F — bond lengths r1–r3, bond angles a1–a2, and dihedral d1 — shown for two conformations.]
6.1.1 Cartesian
Computationally straightforward but don’t correspond with our physical notion of bonds, bends,
etc. Easiest to get out of a piece of software. A molecule has 3N − 6 internal degrees of freedom
(3N − 5 if linear), but Cartesians specify 3N . The extra values correspond to the location of the
center of mass and molecular orientation. Codes will typically center and reorient the Cartesians.
In GAMESS, would specify Cartesian coordinates for FCH2 CH2 F like this:
$CONTRL COORD=CART $END
$DATA
FCH2CH2F drag calculation
C1
C 6.0 -3.76764 0.33879 0.03727
C 6.0 -2.35246 0.34495 0.03689
F 9.0 -4.72277 0.58147 -1.18012
F 9.0 -1.59909 -0.68487 -0.83662
H 1.0 -4.04387 1.08375 0.75395
H 1.0 -3.92958 -0.71060 0.16941
H 1.0 -2.03786 0.18875 1.04760
H 1.0 -2.09983 1.28759 -0.40187
$END
r1=1.5386
r2=1.39462
r3=1.11456
r4=1.12
A1=109.54214
A2=111.
A3=110.
D1=120.
D2=-120.5
D3=50.
$END
Particularly convenient when you’d like to “scan” over the value of some coordinate. Variable can
be applied to more than one independent coordinate, if the molecule has symmetry.
[Figure: E_PES (kJ/mol) vs. dihedral d1 (0–360°) for FCH2CH2F. Global and local minima satisfy g = dE/dq = 0 and H = d²E/dq² > 0; saddle points (transition states) satisfy g = dE/dq = 0 and H = d²E/dq² < 0.]
Note 3-fold periodicity as expected for rotation about a CC single bond. Note too there are some
special points:
• Minima are places where energy bottoms out. More formally, first derivative of energy, or
slope, or “gradient” g = 0, and second derivative, or curvature, or “Hessian” H > 0. These are
the locally stable conformations of the molecule. Note that lowest energy in this case is not
trans, but rather gauche conformations. Are you surprised?
• Saddle points are places where the energy is maximized along the connecting path. Physically, these correspond to "transition states" connecting low-energy conformations. Gradient g = 0, but curvature H < 0.
$$g = \begin{pmatrix}\dfrac{\partial E}{\partial q_1}\\ \vdots\\ \dfrac{\partial E}{\partial q_{3N}}\end{pmatrix}, \qquad H = \begin{pmatrix}\dfrac{\partial^2 E}{\partial q_1^2} & \cdots & \dfrac{\partial^2 E}{\partial q_1\partial q_{3N}}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial^2 E}{\partial q_{3N}\partial q_1} & \cdots & \dfrac{\partial^2 E}{\partial q_{3N}^2}\end{pmatrix}$$
• gradient is vector tangent to PES. The force on an object is F = −g, so the gradients are
often called the forces. Where gradient (slope) is negative, force is positive, and vice versa.
Force always pushes system toward nearest minimum. If the potential is harmonic, then the
force constant k = H, so the Hessian is also called the “force constant.”
• Hessian matrix is real and symmetric. Diagonalization gives eigenvalues and eigenvectors.
Eigenvectors give “natural” directions along PES (physically, the harmonic vibrational modes),
and eigenvalues indicate curvature in that direction.
• Minimum on multidimensional PES has gradient vector g = 0 and all positive Hessian eigen-
values.
• First-order saddle point, or transition state, has g = 0 and one and only one negative Hessian
eigenvalue. (Physically, one unique direction that leads downhill in energy.) Must correspond
to lowest-energy point connecting two minima.
• Minimum energy pathway (MEP) or intrinsic reaction coordinate (IRC) is steepest descent
pathway (in mass-weighted coordinates) from saddle point to nearby minima. Path a marble
with no inertia (infinite friction) would follow.
• Higher-order saddle points have g = 0 and more than one negative Hessian eigenvalue. They
can always be relaxed to a lower-energy first-order saddle point. These generally do not have chemical
significance.
In computational chemistry/materials science, it is frequently our job to identify the critical points
(minima and transition states) on a PES. In liquids, the PES is much flatter and more lightly corrugated.
Statistical mechanics becomes more important.
Each distinct electronic state defines its own PES. Remember that there are multiple PES’s for any
atom configuration, corresponding to different electronic states. Sometimes these states can interact,
intersect, giving avoided crossings, conical intersections. Lead to more complicated dynamical
behavior.
6.3.2 Hessian
In some electronic structure models can be computed analytically. More commonly, determined
from numerical differentiation of gradients. Implementations typically assume that system is at
minimum.
h must be small enough to stay in the harmonic region, but big enough to avoid numerical noise
swamping the gradients.
For a molecule with N atoms, to construct complete 3N × 3N Hessian, have to evaluate gradients
6N times for two-sided differencing. Each pair of displacements completes one row of Hessian.
Obviously tends to be quite expensive.
To get better precision and accuracy, could calculate more than two displacements, and could fit to
a more complicated function than a harmonic potential.
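A sketch of the two-sided differencing just described, written for a generic gradient function (the quadratic test potential and its analytic gradient are purely illustrative):

# Sketch: build a Hessian by central (two-sided) finite differences of the
# gradient, H_ij ~ [g_j(q + h e_i) - g_j(q - h e_i)] / (2h). Each displaced pair
# of gradient evaluations fills one row; the result is symmetrized at the end.
import numpy as np

def numerical_hessian(grad, q0, h=1e-3):
    q0 = np.asarray(q0, dtype=float)
    n = q0.size
    H = np.zeros((n, n))
    for i in range(n):
        dq = np.zeros(n)
        dq[i] = h
        H[i] = (grad(q0 + dq) - grad(q0 - dq)) / (2.0 * h)
    return 0.5 * (H + H.T)        # enforce symmetry against numerical noise

# toy example: E = x^2 + 2*y^2 + 0.5*x*y, with its analytic gradient
grad = lambda q: np.array([2*q[0] + 0.5*q[1], 4*q[1] + 0.5*q[0]])
print(numerical_hessian(grad, [0.0, 0.0]))   # ~[[2, 0.5], [0.5, 4]]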
$$R' = R - H^{-1}(R)\,g(R)$$
Typically do not know Hessian and it is expensive to calculate. Make an initial guess, then update
Hessian with gradient information from each geometry step. “Learning” PES as we go. Generally
converges very rapidly near minima, where surface is not too anharmonic.
• Rational function optimization is similar in spirit, also constructs Hessian, but uses more
sophisticated (than quadratic) guess form of PES to update positions.
• Direct inversion in the iterative subspace (DIIS) uses sizes of QNR steps as estimates of error
and constructs new step from linear combination of previous that minimizes error inferred
from previous steps:
$$\mathrm{err}(R) = \sum_i c_i\,H_i^{-1}g_i$$
Generally very efficient in region of minimum. Algorithm can misbehave away from minima, possibly
even converging to nearby saddle points, so often started with conjugate gradient steps.
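In practice one rarely codes these updates by hand; a library quasi-Newton minimizer such as SciPy's BFGS builds up the (inverse) Hessian from successive gradients in just the way described above. A sketch on a toy two-coordinate PES (the potential is illustrative, not from the notes):

# Sketch: quasi-Newton (BFGS) minimization of a toy two-coordinate PES. The
# optimizer starts from a guessed "geometry" and learns the Hessian from
# gradient information accumulated along the way.
import numpy as np
from scipy.optimize import minimize

def pes(q):                        # illustrative anharmonic 2-D potential
    x, y = q
    return (x - 1.0)**2 + 2.0*(y + 0.5)**2 + 0.3*x*y + 0.1*x**4

def grad(q):
    x, y = q
    return np.array([2*(x - 1.0) + 0.3*y + 0.4*x**3,
                     4*(y + 0.5) + 0.3*x])

res = minimize(pes, x0=[2.0, 2.0], jac=grad, method="BFGS")
print(res.x, res.fun, res.nit)     # minimum "geometry", energy, number of steps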
$$\hat{H} = -\frac{\hbar^2}{2}\sum_i\frac{1}{m_i}\frac{\partial^2}{\partial q_i^2} + \frac{1}{2}\sum_{i,j}H_{ij}\,q_iq_j$$
or in mass-weighted coordinates, $\xi_i = \sqrt{m_i}\,q_i$:
$$\hat{H} = -\frac{\hbar^2}{2}\sum_i\frac{\partial^2}{\partial\xi_i^2} + \frac{1}{2}\sum_{i,j}\tilde{H}_{ij}\,\xi_i\xi_j, \qquad \tilde{H}_{ij} = \frac{1}{\sqrt{m_im_j}}H_{ij}$$
From eigenvalues κᵢ and eigenvectors sᵢ ("normal modes") of the mass-weighted Hessian, can transform
into 3N one-dimensional problems:
$$\hat{H}_i = -\frac{\hbar^2}{2}\frac{d^2}{ds_i^2} + \frac{1}{2}\kappa_i s_i^2$$
This is one-dimensional harmonic oscillator Hamiltonian, solutions well known.
$$V(x) = \frac{1}{2}\kappa x^2, \qquad -\infty < x < \infty$$
$$\psi_v(x) = N_v\,H_v(x/\alpha)\,e^{-x^2/2\alpha^2}, \quad v = 0,1,2,\ldots$$
$$\alpha = (\hbar^2/\kappa)^{1/4}, \qquad N_v = (2^v\,v!\,\alpha\sqrt{\pi})^{-1/2}$$
Hermite polynomials
H0 (y) = 1
H1 (y) = 2y
H2(y) = 4y² − 2
Hn+1 (y) = 2yHn (y) − 2nHn−1 (y)
$$\nu = \frac{1}{2\pi}\sqrt{\kappa}$$
$$E_v = \left(v + \frac{1}{2}\right)h\nu, \quad v = 0,1,2,\ldots$$
Do this in 3N Cartesian space, so 6 (or 5) of the normal modes correspond to translations and
rotations of the molecule. If calculation is exact, these will have κi = 0. Numerical errors may
make them somewhat non-zero. If necessary, these can be projected out by transforming Hessian
to internal and back to Cartesian coordinates.
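Putting the last few steps together, a sketch of the usual recipe: mass-weight the Cartesian Hessian, diagonalize, and convert the eigenvalues to wavenumbers. The force constant below is a rough, illustrative H2-like value; masses are entered in electron masses so everything stays in atomic units.

# Sketch: harmonic frequencies from a Cartesian Hessian. Mass-weight, then
# diagonalize; eigenvalues kappa give omega = sqrt(kappa), converted here from
# atomic units to cm^-1. Example: a 1-D "diatomic" with an H2-like force
# constant (~0.37 Hartree/bohr^2, illustrative).
import numpy as np

HARTREE_TO_CM = 219474.63          # 1 Hartree expressed in cm^-1
AMU_TO_ME = 1822.888               # atomic mass unit in electron masses

k = 0.37                                        # force constant, Hartree/bohr^2
H = k * np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])                # Cartesian Hessian (1-D, 2 atoms)
masses = np.array([1.008, 1.008]) * AMU_TO_ME   # two H atoms

Mhalf = 1.0 / np.sqrt(masses)
H_mw = H * np.outer(Mhalf, Mhalf)               # tilde-H_ij = H_ij / sqrt(m_i m_j)
kappa, modes = np.linalg.eigh(H_mw)

freqs = np.sign(kappa) * np.sqrt(np.abs(kappa)) * HARTREE_TO_CM
print(freqs)   # one ~0 translational mode and one stretch near 4400 cm^-1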
Note it is impossible for molecule to just sit at q = 0. Nuclei are always vibrating about xeq . Gives
zero point energy
$$ZPE = \frac{1}{2}\sum_i h\nu_i$$
where sum runs over all charged particles and qi is each charge. Dipole moment changes as molecule
vibrates. In direction ξ, can write
$$\hat{\mu}(\xi(t)) = \mu(0) + \left.\frac{d\mu}{d\xi}\right|_{\xi=0}\xi(t) + \ldots$$
First integral vanishes for i 6= f . Second integral provides gross selection rule that intensity of tran-
sition proportional to the dynamic dipole moment along vibrational normal mode ξ and particular
selection rule that intensity of transition is zero unless f = i ± 1. Latter comes from nature of Hermite polynomials. At normal temperatures, i = 0, and the only observable vibrational transition is 0 → 1.
C
C 1 1.5000000
F 2 1.4000000 1 109.5421400
H 2 1.1000000 1 109.5421400 3 120.0000000 0
H 2 1.1000000 1 109.5421400 3 -120.0000000 0
F 1 1.5000000 2 109.5421400 3 0.0000000 0
H 1 1.1000000 2 109.5421400 6 120.0000000 0
H 1 1.1000000 2 109.5421400 6 -120.0000000 0
$END
See results in ../Labs/Gamess/FCH2CH2F/SYMMETRY.
−277.937 777 074 9 au → −277.925 347 844 1 au; ∆E = 0.34 eV
...
D4=120.
...
$END
See results in ../Labs/Gamess/FCH2CH2F/SCAN.
A quasi-NR optimization started at one of the approximate TS’s will usually converge to the exact
TS.
$CONTRL SCFTYP=RHF DFTTYP=PBE RUNTYP=OPTIMIZE COORD=ZMT NZVAR=18 ISPHER=1 $END
$STATPT METHOD=NR $END
$BASIS GBASIS=PCseg-1 $END
$DATA
FCH2CH2F zmatrix optimization near saddle point, no hessian
C1
C
C 1 1.5256029
−277.937 777 074 9 au → −277.933 307 329 4 au; ∆E‡ = 0.12 eV; ν = 154i cm⁻¹
Coordinate dragging can fail when the reaction coordinate is non-linearly related to the multiple
internal coordinates. Plus, it is relatively expensive, as it involves a lot of optimizations.
−185.630 793 7 au → −185.585 420 9 au → −185.625 487 8 au; ∆E‡ = 1.23 eV, ∆E = 0.14 eV
[Figure: MPn adds configurations of a given perturbation order; CI adds all configurations of a given excitation type.]
• static correlation are electron correlation effects that arise from the restrictive form of the H-F
wavefunction
• dynamic correlation is like it sounds, the “dance” of all electrons about one another
• configuration interaction is a variational approach to adding in dynamical correlation by com-
bining H-F determinants. Difficult to apply, rarely used any more.
• size consistency is the property that the energy model scales properly with the system size.
CI lacks this.
• perturbation theory (MPn) is non-variational, size-consistent, and user-friendly approach
“Model chemistry” some linear combination of these, often calibrated against experimental data.
“G2” most venerable:
                 Hartree-Fock   MP2         MP4   QCISD(T)
6-31G(d)         ZPE            Structure
6-311G(d)                       1           2     3
6-311+G(d,p)                    4           5
6-311G(2df,p)                   6           7
6-311+G(3df,p)                  8

QCISD(T)/6-311+G(3df,p) ≈ 2 + (3 − 2) + (5 − 2) + (7 − 2) + (8 − 1) − (4 − 1) − (6 − 1)
Quality assessed using, eg, mean absolute deviation from some reliable data set. Many elaborations
on this same idea in the literature.
where νi are the harmonic vibrational frequencies, obtained from a vibrational frequency analysis.
E₀ is the minimum physically meaningful energy of the molecule.
Defines an exponentially decaying probability function for a state to be occupied at some tempera-
ture. Temperature is the characteristic of a system following this distribution.
[Figure: occupation probability vs. energy (eV).]
(columns: general Q(N,V,T); distinguishable elements, Q = q^N; indistinguishable elements, Q = q^N/N!)

Helmholtz energy (A = U − TS):  $-\dfrac{\ln Q}{\beta}$  |  $-\dfrac{N\ln q}{\beta}$  |  $-\dfrac{N}{\beta}\left(\ln\dfrac{q}{N}+1\right)$

Internal energy (U):  $-\left(\dfrac{\partial\ln Q}{\partial\beta}\right)_{N,V}$  |  $-N\left(\dfrac{\partial\ln q}{\partial\beta}\right)_V$  |  $-N\left(\dfrac{\partial\ln q}{\partial\beta}\right)_V$

Pressure (P):  $\dfrac{1}{\beta}\left(\dfrac{\partial\ln Q}{\partial V}\right)_{N,\beta}$  |  $\dfrac{N}{\beta}\left(\dfrac{\partial\ln q}{\partial V}\right)_\beta$  |  $\dfrac{N}{\beta}\left(\dfrac{\partial\ln q}{\partial V}\right)_\beta$
NOTE! All energies are referenced to their values at 0 K. Enthalpy H = U + PV, Gibbs energy
G = A + PV.
[Figure: three limiting cases — ideal gas: no intermolecular interactions, sum over the states of an individual molecule; liquid: all inter/intramolecular states needed, MC/MD required to get thermodynamic quantities; solid: no rotation or translation, sum over vibrational modes.]
8.2.1 Separability
In principle, need to sum over all the types of energy states (translational, rotational, vibrational,
. . . ) of every molecule. Seemingly impossible task. One simplification is if we can write the energy as
a sum of energies of the individual elements (molecules) of the system:
$$E_j = \epsilon_j(1) + \epsilon_j(2) + \ldots + \epsilon_j(N) \qquad (57)$$
$$Q(N,V,T) = \sum_j e^{-E_j\beta} \qquad (58)$$
$$= \sum_j e^{-(\epsilon_j(1) + \epsilon_j(2) + \ldots + \epsilon_j(N))\beta} \qquad (59)$$
If molecules/elements of system can be distinguished from each other (like atoms in a fixed lattice),
expression can be factored:
$$Q(N,V,T) = \left(\sum_j e^{-\epsilon_j(1)\beta}\right)\cdots\left(\sum_j e^{-\epsilon_j(N)\beta}\right) \qquad (60)$$
If not distinguishable (like molecules in a liquid or gas, or electrons in a solid), problem is dif-
ficult, because identical arrangements of energy amongst elements should only be counted once.
Approximate solution, good almost all the time:
$$Q(N,V,T) = q^N/N! \qquad (64)$$
Sidebar: “Correct” factoring depends on whether individual elements are fermions or bosons, leads
to funny things like superconductivity and superfluidity.
This q(V, T ) is the molecular partition function, and is calculated by summing over the individual
energy states of a single molecule (starting at E0 ).
Thermal wavelength Λ depends only on the molecular mass and is of the order of the box dimensions at
which quantization is evident. Typically a tiny number (eg 1.7 × 10⁻¹¹ m for Ar in a 1 L volume at
298 K). qtrans is thus enormous: lots of translational freedom. q depends on volume, which introduces
volume/concentration/pressure dependence into thermo functions. Conventional to define a standard
state volume V◦, or corresponding pressure.
Rotational DOFs: rigid-rotor model. Linear molecule:
$$q_{rot} = \frac{1}{\sigma}\sum_{l=0}^{\infty}(2l+1)\,e^{-l(l+1)\theta_{rot}/T} \approx \frac{1}{\sigma}\frac{T}{\theta_{rot}}, \quad T\gg\theta_{rot}, \qquad \sigma = \begin{cases}1 & \text{unsymmetric}\\ 2 & \text{symmetric}\end{cases}$$
$$U_{rot} = RT, \qquad C_{v,rot} = R, \qquad S^\circ_{rot} = R\left(1 - \ln(\sigma\theta_{rot}/T)\right)$$
Non-linear molecule, $\theta_{rot,\alpha} = hcB_\alpha/k_B$:
$$q_{rot} \approx \frac{1}{\sigma}\left(\frac{\pi T^3}{\theta_{rot,\alpha}\theta_{rot,\beta}\theta_{rot,\gamma}}\right)^{1/2}, \quad T\gg\theta_{rot,\alpha,\beta,\gamma}, \qquad \sigma = \text{rotational symmetry number}$$
$$U_{rot} = \frac{3}{2}RT, \qquad C_{v,rot} = \frac{3}{2}R, \qquad S^\circ_{rot} = \frac{R}{2}\left[3 - \ln\left(\frac{\sigma^2\,\theta_{rot,\alpha}\theta_{rot,\beta}\theta_{rot,\gamma}}{\pi T^3}\right)\right]$$
Vibrational DOFs: harmonic oscillator model. Single harmonic mode, $\theta_{vib} = h\nu/k_B$:
$$q_{vib} = \frac{1}{1 - e^{-\theta_{vib}/T}} \approx \frac{T}{\theta_{vib}}, \quad T\gg\theta_{vib}$$
$$U_{vib} = R\,\frac{\theta_{vib}}{e^{\theta_{vib}/T} - 1}, \qquad C_{v,vib} = R\left(\frac{\theta_{vib}}{T}\,\frac{e^{\theta_{vib}/2T}}{e^{\theta_{vib}/T} - 1}\right)^2, \qquad S^\circ_{vib} = R\left[\frac{\theta_{vib}/T}{e^{\theta_{vib}/T} - 1} - \ln\left(1 - e^{-\theta_{vib}/T}\right)\right]$$
Summed over all modes i:
$$U_{vib} = R\sum_i\frac{\theta_{vib,i}}{e^{\theta_{vib,i}/T} - 1}, \qquad C_{v,vib} = R\sum_i\left(\frac{\theta_{vib,i}}{T}\,\frac{e^{\theta_{vib,i}/2T}}{e^{\theta_{vib,i}/T} - 1}\right)^2, \qquad S^\circ_{vib} = R\sum_i\left[\frac{\theta_{vib,i}/T}{e^{\theta_{vib,i}/T} - 1} - \ln\left(1 - e^{-\theta_{vib,i}/T}\right)\right]$$
Electronic DOFs qelec = spin multiplicity
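A sketch that collects the harmonic-oscillator expressions above into code: given a list of vibrational frequencies in cm⁻¹ (the three values below are illustrative, water-like modes), it evaluates θ_vib for each mode and returns q_vib, U_vib, S_vib, and the zero-point energy at a chosen temperature.

# Sketch: harmonic-oscillator vibrational thermodynamics from a frequency list
# (cm^-1; the three values below are illustrative). Implements q_vib, U_vib,
# S_vib (all referenced to 0 K) and the zero-point energy.
import numpy as np

KB = 1.380649e-23          # J/K
H = 6.62607015e-34         # J s
C = 2.99792458e10          # cm/s
R = 8.314462618            # J/mol/K
NA = 6.02214076e23

def vib_thermo(freqs_cm, T):
    theta = H * C * np.asarray(freqs_cm) / KB            # theta_vib = h nu / kB, in K
    x = theta / T
    q = np.prod(1.0 / (1.0 - np.exp(-x)))                # q_vib (ZPE-referenced)
    U = R * np.sum(theta / (np.exp(x) - 1.0))            # J/mol, relative to 0 K
    S = R * np.sum(x / (np.exp(x) - 1.0) - np.log(1.0 - np.exp(-x)))
    zpe = 0.5 * NA * np.sum(H * C * np.asarray(freqs_cm))  # J/mol
    return q, U, S, zpe

q, U, S, zpe = vib_thermo([3657.0, 1595.0, 3756.0], T=298.15)  # water-like modes
print(f"q_vib={q:.3f}  U_vib={U:.1f} J/mol  S_vib={S:.3f} J/mol/K  ZPE={zpe/1000:.1f} kJ/mol")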
µA (N, V, T ) = µB (N, V, T )
EA (0) − kT ln(qA /NA ) = EB (0) − kT ln(qB /NB )
$$\frac{N_B}{N_A} = \frac{N_B/V}{N_A/V} = \frac{q_B(T,V)/V}{q_A(T,V)/V}\,e^{-\Delta E(0)/kT}$$
$$r = kC_AC_B = k^\ddagger C_{AB}^\ddagger = \nu^\ddagger K^\ddagger C_AC_B = \nu^\ddagger\,\frac{k_BT}{h\nu^\ddagger}\,\bar{K}^\ddagger(T)\,C_AC_B = \frac{k_BT}{h}\,\frac{q^\ddagger(T)}{q_A(T)\,q_B(T)}\,e^{-\Delta E(0)/k_BT}\,C_AC_B$$
ψ(G) is Fourier transform of ψ(x). Both contain exactly the same information. Can take advantage
of Fourier transform machinery to transform between two representations. For general function
ψ(x), represent on some grid of size N. Size of grid determined by maximum frequency/minimum
wavelength included. N log(N ) operations to transform back and forth.
Integrals can be evaluated in either x or G space:
$$I = \int_0^a A^*(x)B(x)\,dx \qquad (76)$$
$$= \sum_{G,G'}A^*(G)B(G')\int_0^a e^{-iGx}e^{iG'x}\,dx \qquad (77)$$
$$= \sum_{G,G'}A^*(G)B(G')\,a\,\delta_{G,G'} \qquad (78)$$
$$= a\sum_G A^*(G)B(G) \qquad (79)$$
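Numerically this back-and-forth is just an FFT. The sketch below (grid size and the two test functions are arbitrary illustrations) checks that the real-space quadrature of the overlap integral equals a·Σ_G A*(G)B(G), Eq. 79, using numpy's FFT conventions.

# Sketch: evaluate the overlap integral of two periodic functions either on a
# real-space grid or from their plane-wave (Fourier) coefficients, Eq. (79).
import numpy as np

a = 5.0                      # cell length
N = 64                       # grid points / plane waves
x = np.linspace(0.0, a, N, endpoint=False)

A = np.cos(2*np.pi*x/a) + 0.3*np.sin(6*np.pi*x/a)   # smooth periodic test functions
B = np.exp(np.cos(2*np.pi*x/a))

# real-space quadrature of I = int_0^a A*(x) B(x) dx
I_real = np.sum(np.conj(A) * B) * a / N

# plane-wave coefficients: A(x) = sum_G A(G) exp(iGx), with G = 2*pi*m/a
A_G = np.fft.fft(A) / N
B_G = np.fft.fft(B) / N
I_recip = a * np.sum(np.conj(A_G) * B_G)            # Eq. (79)

print(I_real, I_recip.real)   # identical to machine precision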
9.2 Example
$$\frac{\hbar^2}{2m_e}G^2 < \frac{\hbar^2}{2m_e}G_{cut}^2 = E_{cutoff}$$
Sets minimum wavelength retained in basis, and thus size of basis. For fixed Ecutoff , larger a implies
larger basis. Basis size has to be a whole number, so changes discontinuously with Ecutoff .
Recall our basic equation
$$\left\{\hat{h} + v_{Coulomb}[\rho] + v_{exchange}[\psi_i] + v_{correlation}[\psi_i]\right\}\psi_i(r) = \epsilon_i\,\psi_i(r) \qquad (80)$$
$$\langle\phi_G|-\frac{1}{2}\frac{d^2}{dx^2}|\phi_{G'}\rangle = \frac{1}{2}G^2\,\delta_{G,G'}$$
How many Fourier components to include in the sums? Turns out for a basis of dimension m you
need 2m components to specify the potential exactly, but you can generally get away with smaller.
The cost and accuracy of the calculation scale with this choice.
Solutions given by $\psi_i(x) = \sum_G c_i(G)\,\phi_G(x)$.
One key thing we would need to evaluate potentials is electron density, ρ(x). Let fi be occupancy
of each orbital.
$$\rho(x) = \sum_i f_i\,|\psi_i(x)|^2 \qquad (84)$$
$$= \frac{1}{a}\sum_i f_i\sum_{G,G'}c_i^*(G)\,c_i(G')\,e^{i(G-G')x} \qquad (85)$$
$$= \sum_{G=-2G_{max}}^{2G_{max}} n(G)\,e^{iGx} \qquad (86)$$
Electron density exactly represented in plane wave basis with cutoff four times that of the basis set
cutoff.
Reciprocal lattice vectors become $G = ib_1 + jb_2 + kb_3$. Basis functions become $\phi_G(r) = \frac{1}{\sqrt{\Omega}}\,e^{iG\cdot r}$.
Locations of atoms Rα within this periodic cell can be specified in Cartesians or as fractional
coordinates of the lattice vectors. Related by Rα = hf .
9.7.5 SCF
Vasp uses same essential idea as GAMESS. Guess initial charge density and set of orbital coefficients.
Vasp initial guesses are not internally consistent: charge density typically a superposition of atoms,
but wavefunctions are random. Density and wavefunctions are optimized iteratively and in parallel
until internally consistent. A mixture of algorithms are used, in particular conjugate-gradient,
Davidson, and DIIS.
ALGO=Normal (default) block Davidson optimization, very safe
ALGO=Fast starts with block Davidson, then switches to DIIS. Good compromise method.
ALGO=VeryFast only DIIS. More aggressive.
where χm are core orbitals that are fixed. Equivalent to modifying the electron-electron interaction
by
$$\hat{v}^{OPW} = \hat{v}_{ee} + \sum_m(\epsilon - \epsilon_m)\,|\chi_m\rangle\langle\chi_m|$$
9.8.2 Pseudopotential
Replace last part with a potential designed to replicate the influence of the core electrons (and
nuclei) on the valence. Lose any description of the core, but (hopefully) the valence is retained.
10.2 Bloch's theorem and qualitative band structure

Images borrowed from https://doi.org/10.1002/anie.198708461.

[Figures: a chain of equally spaced H atoms treated as a very large ring (cyclic boundary conditions); orbitals of H2, cyclic H3, H4, . . . on the way to the infinite chain; the Bloch combinations at k = 0 (most bonding, nodeless) and k = π/a (most antibonding, maximum nodes); the first Brillouin zone −π/a ≤ k < π/a, with E(k) = E(−k) and as many allowed k values as unit cells in the crystal.]

Energy levels increase with chain length. Wavefunctions grow in a regular way, consistent with
(and identifiable by) underlying symmetry. Nodes increase systematically. In the limit, get an infinite
band of levels.

Infinite chain of H atoms separated by a lattice constant a. Electrons move in that periodic potential.
Expand wavefunctions in atom-centered 1s orbitals.
Bloch Functions, k, Band Structures

Potential: $V(x) = V(x + na)$, $n \in$ integers
Basis: 1s functions $\chi_n^{1s}$ centered on each atom $n = 0, 1, 2, \ldots$
Wavefunctions:
$$k = 0,\ \lambda\rightarrow\infty: \qquad \psi_0 = \sum_n e^{i0}\,\chi_n^{1s} = \sum_n\chi_n^{1s}$$
$$k = \frac{\pi}{2a},\ \lambda = 4a: \qquad \psi_{\pi/2a} = \sum_n e^{i\pi n/2}\,\chi_n^{1s}$$
$$k = \frac{\pi}{a},\ \lambda = 2a: \qquad \psi_{\pi/a} = \sum_n e^{i\pi n}\,\chi_n^{1s} = \sum_n(-1)^n\chi_n^{1s}$$
Bloch's Theorem: $\psi_k = \sum_n e^{ikna}\,\chi_n$, $|k| \le \dfrac{\pi}{a}$ (recall $e^{ikr}$ is like a cosine function)
Bloch’s theorem states that the wavefunctions can be written as the product of a cell-invariant part
(the H 1s functions here) and a cell-periodic part. The periodic part is indexed by the wavevector
k. k takes as many values as there are periodic units N :
$$k = \frac{2\pi n}{aN}, \qquad n = -\frac{N-1}{2},\ \ldots,\ 0,\ \ldots,\ \frac{N-1}{2}$$
If N is infinite, then k is a continuous and real variable. The space of unique values of k is called
the first Brillouin zone. The periodic phases of the basis functions correspond to an underlying
wavelength associated with k :
λk = 2π/k
By the de Broglie relationship, then, k relates to the momentum of an electron in that energy level.
Each k is degenerate such that E(k) = E(−k) (moving with the same momentum to the left and
right gives the same energy).
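A hedged numerical illustration (a textbook nearest-neighbor tight-binding model, not a calculation from these notes): writing the band of the 1s chain as E(k) = ε + 2t·cos(ka), with on-site energy ε and hopping matrix element t, k = 0 is the most bonding combination, k = π/a the most antibonding, and the bandwidth ("dispersion") is 4|t|.

# Illustrative nearest-neighbor tight-binding band for the 1-D H chain:
# E(k) = eps + 2 t cos(k a). Parameters are arbitrary; the point is that
# k = 0 is most bonding, k = pi/a most antibonding, and the bandwidth is 4|t|.
import numpy as np

eps = -0.5      # on-site (1s) energy, arbitrary units
t = -0.2        # nearest-neighbor coupling; more overlap -> wider band
a = 1.0         # lattice constant

k = np.linspace(0.0, np.pi / a, 11)     # sample half of the first Brillouin zone
E = eps + 2.0 * t * np.cos(k * a)

for ki, Ei in zip(k, E):
    print(f"k = {ki:5.2f}  E(k) = {Ei:6.3f}")
print("bandwidth =", E.max() - E.min())  # = 4|t|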
The continuum of energy levels in k is called the band structure. Conventionally plotted vs |k|. The
width of the band is called the dispersion:
[Figure: band structure E(k) of the H-atom chain plotted from k = 0 to π/a, alongside the orbital diagram of the finite chain.]

The highest filled energy state, in this case k = π/2a, is the Fermi energy. A band structure like the one on
the right would correspond to a metal, as the energy difference between different momentum states
is zero.

• Discrete orbital energies become a continuous band of energies, indexed by k
• Width ("dispersion") of the band is determined by overlaps between neighboring cells
  – Larger supercells → smaller dispersion
Band dispersion depends on cell overlap; dispersion depends on the type and orientation of the orbitals.
14.6.1 LDA
One well-defined limit is the homogeneous electron gas, and this is the usual starting point for
modern approximate DFT methods. Assume exchange and correlation potentials at any given
point depend only on the value of ρ there (or spin-up and spin-down ρ, if spin-polarized). We know
from Slater and Dirac’s work what the exchange potential is for this system.
It is possible to determine numerically the correlation energy for a given density from quantum
Monte Carlo calculations. Ceperley and Alder (PRL 1980, 45, 566) did this to very high accuracy,
and others (Vosko, Wilk, and Nusair, “VWN”, and Perdew and Wang, “PW”) fit these numerical
results to analytical models in ρ. This combination of local exchange and correlation defines the
LDA model.
LDA offers modest improvement over HFS for molecules. “Homogeneous” approximation pretty
severe for an atom or molecule. Nonetheless, works surprisingly well for structures and charge
distributions, but has problems in calculating accurate bond energies, typically overbinding. Also
tends to underestimate the HOMO-LUMO gap in molecules and analogous band gap in solids.
14.6.2 GGA
14.6.3 Meta-GGA
14.6.4 Hyper GGA and hybrid functionals
• “Screened” exchange