Boolean Algebra


Boolean algebra

en.wikipedia.org
Chapter 1

2-valued morphism

2-valued morphism is a term used in mathematics[1] to describe a morphism that sends a Boolean algebra B onto the two-element Boolean algebra 2 = {0, 1}. It is essentially the same thing as an ultrafilter on B.
A 2-valued morphism can be interpreted as representing a particular state of B. All propositions of B which are mapped to 1 are considered true, and all propositions mapped to 0 are considered false. Since this morphism conserves the Boolean operators (negation, conjunction, etc.), the set of true propositions will not be inconsistent but will correspond to a particular maximal conjunction of propositions, denoting the (atomic) state.
The transition between two states s1 and s2 of B, represented by 2-valued morphisms, can then be represented by an automorphism f from B to B such that s2 ∘ f = s1.
The possible states of different objects defined in this way can be conceived as representing potential events. The set of events can then be structured in the same way as invariance of causal structure, or local-to-global causal connections, or even formal properties of global causal connections.
The morphisms between (non-trivial) objects could be viewed as representing causal connections leading from one event to another one. For example, the morphism f above leads from event s1 to event s2. The sequences or paths of morphisms for which there is no inverse morphism could then be interpreted as defining horismotic or chronological precedence relations. These relations would then determine a temporal order, a topology, and possibly a metric.
According to Heylighen,[2] a minimal realization of such a relationally determined space-time structure can be found. In this model there are, however, no explicit distinctions. This is equivalent to a model where each object is characterized by only one distinction: (presence, absence) or (existence, non-existence) of an event. In this manner, the 'arrows' or the 'structural language' can then be interpreted as morphisms which conserve this unique distinction.[2]
If more than one distinction is considered, however, the model becomes much more complex, and the interpretation of distinctional states as events, or morphisms as processes, is much less straightforward.

1.1 References
[1] Fleischer, Isidore (1993), "A Boolean formalization of predicate calculus", Algebras and orders (Montreal, PQ, 1991), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 389, Kluwer Acad. Publ., Dordrecht, pp. 193–198, MR 1233791.

[2] Heylighen, Francis (1990). "A Structural Language for the Foundations of Physics". International Journal of General Systems 18, pp. 93–112.

1.2 External links


Representation and Change - A metarepresentational framework for the foundations of physical and cognitive
science

Chapter 2

Absorption law

In algebra, the absorption law or absorption identity is an identity linking a pair of binary operations.
Two binary operations, ∨ and ∧, are said to be connected by the absorption law if:

a ∧ (a ∨ b) = a ∨ (a ∧ b) = a.

A set equipped with two commutative, associative and idempotent binary operations ∨ (join) and ∧ (meet) that are connected by the absorption law is called a lattice.
Examples of lattices include Boolean algebras, the set of sets with union and intersection operators, Heyting algebras, and ordered sets with min and max operations.
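As a quick concrete check of the last example (a minimal sketch; the helper names are ours, not from any library), min and max on integers satisfy both absorption identities:

```python
# Check the two absorption identities a ∧ (a ∨ b) = a and a ∨ (a ∧ b) = a
# for the lattice of integers under min (meet) and max (join).
def meet(a, b):
    return min(a, b)

def join(a, b):
    return max(a, b)

for a in range(-5, 6):
    for b in range(-5, 6):
        assert meet(a, join(a, b)) == a  # a ∧ (a ∨ b) = a
        assert join(a, meet(a, b)) == a  # a ∨ (a ∧ b) = a

print("absorption holds on the sample range")
```

The same exhaustive check works for any finite candidate lattice, since absorption is a pair of universally quantified identities.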
In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by ∨ and ∧, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic.
The absorption law does not hold in many other algebraic structures, such as commutative rings (e.g. the field of real numbers), relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence between the free variables of the defining pair of identities.

2.1 See also


Identity (mathematics)

2.2 References
Davey, B. A.; Priestley, H. A. (2002). Introduction to Lattices and Order (second ed.). Cambridge University
Press. ISBN 0-521-78451-4.

Hazewinkel, Michiel, ed. (2001) [1994], Absorption laws, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Weisstein, Eric W. Absorption Law. MathWorld.

Chapter 3

Algebraic normal form

In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing logical formulas in one of three subforms:

The entire formula is purely true or false:

1
0

One or more variables are ANDed together into a term, and one or more terms are XORed together into ANF. No NOTs are permitted:

a ⊕ b ⊕ ab ⊕ abc

or, in standard propositional logic symbols:

a ⊻ b ⊻ (a ∧ b) ⊻ (a ∧ b ∧ c)

The previous subform with a purely true term:

1 ⊕ a ⊕ b ⊕ ab ⊕ abc

Formulas written in ANF are also known as Zhegalkin polynomials (Russian: полиномы Жегалкина) and Positive Polarity (or Parity) Reed–Muller expressions.

3.1 Common uses


ANF is a normal form, which means that two equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names. Conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for that purpose, since it doesn't use equality as its equivalence relation: a ∨ ¬a isn't reduced to the same thing as 1, even though they're equal.
Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear feedback shift
registers): a linear function is one that is a sum of single literals. Properties of nonlinear feedback shift registers can
also be deduced from certain properties of the feedback function in ANF.

3.2 Performing operations within algebraic normal form


There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results.
XOR (logical exclusive disjunction) is performed directly:

(1 ⊕ x) ⊕ (1 ⊕ x ⊕ y)
= 1 ⊕ x ⊕ 1 ⊕ x ⊕ y
= 1 ⊕ 1 ⊕ x ⊕ x ⊕ y
= y

NOT (logical negation) is XORing 1:[1]

¬(1 ⊕ x ⊕ y)
= 1 ⊕ (1 ⊕ x ⊕ y)
= 1 ⊕ 1 ⊕ x ⊕ y
= x ⊕ y

AND (logical conjunction) is distributed algebraically:[2]

(1 ⊕ x)(1 ⊕ x ⊕ y)
= 1(1 ⊕ x ⊕ y) ⊕ x(1 ⊕ x ⊕ y)
= (1 ⊕ x ⊕ y) ⊕ (x ⊕ x ⊕ xy)
= 1 ⊕ x ⊕ x ⊕ x ⊕ y ⊕ xy
= 1 ⊕ x ⊕ y ⊕ xy

OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[3] (easier when both operands have purely true terms) or a ⊕ b ⊕ ab[4] (easier otherwise):

(1 ⊕ x) + (1 ⊕ x ⊕ y)
= 1 ⊕ (1 ⊕ 1 ⊕ x)(1 ⊕ 1 ⊕ x ⊕ y)
= 1 ⊕ x(x ⊕ y)
= 1 ⊕ x ⊕ xy
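These four operations can be mechanized by representing an ANF expression as a set of terms, each term a frozenset of variable names, with the empty frozenset standing for the constant 1. This representation is ours, not a standard library API; under it, XOR is symmetric difference of term sets and AND distributes terms pairwise:

```python
# ANF as a set of terms; each term is a frozenset of variable names.
# frozenset() (the empty term) represents the constant 1.
ONE = frozenset()

def xor(p, q):
    # XOR: terms appearing an even number of times cancel.
    return p ^ q

def neg(p):
    # NOT a = 1 XOR a
    return xor({ONE}, p)

def conj(p, q):
    # AND: distribute, merging the variables of each pair of terms
    # (x*x = x, and duplicate products cancel via XOR).
    out = set()
    for s in p:
        for t in q:
            out ^= {s | t}
    return out

def disj(p, q):
    # OR: a + b = a XOR b XOR ab
    return xor(xor(p, q), conj(p, q))

# (1 ⊕ x) ⊕ (1 ⊕ x ⊕ y) = y
p = {ONE, frozenset("x")}
q = {ONE, frozenset("x"), frozenset("y")}
print(xor(p, q))  # {frozenset({'y'})}
# (1 ⊕ x)(1 ⊕ x ⊕ y) = 1 ⊕ x ⊕ y ⊕ xy
print(sorted(map(sorted, conj(p, q))))  # [[], ['x'], ['x', 'y'], ['y']]
```

The two printed results reproduce the XOR and AND derivations worked out above.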

3.3 Converting to algebraic normal form


Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF. For example:

x + (y ∧ ¬z)
= x + (y(1 ⊕ z))
= x + (y ⊕ yz)
= x ⊕ (y ⊕ yz) ⊕ x(y ⊕ yz)
= x ⊕ y ⊕ xy ⊕ yz ⊕ xyz
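Another standard route to ANF, not shown above, goes through the truth table: the Zhegalkin (Möbius) transform turns the table of values into the table of monomial coefficients. A minimal sketch (function names are ours) applied to the same formula:

```python
# Compute ANF coefficients from a truth table via the Möbius (Zhegalkin) transform.
# truth[m] is f evaluated on the input whose bits are given by the mask m.
def anf_coefficients(truth, n):
    coef = list(truth)
    for i in range(n):
        for m in range(1 << n):
            if m & (1 << i):
                coef[m] ^= coef[m ^ (1 << i)]
    return coef  # coef[m] = 1 iff the monomial with variable-mask m appears

# f(x, y, z) = x OR (y AND NOT z), matching the worked example
# (bit 0 = x, bit 1 = y, bit 2 = z).
n = 3
truth = [(m & 1) | ((m >> 1) & 1) & (~(m >> 2) & 1) for m in range(1 << n)]
coef = anf_coefficients(truth, n)
monomials = [m for m in range(1 << n) if coef[m]]
print(monomials)  # [1, 2, 3, 6, 7], i.e. x, y, xy, yz, xyz
```

The output masks decode to x ⊕ y ⊕ xy ⊕ yz ⊕ xyz, agreeing with the derivation above.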

3.4 Formal representation


ANF is sometimes described in an equivalent way:

f(x1, x2, …, xn) = a0 ⊕ a1x1 ⊕ a2x2 ⊕ ⋯ ⊕ anxn ⊕ a1,2x1x2 ⊕ ⋯ ⊕ a1,2,…,nx1x2…xn,

where the coefficients a0, a1, …, a1,2,…,n ∈ {0, 1} fully describe f.



3.4.1 Recursively deriving multiargument Boolean functions

There are only four functions with one argument:

f (x) = 0

f (x) = 1

f (x) = x

f(x) = 1 ⊕ x

To represent a function with multiple arguments, one can use the following equality:

f(x1, x2, …, xn) = g(x2, …, xn) ⊕ x1 h(x2, …, xn), where

g(x2, …, xn) = f(0, x2, …, xn)
h(x2, …, xn) = f(0, x2, …, xn) ⊕ f(1, x2, …, xn)

Indeed,

if x1 = 0 then x1h = 0 and so f(0, …) = f(0, …)

if x1 = 1 then x1h = h and so f(1, …) = f(0, …) ⊕ f(0, …) ⊕ f(1, …)

Since both g and h have fewer arguments than f, it follows that using this process recursively we will finish with functions of one variable. For example, let us construct the ANF of f(x, y) = x ∨ y (logical or):

f(x, y) = f(0, y) ⊕ x(f(0, y) ⊕ f(1, y));

since f(0, y) = 0 ∨ y = y and f(1, y) = 1 ∨ y = 1,

it follows that f(x, y) = y ⊕ x(y ⊕ 1);

by distribution, we get the final ANF: f(x, y) = y ⊕ xy ⊕ x = x ⊕ y ⊕ xy.
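The recursion above translates directly into code. In this sketch (helper names are ours) f is a Python function on 0/1 arguments, variables are numbered from 1, and the result is the set of ANF monomials:

```python
# Recursively derive the ANF of a Boolean function via
#   f(x1,...,xn) = g(x2,...,xn) XOR x1*h(x2,...,xn),
# where g = f(0,...) and h = f(0,...) XOR f(1,...).
def anf(f, n, index=1):
    if n == 0:
        return {frozenset()} if f() else set()
    g = lambda *rest: f(0, *rest)
    h = lambda *rest: f(0, *rest) ^ f(1, *rest)
    g_terms = anf(g, n - 1, index + 1)
    # multiply every term of h by the variable x_index
    h_terms = {t | {index} for t in anf(h, n - 1, index + 1)}
    return g_terms ^ h_terms  # XOR of term sets = symmetric difference

# f(x, y) = x OR y, expected ANF: x ⊕ y ⊕ xy
or_terms = anf(lambda x, y: x | y, 2)
print(sorted(map(sorted, or_terms)))  # [[1], [1, 2], [2]]
```

Running it on logical OR reproduces the ANF x ⊕ y ⊕ xy derived above (terms {1}, {2}, {1, 2}).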

3.5 See also


Reed–Muller expansion

Zhegalkin normal form

Boolean function

Logical graph

Zhegalkin polynomial

Negation normal form

Conjunctive normal form

Disjunctive normal form

Karnaugh map

Boolean ring

3.6 References
[1] WolframAlpha NOT-equivalence demonstration: ¬a = 1 ⊕ a

[2] WolframAlpha AND-equivalence demonstration: (a ⊕ b)(c ⊕ d) = ac ⊕ ad ⊕ bc ⊕ bd

[3] From De Morgan's laws

[4] WolframAlpha OR-equivalence demonstration: a + b = a ⊕ b ⊕ ab

3.7 Further reading


Wegener, Ingo (1987). The complexity of Boolean functions. Wiley-Teubner. p. 6. ISBN 3-519-02107-2.

Presentation (PDF) (in German). University of Duisburg-Essen. Archived (PDF) from the original on 2017-
04-19. Retrieved 2017-04-19.

Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.
Chapter 4

Analysis of Boolean functions

In mathematics and theoretical computer science, analysis of Boolean functions[1] is the study of real-valued functions on {0, 1}^n or {−1, 1}^n from a spectral perspective (such functions are sometimes known as pseudo-Boolean functions). The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning.

4.1 Basic concepts


We will mostly consider functions defined on the domain {−1, 1}^n. Sometimes it is more convenient to work with the domain {0, 1}^n instead. If f is defined on {−1, 1}^n, then the corresponding function defined on {0, 1}^n is

f01(x1, …, xn) = f((−1)^{x1}, …, (−1)^{xn}).

Similarly, for us a Boolean function is a {−1, 1}-valued function, though often it is more convenient to consider {0, 1}-valued functions instead.

4.1.1 Fourier expansion


Every real-valued function f : {−1, 1}^n → ℝ has a unique expansion as a multilinear polynomial:

f(x) = Σ_{S ⊆ [n]} f̂(S) χ_S(x),   where χ_S(x) = Π_{i ∈ S} x_i.

This is the Hadamard transform of the function f, which is the Fourier transform in the group ℤ₂^n. The coefficients f̂(S) are known as Fourier coefficients, and the entire sum is known as the Fourier expansion of f. The functions χ_S are known as Fourier characters, and they form an orthonormal basis for the space of all functions over {−1, 1}^n, with respect to the inner product ⟨f, g⟩ = 2^{−n} Σ_{x ∈ {−1,1}^n} f(x) g(x).
The Fourier coefficients can be calculated using an inner product:

f̂(S) = ⟨f, χ_S⟩.

In particular, this shows that f̂(∅) = E[f]. Parseval's identity states that

‖f‖² = E[f²] = Σ_S f̂(S)².
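These formulas can be checked numerically by brute force over the cube. A minimal sketch (function names are ours) computes the Fourier coefficients of a function on {−1, 1}^n directly from the inner-product formula and verifies Parseval's identity:

```python
from itertools import product

# Fourier coefficient: f̂(S) = <f, χ_S> = 2^(−n) Σ_x f(x) χ_S(x).
def fourier_coefficient(f, n, S):
    total = 0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2**n

# max(x1, x2) on {−1,1}^2; its expansion is 1/2 + x1/2 + x2/2 − x1x2/2.
f = lambda x: max(x[0], x[1])
n = 2
coeffs = {S: fourier_coefficient(f, n, S) for S in [(), (0,), (1,), (0, 1)]}
print(coeffs)  # {(): 0.5, (0,): 0.5, (1,): 0.5, (0, 1): -0.5}
# Parseval: Σ f̂(S)² = E[f²] = 1 for a ±1-valued function.
print(sum(c**2 for c in coeffs.values()))  # 1.0
```

The empty-set coefficient equals E[f], and the squared coefficients sum to 1, as the identities above require.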

If we skip S = ∅, then we get the variance of f:

V[f] = Σ_{S ≠ ∅} f̂(S)².

4.1.2 Fourier degree and Fourier levels

The degree of a function f : {−1, 1}^n → ℝ is the maximum d such that f̂(S) ≠ 0 for some set S of size d. In other words, the degree of f is its degree as a multilinear polynomial.
It is convenient to decompose the Fourier expansion into levels: the Fourier coefficient f̂(S) is on level |S|.
The degree-d part of f is

f^{=d} = Σ_{|S| = d} f̂(S) χ_S.

It is obtained from f by zeroing out all Fourier coefficients not on level d.
We similarly define f^{>d}, f^{<d}, f^{≥d}, f^{≤d}.

4.1.3 Influence
The i-th influence of a function f : {−1, 1}^n → ℝ can be defined in two equivalent ways:

Inf_i[f] = E[((f − f^{⊕i}) / 2)²] = Σ_{S ∋ i} f̂(S)²,

f^{⊕i}(x1, …, xn) = f(x1, …, x_{i−1}, −x_i, x_{i+1}, …, xn).

If f is Boolean then Inf_i[f] is the probability that flipping the i-th coordinate flips the value of the function:

Inf_i[f] = Pr[f(x) ≠ f^{⊕i}(x)].

If Inf_i[f] = 0 then f doesn't depend on the i-th coordinate.
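For small n the probabilistic definition can be evaluated exhaustively. A sketch (helper names ours) computing Inf_i[f] as the fraction of inputs on which flipping coordinate i flips a Boolean f:

```python
from itertools import product

# Inf_i[f] = Pr[f(x) != f(x with coordinate i flipped)], x uniform on {−1,1}^n.
def influence(f, n, i):
    flips = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / 2**n

# Majority on 3 bits: a coordinate is pivotal iff the other two disagree,
# which happens with probability 1/2.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print([influence(maj3, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
```

A dictatorship x_0 would instead give influence 1 on coordinate 0 and 0 elsewhere, illustrating the last remark above.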


The total influence of f is the sum of all of its influences:

Inf[f] = Σ_{i=1}^n Inf_i[f] = Σ_S |S| f̂(S)².

The total influence of a Boolean function is also the average sensitivity of the function. The sensitivity of a Boolean function f at a given point is the number of coordinates i such that if we flip the i-th coordinate, the value of the function changes. The average value of this quantity is exactly the total influence.
The total influence can also be defined using the discrete Laplacian of the Hamming graph, suitably normalized: Inf[f] = ⟨f, Lf⟩.

4.1.4 Noise stability


Given −1 ≤ ρ ≤ 1, we say that two random vectors x, y ∈ {−1, 1}^n are ρ-correlated if the marginal distributions of x, y are uniform and E[x_i y_i] = ρ. Concretely, we can generate a pair of ρ-correlated random variables by first choosing x, z ∈ {−1, 1}^n uniformly at random, and then choosing y according to one of the following two equivalent rules, applied independently to each coordinate:

y_i = x_i with probability ρ and z_i with probability 1 − ρ;   or   y_i = x_i with probability (1 + ρ)/2 and −x_i with probability (1 − ρ)/2.

We denote this distribution by y ~ N_ρ(x).

The noise stability of a function f : {−1, 1}^n → ℝ at ρ can be defined in two equivalent ways:

Stab_ρ[f] = E_{x; y ~ N_ρ(x)}[f(x) f(y)] = Σ_{S ⊆ [n]} ρ^{|S|} f̂(S)².

For 0 ≤ δ ≤ 1, the noise sensitivity of f at δ is

NS_δ[f] = 1/2 − (1/2) Stab_{1−2δ}[f].

If f is Boolean, then this is the probability that the value of f changes if we flip each coordinate with probability δ, independently.
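Noise sensitivity can be estimated by direct simulation. A sketch (function names ours): flip each coordinate independently with probability δ and count how often the value of majority changes:

```python
import random

# Estimate NS_δ[f]: flip each coordinate independently with probability δ
# and measure how often the value of f changes.
def noise_sensitivity(f, n, delta, trials=100_000, seed=0):
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = [rng.choice([-1, 1]) for _ in range(n)]
        y = [-v if rng.random() < delta else v for v in x]
        if f(x) != f(y):
            changed += 1
    return changed / trials

maj3 = lambda x: 1 if sum(x) > 0 else -1
# Exact value for Maj3 at δ = 0.1: Stab_ρ[Maj3] = (3/4)ρ + (1/4)ρ³, so
# NS_0.1 = 1/2 − (1/2) Stab_0.8 = 0.136; the estimate should land nearby.
print(noise_sensitivity(maj3, 3, 0.1))
```

The closed form used in the comment comes from Maj3's Fourier expansion (1/2)(x1 + x2 + x3) − (1/2)x1x2x3 and the spectral formula for Stab_ρ above.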

4.1.5 Noise operator


The noise operator T_ρ is an operator taking a function f : {−1, 1}^n → ℝ and returning another function T_ρ f : {−1, 1}^n → ℝ given by

(T_ρ f)(x) = E_{y ~ N_ρ(x)}[f(y)] = Σ_{S ⊆ [n]} ρ^{|S|} f̂(S) χ_S.

When ρ > 0, the noise operator can also be defined using a continuous-time Markov chain in which each bit is flipped independently with rate 1. The operator T_ρ corresponds to running this Markov chain for (1/2) log(1/ρ) steps starting at x, and taking the average value of f at the final state. This Markov chain is generated by the Laplacian of the Hamming graph, and this relates total influence to the noise operator.
Noise stability can be defined in terms of the noise operator: Stab_ρ[f] = ⟨f, T_ρ f⟩.

4.1.6 Hypercontractivity
For 1 ≤ q < ∞, the L_q-norm of a function f : {−1, 1}^n → ℝ is defined by

‖f‖_q = (E[|f|^q])^{1/q}.

We also define ‖f‖_∞ = max_{x ∈ {−1,1}^n} |f(x)|.
The hypercontractivity theorem states that for any q > 2 and q′ = 1/(1 − 1/q),

‖T_ρ f‖_q ≤ ‖f‖_2 and ‖T_ρ f‖_2 ≤ ‖f‖_{q′}, where ρ = 1/√(q − 1).

Hypercontractivity is closely related to the logarithmic Sobolev inequalities of functional analysis.[2]
A similar result for q < 2 is known as reverse hypercontractivity.[3]

4.1.7 p-Biased analysis


In many situations the input to the function is not uniformly distributed over {−1, 1}^n, but instead has a bias toward −1 or 1. In these situations it is customary to consider functions over the domain {0, 1}^n. For 0 < p < 1, the p-biased measure μ_p is given by

μ_p(x) = p^{Σ_i x_i} (1 − p)^{Σ_i (1 − x_i)}.

This measure can be generated by choosing each coordinate independently to be 1 with probability p and 0 with probability 1 − p.
The classical Fourier characters are no longer orthogonal with respect to this measure. Instead, we use the characters

ω_S(x) = (−√(p/(1 − p)))^{|{i ∈ S : x_i = 0}|} (√((1 − p)/p))^{|{i ∈ S : x_i = 1}|}.

The p-biased Fourier expansion of f is the expansion of f as a linear combination of p-biased characters:

f = Σ_{S ⊆ [n]} f̂(S) ω_S.

We can extend the definitions of influence and the noise operator to the p-biased setting by using their spectral definitions.

Influence

The i-th influence is given by

Inf_i[f] = Σ_{S ∋ i} f̂(S)² = p(1 − p) E[(f − f^{⊕i})²].

The total influence is the sum of the individual influences:

Inf[f] = Σ_{i=1}^n Inf_i[f].

Noise operator

A pair of ρ-correlated random variables can be obtained by choosing x, z ~ μ_p independently and y ~ N_ρ(x), where N_ρ is given by

y_i = x_i with probability ρ and z_i with probability 1 − ρ.

The noise operator is then given by

(T_ρ f)(x) = Σ_{S ⊆ [n]} ρ^{|S|} f̂(S) ω_S(x) = E_{y ~ N_ρ(x)}[f(y)].

Using this we can define the noise stability and the noise sensitivity, as before.

Russo–Margulis formula

The Russo–Margulis formula states that for monotone Boolean functions f : {0, 1}^n → {0, 1},

(d/dp) E_{x ~ μ_p}[f(x)] = Inf[f] / (p(1 − p)) = Σ_{i=1}^n Pr[f ≠ f^{⊕i}].

Both the influence and the probabilities are taken with respect to μ_p, and on the right-hand side we have the average sensitivity of f. If we think of f as a property, then the formula states that as p varies, the derivative of the probability that f occurs at p equals the average sensitivity at p.
The Russo–Margulis formula is key for proving sharp threshold theorems such as Friedgut's.

4.1.8 Gaussian space


One of the deepest results in the area, the invariance principle, connects the distribution of functions on the Boolean cube {−1, 1}^n to their distribution on Gaussian space, which is the space ℝ^n endowed with the standard n-dimensional Gaussian measure.
Many of the basic concepts of Fourier analysis on the Boolean cube have counterparts in Gaussian space:

The counterpart of the Fourier expansion in Gaussian space is the Hermite expansion, which is an expansion into an infinite sum (converging in L²) of multivariate Hermite polynomials.

The counterpart of total influence or average sensitivity for the indicator function of a set is Gaussian surface area, which is the Minkowski content of the boundary of the set.

The counterpart of the noise operator is the Ornstein–Uhlenbeck operator (related to the Mehler transform), given by (U_ρ f)(x) = E_z[f(ρx + √(1 − ρ²) z)], where z is a standard n-dimensional Gaussian, or alternatively by (U_ρ f)(x) = E[f(y)], where x, y is a pair of ρ-correlated standard Gaussians.

Hypercontractivity holds (with appropriate parameters) in Gaussian space as well.

Gaussian space is more symmetric than the Boolean cube (for example, it is rotation invariant), and supports continuous arguments which may be harder to carry out in the discrete setting of the Boolean cube. The invariance principle links the two settings, and allows deducing results on the Boolean cube from results on Gaussian space.

4.2 Basic results

4.2.1 Friedgut–Kalai–Naor theorem

If f : {−1, 1}^n → {−1, 1} has degree at most 1, then f is either constant, equal to a coordinate, or equal to the negation of a coordinate. In particular, f is a dictatorship: a function depending on at most one coordinate.
The Friedgut–Kalai–Naor theorem,[4] also known as the FKN theorem, states that if f almost has degree 1 then it is close to a dictatorship. Quantitatively, if f : {−1, 1}^n → {−1, 1} and ‖f^{>1}‖² < ε, then f is O(ε)-close to a dictatorship; that is, ‖f − g‖² = O(ε) for some Boolean dictatorship g, or equivalently, Pr[f ≠ g] = O(ε) for some Boolean dictatorship g.
Similarly, a Boolean function of degree at most d depends on at most d·2^{d−1} coordinates, making it a junta (a function depending on a constant number of coordinates). The Kindler–Safra theorem[5] generalizes the Friedgut–Kalai–Naor theorem to this setting. It states that if f : {−1, 1}^n → {−1, 1} satisfies ‖f^{>d}‖² < ε then f is O(ε)-close to a Boolean function of degree at most d.

4.2.2 Kahn–Kalai–Linial theorem

The Poincaré inequality for the Boolean cube (which follows from formulas appearing above) states that for a function f : {−1, 1}^n → ℝ,

V[f] ≤ Inf[f] ≤ deg f · V[f].

This implies that max_i Inf_i[f] ≥ V[f]/n.
The Kahn–Kalai–Linial theorem,[6] also known as the KKL theorem, states that if f is Boolean then

max_i Inf_i[f] = Ω((log n)/n).

The bound given by the Kahn–Kalai–Linial theorem is tight, and is achieved by the Tribes function of Ben-Or and Linial:[7]

(x_{1,1} ∧ ⋯ ∧ x_{1,w}) ∨ ⋯ ∨ (x_{2^w,1} ∧ ⋯ ∧ x_{2^w,w}).

The Kahn–Kalai–Linial theorem was one of the first results in the area, and was the one introducing hypercontractivity into the context of Boolean functions.

4.2.3 Friedgut's junta theorem

If f : {−1, 1}^n → {−1, 1} is an M-junta (a function depending on at most M coordinates) then Inf[f] ≤ M according to the Poincaré inequality.
Friedgut's theorem[8] is a converse to this result. It states that for any ε > 0, the function f is ε-close to a Boolean junta depending on exp(Inf[f]/ε) coordinates.
Combined with the Russo–Margulis lemma, Friedgut's junta theorem implies that for every p, every monotone function is close to a junta with respect to μ_q for some q ≈ p.

4.2.4 Invariance principle

The invariance principle[9] generalizes the Berry–Esseen theorem to non-linear functions.
The Berry–Esseen theorem states (among other things) that if f = Σ_{i=1}^n c_i x_i and no c_i is too large compared to the rest, then the distribution of f over {−1, 1}^n is close to a normal distribution with the same mean and variance.
The invariance principle (in a special case) informally states that if f is a multilinear polynomial of bounded degree over x_1, …, x_n and all influences of f are small, then the distribution of f under the uniform measure over {−1, 1}^n is close to its distribution in Gaussian space.
More formally, let ψ be a univariate Lipschitz function, let f = Σ_{S ⊆ [n]} f̂(S) χ_S, let k = deg f, and let ε = max_i Σ_{S ∋ i} f̂(S)². Suppose that Σ_{S ≠ ∅} f̂(S)² ≤ 1. Then

|E_{x ∈ {−1,1}^n}[ψ(f(x))] − E_{g ~ N(0, I)}[ψ(f(g))]| = O(k 9^k ε^{1/2}).

By choosing appropriate ψ, this implies that the distributions of f under both measures are close in CDF distance, which is given by sup_t |Pr[f(x) < t] − Pr[f(g) < t]|.
The invariance principle was the key ingredient in the original proof of the Majority is Stablest theorem.

4.3 Some applications

4.3.1 Linearity testing

A Boolean function f : {−1, 1}^n → {−1, 1} is linear if it satisfies f(xy) = f(x)f(y), where xy = (x_1y_1, …, x_ny_n). It is not hard to show that the Boolean linear functions are exactly the characters χ_S.
In property testing we want to test whether a given function is linear. It is natural to try the following test: choose x, y ∈ {−1, 1}^n uniformly at random, and check that f(xy) = f(x)f(y). If f is linear then it always passes the test. Blum, Luby and Rubinfeld[10] showed that if the test passes with probability 1 − ε then f is O(ε)-close to a Fourier character. Their proof was combinatorial.
Bellare et al.[11] gave an extremely simple Fourier-analytic proof, which also shows that if the test succeeds with probability 1/2 + ε, then f is correlated with a Fourier character. Their proof relies on the following formula for the success probability of the test:

1/2 + (1/2) Σ_{S ⊆ [n]} f̂(S)³.
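The BLR test is easy to run exhaustively for small n. This sketch (function names ours) evaluates the pass rate of the test over all pairs (x, y), on a character χ_S, which always passes, and on majority, which does not:

```python
from itertools import product

# BLR linearity test: pick x, y, check f(x∘y) = f(x)·f(y),
# where ∘ is the coordinate-wise product on {−1,1}^n.
def blr_pass_rate(f, n):
    passed = total = 0
    for x in product([-1, 1], repeat=n):
        for y in product([-1, 1], repeat=n):
            xy = tuple(a * b for a, b in zip(x, y))
            passed += f(xy) == f(x) * f(y)
            total += 1
    return passed / total

chi = lambda x: x[0] * x[2]               # a character: always passes
maj3 = lambda x: 1 if sum(x) > 0 else -1  # majority on 3 bits
print(blr_pass_rate(chi, 3))   # 1.0
print(blr_pass_rate(maj3, 3))  # 0.625, matching 1/2 + (1/2) Σ f̂(S)³
```

For Maj3 the Fourier cubes are 3·(1/2)³ + (−1/2)³ = 1/4, so the formula above predicts a pass rate of 1/2 + 1/8 = 0.625, which the exhaustive count confirms.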

4.3.2 Arrow's theorem


Arrow's impossibility theorem states that for three or more candidates, the only unanimous voting rule for which there is always a Condorcet winner is a dictatorship.
The usual proof of Arrow's theorem is combinatorial. Kalai[12] gave an alternative proof of this result in the case of three candidates using Fourier analysis. If f : {−1, 1}^n → {−1, 1} is the rule that assigns a winner among two candidates given their relative orders in the votes, then the probability that there is a Condorcet winner given a uniformly random vote is 3/4 − (3/4) Stab_{−1/3}[f], from which the theorem easily follows.
The FKN theorem implies that if f is a rule for which there is almost always a Condorcet winner, then f is close to a dictatorship.

4.3.3 Sharp thresholds


A classical result in the theory of random graphs states that the probability that a G(n, p) random graph is connected tends to e^{−e^{−c}} if p ≈ (log n + c)/n. This is an example of a sharp threshold: the width of the threshold window, which is O(1/n), is asymptotically smaller than the threshold itself, which is roughly (log n)/n. In contrast, the probability that a G(n, p) graph contains a triangle tends to e^{−c³/6} when p ≈ c/n. Here both the threshold window and the threshold itself are Θ(1/n), and so this is a coarse threshold.

Friedgut's sharp threshold theorem[13] states, roughly speaking, that a monotone graph property (a graph property is a property which doesn't depend on the names of the vertices) has a sharp threshold unless it is correlated with the appearance of small subgraphs. This theorem has been widely applied to analyze random graphs and percolation.
On a related note, the KKL theorem implies that the width of the threshold window is always at most O(1/log n).[14]

4.3.4 Majority is Stablest


Let Maj_n : {−1, 1}^n → {−1, 1} denote the majority function on n coordinates. Sheppard's formula gives the asymptotic noise stability of majority:

Stab_ρ[Maj_n] → 1 − (2/π) arccos ρ.

This is related to the probability that if we choose x ∈ {−1, 1}^n uniformly at random and form y ∈ {−1, 1}^n by flipping each bit of x with probability (1 − ρ)/2, then the majority stays the same:

Stab_ρ[Maj_n] = 2 Pr[Maj_n(x) = Maj_n(y)] − 1.

There are Boolean functions with larger noise stability. For example, a dictatorship x_i has noise stability ρ.
The Majority is Stablest theorem states, informally, that the only functions having noise stability larger than majority have influential coordinates. Formally, for every ε > 0 there exists δ > 0 such that if f : {−1, 1}^n → {−1, 1} has expectation zero and max_i Inf_i[f] ≤ δ, then Stab_ρ[f] ≤ 1 − (2/π) arccos ρ + ε.
The first proof of this theorem used the invariance principle in conjunction with an isoperimetric theorem of Borell in Gaussian space; since then more direct proofs were devised.
Majority is Stablest implies that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. This implication, due to Khot et al.,[15] was the impetus behind proving the theorem.

4.4 References
[1] O'Donnell, Ryan (2014). Analysis of Boolean functions. Cambridge University Press. ISBN 978-1-107-03832-5.

[2] Diaconis, Persi; Saloff-Coste, Laurent (1996). "Logarithmic Sobolev inequalities for finite Markov chains". Ann. Appl. Probab. 6 (3): 695–750. doi:10.1214/aoap/1034968224.

[3] Mossel, Elchanan; Oleszkiewicz, Krzysztof; Sen, Arnab (2013). "On reverse hypercontractivity". GAFA. 23 (3): 1062–1097. doi:10.1007/s00039-013-0229-4.

[4] Friedgut, Ehud; Kalai, Gil; Naor, Assaf (2002). "Boolean functions whose Fourier transform is concentrated on the first two levels". Adv. Appl. Math. 29 (3): 427–437. doi:10.1016/S0196-8858(02)00024-6.

[5] Kindler, Guy (2002). "16. Property testing, PCP, and juntas" (Thesis). Tel Aviv University.

[6] Kahn, Jeff; Kalai, Gil; Linial, Nati (1988). "The influence of variables on Boolean functions". Proc. 29th Symp. on Foundations of Computer Science. SFCS'88. White Plains: IEEE. pp. 68–80. doi:10.1109/SFCS.1988.2192.

[7] Ben-Or, Michael; Linial, Nathan (1985). "Collective coin flipping, robust voting schemes and minima of Banzhaf values". Proc. 26th Symp. on Foundations of Computer Science. SFCS'85. Portland, Oregon: IEEE. pp. 408–416. doi:10.1109/SFCS.1985.15.

[8] Friedgut, Ehud (1998). "Boolean functions with low average sensitivity depend on few coordinates". Combinatorica. 18 (1): 474–483. doi:10.1007/PL00009809.

[9] Mossel, Elchanan; O'Donnell, Ryan; Oleszkiewicz, Krzysztof (2010). "Noise stability of functions with low influences: Invariance and optimality". Ann. Math. 171 (1): 295–341. doi:10.4007/annals.2010.171.295.

[10] Blum, Manuel; Luby, Michael; Rubinfeld, Ronitt (1993). "Self-testing/correcting with applications to numerical problems". J. Comput. Syst. Sci. 47 (3): 549–595. doi:10.1016/0022-0000(93)90044-W.

[11] Bellare, Mihir; Coppersmith, Don; Håstad, Johan; Kiwi, Marcos; Sudan, Madhu (1995). "Linearity testing in characteristic two". Proc. 36th Symp. on Foundations of Computer Science. FOCS'95.

[12] Kalai, Gil (2002). "A Fourier-theoretic perspective on the Condorcet paradox and Arrow's theorem". Adv. Appl. Math. 29 (3): 412–426. doi:10.1016/S0196-8858(02)00023-4.

[13] Friedgut, Ehud (1999). "Sharp thresholds of graph properties and the k-SAT problem". J. Am. Math. Soc. 12 (4): 1017–1054. doi:10.1090/S0894-0347-99-00305-7.

[14] Friedgut, Ehud; Kalai, Gil (1996). "Every monotone graph property has a sharp threshold". Proc. Am. Math. Soc. 124 (10): 2993–3002. doi:10.1090/S0002-9939-96-03732-X.

[15] Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007). "Optimal inapproximability results for MAX-CUT and other two-variable CSPs?" (PDF). SIAM Journal on Computing. 37 (1): 319–357. doi:10.1137/S0097539705447372.
Chapter 5

Balanced boolean function

In mathematics and computer science, a balanced boolean function is a boolean function whose output yields as
many 0s as 1s over its input set. This means that for a uniformly random input string of bits, the probability of getting
a 1 is 1/2.
An example of a balanced boolean function is the function that assigns a 1 to every even number and 0 to all odd
numbers (likewise the other way around). The same applies for functions assigning 1 to all positive numbers and 0
otherwise.
A Boolean function of n bits is balanced if it takes the value 1 with probability 1/2 over a uniformly random input.
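Balance can be checked directly by counting outputs over all 2^n inputs (a minimal sketch; the helper name is ours):

```python
from itertools import product

# A Boolean function of n bits is balanced iff it outputs 1 on exactly
# half of its 2^n inputs.
def is_balanced(f, n):
    ones = sum(f(x) for x in product([0, 1], repeat=n))
    return ones == 2 ** (n - 1)

xor3 = lambda x: x[0] ^ x[1] ^ x[2]  # parity is balanced (four 1s out of eight)
and3 = lambda x: x[0] & x[1] & x[2]  # AND outputs 1 only once: not balanced
print(is_balanced(xor3, 3), is_balanced(and3, 3))  # True False
```

Parity is the standard balanced example, while AND illustrates the statistical bias that makes unbalanced functions cryptographically weak.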

5.1 Usage
Balanced boolean functions are primarily used in cryptography. If a function is not balanced, it will have a statistical
bias, making it subject to cryptanalysis such as the correlation attack.

5.2 See also


Bent function

5.3 References
Balanced boolean functions that can be evaluated so that every input bit is unlikely to be read, Annual ACM
Symposium on Theory of Computing

Chapter 6

Bent function

[Figure: The four 2-ary Boolean functions with Hamming weight 1 are bent, i.e. their nonlinearity is 1, which is what the diagram shows. A 2-ary function is bent when its nonlinearity is 2^{n−1} − 2^{n/2−1} = 2^{2−1} − 2^{2/2−1} = 2 − 1 = 1.]

In the mathematical field of combinatorics, a bent function is a special type of Boolean function. This means it takes several inputs and gives one output, each of which has two possible values (such as 0 and 1, or true and false). The name is figurative. Bent functions are so called because they are as different as possible from all linear functions (the simplest or "straight-line" functions) and from all affine functions (which preserve parallel lines). This makes the bent functions naturally hard to approximate. Bent functions were defined and named in the 1960s by Oscar Rothaus in research not published until 1976.[1] They have been extensively studied for their applications in cryptography, but have also been applied to spread spectrum, coding theory, and combinatorial design. The definition can be extended in several ways, leading to different classes of generalized bent functions that share many of the useful properties of the original.
It is known that V. A. Eliseev and O. P. Stepchenkov studied bent functions, which they called minimal functions, in the USSR in 1962.[2] However, their results have still not been declassified.

6.1 Walsh transform


Bent functions are defined in terms of the Walsh transform. The Walsh transform of a Boolean function f : Z_2^n → Z_2 is the function f̂ : Z_2^n → Z given by

f̂(a) = Σ_{x ∈ Z_2^n} (−1)^{f(x) + a·x},

where a · x = a1x1 + a2x2 + … + anxn (mod 2) is the dot product in Z_2^n.[3] Alternatively, let S0(a) = {x ∈ Z_2^n : f(x) = a · x} and S1(a) = {x ∈ Z_2^n : f(x) ≠ a · x}. Then |S0(a)| + |S1(a)| = 2^n and hence

f̂(a) = |S0(a)| − |S1(a)| = 2|S0(a)| − 2^n.

For any Boolean function f and a ∈ Z_2^n the transform lies in the range

−2^n ≤ f̂(a) ≤ 2^n.

Moreover, the linear function f0(x) = a · x and the affine function f1(x) = a · x + 1 correspond to the two extreme cases, since

f̂0(a) = 2^n,  f̂1(a) = −2^n.

Thus, for each a ∈ Z_2^n the value of f̂(a) characterizes where the function f(x) lies in the range from f0(x) to f1(x).

6.2 Denition and properties


Rothaus defined a bent function as a Boolean function f : Z₂ⁿ → Z₂ whose Walsh transform has constant absolute
value. Bent functions are in a sense equidistant from all the affine functions, so they are equally hard to approximate
with any affine function.
The simplest examples of bent functions, written in algebraic normal form, are F(x₁,x₂) = x₁x₂ and G(x₁,x₂,x₃,x₄) =
x₁x₂ + x₃x₄. This pattern continues: x₁x₂ + x₃x₄ + ... + xₙ₋₁xₙ is a bent function Z₂ⁿ → Z₂ for every even n, but
there is a wide variety of different types of bent functions as n increases.[4] The sequence of values (−1)^f(x), with
x ∈ Z₂ⁿ taken in lexicographical order, is called a bent sequence; bent functions and bent sequences have equivalent
properties. In this ±1 form, the Walsh transform is easily computed as

f̂(a) = W(2ⁿ)(−1)^f(a),

where W(2ⁿ) is the natural-ordered Walsh matrix and the sequence is treated as a column vector.[5]
Rothaus proved that bent functions exist only for even n, and that for a bent function f, |f̂(a)| = 2^(n/2) for all a ∈
Z₂ⁿ.[3] In fact, f̂(a) = 2^(n/2) (−1)^g(a), where g is also bent. In this case, ĝ(a) = 2^(n/2) (−1)^f(a), so f and g are
considered dual functions.[5]
Every bent function has a Hamming weight (number of times it takes the value 1) of 2ⁿ⁻¹ ± 2^(n/2−1), and in fact
agrees with any affine function at one of those two numbers of points. So the nonlinearity of f (minimum number of
times it equals any affine function) is 2ⁿ⁻¹ − 2^(n/2−1), the maximum possible. Conversely, any Boolean function with
nonlinearity 2ⁿ⁻¹ − 2^(n/2−1) is bent.[3] The degree of f in algebraic normal form (called the nonlinear order of f) is at
most n/2 (for n > 2).[4]
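These properties are easy to verify exhaustively for small n. The self-contained sketch below (helper names are ours) checks that G(x₁,x₂,x₃,x₄) = x₁x₂ + x₃x₄ has a Walsh transform of constant absolute value 2^(4/2) = 4 and a Hamming weight of 2³ − 2¹ = 6.

```python
def dot(a, x):
    return bin(a & x).count("1") % 2

def walsh(f, n, a):
    return sum((-1) ** (f(x) ^ dot(a, x)) for x in range(2 ** n))

# G(x1,x2,x3,x4) = x1 x2 + x3 x4, with x1 as the most significant bit of x.
def g(x):
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

spectrum = [walsh(g, 4, a) for a in range(16)]
print(sorted(set(abs(v) for v in spectrum)))   # [4]: constant absolute value
print(sum(g(x) for x in range(16)))            # 6 = 2^3 - 2^1, the Hamming weight
```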
Although bent functions are vanishingly rare among Boolean functions of many variables, they come in many different
kinds. There has been detailed research into special classes of bent functions, such as the homogeneous ones[6] or
those arising from a monomial over a finite field,[7] but so far the bent functions have defied all attempts at a complete
enumeration or classification.

6.3 Constructions
There are several types of constructions for bent functions.[2]

combinatorial constructions: iterative constructions, Maiorana-McFarland construction, partial spreads, Dillon's
and Dobbertin's bent functions, minterm bent functions, bent iterative functions

algebraic constructions: monomial bent functions with exponents of Gold, Dillon, Kasami, Canteaut-Leander
and Canteaut-Charpin-Kyureghyan; Niho bent functions, etc.

6.4 Applications

As early as 1982 it was discovered that maximum length sequences based on bent functions have cross-correlation and
autocorrelation properties rivalling those of the Gold codes and Kasami codes for use in CDMA.[8] These sequences
have several applications in spread spectrum techniques.
The properties of bent functions are naturally of interest in modern digital cryptography, which seeks to obscure
relationships between input and output. By 1988 Forré recognized that the Walsh transform of a function can be
used to show that it satisfies the Strict Avalanche Criterion (SAC) and higher-order generalizations, and recommended
this tool to select candidates for good S-boxes achieving near-perfect diffusion.[9] Indeed, the functions satisfying the
SAC to the highest possible order are always bent.[10] Furthermore, the bent functions are as far as possible from
having what are called linear structures, nonzero vectors a such that f(x + a) + f(x) is a constant. In the language of
differential cryptanalysis (introduced after this property was discovered) the derivative of a bent function f at every
nonzero point a (that is, f_a(x) = f(x + a) + f(x)) is a balanced Boolean function, taking on each value exactly half of
the time. This property is called perfect nonlinearity.[4]
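Perfect nonlinearity can likewise be confirmed by brute force for a small example. Assuming the same bit-packing convention as before (x₁ as the most significant bit), this sketch checks that every nonzero derivative of G(x₁,x₂,x₃,x₄) = x₁x₂ + x₃x₄ is balanced, taking each value exactly 8 times over the 16 inputs.

```python
def g(x):
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

# The derivative of f at a is f_a(x) = f(x + a) + f(x); over Z_2, '+' is XOR.
def derivative(f, a):
    return lambda x: f(x ^ a) ^ f(x)

balanced = all(sum(derivative(g, a)(x) for x in range(16)) == 8
               for a in range(1, 16))
print(balanced)  # True
```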
Given such good diffusion properties, apparently perfect resistance to differential cryptanalysis, and resistance by
definition to linear cryptanalysis, bent functions might at first seem the ideal choice for secure cryptographic functions
such as S-boxes. Their fatal flaw is that they fail to be balanced. In particular, an invertible S-box cannot be constructed
directly from bent functions, and a stream cipher using a bent combining function is vulnerable to a correlation attack.
Instead, one might start with a bent function and randomly complement appropriate values until the result is balanced.
The modified function still has high nonlinearity, and as such functions are very rare the process should be much faster
than a brute-force search.[4] But functions produced in this way may lose other desirable properties, even failing
to satisfy the SAC, so careful testing is necessary.[10] A number of cryptographers have worked on techniques
for generating balanced functions that preserve as many of the good cryptographic qualities of bent functions as
possible.[11][12][13]
Some of this theoretical research has been incorporated into real cryptographic algorithms. The CAST design pro-
cedure, used by Carlisle Adams and Stafford Tavares to construct the S-boxes for the block ciphers CAST-128 and
CAST-256, makes use of bent functions.[13] The cryptographic hash function HAVAL uses Boolean functions built
from representatives of all four of the equivalence classes of bent functions on six variables.[14] The stream cipher
Grain uses an NLFSR whose nonlinear feedback polynomial is, by design, the sum of a bent function and a linear
function.[15]
Applications of bent functions are listed in [2].

6.5 Generalizations

More than 25 different generalizations of bent functions are described in [2]. There are algebraic generalizations (q-
valued bent functions, p-ary bent functions, bent functions over a finite field, generalized Boolean bent functions of
Schmidt, bent functions from a finite Abelian group into the set of complex numbers on the unit circle, bent func-
tions from a finite Abelian group into a finite Abelian group, non-Abelian bent functions, vectorial G-bent functions,
multidimensional bent functions on a finite Abelian group), combinatorial generalizations (symmetric bent functions,
homogeneous bent functions, rotation symmetric bent functions, normal bent functions, self-dual and anti-self-dual
bent functions, partially defined bent functions, plateaued functions, Z-bent functions and quantum bent functions)
and cryptographic generalizations (semi-bent functions, balanced bent functions, partially bent functions, hyper-bent
functions, bent functions of higher order, k-bent functions).

The most common class of generalized bent functions is the mod m type, f : Zₘⁿ → Zₘ, such that

f̂(a) = Σ_{x ∈ Zₘⁿ} e^((2πi/m)(f(x) − a·x))

has constant absolute value m^(n/2). Perfect nonlinear functions f : Zₘⁿ → Zₘ, those such that for all nonzero a,
f(x + a) − f(x) takes on each value m^(n−1) times, are generalized bent. If m is prime, the converse is true. In most cases
only prime m are considered. For odd prime m, there are generalized bent functions for every positive n, even and
odd. They have many of the same good cryptographic properties as the binary bent functions.[16]
Semi-bent functions are an odd-order counterpart to bent functions. A semi-bent function is f : Zₘⁿ → Zₘ with n
odd, such that |f̂| takes only the values 0 and m^((n+1)/2). They also have good cryptographic characteristics, and some
of them are balanced, taking on all possible values equally often.[17]
The partially bent functions form a large class defined by a condition on the Walsh transform and autocorrelation
functions. All affine and bent functions are partially bent. This is in turn a proper subclass of the plateaued
functions.[18]
The idea behind the hyper-bent functions is to maximize the minimum distance to all Boolean functions coming
from bijective monomials on the finite field GF(2ⁿ), not just the affine functions. For these functions this distance is
constant, which may make them resistant to an interpolation attack.
Other related names have been given to cryptographically important classes of functions Z₂ⁿ → Z₂ⁿ, such as almost
bent functions and crooked functions. While not bent functions themselves (these are not even Boolean functions),
they are closely related to the bent functions and have good nonlinearity properties.

6.6 References
[1] O. S. Rothaus (May 1976). "On Bent Functions". Journal of Combinatorial Theory, Series A. 20 (3): 300–305. ISSN
0097-3165. doi:10.1016/0097-3165(76)90024-8. Retrieved 16 December 2013.

[2] N. Tokareva. Bent functions: results and applications to cryptography. Acad. Press. Elsevier. 2015. 220 pages. Retrieved
30 November 2016.

[3] C. Qu; J. Seberry; T. Xia (29 December 2001). "Boolean Functions in Cryptography". Retrieved 14 September 2009.

[4] W. Meier; O. Staffelbach (April 1989). "Nonlinearity Criteria for Cryptographic Functions". Eurocrypt '89. pp. 549–562.

[5] C. Carlet; L.E. Danielsen; M.G. Parker; P. Solé (19 May 2008). "Self-Dual Bent Functions" (PDF). Fourth International
Workshop on Boolean Functions: Cryptography and Applications (BFCA '08). Retrieved 21 September 2009.

[6] T. Xia; J. Seberry; J. Pieprzyk; C. Charnes (June 2004). "Homogeneous bent functions of degree n in 2n variables do not
exist for n > 3". Discrete Applied Mathematics. 142 (1–3): 127–132. ISSN 0166-218X. doi:10.1016/j.dam.2004.02.006.
Retrieved 21 September 2009.

[7] A. Canteaut; P. Charpin; G. Kyureghyan (January 2008). "A new class of monomial bent functions" (PDF). Finite Fields
and Their Applications. 14 (1): 221–241. ISSN 1071-5797. doi:10.1016/j.ffa.2007.02.004. Retrieved 21 September
2009.

[8] J. Olsen; R. Scholtz; L. Welch (November 1982). "Bent-Function Sequences". IEEE Transactions on Information Theory.
IT-28 (6): 858–864. ISSN 0018-9448. doi:10.1109/tit.1982.1056589. Archived from the original on 22 July 2011.
Retrieved 24 September 2009.

[9] R. Forré (August 1988). "The Strict Avalanche Criterion: Spectral Properties of Boolean Functions and an Extended
Definition". CRYPTO '88. pp. 450–468.

[10] C. Adams; S. Tavares (January 1990). "The Use of Bent Sequences to Achieve Higher-Order Strict Avalanche Criterion
in S-Box Design". Technical Report TR 90-013. Queen's University. CiteSeerX 10.1.1.41.8374.

[11] K. Nyberg (April 1991). "Perfect nonlinear S-boxes". Eurocrypt '91. pp. 378–386.

[12] J. Seberry; X. Zhang (December 1992). "Highly Nonlinear 0-1 Balanced Boolean Functions Satisfying Strict Avalanche
Criterion". AUSCRYPT '92. pp. 143–155. CiteSeerX 10.1.1.57.4992.

[13] C. Adams (November 1997). "Constructing Symmetric Ciphers Using the CAST Design Procedure". Designs, Codes and
Cryptography. 12 (3): 283–316. ISSN 0925-1022. doi:10.1023/A:1008229029587. Archived from the original on 26
October 2008. Retrieved 20 September 2009.

[14] Y. Zheng; J. Pieprzyk; J. Seberry (December 1992). "HAVAL - a one-way hashing algorithm with variable length of
output". AUSCRYPT '92. pp. 83–104. Retrieved 20 June 2015.

[15] M. Hell; T. Johansson; A. Maximov; W. Meier. "A Stream Cipher Proposal: Grain-128" (PDF). Retrieved 24 September
2009.

[16] K. Nyberg (May 1990). "Constructions of bent functions and difference sets". Eurocrypt '90. pp. 151–160.

[17] K. Khoo; G. Gong; D. Stinson (February 2006). "A new characterization of semi-bent and bent functions on finite fields"
(PostScript). Designs, Codes and Cryptography. 38 (2): 279–295. ISSN 0925-1022. doi:10.1007/s10623-005-6345-x.
Retrieved 24 September 2009.

[18] Y. Zheng; X. Zhang (November 1999). "Plateaued Functions". Second International Conference on Information and Com-
munication Security (ICICS '99). pp. 284–300. Retrieved 24 September 2009.

6.7 Further reading

C. Carlet (May 1993). "Two New Classes of Bent Functions". Eurocrypt '93. pp. 77–101.

J. Seberry; X. Zhang (March 1994). "Constructions of Bent Functions from Two Known Bent Functions".
Australasian Journal of Combinatorics. 9: 21–35. CiteSeerX 10.1.1.55.531. ISSN 1034-4942.

T. Neumann (May 2006). "Bent Functions". CiteSeerX 10.1.1.85.8731.

Colbourn, Charles J.; Dinitz, Jeffrey H. (2006). Handbook of Combinatorial Designs (2nd ed.). CRC Press.
pp. 337–339. ISBN 978-1-58488-506-1.

Cusick, T.W.; Stanica, P. (2009). Cryptographic Boolean Functions and Applications. Academic Press. ISBN
9780123748904.
Chapter 7

Binary decision diagram

In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to
represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of
sets or relations. Unlike other compressed representations, operations are performed directly on the compressed rep-
resentation, i.e. without decompression. Other data structures used to represent a Boolean function include negation
normal form (NNF), and propositional directed acyclic graph (PDAG).

7.1 Definition
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several decision nodes
and terminal nodes. There are two types of terminal nodes called 0-terminal and 1-terminal. Each decision node N
is labeled by Boolean variable VN and has two child nodes called low child and high child. The edge from node VN
to a low (or high) child represents an assignment of VN to 0 (resp. 1). Such a BDD is called 'ordered' if different
variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules
have been applied to its graph:

Merge any isomorphic subgraphs.

Eliminate any node whose two children are isomorphic.

In popular usage, the term BDD almost always refers to a Reduced Ordered Binary Decision Diagram (ROBDD in
the literature; this term is used when the ordering and reduction aspects need to be emphasized). The advantage of an
ROBDD is that it is canonical (unique) for a particular function and variable order.[1] This property makes it useful
in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the repre-
sented Boolean function is true. When the path descends to a low (or high) child from a node, that node's variable
is assigned 0 (resp. 1).
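A minimal sketch of this structure (class and function names are illustrative, not from any BDD library): each decision node stores its variable and its two children, and evaluation follows low/high edges until a terminal is reached.

```python
from dataclasses import dataclass
from typing import Union

Terminal = int  # 0 or 1

@dataclass(frozen=True)
class Node:
    var: int                       # index of the decision variable
    low: Union["Node", Terminal]   # child followed when var = 0
    high: Union["Node", Terminal]  # child followed when var = 1

def evaluate(node, assignment):
    """Follow low/high edges according to the assignment until a terminal."""
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node

# BDD for f(x1, x2) = x1 AND x2: the low edge of x1 goes straight to terminal 0.
x2 = Node(var=2, low=0, high=1)
f = Node(var=1, low=0, high=x2)
print(evaluate(f, {1: 1, 2: 1}))  # 1
print(evaluate(f, {1: 0, 2: 1}))  # 0
```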

7.1.1 Example

The left gure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each rep-
resenting the function f (x1, x2, x3). In the tree on the left, the value of the function can be determined for a given
variable assignment by following a path down the graph to a terminal. In the gures below, dotted lines represent
edges to a low child, while solid lines represent edges to a high child. Therefore, to nd (x1=0, x2=1, x3=1), begin
at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and
x3 each have an assignment to one). This leads to the terminal 1, which is the value of f (x1=0, x2=1, x3=1).
The binary decision tree of the left gure can be transformed into a binary decision diagram by maximally reducing
it according to the two reduction rules. The resulting BDD is shown in the right gure.


7.2 History

The basic idea from which the data structure was created is the Shannon expansion. A switching function is split
into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function
is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDDs) were
introduced by Lee,[2] and further studied and made known by Akers[3] and Boute.[4]
The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie
Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared
sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for
the representation of sets and relations.[5][6] By extending the sharing to several BDDs, i.e. one sub-graph is used
by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined.[7] The notion of
a BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs),[8] Donald Knuth calls BDDs "one of the only really
fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was
for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions,
each induced by a different combination of requirements. Another important normal form identified by Darwiche is
Decomposable Negation Normal Form (DNNF).

7.3 Applications

BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There
are several lesser-known applications of BDDs, including fault tree analysis, Bayesian reasoning, product configuration,
and private information retrieval.[9][10]
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing
each node with a 2-to-1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not
so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).

7.4 Variable ordering

The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables.
There exist Boolean functions f(x₁, ..., xₙ) for which, depending upon the ordering of the variables, we would end
up getting a graph whose number of nodes is linear (in n) in the best case and exponential in the worst case (e.g., a
ripple carry adder). Consider the Boolean function f(x₁, ..., x₂ₙ) = x₁x₂ + x₃x₄ + ... + x₂ₙ₋₁x₂ₙ. Using
the variable ordering x₁ < x₃ < ... < x₂ₙ₋₁ < x₂ < x₄ < ... < x₂ₙ, the BDD needs 2ⁿ⁺¹ nodes to represent
the function. Using the ordering x₁ < x₂ < x₃ < x₄ < ... < x₂ₙ₋₁ < x₂ₙ, the BDD consists of 2n + 2 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem
of finding the best variable ordering is NP-hard.[11] For any constant c > 1 it is even NP-hard to compute a variable
ordering resulting in an OBDD with a size that is at most c times larger than an optimal one.[12] However, there exist
efficient heuristics to tackle the problem.[13]
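The effect of the ordering can be reproduced with a small homemade ROBDD builder (a sketch under our own conventions, not a production package; all names are ours). It constructs the reduced diagram from a truth table by Shannon expansion, merging identical subfunctions and eliminating redundant nodes, then counts nodes (including the two terminals) for two orderings of x₁x₂ + x₃x₄ + x₅x₆.

```python
def permuted_tt(f, n, order):
    """Truth table of f with the variables read in the given order."""
    tt = []
    for idx in range(2 ** n):
        x = [0] * n
        for j, var in enumerate(order):
            x[var] = (idx >> (n - 1 - j)) & 1
        tt.append(f(x))
    return tuple(tt)

def robdd_size(f, n, order):
    """Node count (including both terminals) of the ROBDD of f under `order`."""
    unique = {}  # (level, low, high) -> node: merges isomorphic subgraphs
    memo = {}
    def build(tt):
        if len(tt) == 1:
            return tt[0]                      # terminal 0 or 1
        if tt not in memo:
            half = len(tt) // 2
            low, high = build(tt[:half]), build(tt[half:])
            if low == high:                   # eliminate redundant node
                memo[tt] = low
            else:
                memo[tt] = unique.setdefault((len(tt), low, high),
                                             ("node", len(unique)))
        return memo[tt]
    build(permuted_tt(f, n, order))
    return len(unique) + 2

f = lambda x: (x[0] & x[1]) | (x[2] & x[3]) | (x[4] & x[5])
good = robdd_size(f, 6, [0, 1, 2, 3, 4, 5])   # pairs kept adjacent: 2n + 2 = 8
bad = robdd_size(f, 6, [0, 2, 4, 1, 3, 5])    # pairs split apart: exponential growth
print(good, bad)
```

With the pairs kept adjacent the count matches the 2n + 2 figure quoted above (n = 3 here); splitting the pairs apart already gives a visibly larger diagram at six variables.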
There are functions for which the graph size is always exponential, independent of variable ordering. This holds e.g.
for the multiplication function.[1] In fact, the function computing the middle bit of the product of two n-bit numbers
does not have an OBDD smaller than 2^⌊n/2⌋/61 − 4 vertices.[14] (If the multiplication function had polynomial-size
OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.[15])
Researchers have suggested refinements of the BDD data structure, giving way to a number of related graphs, such as
BMDs (binary moment diagrams), ZDDs (zero-suppressed decision diagrams), FDDs (free binary decision diagrams),
PDDs (parity decision diagrams), and MTBDDs (multiple terminal BDDs).

7.5 Logical operations on BDDs


Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms:[16]:20

conjunction

disjunction

negation

existential abstraction

universal abstraction

However, repeating these operations several times, for example forming the conjunction or disjunction of a set of
BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations
for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently
for several BDDs the size may be exponential. Also, since constructing the BDD of a Boolean function solves the
NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can
take exponential time in the size of the Boolean formula even when the resulting BDD is small.

7.6 See also


Boolean satisability problem

L/poly, a complexity class that captures the complexity of problems with polynomially sized BDDs

Model checking

Radix tree

Barrington's theorem

7.7 References
[1] Graph-Based Algorithms for Boolean Function Manipulation, Randal E. Bryant, 1986

[2] C. Y. Lee. "Representation of Switching Circuits by Binary-Decision Programs". Bell System Technical Journal,
38:985–999, 1959.

[3] Sheldon B. Akers. "Binary Decision Diagrams". IEEE Transactions on Computers, C-27(6):509–516, June 1978.

[4] Raymond T. Boute. "The Binary Decision Machine as a programmable controller". EUROMICRO Newsletter, Vol.
1(2):16–22, January 1976.

[5] Randal E. Bryant. "Graph-Based Algorithms for Boolean Function Manipulation". IEEE Transactions on Computers,
C-35(8):677–691, 1986.

[6] R. E. Bryant. "Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams". ACM Computing Surveys, Vol.
24, No. 3 (September 1992), pp. 293–318.

[7] Karl S. Brace, Richard L. Rudell and Randal E. Bryant. "Efficient Implementation of a BDD Package". In Proceedings of
the 27th ACM/IEEE Design Automation Conference (DAC 1990), pages 40–45. IEEE Computer Society Press, 1990.

[8] http://scpd.stanford.edu/knuth/index.jsp

[9] R.M. Jensen. "CLab: A C++ library for fast backtrack-free interactive product configuration". Proceedings of the Tenth
International Conference on Principles and Practice of Constraint Programming, 2004.

[10] H.L. Lipmaa. "First CPIR Protocol with Data-Dependent Computation". ICISC 2009.

[11] Beate Bollig, Ingo Wegener. "Improving the Variable Ordering of OBDDs Is NP-Complete". IEEE Transactions on Com-
puters, 45(9):993–1002, September 1996.

[12] Detlef Sieling. "The nonapproximability of OBDD minimization". Information and Computation 172, 103–138. 2002.

[13] Rice, Michael. "A Survey of Static Variable Ordering Heuristics for Efficient BDD/MDD Construction" (PDF).

[14] Philipp Woelfel. "Bounds on the OBDD-size of integer multiplication via universal hashing". Journal of Computer and
System Sciences 71, pp. 520–534, 2005.

[15] Richard J. Lipton. "BDDs and Factoring". Gödel's Lost Letter and P=NP, 2009.

[16] Andersen, H. R. (1999). "An Introduction to Binary Decision Diagrams" (PDF). Lecture Notes. IT University of
Copenhagen.

R. Ubar. "Test Generation for Digital Circuits Using Alternative Graphs" (in Russian). In Proc. Tallinn Tech-
nical University, 1976, No. 409, Tallinn Technical University, Tallinn, Estonia, pp. 75–81.

7.8 Further reading

D. E. Knuth, "The Art of Computer Programming Volume 4, Fascicle 1: Bitwise tricks & techniques; Binary
Decision Diagrams" (Addison-Wesley Professional, March 27, 2009) viii+260pp, ISBN 0-321-58050-8. Draft
of Fascicle 1b available for download.

Ch. Meinel, T. Theobald, "Algorithms and Data Structures in VLSI-Design: OBDD Foundations and Appli-
cations", Springer-Verlag, Berlin, Heidelberg, New York, 1998. Complete textbook available for download.

Rüdiger Ebendt; Görschwin Fey; Rolf Drechsler (2005). Advanced BDD Optimization. Springer. ISBN 978-
0-387-25453-1.

Bernd Becker; Rolf Drechsler (1998). Binary Decision Diagrams: Theory and Implementation. Springer.
ISBN 978-1-4419-5047-5.

7.9 External links


Fun With Binary Decision Diagrams (BDDs), lecture by Donald Knuth
List of BDD software libraries for several programming languages.
Chapter 8

Bitwise operation

In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the
level of their individual bits. It is a fast, simple action directly supported by the processor, and is used to manipulate
values for comparisons and calculations.
On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster
than multiplication, and sometimes significantly faster than addition. While modern processors usually perform
addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other
architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1]

8.1 Bitwise operators


In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing
left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first one.

8.1.1 NOT

The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones'
complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example:

NOT 0111 (decimal 7) = 1000 (decimal 8)
NOT 10101011 = 01010100

The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic
is used, then NOT x = −x − 1.
For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-
way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 − x, which can be
visualized on a graph as a downward line that effectively flips an increasing range from 0 to 255 to a decreasing
range from 255 to 0. A simple but illustrative example use is to invert a grayscale image where each pixel is stored
as an unsigned integer.
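In Python (used for the sketches in this chapter), integers are unbounded, so the 8-bit complement has to be taken explicitly with a mask; the helper name below is ours, not from any library.

```python
def bitwise_not8(x):
    """Ones' complement of x within an 8-bit register."""
    return ~x & 0xFF

print(format(bitwise_not8(0b00000111), "08b"))  # 11111000
print(bitwise_not8(7))                          # 248 = 255 - 7
print(~7)                                       # -8, i.e. NOT x = -x - 1 in two's complement
```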

8.1.2 AND

A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair
of the corresponding bits, by multiplying them. Thus, if both bits in the compared position are 1, the bit in the
resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example:

0101 (decimal 5) AND 0011 (decimal 3) = 0001 (decimal 1)

The operation may be used to determine whether a particular bit is set (1) or clear (0). For example, given a bit pattern
0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1
only in the second bit:
0011 (decimal 3) AND 0010 (decimal 2) = 0010 (decimal 2)


Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit
masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that
are not of interest. In this case, the 0 values mask the bits that are not of interest.)
The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual
Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as
possible.
For example, 0110 (decimal 6) can be considered a set of four flags, where the first and fourth flags are clear (0), and
the second and third flags are set (1). The second bit may be cleared by using a bitwise AND with the pattern that
has a zero only in the second bit:

0110 (decimal 6) AND 1101 (decimal 13) = 0100 (decimal 4)

Because of this property, it becomes easy to check the parity of a binary number by checking the value of the lowest-
valued bit. Using the example above:

0110 (decimal 6) AND 0001 (decimal 1) = 0000 (decimal 0)

Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
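The masking idioms above translate directly into code; this short sketch (variable names are illustrative) reproduces the three examples.

```python
flags = 0b0110                 # second and third flags set

# Test a bit: a non-zero result means the second bit is set.
print(bool(flags & 0b0010))    # True

# Clear a bit: AND with a mask that is 0 only at the target position.
flags &= 0b1101
print(format(flags, "04b"))    # 0100

# Parity test: the lowest bit decides even/odd.
print(6 & 1 == 0)              # True: 6 is even
```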

8.1.3 OR

A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of
corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:
0101 (decimal 5) OR 0011 (decimal 3) = 0111 (decimal 7)
The bitwise OR may be used to set to 1 the selected bits of the register described above. For example, the fourth bit
of 0010 (decimal 2) may be set by performing a bitwise OR with the pattern with only the fourth bit set:
0010 (decimal 2) OR 1000 (decimal 8) = 1010 (decimal 10)

8.1.4 XOR

A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair
of corresponding bits. The result in each position is 1 if only one of the bits is 1, but will be 0 if both are 0 or both
are 1. In this we perform the comparison of two bits, the result being 1 if the two bits are different, and 0 if they are
the same. For example:
0101 (decimal 5) XOR 0011 (decimal 3) = 0110 (decimal 6)
The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled
by XORing it with 1. For example, given the bit pattern 0010 (decimal 2), the second and fourth bits may be toggled
by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions:

0010 (decimal 2) XOR 1010 (decimal 10) = 1000 (decimal 8)

This technique may be used to manipulate bit patterns representing sets of Boolean states.
Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero. Per-
forming XOR on a value against itself always yields zero, and on many architectures this operation requires fewer
clock cycles and memory than loading a zero value and saving it to the register.
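Both idioms can be seen in a few lines (variable names are ours):

```python
bits = 0b0010

# Toggle the second and fourth bits with an XOR mask.
bits ^= 0b1010
print(format(bits, "04b"))   # 1000

# XOR of a value with itself is always zero (the register-clearing idiom).
print(bits ^ bits)           # 0
```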

8.1.5 Mathematical equivalents

Assuming x ≥ y, for the non-negative integers, the bitwise operations can be written as follows:

NOT x = Σ_{n=0}^{⌊log₂(x)⌋} 2ⁿ [(⌊x/2ⁿ⌋ mod 2 + 1) mod 2] = 2^(⌊log₂(x)⌋+1) − 1 − x

x AND y = Σ_{n=0}^{⌊log₂(x)⌋} 2ⁿ (⌊x/2ⁿ⌋ mod 2)(⌊y/2ⁿ⌋ mod 2)

x OR y = Σ_{n=0}^{⌊log₂(x)⌋} 2ⁿ [(⌊x/2ⁿ⌋ mod 2 + ⌊y/2ⁿ⌋ mod 2 + (⌊x/2ⁿ⌋ mod 2)(⌊y/2ⁿ⌋ mod 2)) mod 2]

x XOR y = Σ_{n=0}^{⌊log₂(x)⌋} 2ⁿ [(⌊x/2ⁿ⌋ mod 2 + ⌊y/2ⁿ⌋ mod 2) mod 2] = Σ_{n=0}^{⌊log₂(x)⌋} 2ⁿ [(⌊x/2ⁿ⌋ + ⌊y/2ⁿ⌋) mod 2]
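These identities can be checked numerically; the sketch below (function names are ours) compares each arithmetic formula with Python's built-in operators for all pairs with x ≥ y > 0, using the fact that (x >> n) % 2 equals ⌊x/2ⁿ⌋ mod 2.

```python
from math import floor, log2

def bits(x):
    return range(floor(log2(x)) + 1)

def AND(x, y):
    return sum(2**n * ((x >> n) % 2) * ((y >> n) % 2) for n in bits(x))

def OR(x, y):
    return sum(2**n * (((x >> n) % 2 + (y >> n) % 2
                        + ((x >> n) % 2) * ((y >> n) % 2)) % 2) for n in bits(x))

def XOR(x, y):
    return sum(2**n * (((x >> n) + (y >> n)) % 2) for n in bits(x))

ok = all(AND(x, y) == x & y and OR(x, y) == x | y and XOR(x, y) == x ^ y
         for x in range(1, 64) for y in range(1, x + 1))
print(ok)  # True
```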

8.2 Bit shifts


The bit shifts are sometimes considered bitwise operations, because they treat a value as a series of bits rather than as
a numerical quantity. In these operations the digits are moved, or shifted, to the left or right. Registers in a computer
processor have a fixed width, so some bits will be "shifted out" of the register at one end, while the same number
of bits are "shifted in" from the other end; the differences between bit shift operators lie in how they determine the
values of the shifted-in bits.

8.2.1 Arithmetic shift


Main article: Arithmetic shift

[Figure: Left arithmetic shift]
[Figure: Right arithmetic shift]

In an arithmetic shift, the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are
shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left,
thus preserving the sign of the operand. This statement is not reliable in the latest C language draft standard, however:

if the value being shifted is negative, the result is implementation-defined, indicating the result is not necessarily
consistent across platforms.[2]
This example uses an 8-bit register:

00010111 (decimal +23) LEFT-SHIFT = 00101110 (decimal +46)
10010111 (decimal −105) RIGHT-SHIFT = 11001011 (decimal −53)

In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost
position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into
the leftmost position, preserving the sign of the number (but not reliably, according to the most recent C language
draft standard, as noted above). Multiple shifts are sometimes shortened to a single shift by some number of digits.
For example:

00010111 (decimal +23) LEFT-SHIFT-BY-TWO = 01011100 (decimal +92)

A left arithmetic shift by n is equivalent to multiplying by 2ⁿ (provided the value does not overflow), while a right
arithmetic shift by n of a two's complement value is equivalent to dividing by 2ⁿ and rounding toward negative infinity.
If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2ⁿ and
rounding toward zero.
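Python's shift operators behave arithmetically on its unbounded integers, which makes these equivalences easy to check:

```python
# A left shift by n multiplies by 2^n.
print(23 << 2)        # 92

# A right shift of a negative value rounds toward negative infinity,
# matching an arithmetic shift of a two's complement number.
print(-105 >> 1)      # -53
print(-105 // 2)      # -53: floor division agrees with the shift
```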

8.2.2 Logical shift

Main article: Logical shift

In a logical shift, zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are
exactly the same.

However, as the logical right-shift inserts value 0 bits into the most significant bit, instead of copying the sign bit,
it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary
numbers.
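Python has no separate logical right-shift operator; for a fixed register width it can be emulated by masking to that width first, a common idiom shown here for 8 bits (the helper name is ours):

```python
def lsr8(x, n):
    """Logical right shift of an 8-bit value: zeros enter at the MSB."""
    return (x & 0xFF) >> n

print(format(lsr8(0b10010111, 1), "08b"))  # 01001011: a 0 entered on the left
print(-105 & 0xFF)                          # 151: the same bit pattern, read unsigned
```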

8.2.3 Rotate no carry


Main article: Circular shift

Another form of shift is the circular shift or bit rotation. In this operation, the bits are rotated as if the left and right
ends of the register were joined. The value that is shifted in on the right during a left-shift is whatever value was
shifted out on the left, and vice versa. This operation is useful if it is necessary to retain all the existing bits, and is
frequently used in digital cryptography.

8.2.4 Rotate through carry


Rotate through carry is similar to the rotate no carry operation, but the two ends of the register are separated by the
carry flag. The bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on
the other end) becomes the new value of the carry flag.
A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag
beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE
is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-
CARRY-BY-ONE is an arithmetic right-shift. For this reason, some microcontrollers such as low-end PICs just have
rotate and rotate through carry, and don't bother with arithmetic or logical shift instructions.
Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word
size, because if a large number is stored in two registers, the bit that is shifted off the end of the first register must
come in at the other end of the second. With rotate-through-carry, that bit is saved in the carry flag during the first
shift, ready to shift in during the second shift without any extra preparation.

8.2.5 Shifts in C, C++, C#, Go, Java, JavaScript, Pascal, Perl, PHP, Python and Ruby
In C-inspired languages, the left and right shift operators are "<<" and ">>", respectively. The number of places to
shift is given as the second argument to the shift operators. For example,
x = y << 2;

assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four.
Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them.
The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[3][4]
Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[5] the result
of left-shifting a signed value is undefined if the result cannot be represented in the result type.[3] In C#, the right-shift
is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift
is a logical shift.[6]

Circular shifts in C-family languages

Main article: Circular shift Implementing circular shifts

The C family of languages lacks a rotate operator, but one can be synthesized from the shift operators. Care must be
taken to ensure the statement is well formed to avoid undefined behavior and timing attacks in software with security
requirements.[7] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is
simply:
unsigned int x = ..., n = ...;
unsigned int y = (x << n) | (x >> (32 - n));

However, a shift by 0 bits results in undefined behavior in the right-hand expression (x >> (32 - n)) because 32 − 0 is
32, and 32 is outside the range [0, 31] inclusive. A second try might result in:
unsigned int x = ..., n = ...;
unsigned int y = n ? (x << n) | (x >> (32 - n)) : x;

where the shift amount is tested to ensure it does not introduce undefined behavior. However, the branch adds an
additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in
high-integrity software.[7] In addition, the code compiles to multiple machine instructions, which is often less efficient
than the processor's native instruction.
To avoid the undefined behavior and branches under GCC and Clang, the following should be used. The pattern is
recognized by many compilers, and the compiler will emit a single rotate instruction:[8][9][10]
unsigned int x = ..., n = ...;
unsigned int y = (x << n) | (x >> (-n & 31));

There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in
Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer from the problems
above.[10] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics.

Shifts in Java

In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator
">>>" to perform logical right shifts, but since the logical and arithmetic left-shift operations are identical for signed
integers, there is no "<<<" operator in Java.
More details of Java shift operators:[11]

The operators << (left shift), >> (signed right shift), and >>> (unsigned right shift) are called the shift operators.

The type of the shift expression is the promoted type of the left-hand operand. For example, aByte >>> 2 is
equivalent to ((int) aByte) >>> 2.

If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are
used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator
& with the mask value 0x1f (0b11111).[12] The shift distance actually used is therefore always in the range 0
to 31, inclusive.

If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand
operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical
AND operator & with the mask value 0x3f (0b111111).[12] The shift distance actually used is therefore always
in the range 0 to 63, inclusive.

The value of n >>> s is n right-shifted s bit positions with zero-extension.

In bit and shift operations, the type byte is implicitly converted to int. If the byte value is negative, the highest
bit is one, and ones are used to fill up the extra bits in the int. So byte b1 = -5; int i = b1 | 0x0200; will give
i == -5 as the result.

Shifts in JavaScript

In JavaScript, bitwise operators convert their operands to 32-bit signed integers before operating on them bit by bit.[13]

Shifts in Pascal

In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the left and right shift operators
are shl and shr, respectively. The number of places to shift is given as the second argument. For example, the
following assigns x the result of shifting y to the left by two bits:
x := y shl 2;

8.3 Other
popcount, used in cryptography
count leading zeros

8.4 Applications
Bitwise operations are necessary particularly in lower-level programming such as device drivers, low-level graphics,
communications protocol packet assembly, and decoding.
Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these
operations can be performed by combining the bitwise operators and zero-testing in various ways.[14] For example,
here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary
integers a and b (a greater than b) using only bit shifts and addition:

c ← 0
while b ≠ 0
    if (b and 1) ≠ 0
        c ← c + a
    left shift a by 1
    right shift b by 1
return c

Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and
b using bitwise operators and zero-testing:

while a ≠ 0
    c ← b and a
    b ← b xor a
    left shift c by 1
    a ← c
return b

8.5 See also


Arithmetic logic unit
Bit manipulation
Bitboard
Bitwise operations in C
Boolean algebra (logic)
Double dabble
Find rst set
Karnaugh map
Logic gate
Logical operator
Primitive data type

8.6 References
[1] "CMicrotek Low-power Design Blog". CMicrotek. Retrieved 12 August 2015.

[2] Garcia, Blandine (2011). INTERNATIONAL STANDARD ISO/IEC 9899:201x (PDF) (201x ed.). ISO/IEC. p. 95. Retrieved
7 September 2015.

[3] JTC1/SC22/WG14 N843 C programming language, section 6.5.7

[4] "Arithmetic operators - cppreference.com". en.cppreference.com. Retrieved 2016-07-06.

[5] INT13-C. Use bitwise operators only on unsigned operands. CERT: Secure Coding Standards. Software Engineering
Institute, Carnegie Mellon University. Retrieved 7 September 2015.

[6] "Operator (C# Reference)". Microsoft. Retrieved 14 July 2013.

[7] "Near constant time rotate that does not violate the standards?". Stack Exchange Network. Retrieved 12 August 2015.

[8] "Poor optimization of portable rotate idiom". GNU GCC Project. Retrieved 11 August 2015.

[9] "Circular rotate that does not violate C/C++ standard?". Intel Developer Forums. Retrieved 12 August 2015.

[10] "Constant not propagated into inline assembly, results in 'constraint I expects an integer constant expression'". LLVM
Project. Retrieved 11 August 2015.

[11] The Java Language Specication, section 15.19. Shift Operators

[12] Chapter 15. Expressions. oracle.com.

[13] JavaScript Bitwise. W3Schools.com.

[14] "Synthesizing arithmetic operations using bit-shifting tricks". Bisqwit.iki.fi. 15 February 2014. Retrieved 8 March 2014.

8.7 External links


Online Bitwise Calculator supports Bitwise AND, OR and XOR

Division using bitshifts


"Bitwise Operations Mod N" by Enrique Zeleny, Wolfram Demonstrations Project.

"Plots Of Compositions Of Bitwise Operations" by Enrique Zeleny, The Wolfram Demonstrations Project.
Chapter 9

Boole's expansion theorem

Boole's expansion theorem, often referred to as the Shannon expansion or decomposition, is the identity
F = x·Fₓ + x′·Fₓ′, where F is any Boolean function, x is a variable, x′ is the complement of x, and Fₓ and Fₓ′ are F
with the argument x set equal to 1 and to 0 respectively.
The terms Fₓ and Fₓ′ are sometimes called the positive and negative Shannon cofactors, respectively, of F with
respect to x. These are functions, computed by the restrict operator as restrict(F, x, 1) and restrict(F, x, 0) (see
valuation (logic) and partial application).
It has been called the fundamental theorem of Boolean algebra.[1] Besides its theoretical importance, it paved the
way for binary decision diagrams, satisfiability solvers, and many other techniques relevant to computer engineering
and formal verification of digital circuits.

9.1 Statement of the theorem


A more explicit way of stating the theorem is:

f(X₁, X₂, …, Xₙ) = X₁·f(1, X₂, …, Xₙ) + X₁′·f(0, X₂, …, Xₙ)

The proof follows from direct use of mathematical induction, from the observation that f(X₁) = X₁·f(1) + X₁′·f(0),
expanding 2-ary and n-ary Boolean functions identically.

9.2 Variations and implications


XOR form: The statement also holds when the disjunction "+" is replaced by the XOR operator:

f(X₁, X₂, …, Xₙ) = X₁·f(1, X₂, …, Xₙ) ⊕ X₁′·f(0, X₂, …, Xₙ)

Dual form: There is a dual form of the Shannon expansion (which does not have a related XOR form):

f(X₁, X₂, …, Xₙ) = (X₁ + f(0, X₂, …, Xₙ))·(X₁′ + f(1, X₂, …, Xₙ))

Repeated application for each argument leads to the Sum of Products (SoP) canonical form of the Boolean function
f. For example, for n = 2 that would be

f(X₁, X₂) = X₁·f(1, X₂) + X₁′·f(0, X₂)
= X₁X₂·f(1, 1) + X₁X₂′·f(1, 0) + X₁′X₂·f(0, 1) + X₁′X₂′·f(0, 0)

Likewise, application of the dual form leads to the Product of Sums (PoS) canonical form (using the distributivity
law of + over ·):

f(X₁, X₂) = (X₁ + f(0, X₂))·(X₁′ + f(1, X₂))
= (X₁ + X₂ + f(0, 0))·(X₁ + X₂′ + f(0, 1))·(X₁′ + X₂ + f(1, 0))·(X₁′ + X₂′ + f(1, 1))

9.3 Properties of Cofactors


Linear properties of cofactors: For a Boolean function F which is made up of two Boolean functions G and H, the
following are true:

If F = ¬H, then Fₓ = ¬Hₓ

If F = G·H, then Fₓ = Gₓ·Hₓ

If F = G + H, then Fₓ = Gₓ + Hₓ

If F = G ⊕ H, then Fₓ = Gₓ ⊕ Hₓ

Characteristics of unate functions: If F is a unate function, then:

If F is positive unate, then F = x·Fₓ + Fₓ′

If F is negative unate, then F = Fₓ + x′·Fₓ′

9.4 Operations with Cofactors


Boolean difference: The Boolean difference (or Boolean derivative) of the function F with respect to the literal x is
defined as:

∂F/∂x = Fₓ ⊕ Fₓ′

Universal quantification: The universal quantification of F is defined as:

∀x F = Fₓ · Fₓ′

Existential quantification: The existential quantification of F is defined as:

∃x F = Fₓ + Fₓ′

9.5 History
George Boole presented this expansion as his Proposition II, "To expand or develop a function involving any number of
logical symbols", in his Laws of Thought (1854),[2] and it was widely applied by Boole and other nineteenth-century
logicians.[3]
Claude Shannon mentioned this expansion, among other Boolean identities, in a 1948 paper,[4] and showed the
switching-network interpretations of the identity. In the literature of computer design and switching theory, the
identity is often incorrectly attributed to Shannon.[3]

9.6 Application to switching circuits


1. Binary decision diagrams follow from systematic use of this theorem.

2. Any Boolean function can be implemented directly in a switching circuit using a hierarchy of basic multiplexers
by repeated application of this theorem.

9.7 Notes
[1] Paul C. Rosenbloom, The Elements of Mathematical Logic, 1950, p. 5

[2] George Boole, An Investigation of the Laws of Thought: On which are Founded the Mathematical Theories of Logic and
Probabilities, 1854, p. 72 full text at Google Books

[3] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 42

[4] Claude Shannon, The Synthesis of Two-Terminal Switching Circuits, Bell System Technical Journal 28:59–98, full text,
p. 62

9.8 See also


Reed–Muller expansion

9.9 External links


Shannon's Decomposition — example with multiplexers.
Optimizing Sequential Cycles Through Shannon Decomposition and Retiming (PDF) — paper on application.
Chapter 10

Boolean algebra

For other uses, see Boolean algebra (disambiguation).

In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables
are the truth values true and false, usually denoted 1 and 0 respectively. Instead of elementary algebra, where the values
of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean
algebra are the conjunction (and), denoted as ∧; the disjunction (or), denoted as ∨; and the negation (not), denoted as ¬.
It is thus a formalism for describing logical relations in the same way that ordinary algebra describes numeric relations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set
forth more fully in his An Investigation of the Laws of Thought (1854).[1] According to Huntington, the term "Boolean
algebra" was first suggested by Sheffer in 1913.[2]
Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern
programming languages. It is also used in set theory and statistics.[3]

10.1 History
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as
connected to the origins of both fields.[4] In an abstract setting, Boolean algebra was perfected in the late 19th century
by Jevons, Schröder, Huntington, and others until it reached the modern conception of an (abstract) mathematical
structure.[4] For example, the empirical observation that one can manipulate expressions in the algebra of sets by
translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets
is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is
isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of
Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by
algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus,
thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there
is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used
interchangeably.[5][6][7] Efficient implementation of Boolean functions is a fundamental problem in the design of
combinational logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient
representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis
and formal verification.[8]
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean
algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[9][10][11]
Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. Although
the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic
was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other
logics.[4] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned
in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of
importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related


model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.

10.2 Values
Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values
false and true. These values are represented with the bits (or binary digits), namely 0 and 1. They do not behave like
the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2),
that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of
XOR (exclusive-or) and AND (conjunction) respectively, with disjunction x∨y (inclusive-or) definable as x + y + xy.
Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly
used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator
function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra,
with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed without considering explicit
values for the variables.[12]

10.3 Operations

10.3.1 Basic operations


The basic operations of Boolean calculus are as follows.

AND (conjunction), denoted x∧y (sometimes x AND y or Kxy), satisfies x∧y = 1 if x = y = 1 and x∧y = 0
otherwise.
OR (disjunction), denoted x∨y (sometimes x OR y or Axy), satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise.
NOT (negation), denoted ¬x (sometimes NOT x, Nx or !x), satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0.

Alternatively the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables as follows.

If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations
of arithmetic, or by the minimum/maximum functions:

x ∧ y = x × y = min(x, y)
x ∨ y = x + y − (x × y) = max(x, y)
¬x = 1 − x

One may consider that only the negation and one of the two other operations are basic, because of the following
identities that allow one to define the conjunction in terms of the negation and the disjunction, and vice versa:

x ∧ y = ¬(¬x ∨ ¬y)
x ∨ y = ¬(¬x ∧ ¬y)

10.3.2 Secondary operations


The three Boolean operations described above are referred to as basic, meaning that they can be taken as a basis
for other Boolean operations that can be built up from them by composition, the manner in which operations are
combined or compounded. Operations composed from the basic operations include the following examples:

x → y = ¬x ∨ y

x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)

x ≡ y = ¬(x ⊕ y)
These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.

The first operation, x → y, or Cxy, is called material implication. If x is true then the value of x → y is taken to be
that of y. But if x is false then the value of y can be ignored; however, the operation must return some truth value and
there are only two choices, so the return value is the one that entails less, namely true. (Relevance logic addresses this
by viewing an implication with a false premise as something other than either true or false.)
The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction
as the inclusive kind. It excludes the possibility of both x and y being true. Defined in terms of arithmetic it is addition
mod 2, where 1 + 1 = 0.
The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just
when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when
x and y are different. The counterpart of equivalence in arithmetic mod 2 is x + y + 1.
Given two operands, each with two possible values, there are 2² = 4 possible combinations of inputs. Because each
output can have two possible values, there are a total of 2⁴ = 16 possible binary Boolean operations.

10.4 Laws
A law of Boolean algebra is an identity such as x∨(y∨z) = (x∨y)∨z between two Boolean terms, where a Boolean
term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬.
The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions
are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra
as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x∨(y∧z) =
x∨(z∧y) from y∧z = z∧y, as treated in the section on axiomatization.

10.4.1 Monotone laws

Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with
multiplication. In particular the following laws are common to both kinds of algebra:[13]

The following laws hold in Boolean algebra, but not in ordinary algebra:

Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2×2 = 4. The remaining five
laws can be falsified in ordinary algebra by taking all variables to be 1; for example, in Absorption Law 1 the left hand
side would be 1(1+1) = 2 while the right hand side would be 1, and so on.
All of the laws treated so far have been for conjunction and disjunction. These operations have the property that
changing either argument either leaves the output unchanged or the output changes in the same way as the input.
Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this
property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmonotonicity
enters via complement ¬ as follows.[3]

10.4.2 Nonmonotone laws


The complement operation is defined by the following two laws.

Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1

All properties of negation including the laws below follow from the above two laws alone.[3]
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, whence in both algebras it
satisfies the double negation law (also called involution law):

Double negation: ¬(¬x) = x

But whereas ordinary algebra satisfies the two laws

(−x)(−y) = xy
(−x) + (−y) = −(x + y)

Boolean algebra satisfies De Morgan's laws:

De Morgan 1: ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2: ¬x ∨ ¬y = ¬(x ∧ y)

10.4.3 Completeness
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws
Complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one
possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically
from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in
the section thereon.
To clarify, writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms,
nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been
Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws
that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural, given that we did not pay attention
to whether some of the axioms followed from others but simply chose to stop when we noticed we had enough laws;
this is treated further in the section on axiomatizations. Or the intermediate notion of axiom can be sidestepped altogether
by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables
over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent.

10.4.4 Duality principle


Principle: If {X, R} is a poset, then {X, R(inverse)} is also a poset.
There is nothing magical about the choice of symbols for the values of Boolean algebra. We could rename 0 and 1
to say α and β, and as long as we did so consistently throughout it would still be Boolean algebra, albeit with some
obvious cosmetic differences.
But suppose we rename 0 and 1 to 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating
on the same values. However it would not be identical to our original Boolean algebra because now we find ∨ behaving
the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that we've been fiddling
with the notation, despite the fact that we're still using 0s and 1s.

But if in addition to interchanging the names of the values we also interchange the names of the two binary operations,
now there is no trace of what we have done. The end product is completely indistinguishable from what we started
with. We might notice that the columns for x∧y and x∨y in the truth tables had changed places, but that switch is
immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are
switched simultaneously, we call the members of each pair dual to each other. Thus 0 and 1 are dual, and ∧ and ∨
are dual. The Duality Principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all
dual pairs are interchanged.
One change we did not need to make as part of this interchange was to complement. We say that complement is a
self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more
complicated example of a self-dual operation is (x∧y) ∨ (y∧z) ∨ (z∧x). There is no self-dual binary operation that
depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x,
y, z) = (x∧y) ∨ (y∧z) ∨ (z∧x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four
functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity
function, the complement function, the dual function and the contradual function (complemented dual). These four
functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean
polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be
the principle (or square) of quaternality.[14]

10.5 Diagrammatic representations

10.5.1 Venn diagrams


A Venn diagram[15] is a representation of a Boolean operation using shaded overlapping regions. There is one region
for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to
the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination
of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction x∧y, disjunction x∨y, and complement
¬x.

[Figure 2. Venn diagrams for conjunction x∧y, disjunction x∨y, and complement ¬x]

For conjunction, the region inside both circles is shaded to indicate that x∧y is 1 when both variables are 1. The other
regions are left unshaded to indicate that x∧y is 0 for the other three combinations.
The second diagram represents disjunction x∨y by shading those regions that lie inside either or both circles. The
third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white
box and a dark box, neither one containing a circle. However we could put a circle for x in those boxes, in which
case each would denote a function of one argument, x, which returns the same value independently of x, called a
constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the
difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes
one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry
of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because
interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity
would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then
becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, x∧(x∨y) = x, start with the diagram in the middle for x∨y and note that the portion of
the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x∨(x∧y) =
x, start with the left diagram for x∧y and note that shading the whole of the x circle results in just the x circle being
shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x
circle.
To visualize the first De Morgan's law, ¬x ∧ ¬y = ¬(x∨y), start with the middle diagram for x∨y and complement its
shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes.
The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the
conjunction of their exteriors, which is what the left hand side of the law describes.
The second De Morgan's law, ¬x ∨ ¬y = ¬(x∧y), works the same way with the two diagrams interchanged.
The first complement law, x∧¬x = 0, says that the interior and exterior of the x circle have no overlap. The second
complement law, x∨¬x = 1, says that everything is either inside or outside the x circle.

10.5.2 Digital logic gates

Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates
connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by
a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction
(OR-gates), and complement (inverters) are as follows.[16]

The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage
on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is
represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate
represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to
the output; the small circle on the output denotes the actual inversion complementing the input. The convention of
putting such a circle on any port means that the signal passing through this port is complemented on the way through,
whether it is an input or output port.
The Duality Principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an
AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an
inverter however leaves the operation unchanged.
More generally one may complement any of the eight subsets of the three ports of either an AND or OR gate. The
resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s
in their truth table. There are eight such because the odd-bit-out can be either 0 or 1 and can go in any of four
positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an
even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both
their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and
¬y; and the remaining two are x⊕y (XOR) and its complement x≡y.
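This parity count is easy to confirm by enumerating all sixteen truth tables; a brief sketch:

```python
# Each of the 16 binary Boolean operations is a 4-row truth table, i.e. a
# 4-bit string; count those with an odd vs. even number of 1s.
tables = [tuple((n >> i) & 1 for i in range(4)) for n in range(16)]
odd = sum(1 for t in tables if sum(t) % 2 == 1)
even = sum(1 for t in tables if sum(t) % 2 == 0)
print(odd, even)  # prints "8 8": eight operations of each parity
```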

10.6 Boolean algebras

Main article: Boolean algebra (structure)

The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure.
Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects
called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case
of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal
definition of the general notion.

10.6.1 Concrete Boolean algebras

A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set
operations of union, intersection, and complement relative to X.[3]
(As an aside, historically X itself was required to be nonempty as well to exclude the degenerate or one-element
Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since
the degenerate algebra satisfies every equation. However this exclusion conflicts with the preferred purely equational
definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations: 0 ≠ 1
does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be
empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or
even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even
when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set
and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so
as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many
integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with
any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite"
interchanged.
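Example 3 can be made concrete in code; a minimal sketch, where the pair encoding and the function names are illustrative choices rather than anything from the text:

```python
# A finite-or-cofinite set of integers as a pair (cofinite, S):
# (False, S) denotes the finite set S, and (True, S) denotes Z \ S.
def complement(a):
    cof, s = a
    return (not cof, s)

def union(a, b):
    (ca, sa), (cb, sb) = a, b
    if not ca and not cb:
        return (False, sa | sb)   # finite ∪ finite is finite
    if ca and cb:
        return (True, sa & sb)    # (Z \ sa) ∪ (Z \ sb) = Z \ (sa ∩ sb)
    if ca:
        return (True, sa - sb)    # cofinite ∪ finite is cofinite
    return (True, sb - sa)

def intersection(a, b):
    # Derived via De Morgan: x ∩ y = ¬(¬x ∪ ¬y)
    return complement(union(complement(a), complement(b)))

small = (False, {1, 2, 3})        # the finite set {1, 2, 3}
big = (True, {1, 3})              # all integers except 1 and 3
assert union(small, big) == (True, set())        # their union is all of Z
assert intersection(small, big) == (False, {2})  # their intersection is {2}
```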
Example 4. For a less trivial example of the point made by Example 2, consider a Venn diagram formed by n closed
curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any
curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point
in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the
union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection,
and complement relative to X and therefore forms a concrete Boolean algebra. Again we have finitely many subsets
of an infinite set forming a concrete Boolean algebra, with Example 2 arising as the case n = 0 of no curves.

10.6.2 Subsets as bit vectors


A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1
or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example,
a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,…,31}, with 0 and 31 indexing the low and high
order bits respectively. For a smaller example, if X = {a,b,c} where a, b, c are viewed as bit positions in that order
from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the
respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers
are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be
able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the
interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit
vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of
bitwise ∧, ∨, and ¬, as in 1010∧0110 = 0010, 1010∨0110 = 1110, and ¬1010 = 0101, the bit vector realizations of
intersection, union, and complement respectively.
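The identifications in this subsection can be reproduced directly; a minimal sketch, where the helper `to_bits` is an illustrative name:

```python
# Subsets of X = {a, b, c} as 3-bit vectors, and bitwise ops as set ops.
X = ['a', 'b', 'c']

def to_bits(subset):
    # Characteristic function: 1 at each position whose element is in the subset.
    return tuple(1 if e in subset else 0 for e in X)

assert to_bits({'a', 'c'}) == (1, 0, 1)   # {a,c} corresponds to 101
assert to_bits(set()) == (0, 0, 0)        # {} corresponds to 000

# The bitwise realizations of intersection, union, and complement:
v, w = (1, 0, 1, 0), (0, 1, 1, 0)
assert tuple(a & b for a, b in zip(v, w)) == (0, 0, 1, 0)  # 1010 ∧ 0110
assert tuple(a | b for a, b in zip(v, w)) == (1, 1, 1, 0)  # 1010 ∨ 0110
assert tuple(1 - a for a in v) == (0, 1, 0, 1)             # ¬1010
```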

10.6.3 The prototypical Boolean algebra


Main article: two-element Boolean algebra

The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length
one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element
set. We call this the prototypical Boolean algebra, justified by the following observation.

The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the
prototypical Boolean algebra.

This observation is easily proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied
by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must
have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that
law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. We shall
however reach that goal via the surprisingly stronger observation that, up to isomorphism, all Boolean algebras are
concrete.

10.6.4 Boolean algebras: the denition


The Boolean algebras we have seen so far have all been concrete, consisting of bit vectors or equivalently of subsets
of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the
laws of Boolean algebra.
Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X,
and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X
need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.

A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying
the Boolean laws.[17]

For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof.
All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is
a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain
operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field
etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice,
a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those
axioms. The following is therefore an equivalent definition.

A Boolean algebra is a complemented distributive lattice.

The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent
definition.

10.6.5 Representable Boolean algebras

Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let
n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The
operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown
to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form
a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not
concrete according to our definitions.
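The divisor example can be checked exhaustively for n = 30; a brief sketch:

```python
# Check some Boolean laws on the divisors of the square-free n = 30,
# with gcd as meet, lcm as join, and x -> n/x as complement.
from math import gcd
from itertools import product

n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]  # 1,2,3,5,6,10,15,30

def lcm(a, b):
    return a * b // gcd(a, b)

for x, y, z in product(divisors, repeat=3):
    # distributivity of meet (gcd) over join (lcm)
    assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
for x in divisors:
    assert gcd(x, n // x) == 1   # complement law: x ∧ ¬x = bottom (1)
    assert lcm(x, n // x) == n   # complement law: x ∨ ¬x = top (n)
```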
However, if we represent each divisor of n by the set of its prime factors, we find that this nonconcrete Boolean algebra
is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to
least common multiple, intersection to greatest common divisor, and complement to division into n. So this example,
while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This
example is an instance of the following notion.

A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra.

The obvious next question is answered positively as follows.

Every Boolean algebra is representable.

That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This quite nontrivial result
depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice, and is
treated in more detail in the article Stone's representation theorem for Boolean algebras. This strong relationship
implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of
representability.

The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean al-
gebra.

It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example
a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is
representable in the sense appropriate to relation algebras.

10.7 Axiomatizing Boolean algebra


Main articles: Axiomatization of Boolean algebras and Boolean algebras canonically dened

The above definition of an abstract Boolean algebra as a set and operations satisfying "the" Boolean laws raises the
question, what are those laws? A simple-minded answer is "all Boolean laws", which can be defined as all equations
that hold for the Boolean algebra of 0 and 1. Since there are infinitely many such laws this is not a terribly satisfactory
answer in practice, leading to the next question: does it suffice to require only finitely many laws to hold?
In the case of Boolean algebras the answer is yes. In particular the finitely many equations we have listed above
suffice. We say that Boolean algebra is finitely axiomatizable or finitely based.
Can this list be made shorter yet? Again the answer is yes. To begin with, some of the above laws are implied by
some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and
absorption laws, distributivity of ∧ over ∨ (or the other distributivity law; one suffices), and the two complement
laws. In fact this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.

By introducing additional laws not listed above it becomes possible to shorten the list yet further. In 1933 Edward
Huntington showed that if the basic operations are taken to be x∨y and ¬x, with x∧y considered a derived operation
(e.g. via De Morgan's law in the form x∧y = ¬(¬x∨¬y)), then the equation ¬(¬x∨¬y)∨¬(¬x∨y) = x along with the
two equations expressing associativity and commutativity of ∨ completely axiomatized Boolean algebra. When the
only basic operation is the binary NAND operation ¬(x∧y), Stephen Wolfram has proposed in his book A New Kind
of Science the single axiom ((xy)z)(x((xz)x)) = z as a one-equation axiomatization of Boolean algebra, where for
convenience here xy denotes the NAND rather than the AND of x and y.
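Both shortened bases can at least be checked to hold in the two-element algebra by brute force (that they suffice as axioms is the nontrivial part); a sketch:

```python
# Brute-force check of both short axiomatizations over {0, 1}.
from itertools import product

NOT = lambda a: 1 - a
OR = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)

# Huntington's 1933 equation: ¬(¬x ∨ ¬y) ∨ ¬(¬x ∨ y) = x
for x, y in product((0, 1), repeat=2):
    assert OR(NOT(OR(NOT(x), NOT(y))), NOT(OR(NOT(x), y))) == x

# Wolfram's single NAND axiom: ((xy)z)(x((xz)x)) = z, with juxtaposition as NAND
for x, y, z in product((0, 1), repeat=3):
    assert NAND(NAND(NAND(x, y), z), NAND(x, NAND(NAND(x, z), x))) == z
```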

10.8 Propositional logic


Main article: Propositional calculus

Propositional logic is a logical system that is intimately connected to Boolean algebra.[3] Many syntactic concepts
of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while
the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of
propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this transla-
tion between Boolean algebra and propositional logic, Boolean variables x, y, … become propositional variables (or
atoms) P, Q, …, Boolean terms such as x∨y become propositional formulas P∨Q, 0 becomes false or ⊥, and 1 be-
comes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, … as metavariables
(variables outside the language of propositional calculus, used when talking about propositional calculus) to denote
propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the
propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional
formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean
term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in
Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is
assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or,
equivalently, every truth assignment to the two element Boolean algebra).
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean
algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be
a theorem of Boolean algebra. Conversely every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies
(Φ∨¬Ψ) ∧ (¬Φ∨Ψ) and (Φ∧Ψ) ∨ (¬Φ∧¬Ψ). If → is in the language these last tautologies can also be written as
(Φ→Ψ) ∧ (Ψ→Φ), or as two separate theorems Φ→Ψ and Ψ→Φ; if ≡ is available then the single tautology Φ ≡ Ψ
can be used.
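For the classical two-element semantics this correspondence can be sketched directly: a formula is a tautology exactly when the corresponding Boolean term evaluates to 1 under every assignment. The helper names below are illustrative:

```python
# A formula is a tautology iff the corresponding Boolean term equals 1
# under every truth assignment to the two-element algebra.
from itertools import product

def is_tautology(term, nvars):
    return all(term(*bits) == 1 for bits in product((0, 1), repeat=nvars))

IMP = lambda p, q: (1 - p) | q   # implication p → q rendered as ¬p ∨ q

assert is_tautology(lambda p: IMP(p, p), 1)              # P → P
assert is_tautology(lambda p, q: IMP(p, IMP(q, p)), 2)   # P → (Q → P)
assert not is_tautology(lambda p, q: IMP(p, q), 2)       # P → Q fails at p=1, q=0
```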

10.8.1 Applications

One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural
language. Whereas the proposition "if x = 3 then x+1 = 4" depends on the meanings of such symbols as + and 1, the
proposition "if x = 3 then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x =
3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if
P then P", or in the language of Boolean algebra, P → P.
Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instan-
tiating P in an abstract proposition is called an instance of the proposition. Thus "x = 3 → x = 3" is a tautology by
virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be
instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using
Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional
variables by abstract propositions, such as instantiating Q by Q→P in P→(Q→P) to yield the instance P→((Q→P)→P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables
within the language of propositional calculus, since ordinary propositional variables can be considered within the
language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not
being part of the language of propositional calculus but rather part of the same language for talking about it that this
sentence is written in, where we need to be able to distinguish propositional variables and their instantiations as being
distinct syntactic entities.)

10.8.2 Deductive systems for propositional logic


An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for
producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions
each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier
in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every
nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An
axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[18]

Sequent calculus

Main article: Sequent calculus

Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra
and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is
sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propo-
sitions called sequents, such as A∨B, A∧C ⊢ A, B∨C, …. The two halves of a sequent are called the antecedent
and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a
succedent Δ; thus Γ,A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with
an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions,
the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the
antecedent.
Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean
algebra, the former is a binary relation which either holds or does not hold. In this sense entailment is an external form
of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external
and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of
⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x∨y = y. This ability to mix external
implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus
and propositional calculus.[19]

10.9 Applications
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and
mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[3]

10.9.1 Computers
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to
the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically
equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general purpose computers perform their functions using two-value Boolean logic; that is, their
electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as
voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in fer-
romagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal
circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively
0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice,
the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it
hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather
than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per
wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use
ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011.
When programming in machine code, assembly language, and certain other programming languages, programmers
work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts
represents Boolean 0, and a reference voltage (often +5V, +3.3V, +1.8V) represents Boolean 1. Such languages
support both numeric operations and logical operations. In this context, "numeric" means that the computer treats
sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, mul-
tiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between
two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence.
Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean
algebra as needed. A core differentiating feature between these families of operations is the existence of the carry
operation in the first but not the second.
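The carry distinction can be seen on concrete bit patterns; a brief sketch:

```python
# Bitwise (logical) operations act position by position with no carry,
# while numeric addition propagates a carry between positions.
a, b = 0b1010, 0b0110

assert a | b == 0b1110    # disjunction: bit positions do not interact
assert a & b == 0b0010    # conjunction: likewise
assert a + b == 0b10000   # addition: 10 + 6 = 16; the carry ripples left
```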

10.9.2 Two-valued logic


Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced
or complex answers such as maybe or only on the weekend are acceptable. In more focused situations such as
a court of law or theorem-based mathematics however it is deemed advantageous to frame questions so as to admit
a simple yes-or-no answer (is the defendant guilty or not guilty, is the proposition true or false) and to disallow
any other answer. However much of a straitjacket this might prove in practice for the respondent, the principle of
the simple yes-no question has become a central feature of both judicial and mathematical logic, making two-valued
logic deserving of organization and study in its own right.
A central concept of set theory is membership. Now an organization may permit multiple degrees of membership,
such as novice, associate, and full. With sets however an element is either in or out. The candidates for membership
in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as
each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to
make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the
unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be
assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication
(xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields
a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value
is interpreted as the "degree" of truth, to what extent a proposition is true, or the probability that the proposition is
true.
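These replacement operations are easy to sketch; the function names below are illustrative:

```python
# Fuzzy-logic operations on the unit interval [0, 1].
def f_not(x):
    return 1 - x

def f_and(x, y):
    return x * y

def f_or(x, y):
    return f_not(f_and(f_not(x), f_not(y)))  # disjunction via De Morgan

# On {0, 1} these restrict to the ordinary Boolean operations:
assert f_not(0) == 1 and f_and(1, 1) == 1 and f_or(0, 1) == 1
# Between 0 and 1 they interpolate degrees of truth:
assert f_or(0.5, 0.5) == 0.75   # 1 - 0.5 * 0.5
```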

10.9.3 Boolean operations


The original application for Boolean operations was mathematical logic, where it combines the truth values, true or
false, of individual formulas.
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), dis-
junction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine
situational assertions such as "the block is on the table" and "cats drink milk", which naively are either true or false,
the meanings of these logical connectives often match those of their logical counterparts. However, with descriptions
of behavior such as "Jim walked through the door", one starts to notice differences such as failure of
commutativity, for example the conjunction of "Jim opened the door" with "Jim walked through the door" in that
tions can be similar: the order Is the sky blue, and why is the sky blue?" makes more sense than the reverse order.
Conjunctive commands about behavior are like behavioral assertions, as in "get dressed and go to school". Disjunctive
commands such as "love me or leave me" or "fish or cut bait" tend to be asymmetric via the implication that one alternative
is less preferable. Conjoined nouns such as "tea and milk" generally describe aggregation as with set union while "tea or
milk" is a choice. However context can reverse these senses, as in "your choices are coffee and tea", which usually means
the same as "your choices are coffee or tea" (alternatives). Double negation, as in "I don't not like milk", rarely means
literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility.
"Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P" the converse
is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in
natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them
over {0,1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual
bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior
exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the
disjunction of two bit vectors and so on.
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics,
which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how
the source region should be combined with the destination, typically with the help of a third region called the mask.
Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a
one-byte (8-bit) parameter. The constants SRC = 0xaa or 10101010, DST = 0xcc or 11001100, and MSK = 0xf0 or
11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then
AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x60
in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc. At run time the video card interprets the byte as the
raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and
which takes time completely independent of the complexity of the expression.
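The compile-time byte falls out of ordinary bitwise arithmetic on the three constants; a sketch reproducing the quoted values:

```python
# The raster-operation byte is just bitwise arithmetic on the three
# generator constants: each bit position is one row of the 3-variable
# truth table.
SRC, DST, MSK = 0xAA, 0xCC, 0xF0   # 10101010, 11001100, 11110000

assert SRC ^ DST == 0x66           # XOR of source and destination
assert (SRC ^ DST) & MSK == 0x60   # ...then masked: the (SRC^DST)&MSK byte
```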
Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects,
combination by Boolean operations being one of them. In this method the space in which objects exist is understood
as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as
subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a
complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal
of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on
physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory
is set difference, remove the elements of y from those of x. Thus given two shapes one to be machined and the other
the material to be removed, the result of machining the former to remove the latter is described simply as their set
difference.

Boolean searches

Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be consid-
ered to be an element of a set. The following examples use a syntax supported by Google.[20]

Double quotes are used to combine whitespace-separated words into a single search term.[21]
Whitespace is used to specify logical AND, as it is the default operator for joining search terms:

Search term 1 Search term 2

The OR keyword is used for logical OR:

Search term 1 OR Search term 2

A prefixed minus sign is used for logical NOT:

Search term 1 −"Search term 2"

10.10 See also


Binary number
Boolean algebra (structure)

Boolean algebras canonically defined

Booleo

Heyting algebra

Intuitionistic logic

List of Boolean algebra topics

Logic design

Propositional calculus

Relation algebra

Three-valued logic

Vector logic

10.11 References
[1] Boole, George (2003) [1854]. An Investigation of the Laws of Thought. Prometheus Books. ISBN 978-1-59102-089-9.

[2] "The name Boolean algebra (or Boolean 'algebras') for the calculus originated by Boole, extended by Schröder, and per-
fected by Whitehead seems to have been first suggested by Sheffer, in 1913." E. V. Huntington, "New sets of independent
postulates for the algebra of logic, with special reference to Whitehead and Russell's Principia mathematica", in Trans.
Amer. Math. Soc. 35 (1933), 274–304; footnote, page 278.

[3] Givant, Steven; Halmos, Paul (2009). Introduction to Boolean Algebras. Undergraduate Texts in Mathematics, Springer.
ISBN 978-0-387-40293-2.

[4] J. Michael Dunn; Gary M. Hardegree (2001). Algebraic methods in philosophical logic. Oxford University Press US. p. 2.
ISBN 978-0-19-853192-0.

[5] Norman Balabanian; Bradley Carlson (2001). Digital logic design principles. John Wiley. pp. 39–40. ISBN 978-0-471-
29351-4., online sample

[6] Rajaraman & Radhakrishnan. An Introduction To Digital Computer Design (5th ed.). PHI Learning Pvt. Ltd. p. 65. ISBN
978-81-203-3409-0.

[7] John A. Camara (2010). Electrical and Electronics Reference Manual for the Electrical and Computer PE Exam. www.
ppi2pass.com. p. 41. ISBN 978-1-59126-166-7.

[8] Shin-ichi Minato, Saburo Muroga (2007). Binary Decision Diagrams. In Wai-Kai Chen. The VLSI handbook (2nd ed.).
CRC Press. ISBN 978-0-8493-4199-1. chapter 29.

[9] Alan Parkes (2002). Introduction to languages, machines and logic: computable languages, abstract machines and formal
logic. Springer. p. 276. ISBN 978-1-85233-464-2.

[10] Jon Barwise; John Etchemendy; Gerard Allwein; Dave Barker-Plummer; Albert Liu (1999). Language, proof, and logic.
CSLI Publications. ISBN 978-1-889119-08-3.

[11] Ben Goertzel (1994). Chaotic logic: language, thought, and reality from the perspective of complex systems science. Springer.
p. 48. ISBN 978-0-306-44690-0.

[12] Halmos, Paul (1963). Lectures on Boolean Algebras. van Nostrand.

[13] O'Regan, Gerard (2008). A brief history of computing. Springer. p. 33. ISBN 978-1-84800-083-4.

[14] Steven R. Givant; Paul Richard Halmos (2009). Introduction to Boolean algebras. Springer. pp. 21–22. ISBN 978-0-387-
40293-2.

[15] Venn, John (July 1880). "I. On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" (PDF).
The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 5. Taylor & Francis. 10 (59): 1–18.
doi:10.1080/14786448008626877. Archived (PDF) from the original on 2017-05-16.

[16] Shannon, Claude (1949). "The Synthesis of Two-Terminal Switching Circuits". Bell System Technical Journal. 28: 59–98.
doi:10.1002/j.1538-7305.1949.tb03624.x.

[17] Koppelberg, Sabine (1989). General Theory of Boolean Algebras. Handbook of Boolean Algebras, Vol. 1 (ed. J. Donald
Monk with Robert Bonnet). Amsterdam: North Holland. ISBN 978-0-444-70261-6.

[18] Hausman, Alan; Howard Kahane; Paul Tidman (2010) [2007]. Logic and Philosophy: A Modern Introduction. Wadsworth
Cengage Learning. ISBN 0-495-60158-6.

[19] Girard, Jean-Yves; Paul Taylor; Yves Lafont (1990) [1989]. Proofs and Types. Cambridge University Press (Cambridge
Tracts in Theoretical Computer Science, 7). ISBN 0-521-37181-3.

[20] Not all search engines support the same query syntax. Additionally, some organizations (such as Google) provide special-
ized search engines that support alternate or extended syntax. (See e.g. "Syntax cheatsheet"; Google Code Search supports
regular expressions.)

[21] Doublequote-delimited search terms are called "exact phrase" searches in the Google documentation.

10.11.1 General
Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8.

10.12 Further reading


J. Eldon Whitesitt (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-
486-68483-3. Suitable introduction for students in applied elds.
Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg: Physica Verlag.
Sikorski, Roman (1969). Boolean Algebras (3rd ed.). Berlin: Springer-Verlag. ISBN 978-0-387-04469-9.
Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German
editions by Otto Bird. Dordrecht, South Holland: D. Reidel.

Historical perspective

George Boole (1848). "The Calculus of Logic," Cambridge and Dublin Mathematical Journal III: 183–98.
Theodore Hailperin (1986). Boole's logic and probability: a critical exposition from the standpoint of contem-
porary algebra, logic, and probability theory (2nd ed.). Elsevier. ISBN 978-0-444-87952-3.
Dov M. Gabbay, John Woods, ed. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the
History of Logic. 3. Elsevier. ISBN 978-0-444-51611-4. Several relevant chapters by Hailperin, Valencia,
and Grattan-Guinness.
Calixto Badesa (2004). The birth of model theory: Löwenheim's theorem in the frame of the theory of rela-
tives. Princeton University Press. ISBN 978-0-691-05853-5. Chapter 1, "Algebra of Classes and Propositional
Calculus".
Burris, Stanley, 2009. The Algebra of Logic Tradition. Stanford Encyclopedia of Philosophy.
Radomir S. Stankovic; Jaakko Astola (2011). From Boolean Logic to Switching Circuits and Automata: Towards
Modern Information Technology. Springer. ISBN 978-3-642-11681-0.

10.13 External links


Boolean Algebra chapter on All About Circuits
How Stuff Works – Boolean Logic
Science and Technology - Boolean Algebra contains a list and proof of Boolean theorems and laws.
Chapter 11

Boolean algebra (structure)

For an introduction to the subject, see Boolean algebra. For an alternative presentation, see Boolean algebras canon-
ically defined.

In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of
algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can
be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth
values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction
or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory
of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean
algebra express the symmetry of the theory described by the duality principle.[1]

(Figure: Hasse diagram of the Boolean lattice of subsets of {x, y, z}.)


11.1 History
The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He intro-
duced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in
response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more
substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above
in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations.
Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first sys-
tematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder.
The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean
algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V.
Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s,
and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep
new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and
Boolean-valued models.

11.2 Definition
A Boolean algebra is a six-tuple consisting of a set A, equipped with two binary operations ∧ (called "meet" or
"and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1
(called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively),
such that for all elements a, b and c of A, the following axioms hold:[2]

Note, however, that the absorption law can be excluded from the set of axioms as it can be derived from the other
axioms (see Proven properties).
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra.
(Some authors require 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption
axiom, that

a = b ∧ a if and only if a ∨ b = b.

The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0 and greatest
element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively,
with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∧ with ∨ and 0 with 1 in an axiom, the result is
again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another
Boolean algebra with the same elements; it is called its dual.[3]

11.3 Examples
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1,
and is defined by the rules 0 ∧ 0 = 0 ∧ 1 = 1 ∧ 0 = 0, 1 ∧ 1 = 1, 0 ∨ 0 = 0, 0 ∨ 1 = 1 ∨ 0 = 1 ∨ 1 = 1,
¬0 = 1, and ¬1 = 0.

It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not.
Expressions involving variables and the Boolean operations represent statement forms, and two
such expressions can be shown to be equal using the above axioms if and only if the corresponding
statement forms are logically equivalent.

The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0
and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage.
Circuits are described by expressions containing variables, and two such expressions are equal
for all values of the variables if and only if the corresponding circuits have the same input-output
behavior. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean
expression.

The two-element Boolean algebra is also important in the general theory of Boolean algebras,
because an equation involving several variables is generally true in all Boolean algebras if and only
if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force
algorithm for small numbers of variables). This can for example be used to show that the following
laws (consensus theorems) are generally valid in all Boolean algebras:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)
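This reduction to the two-element algebra makes such laws machine-checkable. A brute-force sketch in Python, encoding ∧, ∨, ¬ on {0, 1} as &, |, and 1 − x (function names are illustrative):

```python
from itertools import product

# An equation holds in every Boolean algebra iff it holds in {0, 1},
# so exhaustive enumeration over all variable assignments is a proof.

def consensus_holds():
    for a, b, c in product((0, 1), repeat=3):
        lhs = (a | b) & ((1 - a) | c) & (b | c)   # (a∨b) ∧ (¬a∨c) ∧ (b∨c)
        rhs = (a | b) & ((1 - a) | c)             # (a∨b) ∧ (¬a∨c)
        if lhs != rhs:
            return False
    return True

def consensus_dual_holds():
    for a, b, c in product((0, 1), repeat=3):
        lhs = (a & b) | ((1 - a) & c) | (b & c)   # dual consensus law
        rhs = (a & b) | ((1 - a) & c)
        if lhs != rhs:
            return False
    return True
```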

The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with
the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the
largest element 1 is the set S itself.

After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power
set of two atoms, a four-element algebra.

The set of all subsets of S that are either finite or cofinite is a Boolean algebra, an algebra of sets.

Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set
of sentences in the propositional calculus modulo tautology). This construction yields a Boolean algebra. It is
in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean
algebra homomorphism from this algebra to the two-element Boolean algebra.

Given any linearly ordered set L with a least element, the interval algebra is the smallest algebra of subsets of
L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval
algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic
to an interval algebra.

For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive
lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top element of this
Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a. The meet
and the join of a and b are given by the greatest common divisor (gcd) and the least common multiple (lcm) of
a and b, respectively. The ring addition a+b is given by lcm(a,b)/gcd(a,b). The picture shows an example for
n = 30. As a counter-example, considering the non-square-free n = 60, the greatest common divisor of 30 and
its complement 2 would be 2, while it should be the bottom element 1.
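This divisor algebra is easy to check computationally. A sketch, using the complement law gcd(a, n/a) = 1 as the test that distinguishes square-free n (function names are illustrative):

```python
from math import gcd

# Divisors of a square-free n form a Boolean algebra with meet = gcd,
# join = lcm, and complement a -> n // a.

def lcm(a, b):
    return a * b // gcd(a, b)

def is_boolean_divisor_algebra(n):
    """Check the complement laws for every divisor a of n:
    meet(a, n//a) must be the bottom (1) and join(a, n//a) the top (n)."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return all(gcd(a, n // a) == 1 and lcm(a, n // a) == n for a in divisors)
```

For n = 30 every divisor passes; for n = 60 the divisor 30 fails, since gcd(30, 2) = 2 rather than 1, matching the counter-example above.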

Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the col-
lection of all subsets of X which are both open and closed forms a Boolean algebra with the operations ∨ :=
∪ (union) and ∧ := ∩ (intersection).

If R is an arbitrary ring and we define the set of central idempotents by

A = { e ∈ R : e² = e, ex = xe for all x ∈ R },

then the set A becomes a Boolean algebra with the operations e ∨ f := e + f − ef and e ∧ f := ef.

11.4 Homomorphisms and isomorphisms


A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:

(Figure: Hasse diagram of the Boolean algebra of divisors of 30.)

f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.

It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of
morphism, forms a full subcategory of the category of lattices.
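As a concrete illustration (not from the article), the map from the power set of a two-element set to {0, 1} that tests membership of a fixed point satisfies all four conditions:

```python
from itertools import combinations

# A Boolean algebra homomorphism from the power set of {"x", "y"} to the
# two-element algebra {0, 1}: map S to 1 exactly when "x" is in S.

universe = frozenset({"x", "y"})
subsets = [frozenset(c) for r in range(3) for c in combinations(universe, r)]

def f(s):
    return 1 if "x" in s else 0

def is_homomorphism():
    for a in subsets:
        for b in subsets:
            # union must map to OR, intersection to AND
            if f(a | b) != (f(a) | f(b)) or f(a & b) != (f(a) & f(b)):
                return False
    return f(frozenset()) == 0 and f(universe) == 1
```

Complement preservation, f(¬a) = ¬f(a), follows automatically, as the text states.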

11.5 Boolean rings


Main article: Boolean ring

Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧
b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b.
The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the

ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property
are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and
x ∧ y := x · y.[4][5] Since these two constructions are inverses of each other, we can say that every Boolean ring arises
from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras
if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are
equivalent.[6]
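The two constructions can be checked over the two-element algebra. A sketch, with illustrative function names:

```python
from itertools import product

# Ring addition is symmetric difference (XOR on {0, 1}); the join is then
# recovered from the ring operations as x + y + x*y.

def ring_add(a, b):                       # a + b := (a ∧ ¬b) ∨ (b ∧ ¬a)
    return (a & (1 - b)) | (b & (1 - a))

def recovered_join(x, y):                 # x ∨ y := x + y + (x · y)
    return ring_add(ring_add(x, y), x & y)

def translations_agree():
    return all(ring_add(a, b) == a ^ b and recovered_join(a, b) == (a | b)
               for a, b in product((0, 1), repeat=2))
```

Idempotence a · a = a is immediate here, since meet is idempotent.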
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every
Boolean ring. More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations
between arbitrary Boolean-ring expressions. Employing the similarity of Boolean rings and Boolean algebras, both
algorithms have applications in automated theorem proving.

11.6 Ideals and filters


Main articles: Ideal (order theory) and Filter (mathematics)

An ideal of the Boolean algebra A is a subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have
a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called
prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have a ∧ ¬a =
0 ∈ I, and hence a ∈ I or ¬a ∈ I for every a ∈ A, if I is prime. An ideal I of A is called maximal if I ≠ A and if the
only ideal properly containing I is A itself. For an ideal I, if a ∉ I and ¬a ∉ I, then I ∪ {a} or I ∪ {¬a} is
contained in another proper ideal J. Hence such an I is not maximal, and therefore the notions of prime ideal and maximal
ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal
and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a subset p such that for all x, y in p we have x ∧ y
in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter.
Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The
statement "every filter in a Boolean algebra can be extended to an ultrafilter" is called the Ultrafilter Theorem and can-
not be proved in ZF, if ZF is consistent. Within ZF, it is strictly weaker than the axiom of choice. The Ultrafilter
Theorem has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra
can be extended to a prime ideal, etc.
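These notions can be made concrete in a small power set algebra. In the sketch below (names illustrative), the subsets of {1, 2, 3} containing the point 1 form a principal ultrafilter, and their complements form the dual ideal, which is prime:

```python
from itertools import combinations

# In the power set algebra of {1, 2, 3}, the sets containing a fixed point
# form an ultrafilter; their complements form the dual prime (= maximal) ideal.

universe = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

ultrafilter = [s for s in subsets if 1 in s]
ideal = [s for s in subsets if 1 not in s]

def ideal_is_prime():
    # a ∧ b in I must imply a in I or b in I
    return all((a & b) not in ideal or a in ideal or b in ideal
               for a in subsets for b in subsets)
```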

11.7 Representations
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set.
Therefore, the number of elements of every finite Boolean algebra is a power of two.
Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra A is isomorphic to
the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space.
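The finite case can be made concrete by coding the power set of n atoms as n-bit integers, a standard encoding not specific to this article; bitwise operations then realize meet, join, and complement:

```python
# A finite Boolean algebra as bit vectors: each of the 2**N elements is an
# N-bit integer, with AND as meet, OR as join, and masked NOT as complement.

N = 3                       # number of atoms; the algebra has 2**N elements
TOP = (1 << N) - 1          # the top element (set of all atoms)

def meet(a, b): return a & b
def join(a, b): return a | b
def complement(a): return TOP & ~a

def complement_laws_hold():
    return all(meet(a, complement(a)) == 0 and join(a, complement(a)) == TOP
               for a in range(TOP + 1))
```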

11.8 Axiomatics
The first axiomatization of Boolean lattices/algebras in general was given by Alfred North Whitehead in 1898.[7][8]
It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0. In 1904, the American mathematician Edward
V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the
associativity laws (see box).[9] He also proved that these axioms are independent of each other.[10] In 1933, Huntington
set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary
functional symbol n, to be read as 'complement', which satisfy the following laws:

1. Commutativity: x + y = y + x.

2. Associativity: (x + y) + z = x + (y + z).

3. Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x.

Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit:

4. Robbins Equation: n(n(x + y) + n(x + n(y))) = x,

do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then
becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins
conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996,
William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob
Veroff, answered Robbins's question in the affirmative: every Robbins algebra is a Boolean algebra. Crucial to
McCune's proof was the automated reasoning program EQP he designed. For a simplification of McCune's proof,
see Dahn (1998).
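Both the Huntington equation and the Robbins equation can be verified to be Boolean laws by brute force over {0, 1}, reading + as join and n as complement. Note this only confirms the equations hold in Boolean algebras; it says nothing about whether they axiomatize them, which was the substance of the Robbins problem:

```python
from itertools import product

# Sanity-check Huntington's and Robbins's equations in the two-element algebra.

def n(x): return 1 - x          # complement
def plus(x, y): return x | y    # + read as join

def huntington_eq_holds():      # n(n(x) + y) + n(n(x) + n(y)) = x
    return all(plus(n(plus(n(x), y)), n(plus(n(x), n(y)))) == x
               for x, y in product((0, 1), repeat=2))

def robbins_eq_holds():         # n(n(x + y) + n(x + n(y))) = x
    return all(n(plus(n(plus(x, y)), n(plus(x, n(y))))) == x
               for x, y in product((0, 1), repeat=2))
```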

11.9 Generalizations
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields generalized Boolean
algebras. Formally, a distributive lattice B is a generalized Boolean lattice if it has a smallest element 0 and for any
elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a ∖ b
as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, ∖, 0) is a generalized
Boolean algebra, while (B, ∨, 0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals
of Boolean lattices.
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented
lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed subspaces for separable
Hilbert spaces.

11.10 See also

11.11 Notes
[1] Givant and Paul Halmos, 2009, p. 20

[2] Davey and Priestley, 1990, pp. 109, 131, 144

[3] Goodstein, R. L. (2012), "Chapter 2: The self-dual system of axioms", Boolean Algebra, Courier Dover Publications, p.
21, ISBN 9780486154978.

[4] Stone, 1936

[5] Hsiang, 1985, p. 260

[6] Cohn (2003), p. 81.

[7] Padmanabhan, p. 73

[8] Whitehead, 1898, p. 37

[9] Huntington, 1904, pp. 292–293 (first of several axiomatizations by Huntington)

[10] Huntington, 1904, p. 296

11.12 References
Brown, Stephen; Vranesic, Zvonko (2002), Fundamentals of Digital Logic with VHDL Design (2nd ed.),
McGraw–Hill, ISBN 978-0-07-249938-4. See Section 2.5.

A. Boudet; J.P. Jouannaud; M. Schmidt-Schauß (1989). "Unification in Boolean Rings and Abelian Groups"
(PDF). Journal of Symbolic Computation. 8: 449–477. doi:10.1016/s0747-7171(89)80054-9.
Cohn, Paul M. (2003), Basic Algebra: Groups, Rings, and Fields, Springer, pp. 51, 70–81, ISBN 9781852335878
Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press,
ISBN 978-0-19-850048-3. See Chapter 2.
Dahn, B. I. (1998), "Robbins Algebras are Boolean: A Revision of McCune's Computer-Generated Solution
of the Robbins Problem", Journal of Algebra, 208 (2): 526–532, doi:10.1006/jabr.1998.7467.
B.A. Davey; H.A. Priestley (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks.
Cambridge University Press.
Givant, Steven; Halmos, Paul (2009), Introduction to Boolean Algebras, Undergraduate Texts in Mathematics,
Springer, ISBN 978-0-387-40293-2.
Halmos, Paul (1963), Lectures on Boolean Algebras, Van Nostrand, ISBN 978-0-387-90094-0.
Halmos, Paul; Givant, Steven (1998), Logic as Algebra, Dolciani Mathematical Expositions, 21, Mathematical
Association of America, ISBN 978-0-88385-327-6.
Hsiang, Jieh (1985). "Refutational Theorem Proving Using Term Rewriting Systems" (PDF). Artificial Intelligence.
25: 255–300. doi:10.1016/0004-3702(85)90074-8.
Edward V. Huntington (1904). "Sets of Independent Postulates for the Algebra of Logic". Transactions of the
American Mathematical Society. 5: 288–309. JSTOR 1986459. doi:10.1090/s0002-9947-1904-1500675-4.
Huntington, E. V. (1933), "New sets of independent postulates for the algebra of logic" (PDF), Transactions
of the American Mathematical Society, American Mathematical Society, 35 (1): 274–304, JSTOR 1989325,
doi:10.2307/1989325.
Huntington, E. V. (1933), "Boolean algebra: A correction", Transactions of the American Mathematical Society,
35 (2): 557–558, JSTOR 1989783, doi:10.2307/1989783.
Mendelson, Elliott (1970), Boolean Algebra and Switching Circuits, Schaum's Outline Series in Mathematics,
McGraw–Hill, ISBN 978-0-07-041460-0.
Monk, J. Donald; Bonnet, R., eds. (1989), Handbook of Boolean Algebras, North-Holland, ISBN 978-0-
444-87291-3. In 3 volumes. (Vol. 1: ISBN 978-0-444-70261-6, Vol. 2: ISBN 978-0-444-87152-7, Vol. 3: ISBN
978-0-444-87153-4)
Padmanabhan, Ranganathan; Rudeanu, Sergiu (2008), Axioms for lattices and boolean algebras, World Scientific,
ISBN 978-981-283-454-6.
Sikorski, Roman (1966), Boolean Algebras, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag.
Stoll, R. R. (1963), Set Theory and Logic, W. H. Freeman, ISBN 978-0-486-63829-4. Reprinted by Dover
Publications, 1979.
Marshall H. Stone (1936). "The Theory of Representations for Boolean Algebras". Transactions of the Ameri-
can Mathematical Society. 40: 37–111. doi:10.1090/s0002-9947-1936-1501865-8.
A.N. Whitehead (1898). A Treatise on Universal Algebra. Cambridge University Press. ISBN 1-4297-0032-7.

11.13 External links


Hazewinkel, Michiel, ed. (2001) [1994], "Boolean algebra", Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.
McCune W., 1997. Robbins Algebras Are Boolean. JAR 19(3), 263–276

"Boolean Algebra" by Eric W. Weisstein, Wolfram Demonstrations Project, 2007.

A monograph available free online:

Burris, Stanley N.; Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN
3-540-90578-2.
Weisstein, Eric W. "Boolean Algebra". MathWorld.
Chapter 12

Boolean algebras canonically defined

Boolean algebras have been formally defined variously as a kind of lattice and as a kind of ring. This
article presents them, equally formally, as simply the models of the equational theory of two values, and
observes the equivalence of both the lattice and ring definitions to this more elementary one.

Boolean algebra is a mathematically rich branch of abstract algebra. Just as group theory deals with groups, and linear
algebra with vector spaces, Boolean algebras are models of the equational theory of the two values 0 and 1 (whose
interpretation need not be numerical). Common to Boolean algebras, groups, and vector spaces is the notion of an
algebraic structure, a set closed under zero or more operations satisfying certain equations.
Just as there are basic examples of groups, such as the group Z of integers and the permutation group Sn of permutations
of n objects, there are also basic examples of Boolean algebra such as the following.

The algebra of binary digits or bits 0 and 1 under the logical operations including disjunction, conjunction, and
negation. Applications include the propositional calculus and the theory of digital circuits.

The algebra of sets under the set operations including union, intersection, and complement. Applications
include any area of mathematics for which sets form a natural foundation.

Boolean algebra thus permits applying the methods of abstract algebra to mathematical logic, digital logic, and the
set-theoretic foundations of mathematics.
Unlike groups of finite order, which exhibit complexity and diversity and whose first-order theory is decidable only in
special cases, all finite Boolean algebras share the same theorems and have a decidable first-order theory. Instead the
intricacies of Boolean algebra are divided between the structure of infinite algebras and the algorithmic complexity
of their syntactic structure.

12.1 Definition
Boolean algebra treats the equational theory of the maximal two-element finitary algebra, called the Boolean proto-
type, and the models of that theory, called Boolean algebras. These terms are defined as follows.
An algebra is a family of operations on a set, called the underlying set of the algebra. We take the underlying set of
the Boolean prototype to be {0,1}.
An algebra is finitary when each of its operations takes only finitely many arguments. For the prototype each argument
of an operation is either 0 or 1, as is the result of the operation. The maximal such algebra consists of all finitary
operations on {0,1}.
The number of arguments taken by each operation is called the arity of the operation. An operation on {0,1} of arity
n, or n-ary operation, can be applied to any of 2^n possible values for its n arguments. For each choice of arguments
the operation may return 0 or 1, whence there are 2^(2^n) n-ary operations.
The prototype therefore has two operations taking no arguments, called zeroary or nullary operations, namely zero
and one. It has four unary operations, two of which are constant operations, another is the identity, and the most


commonly used one, called negation, returns the opposite of its argument: 1 if 0, 0 if 1. It has sixteen binary
operations; again two of these are constant, another returns its first argument, yet another returns its second, one is
called conjunction and returns 1 if both arguments are 1 and otherwise 0, another is called disjunction and returns 0 if
both arguments are 0 and otherwise 1, and so on. The number of (n+1)-ary operations in the prototype is the square
of the number of n-ary operations, so there are 16² = 256 ternary operations, 256² = 65,536 quaternary operations,
and so on.
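The count 2^(2^n) can be made concrete by enumerating the operations as truth tables, each a choice of output for every one of the 2^n argument tuples. A minimal sketch (the function name is illustrative):

```python
from itertools import product

# An n-ary operation on {0, 1} is a mapping from the 2**n input tuples to
# {0, 1}; enumerating all output choices yields all 2**(2**n) operations.

def all_operations(n):
    inputs = list(product((0, 1), repeat=n))
    return [dict(zip(inputs, outputs))
            for outputs in product((0, 1), repeat=len(inputs))]
```

This yields 4 unary, 16 binary, and 256 ternary operations, matching the counts above.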
A family is indexed by an index set. In the case of a family of operations forming an algebra, the indices are called
operation symbols, constituting the language of that algebra. The operation indexed by each symbol is called the
denotation or interpretation of that symbol. Each operation symbol specifies the arity of its interpretation, whence
all possible interpretations of a symbol have the same arity. In general it is possible for an algebra to interpret
distinct symbols with the same operation, but this is not the case for the prototype, whose symbols are in one-one
correspondence with its operations. The prototype therefore has 2^(2^n) n-ary operation symbols, called the Boolean
operation symbols and forming the language of Boolean algebra. Only a few operations have conventional symbols,
such as ¬ for negation, ∧ for conjunction, and ∨ for disjunction. It is convenient to consider the i-th n-ary symbol to
be ⁿfᵢ, as done below in the section on truth tables.
An equational theory in a given language consists of equations between terms built up from variables using symbols
of that language. Typical equations in the language of Boolean algebra are x∨y = y∨x, x∧x = x, x∧¬x = y∧¬y, and
x∧y = x.
An algebra satisfies an equation when the equation holds for all possible values of its variables in that algebra when
the operation symbols are interpreted as specified by that algebra. The laws of Boolean algebra are the equations in
the language of Boolean algebra satisfied by the prototype. The first three of the above examples are Boolean laws,
but not the fourth, since 1∧0 ≠ 1.
The equational theory of an algebra is the set of all equations satisfied by the algebra. The laws of Boolean algebra
therefore constitute the equational theory of the Boolean prototype.
A model of a theory is an algebra interpreting the operation symbols in the language of the theory and satisfying the
equations of the theory.

A Boolean algebra is any model of the laws of Boolean algebra.

That is, a Boolean algebra is a set and a family of operations thereon interpreting the Boolean operation symbols and
satisfying the same laws as the Boolean prototype.
If we define a homologue of an algebra to be a model of the equational theory of that algebra, then a Boolean algebra
can be defined as any homologue of the prototype.
Example 1. The Boolean prototype is a Boolean algebra, since trivially it satisfies its own laws. It is thus the
prototypical Boolean algebra. We did not call it that initially in order to avoid any appearance of circularity in the
definition.

12.2 Basis
The operations need not be all explicitly stated. A basis is any set from which the remaining operations can be
obtained by composition. A Boolean algebra may be defined from any of several different bases. Three bases for
Boolean algebra are in common use: the lattice basis, the ring basis, and the Sheffer stroke or NAND basis. These
bases impart respectively a logical, an arithmetical, and a parsimonious character to the subject.

The lattice basis originated in the 19th century with the work of Boole, Peirce, and others seeking an algebraic
formalization of logical thought processes.
The ring basis emerged in the 20th century with the work of Zhegalkin and Stone and became the basis of choice
for algebraists coming to the subject from a background in abstract algebra. Most treatments of Boolean algebra
assume the lattice basis, a notable exception being Halmos [1963], whose linear algebra background evidently
endeared the ring basis to him.
Since all finitary operations on {0,1} can be defined in terms of the Sheffer stroke NAND (or its dual NOR),
the resulting economical basis has become the basis of choice for analyzing digital circuits, in particular gate
arrays in digital electronics.

The common elements of the lattice and ring bases are the constants 0 and 1, and an associative commutative binary
operation, called meet xy in the lattice basis, and multiplication xy in the ring basis. The distinction is only termino-
logical. The lattice basis has the further operations of join, xy, and complement, x. The ring basis has instead the
arithmetic operation xy of addition (the symbol is used in preference to + because the latter is sometimes given
the Boolean reading of join).
To be a basis is to yield all other operations by composition, whence any two bases must be intertranslatable. The lattice basis translates x∨y to the ring basis as x⊕y⊕xy, and ¬x as x⊕1. Conversely the ring basis translates x⊕y to the lattice basis as (x∨y)∧¬(x∧y).
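As a quick sanity check, the two translations can be verified on the two-element algebra {0,1}; the sketch below (not from the original text) uses Python's bitwise operators to play the Boolean roles.

```python
# Verify the lattice<->ring basis translations on {0,1}.
for x in (0, 1):
    for y in (0, 1):
        # lattice -> ring: x∨y = x⊕y⊕xy and ¬x = x⊕1
        assert (x | y) == x ^ y ^ (x & y)
        assert (1 - x) == x ^ 1
        # ring -> lattice: x⊕y = (x∨y)∧¬(x∧y)
        assert (x ^ y) == (x | y) & (1 - (x & y))
print("both translations agree on {0,1}")
```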
Both of these bases allow Boolean algebras to be defined via a subset of the equational properties of the Boolean operations. For the lattice basis, it suffices to define a Boolean algebra as a distributive lattice satisfying x∧¬x = 0 and x∨¬x = 1, called a complemented distributive lattice. The ring basis turns a Boolean algebra into a Boolean ring, namely a ring satisfying x² = x.
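A minimal sketch of the Boolean ring structure on {0,1}, with ⊕ (XOR) as addition and ∧ (AND) as multiplication:

```python
# On {0,1}: XOR is ring addition, AND is ring multiplication.
for x in (0, 1):
    assert (x & x) == x        # idempotence: x² = x
    assert (x ^ x) == 0        # hence x + x = 0: characteristic 2
    for y in (0, 1):
        assert (x ^ y) == (y ^ x)   # addition is commutative
```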
Emil Post gave a necessary and sufficient condition for a set of operations to be a basis for the nonzeroary Boolean operations. A nontrivial property is one shared by some but not all operations making up a basis. Post listed five nontrivial properties of operations, identifiable with the five Post classes, each preserved by composition, and showed that a set of operations formed a basis if, for each property, the set contained an operation lacking that property. (The converse of Post's theorem, extending "if" to "if and only if", is the easy observation that a property from among these five holding of every operation in a candidate basis will also hold of every operation formed by composition from that candidate, whence by nontriviality of that property the candidate will fail to be a basis.) Post's five properties are:

monotone, no 0→1 input transition can cause a 1→0 output transition;

affine, representable with Zhegalkin polynomials that lack bilinear or higher terms, e.g. x⊕y⊕1 but not xy;

self-dual, so that complementing all inputs complements the output, as with ¬x, or the median operator x∧y ∨ y∧z ∨ z∧x, or their negations;

strict (mapping the all-zeros input to zero);

costrict (mapping all-ones to one).

The NAND (dually NOR) operation lacks all these, thus forming a basis by itself.
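The claim that NAND lacks all five properties can be checked mechanically. The following sketch (names and helper definitions are my own, not from the text) tests each property over all valuations of two variables:

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

VALS = list(product((0, 1), repeat=2))  # all four valuations of (x, y)

strict    = nand(0, 0) == 0                       # maps all-zeros to 0?
costrict  = nand(1, 1) == 1                       # maps all-ones to 1?
self_dual = all(nand(x, y) == 1 - nand(1 - x, 1 - y) for x, y in VALS)
monotone  = all(nand(a, b) <= nand(c, d)
                for a, b in VALS for c, d in VALS if a <= c and b <= d)
# Affine: each input either never flips the output or always flips it.
flips_x = {nand(x, y) ^ nand(1 - x, y) for x, y in VALS}
flips_y = {nand(x, y) ^ nand(x, 1 - y) for x, y in VALS}
affine  = len(flips_x) == 1 and len(flips_y) == 1

assert not any([strict, costrict, self_dual, monotone, affine])
print("NAND lacks all five Post properties")
```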

12.3 Truth tables


The finitary operations on {0,1} may be exhibited as truth tables, thinking of 0 and 1 as the truth values false and true. They can be laid out in a uniform and application-independent way that allows us to name, or at least number, them individually. These names provide a convenient shorthand for the Boolean operations. The names of the n-ary operations are binary numbers of 2^n bits. There being 2^(2^n) such operations, one cannot ask for a more succinct nomenclature. Note that each finitary operation can be called a switching function.
This layout and associated naming of operations is illustrated here in full for arities from 0 to 2.

These tables continue at higher arities, with 2^n rows at arity n, each row giving a valuation or binding of the n variables x₀,...,xₙ₋₁ and each column headed ⁿfᵢ giving the value ⁿfᵢ(x₀,...,xₙ₋₁) of the i-th n-ary operation at that valuation. The operations include the variables, for example ¹f₂ is x₀ while ²f₁₀ is x₀ (as two copies of its unary counterpart) and ²f₁₂ is x₁ (with no unary counterpart). Negation or complement ¬x₀ appears as ¹f₁ and again as ²f₅, along with ²f₃ (¬x₁, which did not appear at arity 1), disjunction or union x₀∨x₁ as ²f₁₄, conjunction or intersection x₀∧x₁ as ²f₈, implication x₀→x₁ as ²f₁₃, exclusive-or symmetric difference x₀⊕x₁ as ²f₆, set difference x₀−x₁ as ²f₂, and so on.
As a minor detail important more for its form than its content, the operations of an algebra are traditionally organized as a list. Although we are here indexing the operations of a Boolean algebra by the finitary operations on {0,1}, the truth-table presentation above serendipitously orders the operations first by arity and second by the layout of the tables for each arity. This permits organizing the set of all Boolean operations in the traditional list format. The list order for the operations of a given arity is determined by the following two rules.

(i) The i-th row in the left half of the table is the binary representation of i with its least significant or 0th bit on the left (little-endian order, originally proposed by Alan Turing, so it would not be unreasonable to call it Turing order).

(ii) The j-th column in the right half of the table is the binary representation of j, again in little-endian order. In effect the subscript of the operation is the truth table of that operation. By analogy with Gödel numbering of computable functions one might call this numbering of the Boolean operations the Boole numbering.

When programming in C or Java, bitwise disjunction is denoted x|y, conjunction x&y, and negation ~x. A program can therefore represent for example the operation x∧(y∨z) in these languages as x&(y|z), having previously set x = 0xaa, y = 0xcc, and z = 0xf0 (the 0x indicates that the following constant is to be read in hexadecimal or base 16), either by assignment to variables or defined as macros. These one-byte (eight-bit) constants correspond to the columns for the input variables in the extension of the above tables to three variables. This technique is almost universally used in raster graphics hardware to provide a flexible variety of ways of combining and masking images, the typical operations being ternary and acting simultaneously on source, destination, and mask bits.
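The same trick can be sketched in Python, whose bitwise operators behave like C's; the constants are the truth-table columns named in the text:

```python
x, y, z = 0xAA, 0xCC, 0xF0   # truth-table columns for the three variables

result = x & (y | z)         # evaluates x∧(y∨z) at all eight valuations at once
assert format(result, "08b") == "10101000"
```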

12.4 Examples

12.4.1 Bit vectors

Example 2. All bit vectors of a given length form a Boolean algebra pointwise, meaning that any n-ary Boolean operation can be applied to n bit vectors one bit position at a time. For example, the ternary OR of three bit vectors each of length 4 is the bit vector of length 4 formed by oring the three bits in each of the four bit positions, thus 0100∨1000∨1001 = 1101. Another example is the truth tables above for the n-ary operations, whose columns are all the bit vectors of length 2^n and which therefore can be combined pointwise, whence the n-ary operations form a Boolean algebra. This works equally well for bit vectors of finite and infinite length, the only rule being that the bit positions all be indexed by the same set in order that corresponding position be well defined.
The atoms of such an algebra are the bit vectors containing exactly one 1. In general the atoms of a Boolean algebra are those elements x such that x∧y has only two possible values, x or 0.
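Both points are easy to check with Python integers standing in for length-4 bit vectors (a sketch, not from the text):

```python
# The ternary OR from the text, computed pointwise on length-4 bit vectors.
assert 0b0100 | 0b1000 | 0b1001 == 0b1101

# An atom has exactly one 1: meeting it with anything yields itself or 0.
atom = 0b0010
for y in range(16):
    assert (atom & y) in (atom, 0)
```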

12.4.2 Power set algebra

Example 3. The power set algebra, the set 2^W of all subsets of a given set W. This is just Example 2 in disguise, with W serving to index the bit positions. Any subset X of W can be viewed as the bit vector having 1s in just those bit positions indexed by elements of X. Thus the all-zero vector is the empty subset of W while the all-ones vector is W itself, these being the constants 0 and 1 respectively of the power set algebra. The counterpart of disjunction x∨y is union X∪Y, while that of conjunction x∧y is intersection X∩Y. Negation ¬x becomes ~X, complement relative to W. There is also set difference X−Y = X∩~Y, symmetric difference (X−Y)∪(Y−X), ternary union X∪Y∪Z, and so on.
The atoms here are the singletons, those subsets with exactly one element.
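A small sketch of this correspondence, with Python's set operations playing the Boolean roles over a hypothetical W = {a, b, c}:

```python
W = frozenset("abc")
X, Y = frozenset("ab"), frozenset("bc")

assert X | Y == frozenset("abc")   # join = union
assert X & Y == frozenset("b")     # meet = intersection
assert W - X == frozenset("c")     # complement relative to W
assert X ^ Y == frozenset("ac")    # symmetric difference (X−Y)∪(Y−X)
```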
Examples 2 and 3 are special cases of a general construct of algebra called direct product, applicable not just to Boolean algebras but all kinds of algebra including groups, rings, etc. The direct product of any family Bᵢ of Boolean algebras, where i ranges over some index set I (not necessarily finite or even countable), is a Boolean algebra consisting of all I-tuples (...,xᵢ,...) whose i-th element is taken from Bᵢ. The operations of a direct product are the corresponding operations of the constituent algebras acting within their respective coordinates; in particular operation ⁿfⱼ of the product operates on n I-tuples by applying operation ⁿfⱼ of Bᵢ to the n elements in the i-th coordinate of the n tuples, for all i in I.
When all the algebras being multiplied together in this way are the same algebra A we call the direct product a direct power of A. The Boolean algebra of all 32-bit bit vectors is the two-element Boolean algebra raised to the 32nd power, or power set algebra of a 32-element set, denoted 2^32. The Boolean algebra of all sets of integers is 2^Z. All Boolean algebras we have exhibited thus far have been direct powers of the two-element Boolean algebra, justifying the name power set algebra.

12.4.3 Representation theorems

It can be shown that every finite Boolean algebra is isomorphic to some power set algebra. Hence the cardinality (number of elements) of a finite Boolean algebra is a power of 2, namely one of 1, 2, 4, 8, ..., 2^n, ... This is called a representation theorem, as it gives insight into the nature of finite Boolean algebras by giving a representation of them as power set algebras.
This representation theorem does not extend to infinite Boolean algebras: although every power set algebra is a Boolean algebra, not every Boolean algebra need be isomorphic to a power set algebra. In particular, whereas there can be no countably infinite power set algebras (the smallest infinite power set algebra is the power set algebra 2^N of sets of natural numbers, shown by Cantor to be uncountable), there exist various countably infinite Boolean algebras.
To go beyond power set algebras we need another construct. A subalgebra of an algebra A is any subset of A closed under the operations of A. Every subalgebra of a Boolean algebra A must still satisfy the equations holding of A, since any violation would constitute a violation for A itself. Hence every subalgebra of a Boolean algebra is a Boolean algebra.
A subalgebra of a power set algebra is called a field of sets; equivalently a field of sets is a set of subsets of some set W including the empty set and W and closed under finite union and complement with respect to W (and hence also under finite intersection). Birkhoff's [1935] representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets. Now Birkhoff's HSP theorem for varieties can be stated as: every class of models of the equational theory of a class C of algebras is the Homomorphic image of a Subalgebra of a direct Product of algebras of C. Normally all three of H, S, and P are needed; what the first of these two Birkhoff theorems shows is that for the special case of the variety of Boolean algebras Homomorphism can be replaced by Isomorphism. Birkhoff's HSP theorem for varieties in general therefore becomes Birkhoff's ISP theorem for the variety of Boolean algebras.

12.4.4 Other examples

It is convenient when talking about a set X of natural numbers to view it as a sequence x₀, x₁, x₂, ... of bits, with xᵢ = 1 if and only if i ∈ X. This viewpoint will make it easier to talk about subalgebras of the power set algebra 2^N, which this viewpoint makes the Boolean algebra of all sequences of bits. It also fits well with the columns of a truth table: when a column is read from top to bottom it constitutes a sequence of bits, but at the same time it can be viewed as the set of those valuations (assignments to variables in the left half of the table) at which the function represented by that column evaluates to 1.
Example 4. Ultimately constant sequences. Any Boolean combination of ultimately constant sequences is ultimately constant; hence these form a Boolean algebra. We can identify these with the integers by viewing the ultimately-zero sequences as nonnegative binary numerals (bit 0 of the sequence being the low-order bit) and the ultimately-one sequences as negative binary numerals (think two's complement arithmetic with the all-ones sequence being −1). This makes the integers a Boolean algebra, with union being bitwise OR and complement being ¬x = −x−1. There are only countably many integers, so this infinite Boolean algebra is countable. The atoms are the powers of two, namely 1, 2, 4, .... Another way of describing this algebra is as the set of all finite and cofinite sets of natural numbers, with the ultimately all-ones sequences corresponding to the cofinite sets, those sets omitting only finitely many natural numbers.
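Python's arbitrary-precision integers realize this algebra directly, since conceptually they are two's-complement bit sequences; a quick sketch:

```python
x, y = 12, 10
assert x | y == 14 and x & y == 8   # union and intersection are bitwise OR/AND
assert ~x == -x - 1                 # complement: ¬x = −x−1
assert ~0 == -1                     # complement of the empty set is the cofinite "all"
```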
Example 5. Periodic sequences. A sequence is called periodic when there exists some number n > 0, called a witness to periodicity, such that xᵢ = xᵢ₊ₙ for all i ≥ 0. The period of a periodic sequence is its least witness. Negation leaves period unchanged, while the disjunction of two periodic sequences is periodic, with period at most the least common multiple of the periods of the two arguments (the period can be as small as 1, as happens with the union of any sequence and its complement). Hence the periodic sequences form a Boolean algebra.
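The closure argument can be sketched concretely: representing a periodic sequence by one period, the pointwise OR of two such sequences has the lcm of their periods as a witness (representation and names are my own):

```python
from math import lcm

def value(seq, i):            # seq is a tuple giving one period of the sequence
    return seq[i % len(seq)]

a, b = (0, 1), (0, 0, 1)      # sequences of period 2 and 3
n = lcm(len(a), len(b))       # 6 is a witness to periodicity of their union
union = tuple(value(a, i) | value(b, i) for i in range(n))
assert all(value(union, i) == value(a, i) | value(b, i) for i in range(30))
```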
Example 5 resembles Example 4 in being countable, but differs in being atomless. The latter is because the conjunction of any nonzero periodic sequence x with a sequence of greater period is neither 0 nor x. It can be shown that all countably infinite atomless Boolean algebras are isomorphic, that is, up to isomorphism there is only one such algebra.
Example 6. Periodic sequences with period a power of two. This is a proper subalgebra of Example 5 (a proper subalgebra is one that is not the whole algebra). These can be understood as the finitary operations, with the first period of such a sequence giving the truth table of the operation it represents. For example, the truth table of x₀ in the table of binary operations, namely ²f₁₀, has period 2 (and so can be recognized as using only the first variable) even though 12 of the binary operations have period 4. When the period is 2^n the operation only depends on the first n variables, the sense in which the operation is finitary. This example is also a countably infinite atomless Boolean algebra. Hence Example 5 is isomorphic to a proper subalgebra of itself! Example 6, and hence Example 5, constitutes the free Boolean algebra on countably many generators, meaning the Boolean algebra of all finitary operations on a countably infinite set of generators or variables.
Example 7. Ultimately periodic sequences, sequences that become periodic after an initial finite bout of lawlessness. They constitute a proper extension of Example 5 (meaning that Example 5 is a proper subalgebra of Example 7) and also of Example 4, since constant sequences are periodic with period one. Sequences may vary as to when they settle down, but any finite set of sequences will all eventually settle down no later than their slowest-to-settle member, whence ultimately periodic sequences are closed under all Boolean operations and so form a Boolean algebra. This example has the same atoms and coatoms as Example 4, whence it is not atomless and therefore not isomorphic to Example 5/6. However it contains an infinite atomless subalgebra, namely Example 5, and so is not isomorphic to Example 4, every subalgebra of which must be a Boolean algebra of finite sets and their complements and therefore atomic. This example is isomorphic to the direct product of Examples 4 and 5, furnishing another description of it.
Example 8. The direct product of Example 5 (the periodic sequences) with any finite but nontrivial Boolean algebra. (The trivial one-element Boolean algebra is the unique finite atomless Boolean algebra.) This resembles Example 7 in having both atoms and an atomless subalgebra, but differs in having only finitely many atoms. Example 8 is in fact an infinite family of examples, one for each possible finite number of atoms.
These examples by no means exhaust the possible Boolean algebras, even the countable ones. Indeed, there are uncountably many nonisomorphic countable Boolean algebras, which Jussi Ketonen [1978] classified completely in terms of invariants representable by certain hereditarily countable sets.

12.5 Boolean algebras of Boolean operations


The n-ary Boolean operations themselves constitute a power set algebra 2^W, namely with W taken to be the set of 2^n valuations of the n inputs. In terms of the naming system of operations ⁿfᵢ where i in binary is a column of a truth table, the columns can be combined with Boolean operations of any arity to produce other columns present in the table. That is, we can apply any Boolean operation of arity m to m Boolean operations of arity n to yield a Boolean operation of arity n, for any m and n.
The practical significance of this convention for both software and hardware is that n-ary Boolean operations can be represented as words of the appropriate length. For example, each of the 256 ternary Boolean operations can be represented as an unsigned byte. The available logical operations such as AND and OR can then be used to form new operations. If we take x, y, and z (dispensing with subscripted variables for now) to be 10101010, 11001100, and 11110000 respectively (170, 204, and 240 in decimal, 0xaa, 0xcc, and 0xf0 in hexadecimal), their pairwise conjunctions are x∧y = 10001000, y∧z = 11000000, and z∧x = 10100000, while their pairwise disjunctions are x∨y = 11101110, y∨z = 11111100, and z∨x = 11111010. The disjunction of the three conjunctions is 11101000, which also happens to be the conjunction of the three disjunctions. We have thus calculated, with a dozen or so logical operations on bytes, that the two ternary operations

(x∧y)∨(y∧z)∨(z∧x)

and

(x∨y)∧(y∨z)∧(z∨x)

are actually the same operation. That is, we have proved the equational identity

(x∧y)∨(y∧z)∨(z∧x) = (x∨y)∧(y∨z)∧(z∨x),

for the two-element Boolean algebra. By the definition of Boolean algebra this identity must therefore hold in every Boolean algebra.
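The calculation sketched in the text can be replayed in a few lines of Python:

```python
x, y, z = 0xAA, 0xCC, 0xF0             # the three truth-table columns

lhs = (x & y) | (y & z) | (z & x)      # disjunction of the three conjunctions
rhs = (x | y) & (y | z) & (z | x)      # conjunction of the three disjunctions
assert format(lhs, "08b") == "11101000"
assert lhs == rhs                      # equal at all eight valuations, hence always
```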
This ternary operation incidentally formed the basis for Grau's [1947] ternary Boolean algebras, which he axiomatized in terms of this operation and negation. The operation is symmetric, meaning that its value is independent of any of the 3! = 6 permutations of its arguments. The two halves of its truth table 11101000 are the truth tables for ∨, 1110, and ∧, 1000, so the operation can be phrased as if z then x∨y else x∧y. Since it is symmetric it can equally well be phrased as either of if x then y∨z else y∧z, or if y then z∨x else z∧x. Viewed as a labeling of the 8-vertex 3-cube, the upper half is labeled 1 and the lower half 0; for this reason it has been called the median operator, with the evident generalization to any odd number of variables (odd in order to avoid the tie when exactly half the variables are 0).

12.6 Axiomatizing Boolean algebras


The technique we just used to prove an identity of Boolean algebra can be generalized to all identities in a systematic way that can be taken as a sound and complete axiomatization of, or axiomatic system for, the equational laws of Boolean logic. The customary formulation of an axiom system consists of a set of axioms that prime the pump with some initial identities, along with a set of inference rules for inferring the remaining identities from the axioms and previously proved identities. In principle it is desirable to have finitely many axioms; however as a practical matter it is not necessary, since it is just as effective to have a finite axiom schema having infinitely many instances, each of which when used in a proof can readily be verified to be a legal instance, the approach we follow here.
Boolean identities are assertions of the form s = t where s and t are n-ary terms, by which we shall mean here terms whose variables are limited to x₀ through xₙ₋₁. An n-ary term is either an atom or an application. An application ᵐfᵢ(t₀,...,tₘ₋₁) is a pair consisting of an m-ary operation ᵐfᵢ and a list or m-tuple (t₀,...,tₘ₋₁) of m n-ary terms called operands.
Associated with every term is a natural number called its height. Atoms are of zero height, while applications are of height one plus the height of their highest operand.
Now what is an atom? Conventionally an atom is either a constant (0 or 1) or a variable xᵢ where 0 ≤ i < n. For the proof technique here it is convenient to define atoms instead to be the n-ary operations ⁿfᵢ, which although treated here as atoms nevertheless mean the same as ordinary terms of the exact form ⁿfᵢ(x₀,...,xₙ₋₁) (exact in that the variables must be listed in the order shown without repetition or omission). This is not a restriction, because atoms of this form include all the ordinary atoms, namely the constants 0 and 1, which arise here as the n-ary operations ⁿf₀ and ⁿf₋₁ for each n (abbreviating 2^(2^n)−1 to −1), and the variables x₀,...,xₙ₋₁, as can be seen from the truth tables where x₀ appears as both the unary operation ¹f₂ and the binary operation ²f₁₀ while x₁ appears as ²f₁₂.
The following axiom schema and three inference rules axiomatize the Boolean algebra of n-ary terms.

A1. ᵐfᵢ(ⁿfⱼ₀,...,ⁿfⱼₘ₋₁) = ⁿf_(i∘ĵ) where (i∘ĵ)_v = i_(ĵ_v), with ĵ being j transpose, defined by (ĵ_v)_u = (j_u)_v.
R1. With no premises infer t = t.
R2. From s = u and t = u infer s = t, where s, t, and u are n-ary terms.
R3. From s₀ = t₀, ..., sₘ₋₁ = tₘ₋₁ infer ᵐfᵢ(s₀,...,sₘ₋₁) = ᵐfᵢ(t₀,...,tₘ₋₁), where all terms sᵢ, tᵢ are n-ary.

The meaning of the side condition on A1 is that i∘ĵ is that 2^n-bit number whose v-th bit is the ĵ_v-th bit of i, where the ranges of each quantity are u: m, v: 2^n, j_u: 2^(2^n), and ĵ_v: 2^m. (So j is an m-tuple of 2^n-bit numbers while ĵ, as the transpose of j, is a 2^n-tuple of m-bit numbers. Both j and ĵ therefore contain m2^n bits.)

A1 is an axiom schema rather than an axiom by virtue of containing metavariables, namely m, i, n, and j₀ through jₘ₋₁. The actual axioms of the axiomatization are obtained by setting the metavariables to specific values. For example, if we take m = n = i = j₀ = 1, we can compute the two bits of i∘ĵ from (i∘ĵ)₀ = i_(ĵ₀) = i₁ = 0 and (i∘ĵ)₁ = i_(ĵ₁) = i₀ = 1, so i∘ĵ = 2 (or 10 when written as a two-bit number). The resulting instance, namely ¹f₁(¹f₁) = ¹f₂, expresses the familiar axiom ¬¬x = x of double negation. Rule R3 then allows us to infer ¬¬¬x = ¬x by taking s₀ to be ¹f₁(¹f₁) or ¬¬x₀, t₀ to be ¹f₂ or x₀, and ᵐfᵢ to be ¹f₁ or ¬.
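This particular A1 instance can be checked numerically under the Boole numbering: ¹f₁ (truth table 01) is negation and ¹f₂ (truth table 10) is the identity x₀, so composing ¹f₁ with itself must give ¹f₂. A sketch (the helper name is my own):

```python
def unary_op(i):
    # the i-th unary operation: bit x of i (read little-endian) is its value at x
    return lambda x: (i >> x) & 1

f1, f2 = unary_op(1), unary_op(2)   # negation and identity
assert all(f1(f1(x)) == f2(x) for x in (0, 1))   # ¹f₁(¹f₁) = ¹f₂
```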
For each m and n there are only finitely many axioms instantiating A1, namely 2^(2^m) × (2^(2^n))^m. Each instance is specified by 2^m + m2^n bits.
We treat R1 as an inference rule, even though it is like an axiom in having no premises, because it is a domain-independent rule, along with R2 and R3, common to all equational axiomatizations, whether of groups, rings, or any other variety. The only entity specific to Boolean algebras is axiom schema A1. In this way when talking about different equational theories we can push the rules to one side as being independent of the particular theories, and confine attention to the axioms as the only part of the axiom system characterizing the particular equational theory at hand.
This axiomatization is complete, meaning that every Boolean law s = t is provable in this system. One first shows by induction on the height of s that every Boolean law for which t is atomic is provable, using R1 for the base case (since distinct atoms are never equal) and A1 and R3 for the induction step (s an application). This proof strategy amounts to a recursive procedure for evaluating s to yield an atom. Then to prove s = t in the general case when t may be an application, use the fact that if s = t is an identity then s and t must evaluate to the same atom, call it u. So first prove s = u and t = u as above, that is, evaluate s and t using A1, R1, and R3, and then invoke R2 to infer s = t.
In A1, if we view the number n^m as the function type m→n, and subscripting mₙ as the application m(n), we can reinterpret the numbers i, j, ĵ, and i∘ĵ as functions of type i: (m→2)→2, j: m→((n→2)→2), ĵ: (n→2)→(m→2), and i∘ĵ: (n→2)→2. The definition (i∘ĵ)_v = i_(ĵ_v) in A1 then translates to (i∘ĵ)(v) = i(ĵ(v)), that is, i∘ĵ is defined to be the composition of i and ĵ understood as functions. So the content of A1 amounts to defining term application to be essentially composition, modulo the need to transpose the m-tuple j to make the types match up suitably for composition. This composition is the one in Lawvere's previously mentioned category of power sets and their functions. In this way we have translated the commuting diagrams of that category, as the equational theory of Boolean algebras, into the equational consequences of A1 as the logical representation of that particular composition law.

12.7 Underlying lattice structure


Underlying every Boolean algebra B is a partially ordered set or poset (B,≤). The partial order relation is defined by x ≤ y just when x = x∧y, or equivalently when y = x∨y. Given a set X of elements of a Boolean algebra, an upper bound on X is an element y such that for every element x of X, x ≤ y, while a lower bound on X is an element y such that for every element x of X, y ≤ x.
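The equivalence of the two definitions of ≤ can be verified exhaustively on a small power set algebra (a sketch; the choice of W = {a, b, c} is mine):

```python
from itertools import combinations

# In the power set of {a,b,c}, x ≤ y (inclusion) coincides with x = x∧y
# and with y = x∨y.
W = frozenset("abc")
subsets = [frozenset(s) for r in range(4) for s in combinations(W, r)]
for X in subsets:
    for Y in subsets:
        assert (X <= Y) == (X == X & Y) == (Y == X | Y)
```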
A sup (supremum) of X is a least upper bound on X, namely an upper bound on X that is less than or equal to every upper bound on X. Dually an inf (infimum) of X is a greatest lower bound on X. The sup of x and y always exists in the underlying poset of a Boolean algebra, being x∨y, and likewise their inf exists, namely x∧y. The empty sup is 0 (the bottom element) and the empty inf is 1 (top). It follows that every finite set has both a sup and an inf. Infinite subsets of a Boolean algebra may or may not have a sup and/or an inf; in a power set algebra they always do.
Any poset (B,≤) such that every pair x, y of elements has both a sup and an inf is called a lattice. We write x∨y for the sup and x∧y for the inf. The underlying poset of a Boolean algebra always forms a lattice. The lattice is said to be distributive when x∧(y∨z) = (x∧y)∨(x∧z), or equivalently when x∨(y∧z) = (x∨y)∧(x∨z), since either law implies the other in a lattice. These are laws of Boolean algebra, whence the underlying poset of a Boolean algebra forms a distributive lattice.
Given a lattice with a bottom element 0 and a top element 1, a pair x, y of elements is called complementary when x∧y = 0 and x∨y = 1, and we then say that y is a complement of x and vice versa. Any element x of a distributive lattice with top and bottom can have at most one complement. When every element of a lattice has a complement the lattice is called complemented. It follows that in a complemented distributive lattice, the complement of an element always exists and is unique, making complement a unary operation. Furthermore, every complemented distributive lattice forms a Boolean algebra, and conversely every Boolean algebra forms a complemented distributive lattice. This provides an alternative definition of a Boolean algebra, namely as any complemented distributive lattice. Each of these three properties can be axiomatized with finitely many equations, whence these equations taken together constitute a finite axiomatization of the equational theory of Boolean algebras.
In a class of algebras defined as all the models of a set of equations, it is usually the case that some algebras of the class satisfy more equations than just those needed to qualify them for the class. The class of Boolean algebras is unusual in that, with a single exception, every Boolean algebra satisfies exactly the Boolean identities and no more. The exception is the one-element Boolean algebra, which necessarily satisfies every equation, even x = y, and is therefore sometimes referred to as the inconsistent Boolean algebra.

12.8 Boolean homomorphisms


A Boolean homomorphism is a function h: A→B between Boolean algebras A, B such that for every Boolean operation ᵐfᵢ,

h(ᵐfᵢ(x₀,...,xₘ₋₁)) = ᵐfᵢ(h(x₀),...,h(xₘ₋₁)).

The category Bool of Boolean algebras has as objects all Boolean algebras and as morphisms the Boolean homomorphisms between them.

There exists a unique homomorphism from the two-element Boolean algebra 2 to every Boolean algebra, since homomorphisms must preserve the two constants and those are the only elements of 2. A Boolean algebra with this property is called an initial Boolean algebra. It can be shown that any two initial Boolean algebras are isomorphic, so up to isomorphism 2 is the initial Boolean algebra.
In the other direction, there may exist many homomorphisms from a Boolean algebra B to 2. Any such homomorphism partitions B into those elements mapped to 1 and those mapped to 0. The subset of B consisting of the former is called an ultrafilter of B. When B is finite its ultrafilters pair up with its atoms; one atom is mapped to 1 and the rest to 0. Each ultrafilter of B thus consists of an atom of B and all the elements above it; hence exactly half the elements of B are in the ultrafilter, and there are as many ultrafilters as atoms.
For infinite Boolean algebras the notion of ultrafilter becomes considerably more delicate. The elements greater than or equal to an atom always form an ultrafilter, but so do many other sets; for example, in the Boolean algebra of finite and cofinite sets of integers the cofinite sets form an ultrafilter even though none of them are atoms. Likewise the power set of the integers has among its ultrafilters the set of all subsets containing a given integer; there are countably many of these "standard" ultrafilters, which may be identified with the integers themselves, but there are uncountably many more nonstandard ultrafilters. These form the basis for nonstandard analysis, providing representations for such classically inconsistent objects as infinitesimals and delta functions.

12.9 Innitary extensions

Recall the definition of sup and inf from the section above on the underlying partial order of a Boolean algebra. A complete Boolean algebra is one every subset of which has both a sup and an inf, even the infinite subsets. Gaifman [1964] and Hales [1964] independently showed that infinite free complete Boolean algebras do not exist. This suggests that a logic with set-sized-infinitary operations may have class-many terms, just as a logic with finitary operations may have infinitely many terms.
There is however another approach to introducing infinitary Boolean operations: simply drop "finitary" from the definition of Boolean algebra. A model of the equational theory of the algebra of all operations on {0,1} of arity up to the cardinality of the model is called a complete atomic Boolean algebra, or CABA. (In place of this awkward restriction on arity we could allow any arity, leading to a different awkwardness, that the signature would then be larger than any set, that is, a proper class. One benefit of the latter approach is that it simplifies the definition of homomorphism between CABAs of different cardinality.) Such an algebra can be defined equivalently as a complete Boolean algebra that is atomic, meaning that every element is a sup of some set of atoms. Free CABAs exist for all cardinalities of a set V of generators, namely the power set algebra 2^(2^V), this being the obvious generalization of the finite free Boolean algebras. This neatly rescues infinitary Boolean logic from the fate the Gaifman-Hales result seemed to consign it to.
The nonexistence of free complete Boolean algebras can be traced to failure to extend the equations of Boolean logic suitably to all laws that should hold for infinitary conjunction and disjunction, in particular the neglect of distributivity in the definition of complete Boolean algebra. A complete Boolean algebra is called completely distributive when arbitrary conjunctions distribute over arbitrary disjunctions and vice versa. A Boolean algebra is a CABA if and only if it is complete and completely distributive, giving a third definition of CABA. A fourth definition is as any Boolean algebra isomorphic to a power set algebra.
A complete homomorphism is one that preserves all sups that exist, not just the finite sups, and likewise for infs. The category CABA of all CABAs and their complete homomorphisms is dual to the category of sets and their functions, meaning that it is equivalent to the opposite of that category (the category resulting from reversing all morphisms). Things are not so simple for the category Bool of Boolean algebras and their homomorphisms, which Marshall Stone showed in effect (though he lacked both the language and the conceptual framework to make the duality explicit) to be dual to the category of totally disconnected compact Hausdorff spaces, subsequently called Stone spaces.
Another infinitary class intermediate between Boolean algebras and complete Boolean algebras is the notion of a sigma-algebra. This is defined analogously to complete Boolean algebras, but with sups and infs limited to countable arity. That is, a sigma-algebra is a Boolean algebra with all countable sups and infs. Because the sups and infs are of bounded cardinality, unlike the situation with complete Boolean algebras, the Gaifman-Hales result does not apply and free sigma-algebras do exist. Unlike the situation with CABAs, however, the free countably generated sigma-algebra is not a power set algebra.

12.10 Other definitions of Boolean algebra


We have already encountered several definitions of Boolean algebra: as a model of the equational theory of the two-
element algebra, as a complemented distributive lattice, as a Boolean ring, and as a product-preserving functor from
a certain category (Lawvere). Two more definitions worth mentioning are:

Stone (1936) A Boolean algebra is the set of all clopen sets of a topological space. It is no limitation to require
the space to be a totally disconnected compact Hausdorff space, or Stone space; that is, every Boolean algebra
arises in this way, up to isomorphism. Moreover, if the two Boolean algebras formed as the clopen sets of two
Stone spaces are isomorphic, so are the Stone spaces themselves, which is not the case for arbitrary topological
spaces. This is just the reverse direction of the duality mentioned earlier from Boolean algebras to Stone spaces.
This definition is fleshed out by the next definition.

Johnstone (1982) A Boolean algebra is a filtered colimit of finite Boolean algebras.

(The circularity in this definition can be removed by replacing "finite Boolean algebra" by "finite power set" equipped
with the Boolean operations standardly interpreted for power sets.)
To put this in perspective, infinite sets arise as filtered colimits of finite sets, infinite CABAs as filtered limits of finite
power set algebras, and infinite Stone spaces as filtered limits of finite sets. Thus if one starts with the finite sets and
asks how these generalize to infinite objects, there are two ways: adding them gives ordinary or inductive sets while
multiplying them gives Stone spaces or profinite sets. The same choice exists for finite power set algebras as the
duals of finite sets: addition yields Boolean algebras as inductive objects while multiplication yields CABAs or power
set algebras as profinite objects.
A characteristic distinguishing feature is that the underlying topology of objects so constructed, when defined so as
to be Hausdorff, is discrete for inductive objects and compact for profinite objects. The topology of finite Hausdorff
spaces is always both discrete and compact, whereas for infinite spaces "discrete" and "compact" are mutually exclu-
sive. Thus when generalizing finite algebras (of any kind, not just Boolean) to infinite ones, discrete and compact
part company, and one must choose which one to retain. The general rule, for both finite and infinite algebras, is that
finitary algebras are discrete, whereas their duals are compact and feature infinitary operations. Between these two
extremes, there are many intermediate infinite Boolean algebras whose topology is neither discrete nor compact.

12.11 See also

12.12 References
Birkhoff, Garrett (1935). "On the structure of abstract algebras". Proc. Camb. Phil. Soc. 31: 433–454. ISSN
0008-1981. doi:10.1017/s0305004100013463.

Boole, George (2003) [1854]. An Investigation of the Laws of Thought. Prometheus Books. ISBN 978-1-
59102-089-9.

Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg: Physica Verlag.

Gaifman, Haim (1964). "Infinite Boolean Polynomials, I". Fundamenta Mathematicae. 54: 229–250. ISSN
0016-2736.

Givant, Steven; Halmos, Paul (2009). Introduction to Boolean Algebras. Undergraduate Texts in Mathematics,
Springer. ISBN 978-0-387-40293-2.

Grau, A.A. (1947). "Ternary Boolean algebra". Bull. Am. Math. Soc. 53 (6): 567–572. doi:10.1090/S0002-
9904-1947-08834-0.

Hales, Alfred W. (1964). "On the Non-Existence of Free Complete Boolean Algebras". Fundamenta Mathe-
maticae. 54: 45–66. ISSN 0016-2736.

Halmos, Paul (1963). Lectures on Boolean Algebras. van Nostrand. ISBN 0-387-90094-2.

--------, and Givant, Steven (1998) Logic as Algebra. Dolciani Mathematical Exposition, No. 21. Mathematical
Association of America.
Johnstone, Peter T. (1982). Stone Spaces. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-
33779-3.
Ketonen, Jussi (1978). "The structure of countable Boolean algebras". Annals of Mathematics. 108 (1): 41–89.
JSTOR 1970929. doi:10.2307/1970929.
Koppelberg, Sabine (1989). "General Theory of Boolean Algebras" in Monk, J. Donald, and Bonnet, Robert,
eds., Handbook of Boolean Algebras, Vol. 1. North Holland. ISBN 978-0-444-70261-6.
Peirce, C. S. (1989). Writings of Charles S. Peirce: A Chronological Edition: 1879–1884. Kloesel, C. J. W., ed.
Indianapolis: Indiana University Press. ISBN 978-0-253-37204-8.
Lawvere, F. William (1963). "Functorial semantics of algebraic theories". Proceedings of the National Academy
of Sciences. 50 (5): 869–873. doi:10.1073/pnas.50.5.869.

Schröder, Ernst (1890–1910). Vorlesungen über die Algebra der Logik (exakte Logik), I–III. Leipzig: B.G.
Teubner.

Sikorski, Roman (1969). Boolean Algebras (3rd. ed.). Berlin: Springer-Verlag. ISBN 978-0-387-04469-9.
Stone, M. H. (1936). "The Theory of Representation for Boolean Algebras". Transactions of the American
Mathematical Society. 40 (1): 37–111. ISSN 0002-9947. JSTOR 1989664. doi:10.2307/1989664.
Tarski, Alfred (1983). Logic, Semantics, Metamathematics, Corcoran, J., ed. Hackett. 1956 1st edition edited
and translated by J. H. Woodger, Oxford Uni. Press. Includes English translations of the following two articles:
Tarski, Alfred (1929). "Sur les classes closes par rapport à certaines opérations élémentaires". Funda-
menta Mathematicae. 16: 195–197. ISSN 0016-2736.
Tarski, Alfred (1935). "Zur Grundlegung der Booleschen Algebra, I". Fundamenta Mathematicae. 24:
177–198. ISSN 0016-2736.

Vladimirov, D.A. (1969). Boolean algebras (in Russian; German translation Boolesche Algebren, 1974).
Nauka (German translation Akademie-Verlag).
Chapter 13

Boolean conjunctive query

In the theory of relational databases, a Boolean conjunctive query is a conjunctive query without distinguished
predicates, i.e., a query in the form R1(t1) ∧ ⋯ ∧ Rn(tn), where each Ri is a relation symbol and each ti is a
tuple of variables and constants; the number of elements in ti is equal to the arity of Ri. Such a query evaluates to
either true or false depending on whether the relations in the database contain the appropriate tuples of values, i.e.
whether the conjunction is valid according to the facts in the database.
As an example, if a database schema contains the relation symbols Father (binary, who is the father of whom) and
Employed (unary, who is employed), a conjunctive query could be Father(Mark, x) ∧ Employed(x). This query
evaluates to true if there exists an individual x who is a child of Mark and employed. In other words, this query
expresses the question: does Mark have an employed child?
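A brute-force evaluator for this example query can be sketched in Python; the relation contents below are invented purely for illustration.

```python
# Toy database: each relation is a set of tuples.
father = {("Mark", "Alice"), ("Tom", "Bob")}      # Father(parent, child)
employed = {("Alice",), ("Carol",)}               # Employed(person)

def boolean_query(father_rel, employed_rel):
    """True iff some x satisfies Father(Mark, x) AND Employed(x)."""
    children_of_mark = {c for (p, c) in father_rel if p == "Mark"}
    return any((x,) in employed_rel for x in children_of_mark)

print(boolean_query(father, employed))   # True: x = Alice witnesses the query
```

Because the query has no distinguished (output) variables, the evaluator returns only a truth value, not a set of answer tuples.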

13.1 See also


Logical conjunction

Conjunctive query

13.2 References
G. Gottlob; N. Leone; F. Scarcello (2001). The complexity of acyclic conjunctive queries. Journal of the
ACM (JACM). 48 (3): 431498. doi:10.1145/382780.382783.

Chapter 14

Boolean data type

In computer science, the Boolean data type is a data type having two values (usually denoted true and false),
intended to represent the truth values of logic and Boolean algebra. It is named after George Boole, who first defined
an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional
statements, which allow different actions and change control flow depending on whether a programmer-specified
Boolean condition evaluates to true or false. It is a special case of a more general logical data type; logic need not
always be Boolean.

14.1 Generalities
In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such
as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test
Boolean-valued expressions.
Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data
type. Common Lisp uses an empty list for false, and any other value for true. C uses an integer type, where relational
expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false,
whereas the test parts of if, while, for, etc., treat any non-zero value as true.[1][2] Indeed, a Boolean variable may be
regarded (and implemented) as a numerical variable with one binary digit (bit), which can store only two values.
Booleans in computers are nevertheless most likely implemented as a full word, rather than a bit; this is usually
due to the ways computers transfer blocks of information.
Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations
such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-equivalence
(XOR, NEQV, ^, !=), and negation (NOT, ~, !).
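As a concrete illustration in one language among many, here is how Python spells these connectives, both as logical operators and, since Python's bool is an integer type, as bitwise operators:

```python
a, b = True, False

# Logical connectives on truth values:
print(a and b)    # conjunction  -> False
print(a or b)     # disjunction  -> True
print(a != b)     # exclusive or -> True
print(not a)      # negation     -> False

# Because bool is an integer type in Python, the bitwise operators
# &, |, ^ compute the same connectives when both operands are Booleans:
assert (a & b, a | b, a ^ b) == (False, True, True)
```

The distinction between logical and bitwise forms matters on general integers: `2 and 1` is `1` (truthiness), while `2 & 1` is `0` (bit arithmetic).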
In some languages, like Ruby, Smalltalk, and Alice, the true and false values belong to separate classes, i.e., True and
False, respectively, so there is no single Boolean type.
In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of Nulls, the Boolean
data type (introduced in SQL:1999) is also dened to include more than two truth values, so that SQL Booleans can
store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can also be
restricted to just TRUE and FALSE though.

14.2 ALGOL and the built-in boolean type


One of the earliest programming languages to provide an explicit boolean data type was ALGOL 60 (1960), with
values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence),
and '¬' (not). Due to input device and character set limits on many computers of the time, however, most compilers
used alternative representations for many of the operators, such as AND or 'AND'.
This approach with boolean as a built-in (either primitive or otherwise predened) data type was adopted by many
later programming languages, such as Simula 67 (1967), ALGOL 68 (1970),[3] Pascal (1970), Ada (1980), Java


(1995), and C# (2000), among others.

14.3 Fortran
The first version of FORTRAN (1957) and its successor FORTRAN II (1958) had no logical values or operations;
even the conditional IF statement took an arithmetic expression and branched to one of three locations according to
its sign; see arithmetic IF. FORTRAN IV (1962), however, followed the ALGOL 60 example by providing a Boolean
data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ.,
.GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific control character ('L')
was provided for the parsing or formatting of logical values.[4]

14.4 Lisp and Scheme


The language Lisp (1958) never had a built-in Boolean data type. Instead, conditional constructs like cond assume
that the logical value false is represented by the empty list (), which is defined to be the same as the special atom nil or
NIL; whereas any other s-expression is interpreted as true. For convenience, most modern dialects of Lisp predefine
the atom t to have value t, so that t can be used as a mnemonic notation for true.
This approach (any value can be used as a Boolean value) was retained in most Lisp dialects (Common Lisp, Scheme,
Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type
or Boolean values; although which values are interpreted as false and which are true varies from language to language.
In Scheme, for example, the false value is an atom distinct from the empty list, so the latter is interpreted as true.

14.5 Pascal, Ada, and Haskell


The language Pascal (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean
data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all
comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the
Boolean type had all the facilities which were available for enumerated types in general, such as ordering and use
as indices. In contrast, converting between Booleans and integers (or any other types) still required explicit tests or
function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages
which had enumerated types, such as Modula, Ada, and Haskell.

14.6 C, C++, Objective-C, AWK


Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are
commonly represented by integers (ints) in C programs. The comparison operators (>, ==, etc.) are defined to return
a signed integer (int) result, either 0 (for false) or 1 (for true). Logical operators (&&, ||, !, etc.) and condition-testing
statements (if, while) assume that zero is false and all other values are true.
After enumerated types (enums) were added to the American National Standards Institute version of C, ANSI C
(1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However,
enumerated types are equivalent to integers according to the language standards; so the effective identity between
Booleans and integers is still valid for C programs.
Standard C (since C99) provides a boolean type, called _Bool. By including the header stdbool.h, one can use the
more intuitive name bool and the constants true and false. The language guarantees that any two true values will
compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave
as integers, can be stored in integer variables, and used anywhere integers would be valid, including in indexing,
arithmetic, parsing, and formatting. This approach (Boolean values are just integers) has been retained in all later
versions of C.
C++ has a separate Boolean data type bool, but with automatic conversions from scalar and pointer values that are
very similar to those of C. This approach was adopted also by many later languages, especially by some scripting
languages such as AWK.

Objective-C also has a separate Boolean data type BOOL, with possible values being YES or NO, equivalents of
true and false respectively.[5] Also, in Objective-C compilers that support C99, C's _Bool type can be used, since
Objective-C is a superset of C.

14.7 Perl and Lua


Perl has no boolean data type. Instead, any value can behave as boolean in a boolean context (the condition of an if or
while statement, an argument of && or ||, etc.). The number 0, the strings "0" and "", the empty list (), and the special
value undef evaluate to false.[6] All else evaluates to true.
Lua has a boolean data type, but non-boolean values can also behave as booleans. The non-value nil evaluates to false,
whereas a value of any other data type evaluates to true, regardless of its value.

14.8 Python, Ruby, and JavaScript


Python, from version 2.3 forward, has a bool type which is a subclass of int, the standard integer type.[7] It has two
possible values: True and False, which are special versions of 1 and 0 respectively and behave as such in arithmetic
contexts. Also, a numeric value of zero (integer or fractional), the null value (None), the empty string, and empty
containers (i.e. lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default.[8]
Classes can define how their instances are treated in a Boolean context through the special method __nonzero__
(Python 2) or __bool__ (Python 3). For containers, __len__ (the special method for determining the length of
containers) is used if the explicit Boolean conversion method is not defined.
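These rules can be demonstrated directly; the class names below are invented for illustration:

```python
# How Python decides truth in a Boolean context: __bool__ if defined,
# otherwise __len__ for containers, otherwise the object counts as true.
class Flag:
    def __init__(self, on):
        self.on = on
    def __bool__(self):          # Python 3 name; __nonzero__ in Python 2
        return self.on

class Bag:
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):           # used for truth testing when __bool__ is absent
        return len(self.items)

assert bool(Flag(True)) and not bool(Flag(False))
assert not Bag([]) and bool(Bag([1]))
assert True + True == 2          # bool is a subclass of int
```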
In Ruby, in contrast, only nil (Ruby's null value) and a special false object are false; all else (including the integer 0
and empty arrays) is true.
In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[9] are sometimes called falsy, and their
complement, truthy, to distinguish between strictly type-checked and coerced Booleans.[10] Languages such as PHP
also use this approach.

14.9 SQL
The SQL:1999 standard introduced a BOOLEAN data type as an optional feature (T031). When restricted by a NOT
NULL constraint, a SQL BOOLEAN behaves like Booleans in other languages. However, in SQL the BOOLEAN
type is nullable by default like all other SQL data types, meaning it can have the special null value also. Although
the SQL standard defines three literals for the BOOLEAN type (TRUE, FALSE, and UNKNOWN), it also says
that the NULL BOOLEAN and UNKNOWN "may be used interchangeably to mean exactly the same thing".[11][12]
This has caused some controversy, because the identification subjects UNKNOWN to the equality comparison rules
for NULL. More precisely, UNKNOWN = UNKNOWN is not TRUE but UNKNOWN/NULL.[13] As of 2012, few
major SQL systems implement the T031 feature.[14] PostgreSQL is a notable exception, although it implements no
UNKNOWN literal; NULL can be used instead.[15]
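SQL's three-valued connectives can be sketched in Python, with None standing in for UNKNOWN/NULL. This is a model of the semantics only, not an actual SQL engine, and the function names are ours:

```python
# SQL-style three-valued logic: True, False, and None (UNKNOWN/NULL).
def sql_and(p, q):
    if p is False or q is False:     # FALSE absorbs AND, even against UNKNOWN
        return False
    if p is None or q is None:
        return None
    return True

def sql_not(p):
    return None if p is None else not p

def sql_eq(a, b):
    """SQL equality: any comparison involving NULL yields UNKNOWN."""
    if a is None or b is None:
        return None
    return a == b

assert sql_eq(None, None) is None     # the identification discussed above:
                                      # UNKNOWN = UNKNOWN is UNKNOWN, not TRUE
assert sql_and(False, None) is False  # FALSE AND UNKNOWN = FALSE
assert sql_and(True, None) is None    # TRUE AND UNKNOWN = UNKNOWN
```

The first assertion is exactly the controversial consequence mentioned in the text: once UNKNOWN is identified with NULL, it no longer compares equal to itself.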

14.10 See also


true and false (commands), for shell scripting
Shannon's expansion
stdbool.h, C99 definitions for boolean

14.11 References
[1] Kernighan, Brian W; Ritchie, Dennis M (1978). The C Programming Language (1st ed.). Englewood Cliffs, NJ: Prentice
Hall. p. 41. ISBN 0-13-110163-3.

[2] Plauger, PJ; Brodie, Jim (1992) [1989]. ANSI and ISO Standard C Programmer's Reference. Microsoft Press. pp. 86–93.
ISBN 1-55615-359-7.

[3] "Report on the Algorithmic Language ALGOL 68, Section 10.2.2" (PDF). August 1968. Retrieved 30 April 2007.

[4] Digital Equipment Corporation, DECSystem10 FORTRAN IV Programmer's Reference Manual. Reprinted in Mathematical
Languages Handbook. Online version accessed 2011-11-16.

[5] https://developer.apple.com/library/ios/#documentation/cocoa/conceptual/ProgrammingWithObjectiveC/FoundationTypesandCollections/
FoundationTypesandCollections.html

[6] perlsyn - Perl Syntax / Truth and Falsehood. Retrieved 10 September 2013.

[7] Van Rossum, Guido (3 April 2002). PEP 285 -- Adding a bool type. Retrieved 15 May 2013.

[8] Expressions. Python v3.3.2 documentation. Retrieved 15 May 2013.

[9] ECMAScript Language Specication (PDF). p. 43.

[10] The Elements of JavaScript Style. Douglas Crockford. Retrieved 5 March 2011.

[11] C. Date (2011). SQL and Relational Theory: How to Write Accurate SQL Code. O'Reilly Media, Inc. p. 83. ISBN
978-1-4493-1640-2.

[12] ISO/IEC 9075-2:2011 4.5

[13] Martyn Prigmore (2007). Introduction to Databases With Web Applications. Pearson Education Canada. p. 197. ISBN
978-0-321-26359-9.

[14] Troels Arvin, Survey of BOOLEAN data type implementation

[15] http://www.postgresql.org/docs/current/static/datatype-boolean.html
Chapter 15

Boolean domain

In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpre-
tations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually
written as {0, 1},[1][2][3] {false, true}, {F, T},[4] {⊥, ⊤}[5] or B.[6][7]
The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The
initial object in the category of bounded lattices is a Boolean domain.
In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming
languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true.
However, many programming languages do not have a Boolean data type in the strict sense. In C or BASIC, for
example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that
can take these values can also take any other numerical values.

15.1 Generalizations
The Boolean domain {0, 1} can be replaced by the unit interval [0, 1], in which case, rather than only taking values 0
or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x,
conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law to be
1 − (1 − x)(1 − y).
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic
and probabilistic logic. In these interpretations, a value is interpreted as the "degree of truth": to what extent a
proposition is true, or the probability that the proposition is true.
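These generalized operations are easy to state in code; a minimal Python sketch (function names are ours):

```python
# Fuzzy generalizations of the Boolean operations to the unit interval [0, 1].
def f_not(x):
    return 1 - x

def f_and(x, y):
    return x * y

def f_or(x, y):                      # defined via De Morgan's law
    return 1 - (1 - x) * (1 - y)

# On the endpoints 0 and 1 these agree with the classical operations:
for x in (0, 1):
    for y in (0, 1):
        assert f_and(x, y) == min(x, y)
        assert f_or(x, y) == (1 if x or y else 0)
assert f_not(0) == 1 and f_not(1) == 0

print(f_and(0.5, 0.8))   # 0.4, a degree of truth strictly between 0 and 1
```

Intermediate inputs thus yield intermediate outputs, which is exactly what the fuzzy and probabilistic readings require.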

15.2 See also


Boolean-valued function

15.3 Notes
[1] Dirk van Dalen, Logic and Structure. Springer (2004), page 15.

[2] David Makinson, Sets, Logic and Maths for Computing. Springer (2008), page 13.

[3] George S. Boolos and Richard C. Jeffrey, Computability and Logic. Cambridge University Press (1980), page 99.

[4] Elliott Mendelson, Introduction to Mathematical Logic (4th. ed.). Chapman & Hall/CRC (1997), page 11.

[5] Eric C. R. Hehner, A Practical Theory of Programming. Springer (1993, 2010), page 3.

[6] Ian Parberry (1994). Circuit Complexity and Neural Networks. MIT Press. p. 65. ISBN 978-0-262-16148-0.


[7] Jordi Cortadella; et al. (2002). Logic Synthesis for Asynchronous Controllers and Interfaces. Springer Science & Business
Media. p. 73. ISBN 978-3-540-43152-7.
Chapter 16

Boolean expression

In computer science, a Boolean expression is an expression in a programming language that produces a Boolean
value when evaluated, i.e. one of true or false. A Boolean expression may be composed of a combination of the
Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.[1]
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.[2]

16.1 Boolean operators


Most programming languages have the Boolean operators OR, AND and NOT; in C and some newer languages, these
are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively,
while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde).[3] In the mathematical literature
the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (cup), "∧" (cap) and "¬" or "′" (prime).

16.2 Examples
The expression 5 > 3 is evaluated as true.
The expression 3 > 5 is evaluated as false.
5>=3 and 3<=5 are equivalent Boolean expressions, both of which are evaluated as true.
typeof true returns boolean and typeof false returns boolean
Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y).
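The same examples, entered in Python (any language with Boolean expressions would serve equally well):

```python
print(5 > 3)             # True
print(3 > 5)             # False
print(5 >= 3, 3 <= 5)    # True True -- equivalent Boolean expressions
x, y = 4, 2
print(x > 3, x > y)      # True True -- expressions containing variables
print(type(5 > 3))       # <class 'bool'> -- the result is Boolean-typed
```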

16.3 See also


Expression (computer science)
Expression (mathematics)

16.4 References
[1] Gries, David; Schneider, Fred B. (1993), "Chapter 2. Boolean Expressions", A Logical Approach to Discrete Math, Monographs
in Computer Science, Springer, p. 25, ISBN 9780387941158.
[2] van Melkebeek, Dieter (2000), Randomness and Completeness in Computational Complexity, Lecture Notes in Computer
Science, 1950, Springer, p. 22, ISBN 9783540414926.
[3] E.g. for Java see Brogden, William B.; Green, Marcus (2003), Java 2 Programmer, Que Publishing, p. 45, ISBN
9780789728616.


16.5 External links


The Calculus of Logic, by George Boole, Cambridge and Dublin Mathematical Journal Vol. III (1848), pp.
183–98.
Chapter 17

Boolean function

Not to be confused with Binary function.

In mathematics and logic, a (finitary) Boolean function (or switching function) is a function of the form f : B^k →
B, where B = {0, 1} is a Boolean domain and k is a non-negative integer called the arity of the function. In the case
where k = 0, the function is essentially a constant element of B.
Every k-ary Boolean function can be expressed as a propositional formula in k variables x1, …, xk, and two propo-
sitional formulas are logically equivalent if and only if they express the same Boolean function. There are 2^(2^k) k-ary
functions for every k.
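The count 2^(2^k) can be verified by enumerating truth tables; a short Python sketch (function names are ours):

```python
from itertools import product

# A k-ary Boolean function is determined by its value on each of the 2**k
# input rows of its truth table, so there are 2**(2**k) such functions.
def boolean_functions(k):
    rows = list(product([0, 1], repeat=k))       # the 2**k input rows
    tables = product([0, 1], repeat=len(rows))   # one output bit per row
    return [dict(zip(rows, outs)) for outs in tables]

assert len(boolean_functions(1)) == 4    # const 0, const 1, identity, negation
assert len(boolean_functions(2)) == 16   # AND, OR, XOR, NAND, ...

# For example, AND is recovered as one of the 16 binary functions:
AND = next(f for f in boolean_functions(2)
           if all(f[(a, b)] == (a & b) for a, b in product([0, 1], repeat=2)))
```

Each dictionary here is a truth table, i.e. an explicit representation of one Boolean function; the logical-equivalence claim in the text amounts to saying that propositional formulas denote the same function exactly when they produce the same table.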

17.1 Boolean functions in applications


A Boolean function describes how to determine a Boolean value output based on some logical calculation from
Boolean inputs. Such functions play a basic role in questions of complexity theory as well as the design of circuits
and chips for digital computers. The properties of Boolean functions play a critical role in cryptography, particularly
in the design of symmetric key algorithms (see substitution box).
Boolean functions are often represented by sentences in propositional logic, and sometimes as multivariate polynomials
over GF(2), but more efficient representations are binary decision diagrams (BDD), negation normal forms, and
propositional directed acyclic graphs (PDAG).
In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is
applied to solve problems in social choice theory.

17.2 See also


Algebra of sets

Boolean algebra

Boolean algebra topics

Boolean domain

Boolean-valued function

Logical connective

Truth function

Truth table

Symmetric Boolean function


Decision tree model

Evasive Boolean function


Indicator function

Balanced boolean function


Read-once function

3-ary Boolean functions

17.3 References
Crama, Y.; Hammer, P. L. (2011), Boolean Functions, Cambridge University Press.
Hazewinkel, Michiel, ed. (2001) [1994], Boolean function, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Janković, Dragan; Stanković, Radomir S.; Moraga, Claudio (November 2003). "Arithmetic expressions opti-
misation using dual polarity property" (PDF). Serbian Journal of Electrical Engineering. 1 (71-80, number 1).
Archived from the original (PDF) on 2016-03-05. Retrieved 2015-06-07.

Mano, M. M.; Ciletti, M. D. (2013), Digital Design, Pearson.


Chapter 18

Boolean prime ideal theorem

In mathematics, a prime ideal theorem guarantees the existence of certain types of subsets in a given algebra. A
common example is the Boolean prime ideal theorem, which states that ideals in a Boolean algebra can be extended
to prime ideals. A variation of this statement for filters on sets is known as the ultrafilter lemma. Other theorems
are obtained by considering different mathematical structures with appropriate notions of ideals, for example, rings
and prime ideals (of ring theory), or distributive lattices and maximal ideals (of order theory). This article focuses
on prime ideal theorems from order theory.
Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from
the axioms of Zermelo–Fraenkel set theory without the axiom of choice (abbreviated ZF). Instead, some of the
statements turn out to be equivalent to the axiom of choice (AC), while others (the Boolean prime ideal theorem,
for instance) represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF
and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations
BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom.

18.1 Prime ideal theorems

An order ideal is a (non-empty) directed lower set. If the considered partially ordered set (poset) has binary suprema
(a.k.a. joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower set I
that is closed for binary suprema (i.e. x, y in I imply x ∨ y in I). An ideal I is prime if its set-theoretic complement
in the poset is a filter. Ideals are proper if they are not equal to the whole poset.
Historically, the first statement relating to later prime ideal theorems was in fact referring to filters, i.e. subsets that
are ideals with respect to the dual order. The ultrafilter lemma states that every filter on a set is contained within
some maximal (proper) filter, an ultrafilter. Recall that filters on sets are proper filters of the Boolean algebra of its
powerset. In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime
filters (i.e. filters that with each union of subsets X and Y contain also X or Y) coincide. The dual of this statement
thus assures that every ideal of a powerset is contained in a prime ideal.
The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong
form. Weak prime ideal theorems state that every non-trivial algebra of a certain class has at least one prime ideal.
In contrast, strong prime ideal theorems require that every ideal that is disjoint from a given filter can be extended
to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different
substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion
that "PIT" holds is usually taken as the assertion that the corresponding statement for Boolean algebras (BPI) is valid.
Another variation of similar theorems is obtained by replacing each occurrence of prime ideal by maximal ideal. The
corresponding maximal ideal theorems (MIT) are often, though not always, stronger than their PIT equivalents.


18.2 Boolean prime ideal theorem


The Boolean prime ideal theorem is the strong prime ideal theorem for Boolean algebras. Thus the formal statement
is:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some prime ideal of B that is disjoint from F.

The weak prime ideal theorem for Boolean algebras simply states:

Every Boolean algebra contains a prime ideal.

We refer to these statements as the weak and strong BPI. The two are equivalent, as the strong BPI clearly implies the
weak BPI, and the reverse implication can be achieved by using the weak BPI to find prime ideals in the appropriate
quotient algebra.
The BPI can be expressed in various ways. For this purpose, recall the following theorem:
For any ideal I of a Boolean algebra B, the following are equivalent:

I is a prime ideal.

I is a maximal ideal, i.e. for any proper ideal J, if I is contained in J then I = J.

For every element a of B, I contains exactly one of {a, ¬a}.

This theorem is a well-known fact for Boolean algebras. Its dual establishes the equivalence of prime filters and
ultrafilters. Note that the last property is in fact self-dual; only the prior assumption that I is an ideal gives the full
characterization. All of the implications within this theorem can be proven in ZF.
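On a finite Boolean algebra the equivalence can even be checked mechanically. The following brute-force Python sketch (all names are ours) verifies that the three characterizations coincide for every proper ideal of the four-element Boolean algebra of subsets of {0, 1}:

```python
from itertools import chain, combinations

U = frozenset({0, 1})
elems = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

def subsets_of(xs):
    xs = list(xs)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def is_ideal(I):
    return (frozenset() in I
            and all(b in I for a in I for b in elems if b <= a)  # lower set
            and all((a | b) in I for a in I for b in I))         # join-closed

ideals = [I for I in subsets_of(elems) if is_ideal(I)]
proper = [I for I in ideals if U not in I]

def is_prime(I):     # complement is a filter: a AND b in I implies a or b in I
    return all(a in I or b in I for a in elems for b in elems if (a & b) in I)

def is_maximal(I):   # no proper ideal strictly contains I
    return not any(I < J for J in proper)

def one_of_pair(I):  # exactly one of a and its complement lies in I
    return all((a in I) != ((U - a) in I) for a in elems)

for I in proper:
    assert is_prime(I) == is_maximal(I) == one_of_pair(I)
```

Here the proper ideals are {∅}, {∅, {0}} and {∅, {1}}; the latter two satisfy all three conditions, the first satisfies none, as the theorem predicts.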
Thus the following (strong) maximal ideal theorem (MIT) for Boolean algebras is equivalent to BPI:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some maximal ideal of B that is disjoint from F.

Note that one requires "global" maximality, not just maximality with respect to being disjoint from F. Yet, this
variation yields another equivalent characterization of BPI:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some ideal of B that is maximal among all ideals disjoint from F.

The fact that this statement is equivalent to BPI is easily established by noting the following theorem: For any
distributive lattice L, if an ideal I is maximal among all ideals of L that are disjoint to a given lter F, then I is
a prime ideal. The proof for this statement (which can again be carried out in ZF set theory) is included in the article
on ideals. Since any Boolean algebra is a distributive lattice, this shows the desired implication.
All of the above statements are now easily seen to be equivalent. Going even further, one can exploit the fact that the dual
orders of Boolean algebras are exactly the Boolean algebras themselves. Hence, when taking the equivalent duals of
all former statements, one ends up with a number of theorems that equally apply to Boolean algebras, but where every
occurrence of ideal is replaced by filter. It is worth noting that for the special case where the Boolean algebra under
consideration is a powerset with the subset ordering, the maximal filter theorem is called the ultrafilter lemma.
Summing up, for Boolean algebras, the weak and strong MIT, the weak and strong PIT, and these statements with
filters in place of ideals are all equivalent. It is known that all of these statements are consequences of the Axiom of
Choice, AC (the easy proof makes use of Zorn's lemma), but cannot be proven in ZF (Zermelo–Fraenkel set theory
without AC), if ZF is consistent. Yet, the BPI is strictly weaker than the axiom of choice, though the proof of this
statement, due to J. D. Halpern and Azriel Lévy, is rather non-trivial.

18.3 Further prime ideal theorems

The prototypical properties that were discussed for Boolean algebras in the above section can easily be modified to
include more general lattices, such as distributive lattices or Heyting algebras. However, in these cases maximal ideals
are different from prime ideals, and the relation between PITs and MITs is not obvious.
Indeed, it turns out that the MITs for distributive lattices and even for Heyting algebras are equivalent to the axiom
of choice. On the other hand, it is known that the strong PIT for distributive lattices is equivalent to BPI (i.e. to the
MIT and PIT for Boolean algebras). Hence this statement is strictly weaker than the axiom of choice. Furthermore,
observe that Heyting algebras are not self-dual, and thus using filters in place of ideals yields different theorems in
this setting. Perhaps surprisingly, the MIT for the duals of Heyting algebras is not stronger than BPI, which is in sharp
contrast to the above-mentioned MIT for Heyting algebras.
Finally, prime ideal theorems also exist for other (not order-theoretical) abstract algebras. For example, the MIT
for rings implies the axiom of choice. This situation requires replacing the order-theoretic term filter by other
concepts; for rings, a multiplicatively closed subset is appropriate.

18.4 The ultrafilter lemma

A filter on a set X is a nonempty collection of nonempty subsets of X that is closed under finite intersection and under
superset. An ultrafilter is a maximal filter. The ultrafilter lemma states that every filter on a set X is a subset of
some ultrafilter on X.[1] This lemma is most often used in the study of topology. An ultrafilter that does not contain
finite sets is called non-principal. The ultrafilter lemma, and in particular the existence of non-principal ultrafilters
(consider the filter of all sets with finite complements), follows easily from Zorn's lemma.
The ultrafilter lemma is equivalent to the Boolean prime ideal theorem, with the equivalence provable in ZF set theory
without the axiom of choice. The idea behind the proof is that the subsets of any set form a Boolean algebra partially
ordered by inclusion, and any Boolean algebra is representable as an algebra of sets by Stone's representation theorem.

18.5 Applications

Intuitively, the Boolean prime ideal theorem states that there are "enough" prime ideals in a Boolean algebra in
the sense that we can extend every ideal to a maximal one. This is of practical importance for proving Stone's
representation theorem for Boolean algebras, a special case of Stone duality, in which one equips the set of all prime
ideals with a certain topology and can indeed regain the original Boolean algebra (up to isomorphism) from this data.
Furthermore, it turns out that in applications one can freely choose either to work with prime ideals or with prime
filters, because every ideal uniquely determines a filter: the set of all Boolean complements of its elements. Both
approaches are found in the literature.
Many other theorems of general topology that are often said to rely on the axiom of choice are in fact equivalent to
BPI. For example, the theorem that a product of compact Hausdorff spaces is compact is equivalent to it. If we leave
out "Hausdorff", we get a theorem equivalent to the full axiom of choice.
A not too well known application of the Boolean prime ideal theorem is the existence of a non-measurable set[2] (the
example usually given is the Vitali set, which requires the axiom of choice). From this and the fact that the BPI is
strictly weaker than the axiom of choice, it follows that the existence of non-measurable sets is strictly weaker than
the axiom of choice.
In linear algebra, the Boolean prime ideal theorem can be used to prove that any two bases of a given vector space
have the same cardinality.

18.6 See also

List of Boolean algebra topics



18.7 Notes
[1] Halpern, James D. (1966), Bases in Vector Spaces and the Axiom of Choice, Proceedings of the American Mathematical
Society, American Mathematical Society, 17 (3): 670–673, JSTOR 2035388, doi:10.1090/S0002-9939-1966-0194340-1.

[2] Sierpiński, Wacław (1938), Fonctions additives non complètement additives et fonctions non mesurables, Fundamenta
Mathematicae, 30: 96–99

18.8 References
Davey, B. A.; Priestley, H. A. (2002), Introduction to Lattices and Order (2nd ed.), Cambridge University Press,
ISBN 978-0-521-78451-1.

An easy-to-read introduction, showing the equivalence of the PIT for Boolean algebras and distributive lattices.

Johnstone, Peter (1982), Stone Spaces, Cambridge studies in advanced mathematics, 3, Cambridge University
Press, ISBN 978-0-521-33779-3.

The theory in this book often requires choice principles. The notes on various chapters discuss the general
relation of the theorems to PIT and MIT for various structures (though mostly lattices) and give pointers
to further literature.

Banaschewski, B. (1983), The power of the ultrafilter theorem, Journal of the London Mathematical Society
(2nd series), 27 (2): 193–202, doi:10.1112/jlms/s2-27.2.193.

Discusses the status of the ultrafilter lemma.

Erné, M. (2000), Prime ideal theory for general algebras, Applied Categorical Structures, 8: 115–144, doi:10.1023/A:100861192

Gives many equivalent statements for the BPI, including prime ideal theorems for other algebraic structures.
PITs are considered as special instances of separation lemmas.
Chapter 19

Boolean ring

In mathematics, a Boolean ring R is a ring for which x² = x for all x in R,[1][2][3] such as the ring of integers modulo
2. That is, R consists only of idempotent elements.[4][5]
Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet
∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨, which would constitute a
semiring). Boolean rings are named after the founder of Boolean algebra, George Boole.

19.1 Notations

There are at least four different and incompatible systems of notation for Boolean rings and algebras.

In commutative algebra the standard notation is to use x + y = (x ∧ ¬y) ∨ (¬x ∧ y) for the ring sum of x and
y, and use xy = x ∧ y for their product.

In logic, a common notation is to use x ∧ y for the meet (same as the ring product) and use x ∨ y for the join,
given in terms of ring notation (given just above) by x + y + xy.

In set theory and logic it is also common to use x · y for the meet, and x + y for the join x ∨ y. This use of + is
different from the use in ring theory.

A rare convention is to use xy for the product and x ⊕ y for the ring sum, in an effort to avoid the ambiguity of
+.

Historically, the term Boolean ring has been used to mean a Boolean ring possibly without an identity, and
Boolean algebra has been used to mean a Boolean ring with an identity. The existence of the identity is necessary to
consider the ring as an algebra over the field of two elements: otherwise there cannot be a (unital) ring homomorphism
of the field of two elements into the Boolean ring. (This is the same as the old use of the terms ring and algebra
in measure theory.[note 1])

19.2 Examples

One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference,
and the multiplication is intersection. As another example, we can also consider the set of all finite or cofinite subsets
of X, again with symmetric difference and intersection as operations. More generally, with these operations any field
of sets is a Boolean ring. By Stone's representation theorem every Boolean ring is isomorphic to a field of sets (treated
as a ring with these operations).
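These ring operations can be checked mechanically. A minimal Python sketch over the power set of a small set (the base set X = {1, 2, 3} and its size are arbitrary illustrative choices):

```python
from itertools import combinations

X = frozenset({1, 2, 3})
elements = [frozenset(s) for r in range(len(X) + 1)
            for s in combinations(X, r)]     # the power set of X

add = lambda a, b: a ^ b    # ring addition: symmetric difference
mul = lambda a, b: a & b    # ring multiplication: intersection

# Every element is idempotent: x * x = x.
assert all(mul(x, x) == x for x in elements)
# The empty set is the ring's 0, and X itself is the ring's 1.
assert all(add(x, frozenset()) == x for x in elements)
assert all(mul(x, X) == x for x in elements)
# Multiplication distributes over addition.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in elements for b in elements for c in elements)
```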


Venn diagrams for the Boolean operations of conjunction, disjunction, and complement

19.3 Relation to Boolean algebras


Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote
ring addition by ⊕, a symbol that is often used to denote exclusive or.
Given a Boolean ring R, for x and y in R we can define

x ∧ y = xy,

x ∨ y = x ⊕ y ⊕ xy,

¬x = 1 ⊕ x.

These operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every
Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus:

xy = x ∧ y,

x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y).

If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a
ring, the result is the original ring. The analogous result holds beginning with a Boolean algebra.
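The round trip can be verified directly on the power-set ring; a small Python sketch (the two-element base set is an arbitrary choice):

```python
X = frozenset({1, 2})
subs = [frozenset(), frozenset({1}), frozenset({2}), X]

# Ring operations on the power-set Boolean ring.
r_add = lambda a, b: a ^ b            # symmetric difference
r_mul = lambda a, b: a & b            # intersection
one = X

# Ring -> algebra, as in the definitions above.
meet = lambda a, b: r_mul(a, b)
join = lambda a, b: r_add(r_add(a, b), r_mul(a, b))
comp = lambda a: r_add(one, a)

# Algebra -> ring (the reverse translation).
r_mul2 = lambda a, b: meet(a, b)
r_add2 = lambda a, b: meet(join(a, b), comp(meet(a, b)))

for a in subs:
    for b in subs:
        # The round trip recovers the original ring operations ...
        assert r_mul2(a, b) == r_mul(a, b)
        assert r_add2(a, b) == r_add(a, b)
        # ... and the derived algebra operations are union and complement.
        assert join(a, b) == a | b
        assert comp(a) == X - a
```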
A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding
Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal (prime ring ideal, maximal ring ideal) if
and only if it is an order ideal (prime order ideal, maximal order ideal) of the Boolean algebra. The quotient ring of
a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo
the corresponding order ideal.

19.4 Properties of Boolean rings


Every Boolean ring R satisfies x ⊕ x = 0 for all x in R, because we know

x ⊕ x = (x ⊕ x)² = x² ⊕ x² ⊕ x² ⊕ x² = x ⊕ x ⊕ x ⊕ x

and since (R, ⊕) is an abelian group, we can subtract x ⊕ x from both sides of this equation, which gives x ⊕ x = 0. A
similar proof shows that every Boolean ring is commutative:

x ⊕ y = (x ⊕ y)² = x² ⊕ xy ⊕ yx ⊕ y² = x ⊕ xy ⊕ yx ⊕ y

and this yields xy ⊕ yx = 0, which means xy = yx (using the first property above).
The property x ⊕ x = 0 shows that any Boolean ring is an associative algebra over the field F2 with two elements, in
just one way. In particular, any finite Boolean ring has as cardinality a power of two. Not every associative algebra
with identity over F2 is a Boolean ring: consider for instance the polynomial ring F2[X].
The quotient ring R/I of any Boolean ring R modulo any ideal I is again a Boolean ring. Likewise, any subring of a
Boolean ring is a Boolean ring.
Every prime ideal P in a Boolean ring R is maximal: the quotient ring R/P is an integral domain and also a Boolean
ring, so it is isomorphic to the field F2, which shows the maximality of P. Since maximal ideals are always prime,
prime ideals and maximal ideals coincide in Boolean rings.
Boolean rings are von Neumann regular rings.
Boolean rings are absolutely flat: this means that every module over them is flat.
Every finitely generated ideal of a Boolean ring is principal (indeed, (x, y) = (x + y + xy)).
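The closing identity can be checked mechanically: in any Boolean ring, x·(x + y + xy) = x and y·(x + y + xy) = y, so both generators lie in the principal ideal generated by x + y + xy. A Python sketch over the power-set ring (where x + y + xy is the union x ∪ y):

```python
from itertools import combinations

X = frozenset({1, 2, 3})
elements = [frozenset(s) for r in range(len(X) + 1)
            for s in combinations(X, r)]
add = lambda a, b: a ^ b    # symmetric difference
mul = lambda a, b: a & b    # intersection

for x in elements:
    for y in elements:
        g = add(add(x, y), mul(x, y))   # g = x + y + xy; here g = x | y
        assert g == x | y
        # x = x*g and y = y*g, so both lie in the ideal generated by g.
        assert mul(x, g) == x and mul(y, g) == y
```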

19.5 Unification

Unification in Boolean rings is decidable,[6] that is, algorithms exist to solve arbitrary equations over Boolean rings.
Both unification and matching in finitely generated free Boolean rings are NP-complete, and both are NP-hard in finitely
presented Boolean rings.[7] (In fact, as any unification problem f(X) = g(X) in a Boolean ring can be rewritten as the
matching problem f(X) + g(X) = 0, the problems are equivalent.)
Unification in Boolean rings is unitary if all the uninterpreted function symbols are nullary and finitary otherwise
(i.e. if the function symbols not occurring in the signature of Boolean rings are all constants, then there exists a most
general unifier, and otherwise the minimal complete set of unifiers is finite).[8]

19.6 See also


Ring-sum normal form

19.7 Notes
[1] When a Boolean ring has an identity, then a complement operation becomes definable on it, and a key characteristic of the
modern definitions of both Boolean algebra and sigma-algebra is that they have complement operations.

19.8 References
[1] Fraleigh (1976, p. 200)

[2] Herstein (1964, p. 91)

[3] McCoy (1968, p. 46)

[4] Fraleigh (1976, p. 25)

[5] Herstein (1964, p. 224)

[6] Martin, U.; Nipkow, T. (1986). Unification in Boolean Rings. In Jörg H. Siekmann. Proc. 8th CADE. LNCS. 230.
Springer. pp. 506–513.

[7] Kandri-Rody, A., Kapur, D., and Narendran, P., An ideal-theoretic approach to word problems and unification problems
over finitely presented commutative algebras, Proc. of the First Conference on Rewriting Techniques and Applications,
Dijon, France, May 1985, LNCS 202, Springer Verlag, New York, 345–364.

[8] A. Boudet; J.-P. Jouannaud; M. Schmidt-Schauß (1989). Unification of Boolean Rings and Abelian Groups (PDF).
Journal of Symbolic Computation. 8: 449–477. doi:10.1016/s0747-7171(89)80054-9.

19.9 Further reading


Atiyah, Michael Francis; Macdonald, I. G. (1969), Introduction to Commutative Algebra, Westview Press, ISBN
978-0-201-40751-8
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-
201-01984-1

Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
McCoy, Neal H. (1968), Introduction To Modern Algebra (Revised ed.), Boston: Allyn and Bacon, LCCN
68015225
Ryabukhin, Yu. M. (2001) [1994], Boolean ring, in Hazewinkel, Michiel, Encyclopedia of Mathematics,
Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

19.10 External links


John Armstrong, Boolean Rings
Chapter 20

Boolean satisfiability algorithm heuristics

Given a Boolean expression B with V = {v0, . . . , vn} variables, finding an assignment V* of the variables such that
B(V*) is true is called the Boolean satisfiability problem, frequently abbreviated SAT, and is seen as the canonical
NP-complete problem.
Although no algorithm is known to solve SAT in polynomial time, there are classes of SAT problems which
do have efficient algorithms that solve them. These classes of problems arise from many practical problems in AI
planning, circuit testing, and software verification.[1][2] Research on constructing efficient SAT solvers has been based
on various principles such as resolution, search, local search and random walk, binary decisions, and Stålmarck's
algorithm.[2]
Some of these algorithms are deterministic, while others may be stochastic.
As there exist polynomial-time algorithms to convert any Boolean expression to conjunctive normal form, such as
Tseitin's algorithm, posing SAT problems in CNF does not change their computational difficulty. SAT problems are
canonically expressed in CNF because CNF has certain properties that can help prune the search space and speed up
the search process.[2]

20.1 Branching heuristics in conflict-driven algorithms[2]


One of the cornerstones of conflict-driven clause learning SAT solvers is the DPLL algorithm. The algorithm
works by iteratively assigning free variables, and when the algorithm encounters a bad assignment, it backtracks
to a previous iteration and chooses a different assignment of variables. It relies on a branching heuristic to pick
the next free variable assignment; the branching algorithm effectively turns choosing the variable assignment into a
decision tree. Different implementations of this heuristic produce markedly different decision trees, and thus have a
significant effect on the efficiency of the solver.
Early branching heuristics (Böhm's heuristic, the Maximum Occurrences on Minimum sized clauses heuristic, and the
Jeroslow–Wang heuristic) can be regarded as greedy algorithms. Their basic premise is to choose a free variable
assignment that will satisfy the most already unsatisfied clauses in the Boolean expression. However, as Boolean
expressions get larger, more complicated, or more structured, these heuristics fail to capture useful information about
these problems that could improve efficiency; they often get stuck in local maxima or do not consider the distribution
of variables. Additionally, larger problems require more processing, as the operation of counting free variables in
unsatisfied clauses dominates the run-time.
Another heuristic called Variable State Independent Decaying Sum (VSIDS) attempts to score each variable. VSIDS
starts by looking at small portions of the Boolean expression and assigning each phase of a variable (a variable and its
negated complement) a score proportional to the number of clauses that variable phase is in. As VSIDS progresses
and searches more parts of the Boolean expression, periodically all scores are divided by a constant. This discounts
the effect of the presence of variables in earlier-found clauses in favor of variables with a greater presence in more
recent clauses. VSIDS will select the variable phase with the highest score to determine where to branch.
VSIDS is quite effective because the scores of variable phases are independent of the current variable assignment, so
backtracking is much easier. Further, VSIDS guarantees that each variable assignment satisfies the greatest number
of recently searched segments of the Boolean expression.
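The scoring-and-decay loop can be sketched as follows. This is a toy Python sketch, not the implementation of any real solver; the decay constant, decay interval, and DIMACS-style clause encoding (literal v for the positive phase of variable v, -v for its negation) are illustrative assumptions:

```python
from collections import defaultdict

class VSIDS:
    """Toy VSIDS scorer over DIMACS-style literals (v and -v)."""

    def __init__(self, clauses, decay=0.5, decay_interval=256):
        self.score = defaultdict(float)
        for clause in clauses:             # initial scores: occurrence counts
            for lit in clause:
                self.score[lit] += 1.0
        self.decay, self.decay_interval = decay, decay_interval
        self.conflicts = 0

    def bump(self, learned_clause):
        """On a conflict, bump the literals of the learned clause and
        periodically decay all scores so recent conflicts dominate."""
        for lit in learned_clause:
            self.score[lit] += 1.0
        self.conflicts += 1
        if self.conflicts % self.decay_interval == 0:
            for lit in self.score:
                self.score[lit] *= self.decay

    def pick(self, unassigned_vars):
        """Branch on the unassigned variable phase with the highest score."""
        phases = [l for v in unassigned_vars for l in (v, -v)]
        return max(phases, key=lambda l: self.score[l])

h = VSIDS([[1, -2], [2, 3], [-1, -3], [2, -3]])
h.bump([-3, 1])                  # pretend conflict analysis learned (-3 OR 1)
assert h.pick([1, 2, 3]) == -3   # -3 now holds the top score (3.0)
```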


20.2 Stochastic solvers[3]


MAX-SAT (the version of SAT in which the number of satisfied clauses is maximized) can also be approached
with probabilistic algorithms. If we are given a Boolean expression B with V = {v0, . . . , vn} variables and we set
each variable randomly, then each clause c, with |c| variables, is satisfied by a particular variable
assignment with probability Pr(c is satisfied) = 1 − 2^(−|c|). This is because each variable in c has probability 1/2 of being satisfied, and
we only need one variable in c to be satisfied. Since |c| ≥ 1, we get Pr(c is satisfied) = 1 − 2^(−|c|) ≥ 1/2.
Now we show that randomly assigning variable values is a 1/2-approximation algorithm, which means that it is an optimal
approximation algorithm unless P = NP. Suppose we are given a Boolean expression B = {c1, . . . , cn} and define the indicator variables

Ii = 1 if ci is satisfied, and Ii = 0 if ci is not satisfied.

Then

E[number of satisfied clauses] = Σi E[Ii] = Σi (1 − 2^(−|ci|)) ≥ Σi 1/2 = n/2 ≥ OPT/2.

By the PCP theorem, this approximation ratio cannot be improved unless P = NP.
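The expectation above can be checked numerically; a Python sketch (the three-clause formula is an arbitrary illustration):

```python
import random
from fractions import Fraction

clauses = [[1, -2], [2, 3, -1], [-3]]     # an arbitrary CNF over variables 1..3

def satisfied(clause, assign):
    # A clause is satisfied when some literal agrees with the assignment.
    return any(assign[abs(l)] == (l > 0) for l in clause)

# Exact expected number of satisfied clauses under a uniform random
# assignment: the sum over clauses c of 1 - 2^(-|c|).
expected = sum(1 - Fraction(1, 2 ** len(c)) for c in clauses)
assert expected == Fraction(17, 8)        # 3/4 + 7/8 + 1/2

# The same quantity, estimated by sampling random assignments.
random.seed(0)
trials = 20000
total = sum(
    sum(satisfied(c, {v: random.random() < 0.5 for v in (1, 2, 3)})
        for c in clauses)
    for _ in range(trials))
estimate = total / trials                 # close to float(expected) = 2.125
```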
Other stochastic SAT solvers, such as WalkSAT and GSAT, are an improvement on the above procedure. They start by
randomly assigning values to each variable and then traverse the given Boolean expression to identify which variables
to flip to minimize the number of unsatisfied clauses. They may randomly select a variable to flip or select a new
random variable assignment to escape local maxima, much like a simulated annealing algorithm.
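The flip loop just described can be sketched in Python. This is a simplified illustration of the WalkSAT idea, not the published algorithm; the noise parameter p, flip budget, and clause encoding are illustrative assumptions:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """WalkSAT-style local search. clauses: lists of nonzero ints
    (DIMACS-style literals). Returns a satisfying assignment or None."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                  # every clause is satisfied
        clause = rng.choice(unsat)         # focus on one broken clause
        if rng.random() < p:               # random-walk move
            var = abs(rng.choice(clause))
        else:                              # greedy move: flip the variable
            def broken_after(v):           # that leaves the fewest broken clauses
                assign[v] = not assign[v]
                n = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=broken_after)
        assign[var] = not assign[var]
    return None

model = walksat([[1, -2], [2, 3], [-1, -3]], 3)
assert model is not None                   # this instance is satisfiable
```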

20.3 2-SAT heuristics


Unlike general SAT problems, 2-SAT problems are tractable: there exist algorithms that can decide the satisfiability
of a 2-SAT problem in polynomial time. This is a result of the constraint that each clause has only two variables:
when an algorithm assigns a variable vi, every clause that contains vi but is not satisfied by that assignment must be
satisfied by its second variable, which leaves only one possible assignment for that variable.

20.3.1 Backtracking
Suppose we are given the Boolean expressions:

B1 = (v3 ∨ ¬v2) ∧ (¬v1 ∨ ¬v3)
B2 = (v3 ∨ ¬v2) ∧ (¬v1 ∨ ¬v3) ∧ (¬v1 ∨ v2).
With B1, the algorithm can select v1 = true; then, to satisfy the second clause, the algorithm will need to set v3 = false,
and consequently, to satisfy the first clause, the algorithm will set v2 = false.
If the algorithm tries to satisfy B2 in the same way it tried to solve B1, then the third clause will remain unsatisfied.
This will cause the algorithm to backtrack, set v1 = false, and continue assigning variables further.

20.3.2 Graph reduction[4]


2-SAT problems can also be reduced to running a depth-first search on the strongly connected components of a graph.
Each variable phase (a variable and its negated complement) is a vertex, and variable phases are connected to one
another by edges based on implications. In the same way as when the algorithm above tried to solve B1:

v1 = true ⇒ v3 = false ⇒ v2 = false.


However, when the algorithm tried to solve B2:

v1 = true ⇒ v3 = false ⇒ v2 = false ⇒ v1 = false,

which contradicts v1 = true.
Once a 2-SAT problem is reduced to a graph, if a depth-first search finds a strongly connected component containing
both phases of a variable, then the 2-SAT problem is not satisfiable. Likewise, if no strongly connected component
contains both phases of a variable, then the 2-SAT problem is satisfiable.
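This test can be implemented directly. A self-contained Python sketch using Kosaraju's two-pass depth-first search (the clause encoding as pairs of signed integers is an illustrative assumption):

```python
from collections import defaultdict

def two_sat(clauses, n_vars):
    """Decide 2-SAT via strongly connected components of the implication
    graph. A clause (a, b) meaning (a OR b) contributes edges
    -a -> b and -b -> a; literals are nonzero ints."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    nodes = [l for v in range(1, n_vars + 1) for l in (v, -v)]
    for a, b in clauses:
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)

    # Pass 1: record vertices in order of DFS completion (iteratively).
    order, seen = [], set()
    for u in nodes:
        if u in seen:
            continue
        seen.add(u)
        stack = [(u, iter(graph[u]))]
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph[w])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Pass 2: sweep the reversed graph in reverse finishing order;
    # each sweep discovers one strongly connected component.
    comp = {}
    for u in reversed(order):
        if u in comp:
            continue
        comp[u] = u
        stack = [u]
        while stack:
            for w in rgraph[stack.pop()]:
                if w not in comp:
                    comp[w] = u
                    stack.append(w)
    # Unsatisfiable iff some variable shares a component with its negation.
    return all(comp[v] != comp[-v] for v in range(1, n_vars + 1))

assert two_sat([(1, 2), (-1, 2)], 2)        # satisfiable (set v2 = true)
assert not two_sat([(1, 1), (-1, -1)], 1)   # (x) AND (NOT x): unsatisfiable
```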

20.4 Weighted SAT problems


Numerous weighted SAT problems exist as the optimization versions of the general SAT problem. In this class of
problems, each clause in a CNF Boolean expression is given a weight. The objective is to maximize or minimize the
total sum of the weights of the satisfied clauses, given a Boolean expression. Weighted Max-SAT is the maximization
version of this problem, and Max-SAT is the instance of weighted Max-SAT in which the weights of all
clauses are the same. The partial Max-SAT problem is the problem where some clauses must necessarily be satisfied
(hard clauses) and the total weight of the rest of the clauses (soft clauses) is to be maximized or minimized,
depending on the problem. Partial Max-SAT represents an intermediary between Max-SAT (all clauses are soft) and
SAT (all clauses are hard).
Note that the stochastic solvers described above can also be used to find approximations for Max-SAT.

20.4.1 Variable splitting[5]


Variable splitting is a tool to find upper and lower bounds on a Max-SAT problem. It involves splitting a variable a
into new variables for all but one occurrence of a in the original Boolean expression. For example, the Boolean
expression B = (a ∨ b ∨ c) ∧ (a ∨ e ∨ b) ∧ (a ∨ c ∨ f) becomes B′ = (a ∨ b ∨ c) ∧ (a1 ∨ e ∨ b) ∧ (a2 ∨ c ∨ f),
with a, a1, a2, . . . , an being all distinct variables.
This relaxes the problem by introducing new variables into the Boolean expression, which has the effect of removing
many of the constraints in the expression. Because any assignment of variables in B can be represented by an
assignment of variables in B′, the minimization and maximization of the weights of B′ give lower and upper
bounds on the minimization and maximization of the weights of B.

20.4.2 Partial Max-SAT


Partial Max-SAT can be solved by first considering all of the hard clauses and solving them as an instance of SAT. The
total maximum (or minimum) weight of the soft clauses can then be evaluated, given the variable assignment necessary to
satisfy the hard clauses, by optimizing the free variables (the variables that the satisfaction of the hard clauses
does not depend on). The latter step is an instance of Max-SAT given some pre-defined variables. Of course,
different variable assignments that satisfy the hard clauses might have different optimal free variable assignments, so
it is necessary to check different hard-clause-satisfying variable assignments.

20.5 Data structures for storing clauses[2]


As SAT solvers and practical SAT problems (e.g. circuit verification) get more advanced, the Boolean expressions of
interest may exceed millions of variables with several million clauses; therefore, efficient data structures to store and
evaluate the clauses must be used.
Expressions can be stored as a list of clauses, where each clause is a list of variables, much like an adjacency list.
Though these data structures are convenient for manipulation (adding elements, deleting elements, etc.), they rely on
many pointers, which increases their memory overhead, decreases cache locality, and increases cache misses, which
renders them impractical for problems with large clause counts and large clause sizes.

When clause sizes are large, more efficient analogous implementations include storing the expression as a matrix
that records which variables are present in each clause, much like an adjacency matrix. The elimination of pointers
and the contiguous memory occupation of arrays serve to decrease memory usage and increase cache locality and
cache hits, which offers a run-time speed-up compared to the aforesaid implementation.

20.6 References
[1] Aloul, Fadi A., On Solving Optimization Problems Using Boolean Satisfiability, American University of Sharjah (2005),
http://www.aloul.net/Papers/faloul_icmsao05.pdf

[2] Zhang, Lintao; Malik, Sharad. The Quest for Efficient Boolean Satisfiability Solvers, Department of Electrical Engineering,
Princeton University. https://www.princeton.edu/~chaff/publication/cade_cav_2002.pdf

[3] Sung, Phil. Maximum Satisfiability (2006) http://math.mit.edu/~goemans/18434S06/max-sat-phil.pdf

[4] Griffith, Richard. Strongly Connected Components and the 2-SAT Problem in Dart. http://www.greatandlittle.com/
studios/index.php?post/2013/03/26/Strongly-Connected-Components-and-the-2-SAT-Problem-in-Dart

[5] Pipatsrisawat, Knot; Palyan, Akop; et al. Solving Weighted Max-SAT Problems in a Reduced Search Space: A Performance
Analysis. University of California Computer Science Department. http://reasoning.cs.ucla.edu/fetch.php?id=86&
type=pdf
Chapter 21

Boolean satisfiability problem

3SAT redirects here. For the Central European television network, see 3sat.

In computer science, the Boolean satisfiability problem (sometimes called the propositional satisfiability problem
and abbreviated as SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that
satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be
consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the
case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the
formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a
AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT
b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
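These two examples can be checked by exhaustive search, which is the naive baseline every SAT algorithm improves on; a minimal Python sketch:

```python
from itertools import product

def find_model(formula, variables):
    """Brute-force satisfiability check: try all 2^n assignments and
    return the first satisfying one, or None if the formula is unsatisfiable."""
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if formula(model):
            return model
    return None

# "a AND NOT b" is satisfiable (a = TRUE, b = FALSE) ...
assert find_model(lambda m: m["a"] and not m["b"], ["a", "b"]) == \
       {"a": True, "b": False}
# ... while "a AND NOT a" is not.
assert find_model(lambda m: m["a"] and not m["a"], ["a"]) is None
```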
SAT is the first problem that was proven to be NP-complete; see the Cook–Levin theorem. This means that all problems
in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most
as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally
believed that no such algorithm exists; yet this belief has not been proven mathematically, and resolving the question
of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open
problem in the theory of computing.
Nevertheless, as of 2016, heuristic SAT algorithms are able to solve problem instances involving tens of thousands
of variables and formulas consisting of millions of symbols,[1] which is sufficient for many practical SAT problems
from, e.g., artificial intelligence, circuit design, and automatic theorem proving.

21.1 Basic denitions and terminology


A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND (conjunction,
also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if
it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean
satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of
central importance in various areas of computer science, including theoretical computer science, complexity theory,
algorithmics, cryptography and artificial intelligence.
There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a
particular structure. A literal is either a variable, then called a positive literal, or the negation of a variable, then
called a negative literal. A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause
if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of
clauses (or a single clause). For example, x1 is a positive literal, ¬x2 is a negative literal, x1 ∨ ¬x2 is a clause, and (x1
∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is a formula in conjunctive normal form; its 1st and 3rd clauses are Horn clauses, but
its 2nd clause is not. The formula is satisfiable, choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE
∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧
TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of
two clauses of one literal, is unsatisfiable, since for a = TRUE and a = FALSE it evaluates to TRUE ∧ ¬TRUE (i.e. to
FALSE) and FALSE ∧ ¬FALSE (i.e. again to FALSE), respectively.


For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form
formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some
Boolean operator R and (ordinary) literals li. Different sets of allowed Boolean operators lead to different problem
versions. As an example, R(¬x, a, b) is a generalized clause, and R(¬x, a, b) ∧ R(b, y, c) ∧ R(c, d, ¬z) is a generalized
conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just if exactly
one of its arguments is.
Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive
normal form, which may, however, be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2)
∨ ... ∨ (xn ∧ yn) into conjunctive normal form yields

(x1 ∨ x2 ∨ ⋯ ∨ xn) ∧
(y1 ∨ x2 ∨ ⋯ ∨ xn) ∧
(x1 ∨ y2 ∨ ⋯ ∨ xn) ∧
(y1 ∨ y2 ∨ ⋯ ∨ xn) ∧ ⋯ ∧
(x1 ∨ x2 ∨ ⋯ ∨ yn) ∧
(y1 ∨ x2 ∨ ⋯ ∨ yn) ∧
(x1 ∨ y2 ∨ ⋯ ∨ yn) ∧
(y1 ∨ y2 ∨ ⋯ ∨ yn);

while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
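The 2^n clause count comes from distributing OR over AND: each clause picks either x_i or y_i from every conjunction. A short Python sketch of that expansion (n = 4 is an arbitrary choice):

```python
from itertools import product

n = 4
# The DNF (x1 AND y1) OR ... OR (xn AND yn): n conjunctions of 2 literals.
dnf = [(f"x{i}", f"y{i}") for i in range(1, n + 1)]

# Distributing OR over AND picks either x_i or y_i from each conjunction,
# giving one CNF clause per choice: 2^n clauses of n literals each.
cnf = list(product(*dnf))
assert len(cnf) == 2 ** n
assert all(len(clause) == n for clause in cnf)
```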

21.2 Complexity and restricted versions

21.2.1 Unrestricted satisfiability (SAT)


Main article: Cook–Levin theorem

SAT was the rst known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971[2]
and independently by Leonid Levin at the National Academy of Sciences in 1973.[3] Until that time, the concept of
an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class
NP can be reduced to the SAT problem for CNF[note 1] formulas, sometimes called CNFSAT. A useful property of
Cooks reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph
has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the
CookLevin reduction will have 17 satisfying assignments.
NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical
applications can be solved much more quickly. See Algorithms for solving SAT below.
SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are disjunctions of
conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable,
and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be
checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every
variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents
one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive
normal form; for an example exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal
forms.
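The linear-time check can be written in a few lines (illustrative Python; each conjunction is a list of signed integer literals, an encoding chosen here for brevity, not fixed by the text):

```python
def dnf_satisfiable(dnf):
    """A DNF formula (list of conjunctions, each a list of integer
    literals, negative = negated) is satisfiable iff some conjunction
    contains no complementary pair x and -x."""
    return any(not any(-lit in term_set for lit in term_set)
               for term_set in map(set, dnf))

# (x1 & ~x1) | (x2 & x3): the second conjunction is contradiction-free
print(dnf_satisfiable([[1, -1], [2, 3]]))   # True
print(dnf_satisfiable([[1, -1]]))           # False
```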

21.2.2 3-satisfiability

Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive
normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT,
3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause
l1 ∨ ⋯ ∨ ln to a conjunction of n − 2 clauses
The 3-SAT instance (x ∨ x ∨ y) ∧ (¬x ∨ y ∨ y) ∧ (¬x ∨ ¬y ∨ ¬y) reduced to a clique problem. The green vertices form a 3-clique and
correspond to the satisfying assignment x=FALSE, y=TRUE.

(l1 ∨ l2 ∨ x2) ∧
(¬x2 ∨ l3 ∨ x3) ∧
(¬x3 ∨ l4 ∨ x4) ∧ ⋯ ∧
(¬xn−3 ∨ ln−2 ∨ xn−2) ∧
(¬xn−2 ∨ ln−1 ∨ ln)

where x2, ⋯, xn−2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent,
they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original,
i.e. the length growth is polynomial.[4]
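The clause-splitting step can be written out as follows (illustrative Python, not from the article; literals are signed integers and fresh variables are numbered from a caller-supplied counter):

```python
def split_clause(lits, next_fresh):
    """Split one clause (list of integer literals) into an
    equisatisfiable conjunction of 3-literal clauses, introducing
    fresh variables numbered from next_fresh upward.
    Returns (new_clauses, next unused fresh variable)."""
    n = len(lits)
    if n <= 3:
        return [list(lits)], next_fresh
    clauses = [[lits[0], lits[1], next_fresh]]
    for i in range(2, n - 2):
        clauses.append([-next_fresh, lits[i], next_fresh + 1])
        next_fresh += 1
    clauses.append([-next_fresh, lits[n - 2], lits[n - 1]])
    return clauses, next_fresh + 1

clauses, _ = split_clause([1, 2, 3, 4, 5], next_fresh=10)
print(clauses)  # [[1, 2, 10], [-10, 3, 11], [-11, 4, 5]]
```

A clause of n > 3 literals yields exactly n − 2 clauses, as in the formula above.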
3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems
are also NP-hard.[note 2] This is done by polynomial-time reduction from 3-SAT to the other problem. An example
of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses,
the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting[note 3]
literals from different clauses, cf. picture. The graph has a c-clique if and only if the formula is satisfiable.[5]
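The graph construction takes only a few lines (illustrative Python; representing vertices as (clause index, literal) pairs is an encoding chosen here for brevity):

```python
from itertools import combinations

def sat_to_clique(clauses):
    """Build the graph of the 3-SAT-to-clique reduction: one vertex per
    literal occurrence, and an edge between every two non-contradicting
    literals taken from different clauses."""
    vertices = [(i, lit) for i, cl in enumerate(clauses) for lit in cl]
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if u[0] != v[0] and u[1] != -v[1]]
    return vertices, edges

# (x | y) & (~x | y): the formula is satisfiable, so a 2-clique exists
verts, edges = sat_to_clique([[1, 2], [-1, 2]])
print(len(verts), len(edges))  # 4 3
```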
There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n where n is the number of
variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT.[6]
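Schöning's random walk is short enough to sketch directly (illustrative Python, untuned; the try count is an arbitrary choice of ours):

```python
import random

def schoening(clauses, n, tries=200):
    """Schöning-style random walk for 3-SAT: start from a uniformly
    random assignment; up to 3n times, pick an unsatisfied clause and
    flip a random variable occurring in it. One-sided error: a returned
    assignment is always correct, None means 'probably unsatisfiable'."""
    def satisfied(assign, cl):
        return any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(tries):
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n + 1):
            unsat = [cl for cl in clauses if not satisfied(assign, cl)]
            if not unsat:
                return assign              # satisfying assignment found
            var = abs(random.choice(random.choice(unsat)))
            assign[var] = not assign[var]  # flip one variable of a bad clause
    return None

clauses = [[1, 2, 3], [-1, -2, 3], [-3, 1, -2]]
print(schoening(clauses, 3) is not None)   # True (with very high probability)
```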
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k > 2) in
exp(o(n)) time, that is, fundamentally faster than exponential in the number of variables.
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas,
depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm.[7]
98 CHAPTER 21. BOOLEAN SATISFIABILITY PROBLEM

3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered
with each clause containing up to k literals. However, since for any k≥3, this problem can neither be easier than
3-SAT nor harder than SAT, and the latter two are NP-complete, so must be k-SAT.
Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class
either, as each clause l1 ∨ ⋯ ∨ lj with j<k literals can be padded with fixed dummy variables to l1 ∨ ⋯ ∨ lj ∨ dj+1 ∨
⋯ ∨ dk. After padding all clauses, 2^k − 1 extra clauses[note 4] have to be appended to ensure that only d1=⋯=dk=FALSE
can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant
increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses (like e.g.
¬x ∨ y ∨ y), or not.

21.2.3 Exactly-1 3-satisfiability

Left: Schaefer's reduction of a 3-SAT clause x∨y∨z. The result of R is TRUE (1) if exactly one of its arguments is TRUE, and FALSE
(0) otherwise. All 8 combinations of values for x,y,z are examined, one per line. The fresh variables a,...,f can be chosen to satisfy
all clauses (exactly one green argument for each R) in all lines except the first, where x∨y∨z is FALSE. Right: A simpler reduction
with the same properties.

A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and
exactly-1 3-SAT). Given a conjunctive normal form, the problem is to determine whether there exists a truth assignment to
the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast,
ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is
given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE
just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability
problem is called one-in-three positive 3-SAT.
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem LO4 in the standard reference,
Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson.
One-in-three 3-SAT was proved to be NP-complete by Thomas J. Schaefer as a special case of Schaefer's dichotomy
theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or
is NP-complete.[8]
Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x
or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate
this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by
some setting of the fresh variables if and only if at least one of x, y, or z is TRUE, see picture (left). Thus any 3-SAT
instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with
5m clauses and n+6m variables.[9] Another reduction involves only four fresh variables and three clauses: R(x,a,b) ∧
R(b,y,c) ∧ R(c,d,z), see picture (right).
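Because the gadget involves only six fresh variables, its defining property can be verified exhaustively (illustrative Python, not from the article):

```python
from itertools import product

def R(p, q, r):
    """One-in-three operator: TRUE iff exactly one argument is TRUE."""
    return p + q + r == 1

def gadget_satisfiable(x, y, z):
    """Can the fresh variables a..f be chosen so that all five of
    Schaefer's one-in-three clauses hold simultaneously?"""
    return any(R(x, a, d) and R(y, b, d) and R(a, b, e)
               and R(c, d, f) and R(z, c, False)
               for a, b, c, d, e, f in product([0, 1], repeat=6))

# the gadget is satisfiable exactly when x or y or z is TRUE
for x, y, z in product([0, 1], repeat=3):
    assert gadget_satisfiable(x, y, z) == bool(x or y or z)
print("gadget verified")
```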

21.2.4 2-satisfiability

Main article: 2-satisfiability

SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT.
This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally
all OR operations in literals are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which
is a problem complete for the complexity class SL = L.
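The polynomial-time bound rests on the standard implication-graph method (a textbook technique not spelled out in the text): a clause (a ∨ b) is the pair of implications ¬a → b and ¬b → a, and the formula is satisfiable iff no variable shares a strongly connected component with its negation. A self-contained sketch using Kosaraju's algorithm (illustrative Python):

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Decide 2-SAT on variables 1..n; each clause is a pair of
    signed integer literals. Satisfiable iff no variable x is in the
    same strongly connected component as its negation -x."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b); rgraph[b].append(-a)   # ~a implies b
        graph[-b].append(a); rgraph[a].append(-b)   # ~b implies a
    nodes = [v for x in range(1, n + 1) for v in (x, -x)]

    order, seen = [], set()                 # pass 1: finish order
    def dfs1(u):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}                               # pass 2: label components
    def dfs2(u, label):
        comp[u] = label
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, label)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    return all(comp[x] != comp[-x] for x in range(1, n + 1))

print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))   # True
print(two_sat(1, [(1, 1), (-1, -1)]))           # False
```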

21.2.5 Horn-satisfiability

Main article: Horn-satisfiability

The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or
HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces
the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is
P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified
Horn formulas can be done in polynomial time.[10]
Horn clauses are of interest because they are able to express implication of one variable from a set of other variables.
Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y, that is, if x1,...,xn are all TRUE,
then y needs to be TRUE as well.
A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that
can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2) ∧
(¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3)
∧ ¬x1 by introducing y3 as negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1
leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the
satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the
satisfiability of the resulting Horn formula.
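Unit propagation on Horn clauses can be sketched as follows (illustrative Python; clauses are lists of signed integer literals with at most one positive literal each, and the returned set is the minimal model):

```python
def horn_sat(clauses):
    """Horn-satisfiability by unit propagation: start from the
    all-FALSE assignment and set a variable TRUE only when forced by
    a unit clause; fail if some clause has all literals falsified."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for cl in clauses:
            if any(l > 0 and l in true_vars for l in cl):
                continue                        # clause already satisfied
            undecided = [l for l in cl if not (l < 0 and -l in true_vars)]
            if not undecided:
                return None                     # all literals falsified: unsat
            if len(undecided) == 1 and undecided[0] > 0:
                true_vars.add(undecided[0])     # forced unit assignment
                changed = True
    return true_vars                            # the minimal model

# (x1) & (~x1 | x2) & (~x2 | ~x3): forces x1, then x2; x3 stays FALSE
print(horn_sat([[1], [-1, 2], [-2, -3]]))       # {1, 2}
```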

21.2.6 XOR-satisfiability

Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain)
OR operators.[note 5] This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod
2, and can be solved in cubic time by Gaussian elimination;[11] see the box for an example. This recast is based on
the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field.
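The linear-algebra view can be made concrete. Below is a compact (unoptimized) Python sketch that solves an XOR-SAT system by elimination over GF(2); encoding each equation as a (variable set, parity) pair is our choice, not the article's:

```python
def xor_sat(rows):
    """Solve an XOR-SAT system by Gaussian elimination over GF(2).
    Each row is (set of variable indices, parity bit), encoding
    x_i1 XOR ... XOR x_ik = parity. Returns one solution as the set
    of TRUE variables (free variables default to FALSE), or None."""
    rows = [(set(vs), p) for vs, p in rows]
    pivots = {}                               # pivot variable -> row index
    for i, (vs, p) in enumerate(rows):
        for x, j in pivots.items():           # eliminate earlier pivots
            if x in vs:
                vs ^= rows[j][0]
                p ^= rows[j][1]
        rows[i] = (vs, p)
        if vs:
            pivots[min(vs)] = i
        elif p:                               # reduced to 0 = 1: inconsistent
            return None
    solution = set()
    for x, i in sorted(pivots.items(), reverse=True):   # back-substitution
        vs, p = rows[i]
        if p ^ (len((vs - {x}) & solution) % 2):
            solution.add(x)
    return solution

# x1^x2 = 1, x2^x3 = 0, x1^x3 = 1  ->  x1=TRUE, x2=x3=FALSE
print(xor_sat([({1, 2}, 1), ({2, 3}, 0), ({1, 3}, 1)]))   # {1}
```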
Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution
of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each
solution of XOR-3-SAT is a solution of 3-SAT, cf. picture. As a consequence, for each CNF formula, it is possible
to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem
is solvable or that the 1-in-3-SAT problem is unsolvable.
Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is
NP-complete, unlike SAT.

21.2.7 Schaefer's dichotomy theorem

Main article: Schaefer's dichotomy theorem

The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions
of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be
subformulae in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean operators that can be used to form these
subformulae, the corresponding satisfiability problem is in P or NP-complete. The memberships in P of the satisfiability
of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.

21.3 Extensions of SAT


An extension that has gained significant popularity since 2003 is Satisfiability modulo theories (SMT) that can
enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions,[12] etc. Such
extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds
of constraints.
The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed
to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z);


it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are
FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the
so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is
called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely
believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been
proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.[13]
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal
with the number of such assignments:

MAJ-SAT asks if the majority of all assignments make the formula TRUE. It is known to be complete for PP,
a probabilistic class.
#SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a
decision problem, and is #P-complete.
UNIQUE-SAT is the problem of determining whether a formula has exactly one assignment. It is complete for
US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine
that accepts when there is exactly one nondeterministic accepting path and rejects otherwise.
UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas
having at most one satisfying assignment. A solving algorithm for UNAMBIGUOUS-SAT is allowed to
exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although
this problem seems easier, Valiant and Vazirani have shown[14] that if there is a practical (i.e. randomized
polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily.
MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum
number of clauses that can be satisfied by any assignment. It has efficient approximation algorithms, but is
NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation
scheme (PTAS) for this problem unless P=NP.

Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, and 0-1
integer programming.

21.4 Self-reducibility

The SAT problem is self-reducible, that is, each algorithm which correctly answers if an instance of SAT is solvable
can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no",
the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, i.e.
with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE,
otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the
algorithm are required, where n is the number of distinct variables in Φ.
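The n+1-query procedure can be sketched directly; the brute-force oracle below is merely a stand-in for whatever SAT decision algorithm is available, and all helper names are illustrative:

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Stand-in decision oracle: does any assignment satisfy the CNF?"""
    return any(all(any(a[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
               for a in product([False, True], repeat=n))

def substitute(clauses, var, value):
    """Plug in var=value and simplify: drop satisfied clauses,
    remove falsified literals from the remaining ones."""
    out = []
    for cl in clauses:
        if any((l == var) if value else (l == -var) for l in cl):
            continue
        out.append([l for l in cl if abs(l) != var])
    return out

def find_assignment(clauses, n, oracle=brute_force_sat):
    """Self-reduction: n+1 oracle calls turn a yes/no SAT oracle
    into a satisfying-assignment finder."""
    if not oracle(clauses, n):
        return None
    assign = {}
    for x in range(1, n + 1):
        trial = substitute(clauses, x, True)
        assign[x] = oracle(trial, n)     # keep x=TRUE iff still satisfiable
        clauses = trial if assign[x] else substitute(clauses, x, False)
    return assign

print(find_assignment([[1, 2], [-1, 2], [-2, 3]], 3))
# {1: True, 2: True, 3: True}
```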
This property of self-reducibility is used in several theorems in complexity theory:

NP ⊆ P/poly ⇒ PH = Σ2 (Karp–Lipton theorem)


NP ⊆ BPP ⇒ NP = RP
P = NP ⇒ FP = FNP

21.5 Algorithms for solving SAT


Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In
spite of this, efficient and scalable algorithms for SAT were developed over the last decade and have contributed to
dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and
millions of constraints (i.e. clauses).[1] Examples of such problems in electronic design automation (EDA) include
formal equivalence checking, model checking, formal verification of pipelined microprocessors,[12] automatic test

pattern generation, routing of FPGAs,[15] planning, and scheduling problems, and so on. A SAT-solving engine is
now considered to be an essential component in the EDA toolbox.
There are two classes of high-performance algorithms for solving instances of SAT in practice: the Conflict-Driven
Clause Learning algorithm, which can be viewed as a modern variant of the DPLL algorithm (well known implementations
include Chaff[16] and GRASP[17]), and stochastic local search algorithms, such as WalkSAT.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space
of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal
papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–
Loveland algorithm (DPLL or DLL).[18][19] Theoretically, exponential lower bounds have been proved for the
DPLL family of algorithms.
In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlák, Saks, and Zane set variables in a
random order according to some heuristics, for example bounded-width resolution. If the heuristic can't find the
correct setting, the variable is assigned randomly. The PPSZ algorithm has a runtime of O(2^(0.386n)) for 3-SAT with
a single satisfying assignment. Currently this is the best-known runtime for this problem. In the setting with many
satisfying assignments the randomized algorithm by Schöning has a better bound.[6][20]
Modern SAT solvers (developed in the last ten years) come in two flavors: conflict-driven and look-ahead.
Conflict-driven solvers augment the basic DPLL search algorithm with efficient conflict analysis, clause learning,
non-chronological backtracking (a.k.a. backjumping), as well as two-watched-literals unit propagation, adaptive
branching, and random restarts. These extras to the basic systematic search have been empirically shown to be essential
for handling the large SAT instances that arise in electronic design automation (EDA).[21] Look-ahead solvers
have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally
stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large
instances which actually have an easy instance inside).
Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in
artificial intelligence, and operations research, among others. Powerful solvers are readily available as free and open
source software. In particular, the conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition,
only has about 600 lines of code. A modern parallel SAT solver is ManySAT. It can achieve superlinear
speed-ups on important classes of problems. An example for look-ahead solvers is march_dl, which won a prize at
the 2007 SAT competition.
Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly
in hardware design and verification applications, satisfiability and other logical properties of a given propositional
formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD).
Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution.
Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others
at finding solutions. All of these behaviors can be seen in the SAT solving contests.[22]

21.6 See also


Unsatisfiable core
Satisfiability modulo theories
Counting SAT
Karloff–Zwick algorithm
Circuit satisfiability

21.7 Notes
[1] The SAT problem for arbitrary formulas is NP-complete, too, since it is easily shown to be in NP, and it cannot be easier
than SAT for CNF formulas.

[2] i.e. at least as hard as every other problem in NP. A decision problem is NP-complete if and only if it is in NP and is
NP-hard.

[3] i.e. such that one literal is not the negation of the other

[4] viz. all maxterms that can be built with d1,…,dk, except d1 ∨ ⋯ ∨ dk

[5] Formally, generalized conjunctive normal forms with a ternary Boolean operator R are employed, which is TRUE just if 1
or 3 of its arguments is. An input clause with more than 3 literals can be transformed into an equisatisfiable conjunction of
clauses of 3 literals similar to above; i.e. XOR-SAT can be reduced to XOR-3-SAT.

21.8 References
[1] Ohrimenko, Olga; Stuckey, Peter J.; Codish, Michael (2007), "Propagation = Lazy Clause Generation", Principles and
Practice of Constraint Programming – CP 2007, Lecture Notes in Computer Science, 4741, pp. 544–558, doi:10.1007/978-
3-540-74970-7_39, "modern SAT solvers can often handle problems with millions of constraints and hundreds of thousands
of variables."

[2] Cook, Stephen A. (1971). "The Complexity of Theorem-Proving Procedures" (PDF). Proceedings of the 3rd Annual ACM
Symposium on Theory of Computing: 151–158. doi:10.1145/800157.805047.

[3] Levin, Leonid (1973). "Universal search problems (Russian: Universal'nye perebornye zadachi)". Problems of Information
Transmission (Russian: Problemy Peredachi Informatsii). 9 (3): 115–116. (pdf) (in Russian), translated into English by
Trakhtenbrot, B. A. (1984). "A survey of Russian approaches to perebor (brute-force searches) algorithms". Annals of the
History of Computing. 6 (4): 384–400. doi:10.1109/MAHC.1984.10036.

[4] Alfred V. Aho; John E. Hopcroft; Jeffrey D. Ullman (1974). The Design and Analysis of Computer Algorithms. Addison-
Wesley; here: Thm. 10.4

[5] Aho, Hopcroft, Ullman[4] (1974); Thm.10.5

[6] Schöning, Uwe (Oct 1999). "A Probabilistic Algorithm for k-SAT and Constraint Satisfaction Problems". Proc. 40th Ann.
Symp. Foundations of Computer Science (PDF). pp. 410–414. doi:10.1109/SFFCS.1999.814612.

[7] Bart Selman; David Mitchell; Hector Levesque (1996). "Generating Hard Satisfiability Problems". Artificial Intelligence.
81: 17–29. doi:10.1016/0004-3702(95)00045-3.

[8] Schaefer, Thomas J. (1978). "The complexity of satisfiability problems" (PDF). Proceedings of the 10th Annual ACM
Symposium on Theory of Computing. San Diego, California. pp. 216–226.

[9] (Schaefer, 1978), p.222, Lemma 3.5

[10] Büning, H.K.; Karpinski, Marek; Flögel, A. (1995). "Resolution for Quantified Boolean Formulas". Information and
Computation. Elsevier. 117 (1): 12–18. doi:10.1006/inco.1995.1025.

[11] Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, p. 366, ISBN 9780199233212.

[12] R. E. Bryant, S. M. German, and M. N. Velev, Microprocessor Verification Using Efficient Decision Procedures for a Logic
of Equality with Uninterpreted Functions, in Analytic Tableaux and Related Methods, pp. 1–13, 1999.

[13] Alhazov, Artiom; Martín-Vide, Carlos; Pan, Linqiang (2003). "Solving a PSPACE-Complete Problem by Recognizing P
Systems with Restricted Active Membranes". Fundamenta Informaticae. 58: 67–77.

[14] Valiant, L.; Vazirani, V. (1986). "NP is as easy as detecting unique solutions" (PDF). Theoretical Computer Science. 47:
85–93. doi:10.1016/0304-3975(86)90135-0.

[15] Gi-Joon Nam; Sakallah, K. A.; Rutenbar, R. A. (2002). "A new FPGA detailed routing approach via search-based
Boolean satisfiability" (PDF). IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 21 (6):
674. doi:10.1109/TCAD.2002.1004311.

[16] Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.; Malik, S. (2001). "Chaff: Engineering an Efficient SAT Solver"
(PDF). Proceedings of the 38th conference on Design automation (DAC). p. 530. ISBN 1581132972. doi:10.1145/378239.379017.

[17] Marques-Silva, J. P.; Sakallah, K. A. (1999). "GRASP: a search algorithm for propositional satisfiability" (PDF). IEEE
Transactions on Computers. 48 (5): 506. doi:10.1109/12.769433.

[18] Davis, M.; Putnam, H. (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM. 7 (3): 201.
doi:10.1145/321033.321034.

[19] Davis, M.; Logemann, G.; Loveland, D. (1962). "A machine program for theorem-proving" (PDF). Communications of
the ACM. 5 (7): 394–397. doi:10.1145/368273.368557.
[20] "An improved exponential-time algorithm for k-SAT", Paturi, Pudlák, Saks, Zane
[21] Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in Model Checking".
Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.
[22] "The international SAT Competitions web page". Retrieved 2007-11-15.

References are ordered by date of publication:

Michael R. Garey & David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-
Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A9.1: LO1–LO7, pp. 259–260.
Marques-Silva, J.; Glass, T. (1999). "Combinational equivalence checking using satisfiability and recursive
learning" (PDF). Design, Automation and Test in Europe Conference and Exhibition, 1999. Proceedings (Cat.
No. PR00078). p. 145. ISBN 0-7695-0078-1. doi:10.1109/DATE.1999.761110.
Clarke, E.; Biere, A.; Raimi, R.; Zhu, Y. (2001). "Bounded Model Checking Using Satisfiability Solving".
Formal Methods in System Design. 19: 7. doi:10.1023/A:1011276507260.
Giunchiglia, E.; Tacchella, A. (2004). Giunchiglia, Enrico; Tacchella, Armando, eds. Theory and Applications
of Satisfiability Testing. Lecture Notes in Computer Science. 2919. ISBN 978-3-540-20851-8.
doi:10.1007/b95238.
Babic, D.; Bingham, J.; Hu, A. J. (2006). "B-Cubing: New Possibilities for Efficient SAT-Solving" (PDF).
IEEE Transactions on Computers. 55 (11): 1315. doi:10.1109/TC.2006.175.
Rodriguez, C.; Villagra, M.; Baran, B. (2007). "Asynchronous team algorithms for Boolean Satisfiability"
(PDF). 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems. p. 66. doi:10.1109/BIMNICS.2007.46100
Carla P. Gomes; Henry Kautz; Ashish Sabharwal; Bart Selman (2008). "Satisfiability Solvers". In Frank Van
Harmelen; Vladimir Lifschitz; Bruce Porter. Handbook of knowledge representation. Foundations of Artificial
Intelligence. 3. Elsevier. pp. 89–134. ISBN 978-0-444-52211-5. doi:10.1016/S1574-6526(07)03002-7.
Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in
Model Checking". Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.

21.9 External links

21.9.1 SAT problem format


A SAT problem is often described in the DIMACS-CNF format: an input file in which each line represents a single
disjunction. For example, a file with the two lines
1 -5 4 0
-1 5 3 4 0
represents the formula "(x1 ∨ ¬x5 ∨ x4) ∧ (¬x1 ∨ x5 ∨ x3 ∨ x4)".
Another common format for this formula is the 7-bit ASCII representation "(x1 | ~x5 | x4) & (~x1 | x5 | x3 | x4)".
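A minimal reader for such files might look as follows (illustrative Python; real DIMACS files also carry a "p cnf <vars> <clauses>" header line, which this sketch merely skips):

```python
def parse_dimacs(text):
    """Minimal reader for DIMACS-CNF bodies: integer tokens are
    literals, 0 ends a clause; 'c' comment and 'p' problem lines
    are skipped."""
    clauses, current = [], []
    for line in text.splitlines():
        if line.startswith(("c", "p")):
            continue
        for tok in line.split():
            if tok == "0":
                clauses.append(current)
                current = []
            else:
                current.append(int(tok))
    return clauses

print(parse_dimacs("1 -5 4 0\n-1 5 3 4 0"))
# [[1, -5, 4], [-1, 5, 3, 4]]
```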

BCSAT is a tool that converts input files in human-readable format to the DIMACS-CNF format.

21.9.2 Online SAT solvers


BoolSAT – Solves formulas in the DIMACS-CNF format or in a more human-friendly format: "a and not b or
a". Runs on a server.
Logictools – Provides different solvers in javascript for learning, comparison and hacking. Runs in the browser.
minisat-in-your-browser – Solves formulas in the DIMACS-CNF format. Runs in the browser.
SATRennesPA – Solves formulas written in a user-friendly way. Runs on a server.
somerby.net/mack/logic – Solves formulas written in symbolic logic. Runs in the browser.

21.9.3 Offline SAT solvers

MiniSAT – DIMACS-CNF format.

Lingeling – won a gold medal in a 2011 SAT competition.

PicoSAT – an earlier solver from the Lingeling group.

Sat4j – DIMACS-CNF format. Java source code available.

Glucose – DIMACS-CNF format.

RSat – won a gold medal in a 2007 SAT competition.

UBCSAT – Supports unweighted and weighted clauses, both in the DIMACS-CNF format. C source code
hosted on GitHub.

CryptoMiniSat – won a gold medal in a 2011 SAT competition. C++ source code hosted on GitHub. Tries to
put many useful features of MiniSat 2.0 core, PrecoSat ver 236, and Glucose into one package, adding many
new features.

Spear – Supports bit-vector arithmetic. Can use the DIMACS-CNF format or the Spear format.

HyperSAT – Written to experiment with B-cubing search space pruning. Won 3rd place in a 2005 SAT
competition. An earlier and slower solver from the developers of Spear.

BASolver

ArgoSAT

Fast SAT Solver – based on genetic algorithms.

zChaff – not supported anymore.

BCSAT – human-readable Boolean circuit format (also converts this format to the DIMACS-CNF format and
automatically links to MiniSAT or zChaff).

gini – Golang SAT solver with related tools.

21.9.4 SAT applications

WinSAT v2.04: A Windows-based SAT application made particularly for researchers.

21.9.5 Conferences

International Conference on Theory and Applications of Satisfiability Testing

21.9.6 Publications

Journal on Satisfiability, Boolean Modeling and Computation

Survey Propagation

21.9.7 Benchmarks
Forced Satisfiable SAT Benchmarks
SATLIB

Software Verification Benchmarks


Fadi Aloul SAT Benchmarks

SAT solving in general:

http://www.satlive.org

http://www.satisfiability.org

21.9.8 Evaluation of SAT solvers


Yearly evaluation of SAT solvers

SAT solvers evaluation results for 2008


International SAT Competitions

History

More information on SAT:

SAT and MAX-SAT for the Lay-researcher

This article includes material from a column in the ACM SIGDA e-newsletter by Prof. Karem Sakallah
Original text is available here
Chapter 22

Boolean-valued function

A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where
X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose
elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information.
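In programming terms, a Boolean-valued function is simply a predicate; a minimal illustration (Python; the example predicate is ours):

```python
from typing import Callable

# f : X -> B with X = the integers and B = {False, True};
# the predicate "n is even" is a Boolean-valued function
is_even: Callable[[int], bool] = lambda n: n % 2 == 0

print([is_even(n) for n in range(4)])   # [True, False, True, False]
```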
In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued
function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of
these uses it is understood that the various terms refer to a mathematical object and not the corresponding semiotic
sign or syntactic expression.
In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted
for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth
predicate may have additional domains beyond the formal language domain, if that is what is required to determine
a final truth value.

22.1 References
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.

Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGrawHill, 1970. 2nd edition,
McGrawHill, 1978.

Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.

Mathematical Society of Japan, Encyclopedic Dictionary of Mathematics, 2nd edition, 2 vols., Kiyosi Itô (ed.),
MIT Press, Cambridge, MA, 1993. Cited as EDM.

Minsky, Marvin L., and Papert, Seymour, A. (1988), Perceptrons, An Introduction to Computational Geometry,
MIT Press, Cambridge, MA, 1969. Revised, 1972. Expanded edition, 1988.

22.2 See also


Bit

Boolean data type

Boolean algebra (logic)

Boolean domain

Boolean logic

Propositional calculus


Truth table

Logic minimization
Indicator function

Predicate
Proposition

Finitary boolean function


Boolean function
Chapter 23

Boolean-valued model

In mathematical logic, a Boolean-valued model is a generalization of the ordinary Tarskian notion of structure from
model theory. In a Boolean-valued model, the truth values of propositions are not limited to "true" and "false", but
instead take values in some fixed complete Boolean algebra.
Boolean-valued models were introduced by Dana Scott, Robert M. Solovay, and Petr Vopěnka in the 1960s in order to
help understand Paul Cohen's method of forcing. They are also related to Heyting algebra semantics in intuitionistic
logic.

23.1 Definition

Fix a complete Boolean algebra B[1] and a first-order language L; the signature of L will consist of a collection of
constant symbols, function symbols, and relation symbols.
A Boolean-valued model for the language L consists of a universe M, which is a set of elements (or names), together
with interpretations for the symbols. Specifically, the model must assign to each constant symbol of L an element of
M, and to each n-ary function symbol f of L and each n-tuple <a0,...,an> of elements of M, the model must assign
an element of M to the term f(a0,...,an).
Interpretation of the atomic formulas of L is more complicated. To each pair a and b of elements of M, the model
must assign a truth value ||a=b|| to the expression a=b; this truth value is taken from the Boolean algebra B. Similarly,
for each n-ary relation symbol R of L and each n-tuple <a0,...,an> of elements of M, the model must assign an
element of B to be the truth value ||R(a0,...,an)||.

23.2 Interpretation of other formulas and sentences


The truth values of the atomic formulas can be used to reconstruct the truth values of more complicated formulas,
using the structure of the Boolean algebra. For propositional connectives, this is easy; one simply applies the
corresponding Boolean operators to the truth values of the subformulae. For example, if φ(x) and ψ(y,z) are formulas
with one and two free variables, respectively, and if a, b, c are elements of the model's universe to be substituted for
x, y, and z, then the truth value of

φ(a) ∧ ψ(b, c)

is simply

||φ(a) ∧ ψ(b, c)|| = ||φ(a)|| ∧ ||ψ(b, c)||

The completeness of the Boolean algebra is required to define truth values for quantified formulas. If φ(x) is a formula
with free variable x (and possibly other free variables that are suppressed), then

108
23.3. BOOLEAN-VALUED MODELS OF SET THEORY 109


x(x) = (a),
aM

where the right-hand side is to be understood as the supremum in B of the set of all truth values ||(a)|| as a ranges
over M.
The truth value of a formula is sometimes referred to as its probability. However, these are not probabilities in the
ordinary sense, because they are not real numbers, but rather elements of the complete Boolean algebra B.
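As a concrete illustration (not from the text), these clauses can be mimicked in a finite powerset algebra, where meet is intersection, join is union, and the supremum of a family is its union. The universe M = {a, b, c} and the atomic truth values below are made up for the example:

```python
# Truth values live in the finite Boolean algebra B = powerset of {1, 2}:
# meet is intersection, join is union, sup of a family is its union.
from functools import reduce

TOP = frozenset({1, 2})          # the element 1 of B
BOT = frozenset()                # the element 0 of B

def meet(a, b): return a & b     # Boolean 'and' in B
def join(a, b): return a | b     # Boolean 'or' in B
def sup(vals): return reduce(join, vals, BOT)   # least upper bound in B

# Hypothetical atomic truth values ||phi(a)|| over a universe M = {a, b, c}.
phi = {'a': frozenset({1}), 'b': BOT, 'c': frozenset({2})}

# ||phi(a) and phi(c)|| is computed with the meet of B:
conj = meet(phi['a'], phi['c'])   # frozenset(): the two values are disjoint

# ||exists x. phi(x)|| is the supremum over the universe:
exists = sup(phi.values())        # frozenset({1, 2}), i.e. the top element
```

Note that the existential statement gets truth value 1 here even though no single witness does, which is exactly the behavior the supremum clause allows.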

23.3 Boolean-valued models of set theory


Given a complete Boolean algebra B[1] there is a Boolean-valued model denoted by V^B, which is the Boolean-valued
analogue of the von Neumann universe V. (Strictly speaking, V^B is a proper class, so we need to reinterpret what it
means to be a model appropriately.) Informally, the elements of V^B are Boolean-valued sets. Given an ordinary set
A, every set either is or is not a member; but given a Boolean-valued set A, every set has a certain, fixed probability of
being a member of A. Again, the probability is an element of B, not a real number. The concept of Boolean-valued
sets resembles, but is not the same as, the notion of a fuzzy set.
The (probabilistic) elements of the Boolean-valued set, in turn, are also Boolean-valued sets, whose elements are
also Boolean-valued sets, and so on. In order to obtain a non-circular definition of Boolean-valued set, they are
defined inductively in a hierarchy similar to the cumulative hierarchy. For each ordinal α of V, the set V^B_α is defined
as follows.

V^B_0 is the empty set.

V^B_{α+1} is the set of all functions from V^B_α to B. (Such a function represents a probabilistic subset of V^B_α;
if f is such a function, then for any x ∈ V^B_α, f(x) is the probability that x is in the set.)

If α is a limit ordinal, V^B_α is the union of V^B_β for β < α.

The class V^B is defined to be the union of all sets V^B_α.
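The first levels of this hierarchy can be computed explicitly for a finite stand-in algebra. The sketch below follows the definition in the text literally (the next level is the set of all functions from the current level to B); the two-element tuple standing in for B is an assumption of the example:

```python
# Sketch: build V^B_0 .. V^B_3 for a finite stand-in "algebra" B.
# A function is modeled as a frozenset of (element, value) pairs so
# that it is hashable and can itself be an element of the next level.
from itertools import product

def next_level(level, B):
    """All functions from `level` to B, per the inductive definition."""
    elems = sorted(level, key=repr)          # fix an order on the domain
    funcs = set()
    for values in product(B, repeat=len(elems)):
        funcs.add(frozenset(zip(elems, values)))
    return funcs

B = (0, 1)              # hypothetical two-element algebra
V = [set()]             # V^B_0 is the empty set
for _ in range(3):
    V.append(next_level(V[-1], B))

print([len(level) for level in V])   # [0, 1, 2, 4]
```

The counts grow as |B| raised to the size of the previous level: one empty function at level 1, then |B|^1, then |B|^2, mirroring how the full construction explodes for a real complete Boolean algebra.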


It is also possible to relativize this entire construction to some transitive model M of ZF (or sometimes a fragment
thereof). The Boolean-valued model M^B is obtained by applying the above construction inside M. The restriction to
transitive models is not serious, as the Mostowski collapsing theorem implies that every reasonable (well-founded,
extensional) model is isomorphic to a transitive one. (If the model M is not transitive things get messier, as M's
interpretation of what it means to be a function or an ordinal may differ from the external interpretation.)
Once the elements of V^B have been defined as above, it is necessary to define B-valued relations of equality and
membership on V^B. Here a B-valued relation on V^B is a function from V^B × V^B to B. To avoid confusion with the usual
equality and membership, these are denoted by ||x=y|| and ||x∈y|| for x and y in V^B. They are defined as follows:

||x∈y|| is defined to be ⋁_{t∈Dom(y)} (||x=t|| ∧ y(t))   ("x is in y if it is equal to something in y").

||x=y|| is defined to be ||x⊆y|| ∧ ||y⊆x||   ("x equals y if x and y are both subsets of each other"), where
||x⊆y|| is defined to be ⋀_{t∈Dom(x)} (x(t) ⇒ ||t∈y||)   ("x is a subset of y if all elements of x are in y")

The symbols ⋁ and ⋀ denote the least upper bound and greatest lower bound operations, respectively, in the complete
Boolean algebra B. At first sight the definitions above appear to be circular: ||∈|| depends on ||=||, which depends on
||⊆||, which depends on ||∈||. However, a close examination shows that the definition of ||∈|| only depends on ||∈||
for elements of smaller rank, so ||∈|| and ||=|| are well-defined functions from V^B × V^B to B.
It can be shown that the B-valued relations ||∈|| and ||=|| on V^B make V^B into a Boolean-valued model of set theory.
Each sentence of first-order set theory with no free variables has a truth value in B; it must be shown that the axioms
for equality and all the axioms of ZF set theory (written without free variables) have truth value 1 (the largest element
of B). This proof is straightforward, but it is long because there are many different axioms that need to be checked.

23.4 Relationship to forcing


Set theorists use a technique called forcing to obtain independence results and to construct models of set theory for
other purposes. The method was originally developed by Paul Cohen but has been greatly extended since then. In
one form, forcing adds to the universe a generic subset of a poset, the poset being designed to impose interesting
properties on the newly added object. The wrinkle is that (for interesting posets) it can be proved that there simply is
no such generic subset of the poset. There are three usual ways of dealing with this:

syntactic forcing A forcing relation p ⊩ φ is defined between elements p of the poset and formulas φ of
the forcing language. This relation is defined syntactically and has no semantics; that is, no model is ever
produced. Rather, starting with the assumption that ZFC (or some other axiomatization of set theory) proves
the independent statement, one shows that ZFC must also be able to prove a contradiction. However, the
forcing is "over V"; that is, it is not necessary to start with a countable transitive model. See Kunen (1980) for
an exposition of this method.
countable transitive models One starts with a countable transitive model M of as much of set theory as is
needed for the desired purpose, and that contains the poset. Then there do exist filters on the poset that are
generic over M; that is, that meet all dense open subsets of the poset that happen also to be elements of M.
fictional generic objects Commonly, set theorists will simply pretend that the poset has a subset that is generic
over all of V. This generic object, in nontrivial cases, cannot be an element of V, and therefore does not really
exist. (Of course, it is a point of philosophical contention whether any sets really exist, but that is outside the
scope of the current discussion.) Perhaps surprisingly, with a little practice this method is useful and reliable,
but it can be philosophically unsatisfying.

23.4.1 Boolean-valued models and syntactic forcing


Boolean-valued models can be used to give semantics to syntactic forcing; the price paid is that the semantics is not
2-valued (true or false), but assigns truth values from some complete Boolean algebra. Given a forcing poset P,
there is a corresponding complete Boolean algebra B, often obtained as the collection of regular open subsets of P,
where the topology on P is defined by declaring all lower sets open (and all upper sets closed). (Other approaches to
constructing B are discussed below.)
Now the order on B (after removing the zero element) can replace P for forcing purposes, and the forcing relation
can be interpreted semantically by saying that, for p an element of B and φ a formula of the forcing language,

p ⊩ φ   if and only if   p ≤ ||φ||

where ||φ|| is the truth value of φ in V^B.


This approach succeeds in assigning a semantics to forcing over V without resorting to fictional generic objects. The
disadvantages are that the semantics is not 2-valued, and that the combinatorics of B are often more complicated than
those of the underlying poset P.

23.4.2 Boolean-valued models and generic objects over countable transitive models
One interpretation of forcing starts with a countable transitive model M of ZF set theory, a partially ordered set P,
and a generic subset G of P, and constructs a new model of ZF set theory from these objects. (The conditions that
the model be countable and transitive simplify some technical problems, but are not essential.) Cohen's construction
can be carried out using Boolean-valued models as follows.

Construct a complete Boolean algebra B as the complete Boolean algebra generated by the poset P.
Construct an ultrafilter U on B (or equivalently a homomorphism from B to the Boolean algebra {true, false})
from the generic subset G of P.
Use the homomorphism from B to {true, false} to turn the Boolean-valued model M^B of the section above into
an ordinary model of ZF.

We now explain these steps in more detail.


For any poset P there is a complete Boolean algebra B and a map e from P to B+ (the non-zero elements of B) such
that the image is dense, e(p) ≤ e(q) whenever p ≤ q, and e(p) ∧ e(q) = 0 whenever p and q are incompatible. This Boolean
algebra is unique up to isomorphism. It can be constructed as the algebra of regular open sets in the topological space
of P (with underlying set P, and a base given by the sets Up of elements q with q ≤ p).
The map from the poset P to the complete Boolean algebra B is not injective in general. The map is injective if and
only if P has the following property: if every r ≤ p is compatible with q, then p ≤ q.
The ultrafilter U on B is defined to be the set of elements b of B that are greater than some element of (the image
of) G. Given an ultrafilter U on a Boolean algebra, we get a homomorphism to {true, false} by mapping U to true
and its complement to false. Conversely, given such a homomorphism, the inverse image of true is an ultrafilter, so
ultrafilters are essentially the same as homomorphisms to {true, false}. (Algebraists might prefer to use maximal
ideals instead of ultrafilters: the complement of an ultrafilter is a maximal ideal, and conversely the complement of a
maximal ideal is an ultrafilter.)
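In a finite powerset algebra this correspondence can be checked directly. The sketch below (using a three-atom algebra as an illustrative assumption) builds the principal ultrafilter at one atom and verifies that membership in it is a two-valued homomorphism:

```python
# Sketch: in the powerset algebra on {1, 2, 3}, the sets containing a
# fixed atom form an ultrafilter, and "is it in U?" is a homomorphism
# onto the two-element Boolean algebra {True, False}.
from itertools import combinations

ATOMS = {1, 2, 3}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

B = subsets(ATOMS)                 # all 8 elements of the algebra
U = [b for b in B if 2 in b]       # principal ultrafilter at the atom 2
h = lambda b: b in U               # induced map B -> {True, False}

# h respects meet, join, and complement, so it is a homomorphism:
for x in B:
    for y in B:
        assert h(x & y) == (h(x) and h(y))
        assert h(x | y) == (h(x) or h(y))
    assert h(ATOMS - x) == (not h(x))
```

In an infinite Boolean algebra non-principal ultrafilters also exist (by Zorn's lemma), but the homomorphism correspondence works the same way.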
If g is a homomorphism from a Boolean algebra B to a Boolean algebra C and M^B is any B-valued model of ZF (or
of any other theory, for that matter) we can turn M^B into a C-valued model by applying the homomorphism g to
the value of all formulas. In particular if C is {true, false} we get a {true, false}-valued model. This is almost the
same as an ordinary model: in fact we get an ordinary model on the set of equivalence classes under ||=|| of a {true,
false}-valued model. So we get an ordinary model of ZF set theory by starting from M, a Boolean algebra B, and
an ultrafilter U on B. (The model of ZF constructed like this is not transitive. In practice one applies the Mostowski
collapsing theorem to turn this into a transitive model.)
We have seen that forcing can be done using Boolean-valued models, by constructing a Boolean algebra with ultrafilter
from a poset with a generic subset. It is also possible to go back the other way: given a Boolean algebra B, we can
form a poset P of all the nonzero elements of B, and a generic ultrafilter on B restricts to a generic set on P. So the
techniques of forcing and Boolean-valued models are essentially equivalent.

23.5 Notes
[1] B here is assumed to be nondegenerate; that is, 0 and 1 must be distinct elements of B. Authors writing on Boolean-valued
models typically take this requirement to be part of the definition of Boolean algebra, but authors writing on Boolean
algebras in general often do not.

23.6 References
Bell, J. L. (1985) Boolean-Valued Models and Independence Proofs in Set Theory, Oxford. ISBN 0-19-853241-
5
Grishin, V.N. (2001) [1994], b/b016990, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Jech, Thomas (2002). Set theory, third millennium edition (revised and expanded). Springer. ISBN 3-540-
44085-2. OCLC 174929965.
Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-
85401-0. OCLC 12808956.

Kusraev, A. G. and S. S. Kutateladze (1999). Boolean Valued Analysis. Kluwer Academic Publishers. ISBN
0-7923-5921-6. OCLC 41967176. Contains an account of Boolean-valued models and applications to Riesz
spaces, Banach spaces and algebras.
Manin, Yu. I. (1977). A Course in Mathematical Logic. Springer. ISBN 0-387-90243-0. OCLC 2797938.
Contains an account of forcing and Boolean-valued models written for mathematicians who are not set theorists.
Rosser, J. Barkley (1969). Simplified Independence Proofs: Boolean Valued Models of Set Theory. Academic
Press.
Chapter 24

Booleo

Booleo (stylized bOOleO) is a strategy card game using Boolean logic gates. It was developed by Jonathan Brandt and Chris Kampf
with Sean P. Dennis in 2008, and it was first published by Tessera Games LLC in 2009.[1]

24.1 Game
The deck consists of 64 cards:

48 Gate cards using three Boolean operators AND, OR, and XOR

8 OR cards resolving to 1
8 OR cards resolving to 0
8 AND cards resolving to 1
8 AND cards resolving to 0
8 XOR cards resolving to 1
8 XOR cards resolving to 0

8 NOT cards

6 Initial Binary cards, each displaying a 0 and a 1 aligned to the two short ends of the card

2 Truth Tables (used for reference, not in play)

24.2 Play
Starting with a line of Initial Binary cards laid perpendicular to two facing players, the object of the game is to be
the first to complete a logical pyramid whose final output equals that of the rightmost Initial Binary card facing that
player.
The game is played in "draw one, play one" format. The pyramid consists of decreasing rows of gate cards, where the
outputs of any contiguous pair of cards comprise the input values to a single card in the following row. The pyramid,
therefore, has Initial Binary values as its base and tapers to a single card closest to the player. Tracing the flow
of values through any series of gates, every card placed in the pyramid must make logical sense, i.e. the inputs and
output value of every gate card must conform to the rule of that gate card.
The NOT cards are played against any of the Initial Binary cards in play, causing that card to be rotated 180 degrees,
literally flipping the value of that card from 0 to 1 or vice versa.
By changing the value of any Initial Binary card, any and all gate cards which flow from it must be re-evaluated to ensure
their placement makes logical sense. If a gate card does not, it is removed from the player's pyramid.
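The re-evaluation rule can be sketched in code. The card encoding below (a gate card as a name/claimed-output pair) is a hypothetical representation for illustration, not the game's official notation:

```python
# Sketch of bOOleO's re-evaluation rule: a placed gate card stays valid
# only if its recorded output still follows from its two inputs.
GATES = {'AND': lambda a, b: a & b,
         'OR':  lambda a, b: a | b,
         'XOR': lambda a, b: a ^ b}

def valid(card, left, right):
    """card is a hypothetical (gate_name, claimed_output) pair."""
    gate, output = card
    return GATES[gate](left, right) == output

base = [1, 0, 1]                       # Initial Binary cards
row = [('AND', 0), ('OR', 1)]          # gate cards over adjacent base pairs

print([valid(c, base[i], base[i + 1]) for i, c in enumerate(row)])  # [True, True]

base[2] ^= 1   # a NOT card flips the third Initial Binary card: 1 -> 0
print([valid(c, base[i], base[i + 1]) for i, c in enumerate(row)])  # [True, False]
```

After the flip, the OR card's claimed output of 1 no longer follows from its inputs (0, 0), so under the rule above it would be removed from the pyramid.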


Since both players' pyramids share the Initial Binary cards as a base, flipping an Initial Binary card has an effect on both
players' pyramids. A principal strategy during game play is to invalidate gate cards in the opponent's logic pyramid
while doing as little damage to one's own pyramid in the process.
Some logic gates are more robust than others to a change in their inputs. Therefore, not all logic gate cards have the
same strategic value.
The standard edition of the game does not contain NAND, NOR, or XNOR gates. It is possible, therefore, for a
player to arrive at an unresolvable pair of inputs.[2]

24.3 Variations
The number of cards in bOOleO will comfortably support a match between two players whose logic pyramids are six
cards wide at their base. By combining decks, it is possible to construct larger pyramids or to have matches among
more than two players. For example:

Four players may play individually or as facing teams by arranging a cross of Initial Binary cards,
where four logic pyramids extend like compass points in four directions
Four or more players may build partially overlapping pyramids from a long base of Initial Binary
cards

Tessera Games also published bOOleO-N Edition, which is identical to bOOleO with the exception that it uses the
inverse set of logic gates: NAND, NOR, and XNOR. bOOleO-N Edition may be played on its own, or it may be
combined with bOOleO.

24.4 References
[1] http://boardgamegeek.com/boardgame/40943/booleo

[2] Somma, Ryan. A Game of Boolean Logic Gates with an Ambiguous Spelling. Geeking Out. Retrieved 9 August 2017.
Chapter 25

Canonical normal form

In Boolean algebra, any Boolean function can be put into the canonical disjunctive normal form (CDNF) or
minterm canonical form and its dual canonical conjunctive normal form (CCNF) or maxterm canonical form.
Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the
algebraic normal form (also called Zhegalkin or Reed–Muller form).
Minterms are called products because they are the logical AND of a set of variables, and maxterms are called sums
because they are the logical OR of a set of variables. These concepts are dual because of their complementary-symmetry
relationship as expressed by De Morgan's laws.
Two dual canonical forms of any Boolean function are a sum of minterms and a product of maxterms. The term
"Sum of Products" or "SoP" is widely used for the canonical form that is a disjunction (OR) of minterms. Its De
Morgan dual is a "Product of Sums" or "PoS" for the canonical form that is a conjunction (AND) of maxterms.
These forms can be useful for the simplification of these functions, which is of great importance in the optimization
of Boolean formulas in general and digital circuits in particular.

25.1 Summary

One application of Boolean algebra is digital circuit design. The goal may be to minimize the number of gates, to
minimize the settling time, etc.
There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement
only four of them: conjunction (AND), disjunction (inclusive OR), and the respective complements of those (NAND
and NOR).
Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which
pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR,
whose output is true only when all 3 inputs are false.[1]

25.2 Minterms

For a boolean function of n variables x1 , . . . , xn , a product term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n
variables that employs only the complement operator and the conjunction operator.
For example, ab′c, a′bc and abc′ are 3 examples of the 8 minterms for a Boolean function of the three variables a,
b, and c. The customary reading of the last of these is a AND b AND NOT-c.
There are 2^n minterms of n variables, since a variable in the minterm expression can be in either its direct or its
complemented form: two choices per variable.


25.2.1 Indexing minterms


Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the
variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form
(xi) and 0 to the complemented form (xi′); the minterm number is then Σ_{i=1}^{n} 2^(n−i) · value(xi). For example, minterm abc′ is
numbered 110₂ = 6₁₀ and denoted m6.
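This indexing convention is easy to state as code; the helper below is an illustrative sketch that reads the direct/complemented pattern as a binary number:

```python
# Sketch of the minterm numbering convention: 1 for a direct variable,
# 0 for a complemented one, read off as a binary number in variable order.
def minterm_index(pattern):
    """pattern: sequence of 1 (direct) / 0 (complemented), e.g. [1,1,0] for abc'."""
    index = 0
    for bit in pattern:
        index = index * 2 + bit
    return index

print(minterm_index([1, 1, 0]))   # abc' -> 6, i.e. the minterm m6
```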

25.2.2 Functional equivalence


A given minterm n gives a true value (i.e., 1) for just one combination of the input variables. For example, minterm
5, ab′c, is true only when a and c are both true and b is false: the input arrangement where a = 1, b = 0, c = 1 results
in 1.
Given the truth table of a logical function, it is possible to write the function as a sum of products. This is a special
form of disjunctive normal form. For example, given the truth table for the arithmetic sum bit u of one bit position's
logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of minterms
m1, m2, m4, and m7. If we wish to verify this: u(ci, x, y) = m1 + m2 + m4 + m7 = (ci′·x′·y) + (ci′·x·y′) +
(ci·x′·y′) + (ci·x·y), evaluated for all 8 combinations of the three variables, will match the table.
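This verification can be carried out mechanically. The sketch below checks that the four minterms, summed, agree with the adder's sum bit (which equals ci XOR x XOR y) on all eight inputs:

```python
# Verify u = m1 + m2 + m4 + m7 against the full-adder sum bit ci^x^y.
from itertools import product

def u(ci, x, y):
    m1 = (1 - ci) & (1 - x) & y      # ci'x'y
    m2 = (1 - ci) & x & (1 - y)      # ci'xy'
    m4 = ci & (1 - x) & (1 - y)      # ci x'y'
    m7 = ci & x & y                  # ci x y
    return m1 | m2 | m4 | m7

assert all(u(ci, x, y) == ci ^ x ^ y for ci, x, y in product((0, 1), repeat=3))
```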

25.3 Maxterms
For a boolean function of n variables x1 , . . . , xn , a sum term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a maxterm. Thus, a maxterm is a logical expression
of n variables that employs only the complement operator and the disjunction operator. Maxterms are a dual of the
minterm idea (i.e., exhibiting a complementary symmetry in all respects). Instead of using ANDs and complements,
we use ORs and complements and proceed similarly.
For example, the following are two of the eight maxterms of three variables:

a + b' + c
a' + b + c

There are again 2^n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct
or its complemented form: two choices per variable.

25.3.1 Indexing maxterms


Each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The
maxterm convention assigns the value 0 to the direct form (xi) and 1 to the complemented form (xi′). For example,
we assign the index 6 to the maxterm a′ + b′ + c (110) and denote that maxterm as M6. Similarly, M0 of these three
variables is a + b + c (000) and M7 is a′ + b′ + c′ (111).

25.3.2 Functional equivalence


It is apparent that maxterm n gives a false value (i.e., 0) for just one combination of the input variables. For example,
maxterm 5, a′ + b + c′, is false only when a and c are both true and b is false: the input arrangement where a = 1, b
= 0, c = 1 results in 0.
If one is given a truth table of a logical function, it is possible to write the function as a product of sums. This is
a special form of conjunctive normal form. For example, given the truth table for the carry-out bit co of one bit
position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write co as a product of
maxterms M0, M1, M2 and M4. If we wish to verify this: co(ci, x, y) = M0·M1·M2·M4 = (ci + x + y)(ci + x + y′)
(ci + x′ + y)(ci′ + x + y), evaluated for all 8 combinations of the three variables, will match the table.
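The analogous mechanical check for the product of maxterms confirms that it agrees with the adder's carry-out, which is the majority function of ci, x, y, on all eight inputs:

```python
# Verify co = M0 * M1 * M2 * M4 against the full-adder carry-out
# (true exactly when at least two of ci, x, y are true).
from itertools import product

def co(ci, x, y):
    M0 = ci | x | y                  # ci + x + y
    M1 = ci | x | (1 - y)            # ci + x + y'
    M2 = ci | (1 - x) | y            # ci + x' + y
    M4 = (1 - ci) | x | y            # ci' + x + y
    return M0 & M1 & M2 & M4

assert all(co(ci, x, y) == (ci + x + y >= 2)
           for ci, x, y in product((0, 1), repeat=3))
```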

25.4 Dualization
The complement of a minterm is the respective maxterm. This can be easily verified by using De Morgan's law. For
example: M5 = a′ + b + c′ = (ab′c)′ = m5′

25.5 Non-canonical PoS and SoP forms


It is often the case that the canonical minterm form can be simplified to an equivalent SoP form. This simplified form
would still consist of a sum of product terms. However, in the simplified form, it is possible to have fewer product
terms and/or product terms that contain fewer variables. For example, the following 3-variable function
has the canonical minterm representation f = a′bc + abc, but it has an equivalent simplified form f = bc. In this
trivial example, it is obvious that bc = a′bc + abc, and the simplified form has both fewer product terms and a
term with fewer variables. The most simplified SoP representation of a function is referred to as a minimal SoP form.
In a similar manner, a canonical maxterm form can have a simplified PoS form.
While this example was easily simplified by applying normal algebraic methods [f = (a′ + a)bc], in less obvious
cases a convenient method for finding the minimal PoS/SoP form of a function with up to four variables is using a
Karnaugh map.
The minimal PoS and SoP forms are very important for finding optimal implementations of boolean functions and
minimizing logic circuits.
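The claimed equivalence in the example above can be confirmed over the full truth table:

```python
# Verify the simplification from the text: a'bc + abc reduces to bc.
from itertools import product

def canonical(a, b, c):
    return ((1 - a) & b & c) | (a & b & c)   # a'bc + abc

def simplified(b, c):
    return b & c                              # bc

assert all(canonical(a, b, c) == simplified(b, c)
           for a, b, c in product((0, 1), repeat=3))
```

A Karnaugh map performs the same check visually: the two adjacent 1-cells differing only in a merge into the single term bc.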

25.6 Application example


The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single
bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory
of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available
parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The
values are defined as voltage states, one near ground and one near the DC supply voltage V, e.g. +5 VDC. If the
higher voltage is defined as the 1 "true" value, a NOR gate is the simplest possible useful logical element.
Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their
collectors tied together and linked to V through a load impedance. Each base is connected to an input signal, and the
common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's
emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the
output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low
voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and
the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to V.
The complementing property of these gate circuits may seem like a drawback when trying to implement a function in
canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing
function, which is required frequently in digital logic.
This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by
supposing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs).

25.6.1 Canonical and non-canonical consequences of NOR gates

Fact #1: a set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3
input variables ci, x, and y, always produce minterms, never maxterms; that is, of the 8 gates required to process
all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name,
could better be viewed (using De Morgan's law) as the AND of the complements of its input signals.
Fact #2: the reason Fact #1 is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the
complement of the like-indexed minterm, and vice versa.
In the minterm example above, we wrote u(ci, x, y) = m1 + m2 + m4 + m7, but to perform this with a 4-input
NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is,
u(ci, x, y) = AND(M0, M3, M5, M6) = NOR(m0, m3, m5, m6). Truth tables:
In the maxterm example above, we wrote co(ci, x, y) = M0·M1·M2·M4, but to perform this with a 4-input NOR gate
we need to notice the equality to the NOR of the same minterms. That is,
co(ci, x, y) = AND(M0, M1, M2, M4) = NOR(m0, m1, m2, m4). Truth tables:
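Both NOR identities can be checked over the full truth table. The sketch below verifies the first one, expressing u as the NOR of the opposite minterms:

```python
# Verify Fact #2 in action: NOR(m0, m3, m5, m6) reproduces the sum bit u.
from itertools import product

def minterm(k, ci, x, y):
    """True iff (ci, x, y) is exactly the bit pattern of index k."""
    bits = (ci, x, y)
    return all(bit == (k >> (2 - i)) & 1 for i, bit in enumerate(bits))

def u_nor(ci, x, y):
    # NOR is true exactly when none of its inputs is true.
    return not any(minterm(k, ci, x, y) for k in (0, 3, 5, 6))

assert all(u_nor(ci, x, y) == bool(ci ^ x ^ y)
           for ci, x, y in product((0, 1), repeat=3))
```

The same loop with indices (0, 1, 2, 4) reproduces co, matching the second identity.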

25.6.2 Design trade-offs considered in addition to canonical forms


One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that
all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the
addends x and y in this respect, because they are static throughout the addition and thus are normally held in latch
circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a
pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is
also no need to create the complement form of the sum u. However, the carry out of one bit position must be passed
as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this
is to pass co through a 1-input NOR gate and label the output co′, but that would add a gate delay in the worst possible
place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical
form of co′ (out of the opposite minterms as co) solves this problem.

co′(ci, x, y) = AND(M3, M5, M6, M7) = NOR(m3, m5, m6, m7).

Truth tables:
The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger
gate). If we'd just used that 1-input gate to complement co, there would have been no use for the minterm m7, and
the gate that generated it could have been eliminated. Nevertheless, it's still a good trade.
Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning
NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input
NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However,
this approach not only increases the number of gates used, but also doubles the number of gate delays processing the
signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical
forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile.

25.6.3 Top-down vs. bottom-up design


We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the
addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the top-down way to
design the digital circuit for this function, but is it the best way? The discussion has focused on identifying "fastest" as
"best", and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate.
The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals
to other gates, since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such
a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally
compare the results.
The bottom-up development involves noticing that u = ci XOR (x XOR y), where XOR means exclusive OR [true
when either input is true but not when both are true], and that co = ci·x + x·y + y·ci. One such development takes
twelve NOR gates in all: six 2-input gates and two 1-input gates to produce u in 5 gate delays, plus three 2-input
gates and one 3-input gate to produce co′ in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus
three 4-input NOR gates to produce u, co and co′ in 2 gate delays. If the circuit inventory actually includes 4-input
NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our
convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR
function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the
sum digit u considerably faster. The fanout comparison is tabulated as:
What's a decision-maker to do? An observant one will have noticed that the description of the bottom-up development
mentions co′ as an output but not co. Does that design simply never need the direct form of the carry out? Well, yes
and no. At each stage, the calculation of co′ depends only on ci′, x′ and y′, which means that the carry propagation
ripples along the bit positions just as fast as in the canonical design without ever developing co. The calculation of u,
which does require ci to be made from ci′ by a 1-input NOR, is slower, but for any word length the design only pays
that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what
amounts to its own little pipeline, without affecting when the next bit position's sum bit can be calculated. And, to be
sure, the co′ out of the leftmost bit position will probably have to be complemented as part of the logic determining
whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing
parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate
count and/or fanout are paramount!
We'll leave the exact circuitry of the bottom-up design, of which all these statements are true, as an exercise for the
interested reader, assisted by one more algebraic formula: u = [ci·(x XOR y) + ci′·(x XOR y)′]′. Decoupling the carry
propagation from the sum formation in this way is what elevates the performance of a carry-lookahead adder over
that of a ripple carry adder.
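That closing formula can be checked against the definition of the sum bit over the full truth table:

```python
# Verify u = [ci(x XOR y) + ci'(x XOR y)']' equals ci XOR x XOR y.
from itertools import product

def u(ci, x, y):
    w = x ^ y
    return 1 - ((ci & w) | ((1 - ci) & (1 - w)))   # complement of ci XNOR w

assert all(u(ci, x, y) == ci ^ x ^ y for ci, x, y in product((0, 1), repeat=3))
```

The inner expression is the XNOR of ci with (x XOR y), so complementing it (the outer NOR-style inversion) yields the three-way XOR, as the formula asserts.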
To see how NOR gate logic was used in the Apollo Guidance Computer's ALU, visit http://klabs.org/history/ech/
agc_schematics/index.htm, select any of the 4-BIT MODULE entries in the Index to Drawings, and expand images
as desired.

25.7 See also


Algebraic normal form
Canonical form

Blake canonical form

List of Boolean algebra topics

25.8 Footnotes
[1] Hall, Eldon C. (1996). Journey to the Moon: The History of the Apollo Guidance Computer. AIAA. ISBN 1-56347-185-X.

25.9 References
Bender, Edward A.; Williamson, S. Gill (2005). A Short Course in Discrete Mathematics. Mineola, NY: Dover
Publications, Inc. ISBN 0-486-43946-1.
The authors demonstrate a proof that any Boolean (logic) function can be expressed in either disjunctive or
conjunctive normal form (cf pages 5–6); the proof simply proceeds by creating all 2^N rows of N Boolean
variables and demonstrates that each row (minterm or maxterm) has a unique Boolean expression. Any
Boolean function of the N variables can be derived from a composite of the rows whose minterm or maxterm
are logical 1s (trues).

McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits. NY: McGraw-Hill Book Company.
p. 78. LCCN 65-17394. Canonical expressions are defined and described.

Hill, Fredrick J.; Peterson, Gerald R. (1974). Introduction to Switching Theory and Logical Design (2nd ed.).
NY: John Wiley & Sons. p. 101. ISBN 0-471-39882-9. Minterm and maxterm designation of functions

25.10 External links


Boole, George (1848). Translated by Wilkins, David R. "The Calculus of Logic". Cambridge and Dublin
Mathematical Journal. III: 183–198.
Chapter 26

Cantor algebra

For the algebras encoding a bijection from an infinite set X onto the product X×X, sometimes called Cantor algebras,
see Jónsson–Tarski algebra.

In mathematics, a Cantor algebra, named after Georg Cantor, is one of two closely related Boolean algebras, one
countable and one complete.
The countable Cantor algebra is the Boolean algebra of all clopen subsets of the Cantor set. This is the free Boolean
algebra on a countable number of generators. Up to isomorphism, this is the only nontrivial Boolean algebra that is
both countable and atomless.
The complete Cantor algebra is the complete Boolean algebra of Borel subsets of the reals modulo meager sets (Balcar
& Jech 2006). It is isomorphic to the completion of the countable Cantor algebra. (The complete Cantor algebra is
sometimes called the Cohen algebra, though "Cohen algebra" usually refers to a different type of Boolean algebra.)
The complete Cantor algebra was studied by von Neumann in 1935 (later published as (von Neumann 1998)), who
showed that it is not isomorphic to the random algebra of Borel subsets modulo measure zero sets.

26.1 References
Balcar, Bohuslav; Jech, Thomas (2006), "Weak distributivity, a problem of von Neumann and the mystery of
measurability", Bulletin of Symbolic Logic, 12 (2): 241–266, MR 2223923

von Neumann, John (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton
University Press, ISBN 978-0-691-05893-1, MR 0120174

Chapter 27

Chaff algorithm

Chaff is an algorithm for solving instances of the Boolean satisfiability problem in programming. It was designed
by researchers at Princeton University, United States. The algorithm is an instance of the DPLL algorithm with a
number of enhancements for efficient implementation.

27.1 Implementations
Some available implementations of the algorithm in software are mChaff and zChaff, the latter one being the most
widely known and used. zChaff was originally written by Dr. Lintao Zhang, now at Microsoft Research, hence the
"z". It is now maintained by researchers at Princeton University and available for download as both source code and
binaries on Linux. zChaff is free for non-commercial use.

27.2 References
M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, S. Malik. "Chaff: Engineering an Efficient SAT Solver", 39th
Design Automation Conference (DAC 2001), Las Vegas, ACM 2001.
Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in
Model Checking". Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.

27.3 External links


Web page about zChaff

Chapter 28

Cohen algebra

Not to be confused with Cohen ring or Rankin–Cohen algebra.


For the quotient of the algebra of Borel sets by the ideal of meager sets, sometimes called the Cohen algebra, see
Cantor algebra.

In mathematical set theory, a Cohen algebra, named after Paul Cohen, is a type of Boolean algebra used in the
theory of forcing. A Cohen algebra is a Boolean algebra whose completion is isomorphic to the completion of a free
Boolean algebra (Koppelberg 1993).

28.1 References
Koppelberg, Sabine (1993), "Characterizations of Cohen algebras", Papers on general topology and applications
(Madison, WI, 1991), Annals of the New York Academy of Sciences, 704, New York Academy of Sciences,
pp. 222–237, MR 1277859, doi:10.1111/j.1749-6632.1993.tb52525.x

Chapter 29

Collapsing algebra

In mathematics, a collapsing algebra is a type of Boolean algebra sometimes used in forcing to reduce (collapse)
the size of cardinals. The posets used to generate collapsing algebras were introduced by Azriel Lévy (1963).
The collapsing algebra of λ^ω is a complete Boolean algebra with at least λ elements but generated by a countable
number of elements. As the size of countably generated complete Boolean algebras is unbounded, this shows that
there is no free complete Boolean algebra on a countable number of elements.

29.1 Definition
There are several slightly different sorts of collapsing algebras.
If κ and λ are cardinals, then the Boolean algebra of regular open sets of the product space λ^κ is a collapsing algebra.
Here λ and κ are both given the discrete topology. There are several different options for the topology of λ^κ. The
simplest option is to take the usual product topology. Another option is to take the topology generated by open sets
consisting of functions whose value is specified on less than κ elements of κ.

29.2 References
Bell, J. L. (1985). Boolean-Valued Models and Independence Proofs in Set Theory. Oxford Logic Guides. 12
(2nd ed.). Oxford: Oxford University Press (Clarendon Press). ISBN 0-19-853241-5. Zbl 0585.03021.
Jech, Thomas (2003). Set theory (third millennium (revised and expanded) ed.). Springer-Verlag. ISBN
3-540-44085-2. OCLC 174929965. Zbl 1007.03002.
Lévy, Azriel (1963). "Independence results in set theory by Cohen's method IV". Notices Amer. Math. Soc.
10: 593.

Chapter 30

Complete Boolean algebra

This article is about a type of mathematical structure. For complete sets of Boolean operators, see Functional
completeness.

In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least
upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory
of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra
containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion
of A is the Dedekind–MacNeille completion.
More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than
κ has a supremum.

30.1 Examples
Every finite Boolean algebra is complete.

The algebra of subsets of a given set is a complete Boolean algebra.

The regular open sets of any topological space form a complete Boolean algebra. This example is of particular
importance because every forcing poset can be considered as a topological space (a base for the topology
consisting of sets that are the set of all elements less than or equal to a given element). The corresponding
regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions
by the given forcing poset.

The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean
algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the
Boolean algebra is called the random algebra.

The algebra of all measurable subsets of a measure space is an ℵ1-complete Boolean algebra, but is not usually
complete.

The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is
not complete.

The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is
complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra.

Another example of a Boolean algebra that is not complete is the Boolean algebra P(ω) of all sets of natural
numbers, quotiented out by the ideal Fin of finite subsets. The resulting object, denoted P(ω)/Fin, consists of
all equivalence classes of sets of naturals, where the relevant equivalence relation is that two sets of naturals are
equivalent if their symmetric difference is finite. The Boolean operations are defined analogously, for example,
if A and B are two equivalence classes in P(ω)/Fin, we define A ∨ B to be the equivalence class of a ∪ b, where
a and b are some (any) elements of A and B respectively.


Now let a0, a1, ... be pairwise disjoint infinite sets of naturals, and let A0, A1, ... be their corresponding
equivalence classes in P(ω)/Fin. Then given any upper bound X of A0, A1, ... in P(ω)/Fin, we can find
a lesser upper bound, by removing from a representative for X one element of each an. Therefore the
An have no supremum.
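The "finite symmetric difference" equivalence can be played with concretely; the following sketch (an illustration we add, restricted to a finite window of the naturals) shows why joining classes via representatives is well defined:

```python
# Illustrative sketch of P(omega)/Fin on a finite window of the naturals:
# changing a representative on finitely many points changes the union
# with another set on at most those points, so the join of equivalence
# classes does not depend on the chosen representatives.
window = range(10_000)
evens = {n for n in window if n % 2 == 0}
evens_alt = (evens - {0, 2, 4}) | {1, 3}   # same class: finite symmetric difference
threes = {n for n in window if n % 3 == 0}

# The two joins differ only inside the (finite) symmetric difference.
assert (evens | threes) ^ (evens_alt | threes) <= (evens ^ evens_alt)
assert len(evens ^ evens_alt) == 5  # finite, so the joins are equivalent too
```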

A Boolean algebra is complete if and only if its Stone space of prime ideals is extremally disconnected.

30.2 Properties of complete Boolean algebras


Sikorski's extension theorem states that

if A is a subalgebra of a Boolean algebra B, then any homomorphism from A to a complete Boolean algebra C can be
extended to a morphism from B to C.

Every subset of a complete Boolean algebra has a supremum, by definition; it follows that every subset also has
an infimum (greatest lower bound).
For a complete Boolean algebra both infinite distributive laws hold.
For a complete Boolean algebra the infinite De Morgan's laws hold.
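The step from suprema to infima can be illustrated in the power-set algebra (our own example, not from the text), where sup is union and the infimum of a family is the complement of the supremum of the complements:

```python
# In the power-set algebra of U: sup = union, complement = U minus the set.
# The infimum of a family is obtained De Morgan-style from the supremum.
U = frozenset(range(6))

def complement(a):
    return U - a

def sup(subsets):
    out = frozenset()
    for s in subsets:
        out |= s
    return out

def inf_via_sup(subsets):
    return complement(sup(complement(s) for s in subsets))

family = [frozenset({0, 1, 2, 3}), frozenset({1, 2, 4}), frozenset({1, 2, 5})]
assert inf_via_sup(family) == frozenset({1, 2})  # the plain intersection
```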

30.3 The completion of a Boolean algebra


The completion of a Boolean algebra can be defined in several equivalent ways:

The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that A
is dense in B; this means that for every nonzero element of B there is a smaller non-zero element of A.
The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that
every element of B is the supremum of some subset of A.

The completion of a Boolean algebra A can be constructed in several ways:

The completion is the Boolean algebra of regular open sets in the Stone space of prime ideals of A. Each
element x of A corresponds to the open set of prime ideals not containing x (which is open and closed, and
therefore regular).
The completion is the Boolean algebra of regular cuts of A. Here a cut is a subset U of A+ (the non-zero
elements of A) such that if q is in U and p ≤ q then p is in U, and is called regular if whenever p is not in U there
is some r ≤ p such that U has no elements ≤ r. Each element p of A corresponds to the cut of elements ≤ p.

If A is a metric space and B its completion then any isometry from A to a complete metric space C can be extended to
a unique isometry from B to C. The analogous statement for complete Boolean algebras is not true: a homomorphism
from a Boolean algebra A to a complete Boolean algebra C cannot necessarily be extended to a (supremum preserving)
homomorphism of complete Boolean algebras from the completion B of A to C. (By Sikorski's extension theorem it
can be extended to a homomorphism of Boolean algebras from B to C, but this will not in general be a homomorphism
of complete Boolean algebras; in other words, it need not preserve suprema.)

30.4 Free κ-complete Boolean algebras


Unless the Axiom of Choice is relaxed,[1] free complete Boolean algebras generated by a set do not exist (unless the set
is finite). More precisely, for any cardinal κ, there is a complete Boolean algebra of cardinality 2^κ greater than κ that
is generated as a complete Boolean algebra by a countable subset; for example the Boolean algebra of regular open
sets in the product space κ^ω, where κ has the discrete topology. A countable generating set consists of all sets a_{m,n}

for m, n integers, consisting of the elements x ∈ κ^ω such that x(m) < x(n). (This Boolean algebra is called a collapsing
algebra, because forcing with it collapses the cardinal κ onto ω.)
In particular the forgetful functor from complete Boolean algebras to sets has no left adjoint, even though it is
continuous and the category of Boolean algebras is small-complete. This shows that the solution set condition in
Freyd's adjoint functor theorem is necessary.
Given a set X, one can form the free Boolean algebra A generated by this set and then take its completion B. However
B is not a free complete Boolean algebra generated by X (unless X is finite or AC is omitted), because a function
from X to a free Boolean algebra C cannot in general be extended to a (supremum-preserving) morphism of Boolean
algebras from B to C.
On the other hand, for any fixed cardinal κ, there is a free (or universal) κ-complete Boolean algebra generated by
any given set.

30.5 See also


Complete lattice
Complete Heyting algebra

30.6 References
[1] Stavi, Jonathan (1974), "A model of ZF with an infinite free complete Boolean algebra" (reprint), Israel Journal of
Mathematics, 20 (2): 149–163, doi:10.1007/BF02757883.

Johnstone, Peter T. (1982), Stone spaces, Cambridge University Press, ISBN 0-521-33779-8
Koppelberg, Sabine (1989), Monk, J. Donald; Bonnet, Robert, eds., Handbook of Boolean algebras, 1,
Amsterdam: North-Holland Publishing Co., pp. xx+312, ISBN 0-444-70261-X, MR 0991565
Monk, J. Donald; Bonnet, Robert, eds. (1989), Handbook of Boolean algebras, 2, Amsterdam: North-Holland
Publishing Co., ISBN 0-444-87152-7, MR 0991595

Monk, J. Donald; Bonnet, Robert, eds. (1989), Handbook of Boolean algebras, 3, Amsterdam: North-Holland
Publishing Co., ISBN 0-444-87153-5, MR 0991607

Vladimirov, D.A. (2001) [1994], Boolean algebra, in Hazewinkel, Michiel, Encyclopedia of Mathematics,
Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 31

Consensus theorem

In Boolean algebra, the consensus theorem or rule of consensus[1] is the identity:

xy ∨ x'z ∨ yz = xy ∨ x'z

The consensus or resolvent of the terms xy and x'z is yz. It is the conjunction of all the unique literals of the terms,
excluding the literal that appears unnegated in one term and negated in the other.
The conjunctive dual of this equation is:

(x ∨ y)(x' ∨ z)(y ∨ z) = (x ∨ y)(x' ∨ z)

31.1 Proof
xy ∨ x'z ∨ yz = xy ∨ x'z ∨ (x ∨ x')yz
= xy ∨ x'z ∨ xyz ∨ x'yz
= (xy ∨ xyz) ∨ (x'z ∨ x'yz)
= xy(1 ∨ z) ∨ x'z(1 ∨ y)
= xy ∨ x'z
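The identity can also be confirmed exhaustively; the following check (a sketch we add, not from the article) evaluates both sides over all eight assignments:

```python
# Exhaustive verification of the consensus theorem
# xy + x'z + yz = xy + x'z over all assignments of x, y, z.
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    lhs = (x & y) | ((1 - x) & z) | (y & z)
    rhs = (x & y) | ((1 - x) & z)
    assert lhs == rhs
```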

31.2 Consensus
The consensus or consensus term of two conjunctive terms of a disjunction is defined when one term contains the
literal a and the other the literal a', an opposition. The consensus is the conjunction of the two terms, omitting both
a and a', and repeated literals. For example, the consensus of x'yz and wy'z is wx'z.[2] The consensus is undefined if
there is more than one opposition.
For the conjunctive dual of the rule, the consensus y ∨ z can be derived from (x ∨ y) and (x' ∨ z) through the resolution
inference rule. This shows that the LHS is derivable from the RHS (if A → B then A → A∧B; replacing A with RHS
and B with (y ∨ z)). The RHS can be derived from the LHS simply through the conjunction elimination inference
rule. Since RHS → LHS and LHS → RHS (in propositional calculus), then LHS = RHS (in Boolean algebra).

31.3 Applications
In Boolean algebra, repeated consensus is the core of one algorithm for calculating the Blake canonical form of a
formula.[2]
In digital logic, including the consensus term in a circuit can eliminate race hazards.[3]


31.4 History
The concept of consensus was introduced by Archie Blake in 1937, related to the Blake canonical form.[4] It was
rediscovered by Samson and Mills in 1954[5] and by Quine in 1955.[6] Quine coined the term "consensus". Robinson
used it for clauses in 1965 as the basis of his "resolution principle".[7][8]

31.5 References
[1] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition 2003, p. 44

[2] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition 2003, p. 81

[3] M. Rafiquzzaman, Fundamentals of Digital Logic and Microcontrollers, 6th edition (2014), ISBN 1118855795, p. 75

[4] "Canonical expressions in Boolean algebra", Dissertation, Department of Mathematics, University of Chicago, 1937,
reviewed in J. C. C. McKinsey, The Journal of Symbolic Logic 3:2:93 (June 1938) doi:10.2307/2267634 JSTOR 2267634

[5] Edward W. Samson, Burton E. Mills, Air Force Cambridge Research Center, Technical Report 54-21, April 1954

[6] Willard van Orman Quine, "The problem of simplifying truth functions", American Mathematical Monthly 59:521–531,
1952 JSTOR 2308219

[7] John Alan Robinson, "A Machine-Oriented Logic Based on the Resolution Principle", Journal of the ACM 12 (1): 23–41.

[8] Donald Ervin Knuth, The Art of Computer Programming 4A: Combinatorial Algorithms, part 1, p. 539

31.6 Further reading


Roth, Charles H. Jr. and Kinney, Larry L. (2004, 2010). Fundamentals of Logic Design, 6th Ed., p. 66.
Chapter 32

Correlation immunity

In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs
are uncorrelated with some subset of its inputs. Specifically, a Boolean function is said to be correlation-immune
of order m if every subset of m or fewer variables in x1, x2, . . . , xn is statistically independent of the value of
f(x1, x2, . . . , xn).

32.1 Definition
A function f : F2^n → F2 is k-th order correlation immune if for any n independent binary random variables
X0, . . . , Xn−1, the random variable Z = f(X0, . . . , Xn−1) is independent from any random vector (Xi1, . . . , Xik)
with 0 ≤ i1 < . . . < ik < n.

32.2 Results in cryptography


When used in a stream cipher as a combining function for linear feedback shift registers, a Boolean function with low-
order correlation-immunity is more susceptible to a correlation attack than a function with correlation immunity of
high order.
Siegenthaler showed that the correlation immunity m of a Boolean function of algebraic degree d of n variables satisfies
m + d ≤ n; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible
correlation immunity. Furthermore, if the function is balanced then m + d ≤ n − 1.[1]
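The definition lends itself to a brute-force check for small n. The following sketch is our own illustration (the function name and approach are not from the article); it reports the largest m for which every subset of at most m inputs is independent of the output, assuming uniformly random inputs:

```python
# Brute-force correlation-immunity order of a Boolean function given
# as a callable over n variables. Order m: conditioning on any m or
# fewer inputs leaves P(f = 1) unchanged.
from itertools import combinations, product

def correlation_immunity_order(f, n):
    rows = list(product((0, 1), repeat=n))
    p1 = sum(f(*r) for r in rows) / len(rows)  # overall P(f = 1)
    order = 0
    for m in range(1, n + 1):
        for idxs in combinations(range(n), m):
            for vals in product((0, 1), repeat=m):
                sel = [r for r in rows
                       if all(r[i] == v for i, v in zip(idxs, vals))]
                if sum(f(*r) for r in sel) / len(sel) != p1:
                    return order
        order = m
    return order

xor3 = lambda x, y, z: x ^ y ^ z
assert correlation_immunity_order(xor3, 3) == 2  # XOR of n variables: order n - 1
```

The XOR example also illustrates Siegenthaler's bound: its algebraic degree is d = 1, so m = n − 1 is the most a balanced function of that degree can achieve.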

32.3 References
[1] T. Siegenthaler (September 1984). "Correlation-Immunity of Nonlinear Combining Functions for Cryptographic
Applications". IEEE Transactions on Information Theory. 30 (5): 776–780. doi:10.1109/TIT.1984.1056949.

32.3.1 Further reading


1. Cusick, Thomas W. & Stanica, Pantelimon (2009). Cryptographic Boolean functions and applications.
Academic Press. ISBN 9780123748904.

Chapter 33

Davis–Putnam algorithm

The Davis–Putnam algorithm was developed by Martin Davis and Hilary Putnam for checking the validity of a
first-order logic formula using a resolution-based decision procedure for propositional logic. Since the set of valid
first-order formulas is recursively enumerable but not recursive, there exists no general algorithm to solve this
problem. Therefore, the Davis–Putnam algorithm only terminates on valid formulas. Today, the term "Davis–Putnam
algorithm" is often used synonymously with the resolution-based propositional decision procedure that is actually
only one of the steps of the original algorithm.

33.1 Overview
The procedure is based on Herbrand's theorem, which implies that an unsatisfiable formula has an unsatisfiable ground
instance, and on the fact that a formula is valid if and only if its negation is unsatisfiable. Taken together, these facts
imply that to prove the validity of φ it is enough to prove that a ground instance of ¬φ is unsatisfiable. If φ is not
valid, then the search for an unsatisfiable ground instance will not terminate.
The procedure roughly consists of these three parts:

put the formula in prenex form and eliminate quantifiers

generate all propositional ground instances, one by one

check if each instance is satisfiable

The last part is probably the most innovative one, and works as follows:

for every variable in the formula

for every clause c containing the variable and every clause n containing the negation of the variable
resolve c and n and add the resolvent to the formula
remove all original clauses containing the variable or its negation

At each step, the intermediate formula generated is equisatisfiable with, but possibly not equivalent to, the original formula.
The resolution step leads to a worst-case exponential blow-up in the size of the formula.
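The variable-elimination loop above can be sketched in Python (our illustration; the clause representation and names are assumptions, not from the original): clauses are sets of nonzero integers, with −v denoting the negation of variable v.

```python
# One elimination step of the Davis-Putnam propositional procedure:
# resolve every clause containing v against every clause containing -v,
# keep the non-tautological resolvents, and drop all clauses mentioning v.
def eliminate(clauses, v):
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for p in pos:
        for n in neg:
            r = (p - {v}) | (n - {-v})
            if not any(-lit in r for lit in r):  # skip tautologies
                resolvents.append(frozenset(r))
    return rest + resolvents

# (p or q) and (not p or q) and (not q): eliminating p, then q,
# derives the empty clause, i.e. the formula is unsatisfiable.
cnf = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]
step1 = eliminate(cnf, 1)
step2 = eliminate(step1, 2)
assert frozenset() in step2  # empty clause found: unsatisfiable
```

Each step preserves satisfiability but can square the number of clauses, matching the worst-case exponential blow-up noted above.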
The Davis–Putnam–Logemann–Loveland algorithm is a 1962 refinement of the propositional satisfiability step of the
Davis–Putnam procedure which requires only a linear amount of memory in the worst case. It still forms the basis
for today's (as of 2015) most efficient complete SAT solvers.

33.2 See also


Herbrandization


33.3 References
Davis, Martin; Putnam, Hilary (1960). "A Computing Procedure for Quantification Theory". Journal of the
ACM. 7 (3): 201–215. doi:10.1145/321033.321034.
Davis, Martin; Logemann, George; Loveland, Donald (1962). "A Machine Program for Theorem Proving".
Communications of the ACM. 5 (7): 394–397. doi:10.1145/368273.368557.

R. Dechter; I. Rish. "Directional Resolution: The Davis–Putnam Procedure, Revisited". In J. Doyle and
E. Sandewall and P. Torasso. Principles of Knowledge Representation and Reasoning: Proc. of the Fourth
International Conference (KR'94). pp. 134–145.
John Harrison (2009). Handbook of Practical Logic and Automated Reasoning. Cambridge University Press.
pp. 79–90. ISBN 978-0-521-89957-4.
Chapter 34

De Morgan's laws

In propositional logic and Boolean algebra, De Morgan's laws[1][2][3] are a pair of transformation rules that are both
valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules
allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:

the negation of a disjunction is the conjunction of the negations; and


the negation of a conjunction is the disjunction of the negations;

or

the complement of the union of two sets is the same as the intersection of their complements; and
the complement of the intersection of two sets is the same as the union of their complements.

In set theory and Boolean algebra, these are written formally as

(A ∪ B)‾ = A‾ ∩ B‾,
(A ∩ B)‾ = A‾ ∪ B‾,

where

A and B are sets,

A‾ is the complement of A,

∩ is the intersection, and

∪ is the union.

In formal language, the rules are written as

¬(P ∨ Q) ⇔ (¬P) ∧ (¬Q),

and

¬(P ∧ Q) ⇔ (¬P) ∨ (¬Q)

where


De Morgan's laws represented with Venn diagrams. In each case, the resultant set is the set of all points in any shade of blue.

P and Q are propositions,


¬ is the negation logic operator (NOT),
∧ is the conjunction logic operator (AND),
∨ is the disjunction logic operator (OR),
⇔ is a metalogical symbol meaning "can be replaced in a logical proof with".

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs.
De Morgan's laws are an example of a more general concept of mathematical duality.

34.1 Formal notation


The negation of conjunction rule may be written in sequent notation:

¬(P ∧ Q) ⊢ (¬P ∨ ¬Q).

The negation of disjunction rule may be written as:

¬(P ∨ Q) ⊢ (¬P ∧ ¬Q).

In rule form: negation of conjunction

¬(P ∧ Q)
∴ ¬P ∨ ¬Q
and negation of disjunction

¬(P ∨ Q)
∴ ¬P ∧ ¬Q
and expressed as a truth-functional tautology or theorem of propositional logic:

¬(P ∧ Q) ↔ (¬P ∨ ¬Q),
¬(P ∨ Q) ↔ (¬P ∧ ¬Q),

where P and Q are propositions expressed in some formal system.
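Both tautologies can be confirmed by a direct truth-table sweep (an illustrative check we add, not part of the article):

```python
# Truth-table check that both De Morgan equivalences hold for every
# combination of truth values of P and Q.
for P in (False, True):
    for Q in (False, True):
        assert (not (P and Q)) == ((not P) or (not Q))
        assert (not (P or Q)) == ((not P) and (not Q))
```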

34.1.1 Substitution form


De Morgan's laws are normally shown in the compact form above, with negation of the output on the left and negation
of the inputs on the right. A clearer form for substitution can be stated as:

P ∧ Q ⇔ ¬(¬P ∨ ¬Q),
P ∨ Q ⇔ ¬(¬P ∧ ¬Q).

This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a
substitution.

34.1.2 Set theory and Boolean algebra


In set theory and Boolean algebra, it is often stated as "union and intersection interchange under complementation",[4]
which can be formally expressed as:

(A ∪ B)‾ = A‾ ∩ B‾,
(A ∩ B)‾ = A‾ ∪ B‾,

where:

A‾ is the negation of A, the overline being written above the terms to be negated,

∩ is the intersection operator (AND),

∪ is the union operator (OR).

The generalized form is:


( ⋃i∈I Ai )‾ = ⋂i∈I Ai‾,

( ⋂i∈I Ai )‾ = ⋃i∈I Ai‾,

where I is some, possibly uncountable, indexing set.


In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign".[5]

34.1.3 Engineering

In electrical and computer engineering, De Morgan's laws are commonly written as:

(A · B)‾ ≡ A‾ + B‾

and

(A + B)‾ ≡ A‾ · B‾,

where:

· is a logical AND,

+ is a logical OR,

the overbar is the logical NOT of what is underneath the overbar.

34.1.4 Text searching

De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of
documents containing the words "cars" and "trucks". De Morgan's laws hold that these two searches will return the
same set of documents:

Search A: NOT (cars OR trucks)


Search B: (NOT cars) AND (NOT trucks)

The corpus of documents containing cars or trucks can be represented by four documents:

Document 1: Contains only the word cars.


Document 2: Contains only trucks.
Document 3: Contains both cars and trucks.
Document 4: Contains neither cars nor trucks.

To evaluate Search A, clearly the search (cars OR trucks) will hit on Documents 1, 2, and 3. So the negation of
that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search (NOT cars) will hit on documents that do not contain cars, which is Documents 2
and 4. Similarly the search (NOT trucks) will hit on Documents 1 and 4. Applying the AND operator to these two
searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will return the same set of documents
(Documents 1, 2, 4):

Search C: NOT (cars AND trucks),


Search D: (NOT cars) OR (NOT trucks).
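The four-document corpus above can be evaluated directly; this sketch (our own, not from the article) runs all four searches and confirms the claimed equalities:

```python
# The example corpus: document number -> set of words it contains.
docs = {
    1: {"cars"},
    2: {"trucks"},
    3: {"cars", "trucks"},
    4: set(),
}
search_a = {d for d, w in docs.items() if not ("cars" in w or "trucks" in w)}
search_b = {d for d, w in docs.items() if ("cars" not in w) and ("trucks" not in w)}
search_c = {d for d, w in docs.items() if not ("cars" in w and "trucks" in w)}
search_d = {d for d, w in docs.items() if ("cars" not in w) or ("trucks" not in w)}

assert search_a == search_b == {4}
assert search_c == search_d == {1, 2, 4}
```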

34.2 History
The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to
classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George
Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle,
and was known to Greek and Medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down
the words that would result by reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes
rules of conversion that follow the lines of De Morgan's laws.[9] Still, De Morgan is given credit for stating the laws
in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be
proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs
and deductive arguments.

34.3 Informal proof


De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part
of a formula.

34.3.1 Negation of a disjunction


In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true",
which is written as:

¬(A ∨ B).

In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true,
which may be written directly as:

(¬A) ∧ (¬B).

If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in
English, this follows the logic that since two things are both false, it is also false that either of them is true.
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that not
A and not B are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction
must thus be true, and the result is identical to the first claim.

34.3.2 Negation of a conjunction


The application of De Morgan's theorem to a conjunction is very similar to its application to a disjunction both in
form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as:

¬(A ∧ B).

In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction
of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or
equivalently, one or more of not A and not B must be true). This may be written directly as,

(¬A) ∨ (¬B).

Presented in English, this follows the logic that since it is false that two things are both true, at least one of them
must be false.
Working in the opposite direction again, the second expression asserts that at least one of not A and not B must
be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their
conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression
is identical to the first claim.

34.4 Formal proof


The proof that (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ is completed in 2 steps by proving both (A ∪ B)ᶜ ⊆ Aᶜ ∩ Bᶜ and Aᶜ ∩ Bᶜ ⊆
(A ∪ B)ᶜ.

34.4.1 Part 1

Let x ∈ (A ∪ B)ᶜ. Then, x ∉ A ∪ B.

Because A ∪ B = {y | y ∈ A ∨ y ∈ B}, it must be the case that x ∉ A and x ∉ B.
If x ∉ A, then x ∈ Aᶜ.
Similarly, if x ∉ B, then x ∈ Bᶜ.
Thus, x ∈ Aᶜ ∩ Bᶜ.
Hence, ∀x (x ∈ (A ∪ B)ᶜ → x ∈ Aᶜ ∩ Bᶜ);
that is, (A ∪ B)ᶜ ⊆ Aᶜ ∩ Bᶜ.

34.4.2 Part 2

To prove the reverse direction, let x ∈ Aᶜ ∩ Bᶜ, and assume x ∉ (A ∪ B)ᶜ.

Under that assumption, it must be the case that x ∈ A ∪ B,
so it follows that x ∈ A or x ∈ B, and thus x ∉ Aᶜ or x ∉ Bᶜ.
However, that means x ∉ Aᶜ ∩ Bᶜ, in contradiction to the hypothesis that x ∈ Aᶜ ∩ Bᶜ;
therefore, the assumption x ∉ (A ∪ B)ᶜ must not be the case, meaning that x ∈ (A ∪ B)ᶜ.
Hence, ∀x (x ∈ Aᶜ ∩ Bᶜ → x ∈ (A ∪ B)ᶜ),
that is, Aᶜ ∩ Bᶜ ⊆ (A ∪ B)ᶜ.

34.4.3 Conclusion

If Aᶜ ∩ Bᶜ ⊆ (A ∪ B)ᶜ and (A ∪ B)ᶜ ⊆ Aᶜ ∩ Bᶜ, then (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ; this concludes the proof of De
Morgan's law.
The other De Morgan's law, (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ, is proven similarly.
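Both set-theoretic laws can also be spot-checked mechanically over random subsets of a finite universe (an illustrative sketch we add, not part of the proof):

```python
# Randomized sanity check of both set-theoretic De Morgan laws,
# taking complements relative to a fixed finite universe U.
import random

U = set(range(20))
random.seed(0)
for _ in range(100):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    assert U - (A | B) == (U - A) & (U - B)
    assert U - (A & B) == (U - A) | (U - B)
```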

De Morgan's laws represented as a circuit with logic gates

34.5 Generalising De Morgan duality


In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always
find its dual), since in the presence of the identities governing negation, one may always introduce an operator that
is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely
the existence of negation normal forms: any formula is equivalent to another formula where negations only occur
applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications,
for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where
it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers
use them to simplify or properly negate complicated logical conditions. They are also often useful in computations
in elementary probability theory.
Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be
the operator Pd defined by

Pd(p, q, ...) = ¬P(¬p, ¬q, . . . ).

34.6 Extension to predicate and modal logic


This duality can be generalised to quantifiers, so for example the universal quantifier ∀ and existential quantifier ∃ are
duals:

x P (x) [x P (x)]
x P (x) [x P (x)]
To relate these quantier dualities to the De Morgan laws, set up a model with some small number of elements in its
domain D, such as

D = {a, b, c}.

Then

∀x P(x) ≡ P(a) ∧ P(b) ∧ P(c)

and

∃x P(x) ≡ P(a) ∨ P(b) ∨ P(c).

But, using De Morgan's laws,

P(a) ∧ P(b) ∧ P(c) ≡ ¬(¬P(a) ∨ ¬P(b) ∨ ¬P(c))

and

P(a) ∨ P(b) ∨ P(c) ≡ ¬(¬P(a) ∧ ¬P(b) ∧ ¬P(c)),

verifying the quantifier dualities in the model.
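These finite-domain identities can also be verified exhaustively. The following Python sketch (illustrative, not part of the original article) checks both quantifier dualities over the three-element domain for each of the 2³ possible predicates P:

```python
from itertools import product

D = ["a", "b", "c"]

def dualities_hold():
    # Enumerate all 2**3 = 8 predicates on D as truth assignments.
    for values in product([False, True], repeat=len(D)):
        P = dict(zip(D, values))
        forall = all(P[x] for x in D)  # models ∀x P(x)
        exists = any(P[x] for x in D)  # models ∃x P(x)
        # ∀x P(x) ≡ ¬∃x ¬P(x)   and   ∃x P(x) ≡ ¬∀x ¬P(x)
        if forall != (not any(not P[x] for x in D)):
            return False
        if exists != (not all(not P[x] for x in D)):
            return False
    return True

print(dualities_hold())  # True
```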
Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond
("possibly") operators:

□p ≡ ¬◇¬p,
◇p ≡ ¬□¬p.

In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of
normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up
models using Kripke semantics.

34.7 See also


Isomorphism (the NOT operator as an isomorphism between positive logic and negative logic)
List of Boolean algebra topics
Positive logic

34.8 References
[1] Copi and Cohen
[2] Hurley, Patrick J. (2015), A Concise Introduction to Logic (12th ed.), Cengage Learning, ISBN 978-1-285-19654-1
[3] Moore and Parker
[4] Boolean Algebra by R. L. Goodstein. ISBN 0-486-45894-6
[5] 2000 Solved Problems in Digital Electronics by S. P. Bali
[6] "De Morgan's Theorems" at mtsu.edu
[7] Bocheński's History of Formal Logic
[8] William of Ockham, Summa Logicae, part II, sections 32 and 33.
[9] Jean Buridan, Summulae de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially
Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0
[10] "Augustus De Morgan (1806–1871)" by Robert H. Orr

34.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], "Duality principle", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Weisstein, Eric W. "de Morgan's Laws". MathWorld.

de Morgan's laws at PlanetMath.org.


Chapter 35

Derivative algebra (abstract algebra)

In abstract algebra, a derivative algebra is an algebraic structure of the signature

⟨A, ·, +, ′, 0, 1, ᴰ⟩

where

⟨A, ·, +, ′, 0, 1⟩

is a Boolean algebra and ᴰ is a unary operator, the derivative operator, satisfying the identities:

1. 0ᴰ = 0

2. xᴰᴰ ≤ x + xᴰ
3. (x + y)ᴰ = xᴰ + yᴰ.

xᴰ is called the derivative of x. Derivative algebras provide an algebraic abstraction of the derived set operator in
topology. They also play the same role for the modal logic wK4 = K + ((p ∧ □p) → □□p) that Boolean algebras play for
ordinary propositional logic.
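The topological motivation can be made concrete: in any topological space, the derived set operator (sending a set S to its set of limit points) satisfies the three identities above, with + as union and ≤ as set inclusion. The sketch below (illustrative Python; the four-open-set topology on {1, 2, 3} is an assumed example, not from the article) checks all three axioms by brute force:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
# Assumed example topology on X (a nested chain of open sets).
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]

def derived(S):
    """Derived set operator: x is a limit point of S if every open set
    containing x meets S in a point other than x."""
    return frozenset(
        x for x in X
        if all(U & (S - {x}) for U in opens if x in U)
    )

def subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(xs), r) for r in range(len(xs) + 1))]

# Axiom 1: the derivative of 0 (the empty set) is 0.
assert derived(frozenset()) == frozenset()

for S in subsets(X):
    # Axiom 2: S'' is contained in S + S'.
    assert derived(derived(S)) <= (S | derived(S))
    for T in subsets(X):
        # Axiom 3: the derivative distributes over joins (unions).
        assert derived(S | T) == derived(S) | derived(T)
```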

35.1 References
Esakia, L., Intuitionistic logic and modality via topology, Annals of Pure and Applied Logic, 127 (2004) 155-
170

McKinsey, J.C.C. and Tarski, A., The Algebra of Topology, Annals of Mathematics, 45 (1944) 141-191

140
Chapter 36

DiVincenzo's criteria

The DiVincenzo criteria are a list of conditions that are necessary for constructing a quantum computer, proposed
by the theoretical physicist David P. DiVincenzo in his 2000 paper "The Physical Implementation of Quantum
Computation".[1] Quantum computation was first proposed by Richard Feynman[2] (1982) as a means to efficiently
simulate quantum systems. There have been many proposals of how to construct a quantum computer, all of which
have varying degrees of success against the different challenges of constructing quantum devices. Some of these
proposals involve using superconducting qubits, trapped ions, liquid and solid state nuclear magnetic resonance, or
optical cluster states, all of which have remarkable prospects; however, they all have issues that prevent practical
implementation. The DiVincenzo criteria are a list of conditions that are necessary for constructing the quantum
computer as proposed by Feynman.
The DiVincenzo criteria consist of 5+2 conditions that an experimental setup must satisfy in order to successfully
implement quantum algorithms such as Grover's search algorithm or Shor factorisation. The 2 additional conditions
are necessary in order to implement quantum communication, such as that used in quantum key distribution. One
can consider DiVincenzo's criteria for a classical computer and demonstrate that these are satisfied. Comparing each
statement between the classical and quantum regimes highlights both the complications that arise in dealing with
quantum systems and the source of the quantum speed-up.

36.1 Statement of the criteria


In order to construct a quantum computer, the following conditions must be met by the experimental setup. The first
five are necessary for quantum computation, and the remaining two are necessary for quantum communication.

1. A scalable physical system with well characterised qubits.

2. The ability to initialise the state of the qubits to a simple fiducial state.

3. Long relevant decoherence times.

4. A universal set of quantum gates.

5. A qubit-specic measurement capability.

6. The ability to interconvert stationary and flying qubits.

7. The ability to faithfully transmit flying qubits between specified locations.

36.2 Why the DiVincenzo criteria?


DiVincenzo's criteria were proposed after many attempts to construct a quantum computer. Below we state why
these statements are important and present examples to highlight these facts.


36.2.1 Scalable with well characterised qubits

Most models of quantum computation require the use of qubits for computation. Some models use qudits, and in this
case the first statement is logically extended. Quantum mechanically, a qubit is defined as a 2-level system with some
energy gap. This can sometimes be difficult to implement physically, and so we can focus on a particular transition of
atomic levels, etc. Whatever the system we choose, we require that the system remain (almost) always in the subspace
of these two levels, and in doing so we can say it is a well characterised qubit. An example of a system that is not
well characterised would be 2 one-electron quantum dots (potential wells where we only allow for single electron
occupations). The electron being in one well or the other is properly characterised as a single qubit; however, if we
consider a state such as |00⟩ + |11⟩ then this would correspond to a two-qubit state. Such a state is not physically
allowed (we only permit single electron occupation), and so we cannot say that we have 2 well characterised qubits.
With today's technology it is simple to create a system that has a well characterised qubit, but it is a challenge to
create a system that has an arbitrary number of well characterised qubits. Currently one of the biggest problems being
faced is that we require exponentially larger experimental setups in order to accommodate a greater number of qubits.
The quantum computer is capable of exponential speed-ups over current classical algorithms for prime factorisation of
numbers; however, if this requires an exponentially large setup, then our advantage is lost. In the case of liquid state
nuclear magnetic resonance[3] it was found that the macroscopic size of the system caused the computational qubits
to be initialised in a highly mixed state. In spite of this, a computation model was found that could still use these
mixed states for computation, but the more mixed these states are, the weaker the induction signal corresponding
to a quantum measurement. If this signal is below the noise level, a solution is to increase the size of the sample to
boost the signal strength, and this is the source of the non-scalability of liquid state NMR as a means for quantum
computation. One could say that as the number of computational qubits increases, they become less well characterised,
until we reach a threshold beyond which they are no longer useful.

36.2.2 The ability to initialise the state of the qubits to a simple fiducial state

All models of quantum computation (and classical computation) are based on performing some operations on a
state (qubit or bit) and finally measuring/reading out a result, a procedure that is dependent on the initial state of the
system. In particular, the unitary nature of quantum mechanics makes initialisation of the qubits extremely important.
In many cases the approach to initialise a state is to let the system anneal into the ground state, after which the
computation can start. This is of particular importance when you consider quantum error correction, a procedure to perform
quantum processes that are robust to certain types of noise, which requires a large supply of fresh initialised qubits. This
places restrictions on how fast the initialisation needs to be. An example of annealing is described in Petta et al.[4]
(2005), where a Bell pair of electrons is prepared in quantum dots. This procedure relies on T1 to anneal the system,
and the paper is dedicated to measuring the T2 relaxation time of the quantum dot system. This quantum dot system
gives an idea of the timescales involved in initialising a system by annealing (~milliseconds), and this would become
a fundamental issue given that the decoherence time is shorter than the initialisation time. Alternative approaches
(usually involving optical pumping[5]) have been developed to reduce the initialisation time and improve the fidelity
of the procedure.

36.2.3 Long relevant decoherence times

The emergence of classicality in large quantum systems comes about from the increased decoherence experienced
by macroscopic systems. The timescale associated with this loss of quantum behaviour then becomes important
when constructing large quantum computation systems. The quantum resources used by quantum computing models
(superposition and/or entanglement) are quickly destroyed by decoherence, and so long decoherence times are desired,
much longer than the average gate time, so that we can combat decoherence with error correction and/or dynamical
decoupling. In solid state NMR using NV centres, the orbital electron experiences short decoherence times, making
computations problematic. The proposed solution has been to encode the qubit into the nuclear spin of the nitrogen
atom, and this increases the decoherence time. In other systems, such as the quantum dot, we have issues with strong
environmental effects limiting the T2 decoherence time. One of the problems in satisfying this criterion is that systems
that can be manipulated quickly (through strong interactions) tend to experience decoherence via these very same
strong interactions, and so there is a trade-off between the ability to implement control and increased decoherence.

36.2.4 A universal set of quantum gates

Both in classical computing and quantum computing, the algorithms that we are permitted to implement are restricted
by the gates we are capable of implementing on the state. In the case of quantum computing, we can construct a
universal quantum computer (a quantum Turing machine) with a very small set of 1- and 2-qubit gates. Any experimental
setup that manages to have well characterised qubits, quick faithful initialisation and long decoherence times must
also be capable of influencing the Hamiltonian of the system in order to implement coherent changes capable of
implementing the universal set of gates. Perfect implementation of gates is not always necessary, as gate sequences
can be created that are more robust to certain systematic and random noise models.[6] Liquid state NMR was one of
the first setups capable of implementing a universal set of gates, through the use of precise timing and magnetic field
pulses; however, as mentioned above, this system was not scalable.

36.2.5 A qubit-specic measurement capability

For any process applied to a quantum state of qubits, the final measurement is of fundamental importance when
performing computations. If our system allows for non-demolition projective measurements, then we can in principle
use this for state preparation. Measurement is at the core of all quantum algorithms, especially in concepts such as
teleportation. Note that some measurement techniques are not 100% efficient, and so these tend to be corrected by
repeating the experiment in order to increase the success rate. Examples of reliable measurement devices are found
in optical systems, where homodyne detectors have reached the point of reliably counting how many photons have
passed through the detecting cross-section. For more challenging measurement systems, we can look at quantum dots
in Petta et al.[4] (2005), where the energy gap between |01⟩ + |10⟩ and |01⟩ − |10⟩ is used to measure the relative
spins of the 2 electrons.

36.2.6 The ability to interconvert stationary and flying qubits and the ability to faithfully
transmit flying qubits between specified locations

These two conditions are only necessary when considering quantum communication protocols, such as quantum key
distribution, that involve exchange of coherent quantum states or exchange of entangled qubits (for example the BB84
protocol). When creating pairs of entangled qubits in some experimental setup, usually these qubits are 'stationary'
and cannot be moved from the laboratory. If these qubits can be teleported to flying qubits, such as ones encoded into the
polarisation of a photon, then we can consider sending entangled photons to a third party and having them extract
that information, leaving two entangled stationary qubits at two different locations. The ability to transmit the flying
qubit without decoherence is a major problem. Currently at the Institute for Quantum Computing there are efforts to
produce a pair of entangled photons and transmit one of the photons to some other part of the world by reflecting it off
of a satellite. The main issue now is the decoherence the photon experiences whilst interacting with particles in the
atmosphere. Similarly, some attempts have been made to use optical fibres, but the attenuation of the signal has
stopped this from becoming a reality.

36.3 Acknowledgements
This article is based principally on DiVincenzo's 2000 paper[1] and lectures from the Perimeter Institute for Theoretical Physics.

36.4 See also

Quantum computing

Nuclear magnetic resonance quantum computer

Trapped ion quantum computer



36.5 References
[1] DiVincenzo, David P. (2000-04-13). "The Physical Implementation of Quantum Computation". Fortschritte der Physik. 48: 771–783. arXiv:quant-ph/0002077. doi:10.1002/1521-3978(200009)48:9/11<771::AID-PROP771>3.0.CO;2-E.

[2] Feynman, R. P. (June 1982). "Simulating physics with computers". International Journal of Theoretical Physics. 21 (6): 467–488. Bibcode:1982IJTP...21..467F. doi:10.1007/BF02650179.

[3] Menicucci, N. C.; Caves, C. M. (2002). "Local realistic model for the dynamics of bulk-ensemble NMR information processing". Physical Review Letters. 88 (16): 167901. Bibcode:2002PhRvL..88p7901M. PMID 11955265. arXiv:quant-ph/0111152. doi:10.1103/PhysRevLett.88.167901.

[4] Petta, J. R.; Johnson, A. C.; Taylor, J. M.; Laird, E. A.; Yacoby, A.; Lukin, M. D.; Marcus, C. M.; Hanson, M. P.; Gossard, A. C. (September 2005). "Coherent Manipulation of Coupled Electron Spins in Semiconductor Quantum Dots". Science. 309 (5744): 2180–2184. doi:10.1126/science.1116955.

[5] Atatüre, Mete; Dreiser, Jan; Badolato, Antonio; Högele, Alexander; Karrai, Khaled; Imamoglu, Atac (April 2006). "Quantum-Dot Spin-State Preparation with Near-Unity Fidelity". Science. 312 (5773): 551–553. PMID 16601152. doi:10.1126/science.1126074.

[6] Green, Todd J.; Sastrawan, Jarrah; Uys, Hermann; Biercuk, Michael J. (September 2013). "Arbitrary quantum control of qubits in the presence of universal noise". New Journal of Physics. 15 (9): 095004. doi:10.1088/1367-2630/15/9/095004.
Chapter 37

Evasive Boolean function

In mathematics, an evasive Boolean function (of n variables) is a Boolean function for which every decision tree
algorithm has running time of exactly n. Consequently, every decision tree algorithm that represents the function has,
in the worst case, a running time of n.

37.1 Examples

37.1.1 An example of a non-evasive Boolean function

The following is a Boolean function on the three variables x, y, z:

f(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)

(where ∧ is the bitwise "and", ∨ is the bitwise "or", and ¬ is the bitwise "not").
This function is not evasive, because there is a decision tree that solves it by checking exactly two variables: the
algorithm first checks the value of x. If x is true, the algorithm checks the value of y and returns it (when x is true,
the term ¬x ∧ z is false, so f equals y).
If x is false, the algorithm checks the value of z and returns it (when x is false, the term x ∧ y is false, so f equals z).

37.1.2 A simple example of an evasive Boolean function


Consider this simple "and" function on three variables:

f(x, y, z) = x ∧ y ∧ z.

A worst-case input (for every algorithm) is 1, 1, 1: in every order we choose to check the variables, we have to check
all of them. (Note that in general there could be a different worst-case input for every decision tree algorithm.) Hence
the functions "and" and "or" (on n variables) are evasive.
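For small n, evasiveness can be checked by brute force. The sketch below (illustrative Python, not from the original article) computes the decision-tree complexity D(f) by recursing over which variable is queried first and taking the adversary's worst answer; here AND3 is the three-variable "and", and MUX denotes the function (x ∧ y) ∨ (¬x ∧ z) from the first example:

```python
def dtc(f, n):
    """Decision-tree complexity of f: {0,1}^n -> {0,1}.
    Partial inputs are tuples with None marking still-unknown bits."""
    def complete(known):
        # Yield all total inputs consistent with the known bits.
        if None not in known:
            yield tuple(known)
            return
        i = known.index(None)
        for b in (0, 1):
            yield from complete(set_bit(known, i, b))

    def set_bit(known, i, b):
        k = list(known)
        k[i] = b
        return tuple(k)

    def solve(known):
        # If f is already determined by the known bits, no queries are needed.
        if len({f(x) for x in complete(known)}) == 1:
            return 0
        # Otherwise query the variable minimising the adversary's worst answer.
        unknown = [i for i in range(n) if known[i] is None]
        return min(
            1 + max(solve(set_bit(known, i, b)) for b in (0, 1))
            for i in unknown
        )

    return solve(tuple([None] * n))

AND3 = lambda x: x[0] & x[1] & x[2]
MUX = lambda x: (x[0] & x[1]) | ((1 - x[0]) & x[2])

print(dtc(AND3, 3))  # 3: evasive
print(dtc(MUX, 3))   # 2: not evasive
```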

37.2 Binary zero-sum games


For the case of binary zero-sum games, every evaluation function is evasive.
In every zero-sum game, the value of the game is achieved by the minimax algorithm (player 1 tries to maximize the
profit, and player 2 tries to minimize the cost).
In the binary case, the max function equals the bitwise "or", and the min function equals the bitwise "and".
A decision tree for this game will be of this form:

every leaf will have a value in {0, 1};

every node is connected to one of {"and", "or"}.

145
146 CHAPTER 37. EVASIVE BOOLEAN FUNCTION

For every such tree with n leaves, the running time in the worst case is n (meaning that the algorithm must check all
the leaves):
We will exhibit an adversary that produces a worst-case input: for every leaf that the algorithm checks, the adversary
will answer 0 if the leaf's parent is an Or node, and 1 if the parent is an And node.
This input (0 for all the Or nodes' children, and 1 for all the And nodes' children) forces the algorithm to check all nodes:
As in the second example,

in order to calculate the Or result, if all children are 0, we must check them all;

in order to calculate the And result, if all children are 1, we must check them all.

37.3 See also


Aanderaa–Karp–Rosenberg conjecture, the conjecture that every nontrivial monotone graph property is evasive.
Chapter 38

Field of sets

"Set algebra" redirects here. For the basic properties and laws of sets, see Algebra of sets.

In mathematics a field of sets is a pair ⟨X, F⟩ where X is a set and F is an algebra over X, i.e., a non-empty subset
of the power set of X closed under the intersection and union of pairs of sets and under complements of individual
sets. In other words, F forms a subalgebra of the power set Boolean algebra of X. (Many authors refer to F itself
as a field of sets. The word "field" in "field of sets" is not used with the meaning of field from field theory.) Elements
of X are called points and those of F are called complexes and are said to be the admissible sets of X.
Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be
represented as a field of sets.

38.1 Fields of sets in the representation theory of Boolean algebras

38.1.1 Stone representation


Every finite Boolean algebra can be represented as a whole power set (the power set of its set of atoms); each element
of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set
representation can be constructed more generally for any complete atomic Boolean algebra.
In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation
by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean
algebra correspond to its ultrafilters, and that an atom is below an element of a finite Boolean algebra if and only if
that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a
Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean
algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the
Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation
theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters,
similar to Dedekind cuts.
Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes
by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top
element. (The approach is equivalent, as the ultrafilters of a Boolean algebra are precisely the pre-images of the top
element under these homomorphisms.) With this approach one sees that Stone representation can also be regarded
as a generalization of the representation of finite Boolean algebras by truth tables.
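For a small concrete case, the ultrafilter construction can be carried out by brute force. The sketch below (illustrative Python, not from the article) takes the eight-element Boolean algebra of all subsets of {1, 2, 3}, enumerates its ultrafilters, and confirms that each one is principal, i.e. consists of the sets containing one fixed atom, so there are exactly three:

```python
from itertools import chain, combinations

points = (1, 2, 3)

def subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

B = subsets(points)  # the finite Boolean algebra: all subsets of {1, 2, 3}
top = frozenset(points)

def is_ultrafilter(U):
    U = set(U)
    if not U or frozenset() in U:
        return False
    for a in U:
        # A filter is upward closed and closed under intersection...
        for b in B:
            if a <= b and b not in U:
                return False
        for b in U:
            if (a & b) not in U:
                return False
    # ...and is ultra when it contains exactly one of a, complement(a).
    return all((a in U) != ((top - a) in U) for a in B)

# Enumerate all ultrafilters by brute force over subsets of B.
ultrafilters = [U for U in subsets(B) if is_ultrafilter(U)]

# Each ultrafilter is principal: its members share one fixed point (atom).
for U in ultrafilters:
    assert len(frozenset.intersection(*U)) == 1

print(len(ultrafilters))  # 3: one per atom of the algebra
```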

38.1.2 Separative and compact elds of sets: towards Stone duality


A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a
complex containing one and not the other.
A field of sets is called compact if and only if for every proper filter over X the intersection of all the complexes
contained in the filter is non-empty.


These definitions arise from considering the topology generated by the complexes of a field of sets. Given a field of
sets X = ⟨X, F⟩, the complexes form a base for a topology; we denote the corresponding topological space by T(X).
Then:

T(X) is always a zero-dimensional space.

T(X) is a Hausdorff space if and only if X is separative.

T(X) is a compact space with compact open sets F if and only if X is compact.

T(X) is a Boolean space with clopen sets F if and only if X is both separative and compact (in which case it
is described as being descriptive).

The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is
known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes
of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone
representation of a Boolean algebra can be recovered purely from the corresponding Stone space, whence a duality
exists between Boolean algebras and Boolean spaces.

38.2 Fields of sets with additional structure

38.2.1 Sigma algebras and measure spaces


If an algebra over a set is closed under countable intersections and countable unions, it is called a sigma algebra
and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called
measurable sets.
A measure space is a triple ⟨X, F, μ⟩ where ⟨X, F⟩ is a measurable space and μ is a measure defined on it. If μ
is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample
space. The points of a sample space are called samples and represent potential outcomes while the measurable sets
(complexes) are called events and represent properties of outcomes for which we wish to assign probabilities. (Many
use the term "sample space" simply for the underlying set of a probability space, particularly in the case where every
subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability
theory respectively.
The Loomis–Sikorski theorem provides a Stone-type duality between abstract sigma algebras and measurable spaces.

38.2.2 Topological elds of sets


A topological field of sets is a triple ⟨X, T, F⟩ where ⟨X, T⟩ is a topological space and ⟨X, F⟩ is a field of sets
which is closed under the closure operator of T, or equivalently under the interior operator, i.e. the closure and interior
of every complex is also a complex. In other words, F forms a subalgebra of the power set interior algebra on ⟨X, T⟩.
Every interior algebra can be represented as a topological field of sets with its interior and closure operators corresponding
to those of the topological space.
Given a topological space, the clopen sets trivially form a topological field of sets, as each clopen set is its own interior
and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets.

Algebraic fields of sets and Stone fields

A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes.
If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are
precisely the open complexes. Moreover, the open complexes form a base for the topology.
Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization
of the Stone representation of Boolean algebras. Given an interior algebra we can form the Stone representation of
its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by
the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These
complexes are then precisely the open complexes, and the construction produces a Stone field representing the interior
algebra: the Stone representation.

38.2.3 Preorder elds


A preorder field is a triple ⟨X, ≤, F⟩ where ⟨X, ≤⟩ is a preordered set and ⟨X, F⟩ is a field of sets.
Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras.
Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding
to those of the Alexandrov topology induced by the preorder. In other words,

Int(S) = {x ∈ X : for all y ∈ X with x ≤ y, y ∈ S} and

Cl(S) = {x ∈ X : there exists a y ∈ S with x ≤ y} for all S ∈ F.

Preorder fields arise naturally in modal logic where the points represent the possible worlds in the Kripke semantics
of a theory in the modal logic S4 (a formal mathematical abstraction of epistemic logic), the preorder represents the
accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds
in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the
theory.
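The Alexandrov operators can be computed directly from a finite preorder. The sketch below (illustrative Python; the three-point preorder is an assumed example, not from the article) implements Int and Cl as above and checks the basic interior-operator laws on every subset:

```python
from itertools import chain, combinations

X = (0, 1, 2)
# Assumed example preorder: reflexive (and trivially transitive), with 0 <= 1 and 0 <= 2.
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}

def interior(S):
    # Int(S): points all of whose successors lie in S.
    return frozenset(x for x in X
                     if all(y in S for y in X if (x, y) in leq))

def closure(S):
    # Cl(S): points with at least one successor in S.
    return frozenset(x for x in X
                     if any((x, y) in leq for y in S))

subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

full = frozenset(X)
for S in subsets:
    assert interior(S) <= S                          # Int(S) lies inside S
    assert interior(interior(S)) == interior(S)      # Int is idempotent
    assert closure(S) == full - interior(full - S)   # duality: Cl = complement-Int-complement
```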

Algebraic and canonical preorder elds

A preorder field is called algebraic if and only if it has a set of complexes A which determines the preorder in the
following manner: x ≤ y if and only if for every complex S ∈ A, x ∈ S implies y ∈ S. The preorder fields
obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible
worlds in which the sentences of the theory closed under necessity hold.
A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the
topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a
representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding
Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The
topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone
representation.)

38.2.4 Complex algebras and elds of sets on relational structures


The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary
(normal) Boolean algebras with operators. For this we consider structures ⟨X, (Rᵢ)ᵢ∈I, F⟩ where ⟨X, (Rᵢ)ᵢ∈I⟩ is a
relational structure, i.e. a set with an indexed family of relations defined on it, and ⟨X, F⟩ is a field of sets. The complex
algebra (or algebra of complexes) determined by a field of sets X = ⟨X, (Rᵢ)ᵢ∈I, F⟩ on a relational structure,
is the Boolean algebra with operators

C(X) = ⟨F, ∩, ∪, ′, ∅, X, (fᵢ)ᵢ∈I⟩

where for all i ∈ I, if Rᵢ is a relation of arity n + 1, then fᵢ is an operator of arity n and for all S₁, ..., Sₙ ∈ F

fᵢ(S₁, ..., Sₙ) = {x ∈ X : there exist x₁ ∈ S₁, ..., xₙ ∈ Sₙ such that Rᵢ(x₁, ..., xₙ, x)}.

This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and
relations, as operators can be viewed as a special case of relations. If F is the whole power set of X then C(X) is
called a full complex algebra or power algebra.
Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the
sense that it is isomorphic to the complex algebra corresponding to the field.
(Historically the term complex was first used in the case where the algebraic structure was a group and has its origins
in 19th century group theory where a subset of a group was called a complex.)
150 CHAPTER 38. FIELD OF SETS

38.3 See also


List of Boolean algebra topics

Algebra of sets
Sigma algebra

Measure theory

Probability theory
Interior algebra

Alexandrov topology
Stone's representation theorem for Boolean algebras

Stone duality
Boolean ring

Preordered field

38.4 References
Goldblatt, R., Algebraic Polymodal Logic: A Survey, Logic Journal of the IGPL, Volume 8, Issue 4, p. 393-450,
July 2000
Goldblatt, R., Varieties of complex algebras, Annals of Pure and Applied Logic, 44, p. 173-242, 1989

Johnstone, Peter T. (1982). Stone spaces (3rd ed.). Cambridge: Cambridge University Press. ISBN 0-521-
33779-8.

Naturman, C.A., Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics, 1991
Patrick Blackburn, Johan F.A.K. van Benthem, Frank Wolter ed., Handbook of Modal Logic, Volume 3 of
Studies in Logic and Practical Reasoning, Elsevier, 2006

38.5 External links


Hazewinkel, Michiel, ed. (2001) [1994], "Algebra of sets", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 39

Formula game

A formula game is an artificial game represented by a fully quantified Boolean formula. Players' turns alternate and
the space of possible moves is denoted by bound variables. If a variable is universally quantified, the formula following
it has the same truth value as the formula beginning with the universal quantifier regardless of the move taken. If a
variable is existentially quantified, the formula following it has the same truth value as the formula beginning with the
existential quantifier for at least one move available at the turn. Turns alternate, and a player loses if he cannot move
at his turn. In computational complexity theory, the language FORMULA-GAME is defined as all formulas φ such
that Player 1 has a winning strategy in the game represented by φ. FORMULA-GAME is PSPACE-complete.
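Deciding who wins a formula game amounts to evaluating the fully quantified formula. A recursive evaluator for prefix-form quantified Boolean formulas (an illustrative Python sketch, not taken from Sipser's text) makes the alternation of moves explicit:

```python
def wins(prefix, matrix, assignment=None):
    """Evaluate a fully quantified Boolean formula.
    prefix: list of ('E', var) / ('A', var) pairs, outermost first;
    matrix: function from a variable assignment (dict) to bool.
    Returns True iff the existential player (Player 1) has a winning strategy."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (wins(prefix[1:], matrix, {**assignment, var: b})
                for b in (False, True))
    # An existential variable is Player 1's move (some choice works);
    # a universal variable is the opponent's move (every choice must work).
    return any(branches) if quant == "E" else all(branches)

# phi encodes x XOR y as (x OR y) AND (NOT x OR NOT y).
phi = lambda a: (a["x"] or a["y"]) and (not a["x"] or not a["y"])

print(wins([("E", "x"), ("A", "y")], phi))  # False: no fixed x beats every y
print(wins([("A", "y"), ("E", "x")], phi))  # True: moving second, x can always mismatch y
```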

39.1 References
Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology.

151
Chapter 40

Free Boolean algebra

In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators,
such that:

1. Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean
operations, and
2. The generators are as independent as possible, in the sense that there are no relationships among them (again in
terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter
which elements are chosen.

40.1 A simple example


The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propo-
sitions John is tall and Mary is rich. These generate a Boolean algebra with four atoms, namely:

John is tall, and Mary is rich;


John is tall, and Mary is not rich;
John is not tall, and Mary is rich;
John is not tall, and Mary is not rich.

Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is
not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought
of as the empty disjunction; that is, the disjunction of no atoms.
This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n
generators has 2^n atoms, and therefore 2^(2^n) elements.
If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of
the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed
identical if they are logically equivalent.
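The atom and element counts can be verified mechanically: an element of the free Boolean algebra on n generators is determined by which atoms (truth assignments to the generators) it contains. A small sketch with illustrative names:

```python
from itertools import product

def free_boolean_algebra(n):
    """Return (atoms, elements) of the free Boolean algebra on n
    generators.  An atom is a truth assignment to the generators; an
    element is identified with the set of atoms below it, encoded as a
    0/1 table indexed by the 2**n atoms."""
    atoms = list(product((0, 1), repeat=n))               # 2**n atoms
    elements = list(product((0, 1), repeat=len(atoms)))   # 2**(2**n) elements
    return atoms, elements

atoms, elements = free_boolean_algebra(2)
print(len(atoms), len(elements))  # 4 16
```

For the two-generator example of the text this reproduces the four atoms and sixteen elements; the all-zero table is FALSE, the empty disjunction.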

40.2 Category-theoretic denition


In the language of category theory, free Boolean algebras can be defined simply in terms of an adjunction between
the category of sets and functions, Set, and the category of Boolean algebras and Boolean algebra homomorphisms,
BA. In fact, this approach generalizes to any algebraic structure definable in the framework of universal algebra.
Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way;
alternatively, one might start with a set and ask which algebra it generates. Every set X generates a free Boolean


The Hasse diagram of the free Boolean algebra on two generators, p and q. Take p (left circle) to be "John is tall" and q (right
circle) to be "Mary is rich". The atoms are the four elements in the row just above FALSE.

algebra FX defined as the algebra such that for every algebra B and function f : X → B, there is a unique Boolean
algebra homomorphism f̂ : FX → B that extends f. Diagrammatically,
where iX is the inclusion, and the dashed arrow denotes uniqueness. The idea is that once one chooses where to send
the elements of X, the laws for Boolean algebra homomorphisms determine where to send everything else in the free
algebra FX. If FX contained elements inexpressible as combinations of elements of X, then f̂ wouldn't be unique,
and if the elements of X weren't sufficiently independent, then f̂ wouldn't be well defined! It is easily shown that FX
is unique (up to isomorphism), so this definition makes sense. It is also easily shown that a free Boolean algebra with
generating set X, as defined originally, is isomorphic to FX, so the two definitions agree.
One shortcoming of the above definition is that the diagram doesn't capture that f̂ is a homomorphism; since it is a
diagram in Set each arrow denotes a mere function. We can fix this by separating it into two diagrams, one in BA and
one in Set. To relate the two, we introduce a functor U : BA → Set that "forgets" the algebraic structure, mapping
algebras and homomorphisms to their underlying sets and functions.
If we interpret the top arrow as a diagram in BA and the bottom triangle as a diagram in Set, then this diagram
properly expresses that every function f : X → B extends to a unique Boolean algebra homomorphism f̂ : FX → B.
The functor U can be thought of as a device to pull the homomorphism f̂ back into Set so it can be related to f.
The remarkable aspect of this is that the latter diagram is one of the various (equivalent) definitions of when two
functors are adjoint. Our F easily extends to a functor Set → BA, and our definition of X generating a free Boolean
algebra FX is precisely that U has a left adjoint F.
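The universal property can be made concrete in a small sketch: take the target algebra B to be the powerset of a three-element set, represent an element of FX as a Boolean function over the generators, and extend f by sending each atom to the corresponding intersection of images and complements. All names here are illustrative:

```python
from itertools import product

# The target algebra B is taken to be the powerset of {0, 1, 2};
# sets are encoded as frozensets.
U = frozenset({0, 1, 2})

def extend(f, element):
    """The unique homomorphism extending f : X -> B to the free algebra
    FX.  An element of FX over generators X is represented as a Boolean
    function phi(assignment); it is sent to the union, over the atoms it
    contains, of the corresponding intersections of images/complements."""
    X = sorted(f)                                   # the generators
    result = frozenset()
    for bits in product((0, 1), repeat=len(X)):
        if element(dict(zip(X, bits))):             # atom lies below element
            atom = U
            for x, b in zip(X, bits):
                atom &= f[x] if b else U - f[x]
            result |= atom
    return result

f = {'p': frozenset({0, 1}), 'q': frozenset({1, 2})}
print(extend(f, lambda e: e['p'] and e['q']))   # frozenset({1})
print(extend(f, lambda e: e['p'] or e['q']))    # frozenset({0, 1, 2})
```

The point of the sketch is that once the images of the generators p and q are fixed, every other value of the homomorphism is forced, which is exactly the uniqueness half of the universal property.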

40.3 Topological realization

The free Boolean algebra with κ generators, where κ is a finite or infinite cardinal number, may be realized as the
collection of all clopen subsets of {0,1}^κ, given the product topology assuming that {0,1} has the discrete topology.
For each α < κ, the αth generator is the set of all elements of {0,1}^κ whose αth coordinate is 1. In particular, the free
Boolean algebra with ℵ₀ generators is the collection of all clopen subsets of a Cantor space, sometimes called the
Cantor algebra. Surprisingly, this collection is countable. In fact, while the free Boolean algebra with n generators,
n finite, has cardinality 2^(2^n), the free Boolean algebra with ℵ₀ generators, as for any free algebra with ℵ₀ generators
and countably many finitary operations, has cardinality ℵ₀.
For more on this topological approach to free Boolean algebra, see Stone's representation theorem for Boolean algebras.
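The countability claim can be illustrated by enumeration: every clopen subset of the Cantor space is determined by finitely many coordinates, so it can be coded by a finite truth table, and the codes can be listed level by level. A sketch with illustrative names:

```python
from itertools import count, islice, product

def cantor_algebra_codes():
    """Enumerate codes (k, table) for clopen subsets of {0,1}^omega:
    level k lists the sets determined by the first k coordinates, as 0/1
    tables over {0,1}^k.  Each clopen set appears at some finite level,
    so the whole algebra is a countable union of finite sets, hence
    countable (sets recur at later levels, which does not affect the
    countability argument)."""
    for k in count():
        for table in product((0, 1), repeat=2 ** k):
            yield (k, table)

first = list(islice(cantor_algebra_codes(), 6))
print(first)
```

Level k contributes 2^(2^k) codes, matching the cardinality of the free Boolean algebra on k generators mentioned above.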

40.4 See also

Boolean algebra (structure)

Generating set

40.5 References
Steve Awodey (2006) Category Theory (Oxford Logic Guides 49). Oxford University Press.

Paul Halmos and Steven Givant (1998) Logic as Algebra. Mathematical Association of America.

Saunders Mac Lane (1998) Categories for the Working Mathematician. 2nd ed. (Graduate Texts in Mathematics
5). Springer-Verlag.

Saunders Mac Lane (1999) Algebra, 3rd ed. American Mathematical Society. ISBN 0-8218-1646-2.

Robert R. Stoll, 1963. Set Theory and Logic, chpt. 6.7. Dover reprint 1979.
Chapter 41

Functional completeness

In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express
all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set
of connectives is { AND, NOT }, consisting of binary conjunction and negation. Each of the singleton sets { NAND
} and { NOR } is functionally complete.
In a context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be
realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from
either only binary NAND gates, or only binary NOR gates.

41.1 Introduction
Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧); disjunction (∨);
negation (¬); material conditional (→); and possibly the biconditional (↔). Further connectives can be defined,
if so desired, by defining them in terms of these primitives. For example, NOR (sometimes denoted ↓, the negation
of the disjunction) can be expressed as the conjunction of two negations:

A ↓ B := ¬A ∧ ¬B

Similarly, the negation of the conjunction, NAND (sometimes denoted as ↑), can be defined in terms of disjunction
and negation. It turns out that every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, so this set is
functionally complete.
However, it still contains some redundancy: this set is not a minimal functionally complete set, because the conditional
and biconditional can be defined in terms of the other connectives as

A → B := ¬A ∨ B
A ↔ B := (A → B) ∧ (B → A).

It follows that the smaller set {¬, ∧, ∨} is also functionally complete. But this is still not minimal, as ∨ can be defined
as

A ∨ B := ¬(¬A ∧ ¬B).

Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or → may be defined in terms of ∨:

A → B := ¬A ∨ B.


No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of
{∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}.

41.2 Formal denition


Given the Boolean domain B = {0,1}, a set F of Boolean functions fᵢ : B^nᵢ → B is functionally complete if the clone
on B generated by the basic functions fᵢ contains all functions f : B^n → B, for all strictly positive integers n ≥ 1. In
other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed
in terms of the functions fᵢ. Since every Boolean function of at least one variable can be expressed in terms of binary
Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms
of the functions in F.
A more natural condition would be that the clone generated by F consist of all functions f : B^n → B, for all integers n
≥ 0. However, the examples given above are not functionally complete in this stronger sense because it is not possible
to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary
function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions
be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The
example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition
is strictly weaker than functional completeness.[4][5][6]
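The first (weak) definition can be tested by brute force at arity 2, as the paragraph above licenses: close the two projections x and y under composition with the candidate operations and check whether all 16 binary truth tables are reached. A sketch with illustrative operator names:

```python
from itertools import product

def complete(ops):
    """Brute-force completeness check at arity 2: close the projection
    tables for x and y under composition with the given operations and
    test whether all 16 binary truth tables become derivable."""
    pts = list(product((0, 1), repeat=2))                 # inputs (x, y)
    derivable = {tuple(p[0] for p in pts), tuple(p[1] for p in pts)}
    changed = True
    while changed:
        changed = False
        for g in ops:
            arity = g.__code__.co_argcount
            for args in product(list(derivable), repeat=arity):
                # compose g with already-derivable functions, pointwise
                table = tuple(g(*(a[i] for a in args)) for i in range(4))
                if table not in derivable:
                    derivable.add(table)
                    changed = True
    return len(derivable) == 16

AND = lambda a, b: a & b
OR = lambda a, b: a | b
NOT = lambda a: 1 - a
NAND = lambda a, b: 1 - (a & b)
print(complete([AND, NOT]), complete([NAND]), complete([AND, OR]))  # True True False
```

{∧, ¬} and {↑} pass, while {∧, ∨} fails because it generates only monotone functions, anticipating Post's characterization below.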

41.3 Characterization of functional completeness


Further information: Post's lattice

Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of
the following sets of connectives:

The monotonic connectives; changing the truth value of any connected variables from F to T without changing
any from T to F never makes these connectives change their return value from T to F, e.g. ∨, ∧, ⊤, ⊥.
The affine connectives, such that each connected variable either always or never affects the truth value these
connectives return, e.g. ¬, ⊤, ⊥, ↔, ↮.
The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are
reversed, so is the truth value these connectives return, e.g. ¬, MAJ(p,q,r).
The truth-preserving connectives; they return the truth value T under any interpretation which assigns T to
all variables, e.g. ∨, ∧, ⊤, →, ↔.
The falsity-preserving connectives; they return the truth value F under any interpretation which assigns F to
all variables, e.g. ∨, ∧, ⊥, ↮, ↛.

In fact, Post gave a complete description of the lattice of all clones (sets of operations closed under composition and
containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above
result as a simple corollary: the five mentioned sets of connectives are exactly the maximal clones.
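Post's criterion can be implemented directly: test a connective for membership in each of the five maximal clones. The sketch below (names are illustrative) confirms that NAND lies outside all five, which is why it is complete on its own:

```python
from itertools import product

def post_classes(g, arity):
    """Test membership of a Boolean connective in Post's five maximal
    clones; a set of connectives is functionally complete iff, for each
    of the five classes, some member of the set lies outside it."""
    pts = list(product((0, 1), repeat=arity))
    f = {p: g(*p) for p in pts}
    below = lambda p, q: all(a <= b for a, b in zip(p, q))
    flip = lambda p: tuple(1 - a for a in p)
    return {
        'monotone': all(f[p] <= f[q] for p in pts for q in pts if below(p, q)),
        # affine: flipping one input changes the output by an amount
        # independent of the remaining inputs
        'affine': all(f[p] ^ f[p[:i] + (1,) + p[i+1:]] ==
                      f[q] ^ f[q[:i] + (1,) + q[i+1:]]
                      for i in range(arity)
                      for p in pts if p[i] == 0
                      for q in pts if q[i] == 0),
        'self_dual': all(f[p] == 1 - f[flip(p)] for p in pts),
        'truth_preserving': f[(1,) * arity] == 1,
        'falsity_preserving': f[(0,) * arity] == 0,
    }

NAND = lambda a, b: 1 - (a & b)
print(post_classes(NAND, 2))  # NAND lies in none of the five classes
```

By contrast, ∧ is monotone, truth-preserving and falsity-preserving, so any complete set containing it needs further connectives to escape those three clones.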

41.4 Minimal functionally complete operator sets


When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer
function[7] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR,
which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by
Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[8]
In digital electronics terminology, the binary NAND gate and the binary NOR gate are the only binary universal logic
gates.

The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[9]

One element: {↑}, {↓}.

Two elements: {∨, ¬}, {∧, ¬}, {→, ¬}, {←, ¬}, {→, ⊥}, {←, ⊥}, {→, ↮}, {←, ↮}, {→, ↚}, {→, ↛},
{←, ↚}, {←, ↛}, {↛, ¬}, {↚, ¬}, {↛, ⊤}, {↚, ⊤}, {↛, ↔}, {↚, ↔}.
Three elements: {∨, ↔, ⊥}, {∨, ↔, ↮}, {∨, ↮, ⊤}, {∧, ↔, ⊥}, {∧, ↔, ↮}, {∧, ↮, ⊤}.

There are no minimal functionally complete sets of more than three at most binary logical connectives.[9] In order to
keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator
that ignores the first input and outputs the negation of the second could be substituted for a unary negation.

41.5 Examples
Examples of using the NAND (↑) completeness. As illustrated by,[10]

¬A = A ↑ A
A ∧ B = ¬(A ↑ B) = (A ↑ B) ↑ (A ↑ B)
A ∨ B = (A ↑ A) ↑ (B ↑ B)

Examples of using the NOR (↓) completeness. As illustrated by,[11]

¬A = A ↓ A
A ∧ B = (A ↓ A) ↓ (B ↓ B)
A ∨ B = ¬(A ↓ B) = (A ↓ B) ↓ (A ↓ B)

Note that an electronic circuit or a software function can be optimized by reusing intermediate results, which reduces
the number of gates. For instance, the A ∧ B operation, when expressed with ↑ gates, is implemented by reusing the
result of A ↑ B:

X = A ↑ B; A ∧ B = X ↑ X
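The NAND and NOR identities above can be verified exhaustively over the four input combinations:

```python
from itertools import product

# Truth-table check of the NAND and NOR derivations of NOT, AND, OR.
nand = lambda a, b: not (a and b)
nor = lambda a, b: not (a or b)

for A, B in product((False, True), repeat=2):
    assert (not A) == nand(A, A) == nor(A, A)
    assert (A and B) == nand(nand(A, B), nand(A, B))
    assert (A or B) == nand(nand(A, A), nand(B, B))
    assert (A and B) == nor(nor(A, A), nor(B, B))
    assert (A or B) == nor(nor(A, B), nor(A, B))
print("all identities hold")
```

The A ∧ B derivation uses the result of A ↑ B twice, which is exactly the gate-reuse opportunity mentioned above: a circuit computes X once and feeds it to both inputs of the final gate.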

41.6 In other domains


Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains.
For example, a set of reversible gates is called functionally complete, if it can express every reversible operator.
The 3-input Fredkin gate is a functionally complete reversible gate by itself, a sole sufficient operator. There are many
other three-input universal logic gates, such as the Toffoli gate.

41.7 Set theory


There is an isomorphism between the algebra of sets and the Boolean algebra, that is, they have the same structure.
Then, if we map Boolean operators into set operators, the "translated" text above is also valid for sets: there are
many minimal complete sets of set-theory operators that can generate any other set relations. The more popular
minimal complete operator sets are {∁, ∩} and {∁, ∪} (complement with intersection, and complement with union).
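A small sketch of the set-theoretic analogue, deriving union and difference from complement and intersection over an illustrative finite universe (De Morgan's law does the work, just as in the logical case):

```python
U = frozenset(range(6))                    # illustrative universe
comp = lambda a: U - a                     # complement of A in U
inter = lambda a, b: a & b                 # A intersect B

union = lambda a, b: comp(inter(comp(a), comp(b)))   # A u B = comp(comp(A) n comp(B))
diff = lambda a, b: inter(a, comp(b))                # A \ B = A n comp(B)

A, B = frozenset({0, 1, 2}), frozenset({2, 3})
print(sorted(union(A, B)), sorted(diff(A, B)))  # [0, 1, 2, 3] [0, 1]
```

Swapping the roles of intersection and union gives the {∁, ∪} basis in the same way.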

41.8 See also


Completeness (logic)
Algebra of sets
Boolean algebra

41.9 References
[1] Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-
238452-3. ("Complete set of logical connectives".)

[2] Nolt, John; Rohatyn, Dennis; Varzi, Achille (1998), Schaum's outline of theory and problems of logic (2nd ed.), New York:
McGraw-Hill, ISBN 978-0-07-046649-4. ("[F]unctional completeness of [a] set of logical operators".)

[3] Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Defines
"expressively adequate", shortened to "adequate set of connectives" in a section heading.)

[4] Wesselkamper, T.C. (1975), "A sole sufficient operator", Notre Dame Journal of Formal Logic, 16: 86–88, doi:10.1305/ndjfl/1093891614

[5] Massey, G.J. (1975), "Concerning an alleged Sheffer function", Notre Dame Journal of Formal Logic, 16 (4): 549–550,
doi:10.1305/ndjfl/1093891898

[6] Wesselkamper, T.C. (1975), "A Correction To My Paper 'A Sole Sufficient Operator'", Notre Dame Journal of Formal
Logic, 16 (4): 551, doi:10.1305/ndjfl/1093891899

[7] The term was originally restricted to binary operations, but since the end of the 20th century it is used more generally.
Martin, N.M. (1989), Systems of logic, Cambridge University Press, p. 54, ISBN 978-0-521-36770-7.

[8] Scharle, T.W. (1965), "Axiomatization of propositional calculus with Sheffer functors", Notre Dame J. Formal Logic, 6
(3): 209–217, doi:10.1305/ndjfl/1093958259.

[9] Wernick, William (1942) "Complete Sets of Logical Functions", Transactions of the American Mathematical Society 51:
117–132. In his list on the last page of the article, Wernick does not distinguish between ← and →, or between ↚ and ↛.

[10] NAND Gate Operations at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html

[11] NOR Gate Operations at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nor.html


Chapter 42

George Boole

Boole redirects here. For other uses, see Boole (disambiguation).


George Boole (/buːl/; 2 November 1815 – 8 December 1864) was an English mathematician, educator, philosopher
and logician. He worked in the fields of differential equations and algebraic logic, and is best known as the author of
The Laws of Thought (1854) which contains Boolean algebra. Boolean logic is credited with laying the foundations
for the information age.[3] Boole maintained that:

No general method for the solution of questions in the theory of probabilities can be established
which does not explicitly recognise, not only the special numerical bases of the science, but also those
universal laws of thought which are the basis of all reasoning, and which, whatever they may be as to
their essence, are at least mathematical as to their form.[4]

42.1 Early life


Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr (1779–1848), a shoemaker,[5] and Mary
Ann Joyce.[6] He had a primary school education, and received lessons from his father, but had little further formal
and academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, which he may
also have learned at the school of Thomas Bainbridge. He was self-taught in modern languages.[2] At age 16, Boole
became the breadwinner for his parents and three younger siblings, taking up a junior teaching position in Doncaster
at Heigham's School.[7] He taught briefly in Liverpool.[1]
Boole participated in the Mechanics' Institute, in the Greyfriars, Lincoln, which was founded in 1833.[2][8] Edward
Bromhead, who knew John Boole through the institution, helped George Boole with mathematics books[9] and he was
given the calculus text of Sylvestre François Lacroix by the Rev. George Stevens Dickson of St Swithin's, Lincoln.[10]
Without a teacher, it took him many years to master calculus.[1]
At age 19, Boole successfully established his own school in Lincoln. Four years later he took over Hall's Academy in
Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a
boarding school.[1] Boole immediately became involved in the Lincoln Topographical Society, serving as a member of
the committee, and presenting a paper entitled "On the origin, progress, and tendencies of polytheism, especially amongst
the ancient Egyptians and Persians, and in modern India" on 30 November 1841.[11]
Boole became a prominent local figure, an admirer of John Kaye, the bishop.[12] He took part in the local campaign
for early closing.[2] With E. R. Larken and others he set up a building society in 1847.[13] He associated also with the
Chartist Thomas Cooper, whose wife was a relation.[14]
From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more
widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time, and began
to publish research papers.[1]


Boole's House and School at 3 Pottergate in Lincoln

Greyfriars, Lincoln, which housed the Mechanics' Institute



Plaque from the house in Lincoln

42.2 Professor at Cork

Boole's status as mathematician was recognised by his appointment in 1849 as the first professor of mathematics
at Queen's College, Cork (now University College Cork (UCC)) in Ireland. He met his future wife, Mary Everest,
there in 1850 while she was visiting her uncle John Ryall who was Professor of Greek. They married some years
later in 1855.[15] He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce
prostitution.[16]

42.3 Honours and awards

Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855[17] and was elected a Fellow of
the Royal Society (FRS) in 1857.[10] He received honorary degrees of LL.D. from the University of Dublin and the
University of Oxford.[18]

42.4 Death

In late November 1864, Boole walked, in heavy rain, from his home at Lichfield Cottage in Ballintemple[19] to the
university, a distance of three miles, and lectured wearing his wet clothes.[20] He soon became ill, developing a severe
cold and high fever, or possibly pneumonia.[21] Boole's condition worsened and on 8 December 1864, he died of
fever-induced pleural effusion.
He was buried in the Church of Ireland cemetery of St Michael's, Church Road, Blackrock (a suburb of Cork). There
is a commemorative plaque inside the adjoining church.[22]

The house at 5 Grenville Place in Cork, in which Boole lived between 1849 and 1855, and where he wrote The Laws of Thought

42.5 Works

Boole's first published paper was "Researches in the theory of analytical transformations, with a special application to
the reduction of the general equation of the second order", printed in the Cambridge Mathematical Journal in February
1840 (Volume 2, no. 8, pp. 64–73), and it led to a friendship between Boole and Duncan Farquharson Gregory, the
editor of the journal. His works are in about 50 articles and a few separate publications.[23]
In 1841 Boole published an influential paper in early invariant theory.[10] He received a medal from the Royal Society

Boole's gravestone in Blackrock, Cork, Ireland

for his memoir of 1844, "On A General Method of Analysis". It was a contribution to the theory of linear differential
equations, moving from the case of constant coefficients on which he had already published, to variable coefficients.[24]
The innovation in operational methods is to admit that operations may not commute.[25] In 1847 Boole published The
Mathematical Analysis of Logic, the first of his works on symbolic logic.[26]

42.5.1 Differential equations

Boole completed two systematic treatises on mathematical subjects during his lifetime. The Treatise on Differential
Equations[27] appeared in 1859, and was followed, the next year, by a Treatise on the Calculus of Finite Differences, a
sequel to the former work.

42.5.2 Analysis

In 1857, Boole published the treatise On the Comparison of Transcendents, with Certain Applications to the Theory of
Definite Integrals,[28] in which he studied the sum of residues of a rational function. Among other results, he proved
what is now called Boole's identity:

mes{ x ∈ ℝ | Σₖ aₖ/(x − bₖ) ≥ t } = (1/t) Σₖ aₖ

for any real numbers aₖ > 0, bₖ, and t > 0.[29] Generalisations of this identity play an important role in the theory of
the Hilbert transform.[29]
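Boole's identity can be sanity-checked numerically. The sketch below is a rough grid estimate over a bounded window (not a proof; the function name and parameters are illustrative): the set where the rational sum exceeds t is a finite union of intervals whose total length the identity predicts exactly.

```python
def measure_estimate(a, b, t, steps=200_000):
    """Grid estimate of mes{x : sum(a_k/(x - b_k)) >= t}.  The set lies
    within [min(b) - sum(a)/t, max(b) + sum(a)/t], since far from the
    poles the sum is below t, so a bounded window suffices."""
    lo = min(b) - sum(a) / t - 1.0
    hi = max(b) + sum(a) / t + 1.0
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        if any(x == bk for bk in b):
            total += dx                    # at a pole the sum is +infinity
            continue
        if sum(ak / (x - bk) for ak, bk in zip(a, b)) >= t:
            total += dx
    return total

a, b, t = [1.0, 2.0], [0.0, 3.0], 0.5
print(measure_estimate(a, b, t), sum(a) / t)  # both close to 6.0
```

Here the identity predicts a measure of (1 + 2)/0.5 = 6, independent of where the poles b_k sit, which the grid estimate reproduces to within discretization error.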

42.5.3 Symbolic logic

Main article: Boolean algebra



Detail of stained glass window in Lincoln Cathedral dedicated to Boole

In 1847 Boole published the pamphlet Mathematical Analysis of Logic. He later regarded it as a flawed exposition
of his logical system, and wanted An Investigation of the Laws of Thought on Which are Founded the Mathematical
Theories of Logic and Probabilities to be seen as the mature statement of his views. Contrary to widespread belief,
Boole never intended to criticise or disagree with the main principles of Aristotle's logic. Rather he intended to
systematise it, to provide it with a foundation, and to extend its range of applicability.[30] Boole's initial involvement
in logic was prompted by a current debate on quantification, between Sir William Hamilton, who supported the theory
of "quantification of the predicate", and Boole's supporter Augustus De Morgan, who advanced a version of De
Morgan duality, as it is now called. Boole's approach was ultimately much further reaching than either side's in the
controversy.[31] It founded what was first known as the "algebra of logic" tradition.[32]

Plaque beneath Boole's window in Lincoln Cathedral

Among his many innovations is his principle of wholistic reference, which was later, and probably independently,
adopted by Gottlob Frege and by logicians who subscribe to standard first-order logic. A 2003 article[33] provides a
systematic comparison and critical evaluation of Aristotelian logic and Boolean logic; it also reveals the centrality of
wholistic reference in Boole's philosophy of logic.

1854 definition of universe of discourse

In every discourse, whether of the mind conversing with its own thoughts, or of the individual in his
intercourse with others, there is an assumed or expressed limit within which the subjects of its operation
are confined. The most unfettered discourse is that in which the words we use are understood in
the widest possible application, and for them the limits of discourse are co-extensive with those of the
universe itself. But more usually we confine ourselves to a less spacious field. Sometimes, in discoursing
of men we imply (without expressing the limitation) that it is of men only under certain circumstances
and conditions that we speak, as of civilised men, or of men in the vigour of life, or of men under some
other condition or relation. Now, whatever may be the extent of the field within which all the objects of
our discourse are found, that field may properly be termed the universe of discourse. Furthermore, this
universe of discourse is in the strictest sense the ultimate subject of the discourse.[34]

Treatment of addition in logic

Boole conceived of "elective symbols" of his kind as an algebraic structure. But this general concept was not available
to him: he did not have the segregation standard in abstract algebra of postulated (axiomatic) properties of operations,
and deduced properties.[35] His work was a beginning to the algebra of sets, again not a concept available to Boole as a
familiar model. His pioneering efforts encountered specific difficulties, and the treatment of addition was an obvious
difficulty in the early days.
Boole replaced the operation of multiplication by the word 'and' and addition by the word 'or'. But in Boole's original
system, + was a partial operation: in the language of set theory it would correspond only to disjoint union of subsets.
Later authors changed the interpretation, commonly reading it as exclusive or, or in set theory terms symmetric
difference; this step means that addition is always defined.[32][36]
In fact there is the other possibility, that + should be read as disjunction.[35] This other possibility extends from the
disjoint union case, where exclusive or and non-exclusive or both give the same answer. Handling this ambiguity was
an early problem of the theory, reflecting the modern use of both Boolean rings and Boolean algebras (which are
simply different aspects of one type of structure). Boole and Jevons struggled over just this issue in 1863, in the form
of the correct evaluation of x + x. Jevons argued for the result x, which is correct for + as disjunction. Boole kept the
result as something undefined. He argued against the result 0, which is correct for exclusive or, because he saw the
equation x + x = 0 as implying x = 0, a false analogy with ordinary algebra.[10]
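In modern terms the dispute is between the Boolean-ring reading of + (symmetric difference, exclusive or) and the disjunction reading; the two evaluations of x + x can be seen directly:

```python
# x + x under the two modern readings of Boole's addition.
x = True
print(x ^ x)   # Boolean ring reading (exclusive or): x + x = 0
print(x or x)  # disjunction reading (Jevons): x + x = x
```

Both readings agree on disjoint arguments, which is why Boole's partial operation admitted either completion.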

42.5.4 Probability theory


The second part of the Laws of Thought contained a corresponding attempt to discover a general method in probabilities.
Here the goal was algorithmic: from the given probabilities of any system of events, to determine the consequent
probability of any other event logically connected with those events.[37]

42.6 Legacy
Boolean algebra is named after him, as is the crater Boole on the Moon. The keyword Bool represents a Boolean
datatype in many programming languages, though Pascal and Java, among others, both use the full name Boolean.[38]
The library, underground lecture theatre complex and the Boole Centre for Research in Informatics[39] at University
College Cork are named in his honour. A road called Boole Heights in Bracknell, Berkshire is named after him.

42.6.1 19th-century development


Boole's work was extended and refined by a number of writers, beginning with William Stanley Jevons. Augustus De
Morgan had worked on the logic of relations, and Charles Sanders Peirce integrated his work with Boole's during the
1870s.[40] Other significant figures were Platon Sergeevich Poretskii and William Ernest Johnson. The conception of
a Boolean algebra structure on equivalent statements of a propositional calculus is credited to Hugh MacColl (1877),
in work surveyed 15 years later by Johnson.[40] Surveys of these developments were published by Ernst Schröder,
Louis Couturat, and Clarence Irving Lewis.

42.6.2 20th-century development


In 1921 the economist John Maynard Keynes published a book on probability theory, A Treatise on Probability.
Keynes believed that Boole had made a fundamental error in his definition of independence which vitiated much
of his analysis.[41] In his book The Last Challenge Problem, David Miller provides a general method in accord with
Boole's system and attempts to solve the problems recognised earlier by Keynes and others. Theodore Hailperin
showed much earlier that Boole had used the correct mathematical definition of independence in his worked out
problems.[42]
Boole's work and that of later logicians initially appeared to have no engineering uses. Claude Shannon attended
a philosophy class at the University of Michigan which introduced him to Boole's studies. Shannon recognised
that Boole's work could form the basis of mechanisms and processes in the real world and that it was therefore
highly relevant. In 1937 Shannon went on to write a master's thesis, at the Massachusetts Institute of Technology, in
which he showed how Boolean algebra could optimise the design of systems of electromechanical relays then used in
telephone routing switches. He also proved that circuits with relays could solve Boolean algebra problems. Employing
the properties of electrical switches to process logic is the basic concept that underlies all modern electronic digital
computers. Victor Shestakov (1907–1987) at Moscow State University proposed a theory of electric switches based
on Boolean logic even earlier than Claude Shannon, in 1935, on the testimony of Soviet logicians and mathematicians
Sofya Yanovskaya, Gaaze-Rapoport, Roland Dobrushin, Lupanov, Medvedev and Uspensky, though they presented
their academic theses in the same year, 1938. But the first publication of Shestakov's result took place only in 1941 (in
Russian). Hence, Boolean algebra became the foundation of practical digital circuit design; and Boole, via Shannon
and Shestakov, provided the theoretical grounding for the Information Age.[43]

In modern notation, the free Boolean algebra on basic propositions p and q arranged in a Hasse diagram. The Boolean combinations
make up 16 different propositions, and the lines show which are logically related.

42.6.3 21st-century celebration

Boole's legacy surrounds us everywhere, in the computers, information storage and retrieval, electronic circuits and
controls that support life, learning and communications in the 21st century. His pivotal advances in mathematics,
logic and probability provided the essential groundwork for modern mathematics, microelectronic engineering and
computer science.

University College Cork.[3]


2015 saw the 200th anniversary of George Boole's birth. To mark the bicentenary year, University College Cork
joined admirers of Boole around the world to celebrate his life and legacy.
UCC's George Boole 200[44] project featured events, student outreach activities and academic conferences on Boole's
legacy in the digital age, including a new edition of Desmond MacHale's 1985 biography The Life and Work of George
Boole: A Prelude to the Digital Age (2014).[45]
The search engine Google marked the 200th anniversary of his birth on 2 November 2015 with an algebraic reimagining
of its Google Doodle.[3]
Lichfield Cottage in Ballintemple, Cork, where Boole lived for the last two years of his life, bears a memorial
plaque. His former residence, in Grenville Place, is being restored through a collaboration between UCC and Cork
City Council, as the George Boole House of Innovation, after the city council acquired the premises under the Derelict
Sites Act.[46]

42.7 Views
Boole's views were given in four published addresses: The Genius of Sir Isaac Newton; The Right Use of Leisure; The
Claims of Science; and The Social Aspect of Intellectual Culture.[47] The first of these was from 1835, when Charles
Anderson-Pelham, 1st Earl of Yarborough, gave a bust of Newton to the Mechanics' Institute in Lincoln.[48] The
second justified and celebrated in 1847 the outcome of the successful campaign for early closing in Lincoln, headed
by Alexander Leslie-Melville, of Branston Hall.[49] The Claims of Science was given in 1851 at Queen's College,
Cork.[50] The Social Aspect of Intellectual Culture was also given in Cork, in 1855, to the Cuvierian Society.[51]
Though his biographer Des MacHale describes Boole as an "agnostic deist",[52][53] Boole read a wide variety of
Christian theology. Combining his interests in mathematics and theology, he compared the Christian trinity of Father,
Son, and Holy Ghost with the three dimensions of space, and was attracted to the Hebrew conception of God as an
absolute unity. Boole considered converting to Judaism but in the end was said to have chosen Unitarianism. Boole
came to speak against what he saw as a prideful scepticism, and instead favoured the belief in a "Supreme Intelligent
Cause".[54] He also declared "I firmly believe, for the accomplishment of a purpose of the Divine Mind".[55][56] In
addition, he stated that he perceived "teeming evidences of surrounding design" and concluded that the course of
this world is not abandoned to chance and inexorable fate.[57][58]
Two influences on Boole were later claimed by his wife, Mary Everest Boole: a universal mysticism tempered by
Jewish thought, and Indian logic.[59] Mary Boole stated that an adolescent mystical experience provided for his life's
work:

My husband told me that when he was a lad of seventeen a thought struck him suddenly, which
became the foundation of all his future discoveries. It was a flash of psychological insight into the
conditions under which a mind most readily accumulates knowledge [...] For a few years he supposed
himself to be convinced of the truth of the Bible as a whole, and even intended to take orders as a
clergyman of the English Church. But by the help of a learned Jew in Lincoln he found out the true
nature of the discovery which had dawned on him. This was that man's mind works by means of some
mechanism which functions normally towards Monism.[60]

In Ch. 13 of Laws of Thought Boole used examples of propositions from Baruch Spinoza and Samuel Clarke. The
work contains some remarks on the relationship of logic to religion, but they are slight and cryptic.[61] Boole was
apparently disconcerted at the book's reception just as a mathematical toolset:

George afterwards learned, to his great joy, that the same conception of the basis of Logic was held by
Leibnitz, the contemporary of Newton. De Morgan, of course, understood the formula in its true sense;
he was Boole's collaborator all along. Herbert Spencer, Jowett, and Robert Leslie Ellis understood, I feel
sure; and a few others, but nearly all the logicians and mathematicians ignored [953] the statement that
the book was meant to throw light on the nature of the human mind; and treated the formula entirely as
a wonderful new method of reducing to logical order masses of evidence about external fact.[60]

Mary Boole claimed that there was profound influence via her uncle George Everest of Indian thought on
George Boole, as well as on Augustus De Morgan and Charles Babbage:

Think what must have been the effect of the intense Hinduizing of three such men as Babbage,
De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in
generating the Vector Analysis and the mathematics by which investigations in physical science are now
conducted?[60]

42.8 Family
In 1855 he married Mary Everest (niece of George Everest), who later wrote several educational works on her
husband's principles.
The Booles had five daughters:

Mary Ellen (1856–1908)[62] who married the mathematician and author Charles Howard Hinton and had four
children: George (1882–1943), Eric (*1884), William (1886–1909)[63] and Sebastian (1887–1923), inventor
of the jungle gym. After the sudden death of her husband, Mary Ellen committed suicide in Washington,
D.C., in May 1908.[64] Sebastian had three children:

Jean Hinton (married name Rosner) (1917–2002), peace activist.


William H. Hinton (1919–2004) visited China in the 1930s and 40s and wrote an influential account of
the Communist land reform.
Joan Hinton (1921–2010) worked for the Manhattan Project and lived in China from 1948 until her death
on 8 June 2010; she was married to Sid Engst.

Margaret (1858–1935) married Edward Ingram Taylor, an artist.

Their elder son Geoffrey Ingram Taylor became a mathematician and a Fellow of the Royal Society.
Their younger son Julian was a professor of surgery.

Alicia (1860–1940), who made important contributions to four-dimensional geometry.

Lucy Everest (1862–1904), who was the first female professor of chemistry in England.

Ethel Lilian (1864–1960), who married the Polish scientist and revolutionary Wilfrid Michael Voynich and
was the author of the novel The Gadfly.

42.9 See also


Boolean algebra, a logical calculus of truth values or set membership

Boolean algebra (structure), a set with operations resembling logical ones

Boolean ring, a ring consisting of idempotent elements

Boolean circuit, a mathematical model for digital logical circuits.

Boolean data type is a data type, having two values (usually denoted true and false)

Boolean expression, an expression in a programming language that produces a Boolean value when evaluated

Boolean function, a function that determines Boolean values or operators

Boolean model (probability theory), a model in stochastic geometry

Boolean network, a certain network consisting of a set of Boolean variables whose state is determined by other
variables in the network

Boolean processor, a 1-bit variable computing unit

Boolean satisability problem

Boole's syllogistic is a logic invented by 19th-century British mathematician George Boole, which attempts to
incorporate the empty set.

List of Boolean algebra topics

List of pioneers in computer science

42.10 Notes
[1] O'Connor, John J.; Robertson, Edmund F., George Boole, MacTutor History of Mathematics archive, University of St
Andrews.

[2] Hill, p. 149; Google Books.

[3] Who is George Boole: the mathematician behind the Google doodle. Sydney Morning Herald. 2 November 2015.

[4] Boole, George (2012) [Originally published by Watts & Co., London, in 1952]. Rhees, Rush, ed. Studies in Logic and
Probability (Reprint ed.). Mineola, NY: Dover Publications. p. 273. ISBN 978-0-486-48826-4. Retrieved 27 October
2015.

[5] John Boole. Lincoln Boole Foundation. Retrieved 6 November 2015.

[6] Chisholm, Hugh, ed. (1911). "Boole, George". Encyclopædia Britannica (11th ed.). Cambridge University Press.

[7] Rhees, Rush. (1954) George Boole as Student and Teacher. By Some of His Friends and Pupils, Proceedings of the
Royal Irish Academy. Section A: Mathematical and Physical Sciences. Vol. 57. Royal Irish Academy

[8] Society for the History of Astronomy, Lincolnshire.

[9] Edwards, A. W. F. Bromhead, Sir Edward Thomas French. Oxford Dictionary of National Biography (online ed.). Oxford
University Press. doi:10.1093/ref:odnb/37224. (Subscription or UK public library membership required.)

[10] Burris, Stanley. George Boole. Stanford Encyclopedia of Philosophy.

[11] A Selection of Papers relative to the County of Lincoln, read before the Lincolnshire Topographical Society, 1841-1842.
Printed by W. and B. Brooke, High-Street, Lincoln, 1843.

[12] Hill, p. 172 note 2; Google Books.

[13] Hill, p. 130 note 1; Google Books.

[14] Hill, p. 148; Google Books.

[15] Ronald Calinger, Vita mathematica: historical research and integration with teaching (1996), p. 292; Google Books.

[16] Hill, p. 138 note 4; Google Books.

[17] Keith Awards 1827–1890. Cambridge Journals Online. Retrieved 29 November 2014.

[18] Ivor Grattan-Guinness, Gérard Bornet, George Boole: Selected manuscripts on logic and its philosophy (1997), p. xiv;
Google Books.

[19] Dublin City Quick Search: Buildings of Ireland: National Inventory of Architectural Heritage.

[20] Barker, Tommy (13 June 2015). Have a look inside the home of UCC maths professor George Boole. Irish Examiner.
Retrieved 6 November 2015.

[21] Stanford Encyclopedia of Philosophy

[22] Death-His Life-- George Boole 200.

[23] A list of Boole's memoirs and papers is in the Catalogue of Scientific Memoirs published by the Royal Society, and in the
supplementary volume on differential equations, edited by Isaac Todhunter. To the Cambridge Mathematical Journal and
its successor, the Cambridge and Dublin Mathematical Journal, Boole contributed 22 articles in all. In the third and fourth
series of the Philosophical Magazine are found 16 papers. The Royal Society printed six memoirs in the Philosophical
Transactions, and a few other memoirs are to be found in the Transactions of the Royal Society of Edinburgh and of the
Royal Irish Academy, in the Bulletin de l'Académie de St-Pétersbourg for 1862 (under the name G. Boldt, vol. iv. pp.
198–215), and in Crelle's Journal. Also included is a paper on the mathematical basis of logic, published in the Mechanics'
Magazine in 1848.

[24] Andrei Nikolaevich Kolmogorov, Adolf Pavlovich Yushkevich (editors), Mathematics of the 19th Century: function theory
according to Chebyshev, ordinary differential equations, calculus of variations, theory of finite differences (1998), pp. 130–2;
Google Books.

[25] Jeremy Gray, Karen Hunger Parshall, Episodes in the History of Modern Algebra (1800–1950) (2007), p. 66; Google
Books.

[26] George Boole, The Mathematical Analysis of Logic, Being an Essay towards a Calculus of Deductive Reasoning (London,
England: Macmillan, Barclay, & Macmillan, 1847).

[27] George Boole, A treatise on differential equations (1859), Internet Archive.

[28] Boole, George (1857). On the Comparison of Transcendents, with Certain Applications to the Theory of Definite
Integrals. Philosophical Transactions of the Royal Society of London. 147: 745–803. JSTOR 108643. doi:10.1098/rstl.1857.0037.

[29] Cima, Joseph A.; Matheson, Alec; Ross, William T. (2005). The Cauchy transform. Quadrature domains and their
applications. Oper. Theory Adv. Appl. 156. Basel: Birkhäuser. pp. 79–111. MR 2129737.

[30] John Corcoran, Aristotle's Prior Analytics and Boole's Laws of Thought, History and Philosophy of Logic, vol. 24 (2003),
pp. 261–288.

[31] Grattan-Guinness, I. Boole, George. Oxford Dictionary of National Biography (online ed.). Oxford University Press.
doi:10.1093/ref:odnb/2868. (Subscription or UK public library membership required.)

[32] Witold Marciszewski (editor), Dictionary of Logic as Applied in the Study of Language (1981), pp. 194–5.

[33] Corcoran, John (2003). Aristotle's Prior Analytics and Boole's Laws of Thought. History and Philosophy of Logic,
24: 261–288. Reviewed by Risto Vilkko. Bulletin of Symbolic Logic, 11 (2005) 89–91. Also by Marcel Guillaume,
Mathematical Reviews 2033867 (2004m:03006).

[34] George Boole. 1854/2003. The Laws of Thought, facsimile of 1854 edition, with an introduction by John Corcoran.
Buffalo: Prometheus Books (2003). Reviewed by James van Evra in Philosophy in Review 24 (2004) 167–169.

[35] Andrei Nikolaevich Kolmogorov, Adolf Pavlovich Yushkevich, Mathematics of the 19th Century: mathematical logic,
algebra, number theory, probability theory (2001), pp. 15 (note 15)–16; Google Books.

[36] Burris, Stanley. The Algebra of Logic Tradition. Stanford Encyclopedia of Philosophy.

[37] Boole, George (1854). An Investigation of the Laws of Thought. London: Walton & Maberly. pp. 265275.

[38] P. J. Brown, Pascal from Basic, Addison-Wesley, 1982. ISBN 0-201-13789-5, page 72

[39] Boole Centre for Research in Informatics.

[40] Ivor Grattan-Guinness, Gérard Bornet, George Boole: Selected manuscripts on logic and its philosophy (1997), p. xlvi;
Google Books.

[41] Chapter XVI, p. 167, section 6 of A treatise on probability, volume 4: The central error in his system of probability
arises out of his giving two inconsistent definitions of 'independence' (2) He first wins the reader's acquiescence by giving a
perfectly correct definition: Two events are said to be independent when the probability of either of them is unaffected by
our expectation of the occurrence or failure of the other. (3) But a moment later he interprets the term in quite a different
sense; for, according to Boole's second definition, we must regard the events as independent unless we are told either that
they must concur or that they cannot concur. That is to say, they are independent unless we know for certain that there is,
in fact, an invariable connection between them. The simple events, x, y, z, will be said to be conditioned when they are not
free to occur in every possible combination; in other words, when some compound event depending upon them is precluded
from occurring. ... Simple unconditioned events are by definition independent. (1) In fact as long as xz is possible, x and
z are independent. This is plainly inconsistent with Boole's first definition, with which he makes no attempt to reconcile it.
The consequences of his employing the term independence in a double sense are far-reaching. For he uses a method of
reduction which is only valid when the arguments to which it is applied are independent in the first sense, and assumes that
it is valid if they are independent in the second sense. While his theorems are true if all propositions or events involved are
independent in the first sense, they are not true, as he supposes them to be, if the events are independent only in the second
sense.

[42] ZETETIC GLEANINGS.

[43] That dissertation has since been hailed as one of the most significant master's theses of the 20th century. To all intents and
purposes, its use of binary code and Boolean algebra paved the way for the digital circuitry that is crucial to the operation
of modern computers and telecommunications equipment."Emerson, Andrew (8 March 2001). Claude Shannon. United
Kingdom: The Guardian.

[44] George Boole 200 - George Boole Bicentenary Celebrations.

[45] Cork University Press

[46] Boolean logic meets Victorian gothic in leafy Cork suburb.

[47] 1902 Britannica article by Jevons; online text.

[48] James Gasser, A Boole Anthology: recent and classical studies in the logic of George Boole (2000), p. 5; Google Books.

[49] Gasser, p. 10; Google Books.

[50] Boole, George (1851). The Claims of Science, especially as founded in its relations to human nature; a lecture. Retrieved 4
March 2012.

[51] Boole, George (1855). The Social Aspect of Intellectual Culture: an address delivered in the Cork Athenæum, May 29th,
1855: at the soirée of the Cuvierian Society. George Purcell & Co. Retrieved 4 March 2012.

[52] International Association for Semiotic Studies; International Council for Philosophy and Humanistic Studies; International
Social Science Council (1995). A tale of two amateurs. Semiotica, Volume 105. Mouton. p. 56. MacHale's biography
calls George Boole 'an agnostic deist'. Both Booles' classification of religious philosophies as monistic, dualistic, and
trinitarian left little doubt about their preference for 'the unity religion', whether Judaic or Unitarian.

[53] International Association for Semiotic Studies; International Council for Philosophy and Humanistic Studies; International
Social Science Council (1996). Semiotica, Volume 105. Mouton. p. 17. MacHale does not repress this or other evidence
of the Booles' nineteenth-century beliefs and practices in the paranormal and in religious mysticism. He even concedes
that George Boole's many distinguished contributions to logic and mathematics may have been motivated by his distinctive
religious beliefs as an agnostic deist and by an unusual personal sensitivity to the sufferings of other people.

[54] Boole, George. Studies in Logic and Probability. 2002. Courier Dover Publications. pp. 201–202

[55] Boole, George. Studies in Logic and Probability. 2002. Courier Dover Publications. p. 451

[56] Some-Side of a Scientific Mind (2013). pp. 112–3. The University Magazine, 1878. London: Forgotten Books. (Original
work published 1878)

[57] Concluding remarks of his treatise of Clarke and Spinoza, as found in Boole, George (2007). An Investigation of the
Laws of Thought. Cosimo, Inc. Chap. XIII. pp. 217–218. (Original work published 1854)

[58] Boole, George (1851). The claims of science, especially as founded in its relations to human nature; a lecture, Volume 15.
p. 24

[59] Jonardon Ganeri (2001), Indian Logic: a reader, Routledge, p. 7, ISBN 0-7007-1306-9; Google Books.

[60] Boole, Mary Everest, Indian Thought and Western Science in the Nineteenth Century, in Boole, Mary Everest, Collected Works,
eds. E. M. Cobham and E. S. Dummer, London, Daniel 1931, pp. 947–967

[61] Grattan-Guinness and Bornet, p. 16; Google Books.

[62] Family and Genealogy - His Life George Boole 200. Georgeboole.com. Retrieved 7 March 2016.

[63] Smothers In Orchard in The Los Angeles Times v. 27 February 1909.

[64] 'My Right To Die,' Woman Kills Self in The Washington Times v. 28 May 1908 (PDF); Mrs. Mary Hinton A Suicide in The
New York Times v. 29 May 1908 (PDF).

42.11 References
University College Cork, George Boole 200 Bicentenary Celebration, GeorgeBoole.com.
Chisholm, Hugh, ed. (1911). "Boole, George". Encyclopædia Britannica (11th ed.). Cambridge University
Press.
Ivor Grattan-Guinness, The Search for Mathematical Roots 1870–1940. Princeton University Press. 2000.
Francis Hill (1974), Victorian Lincoln; Google Books.
Des MacHale, George Boole: His Life and Work. Boole Press. 1985.
Des MacHale, The Life and Work of George Boole: A Prelude to the Digital Age (new edition). Cork University
Press. 2014
Stephen Hawking, God Created the Integers. Running Press, Philadelphia. 2007.

42.12 External links


Roger Parsons article on Boole
George Boole: A 200-Year View by Stephen Wolfram.
Works by George Boole at Project Gutenberg
Works by or about George Boole at Internet Archive

George Booles work as rst Professor of Mathematics in University College, Cork, Ireland

George Boole website


Author prole in the database zbMATH
Chapter 43

Goodman–Nguyen–van Fraassen algebra

A Goodman–Nguyen–van Fraassen algebra is a type of conditional event algebra (CEA) that embeds the standard
Boolean algebra of unconditional events in a larger algebra which is itself Boolean. The goal (as with all CEAs) is
to equate the conditional probability P(A ∩ B) / P(A) with the probability of a conditional event, P(A → B), for more
than just trivial choices of A, B, and P.

43.1 Construction of the algebra


Given a set Ω, which is the set of possible outcomes, and a set F of subsets of Ω (so that F is the set of possible events),
consider an infinite Cartesian product of the form E1 × E2 × … × En × Ω × Ω × Ω × …, where E1, E2, …, En are
members of F. Such a product specifies the set of all infinite sequences whose first element is in E1, whose second
element is in E2, …, whose nth element is in En, and all of whose subsequent elements are in Ω. Note that one such
product is the one where E1 = E2 = … = En = Ω, i.e., the set Ω × Ω × Ω × …. Designate this set as Ω̂; it is the set of
all infinite sequences whose elements are in Ω.
A new Boolean algebra is now formed, whose elements are subsets of Ω̂. To begin with, any event which was formerly
represented by subset A of Ω is now represented by Â = A × Ω × Ω × Ω × ….
Additionally, however, for events A and B, let the conditional event A → B be represented as the following infinite
union of disjoint sets:

[(A ∩ B) × Ω × Ω × Ω × …] ∪
[A′ × (A ∩ B) × Ω × Ω × Ω × …] ∪
[A′ × A′ × (A ∩ B) × Ω × Ω × Ω × …] ∪ ….

The motivation for this representation of conditional events will be explained shortly. Note that the construction can
be iterated; A and B can themselves be conditional events.
Intuitively, unconditional event A ought to be representable as conditional event Ω → A. And indeed: because Ω ∩ A
= A and Ω′ = ∅, the infinite union representing Ω → A reduces to Â = A × Ω × Ω × Ω × ….
Let F̂ now be a set of subsets of Ω̂, which contains representations of all events in F and is otherwise just large
enough to be closed under construction of conditional events and under the familiar Boolean operations. F̂ is a
Boolean algebra of conditional events which contains a Boolean algebra corresponding to the algebra of ordinary
events.
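Although the representation uses infinite sequences, membership in the disjoint union is decided by the first coordinate that falls outside A′: the sequence belongs to the union for A → B exactly when that coordinate lands in A ∩ B. A small Monte Carlo sketch makes this concrete; the four-outcome space and the events A and B below are invented for illustration and are not part of the original text:

```python
import random

random.seed(0)
Omega = [0, 1, 2, 3]   # outcomes, uniform with probability 1/4 each
A = {0, 1}             # P(A) = 1/2
B = {1, 2}             # P(A ∩ B) = 1/4, so P(B|A) = 1/2

def in_conditional_event():
    """Sample coordinates of a sequence lazily. The sequence lies in the
    union representing A -> B iff it begins with some run of elements of
    A' followed by an element of A ∩ B."""
    while True:
        w = random.choice(Omega)
        if w in A:             # first coordinate outside A' decides membership
            return w in B

trials = 100_000
freq = sum(in_conditional_event() for _ in range(trials)) / trials
print(freq)  # ≈ 0.5 = P(B|A)
```

The empirical frequency approaches P(B|A) rather than P(A ∩ B), which is the point of the construction.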

43.2 Definition of the extended probability function


Corresponding to the newly constructed logical objects, called conditional events, is a new definition of a probability
function, P̂, based on a standard probability function P:

P̂(E1 × E2 × … × En × Ω × Ω × Ω × …) = P(E1)P(E2) … P(En)P(Ω)P(Ω)P(Ω) … = P(E1)P(E2) … P(En),

since P(Ω) = 1.

It follows from the definition of P̂ that P̂(Â) = P(A). Thus P̂ = P over the domain of P.

43.3 P(A → B) = P(B|A)


Now comes the insight which motivates all of the preceding work. For P, the original probability function, P(A) = 1 −
P(A′), and therefore P(B|A) = P(A ∩ B) / P(A) can be rewritten as P(A ∩ B) / [1 − P(A′)]. The factor 1 / [1 − P(A′)],
however, can in turn be represented by its Maclaurin series expansion, 1 + P(A′) + P(A′)² + …. Therefore, P(B|A) =
P(A ∩ B) + P(A′)P(A ∩ B) + P(A′)²P(A ∩ B) + ….
The right side of the equation is exactly the expression for the probability P̂ of A → B, just defined as a union of
carefully chosen disjoint sets. Thus that union can be taken to represent the conditional event A → B, such that P̂(A
→ B) = P(B|A) for any choice of A, B, and P. But since P̂ = P over the domain of P, the hat notation is optional.
So long as the context is understood (i.e., conditional event algebra), one can write P(A → B) = P(B|A), with P now
being the extended probability function.
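The series manipulation is easy to verify numerically. The sketch below truncates the Maclaurin series and compares it with the ordinary conditional probability; the values chosen for P(A) and P(A ∩ B) are illustrative assumptions, not taken from the text:

```python
# Illustrative probabilities (assumptions for this sketch)
pA = 0.6             # P(A)
pAB = 0.3            # P(A ∩ B)
pA_c = 1 - pA        # P(A′)

# P(B|A) = P(A ∩ B) + P(A′)P(A ∩ B) + P(A′)²P(A ∩ B) + …, truncated at 200 terms
series = sum(pA_c ** k * pAB for k in range(200))
direct = pAB / pA    # the usual P(B|A) = P(A ∩ B) / P(A)

print(abs(series - direct) < 1e-12)  # True
```

Since P(A′) < 1, the geometric series converges and the truncation error is negligible after a few dozen terms.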

43.4 References
Bamber, Donald, I. R. Goodman, and H. T. Nguyen. 2004. Deduction from Conditional Knowledge. Soft
Computing 8: 247–255.
Goodman, I. R., R. P. S. Mahler, and H. T. Nguyen. 1999. What is conditional event algebra and why should you
care? SPIE Proceedings, Vol. 3720.
Chapter 44

Implicant

In Boolean logic, an implicant is a covering (sum term or product term) of one or more minterms in a sum of
products (or maxterms in product of sums) of a Boolean function. Formally, a product term P in a sum of products
is an implicant of the Boolean function F if P implies F. More precisely:

P implies F (and thus is an implicant of F) if F also takes the value 1 whenever P equals 1.

where

F is a Boolean function of n variables.


P is a product term.

This means that P ≤ F with respect to the natural ordering of the Boolean space. For instance, the function

f (x, y, z, w) = xy + yz + w

is implied by xy, by xyz, by xyzw, by w and many others; these are the implicants of f.

44.1 Prime implicant


A prime implicant of a function is an implicant that cannot be covered by a more general implicant (one that is more
reduced, i.e., has fewer literals). W. V. Quine defined a prime implicant of F to be an implicant that is minimal, that is,
one for which the removal of any literal from P results in a non-implicant for F. Essential prime implicants (aka core prime
implicants) are prime implicants that cover an output of the function that no combination of other prime implicants
is able to cover.
Using the example above, one can easily see that while xy (and others) is a prime implicant, xyz and xyzw are not.
From the latter, multiple literals can be removed to make it prime:

x, y and z can be removed, yielding w.

Alternatively, z and w can be removed, yielding xy.
Finally, x and w can be removed, yielding yz.

The process of removing literals from a Boolean term is called expanding the term. Expanding by one literal doubles
the number of input combinations for which the term is true (in binary Boolean algebra). Using the example function
above, we may expand xyz to xy or to yz without changing the cover of f.[1]
The sum of all prime implicants of a Boolean function is called its complete sum, minimal covering sum, or Blake
canonical form.
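For small functions the definitions can be checked by brute force. The sketch below is an illustrative enumeration for the example f(x, y, z, w) = xy + yz + w; the representation of product terms as dicts of fixed literal values is an assumption of this example, not notation from the text:

```python
from itertools import product

def f(x, y, z, w):
    # The example function f(x, y, z, w) = xy + yz + w
    return (x and y) or (y and z) or w

def implies(term, fn, nvars=4):
    """term maps variable index -> required value (a product term).
    True iff every assignment satisfying the term also satisfies fn,
    i.e. the term is an implicant of fn."""
    return all(fn(*bits)
               for bits in product([0, 1], repeat=nvars)
               if all(bits[i] == v for i, v in term.items()))

def is_prime(term, fn):
    """Prime implicant: removing any single literal breaks implication."""
    return implies(term, fn) and all(
        not implies({i: v for i, v in term.items() if i != j}, fn)
        for j in term)

xy, xyz, w = {0: 1, 1: 1}, {0: 1, 1: 1, 2: 1}, {3: 1}

print(implies(xyz, f))   # True: xyz is an implicant
print(is_prime(xy, f))   # True
print(is_prime(xyz, f))  # False: z can be removed to give xy
```

The exhaustive check runs over all 16 assignments, so this only scales to small variable counts; the Quine–McCluskey algorithm in the See also section handles the general case.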


44.2 See also


QuineMcCluskey algorithm

Karnaugh map
Petrick's method

44.3 References
[1] De Micheli, Giovanni. Synthesis and Optimization of Digital Circuits. McGraw-Hill, Inc., 1994

44.4 External links


Slides explaining implicants, prime implicants and essential prime implicants

Examples of nding essential prime implicants using K-map


Chapter 45

Implication graph

[Figure: an implication graph for a 2-satisfiability instance on the variables x0, …, x6; each vertex is a literal (xi or ~xi) and each directed edge is an implication derived from one of the instance's clauses.]

In mathematical logic, an implication graph is a skew-symmetric directed graph G(V, E) composed of vertex set
V and directed edge set E. Each vertex in V represents the truth status of a Boolean literal, and each directed edge
from vertex u to vertex v represents the material implication If the literal u is true then the literal v is also true.
Implication graphs were originally used for analyzing complex Boolean expressions.

180
45.1. APPLICATIONS 181

45.1 Applications
A 2-satisfiability instance in conjunctive normal form can be transformed into an implication graph by replacing
each of its disjunctions by a pair of implications. For example, the statement (x0 ∨ x1) can be rewritten as the
pair (¬x0 → x1), (¬x1 → x0). An instance is satisfiable if and only if no literal and its negation belong to the
same strongly connected component of its implication graph; this characterization can be used to solve 2-satisfiability
instances in linear time.[1]
In CDCL SAT-solvers, unit propagation can be naturally associated with an implication graph that captures all possible
ways of deriving all implied literals from decision literals,[2] which is then used for clause learning.

45.2 References
[1] Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979). A linear-time algorithm for testing the truth of certain
quantified Boolean formulas. Information Processing Letters. 8 (3): 121–123. doi:10.1016/0020-0190(79)90002-4.

[2] Paul Beame; Henry Kautz; Ashish Sabharwal (2003). Understanding the Power of Clause Learning (PDF). IJCAI. pp.
1194–1201.
Chapter 46

Inclusion (Boolean algebra)

In Boolean algebra (structure), the inclusion relation a ≤ b is defined as ab′ = 0 and is the Boolean analogue to the
subset relation in set theory. Inclusion is a partial order.
The inclusion relation a < b can be expressed in many ways:

a < b

ab′ = 0
a′ + b = 1

b′ < a′
a + b = b

ab = a

The inclusion relation has a natural interpretation in various Boolean algebras: in the subset algebra, the subset relation;
in arithmetic Boolean algebra, divisibility; in the algebra of propositions, material implication; in the two-element
algebra, the set { (0,0), (0,1), (1,1) }.
Some useful properties of the inclusion relation are:

a ≤ a + b

ab ≤ a

The inclusion relation may be used to define Boolean intervals such that a ≤ x ≤ b. A Boolean algebra whose carrier
set is restricted to the elements in an interval is itself a Boolean algebra.
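In the subset algebra these equivalent formulations can be verified exhaustively. The sketch below is illustrative: it takes the power set of a three-element set as the Boolean algebra and checks that set inclusion coincides with ab′ = 0, a + b = b, and ab = a for every pair of elements:

```python
from itertools import combinations

U = frozenset({1, 2, 3})                 # carrier of the subset algebra

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

for a in powerset(U):
    for b in powerset(U):
        subset = a <= b                            # a ≤ b, i.e. set inclusion
        ab_compl = (a & (U - b)) == frozenset()    # ab′ = 0 (b′ is U − b)
        join = (a | b) == b                        # a + b = b
        meet = (a & b) == a                        # ab = a
        assert subset == ab_compl == join == meet

print("all", len(powerset(U)) ** 2, "pairs agree")  # all 64 pairs agree
```

Here meet, join, and complement are intersection, union, and set complement relative to U, matching the subset-algebra interpretation in the text.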

46.1 References
Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 52

Chapter 47

Interior algebra

In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological
interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and
ordinary propositional logic. Interior algebras form a variety of modal algebras.

47.1 Definition
An interior algebra is an algebraic structure with the signature

⟨S, ·, +, ′, 0, 1, I⟩

where

⟨S, ·, +, ′, 0, 1⟩

is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities:

1. xI ≤ x
2. xII = xI
3. (xy)I = xIyI
4. 1I = 1

xI is called the interior of x.

The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By
the principle of duality, the closure operator satisfies the identities:

1. x ≤ xC
2. xCC = xC
3. (x + y)C = xC + yC
4. 0C = 0

If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory
of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one
considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies
the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic
instances of Boolean algebras with operators. The early literature on this subject (mainly Polish topology) invoked
closure operators, but the interior operator formulation eventually became the norm.


47.2 Open and closed elements


Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements
are called closed and are characterized by the condition xC = x. An interior of an element is always open and the
closure of an element is always closed. Interiors of closed elements are called regular open and closures of open
elements are called regular closed. Elements which are both open and closed are called clopen. 0 and 1 are clopen.
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras
can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful
additional structure. A special case is the class of trivial interior algebras which are the single element interior algebras
characterized by the identity 0 = 1.

47.3 Morphisms of interior algebras

47.3.1 Homomorphisms
Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B,
a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying
Boolean algebras of A and B, that also preserves interiors and closures. Hence:

f(xI) = f(x)I;
f(xC) = f(x)C.

47.3.2 Topomorphisms
Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f :
A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that
also preserves the open and closed elements of A. Hence:

If x is open in A, then f(x) is open in B;


If x is closed in A, then f(x) is closed in B.

Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra
homomorphism.

47.4 Relationships to other areas of mathematics

47.4.1 Topology
Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X:

⟨P(X), ∩, ∪, ′, ∅, X⟩

and extend it to an interior algebra

A(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩,

where I is the usual topological interior operator. For all S ⊆ X it is defined by

SI = ∪ {O : O ⊆ S and O is open in X}

For all S ⊆ X the corresponding closure operator is given by

SC = ∩ {C : S ⊆ C and C is closed in X}

SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open,
regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed
and clopen subsets of X respectively in the usual topological sense.
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological
space X. Moreover, every interior algebra can be embedded in such an interior algebra, giving a representation of an
interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the
definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been
called topo-Boolean algebras or topological Boolean algebras.
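For a finite space the interior operator can be computed directly from the list of open sets, and the four identities of the definition checked mechanically. The sketch below uses an invented four-point topology (an illustrative assumption, not an example from the text):

```python
from itertools import combinations

X = frozenset({1, 2, 3, 4})
# A topology on X: contains ∅ and X, closed under unions and intersections.
opens = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def interior(S):
    """S^I = union of all open sets contained in S."""
    return frozenset().union(*(O for O in opens if O <= S))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

for x in subsets(X):
    assert interior(x) <= x                        # x^I ≤ x
    assert interior(interior(x)) == interior(x)    # x^II = x^I
    for y in subsets(X):
        # (xy)^I = x^I y^I, with meet as intersection
        assert interior(x & y) == interior(x) & interior(y)
assert interior(X) == X                            # 1^I = 1
print("interior axioms hold")
```

Because the topology contains the empty set, the union in `interior` is never over an empty collection, and the four identities hold for every choice of finite topology, not just this one.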
Given a continuous map between two topological spaces

f : X → Y

we can define a complete topomorphism

A(f) : A(Y) → A(X)

by

A(f)(S) = f⁻¹[S]

for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived
in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete
atomic interior algebras and complete topomorphisms, then Top and Cit are dually isomorphic and A : Top → Cit
is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a
continuous open map.
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in
particular connectedness properties correspond to irreducibility properties:

X is empty if and only if A(X) is trivial


X is indiscrete if and only if A(X) is simple
X is discrete if and only if A(X) is Boolean
X is almost discrete if and only if A(X) is semisimple
X is finitely generated (Alexandrov) if and only if A(X) is operator complete, i.e. its interior and closure
operators distribute over arbitrary meets and joins respectively
X is connected if and only if A(X) is directly indecomposable
X is ultraconnected if and only if A(X) is finitely subdirectly irreducible
X is compact ultraconnected if and only if A(X) is subdirectly irreducible

Generalized topology

The modern formulation of topological spaces in terms of topologies of open subsets motivates an alternative
formulation of interior algebras: A generalized topological space is an algebraic structure of the form

⟨B, ·, +, ′, 0, 1, T⟩

where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (a subset of B) such that:

1. 0, 1 ∈ T
186 CHAPTER 47. INTERIOR ALGEBRA

2. T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T)
3. T is closed under finite meets
4. For every element b of B, the join ∨{a ∈ T : a ≤ b} exists

T is said to be a generalized topology in the Boolean algebra.


Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological
space

⟨B, ·, +, ′, 0, 1, T⟩

we can define an interior operator on B by b^I = ∨{a ∈ T : a ≤ b}, thereby producing an interior algebra whose open
elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras.
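To make the equivalence concrete, here is a minimal sketch: the interior operator is recovered from a generalized topology on a powerset Boolean algebra, and its open elements come out as exactly T. The three-point set and the particular topology T are assumptions made up for the example.

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# B is the powerset Boolean algebra of {1, 2, 3}; T is a generalized topology on it.
B = powerset({1, 2, 3})
T = [frozenset(), frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3})]

def I(b):
    # b^I = join (here: union) of {a in T : a <= b}
    pts = set()
    for a in T:
        if a <= b:
            pts |= a
    return frozenset(pts)

one = frozenset({1, 2, 3})
for x in B:
    assert I(x) <= x                        # x^I <= x
    assert I(I(x)) == I(x)                  # interior is idempotent
    for y in B:
        assert I(x & y) == I(x) & I(y)      # interior distributes over meets
assert I(one) == one                        # 1^I = 1

# The open elements (fixed points of I) are precisely T.
assert {x for x in B if I(x) == x} == set(T)
```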
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomor-
phisms of Boolean algebras with added relations, so that standard results from universal algebra apply.

Neighbourhood functions and neighbourhood lattices

The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra
is said to be a neighbourhood of an element x if x ≤ y^I. The set of neighbourhoods of x is denoted by N(x) and
forms a filter. This leads to another formulation of interior algebras:
A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such
that:

1. For all x ∈ B, max{y ∈ B : x ∈ N(y)} exists


2. For all x, y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z).

The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on
the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean
algebra with underlying set B, we can define an interior operator by x^I = max{y ∈ B : x ∈ N(y)}, thereby obtaining an
interior algebra. N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior
algebras are equivalent to Boolean algebras with specied neighbourhood functions.
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms
of open elements, x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x.
Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as
neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices
i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.

47.4.2 Modal logic


Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum-Tarski algebra:

L(M) = ⟨M/~, ∧, ∨, ¬, F, T, □⟩

where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent
in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior
operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to
◇ (possibly). This construction is a special case of a more general result for modal algebras and modal logic.
The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed
elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the
logician C. I. Lewis, who first proposed the modal logics S4 and S5.
47.4. RELATIONSHIPS TO OTHER AREAS OF MATHEMATICS 187

47.4.3 Preorders

Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on
appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets
on a set with a single binary relation, called a modal frame. The modal frames corresponding to interior algebras
are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal
logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal
logic.
Given a preordered set X = ⟨X, ≤⟩, we can construct an interior algebra

B(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩

from the power set Boolean algebra of X where the interior operator I is given by

S^I = {x ∈ X : for all y ∈ X, x ≤ y implies y ∈ S} for all S ⊆ X.

The corresponding closure operator is given by

S^C = {x ∈ X : there exists a y ∈ S with x ≤ y} for all S ⊆ X.

S^I is the set of all worlds inaccessible from worlds outside S, and S^C is the set of all worlds accessible from some
world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set
X giving the above-mentioned representation as a field of sets (a preorder field).
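The two operators can be sketched directly for a small preorder. The three-world preorder below is a made-up example; the code only illustrates the definitions of S^I and S^C and their duality.

```python
# Hypothetical preorder on three worlds: reflexive, with 0 <= 1 and 0 <= 2.
X = {0, 1, 2}
le = {(x, x) for x in X} | {(0, 1), (0, 2)}

def interior(S):
    # S^I = {x : for all y, x <= y implies y in S}
    return {x for x in X if all(y in S for y in X if (x, y) in le)}

def closure(S):
    # S^C = {x : there exists y in S with x <= y}
    return {x for x in X if any((x, y) in le for y in S)}

assert interior({1, 2}) == {1, 2}        # an up-set is already open
assert interior({0, 1}) == {1}           # 0 is dropped: 0 <= 2 but 2 is outside S
assert closure({1}) == {0, 1}            # 0 sees 1, so it joins the closure

# Duality: S^C is the complement of the interior of the complement.
from itertools import combinations
for r in range(len(X) + 1):
    for S in combinations(sorted(X), r):
        S = set(S)
        assert closure(S) == X - interior(X - S)
```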
This construction and representation theorem is a special case of the more general result for modal algebras and
modal frames. In this regard, interior algebras are particularly interesting because of their connection to topology.
The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological
space T(X) whose open sets are:

{O ⊆ X : for all x ∈ O and all y ∈ X, x ≤ y implies y ∈ O}.

The corresponding closed sets are:

{C ⊆ X : for all x ∈ C and all y ∈ X, y ≤ x implies y ∈ C}.

In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets
are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).

47.4.4 Monadic Boolean algebras

Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal
quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the
variety of interior algebras satisfying the identity x^IC = x^I. In other words, they are precisely the interior algebras in
which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior
algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal
logic S5, and so have also been called S5 algebras.
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is
an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also
reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide
an algebraic description) and S5, where the modal operators □ (necessarily) and ◇ (possibly) can be interpreted
in the Kripke semantics using monadic universal and existential quantication, respectively, without reference to an
accessibility relation.

47.4.5 Heyting algebras


The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra.
The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual
pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements
correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of
the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra.
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and
Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects
the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4
theories closed under necessity.

47.4.6 Derivative algebras


Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D . Hence we can form
a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative
operator.
Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative
algebras satisfying the identity x ≤ x^D. Derivative algebras provide the appropriate algebraic semantics for the modal
logic WK4. Hence derivative algebras stand to topological derived sets and WK4 as interior/closure algebras stand
to topological interiors/closures and S4.
Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying
Boolean algebra as V, with interior and closure operators defined by x^I = x·(x′^D)′ and x^C = x + x^D, respectively. Thus
every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have
I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V.

47.5 Metamathematics
Grzegorczyk proved the elementary theory of closure algebras undecidable.[1]

47.6 Notes
[1] Andrzej Grzegorczyk (1951) Undecidability of some topological theories, Fundamenta Mathematicae 38: 137–152.

47.7 References
Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam.

Esakia, L., 2004, "Intuitionistic logic and modality via topology", Annals of Pure and Applied Logic 127:
155–170.

McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology", Annals of Mathematics 45: 141–191.
Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of
Mathematics.
Chapter 48

Join (sigma algebra)

In mathematics, the join of two sigma algebras over the same set X is the coarsest sigma algebra containing both.[1]

48.1 References
[1] Peter Walters, An Introduction to Ergodic Theory (1982) Springer Verlag, ISBN 0-387-90599-5

Chapter 49

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh
introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery
of Allan Marquand's 1881 logical diagram[4] (aka the Marquand diagram[3]) but with a focus now set on its utility for
switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as
Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition
capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

49.1 Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the
Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean
variables A, B, C, D, and their inverses.


f(A, B, C, D) = ∑ mᵢ, i ∈ {6, 8, 9, 10, 11, 12, 13, 14}, where mᵢ are the minterms to map (i.e., rows that
have output 1 in the truth table).

f(A, B, C, D) = ∏ Mᵢ, i ∈ {0, 1, 2, 3, 4, 5, 7, 15}, where Mᵢ are the maxterms to map (i.e., rows that have
output 0 in the truth table).
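The two notations describe the same truth table. A quick sketch reconstructs the function from its minterm list, assuming the conventional variable order A, B, C, D with A most significant:

```python
# Minterm indices with output 1, as in the example above.
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def f(A, B, C, D):
    # The minterm index is the truth-table row number: A*8 + B*4 + C*2 + D.
    return int(A * 8 + B * 4 + C * 2 + D in minterms)

# The maxterm indices are exactly the remaining rows.
maxterms = set(range(16)) - minterms
assert maxterms == {0, 1, 2, 3, 4, 5, 7, 15}
assert f(0, 1, 1, 0) == 1   # row 6
assert f(1, 1, 1, 1) == 0   # row 15
```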

49.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows,
and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.

49.1. EXAMPLE 191

            AB
           00   01   11   10
   CD 00    0    0    1    1
      01    0    0    1    1
      11    0    0    0    1
      10    0    1    1    1

f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14)
F = AC′ + AB′ + BCD′ + AD′
F = (A + B)(A + C)(B′ + C′ + D′)(A + D′)

An example Karnaugh map. This image actually shows two Karnaugh maps: for the function f, using minterms (colored
rectangles), and for its complement, using maxterms (gray rectangles). In the image, ∑m() signifies a sum of minterms,
denoted in the article as ∑ mᵢ.

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray
code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of
adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output
for that combination of inputs.
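The Gray-code ordering of the row and column indices can be generated with the standard binary-reflected formula; a small sketch:

```python
def gray(n):
    # n-bit binary-reflected Gray code: adjacent entries differ in exactly one bit
    return [i ^ (i >> 1) for i in range(1 << n)]

codes = gray(2)
assert [format(c, '02b') for c in codes] == ['00', '01', '11', '10']

# Single-bit change between neighbours, including the wrap-around at the edges,
# which is what makes K-map cells on opposite borders 'adjacent'.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count('1') == 1
```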
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical
form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify
the expression. The minterms ('minimal' terms) for the final expression are found by encircling groups of 1s in the
map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm
rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one
larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and
green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For
192 CHAPTER 49. VEITCH CHART

K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.

            AB
           00   01   11   10
   CD 00    0    4   12    8
      01    1    5   13    9
      11    3    7   15   11
      10    2    6   14   10

ABCD: 0000 - 0, 0001 - 1, 0010 - 2, 0011 - 3, 0100 - 4, 0101 - 5, 0110 - 6, 0111 - 7,
1000 - 8, 1001 - 9, 1010 - 10, 1011 - 11, 1100 - 12, 1101 - 13, 1110 - 14, 1111 - 15

K-map construction. Instead of containing output values, this diagram shows the numbers of outputs; therefore it is
not a Karnaugh map.

example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9,
15, 11 in the diagram above. On the other hand, AD′ would mean the cells where A is true and D is false (that is, D′
is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells
on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those
at the bottom. Therefore, AD can be a valid termit includes cells 12 and 8 at the top, and wraps to the bottom to
include cells 10 and 14as is B, D, which includes the four corners.

In three dimensions, one can bend a rectangle into a torus.

49.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic
minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation
of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C′ should be included.
D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC′.


For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before
it can be included. The second term is therefore AB′. Note that it is acceptable that the green grouping overlaps with
the red one.
In the same way, the blue grouping gives the term BCD′.
The solutions of each grouping are combined: the normal form of the circuit is AC′ + AB′ + BCD′.
Thus the Karnaugh map has guided a simplification of

f(A, B, C, D) = A′BCD′ + AB′C′D′ + AB′C′D + AB′CD′ + AB′CD + ABC′D′ + ABC′D + ABCD′
              = AC′ + AB′ + BCD′
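The simplification can be verified exhaustively. A short sketch compares the minimized sum-of-products against the original minterm list:

```python
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def f_simplified(A, B, C, D):
    # AC' + AB' + BCD'
    return bool((A and not C) or (A and not B) or (B and C and not D))

# Check every one of the 16 input rows against the truth table.
for i in range(16):
    A, B, C, D = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    assert f_simplified(A, B, C, D) == (i in minterms)
```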

            AB
           00   01   11   10
   CD 00    0    0    1    1
      01    0    0    1    1
      11    0    0    0    1
      10    0    1    1    1

f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14)
F = AC′ + AB′ + BCD′
F = (A + B)(A + C)(B′ + C′ + D′)

Diagram showing two K-maps. The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond
to minterms. The brown region is an overlap of the red 2×2 square and the green 4×1 rectangle. The K-map for the inverse
of f is shown as gray rectangles, which correspond to maxterms.

It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but
the time it takes to do that grows exponentially with the number of terms.

49.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with differently colored borders:

brown: A′B′

gold: A′C′

blue: BCD

This yields the inverse:

f′(A, B, C, D) = A′B′ + A′C′ + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f′(A, B, C, D) = A′B′ + A′C′ + BCD
f(A, B, C, D) = (A′B′ + A′C′ + BCD)′
f(A, B, C, D) = (A + B)(A + C)(B′ + C′ + D′)
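Both the inverse and the De Morgan product of sums can be checked over all 16 input rows; a brief sketch:

```python
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def f_inverse(A, B, C, D):
    # A'B' + A'C' + BCD
    return bool((not A and not B) or (not A and not C) or (B and C and D))

def f_pos(A, B, C, D):
    # (A + B)(A + C)(B' + C' + D')
    return bool((A or B) and (A or C) and (not B or not C or not D))

for i in range(16):
    A, B, C, D = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    assert f_pos(A, B, C, D) == (i in minterms)          # POS form of f
    assert f_inverse(A, B, C, D) == (i not in minterms)  # SOP form of f'
```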

49.1.4 Don't cares

            AB
           00   01   11   10
   CD 00    0    0    1    1
      01    0    0    1    1
      11    0    0    X    1
      10    0    1    1    1

f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14)
F = A + BCD′
F = (A + B)(A + C)(A + D′)

The value of f for ABCD = 1111 is replaced by a don't care. This removes the green term completely and allows the
red term to be larger. It also allows the blue inverse term to shift and become larger.

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD′

Note that the first term is just A, not AC′. In this case, the don't care has dropped a term (the green rectangle);
simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following
section on race hazards).
The inverse case is simplified as follows:

f′(A, B, C, D) = A′B′ + A′C′ + A′D
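A don't-care cell only needs to agree with the specified rows. A sketch that checks the minimized equation on every row except the unconstrained one:

```python
minterms  = {6, 8, 9, 10, 11, 12, 13, 14}
dont_care = {15}                      # f(1,1,1,1) is left unconstrained

def f_dc(A, B, C, D):
    # A + BCD'
    return bool(A or (B and C and not D))

for i in range(16):
    if i in dont_care:
        continue                      # output may be anything on this row
    A, B, C, D = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    assert f_dc(A, B, C, D) == (i in minterms)
```

Here the don't care happens to land on 1 (cell 15 is absorbed into the A term), which is what lets the red group grow to cover the whole A half of the map.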

49.2 Race hazards

49.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a
Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions
circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained
above: we are in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1
to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged
at 1, but because this transition is not covered by a specic term in the equation, a potential for a glitch (a
momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B
are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch
wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD′ would eliminate the potential race hazard, bridging between the green and blue
output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom
to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often
needed to assure race-free dynamic performance.
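The hazard and its consensus-term fix can be shown mechanically. A product term "covers" an input state when all of its literals match, and a glitch is possible when two adjacent 1-states share no covering term. A rough sketch (the term encoding is a convenience invented for this example):

```python
def covers(term, state):
    # term: tuple of (variable, required value); state: dict of variable values
    return all(state[v] == val for v, val in term)

# SOP cover AC' + AB' + BCD' (value 1 = plain literal, 0 = complemented literal)
terms = [(('A', 1), ('C', 0)),
         (('A', 1), ('B', 0)),
         (('B', 1), ('C', 1), ('D', 0))]

blue  = {'A': 1, 'B': 1, 'C': 1, 'D': 0}   # cell 14
green = {'A': 1, 'B': 0, 'C': 1, 'D': 0}   # cell 10, adjacent: only B differs

# Both states output 1, but no single term covers both: a potential glitch.
assert any(covers(t, blue) for t in terms)
assert any(covers(t, green) for t in terms)
assert not any(covers(t, blue) and covers(t, green) for t in terms)

# The consensus term AD' covers both states, bridging the transition.
consensus = (('A', 1), ('D', 0))
assert covers(consensus, blue) and covers(consensus, green)
```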
Similarly, an additional term of A′D must be added to the inverse to eliminate another potential race hazard.
Applying De Morgan's laws creates another product of sums expression for f, but with a new factor of (A + D′).

49.2.2 2-variable map examples


The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each is the minterms as a function
of ∑m() and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression
that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical
interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These

            AB
           00   01   11   10
   CD 00    0    0    1    1
      01    0    0    1    1
      11    0    0    0    1
      10    0    1    1    1

f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14)
F = AC′ + AB′ + BCD′
F = (A + B)(A + C)(B′ + C′ + D′)

Race hazards are present in this diagram.

expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to
be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge
of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and
horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right
are adjacent, and the top and bottom are adjacent. K-maps for 4 variables must be depicted as a donut or torus shape.
The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables
and more.

            AB
           00   01   11   10
   CD 00    0    0    1    1
      01    0    0    1    1
      11    0    0    0    1
      10    0    1    1    1

f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14)
F = AC′ + AB′ + BCD′ + AD′
F = (A + B)(A + C)(B′ + C′ + D′)(A + D′)

Above diagram with consensus terms added to avoid race hazards.

Cells in these 2 × 2 maps are numbered 1 = A′B′, 2 = AB′, 3 = A′B, 4 = AB. Each entry below lists the minterms,
the minimum equation K, and its inverse K′:

∑m();          K = 0;              K′ = 1
∑m(1);         K = A′B′;           K′ = A + B
∑m(2);         K = AB′;            K′ = A′ + B
∑m(3);         K = A′B;            K′ = A + B′
∑m(4);         K = AB;             K′ = A′ + B′
∑m(1,2);       K = B′;             K′ = B
∑m(1,3);       K = A′;             K′ = A
∑m(1,4);       K = A′B′ + AB;      K′ = AB′ + A′B
∑m(2,3);       K = AB′ + A′B;      K′ = A′B′ + AB
∑m(2,4);       K = A;              K′ = A′
∑m(3,4);       K = B;              K′ = B′
∑m(1,2,3);     K = A′ + B′;        K′ = AB
∑m(1,2,4);     K = A + B′;         K′ = A′B
∑m(1,3,4);     K = A′ + B;         K′ = AB′
∑m(2,3,4);     K = A + B;          K′ = A′B′
∑m(1,2,3,4);   K = 1;              K′ = 0
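With the cell numbering used in this listing (1 = A′B′, 2 = AB′, 3 = A′B, 4 = AB), a few of the rows can be spot-checked in a short sketch:

```python
def cell(A, B):
    # cell number used in the listing above: 1 = A'B', 2 = AB', 3 = A'B, 4 = AB
    return 1 + A + 2 * B

for A in (0, 1):
    for B in (0, 1):
        assert (cell(A, B) in {1, 2}) == (B == 0)          # m(1,2): K = B'
        assert (cell(A, B) in {1, 4}) == (A == B)          # m(1,4): K = A'B' + AB
        assert (cell(A, B) in {2, 3, 4}) == bool(A or B)   # m(2,3,4): K = A + B
```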

49.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]


Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[9][1][10][11]
Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]
Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]
Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]
Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

49.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
QuineMcCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Reed-Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

49.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593–599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual
Conference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburg). New York, USA: ACM: 127–133.
doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266–270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 48–49. ISBN
0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The first section
says that they are labeled with a code that changes only one bit between entries and the second section says that such a code
is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps - Rules of Simplification. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Wright-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Geoffrey Cumberlege Oxford University Press (London). pp. preface, 50–67. Retrieved
2017-04-16. […] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. […] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. […] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 75–83. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonín (1956). Graficko-mechanické pomůcky užívané při analyse a synthese kontaktových obvodů [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracování informací [Symposium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 9–21.

[13] Svoboda, Antonín (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische
Fachberichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122–123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonín; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF)
(retyped electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Händler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Händler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium über Schaltkreis- und Schaltwerk-Theorie - Vortragsauszüge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut für Angewandte Mathematik, Universität Saarbrücken, Rheinisch-
Westfälisches Institut für Instrumentelle Mathematik: Springer Basel AG / Birkhäuser Verlag Basel. pp. 169–198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Händler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 1034–1035, 1036, 1038.
LCCN 67-21079. Title No. 1036. […] Übersichtlich ist die Darstellung nach Händler, die sämtliche Punkte, numeriert
nach dem Gray-Code […], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. […] [Händler's
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[19] Hotz, Günter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter & Co. p. 117. ISBN 3-11-00-2050-5. "[…] Der Kreisgraph von Händler ist für das Auffinden von Primimplikanten gut brauchbar. Er hat den Nachteil, daß er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die Verwendung von Schablonen verringern. […]" [The circle graph by Händler is well suited to find prime implicants. A disadvantage is that it is difficult to draw. This can be remedied using stencils.]

[20] "Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Händler.)

[21] "Informatik Sammlung Erlangen (ISER) - Impressum" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by Händler.)

[22] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of switching algebra]. In Broy, Manfred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 43–72. ISBN 9783642766770. "Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Händler in seiner Dissertation […] mit einem Kreisdiagramm. […]" (NB. Collection of papers at a colloquium held at the Bayerische Akademie der Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg: Springer-Verlag. pp. 54–56, 71, 112–113, 138–139. ISBN 978-3-540-52974-3. "[…] handelt es sich um ein Händler-Diagramm […], mit den Würfelecken als Ecken eines 2^m-Gons. […] Abb. […] zeigt auch Gegenstücke für andere Dimensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden; durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45°-Linien und 135°-Linien solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Händler-Diagramme wird angeführt, daß sie viel Platz beanspruchen. […]"

[24] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Kürzungsverfahren und Graphenmethoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421–425.

[25] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 9–12.

[26] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96–102.

[27] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (5): 151–157.

[28] Kortum, Herbert (1967). "Über zweckmäßige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Aufgabenstellungen". messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208–211.

[29] Tafel, Hans Jörg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen, Germany. Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Munich, Germany: Carl Hanser Verlag. pp. 98–105, 107–113. ISBN 3-446-10569-7.

49.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Company. pp. 70–85. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. Karnaugh Maps. Switching Theory: Insight Through Predicate Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 57–76. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons. pp. 36–49. LCCN 68-21185. "A refinement of the Venn diagram in that circles are replaced by squares and arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s and 0s to the squares and their labels and deduced the numbering scheme in common use."

Maxfield, Clive "Max" (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.

49.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.

K-Map Tutorial for 2,3,4 and 5 variables


Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka and George E. Antoniou


Chapter 50

Laws of Form

Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary
between mathematics and philosophy. LoF describes three distinct logical systems:

The primary arithmetic (described in Chapter 4 of LoF), whose models include Boolean arithmetic;
The "primary algebra" (Chapter 6 of LoF), whose models include the two-element Boolean algebra (hereinafter
abbreviated 2), Boolean logic, and the classical propositional calculus;
Equations of the second degree (Chapter 11), whose interpretations include finite automata and Alonzo
Church's Restricted Recursive Arithmetic (RRA).

Boundary algebra is Dr Philip Meguire's (2011) term[1] for the union of the primary algebra (hereinafter abbreviated
pa) and the primary arithmetic. Laws of Form sometimes loosely refers to the pa as well as to LoF.

50.1 The book


LoF emerged from work in electronic engineering its author did around 1960, and from subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared in several editions, the most recent being a 1997 German translation, and has never gone out of print.
The mathematics fills only about 55 pages and is rather elementary. But LoF's mystical and declamatory prose, and its love of paradox, make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead.
The entire book is written in an operational way, giving instructions to the reader instead of telling him what is. In accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that makes a statement that something is, is the statement which says that no such statements are used in this book.[2] Except for this one sentence, the book can be seen as an example of E-Prime.

50.2 Reception
Ostensibly a work of formal mathematics and philosophy, LoF became something of a cult classic, praised in the Whole Earth Catalog. Those who agree point to LoF as embodying an enigmatic "mathematics of consciousness", its algebraic symbolism capturing an (perhaps even "the") implicit root of cognition: the ability to distinguish. LoF argues that primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the philosophy of language and mind.
Banaschewski (1977)[3] argues that the pa is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the pa. Yet the notation of the pa:

Fully exploits the duality characterizing not just Boolean algebras but all lattices;


Highlights how syntactically distinct statements in logic and 2 can have identical semantics;
Dramatically simplifies Boolean algebra calculations, and proofs in sentential and syllogistic logic.

Moreover, the syntax of the pa can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see Related Work below).
LoF has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways.
LoF claimed that certain well-known mathematical conjectures of very long standing, such as the Four Color Theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the pa. Spencer-Brown eventually circulated a purported proof of the Four Color Theorem, but it met with skepticism.[4]

50.3 The form (Chapter 1)


The symbol:

also called the mark or cross, is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition, i.e., the dualistic Mark indicates the capability of differentiating a "this" from "everything else but this".
In LoF, a Cross denotes the drawing of a distinction, and can be thought of as signifying the following, all at once:

The act of drawing a boundary around something, thus separating it from everything else;
That which becomes distinct from everything by drawing the boundary;
Crossing from one side of the boundary to the other.

All three ways imply an action on the part of the cognitive entity (e.g., person) making the distinction. As LoF puts
it:

The first command:


Draw a distinction
can well be expressed in such ways as:
Let there be a distinction,
Find a distinction,
See a distinction,
Describe a distinction,
Dene a distinction,
Or:
Let a distinction be drawn. (LoF, Notes to chapter 2)

The counterpoint to the Marked state is the Unmarked state, which is simply nothing, the void, or the un-expressable infinite represented by a blank space. It is simply the absence of a Cross. No distinction has been made and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form.
The Cross can be seen as denoting the distinction between two states, one considered as a symbol and another not so considered. From this fact arises a curious resonance with some theories of consciousness and language. Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. LoF (excluding back matter) closes with the words:
"...the first distinction, the Mark and the observer are not only interchangeable, but, in the form, identical."
C. S. Peirce came to a related insight in the 1890s; see Related Work.

50.4 The primary arithmetic (Chapter 4)


The syntax of the primary arithmetic (PA) goes as follows. There are just two atomic expressions:

The empty Cross;


All or part of the blank page (the void).

There are two inductive rules:

A Cross may be written over any expression;


Any two expressions may be concatenated.

The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in LoF: "Distinction is perfect continence".
Let the "unmarked state" be a synonym for the void. Let an empty Cross denote the "marked state". To cross is to move from one value, the unmarked or marked state, to the other. We can now state the arithmetical axioms A1 and A2, which ground the primary arithmetic (and hence all of the Laws of Form):
A1. The law of Calling. Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally, writing a Cross as a pair of parentheses: ()() = ().

A2. The law of Crossing. After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally: (()) = .

In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left of '='. This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be simplified to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's simplification.
The two fundamental metatheorems of the primary arithmetic state that:

Every finite expression has a unique simplification. (T3 in LoF);

Starting from an initial marked or unmarked state, "complicating" an expression by a finite number of repeated applications of A1 and A2 cannot yield an expression whose simplification differs from the initial state. (T4 in LoF).

Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes:
those that simplify to the Cross, and those that simplify to the void.
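The simplification procedure can be run mechanically. In the sketch below (an illustrative encoding, not notation from LoF itself), each Cross is written as a pair of parentheses, so A1 and A2 become string rewrite rules:

```python
def simplify(expr: str) -> str:
    """Simplify a primary-arithmetic expression, each Cross written as a
    pair of parentheses.  A1 (calling): "()()" -> "()".
    A2 (crossing): "(())" -> "" (the void)."""
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:      # neither rule applies any more
            return expr
        expr = reduced

print(simplify("((())())"))   # the unmarked state: prints an empty line
print(simplify("()((()))"))   # the marked state: prints "()"
```

Because each application of A1 or A2 shortens the string, the loop always terminates, and it halts only at "()" or the empty string, which is T3 restated for this encoding.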
A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of diagramming processes, including flowcharting. A1 corresponds to a parallel connection and A2 to a series connection, with the understanding that making a distinction corresponds to changing how two points in a circuit are connected, and not simply to adding wiring.
The primary arithmetic is analogous to the following formal languages from mathematics and computer science:

A Dyck language of order 1 with a null alphabet;


The simplest context-free language in the Chomsky hierarchy;
A rewrite system that is strongly normalizing and confluent.

The phrase "calculus of indications" in LoF is a synonym for "primary arithmetic".



50.4.1 The notion of canon


A concept peculiar to LoF is that of canon. While LoF does not dene canon, the following two excerpts from the
Notes to chpt. 2 are apt:

The more important structures of command are sometimes called canons. They are the ways in
which the guiding injunctions appear to group themselves in constellations, and are thus by no means
independent of each other. A canon bears the distinction of being outside (i.e., describing) the system
under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of
central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to
construct or create.

"...the primary form of mathematical communication is not description but injunction... Music is a similar art form, the composer does not even attempt to describe the set of sounds he has in mind, much less the set of feelings occasioned through them, but writes down a set of commands which, if they are obeyed by the performer, can result in a reproduction, to the listener, of the composer's original experience."

These excerpts relate to the distinction in metalogic between the object language, the formal language of the logical system under discussion, and the metalanguage, a language (often a natural language) distinct from the object language, employed to exposit and discuss the object language. The first quote seems to assert that the canons are part of the metalanguage. The second quote seems to assert that statements in the object language are essentially commands addressed to the reader by the author. Neither assertion holds in standard metalogic.

50.5 The primary algebra (Chapter 6)

50.5.1 Syntax
Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a pa formula. Letters so employed in mathematics and logic are called variables. A pa variable indicates a location where one can write the primitive value () or its complement (()). Multiple instances of the same variable denote multiple locations of the same primitive value.

50.5.2 Rules governing logical equivalence


The sign '=' may link two logically equivalent expressions; the result is an equation. By logically equivalent is meant that the two expressions have the same simplification. Logical equivalence is an equivalence relation over the set of pa formulas, governed by the rules R1 and R2. Let C and D be formulae each containing at least one instance of the subformula A:

R1, Substitution of equals. Replace one or more instances of A in C by B, resulting in E. If A=B, then C=E.
R2, Uniform replacement. Replace all instances of A in C and D with B. C becomes E and D becomes F. If
C=D, then E=F. Note that A=B is not required.

R2 is employed very frequently in pa demonstrations (see below), almost always silently. These rules are routinely
invoked in logic and most of mathematics, nearly always unconsciously.
The pa consists of equations, i.e., pairs of formulae linked by an inx '='. R1 and R2 enable transforming one
equation into another. Hence the pa is an equational formal system, like the many algebraic structures, including
Boolean algebra, that are varieties. Equational logic was common before Principia Mathematica (e.g., Peirce, Johnson 1892), and has present-day advocates (Gries and Schneider 1993).
Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. To denote that the pa formula A is a tautology, simply write "A = ()". If one replaces '=' in R1 and R2 with the biconditional, the

resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus
conventional logic is ponential. The equational-ponential dichotomy distills much of what distinguishes mathematical
logic from the rest of mathematics.

50.5.3 Initials
An initial is a pa equation verifiable by a decision procedure and as such is not an axiom. LoF lays down the initials:

J1 : ((A)A) = .

The absence of anything to the right of the "=" above, is deliberate.

J2 : ((A)(B))C = ((AC)(BC)).

J2 is the familiar distributive law of sentential logic and Boolean algebra.


Another set of initials, friendlier to calculations, is:

J0 : (())A = A.
J1a : (A)A = ()
C2 : A(AB) = A(B).

It is thanks to C2 that the pa is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is (). By
J0, (()) is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear
the sense in which (()) aliases with the blank page.
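Under the Boolean reading introduced later (concatenation as OR, the Cross as NOT, the empty Cross as True), the initials J0, J1a, and C2 can be spot-checked by brute force. This verifies the interpretation only; it is not a pa demonstration:

```python
from itertools import product

# Cross = NOT, concatenation = OR, () = True, (()) = False.
for A, B in product([False, True], repeat=2):
    assert (False or A) == A                     # J0:  (())A = A
    assert ((not A) or A) == True                # J1a: (A)A = ()
    assert (A or not (A or B)) == (A or not B)   # C2:  A(AB) = A(B)
```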
T13 in LoF generalizes C2 as follows. Any pa (or sentential logic) formula B can be viewed as an ordered tree with
branches. Then:
T13: A subformula A can be copied at will into any depth of B greater than that of A, as long as A and its copy are
in the same branch of B. Also, given multiple instances of A in the same branch of B, all instances but the shallowest
are redundant.
While a proof of T13 would require induction, the intuition underlying it should be clear.
C2 or its equivalent is named:

Generation in LoF;
Exclusion in Johnson (1892);
Pervasion in the work of William Bricken.

Perhaps the first instance of an axiom or rule with the power of C2 was the "Rule of (De)Iteration", combining T13 and AA=A, of C. S. Peirce's existential graphs.
LoF asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly
assumed or demonstrated. (Peirce made a similar assertion about his existential graphs.) Let a period be a temporary
notation to establish grouping. That concatenation commutes and associates may then be demonstrated from the:

Initial AC.D=CD.A and the consequence AA=A (Byrne 1946). This result holds for all lattices, because AA=A
is an easy consequence of the absorption law, which holds for all lattices;
Initials AC.D=AD.C and J0. Since J0 holds only for lattices with a lower bound, this method holds only for
bounded lattices (which include the pa and 2). Commutativity is trivial; just set A=(()). Associativity: AC.D =
CA.D = CD.A = A.CD.

Having demonstrated associativity, the period can be discarded.


The initials in Meguire (2011) are AC.D=CD.A, called B1; B2, J0 above; B3, J1a above; and B4, C2. By design,
these initials are very similar to the axioms for an abelian group, G1-G3 below.

50.5.4 Proof theory

The pa contains three kinds of proved assertions:

Consequence is a pa equation verified by a demonstration. A demonstration consists of a sequence of steps, each step justified by an initial or a previously demonstrated consequence.

Theorem is a statement in the metalanguage verified by a proof, i.e., an argument, formulated in the metalanguage, that is accepted by trained mathematicians and logicians.

Initial, defined above. Demonstrations and proofs invoke an initial as if it were an axiom.

The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be.
Let A and B be pa formulas. A demonstration of A=B may proceed in either of two ways:

Modify A in steps until B is obtained, or vice versa;

Simplify both (A)B and (B)A to (); this is known as a "calculation".

Once A=B has been demonstrated, A=B can be invoked to justify steps in subsequent demonstrations. pa demonstra-
tions and calculations often require no more than J1a, J2, C2, and the consequences ()A=() (C3 in LoF), ((A))=A
(C1), and AA=A (C5).
The consequence (((A)B)C) = (AC)((B)C), C7 in LoF, enables an algorithm, sketched in LoF's proof of T14, that transforms an arbitrary pa formula to an equivalent formula whose depth does not exceed two. The result is a normal form, the pa analog of the conjunctive normal form. LoF (T14-15) proves the pa analog of the well-known Boolean algebra theorem that every formula has a normal form.
Let A be a subformula of some formula B. When paired with C3, J1a can be viewed as the closure condition for
calculations: B is a tautology if and only if A and (A) both appear in depth 0 of B. A related condition appears in
some versions of natural deduction. A demonstration by calculation is often little more than:

Invoking T13 repeatedly to eliminate redundant subformulae;

Erasing any subformulae having the form ((A)A).

The last step of a calculation always invokes J1a.


LoF includes elegant new proofs of the following standard metatheory:

Completeness: all pa consequences are demonstrable from the initials (T17).

Independence: J1 cannot be demonstrated from J2 and vice versa (T18).

That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2.

50.5.5 Interpretations

If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the pa interprets 2
(or sentential logic). LoF shows how the pa can interpret the syllogism. Each of these interpretations is discussed
in a subsection below. Extending the pa so that it could interpret standard first-order logic has yet to be done, but
Peirce's beta existential graphs suggest that this extension is feasible.

Two-element Boolean algebra 2

The pa is an elegant minimalist notation for the two-element Boolean algebra 2. Let:

One of Boolean meet (×) or join (+) interpret concatenation;

The complement of A interpret (A);

0 (1) interpret the empty Mark if meet (join) interprets concatenation.

If meet (join) interprets AC, then join (meet) interprets ((A)(C)). Hence the pa and 2 are isomorphic but for one detail: pa complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1=1+0=0+1=1=~0, and 0+0=0=~1.
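That axiomatization is easy to confirm mechanically; the helper names join and comp below are illustrative choices, with + read as Boolean join and ~ as complement on the carrier {0, 1}:

```python
# Join (+) and complement (~) on the two-element carrier {0, 1}:
def join(x: int, y: int) -> int:
    return x | y

def comp(x: int) -> int:
    return 1 - x

# The axiomatization suggested by the primary arithmetic:
assert join(1, 1) == join(1, 0) == join(0, 1) == 1 == comp(0)
assert join(0, 0) == 0 == comp(1)
```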

The set B = {(), (())} is the Boolean domain or carrier. In the language of universal algebra, the pa is the algebraic structure ⟨B, −−, (−), ()⟩ of type ⟨2, 1, 0⟩. The expressive adequacy of the Sheffer stroke points to the pa also being a ⟨B, (−−), ()⟩ algebra of type ⟨2, 0⟩. In both cases, the identities are J1a, J0, C2, and ACD=CDA. Since the pa and 2 are isomorphic, 2 can be seen as a ⟨B, +, ¬, 1⟩ algebra of type ⟨2, 1, 0⟩. This description of 2 is simpler than the conventional one, namely a ⟨B, +, ×, ¬, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩.

Sentential logic

Let the blank page denote True or False, and let a Cross be read as Not. Then the primary arithmetic has the following sentential reading:

(blank) = False

() = True = not False

(()) = Not True = False

The pa interprets sentential logic as follows. A letter represents any given sentential expression. Thus:

(A) interprets Not A

AB interprets A Or B

(A)B interprets Not A Or B, or If A Then B

((A)(B)) interprets Not (Not A Or Not B), or Not (If A Then Not B), or A And B

(((A)B)(A(B))) and ((A)(B))(AB) both interpret A if and only if B, or A is equivalent to B.


Thus any expression in sentential logic has a pa translation. Equivalently, the pa interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this pa translation reduces to a PA expression, which can be simplified. Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some pa formula containing N variables, this decision procedure requires simplifying 2^N PA formulae. For a less tedious decision procedure more in the spirit of Quine's "truth value analysis", see Meguire (2003).
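The decision procedure just described can be sketched directly: substitute "()" or the empty string for each variable, then simplify with A1 and A2 as string rewrites. The encoding (parentheses for Crosses, single letters for variables) is an illustrative choice, not LoF notation:

```python
from itertools import product

def simplify(e: str) -> str:
    """Reduce a variable-free expression by A1 ("()()" -> "()") and
    A2 ("(())" -> "") until neither rule applies."""
    while True:
        r = e.replace("(())", "").replace("()()", "()")
        if r == e:
            return e
        e = r

def is_tautology(formula: str) -> bool:
    """Try all 2^N assignments of the marked "()" and unmarked ""
    states; tautology iff every instance simplifies to the marked state."""
    vs = sorted({c for c in formula if c.isalpha()})
    for values in product(["()", ""], repeat=len(vs)):
        inst = formula
        for v, val in zip(vs, values):
            inst = inst.replace(v, val)
        if simplify(inst) != "()":
            return False
    return True

print(is_tautology("(A)A"))      # True  -- J1a: "not A or A"
print(is_tautology("((A)(B))"))  # False -- "A and B" is satisfiable, not tautological
```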
Schwartz (1981) proved that the pa is equivalent (syntactically, semantically, and proof-theoretically) with the classical propositional calculus. Likewise, it can be shown that the pa is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses.
Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is required is that the interpretation of concatenation change from OR to AND. IF A THEN B now translates as (A(B)) instead of (A)B. More generally, the pa is "self-dual", meaning that any pa formula has two sentential or Boolean readings, each the dual of the other. Another consequence of self-duality is the irrelevance of De Morgan's laws; those laws are built into the syntax of the pa from the outset.
The true nature of the distinction between the pa on the one hand, and 2 and sentential logic on the other, now emerges. In the latter formalisms, complementation/negation operating on nothing is not well-formed. But an empty Cross is a well-formed pa expression, denoting the Marked state, a primitive value. Hence a nonempty Cross is an operator, while an empty Cross is an operand because it denotes a primitive value. Thus the pa reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action, the making of a distinction.

Syllogisms

Appendix 2 of LoF shows how to translate traditional syllogisms and sorites into the pa. A valid syllogism is simply one whose pa translation simplifies to an empty Cross. Let A* denote a literal, i.e., either A or (A), indifferently. Then all syllogisms that do not require that one or more terms be assumed nonempty are one of 24 possible permutations of a generalization of Barbara whose pa equivalent is (A*B)((B)C*)A*C*. These 24 possible permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This pa translation of syllogistic logic also suggests that the pa can interpret monadic and term logic, and that the pa has affinities to the Boolean term schemata of Quine (1982: Part II).
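As a spot check, one instantiation of that schema (taking A* = (A) and C* = C, my reading of the schema rather than a worked example from LoF) is ((A)B)((B)C)(A)C; under the OR/NOT reading this is Barbara itself, and a truth table confirms it is tautological:

```python
from itertools import product

# ((A)B)((B)C)(A)C read with concatenation as OR and the Cross as NOT:
#   not(not A or B)  or  not(not B or C)  or  not A  or  C,
# i.e. "((All A are B) and (All B are C)) implies (All A are C)".
for a, b, c in product([False, True], repeat=3):
    assert (not (not a or b)) or (not (not b or c)) or (not a) or c
```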

50.5.6 An example of calculation


The following calculation of Leibniz's nontrivial Praeclarum Theorema exemplifies the demonstrative power of the pa. Let C1 be ((A))=A, and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit. Because the only commutative connective appearing in the Theorema is conjunction, it is simpler to translate the Theorema into the pa using the dual interpretation. The objective then becomes one of simplifying that translation to (()).

[(PR)(QS)][(PQ)(RS)]. Praeclarum Theorema.

((P(R))(Q(S))((PQ(RS)))). pa translation.

= ((P(R))P(Q(S))Q(RS)). OI; C1.

= (((R))((S))PQ(RS)). Invoke C2 2x to eliminate the bold letters in the previous expression; OI.

= (RSPQ(RS)). C1,2x.

= ((RSPQ)RSPQ). C2; OI.

= (()). J1.

Remarks:

C1 (C2) is repeatedly invoked in a fairly mechanical way to eliminate nested parentheses (variable instances).
This is the essence of the calculation method;

A single invocation of J1 (or, in other contexts, J1a) terminates the calculation. This too is typical;

Experienced users of the pa are free to invoke OI silently. OI aside, the demonstration requires a mere 7 steps.
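The same result can be confirmed independently of the pa by brute force over the sixteen assignments of the Theorema's sentential reading (a plain truth-table check, not the calculation above):

```python
from itertools import product

def implies(x: bool, y: bool) -> bool:
    return (not x) or y

# [(P -> R) and (Q -> S)] -> [(P and Q) -> (R and S)]
for p, q, r, s in product([False, True], repeat=4):
    assert implies(implies(p, r) and implies(q, s),
                   implies(p and q, r and s))
```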

50.5.7 A technical aside

Given some standard notions from mathematical logic and some suggestions in Bostock (1997: 83, fn 11, 12), {} and () may be interpreted as the classical bivalent truth values. Let the extension of an n-place atomic formula be the set of ordered n-tuples of individuals that satisfy it (i.e., for which it comes out true). Let a sentential variable be a 0-place atomic formula, whose extension is a classical truth value, by definition. An ordered 2-tuple is an ordered pair, whose standard (Kuratowski) set-theoretic definition is <a,b> = {{a},{a,b}}, where a,b are individuals. Ordered n-tuples for any n>2 may be obtained from ordered pairs by a well-known recursive construction. Dana Scott has remarked that the extension of a sentential variable can also be seen as the empty ordered pair (ordered 0-tuple), {{},{}} = (), because {a,a}={a} for all a. Hence () has the interpretation True. Reading {} as False follows naturally.
It should be noted that choosing () as True is arbitrary. All of the Laws of Form algebra and calculus work perfectly as long as {} ≠ ().
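These set-theoretic details are easy to experiment with using Python frozensets (an illustrative sketch; the helper name kpair is mine):

```python
def kpair(a, b):
    """Kuratowski ordered pair: <a,b> = {{a},{a,b}}."""
    return frozenset([frozenset([a]), frozenset([a, b])])

empty = frozenset()
# Because {a,a} = {a}, the diagonal pair <a,a> collapses to {{a}}:
assert kpair(empty, empty) == frozenset([frozenset([empty])])
# In particular the two proposed truth values remain distinct:
assert kpair(empty, empty) != empty
```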

50.5.8 Relation to magmas

The pa embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation,
one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas
were called groupoids until the latter term was appropriated by category theory.) To see this, note that the pa is a
commutative:

Semigroup because pa juxtaposition commutes and associates;

Monoid with identity element (()), by virtue of J0.

Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let (a) denote the inverse of a. Let () denote the group identity element. Then groups and the pa have the same signatures, namely they are both ⟨−−, (−), ()⟩ algebras of type ⟨2, 1, 0⟩. Hence the pa is a boundary algebra. The axioms for an abelian group, in boundary notation, are:

G1. abc = acb (assuming association from the left);

G2. ()a = a;

G3. (a)a = ().

From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and J1a are identical. G2 and J0 would be identical if (())=() replaced A2. This is the defining arithmetical identity of group theory, in boundary notation.
The pa differs from an abelian group in two ways:

From A2, it follows that (()) ≠ (). If the pa were a group, (())=() would hold, and one of (a)a=(()) or a()=a would have to be a pa consequence. Note that () and (()) are mutual pa complements, as group theory requires, so that ((())) = () is true of both group theory and the pa;

C2 most clearly demarcates the pa from other magmas, because C2 enables demonstrating the absorption law
that defines lattices, and the distributive law central to Boolean algebra.

Both A2 and C2 follow from B 's being an ordered set.
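The contrast can be checked concretely in the two-element Boolean algebra, reading juxtaposition as disjunction, (()) as the unmarked state False, and () as the marked state True. The following is a sketch in Python, not part of LoF itself:

```python
from itertools import product

# Model the primary algebra on the two-element Boolean algebra:
# juxtaposition -> OR, () -> True (marked), (()) -> False (unmarked).
elems = [False, True]
op = lambda a, b: a or b           # juxtaposition (concatenation)
identity = False                    # the unmarked state, (())

# Commutative monoid: associativity, commutativity, identity element.
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(elems, repeat=3))
assert all(op(a, b) == op(b, a) for a, b in product(elems, repeat=2))
assert all(op(identity, a) == a for a in elems)

# Not a group: True has no inverse, since (True or x) is never False.
has_inverse = {a: any(op(a, x) == identity for x in elems) for a in elems}
print(has_inverse)   # {False: True, True: False}
```

The failed inverse for the marked state is exactly the first of the two differences listed above: (()) ≠ (), so no element can cancel ().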


214 CHAPTER 50. LAWS OF FORM

50.6 Equations of the second degree (Chapter 11)


Chapter 11 of LoF introduces equations of the second degree, composed of recursive formulae that can be seen
as having infinite depth. Some recursive formulae simplify to the marked or unmarked state. Others oscillate
indefinitely between the two states, depending on whether a given depth is even or odd. Specifically, certain recursive
formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a
formula is deemed to have an imaginary truth value. Thus the flow of time may be introduced into the pa.
Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive
Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney (1986)
presents a general method for translating equations of the second degree into Church's RRA, illustrating his method
using the formulae E1, E2, and E4 in chapter 11 of LoF. This translation into RRA sheds light on the names Spencer-
Brown gave to E1 and E4, namely memory and counter. RRA thus formalizes and clarifies LoF 's notion of an
imaginary truth value.

50.7 Related work


Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic.
His notation was isomorphic to that of LoF: concatenation read as conjunction, and "non-(X)" read as the complement
of X. Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full
appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and
reviewed in Lenzen (2004).
Charles Sanders Peirce (1839–1914) anticipated the pa in three veins of work:

1. Two papers he wrote in 1886 proposed a logical algebra employing but one symbol, the streamer, nearly identical
to the Cross of LoF. The semantics of the streamer are identical to those of the Cross, except that Peirce
never wrote a streamer with nothing under it. An excerpt from one of these papers was published in 1976,[5]
but they were not published in full until 1993.[6]

2. In a 1902 encyclopedia article,[7] Peirce notated Boolean algebra and sentential logic in the manner of this
entry, except that he employed two styles of brackets, toggling between '(', ')' and '[', ']' with each increment in
formula depth.

3. The syntax of his alpha existential graphs is merely concatenation, read as conjunction, and enclosure by ovals,
read as negation.[8] If pa concatenation is read as conjunction, then these graphs are isomorphic to the pa
(Kauffman 2001).

Ironically, LoF cites vol. 4 of Peirce's Collected Papers, the source for the formalisms in (2) and (3) above. (1)-(3)
were virtually unknown at the time when (1960s) and in the place where (UK) LoF was written. Peirce's semiotics,
about which LoF is silent, may yet shed light on the philosophical aspects of LoF.
Kauffman (2001) discusses another notation similar to that of LoF, that of a 1917 article by Jean Nicod, a
disciple of Bertrand Russell.
The above formalisms are, like the pa, all instances of boundary mathematics, i.e., mathematics whose syntax is
limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a boundary notation.
Boundary notation is free of infix, prefix, or postfix operator symbols. The well-known curly braces ('{', '}') of
set theory can be seen as a boundary notation.
The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920
paper (which LoF cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how
to prove axiom independence using models.
Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean
structure. Craig was an orthodox logician and an authority on algebraic logic.
Second-generation cognitive science emerged in the 1970s, after LoF was written. On cognitive science and its rele-
vance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under Image schema examples:
container) and Lakoff and Núñez (2001). Neither book cites LoF.

The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss LoF in
their writings, which identify distinction as the fundamental cognitive act. The Berkeley psychologist and cognitive
scientist Eleanor Rosch has written extensively on the closely related notion of categorization.
Other formal systems with possible affinities to the primary algebra include:

Mereology which typically has a lattice structure very similar to that of Boolean algebra. For a few authors,
mereology is simply a model of Boolean algebra and hence of the primary algebra as well.

Mereotopology, which is inherently richer than Boolean algebra;

The system of Whitehead (1934), whose fundamental primitive is indication.

The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. Other
minimalist formalisms having the power of set theory include:

The lambda calculus;

Combinatory logic with two (S and K) or even one (X) primitive combinators;

Mathematical logic done with merely three primitive notions: one connective, NAND (whose pa translation
is (AB) or, dually, (A)(B)), universal quantification, and one binary atomic formula, denoting set membership.
This is the system of Quine (1951).

The beta existential graphs, with a single binary predicate denoting set membership. This has yet to be explored.
The alpha graphs mentioned above are a special case of the beta graphs.

50.8 See also


Boolean algebra (Simple English Wikipedia)

Boolean algebra (introduction)

Boolean algebra (logic)

Boolean algebra (structure)

Boolean algebras canonically defined

Boolean logic

Entitative graph

Existential graph

List of Boolean algebra topics

Propositional calculus

Two-element Boolean algebra

50.9 Notes
[1] Meguire, P. (2011) Boundary Algebra: A Simpler Approach to Basic Logic and Boolean Algebra. Saarbrücken: VDM
Publishing Ltd. 168pp

[2] Felix Lau: Die Form der Paradoxie, 2005 Carl-Auer Verlag, ISBN 9783896703521

[3] B. Banaschewski (Jul 1977). "On G. Spencer Brown's Laws of Form". Notre Dame Journal of Formal Logic. 18 (3):
507–509.

[4] For a sympathetic evaluation, see Kauffman (2001).



[5] "Qualitative Logic", MS 736 (c. 1886) in Eisele, Carolyn, ed., 1976. The New Elements of Mathematics by Charles S.
Peirce. Vol. 4, Mathematical Philosophy. (The Hague) Mouton: 101–15.

[6] "Qualitative Logic", MS 582 (1886) in Kloesel, Christian et al., eds., 1993. Writings of Charles S. Peirce: A Chronological
Edition, Vol. 5, 1884–1886. Indiana University Press: 323–71. "The Logic of Relatives: Qualitative and Quantitative",
MS 584 (1886) in Kloesel, Christian et al., eds., 1993. Writings of Charles S. Peirce: A Chronological Edition, Vol. 5,
1884–1886. Indiana University Press: 372–78.

[7] Reprinted in Peirce, C.S. (1933) Collected Papers of Charles Sanders Peirce, Vol. 4, Charles Hartshorne and Paul Weiss,
eds. Harvard University Press. Paragraphs 378-383

[8] The existential graphs are described at length in Peirce, C.S. (1933) Collected Papers, Vol. 4, Charles Hartshorne and Paul
Weiss, eds. Harvard University Press. Paragraphs 347-529.

50.10 References
Editions of Laws of Form:

1969. London: Allen & Unwin, hardcover.


1972. Crown Publishers, hardcover: ISBN 0-517-52776-6
1973. Bantam Books, paperback. ISBN 0-553-07782-1
1979. E.P. Dutton, paperback. ISBN 0-525-47544-3
1994. Portland OR: Cognizer Company, paperback. ISBN 0-9639899-0-1
1997 German translation, titled Gesetze der Form. Lübeck: Bohmeier Verlag. ISBN 3-89094-321-7
2008 Bohmeier Verlag, Leipzig, 5th international edition. ISBN 978-3-89094-580-4

Bostock, David, 1997. Intermediate Logic. Oxford Univ. Press.

Byrne, Lee, 1946, Two Formulations of Boolean Algebra, Bulletin of the American Mathematical Society:
268-71.

Craig, William (1979). "Boolean Logic and the Everyday Physical World". Proceedings and Addresses of the
American Philosophical Association. 52 (6): 751–78. JSTOR 3131383. doi:10.2307/3131383.

David Gries, and Schneider, F B, 1993. A Logical Approach to Discrete Math. Springer-Verlag.

William Ernest Johnson, 1892, The Logical Calculus, Mind 1 (n.s.): 3-30.

Louis H. Kauffman, 2001, "The Mathematics of C.S. Peirce", Cybernetics and Human Knowing 8: 79–110.

------, 2006, "Reformulating the Map Color Theorem."

------, 2006a. "Laws of Form - An Exploration in Mathematics and Foundations." Book draft (hence big).

Lenzen, Wolfgang, 2004, "Leibniz's Logic" in Gabbay, D., and Woods, J., eds., The Rise of Modern Logic:
From Leibniz to Frege (Handbook of the History of Logic Vol. 3). Amsterdam: Elsevier, 1–83.

Lakoff, George, 1987. Women, Fire, and Dangerous Things. University of Chicago Press.

-------- and Rafael E. Núñez, 2001. Where Mathematics Comes From: How the Embodied Mind Brings Mathematics
into Being. Basic Books.

Meguire, P. G. (2003). "Discovering Boundary Algebra: A Simplified Notation for Boolean Algebra and the
Truth Functors". International Journal of General Systems. 32: 25–87. doi:10.1080/0308107031000075690.

--------, 2011. Boundary Algebra: A Simpler Approach to Basic Logic and Boolean Algebra. VDM Publishing
Ltd. ISBN 978-3639367492. The source for much of this entry, including the notation which encloses in
parentheses what LoF places under a cross. Steers clear of the more speculative aspects of LoF.

Willard Quine, 1951. Mathematical Logic, 2nd ed. Harvard University Press.

--------, 1982. Methods of Logic, 4th ed. Harvard University Press.



Rescher, Nicholas (1954). "Leibniz's Interpretation of His Logical Calculi". Journal of Symbolic Logic. 18:
1–13. doi:10.2307/2267644.
Schwartz, Daniel G. (1981). "Isomorphisms of G. Spencer-Brown's Laws of Form and F. Varela's Calculus for
Self-Reference". International Journal of General Systems. 6 (4): 239–55. doi:10.1080/03081078108934802.
Turney, P. D. (1986). "Laws of Form and Finite Automata". International Journal of General Systems. 12 (4):
307–18. doi:10.1080/03081078608934939.
A. N. Whitehead, 1934, Indication, classes, number, validation, Mind 43 (n.s.): 281-97, 543. The corrigenda
on p. 543 are numerous and important, and later reprints of this article do not incorporate them.
Dirk Baecker (ed.) (1993), Kalkül der Form. Suhrkamp; Dirk Baecker (ed.), Probleme der Form. Suhrkamp.

Dirk Baecker (ed.) (1999), Problems of Form, Stanford University Press.


Dirk Baecker (ed.) (2013), A Mathematics of Form, A Sociology of Observers, Cybernetics & Human Knowing,
vol. 20, no. 3-4.

50.11 External links


Laws of Form, archive of website by Richard Shoup.
Spencer-Brown's talks at Esalen, 1973. Self-referential forms are introduced in the section entitled "Degree
of Equations and the Theory of Types".
Louis H. Kauffman, "Box Algebra, Boundary Mathematics, Logic, and Laws of Form."

Kissel, Matthias, "A nonsystematic but easy to understand introduction to Laws of Form."
The Laws of Form Forum, where the primary algebra and related formalisms have been discussed since 2002.

A meeting with G.S.B by Moshe Klein


Chapter 51

List of Boolean algebra topics

This is a list of topics around Boolean algebra and propositional logic.

51.1 Articles with a wide scope and introductions


Algebra of sets

Boolean algebra (structure)

Boolean algebra

Field of sets

Logical connective

Propositional calculus

51.2 Boolean functions and connectives


Ampheck

Boolean algebras canonically defined

Conditioned disjunction

Evasive Boolean function

Exclusive or

Functional completeness

Logical biconditional

Logical conjunction

Logical disjunction

Logical equality

Logical implication

Logical negation

Logical NOR

Lupanov representation


Majority function

Material conditional

Peirce arrow

Sheffer stroke

Sole sufficient operator

Symmetric Boolean function

Symmetric difference

Zhegalkin polynomial

51.3 Examples of Boolean algebras


Boolean domain

Interior algebra

Lindenbaum–Tarski algebra

Two-element Boolean algebra

51.4 Extensions and generalizations


Complete Boolean algebra

Derivative algebra (abstract algebra)

First-order logic

Free Boolean algebra

De Morgan algebra

Heyting algebra

Monadic Boolean algebra

Skew Boolean algebra

51.5 Syntax
Algebraic normal form

Boolean conjunctive query

Canonical form (Boolean algebra)

Conjunctive normal form

Disjunctive normal form

Formal system

51.6 Technical applications


And-inverter graph

Logic gate

Boolean analysis

51.7 Theorems and specic laws


Boolean prime ideal theorem

Compactness theorem

Consensus theorem

De Morgan's laws

Duality (order theory)

Laws of classical logic

Peirce's law

Stone's representation theorem for Boolean algebras

51.8 People
Boole, George

De Morgan, Augustus

Jevons, William Stanley

Peirce, Charles Sanders

Stone, Marshall Harvey

Venn, John

Zhegalkin, Ivan Ivanovich

51.9 Philosophy
Boole's syllogistic

Boolean implicant

Entitative graph

Existential graph

Laws of Form

Logical graph

51.10 Visualization
Truth table

Karnaugh map
Venn diagram

51.11 Unclassified
Boolean function

Boolean-valued function
Boolean-valued model

Boolean satisfiability problem


Indicator function (also called the characteristic function, but that term is used in probability theory for a
different concept)
Espresso heuristic logic minimizer

Logical matrix
Logical value

Stone duality
Stone space

Topological Boolean algebra


Chapter 52

Logic alphabet

The logic alphabet, also called the X-stem Logic Alphabet (XLA), constitutes an iconic set of symbols that systematically
represents the sixteen possible binary truth functions of logic. The logic alphabet was developed by Shea
Zellweger. The major emphasis of his iconic logic alphabet is to provide a more cognitively ergonomic notation for
logic. Zellweger's visually iconic system more readily reveals, to the novice and expert alike, the underlying symmetry
relationships and geometric properties of the sixteen binary connectives within Boolean algebra.

52.1 Truth functions


Truth functions are functions from sequences of truth values to truth values. A unary truth function, for example,
takes a single truth value and maps it onto another truth value. Similarly, a binary truth function maps ordered pairs
of truth values onto truth values, while a ternary truth function maps ordered triples of truth values onto truth values,
and so on.
In the unary case, there are two possible inputs, viz. T and F, and thus four possible unary truth functions: one
mapping T to T and F to F, one mapping T to F and F to F, one mapping T to T and F to T, and finally one mapping
T to F and F to T, this last one corresponding to the familiar operation of logical negation. In the form of a table,
the four unary truth functions may be represented as follows.
In the binary case, there are four possible inputs, viz. (T,T), (T,F), (F,T), and (F,F), thus yielding sixteen possible
binary truth functions. Quite generally, for any number n, there are 2^(2^n) possible n-ary truth functions. The sixteen
possible binary truth functions are listed in the table below.
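The counting argument above can be made concrete with a short script (a sketch; it enumerates truth functions as maps from input tuples to outputs):

```python
from itertools import product

def truth_functions(n):
    """All n-ary truth functions, each as a dict from input tuples to a truth value."""
    inputs = list(product([True, False], repeat=n))          # 2^n possible inputs
    return [dict(zip(inputs, outputs))                       # one function per output pattern
            for outputs in product([True, False], repeat=len(inputs))]

assert len(truth_functions(1)) == 4                 # the four unary truth functions
assert len(truth_functions(2)) == 16                # the sixteen binary connectives
assert len(truth_functions(3)) == 2 ** (2 ** 3)     # 256 ternary truth functions
```

Each n-ary function is determined by its output on each of the 2^n inputs, giving 2^(2^n) functions in all.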

52.2 Content
Zellweger's logic alphabet offers a visually systematic way of representing each of the sixteen binary truth functions.
The idea behind the logic alphabet is to first represent the sixteen binary truth functions in the form of a square
matrix rather than the more familiar tabular format seen in the table above, and then to assign a letter shape to each
of these matrices. Letter shapes are derived from the distribution of Ts in the matrix. When drawing a logic symbol,
one passes through each square with an assigned F value while stopping in each square with an assigned T value. In the
extreme examples, the symbol for tautology is an X (stops in all four squares), while the symbol for contradiction is an
O (passing through all squares without stopping). The square matrix corresponding to each binary truth function, as
well as its corresponding letter shape, are displayed in the table below.

52.3 Significance
The interest of the logic alphabet lies in its aesthetic, symmetric, and geometric qualities. These qualities combine
to allow an individual to more easily, rapidly, and visually manipulate the relationships between entire truth tables. A
logic operation performed on a two-dimensional logic alphabet connective, with its geometric qualities, produces a
symmetry transformation. When a symmetry transformation occurs, each input symbol, without any further thought,


immediately changes into the correct output symbol. For example, by reflecting the symbol for NAND (viz. 'h')
across the vertical axis, across the horizontal axis, and across both axes, we produce the symbols for three further
connectives. Similar symmetry transformations can be obtained by operating upon the other symbols.
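The action of these reflections on the underlying truth tables can be verified directly: reflecting the 2×2 matrix across an axis amounts to negating one (or both) of the inputs. Negating a single input of NAND yields one of the two material conditionals, and negating both yields inclusive disjunction. A sketch (which axis corresponds to which input depends on the matrix orientation):

```python
from itertools import product

pairs = list(product([True, False], repeat=2))
table = lambda f: tuple(f(p, q) for p, q in pairs)   # truth table as a tuple

nand = lambda p, q: not (p and q)

# Reflections of the 2x2 truth matrix negate an input variable.
flip_q  = lambda p, q: nand(p, not q)       # one axis: negate q
flip_p  = lambda p, q: nand(not p, q)       # the other axis: negate p
flip_pq = lambda p, q: nand(not p, not q)   # both axes: negate both

assert table(flip_q)  == table(lambda p, q: (not p) or q)   # p -> q
assert table(flip_p)  == table(lambda p, q: (not q) or p)   # q -> p
assert table(flip_pq) == table(lambda p, q: p or q)         # p OR q
```

So the three reflected symbols stand for the two material conditionals and disjunction, which is the symmetry the letter shapes are designed to make visible.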
In effect, the X-stem Logic Alphabet is derived from three disciplines that have been stacked and combined: (1)
mathematics, (2) logic, and (3) semiotics. This happens because, in keeping with this mathelogical semiotics, the
connectives have been custom designed in the form of geometric letter shapes that serve as iconic replicas of their
corresponding square-framed truth tables. Logic cannot do it alone. Logic is sandwiched between mathematics and
semiotics. Indeed, Zellweger has constructed intriguing structures involving the symbols of the logic alphabet on
the basis of these symmetries. The considerable aesthetic appeal of the logic alphabet has led to exhibitions of
Zellweger's work at the Museum of Jurassic Technology in Los Angeles, among other places.
The value of the logic alphabet lies in its use as a visually simpler pedagogical tool than the traditional system for logic
notation. The logic alphabet eases the introduction to the fundamentals of logic, especially for children, at much earlier
stages of cognitive development. Because the logic notation system in current use today is so deeply embedded in our
computer culture, the logic alphabet's adoption by the field of logic itself, at this juncture, is questionable.
Additionally, systems of natural deduction, for example, generally require introduction and elimination rules for each
connective, meaning that the use of all sixteen binary connectives would result in a highly complex proof system.
Various subsets of the sixteen binary connectives (e.g., {∨, &, ⊃, ~}, {∨, ~}, {&, ~}, {⊃, ~}) are themselves functionally
complete in that they suffice to define the remaining connectives. In fact, both NAND and NOR are sole sufficient
operators, meaning that the remaining connectives can all be defined solely in terms of either of them. Nonetheless,
the logic alphabet's two-dimensional geometric letter shapes along with its group symmetry properties can help ease the
learning curve for children and adult students alike, as they become familiar with the interrelations and operations on
all 16 binary connectives. Giving children and students this advantage is a decided gain.

52.4 See also


Polish notation
Propositional logic

Boolean function
Boolean algebra (logic)

Logic gate

52.5 External links


Page dedicated to Zellweger's logic alphabet

Exhibition in a small museum: Flickr photopage, including a discussion between Tilman Piesk and probably
Shea Zellweger
Chapter 53

Logic optimization

For other uses, see Minimisation.

Logic optimization, a part of logic synthesis in electronics, is the process of finding an equivalent representation of
the specified logic circuit under one or more specified constraints. Generally the circuit is constrained to minimum
chip area meeting a prespecified delay.

53.1 Introduction
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA)
industry was to find the best netlist representation of the given design description. While two-level logic optimization
had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic
minimizer, the rapidly improving chip densities, and the wide adoption of HDLs for circuit description, formalized
the logic optimization domain as it exists today.
Today, logic optimization is divided into various categories:
Based on circuit representation

Two-level logic optimization

Multi-level logic optimization

Based on circuit characteristics

Sequential logic optimization

Combinational logic optimization

Based on type of execution

Graphical optimization methods

Tabular optimization methods

Algebraic optimization methods

While a two-level circuit representation strictly refers to the flattened view of the circuit in terms of SOPs
(sum-of-products), which is more applicable to a PLA implementation of the design, a multi-level representation
is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored forms,
etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or functional (BDDs,
ADDs) representation of the circuit.


53.2 Two-level versus multi-level representations

If we have two functions F1 and F2:

F1 = AB + AC + AD,

F2 = A'B + A'C + A'E.

The above two-level representation takes six product terms and 24 transistors in CMOS.
A functionally equivalent multi-level representation is:

P = B + C.

F1 = AP + AD.

F2 = A'P + A'E.

While the number of levels here is 3, the total number of product terms and literals is reduced because of the sharing of
the term B + C.
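The equivalence of the two representations can be verified exhaustively over all 32 input combinations (a sketch; A' is written as not A):

```python
from itertools import product

def two_level(A, B, C, D, E):
    F1 = (A and B) or (A and C) or (A and D)
    F2 = ((not A) and B) or ((not A) and C) or ((not A) and E)
    return F1, F2

def multi_level(A, B, C, D, E):
    P = B or C                                  # the shared factor
    F1 = (A and P) or (A and D)
    F2 = ((not A) and P) or ((not A) and E)
    return F1, F2

# Both forms compute the same pair of functions on every input.
assert all(two_level(*v) == multi_level(*v)
           for v in product([False, True], repeat=5))
```

The check confirms that factoring out P = B + C changes the structure (and hence the cost) of the circuit without changing its function.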
Similarly, we distinguish between sequential and combinational circuits, whose behavior can be described in terms
of finite-state machine state tables/diagrams or by Boolean functions and relations, respectively.

53.3 Circuit minimization in Boolean algebra

In Boolean algebra, circuit minimization is the problem of obtaining the smallest logic circuit (Boolean formula) that
represents a given Boolean function or truth table. The unbounded circuit minimization problem was long conjectured
to be Σ2P-complete, a result finally proved in 2008,[1] but there are effective heuristics such as Karnaugh maps and
the Quine–McCluskey algorithm that facilitate the process.

53.3.1 Purpose

The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element
takes up physical space in its implementation and costs time and money to produce in itself. Circuit minimization
may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.

53.3.2 Example

While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function.
Note that the Boolean function carried out by the circuit is directly related to the algebraic expression from which the
function is implemented.[2] Consider the circuit used to represent (A ∧ ¬B) ∨ (¬A ∧ B). It is evident that two negations,
two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need
two inverters, two AND gates, and an OR gate.
We can simplify (minimize) the circuit by applying logical identities or using intuition. Since the example states that
A is true when B is false or the other way around, we can conclude that this simply means A ≠ B. In terms of logical
gates, inequality simply means an XOR gate (exclusive or). Therefore, (A ∧ ¬B) ∨ (¬A ∧ B) ⇔ (A ≠ B). Then
the two circuits shown below are equivalent:

[Figure: the original circuit with inputs A and B, built from two inverters, two AND gates, and an OR gate, alongside
the simplified (minimized) circuit, a single XOR gate with the same inputs.]
You can additionally check the correctness of the result using a truth table.
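Such a truth-table check is easy to automate (a sketch):

```python
from itertools import product

original   = lambda a, b: (a and not b) or ((not a) and b)   # (A AND NOT B) OR (NOT A AND B)
simplified = lambda a, b: a != b                             # XOR

# The two circuits agree on every row of the truth table.
assert all(original(a, b) == simplified(a, b)
           for a, b in product([False, True], repeat=2))
```

The exhaustive comparison over all four input rows is exactly what "checking with a truth table" means.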

53.4 Graphical two-level logic minimization methods


Graphical minimization methods for two-level logic include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[3][4]

Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[5][6][7][8]

Veitch chart (1952) by Edward Veitch (1924–2013)[9][4]

Karnaugh map (1953) by Maurice Karnaugh (born 1924)

Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[10][11][12][13]

Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[14][15][16][12][17][18][19][20][21]

Graph method (1965) by Herbert Kortum (1907–1979)[22][23][24][25][26][27]

53.5 See also


Binary decision diagram
Circuit minimization
Espresso heuristic logic minimizer
Karnaugh map
Petrick's method
Prime implicant
Circuit complexity

Function composition
Function decomposition
Gate underutilization

53.6 References
[1] Buchfuhrer, D.; Umans, C. (2011). The complexity of Boolean formula minimization. Journal of Computer and System
Sciences. 77: 142. doi:10.1016/j.jcss.2010.06.011. This is an extended version of the conference paper Buchfuhrer,
D.; Umans, C. (2008). The Complexity of Boolean Formula Minimization. Automata, Languages and Programming.
Lecture Notes in Computer Science. 5125. p. 24. ISBN 978-3-540-70574-1. doi:10.1007/978-3-540-70575-8_3.

[2] M. Mano, C. Kime. Logic and Computer Design Fundamentals (Fourth Edition). Pg 54

[3] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266–270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite a few secondary sources erroneously cite this work as "A logical diagram for n terms" or "On a logical diagram for
n terms".)

[4] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[5] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Wright-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Geoffrey Cumberlege Oxford University Press (London). pp. preface, 50–67. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[6] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers, part I. 72 (9): 593–599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Retrieved 2017-04-16. (NB. Also contains a short review by Samuel H. Caldwell.)

[7] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 75–83. ISBN 0471688053.

[8] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[9] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Con-
ference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburgh). New York, USA: ACM: 127–133.
doi:10.1145/609784.609801.

[10] Svoboda, Antonín (1956). Graficko-mechanické pomůcky užívané při analyse a synthese kontaktových obvodů [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracování informací [Symposium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 9–21.

[11] Svoboda, Antonín (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fach-
berichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[12] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122–123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[13] Svoboda, Antonn; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (re-
typed electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2016-03-15. Retrieved 2017-04-15.

[14] Händler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be "Minimierung" instead of "Minimisierung".)

[15] Händler, Wolfgang (2013) [1961]. "Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie". In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium über Schaltkreis- und Schaltwerk-Theorie - Vortragsauszüge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut für Angewandte Mathematik, Universität Saarbrücken, Rheinisch-
Westfälisches Institut für Instrumentelle Mathematik: Springer Basel AG / Birkhäuser Verlag Basel. pp. 169–198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[16] Berger, Erich R.; Händler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 1034–1035, 1036, 1038.
LCCN 67-21079. Title No. 1036. "[...] Übersichtlich ist die Darstellung nach Händler, die sämtliche Punkte, numeriert
nach dem Gray-Code [...], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [...]" [Händler's
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[17] Hotz, Günter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter
& Co. p. 117. ISBN 3-11-00-2050-5. "[...] Der Kreisgraph von Händler ist für das Auffinden von Primimplikanten
gut brauchbar. Er hat den Nachteil, daß er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die
Verwendung von Schablonen verringern. [...]" [The circle graph by Händler is well suited to find prime implicants. A
disadvantage is that it is difficult to draw. This can be remedied using stencils.]

[18] "Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Händler.)

[19] "Informatik Sammlung Erlangen (ISER) - Impressum" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by Händler.)

[20] Zemanek, Heinz (2013) [1990]. "Geschichte der Schaltalgebra" [History of circuit switching algebra]. In Broy, Manfred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 43–72. ISBN 9783642766770. "Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Händler in seiner Dissertation […] mit einem Kreisdiagramm. […]" [A path of a special kind, which received too little attention at the time, was shown by W. Händler in his dissertation […] with a circle diagram. […]] (NB. Collection of papers at a colloquium held at the Bayerische Akademie der Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[21] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg: Springer-Verlag. pp. 54–56, 71, 112–113, 138–139. ISBN 978-3-540-52974-3. "[…] handelt es sich um ein Händler-Diagramm […], mit den Würfelecken als Ecken eines 2^m-gons. […] Abb. […] zeigt auch Gegenstücke für andere Dimensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden; durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45°-Linien und 135°-Linien solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Händler-Diagramme wird angeführt, daß sie viel Platz beanspruchen. […]" [… this is a Händler diagram …, with the cube vertices as vertices of a 2^m-gon. … The figure also shows counterparts for other dimensions. Horizontal lines connect tuples that differ only in the first component; vertical lines those that differ only in the second component; 45° and 135° lines those that differ only in the third component, and so on. As a disadvantage of the Händler diagrams it is noted that they require a lot of space. …]

[22] Kortum, Herbert (1965). "Minimierung von Kontaktschaltungen durch Kombination von Kürzungsverfahren und Graphenmethoden". messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421–425.

[23] Kortum, Herbert (1966). "Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 9–12.

[24] Kortum, Herbert (1966). "Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96–102.

[25] Kortum, Herbert (1966). "Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (5): 151–157.

[26] Kortum, Herbert (1967). "Über zweckmäßige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Aufgabenstellungen". messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208–211.

[27] Tafel, Hans Jörg (1971). "4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen". Written at RWTH, Aachen, Germany. Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Munich, Germany: Carl Hanser Verlag. pp. 98–105, 107–113. ISBN 3-446-10569-7.

53.7 Further reading


De Micheli, Giovanni (1994). Synthesis and Optimization of Digital Circuits. McGraw-Hill. ISBN 0-07-016333-2. (NB. Chapters 7–9 cover combinational two-level, combinational multi-level, and sequential circuit optimization, respectively.)

Hachtel, Gary D.; Somenzi, Fabio (2006) [1996]. Logic Synthesis and Verication Algorithms. Springer Science
& Business Media. ISBN 978-0-387-31005-3.

Kohavi, Zvi; Jha, Niraj K. (2009). Switching and Finite Automata Theory (3rd ed.). Cambridge University Press. ISBN 978-0-521-85748-2. Chapters 4–6.
Knuth, Donald E. (2010). "Chapter 7.1.2: Boolean Evaluation". The Art of Computer Programming. 4A. Addison-Wesley. pp. 96–133. ISBN 0-201-03804-8.
Multi-level minimization part I, part II: CMU lecture slides by Rob A. Rutenbar

Tomaszewski, S. P.; Celik, I. U.; Antoniou, G. E. (2003). "WWW-based Boolean function minimization". International Journal of Applied Mathematics and Computer Science. 13 (4): 577–584.
Chapter 54

Logic redundancy

Logic redundancy occurs in a digital gate network containing circuitry that does not affect the static logic function. There are several reasons why logic redundancy may exist. One reason is that it may have been added deliberately to suppress transient glitches (thereby avoiding a race condition) in the output signals by having two or more product terms overlap with a third one.
Consider the following equation:

Y = AB + A'C + BC.

The third product term BC is a redundant consensus term. If A switches from 1 to 0 while B = 1 and C = 1, Y remains 1. During the transition of signal A in logic gates, both the first and second terms may be 0 momentarily. The third term prevents a glitch, since its value of 1 in this case is not affected by the transition of signal A.
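The static redundancy of the consensus term can be checked exhaustively. A minimal sketch in Python (assuming the intended function is Y = AB + A'C + BC, i.e. with the second term's A complemented, as the consensus reading requires):

```python
from itertools import product

def y_without_consensus(a, b, c):
    # Y = AB + A'C
    return int((a and b) or ((not a) and c))

def y_with_consensus(a, b, c):
    # Y = AB + A'C + BC, where BC is the redundant consensus term
    return int((a and b) or ((not a) and c) or (b and c))

# The consensus term never changes the static logic function:
for a, b, c in product([0, 1], repeat=3):
    assert y_without_consensus(a, b, c) == y_with_consensus(a, b, c)
```

The two forms agree on all eight input combinations; the extra term matters only dynamically, holding the output at 1 while A switches with B = C = 1.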
Another reason for logic redundancy is poor design practice that unintentionally results in logically redundant terms. This causes an unnecessary increase in network complexity, and possibly hampers the ability to test manufactured designs using traditional test methods (single stuck-at fault models). (Note: testing might be possible using IDDQ models.)

54.1 Removing logic redundancy


Logic redundancy is, in general, not desired. Redundancy, by definition, requires extra parts (in this case: logical terms) which raises the cost of implementation (either actual cost of physical parts or CPU time to process). Logic redundancy can be removed by several well-known techniques, such as Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method.

54.2 Adding logic redundancy


Main article: Hazard (logic)
In some cases it may be desirable to add logic redundancy. One of those cases is to avoid race conditions whereby an output can fluctuate because different terms are racing to turn off and on. To explain this in more concrete terms, the Karnaugh map to the right shows the minterms and maxterms for the following function:

f(A, B, C, D) = Σm(6, 8, 9, 10, 11, 12, 13, 14).

The boxes represent the minimal AND/OR terms needed to implement this function:

F = AC' + AB' + BCD'.


        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
A k-map showing a particular logic function.

The k-map visually shows where race conditions occur in the minimal expression by having gaps between minterms or gaps between maxterms. For example, the gap between the blue and green rectangles. If the input ABCD = 1110 were to change to ABCD = 1010, then a race will occur between BCD' turning off and AB' turning on. If the blue term switches off before the green turns on, the output will fluctuate and may register as 0. Another race condition is between the blue and the red for the transition of ABCD = 1110 to ABCD = 1100.
The race condition is removed by adding in logic redundancy, which is contrary to the aims of using a k-map in the first place. Both minterm race conditions are covered by addition of the yellow term AD'. (The maxterm race condition is covered by addition of the green-bordered grey term A + D'.)
In this case, the addition of logic redundancy has stabilized the output to avoid output fluctuations because terms are racing each other to change state.

54.3 See also



        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD' + AD'
F = (A+B)(A+C)(B'+C'+D')(A+D')
Above k-map with the AD' term added to avoid race hazards.
Chapter 55

Logical matrix

A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0,1) matrix is a matrix with entries from
the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets.

55.1 Matrix representation of a relation


If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by:

Mi,j = 1 if (xi, yj) ∈ R, and Mi,j = 0 if (xi, yj) ∉ R.

In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers:
i ranges from 1 to the cardinality (size) of X and j ranges from 1 to the cardinality of Y. See the entry on indexed sets
for more detail.

55.1.1 Example
The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no
remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because
when 3 divides 4 there is a remainder of 1. The following set is the set of pairs for which the relation R holds.

{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.

The corresponding representation as a Boolean matrix is:


1 1 1 1
0 1 0 1
0 0 1 0
0 0 0 1
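The example matrix can be reproduced programmatically; a short sketch (not part of the original article):

```python
# Logical matrix of the divisibility relation R on {1, 2, 3, 4}:
# M[i][j] = 1 iff elems[i] divides elems[j] with no remainder.
elems = [1, 2, 3, 4]
M = [[1 if b % a == 0 else 0 for b in elems] for a in elems]

assert M == [
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
```

The eight 1-entries correspond exactly to the eight pairs in the relation listed above.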

55.2 Other examples


A permutation matrix is a (0,1)-matrix, each of whose rows and columns has exactly one nonzero element.

A Costas array is a special case of a permutation matrix.


An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph.
A design matrix in analysis of variance is a (0,1)-matrix with constant row sums.
An adjacency matrix in graph theory is a matrix whose rows and columns represent the vertices and whose
entries represent the edges of the graph. The adjacency matrix of a simple, undirected graph is a binary
symmetric matrix with zero diagonal.
The biadjacency matrix of a simple, undirected bipartite graph is a (0,1)-matrix, and any (0,1)-matrix arises
in this way.
The prime factors of a list of m square-free, n-smooth numbers can be described as an m × π(n) (0,1)-matrix, where π is the prime-counting function and aij is 1 if and only if the jth prime divides the ith number. This representation is useful in the quadratic sieve factoring algorithm.
A bitmap image containing pixels in only two colors can be represented as a (0,1)-matrix in which the 0s
represent pixels of one color and the 1s represent pixels of the other color.
A binary matrix can be used to check the game rules in the game of Go.[1]

55.3 Some properties


The matrix representation of the equality relation on a finite set is an identity matrix, that is, one whose entries on the
diagonal are all 1, while the others are all 0.
If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations. This product can be computed in expected time O(n²).[2]
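Composition over the Boolean semiring can be sketched directly (a hypothetical illustration, not taken from the article): the ordinary matrix product with OR as addition and AND as multiplication.

```python
def bool_matmul(A, B):
    # Matrix product over the Boolean semiring: "+" is OR, "*" is AND.
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][t] and B[t][j] for t in range(inner)))
             for j in range(cols)] for i in range(rows)]

# R relates 0 to 1; S relates 1 to 0. Their composition relates 0 to 0.
R = [[0, 1],
     [0, 0]]
S = [[0, 0],
     [1, 0]]
assert bool_matmul(R, S) == [[1, 0],
                             [0, 0]]
```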
Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2, that is, the elements are treated as elements of the Galois field GF(2) = ℤ2. They arise in a variety of representations and have a number of more restricted special forms. They are applied e.g. in XOR-satisfiability.
The number of distinct m-by-n binary matrices is equal to 2^(mn), and is thus finite.

55.4 See also


List of matrices
Binatorix (a binary De Bruijn torus)
Redheffer matrix
Relation algebra

55.5 Notes
[1] Petersen, Kjeld (February 8, 2013). Binmatrix. Retrieved August 11, 2017.
[2] Patrick E. O'Neil; Elizabeth J. O'Neil (1973). "A Fast Expected Time Algorithm for Boolean Matrix Multiplication and Transitive Closure" (PDF). Information and Control. 22 (2): 132–138. doi:10.1016/s0019-9958(73)90228-3. The algorithm relies on addition being idempotent, cf. p. 134 (bottom).

55.6 References
Hogben, Leslie (2006), Handbook of Linear Algebra (Discrete Mathematics and Its Applications), Boca Raton:
Chapman & Hall/CRC, ISBN 978-1-58488-510-8, section 31.3, Binary Matrices
Kim, Ki Hang, Boolean Matrix Theory and Applications, ISBN 0-8247-1788-0

55.7 External links


Hazewinkel, Michiel, ed. (2001) [1994], Logical matrix, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 56

Lupanov representation

Lupanov's (k, s)-representation, named after Oleg Lupanov, is a way of representing Boolean circuits so as to show the reciprocal of the Shannon effect. Shannon had shown that almost all Boolean functions of n variables need a circuit of size at least 2^n/n. The reciprocal is that:

All Boolean functions of n variables can be computed with a circuit of at most 2^n/n + o(2^n/n) gates.

56.1 Definition
The idea is to represent the values of a Boolean function f in a table of 2^k rows, representing the possible values of the first k variables x1, ..., xk, and 2^(n−k) columns representing the values of the other variables.
Let A1, ..., Ap be a partition of the rows of this table such that for i < p, |Ai| = s, and |Ap| ≤ s. Let fi(x) = f(x) if x ∈ Ai, and fi(x) = 0 otherwise.
Moreover, let Bi,w be the set of the columns whose intersection with Ai is w.

56.2 See also


Course material describing the Lupanov representation
An additional example from the same course material

Chapter 57

Maharam algebra

In mathematics, a Maharam algebra is a complete Boolean algebra with a continuous submeasure. They were
introduced by Maharam (1947).

57.1 Definitions
A continuous submeasure or Maharam submeasure on a Boolean algebra is a real-valued function m such that

m(0) = 0, m(1) = 1, and m(x) > 0 if x ≠ 0.

If x < y then m(x) < m(y).

m(x ∨ y) ≤ m(x) + m(y).

If xn is a decreasing sequence with intersection 0, then the sequence m(xn) has limit 0.

A Maharam algebra is a complete Boolean algebra with a continuous submeasure.

57.2 Examples
Every probability measure is a continuous submeasure, so, as the corresponding Boolean algebra of measurable sets modulo measure-zero sets is complete, it is a Maharam algebra.

57.3 References
Balcar, Bohuslav; Jech, Thomas (2006), "Weak distributivity, a problem of von Neumann and the mystery of measurability", Bulletin of Symbolic Logic, 12 (2): 241–266, MR 2223923, Zbl 1120.03028, doi:10.2178/bsl/1146620061
Maharam, Dorothy (1947), "An algebraic characterization of measure algebras", Annals of Mathematics (2), 48: 154–167, JSTOR 1969222, MR 0018718, Zbl 0029.20401, doi:10.2307/1969222
Velickovic, Boban (2005), "ccc forcing and splitting reals", Israel J. Math., 147: 209–220, MR 2166361, Zbl 1118.03046, doi:10.1007/BF02785365

Chapter 58

Majority function

In Boolean logic, the majority function (also called the median operator) is a function from n inputs to one output. The value of the operation is false when n/2 or more arguments are false, and true otherwise. Alternatively, representing true values as 1 and false values as 0, we may use the formula

Majority(p1, ..., pn) = ⌊1/2 + ((∑ pi) − 1/2)/n⌋.

The "−1/2" in the formula serves to break ties in favor of zeros when n is even. If the term "−1/2" is omitted, the formula can be used for a function that breaks ties in favor of ones.
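The formula can be transcribed directly; a small sketch (exact arithmetic via fractions avoids floating-point edge cases):

```python
from fractions import Fraction
from math import floor

def majority(*ps):
    # floor(1/2 + (sum(p_i) - 1/2) / n); the "-1/2" breaks ties toward 0.
    n = len(ps)
    return floor(Fraction(1, 2) + Fraction(2 * sum(ps) - 1, 2 * n))

assert majority(1, 1, 0) == 1
assert majority(1, 0, 0) == 0
assert majority(1, 1, 0, 0) == 0  # even n: tie broken in favor of zeros
```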

58.1 Boolean circuits

Four bit majority circuit

A majority gate is a logical gate used in circuit complexity and other applications of Boolean circuits. A majority gate
returns true if and only if more than 50% of its inputs are true.
For instance, in a full adder, the carry output is found by applying a majority function to the three inputs, although
frequently this part of the adder is broken down into several simpler logical gates.


Many systems have triple modular redundancy; they use the majority function for majority logic decoding to imple-
ment error correction.
A major result in circuit complexity asserts that the majority function cannot be computed by AC0 circuits of subexponential size.

58.2 Monotone formulae for majority


For n = 1 the median operator is just the unary identity operation x. For n = 3 the ternary median operator can be expressed using conjunction and disjunction as xy + yz + zx. Remarkably, this expression denotes the same operation independently of whether the symbol + is interpreted as inclusive or or as exclusive or.
For an arbitrary n there exists a monotone formula for majority of size O(n^5.3).[1] This is proved using the probabilistic method. Thus, this formula is non-constructive. However, one can obtain an explicit formula for majority of polynomial size using a sorting network of Ajtai, Komlós, and Szemerédi.
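The claim that the n = 3 formula xy + yz + zx is insensitive to the reading of + can be verified by brute force; a quick sketch:

```python
from itertools import product

def med_or(x, y, z):
    # xy + yz + zx with "+" read as inclusive or
    return (x & y) | (y & z) | (z & x)

def med_xor(x, y, z):
    # xy + yz + zx with "+" read as exclusive or
    return (x & y) ^ (y & z) ^ (z & x)

# Both readings denote the same ternary median operation:
assert all(med_or(*p) == med_xor(*p) for p in product([0, 1], repeat=3))
```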
The majority function produces 1 when more than half of the inputs are 1; it produces 0 when more than half the inputs are 0. Most applications deliberately force an odd number of inputs so they don't have to deal with the question of what happens when exactly half the inputs are 0 and exactly half the inputs are 1. The few systems that calculate the majority function on an even number of inputs are often biased towards 0; they produce 0 when exactly half the inputs are 0. For example, a 4-input majority gate has a 0 output only when two or more 0s appear at its inputs.[2] In a few systems, a 4-input majority network randomly chooses 1 or 0 when exactly two 0s appear at its inputs.[3]

58.3 Properties
For any x, y, and z, the ternary median operator ⟨x, y, z⟩ satisfies the following equations:

⟨x, y, y⟩ = y

⟨x, y, z⟩ = ⟨z, x, y⟩

⟨x, y, z⟩ = ⟨x, z, y⟩

⟨⟨x, w, y⟩, w, z⟩ = ⟨x, w, ⟨y, w, z⟩⟩

An abstract system satisfying these as axioms is a median algebra.
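For the Boolean median, the four median-algebra identities above can be checked over all inputs; a small sketch:

```python
from itertools import product

def med(x, y, z):
    # Ternary median: xy + yz + zx
    return (x & y) | (y & z) | (z & x)

for x, y, z, w in product([0, 1], repeat=4):
    assert med(x, y, y) == y                                   # <x,y,y> = y
    assert med(x, y, z) == med(z, x, y)                        # rotation
    assert med(x, y, z) == med(x, z, y)                        # transposition
    assert med(med(x, w, y), w, z) == med(x, w, med(y, w, z))  # fourth axiom
```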

58.4 Notes
[1] Valiant, Leslie (1984). "Short monotone formulae for the majority function". Journal of Algorithms. 5 (3): 363–366. doi:10.1016/0196-6774(84)90016-6.

[2] Peterson, William Wesley; Weldon, E.J. (1972). Error-correcting Codes. MIT Press. ISBN 9780262160391.

[3] Chaouiya, Claudine; Ourrad, Ouerdia; Lima, Ricardo (July 2013). "Majority Rules with Random Tie-Breaking in Boolean Gene Regulatory Networks". PLoS ONE. 8 (7). Public Library of Science. doi:10.1371/journal.pone.0069626.

58.5 References

Knuth, Donald E. (2008). "Introduction to combinatorial algorithms and Boolean functions". The Art of Computer Programming. 4a. Upper Saddle River, NJ: Addison-Wesley. pp. 64–74. ISBN 0-321-53496-4.

58.6 See also


Media related to Majority functions at Wikimedia Commons

Boolean algebra (structure)


Boolean algebras canonically defined

Boyer–Moore majority vote algorithm

Majority problem (cellular automaton)


Chapter 59

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery of Allan Marquand's 1881 logical diagram[4] (aka the Marquand diagram[3]) but with a focus now set on its utility for switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

59.1 Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the
Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean
variables A, B, C, D, and their inverses.


f(A, B, C, D) = ∑ mi, i ∈ {6, 8, 9, 10, 11, 12, 13, 14}, where mi are the minterms to map (i.e., rows that have output 1 in the truth table).

f(A, B, C, D) = ∏ Mi, i ∈ {0, 1, 2, 3, 4, 5, 7, 15}, where Mi are the maxterms to map (i.e., rows that have output 0 in the truth table).

59.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows,
and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 4 grid.


        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
An example Karnaugh map. This image actually shows two Karnaugh maps: for the function f, using minterms (colored rectangles), and for its complement, using maxterms (gray rectangles). In the image, Σm() signifies a sum of minterms, denoted in the article as ∑ mi.

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray
code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of
adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output
for that combination of inputs.
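The Gray-code ordering of the indices can be illustrated with the standard reflected code (a sketch, not from the article):

```python
def gray(n):
    # i-th n-bit reflected Gray code: i XOR (i >> 1)
    return [i ^ (i >> 1) for i in range(2 ** n)]

codes = gray(2)
assert [format(c, "02b") for c in codes] == ["00", "01", "11", "10"]

# Adjacent labels (cyclically, matching the map's wrap-around) differ
# in exactly one bit, i.e. in one variable:
for i, c in enumerate(codes):
    nxt = codes[(i + 1) % len(codes)]
    assert bin(c ^ nxt).count("1") == 1
```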
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ("minimal" terms) for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8). Minterm
rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one
larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and
green groups overlap. The red group is a 2 2 square, the green group is a 4 1 rectangle, and the overlap area is
indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For

K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.

        AB
        00  01  11  10
    00   0   4  12   8
CD  01   1   5  13   9
    11   3   7  15  11
    10   2   6  14  10

ABCD: 0000 = 0, 0001 = 1, 0010 = 2, 0011 = 3, 0100 = 4, 0101 = 5, 0110 = 6, 0111 = 7,
1000 = 8, 1001 = 9, 1010 = 10, 1011 = 11, 1100 = 12, 1101 = 13, 1110 = 14, 1111 = 15

K-map construction. Instead of containing output values, this diagram shows the numbers of outputs; therefore it is not a Karnaugh map.

example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D' is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those at the bottom. Therefore, AD' can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14), as is B'D', which includes the four corners.

In three dimensions, one can bend a rectangle into a torus.

59.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic
minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation
of the red minterm.

B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.

C does not change: it is always 0, so its complement, NOT-C, should be included; thus, C' should be included.

D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC'.


For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of

f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'
             = AC' + AB' + BCD'
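The simplification can be sanity-checked by comparing truth tables (a sketch; prime marks denote complements):

```python
from itertools import product

MINTERMS = {6, 8, 9, 10, 11, 12, 13, 14}

def f_table(a, b, c, d):
    # Truth-table definition: 1 exactly on the listed minterms
    return int(8 * a + 4 * b + 2 * c + d in MINTERMS)

def f_sop(a, b, c, d):
    # Simplified sum of products: F = AC' + AB' + BCD'
    return int((a and not c) or (a and not b) or (b and c and not d))

assert all(f_table(*p) == f_sop(*p) for p in product([0, 1], repeat=4))
```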

        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
Diagram showing two K-maps. The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond to minterms. The brown region is an overlap of the red 2×2 square and the green 4×1 rectangle. The K-map for the inverse of f is shown as gray rectangles, which correspond to maxterms.

It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.

59.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with differently colored borders:

brown: A'B'

gold: A'C'

blue: BCD

This yields the inverse:

f'(A, B, C, D) = A'B' + A'C' + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A, B, C, D) = (A'B' + A'C' + BCD)'
f(A, B, C, D) = (A'B')' (A'C')' (BCD)'
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')
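That the product-of-sums form equals the original sum-of-products form can likewise be confirmed exhaustively (a sketch):

```python
from itertools import product

def f_sop(a, b, c, d):
    # F = AC' + AB' + BCD'
    return int((a and not c) or (a and not b) or (b and c and not d))

def f_pos(a, b, c, d):
    # F = (A + B)(A + C)(B' + C' + D'), obtained via De Morgan's laws
    return int((a or b) and (a or c) and (not b or not c or not d))

assert all(f_sop(*p) == f_pos(*p) for p in product([0, 1], repeat=4))
```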

59.1.4 Don't cares

        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   X   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = A + BCD'
F = (A+B)(A+C)(A+D')
The value of f for ABCD = 1111 is replaced by a "don't care". This removes the green term completely and allows the red term to be larger. It also allows the blue inverse term to shift and become larger.

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD'

Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle), simplified another (the red one), and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:

f'(A, B, C, D) = A'B' + A'C' + A'D
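The don't-care minimization can be checked on every constrained row (a sketch; cell 15 is left unconstrained):

```python
from itertools import product

ONES = {6, 8, 9, 10, 11, 12, 13, 14}
DONT_CARES = {15}

def f_min(a, b, c, d):
    # Minimum equation with the don't care: F = A + BCD'
    return int(a or (b and c and not d))

for a, b, c, d in product([0, 1], repeat=4):
    m = 8 * a + 4 * b + 2 * c + d
    if m in DONT_CARES:
        continue  # output unconstrained on this row
    assert f_min(a, b, c, d) == int(m in ONES)
```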

59.2 Race hazards

59.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above: we are in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.

There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often
needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product-of-sums expression for f, but with a new factor of (A + D').

59.2.2 2-variable map examples


The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of Σm() and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have sizes that are powers of 2 (1, 2, 4, 8, 16, 32, ...). These

        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
Race hazards are present in this diagram.

expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-maps for 4 variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.

        AB
        00  01  11  10
    00   0   0   1   1
CD  01   0   0   1   1
    11   0   0   0   1
    10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD' + AD'
F = (A+B)(A+C)(B'+C'+D')(A+D')
Above diagram with consensus terms added to avoid race hazards.

The sixteen maps, listed by minterm set (using the table's 1-based numbering m1 = A'B', m2 = AB', m3 = A'B, m4 = AB), each with its minimum equation K and complement K':

f(A,B) = Σm();         K = 0;            K' = 1
f(A,B) = Σm(1);        K = A'B';         K' = A + B
f(A,B) = Σm(2);        K = AB';          K' = A' + B
f(A,B) = Σm(3);        K = A'B;          K' = A + B'
f(A,B) = Σm(4);        K = AB;           K' = A' + B'
f(A,B) = Σm(1,2);      K = B';           K' = B
f(A,B) = Σm(1,3);      K = A';           K' = A
f(A,B) = Σm(1,4);      K = A'B' + AB;    K' = AB' + A'B
f(A,B) = Σm(2,3);      K = AB' + A'B;    K' = A'B' + AB
f(A,B) = Σm(2,4);      K = A;            K' = A'
f(A,B) = Σm(3,4);      K = B;            K' = B'
f(A,B) = Σm(1,2,3);    K = A' + B';     K' = AB
f(A,B) = Σm(1,2,4);    K = B' + A;      K' = A'B
f(A,B) = Σm(1,3,4);    K = A' + B;      K' = AB'
f(A,B) = Σm(2,3,4);    K = A + B;       K' = A'B'
f(A,B) = Σm(1,2,3,4);  K = 1;           K' = 0
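The table above can be verified mechanically. The sketch below encodes each listed K as a Python predicate, using the 1-based minterm numbering m1 = A'B', m2 = AB', m3 = A'B, m4 = AB inferred from the entries, and checks that K is true exactly on its listed minterms:

```python
# Verify each 2-variable minimum equation against its minterm set.
MINTERM = {1: (0, 0), 2: (1, 0), 3: (0, 1), 4: (1, 1)}   # number -> (A, B)

TABLE = {
    frozenset(): lambda a, b: 0,
    frozenset({1}): lambda a, b: (not a) and (not b),
    frozenset({2}): lambda a, b: a and (not b),
    frozenset({3}): lambda a, b: (not a) and b,
    frozenset({4}): lambda a, b: a and b,
    frozenset({1, 2}): lambda a, b: not b,
    frozenset({1, 3}): lambda a, b: not a,
    frozenset({1, 4}): lambda a, b: a == b,      # A'B' + AB (XNOR)
    frozenset({2, 3}): lambda a, b: a != b,      # AB' + A'B (XOR)
    frozenset({2, 4}): lambda a, b: a,
    frozenset({3, 4}): lambda a, b: b,
    frozenset({1, 2, 3}): lambda a, b: (not a) or (not b),
    frozenset({1, 2, 4}): lambda a, b: (not b) or a,
    frozenset({1, 3, 4}): lambda a, b: (not a) or b,
    frozenset({2, 3, 4}): lambda a, b: a or b,
    frozenset({1, 2, 3, 4}): lambda a, b: 1,
}

for minterms, k in TABLE.items():
    for num, (a, b) in MINTERM.items():
        assert bool(k(a, b)) == (num in minterms)
```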

59.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]


Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[9][1][10][11]
Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]
Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]
Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]
Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

59.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
Quine–McCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)
Zhegalkin normal form
Reed–Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

59.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Con-
ference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburg). New York, USA: ACM: 127133.
doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 4849. ISBN
0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The rst section
says that they are labeled with a code that changes only one bit between entries and the second section says that such a code
is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps Rules of Simplication. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Wright-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Geoffrey Cumberlege Oxford University Press (London). pp. preface, 50–67. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 7583. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonín (1956). Graficko-mechanické pomůcky užívané při analyse a synthese kontaktových obvodů [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracování informací [Symposium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 9–21.

[13] Svoboda, Antonín (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fach-
berichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonn; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (re-
typed electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Händler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Händler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium über Schaltkreis- und Schaltwerk-Theorie - Vortragsauszüge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut für Angewandte Mathematik, Universität Saarbrücken, Rheinisch-
Westfälisches Institut für Instrumentelle Mathematik: Springer Basel AG / Birkhäuser Verlag Basel. pp. 169–198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Händler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 1034–1035, 1036, 1038.
LCCN 67-21079. Title No. 1036. [...] Übersichtlich ist die Darstellung nach Händler, die sämtliche Punkte, numeriert
nach dem Gray-Code [...], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [...] [Händler's
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[19] Hotz, Günter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter
& Co. p. 117. ISBN 3-11-00-2050-5. [...] Der Kreisgraph von Händler ist für das Auffinden von Primimplikanten
gut brauchbar. Er hat den Nachteil, daß er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die
Verwendung von Schablonen verringern. [...] [The circle graph by Händler is well suited to find prime implicants. A
disadvantage is that it is difficult to draw. This can be remedied using stencils.]

[20] "Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-
13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Händler.)

[21] "Informatik Sammlung Erlangen (ISER) - Impressum" (in German). Erlangen, Germany: Friedrich-Alexander Universität.
2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by
Händler.)

[22] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of circuit switching algebra]. In Broy, Manfred.
Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 43–72. ISBN
9783642766770. Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Händler in seiner Dissertation
[...] mit einem Kreisdiagramm. [...] (NB. Collection of papers at a colloquium held at the Bayerische Akademie der
Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg:
Springer-Verlag. pp. 54–56, 71, 112–113, 138–139. ISBN 978-3-540-52974-3. [...] handelt es sich um ein Händler-
Diagramm [...], mit den Würfelecken als Ecken eines 2^m-gons. [...] Abb. [...] zeigt auch Gegenstücke für andere Di-
mensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden;
durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45°-Linien und 135°-Linien
solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Händler-Diagramme wird angeführt,
daß sie viel Platz beanspruchen. [...]

[24] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Kürzungsverfahren und Graphen-
methoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421–425.

[25] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 912.

[26] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96102.

[27] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-
regeln (msr) (in German). Verlag Technik. 9 (5): 151157.

[28] Kortum, Herbert (1967). Über zweckmäßige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Auf-
gabenstellungen. messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208–211.

[29] Tafel, Hans Jörg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen,
Germany. Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German).
Munich, Germany: Carl Hanser Verlag. pp. 98–105, 107–113. ISBN 3-446-10569-7.

59.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Com-
pany. pp. 7085. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. Karnaugh Maps. Switching Theory: Insight Through Predicate
Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 5776. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons.
pp. 36–49. LCCN 68-21185. A refinement of the Venn diagram in that circles are replaced by squares and
arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s
and 0s to the squares and their labels and deduced the numbering scheme in common use.

Maxfield, Clive "Max" (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the
original on 2017-04-19. Retrieved 2017-04-19.

59.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.

K-Map Tutorial for 2,3,4 and 5 variables


Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka, George E. Antoniou


Chapter 60

Modal algebra

In algebra and logic, a modal algebra is a structure ⟨A, ∧, ∨, ¬, 0, 1, □⟩ such that

⟨A, ∧, ∨, ¬, 0, 1⟩ is a Boolean algebra,

□ is a unary operation on A satisfying □1 = 1 and □(x ∧ y) = □x ∧ □y for all x, y in A.

Modal algebras provide models of propositional modal logics in the same way as Boolean algebras are models of
classical logic. In particular, the variety of all modal algebras is the equivalent algebraic semantics of the modal logic
K in the sense of abstract algebraic logic, and the lattice of its subvarieties is dually isomorphic to the lattice of normal
modal logics.
Stone's representation theorem can be generalized to the Jónsson–Tarski duality, which ensures that each modal
algebra can be represented as the algebra of admissible sets in a modal general frame.
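The two identities above hold in any algebra of this shape built from a Kripke frame, with □X the set of worlds whose successors all lie in X. The sketch below (the three-element frame and its relation are arbitrary illustrative choices, not from the text) checks both identities on a powerset Boolean algebra:

```python
# Check the modal-algebra identities box(1) = 1 and
# box(x ∧ y) = box(x) ∧ box(y) on a small Kripke frame.
from itertools import chain, combinations

W = {0, 1, 2}
R = {(0, 0), (0, 1), (1, 2), (2, 0)}   # arbitrary accessibility relation

def box(x):
    # Worlds all of whose R-successors lie in x.
    return frozenset(w for w in W if all(v in x for (u, v) in R if u == w))

# All subsets of W form the carrier of the Boolean algebra.
subsets = [frozenset(s)
           for s in chain.from_iterable(combinations(sorted(W), k) for k in range(4))]

assert box(frozenset(W)) == frozenset(W)       # box 1 = 1
for x in subsets:
    for y in subsets:
        assert box(x & y) == box(x) & box(y)   # box distributes over meet
```

The check succeeds for any choice of R, which is exactly why Kripke frames yield modal algebras.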

60.1 See also


interior algebra

Heyting algebra

60.2 References
A. Chagrov and M. Zakharyaschev, Modal Logic, Oxford Logic Guides vol. 35, Oxford University Press, 1997.
ISBN 0-19-853779-4

Chapter 61

Monadic Boolean algebra

In abstract algebra, a monadic Boolean algebra is an algebraic structure A with signature

⟨·, +, ', 0, 1, ∃⟩ of type ⟨2, 2, 1, 0, 0, 1⟩,

where ⟨A, ·, +, ', 0, 1⟩ is a Boolean algebra.


The monadic/unary operator ∃ denotes the existential quantifier, which satisfies the identities (using the received
prefix notation for ∃):

∃0 = 0
x ≤ ∃x
∃(x + y) = ∃x + ∃y
∃x∃y = ∃(x∃y).

∃x is the existential closure of x. Dual to ∃ is the unary operator ∀, the universal quantifier, defined as ∀x := (∃x')'.
A monadic Boolean algebra has a dual definition and notation that take ∀ as primitive and ∃ as defined, so that ∃x
:= (∀x')'. (Compare this with the definition of the dual Boolean algebra.) Hence, with this notation, an algebra A
has signature ⟨·, +, ', 0, 1, ∀⟩, with ⟨A, ·, +, ', 0, 1⟩ a Boolean algebra, as before. Moreover, ∀ satisfies the following
dualized version of the above identities:

1. ∀1 = 1
2. ∀x ≤ x
3. ∀(xy) = ∀x∀y
4. ∀x + ∀y = ∀(x + ∀y).

∀x is the universal closure of x.

61.1 Discussion
Monadic Boolean algebras have an important connection to topology. If ∀ is interpreted as the interior operator
of topology, (1)-(3) above plus the axiom ∀∀x = ∀x make up the axioms for an interior algebra. But ∀∀x =
∀x can be proved from (1)-(4). Moreover, an alternative axiomatization of monadic Boolean algebras consists of
the (reinterpreted) axioms for an interior algebra, plus ∀(∀x)' = (∀x)' (Halmos 1962: 22). Hence monadic Boolean
algebras are the semisimple interior/closure algebras such that:

The universal (dually, existential) quantier interprets the interior (closure) operator;


All open (or closed) elements are also clopen.

A more concise axiomatization of monadic Boolean algebra is (1) and (2) above, plus ∀(x∀y) = ∀x∀y (Halmos
1962: 21). This axiomatization obscures the connection to topology.
Monadic Boolean algebras form a variety. They are to monadic predicate logic what Boolean algebras are to propositional
logic, and what polyadic algebras are to first-order logic. Paul Halmos discovered monadic Boolean algebras while
working on polyadic algebras; Halmos (1962) reprints the relevant papers. Halmos and Givant (1998) includes an
undergraduate treatment of monadic Boolean algebra.
Monadic Boolean algebras also have an important connection to modal logic. The modal logic S5, viewed as a
theory in S4, is a model of monadic Boolean algebras in the same way that S4 is a model of interior algebra. Likewise,
monadic Boolean algebras supply the algebraic semantics for S5. Hence S5-algebra is a synonym for monadic
Boolean algebra.
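The ∃-identities can be checked on the simplest example: the powerset of a finite set, with ∃x = 1 for every nonempty x and ∃0 = 0. A sketch (the three-element carrier is an arbitrary illustrative choice), writing | for join and & for meet on sets:

```python
# Check the four monadic-Boolean-algebra identities on a powerset algebra
# with the "simple" quantifier: exists(x) = 1 for x != 0, exists(0) = 0.
from itertools import chain, combinations

X = {0, 1, 2}
ONE, ZERO = frozenset(X), frozenset()

def exists(x):
    return ONE if x else ZERO

subsets = [frozenset(s)
           for s in chain.from_iterable(combinations(sorted(X), k) for k in range(4))]

assert exists(ZERO) == ZERO                               # ∃0 = 0
for x in subsets:
    assert x <= exists(x)                                 # x ≤ ∃x
    for y in subsets:
        assert exists(x | y) == exists(x) | exists(y)     # ∃(x + y) = ∃x + ∃y
        assert exists(x) & exists(y) == exists(x & exists(y))   # ∃x∃y = ∃(x∃y)
```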

61.2 See also


clopen set
interior algebra

Kuratowski closure axioms


Łukasiewicz–Moisil algebra

modal logic
monadic logic

61.3 References
Paul Halmos, 1962. Algebraic Logic. New York: Chelsea.

------ and Steven Givant, 1998. Logic as Algebra. Mathematical Association of America.
Chapter 62

Parity function

In Boolean algebra, a parity function is a Boolean function whose value is 1 if and only if the input vector has an
odd number of ones. The parity function of two inputs is also known as the XOR function.
The parity function is notable for its role in theoretical investigation of circuit complexity of Boolean functions.
The output of the parity function is the parity bit.

62.1 Denition

The n-variable parity function is the Boolean function f : {0,1}^n → {0,1} with the property that f(x) = 1 if and
only if the number of ones in the vector x ∈ {0,1}^n is odd. In other words, f is defined as follows:

f(x) = x1 ⊕ x2 ⊕ ... ⊕ xn
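The definition translates directly into code. A minimal sketch, comparing the XOR form with a direct count of ones:

```python
# The n-variable parity function as an XOR fold over the input bits.
from functools import reduce
from itertools import product
from operator import xor

def parity(bits):
    return reduce(xor, bits, 0)

# parity(x) = 1 iff x has an odd number of ones.
for n in range(1, 6):
    for x in product((0, 1), repeat=n):
        assert parity(x) == sum(x) % 2
```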

62.2 Properties

Parity only depends on the number of ones and is therefore a symmetric Boolean function.
The n-variable parity function and its negation are the only Boolean functions for which all disjunctive normal forms
have the maximal number of 2^(n-1) monomials of length n and all conjunctive normal forms have the maximal number
of 2^(n-1) clauses of length n.[1]

62.3 Circuit complexity

In the early 1980s, Merrick Furst, James Saxe and Michael Sipser[2] and independently Miklós Ajtai[3] established
super-polynomial lower bounds on the size of constant-depth Boolean circuits for the parity function, i.e., they showed
that polynomial-size constant-depth circuits cannot compute the parity function. Similar results were also established
for the majority, multiplication and transitive closure functions, by reduction from the parity function.[2]
Håstad (1987) established tight exponential lower bounds on the size of constant-depth Boolean circuits for the parity
function. Håstad's Switching Lemma is the key technical tool used for these lower bounds and Johan Håstad was
awarded the Gödel Prize for this work in 1994. The precise result is that depth-k circuits with AND, OR, and NOT
gates require size exp(Ω(n^(1/(k-1)))) to compute the parity function. This is asymptotically almost optimal as there are
depth-k circuits computing parity which have size exp(O(n^(1/(k-1)))).


62.4 Innite version


An infinite parity function is a function f : {0,1}^ω → {0,1} mapping every infinite binary string to 0 or 1, having
the following property: if w and v are infinite binary strings differing only on a finite number of coordinates then
f(w) = f(v) if and only if w and v differ on an even number of coordinates.
Assuming the axiom of choice it can be easily proved that parity functions exist and there are 2^c many of them, as many
as the number of all functions from {0,1}^ω to {0,1}. It is enough to take one representative per equivalence class of
the relation ≈ defined as follows: w ≈ v if w and v differ at a finite number of coordinates. Having such representatives,
we can map all of them to 0; the remaining values of f are then deduced unambiguously.
Infinite parity functions are often used in theoretical computer science and set theory because of their simple
definition and, on the other hand, their descriptive complexity. For example, it can be shown that an inverse image
f^(-1)[0] is a non-Borel set.

62.5 See also


Related topics

Error Correction

Error Detection

The output of the function

Parity bit

62.6 References
[1] Ingo Wegener, Randall J. Pruim, Complexity Theory, 2005, ISBN 3-540-21045-8, p. 260

[2] Merrick Furst, James Saxe and Michael Sipser, Parity, Circuits, and the Polynomial-Time Hierarchy, Annu. Intl. Symp.
Found. Computer Sci., 1981, Theory of Computing Systems, vol. 17, no. 1, 1984, pp. 13–27, doi:10.1007/BF01744431

[3] Miklós Ajtai, "Σ¹₁-Formulae on Finite Structures", Annals of Pure and Applied Logic, 24 (1983) 1–48.

Håstad, Johan (1987), Computational limitations of small depth circuits (PDF), Ph.D. thesis, Massachusetts
Institute of Technology.
Chapter 63

Petrick's method

In Boolean algebra, Petrick's method (also known as the branch-and-bound method) is a technique described by
Stanley R. Petrick (1931–2006)[1][2] in 1956[3] for determining all minimum sum-of-products solutions from a prime
implicant chart. Petrick's method is very tedious for large charts, but it is easy to implement on a computer.

1. Reduce the prime implicant chart by eliminating the essential prime implicant rows and the corresponding
columns.

2. Label the rows of the reduced prime implicant chart P1 , P2 , P3 , P4 , etc.

3. Form a logical function P which is true when all the columns are covered. P consists of a product of sums
where each sum term has the form (Pi0 + Pi1 + ... + PiN), where each Pij represents a row covering column
i.

4. Reduce P to a minimum sum of products by multiplying out and applying X + XY = X .

5. Each term in the result represents a solution, that is, a set of rows which covers all of the minterms in the
table. To determine the minimum solutions, rst nd those terms which contain a minimum number of prime
implicants.

6. Next, for each of the terms found in step ve, count the number of literals in each prime implicant and nd the
total number of literals.

7. Choose the term or terms composed of the minimum total number of literals, and write out the corresponding
sums of prime implicants.

Example of Petrick's method[4]


Following is the function we want to reduce:


f(A, B, C) = Σm(0, 1, 2, 5, 6, 7)

The prime implicant chart from the Quine-McCluskey algorithm is as follows:


             | 0 | 1 | 2 | 5 | 6 | 7
-------------|---|---|---|---|---|---
K (0,1) a'b' | X | X |   |   |   |
L (0,2) a'c' | X |   | X |   |   |
M (1,5) b'c  |   | X |   | X |   |
N (2,6) bc'  |   |   | X |   | X |
P (5,7) ac   |   |   |   | X |   | X
Q (6,7) ab   |   |   |   |   | X | X
Based on the X marks in the table above, build a product of sums of the rows where each row is added, and columns
are multiplied together:
(K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q)
Use the distributive law to turn that expression into a sum of products. Also use the following equivalences to simplify
the final expression: X + XY = X, XX = X, and X + X = X.
= (K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q) = (K+LM)(N+LQ)(P+MQ) = (KN+KLQ+LMN+LMQ)(P+MQ) = KNP
+ KLPQ + LMNP + LMPQ + KMNQ + KLMQ + LMNQ + LMQ


Now use again the following equivalence to further reduce the equation: X + XY = X
= KNP + KLPQ + LMNP + LMQ + KMNQ
Choose products with fewest terms, in this example, there are two products with three terms:
KNP and LMQ
Choose term or terms with fewest total literals. In our example, the two products both expand to six literals total
each:
KNP expands to a'b' + bc' + ac; LMQ expands to a'c' + b'c + ab
So either one can be used. In general, application of Petricks method is tedious for large charts, but it is easy to
implement on a computer.
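The steps above are easy to mechanize. The sketch below encodes the chart as sets of covered minterms, multiplies out the product of sums, and applies the absorption law X + XY = X after each step; it recovers the two minimal covers KNP and LMQ found by hand:

```python
# Petrick's method on the chart above.  A product of rows is a frozenset;
# multiplying out and absorbing leaves the irredundant covers.
chart = {                      # row -> columns (minterms) it covers
    'K': {0, 1}, 'L': {0, 2}, 'M': {1, 5},
    'N': {2, 6}, 'P': {5, 7}, 'Q': {6, 7},
}
columns = {0, 1, 2, 5, 6, 7}

# One sum term per column: the rows that cover it, e.g. (K + L) for column 0.
sums = [{r for r, cols in chart.items() if c in cols} for c in sorted(columns)]

def absorb(products):
    # Drop any product that strictly contains another (X + XY = X).
    return {p for p in products if not any(q < p for q in products)}

products = {frozenset()}
for s in sums:
    products = absorb({p | {r} for p in products for r in s})

fewest = min(len(p) for p in products)
minimal = sorted(sorted(p) for p in products if len(p) == fewest)
print(minimal)   # the covers using the fewest prime implicants
```

Running this yields the covers {K, N, P} and {L, M, Q}, matching the hand calculation; the full `products` set also matches the five-term sum KNP + KLPQ + LMNP + LMQ + KMNQ derived above.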

63.1 References
[1] Unknown. Biographical note. Retrieved 2017-04-12. Stanley R. Petrick was born in Cedar Rapids, Iowa on August 16,
1931. He attended the Roosevelt High School and received a B. S. degree in Mathematics from the Iowa State University
in 1953. During 1953 to 1955 he attended MIT while on active duty as an Air Force officer and received the S. M. degree
from the Department of Electrical Engineering in 1955. He was elected to Sigma Xi in 1955.
Mr. Petrick has been associated with the Applied Mathematics Board of the Data Sciences Laboratory at the Air Force
Cambridge Research Laboratories since 1955 and his recent studies at MIT have been partially supported by AFCRL.
During 1959-1962 he held the position of Lecturer in Mathematics in the Evening Graduate Division of Northeastern
University.
Mr. Petrick is currently a member of the Linguistic Society of America, The Linguistic Circle of New York, The American
Mathematical Association, The Association for Computing Machinery, and the Association for Machine Translation and
Computational Linguistics.

[2] Obituaries - Cedar Rapids - Stanley R. Petrick. The Gazette. 2006-08-05. p. 16. Retrieved 2017-04-12. [] CEDAR
RAPIDS Stanley R. Petrick, 74, formerly of Cedar Rapids, died July 27, 2006, in Presbyterian/St. Luke's Hospital, Denver,
Colo., following a 13-year battle with leukemia. A memorial service will be held Aug. 14 at the United Presbyterian
Church in Laramie, Wyo., where he lived for many years. [] Stan Petrick was born in Cedar Rapids on Aug. 6, 1931 to
Catherine Hunt Petrick and Fred Petrick. He graduated from Roosevelt High School in 1949 and received a B.S. degree
in mathematics from Iowa State University. Stan married Mary Ethel Buxton in 1953.
He joined the U.S. Air Force and was assigned as a student officer studying digital computation at MIT, where he earned an
M.S. degree. He was then assigned to the Applied Mathematics Branch of the Air Force Cambridge Research Laboratory
and while there earned a Ph.D. in linguistics.
He spent 20 years in the Theoretical and Computational Linguistics Group of the Mathematical Sciences Department at
IBM's T.J. Watson Research Center, conducting research in formal language theory. He had served as an assistant director
of the Mathematical Sciences Department, chairman of the Special Interest Group on Symbolic and Algebraic Manipulation
of the Association for Computing Machinery and president of the Association for Computational Linguistics. He authored
many technical publications.
He taught three years at Northeastern University and 13 years at the Pratt Institute. Dr. Petrick joined the University of
Wyoming in 1987, where he was instrumental in developing and implementing the Ph.D. program in the department and
served as a thesis adviser for many graduate students. He retired in 1995. [] (NB. Includes a photo of Stanley R. Petrick.)

[3] Petrick, Stanley R. (1956-04-10). A Direct Determination of the Irredundant Forms of a Boolean Function from the Set
of Prime Implicants. Bedford, Cambridge, MA, USA: Air Force Cambridge Research Center. AFCRC Technical Report
TR-56-110.

http://www.mrc.uidaho.edu/mrc/people/jff/349/lect.10 Lecture #10: Petrick's Method

63.2 Further reading


Roth, Jr., Charles H. Fundamentals of Logic Design

63.3 External links


Tutorial on Quine-McCluskey and Petrick's method (pdf).

Prime Implicant Simplification Using Petrick's Method


Chapter 64

Poretsky's law of forms

In Boolean algebra, Poretsky's law of forms shows that the single Boolean equation f(X) = 0 is equivalent to
g(X) = h(X) if and only if g = f ⊕ h, where ⊕ represents exclusive or.
The law of forms was discovered by Platon Poretsky.
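The law can be checked exhaustively for small arity. The sketch below represents two-variable Boolean functions as 4-entry truth tables: the solutions of g(X) = h(X) are the points where g ⊕ h vanishes, so that equation has the same solution set as f(X) = 0 exactly when g = f ⊕ h.

```python
# Exhaustive check of Poretsky's law of forms over two variables.
from itertools import product

FUNCS = list(product((0, 1), repeat=4))   # all 16 Boolean functions of 2 variables

def xor(p, q):
    return tuple(a ^ b for a, b in zip(p, q))

for f in FUNCS:
    solutions_f = {i for i in range(4) if f[i] == 0}   # where f(X) = 0 holds
    for g in FUNCS:
        for h in FUNCS:
            same_solutions = {i for i in range(4) if g[i] == h[i]} == solutions_f
            assert same_solutions == (g == xor(f, h))
```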

64.1 References
Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 100
Louis Couturat, The Algebra Of Logic, 1914, p. 53, section 0.43

Clarence Irving Lewis, A Survey of Symbolic Logic, 1918, p. 145, section 7.15

64.2 External links


Transhuman Reflections - Poretsky Form to Solve

Chapter 65

Product term

In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation.

65.1 Examples
Examples of product terms include:

A·B

A·(¬B)·(¬C)
A

65.2 Origin
The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings.

65.3 Minterms
For a Boolean function of n variables x1 , . . . , xn , a product term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n
variables that employs only the complement operator and the conjunction operator.
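The definition can be illustrated by enumerating minterms. The sketch below lists each of the 2^n minterms of n variables as a product-term string (the names x1, x2, ... and the "·"/"'" notation are illustrative choices):

```python
# Generate all minterms of n variables: every variable appears exactly
# once, either complemented (xi') or uncomplemented (xi).
from itertools import product

def minterms(n):
    terms = []
    for bits in product((0, 1), repeat=n):
        term = "·".join(f"x{i+1}" if b else f"x{i+1}'" for i, b in enumerate(bits))
        terms.append(term)
    return terms

print(minterms(2))   # the 2^2 = 4 minterms of two variables
```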

65.4 References
Fredrick J. Hill, and Gerald R. Peterson, 1974, Introduction to Switching Theory and Logical Design, Second
Edition, John Wiley & Sons, NY, ISBN 0-471-39882-9

Chapter 66

Propositional calculus

Propositional calculus (also called propositional logic, sentential calculus, sentential logic, or sometimes zeroth-
order logic) is the branch of logic concerned with the study of propositions (whether they are true or false) that are
formed by other propositions with the use of logical connectives, and how their value depends on the truth value of
their components.

66.1 Explanation
Logical connectives are found in natural languages. In English for example, some examples are "and" (conjunction),
"or" (disjunction), "not" (negation) and "if" (but only when used to denote material conditional).
The following is an example of a very simple inference within the scope of propositional logic:

Premise 1: If it's raining then it's cloudy.


Premise 2: It's raining.
Conclusion: It's cloudy.

Both premises and the conclusion are propositions. The premises are taken for granted and then with the application
of modus ponens (an inference rule) the conclusion follows.
As propositional logic is not concerned with the structure of propositions beyond the point where they can't be decom-
posed anymore by logical connectives, this inference can be restated replacing those atomic statements with statement
letters, which are interpreted as variables representing statements:

P → Q

P
Q
The same can be stated succinctly in the following way:

P → Q, P ⊢ Q

When P is interpreted as "It's raining" and Q as "it's cloudy" the above symbolic expressions can be seen to exactly
correspond with the original expression in natural language. Not only that, but they will also correspond with any
other inference of this form, which will be valid on the same basis that this inference is.
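The validity of this inference form can itself be confirmed with a truth table: ((P → Q) ∧ P) → Q is true under every valuation. A minimal sketch:

```python
# Truth-table check that modus ponens is valid:
# ((P -> Q) and P) -> Q holds for every assignment of P and Q.
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product((False, True), repeat=2):
    assert implies(implies(p, q) and p, q)
```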
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted
to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These
derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such


formulas is known as a derivation or proof and the last formula of the sequence is the theorem. The derivation may
be interpreted as proof of the proposition represented by the theorem.
When a formal system is used to represent formal logic, only statement letters are represented directly. The natural
language propositions that arise when they're interpreted are outside the scope of the system, and the relation between
the formal system and its interpretation is likewise outside the formal system itself.
Usually in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a
truth value of false. Truth-functional propositional logic and systems isomorphic to it, are considered to be zeroth-
order logic.

66.2 History
Main article: History of logic

Although propositional logic (which is interchangeable with propositional calculus) had been hinted by earlier
philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC[1] and expanded by his successor
Stoics. The logic was focused on propositions. This advancement was different from the traditional syllogistic logic,
which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no
longer understood. Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.[2]
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz.[3]
Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege's predicate logic was an advancement from the earlier propositional logic. One author describes predicate logic as combining "the distinctive features of syllogistic logic and propositional logic."[4] Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz. Truth trees were invented by Evert Willem Beth.[5] The invention of truth tables, however, is of uncertain attribution.
Within works by Frege[6] and Bertrand Russell[7] are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently).[6] Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce,[8] and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Ernst Schröder, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis.[7] Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables."[7]

66.3 Terminology
In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements,
and the rules, known as inference rules, are typically intended to be truth-preserving. In this setting, the rules
(which may include axioms) can then be used to derive (infer) formulas representing true statements from given
formulas representing true statements.
The set of axioms may be empty, a nonempty finite set, a countably infinite set, or be given by axiom schemata. A formal grammar recursively defines the expressions and well-formed formulas of the language. In addition, a semantics may be given which defines truth and valuations (or interpretations).
The language of a propositional calculus consists of
268 CHAPTER 66. PROPOSITIONAL CALCULUS

1. a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or vari-
ables, and
2. a set of operator symbols, variously interpreted as logical operators or logical connectives.

A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of
operator symbols according to the rules of the grammar.
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propo-
sitional constants represent some particular proposition, while propositional variables range over the set of all atomic
propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by
A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.

66.4 Basic concepts


The following outlines a standard propositional calculus. Many different formulations exist which are all more or less equivalent, but differ in the details of:

1. their language, that is, the particular collection of primitive symbols and operator symbols,
2. the set of axioms, or distinguished formulas, and
3. the set of inference rules.

Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a
number by a letter in mathematics, for instance, a = 5. All propositions require exactly one of two truth-values: true
or false. For example, let P be the proposition that it is raining outside. This will be true (P) if it is raining outside, and false otherwise (¬P).

We then define truth-functional operators, beginning with negation. ¬P represents the negation of P, which can be thought of as the denial of P. In the example above, ¬P expresses that it is not raining outside, or by a more standard reading: "It is not the case that it is raining outside." When P is true, ¬P is false; and when P is false, ¬P is true. ¬¬P always has the same truth-value as P.
Conjunction is a truth-functional connective which forms a proposition out of two simpler propositions, for example, P and Q. The conjunction of P and Q is written P ∧ Q, and expresses that each is true. We read P ∧ Q as "P and Q". For any two propositions, there are four possible assignments of truth values:
1. P is true and Q is true
2. P is true and Q is false
3. P is false and Q is true
4. P is false and Q is false

The conjunction of P and Q is true in case 1 and is false otherwise. Where P is the proposition that it is raining outside and Q is the proposition that a cold-front is over Kansas, P ∧ Q is true when it is raining outside and there is a cold-front over Kansas. If it is not raining outside, then P ∧ Q is false; and if there is no cold-front over Kansas, then P ∧ Q is false.

Disjunction resembles conjunction in that it forms a proposition out of two simpler propositions. We write it P ∨ Q, and it is read "P or Q". It expresses that either P or Q is true. Thus, in the cases listed above, the disjunction of P with Q is true in all cases except case 4. Using the example above, the disjunction expresses that it is either raining outside or there is a cold front over Kansas. (Note, this use of disjunction is supposed to resemble the use of the English word "or". However, it is most like the English inclusive "or", which can be used to express the truth of at least one of two propositions. It is not like the English exclusive "or", which expresses the truth of exactly one of two propositions. That is to say, the exclusive "or" is false when both P and Q are true (case 1). An example of the exclusive or is: "You may have a bagel or a pastry, but not both."

Often in natural language, given the appropriate context, the addendum "but not both" is omitted but implied. In mathematics, however, "or" is always inclusive or; if exclusive or is meant it will be specified, possibly by "xor".)

Material conditional also joins two simpler propositions, and we write P → Q, which is read "if P then Q". The proposition to the left of the arrow is called the antecedent, and the proposition to the right is called the consequent. (There is no such designation for conjunction or disjunction, since they are commutative operations.) It expresses that Q is true whenever P is true. Thus it is true in every case above except case 2, because this is the only case when P is true but Q is not. Using the example, if P then Q expresses that if it is raining outside then there is a cold-front over Kansas. The material conditional is often confused with physical causation. The material conditional, however, only relates two propositions by their truth-values, which is not the relation of cause and effect. It is contentious in the literature whether the material implication represents logical causation.

Biconditional joins two simpler propositions, and we write P ↔ Q, which is read "P if and only if Q". It expresses that P and Q have the same truth-value, and so P ↔ Q is true in cases 1 and 4 and false otherwise.

It is extremely helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
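Since each connective is a truth function, its truth table can be generated mechanically. As a minimal sketch in Python (the helper names implies and iff are our own, not part of the calculus):

```python
# Print the combined truth table for the five connectives discussed above.
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

def iff(p, q):
    """Biconditional: true exactly when p and q share a truth-value."""
    return p == q

print("P      Q      not P  P and Q  P or Q  P -> Q  P <-> Q")
for p, q in product([True, False], repeat=2):
    print(p, q, not p, p and q, p or q, implies(p, q), iff(p, q))
```

The four rows printed correspond exactly to cases 1 through 4 listed under conjunction above.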

66.4.1 Closure under operations


Propositional logic is closed under truth-functional connectives. That is to say, for any proposition φ, ¬φ is also a proposition. Likewise, for any propositions φ and ψ, φ ∧ ψ is a proposition, and similarly for disjunction, conditional, and biconditional. This implies that, for instance, φ ∧ ψ is a proposition, and so it can be conjoined with another proposition. In order to represent this, we need to use parentheses to indicate which proposition is conjoined with which. For instance, P ∧ Q ∧ R is not a well-formed formula, because we do not know if we are conjoining P ∧ Q with R or if we are conjoining P with Q ∧ R. Thus we must write either (P ∧ Q) ∧ R to represent the former, or P ∧ (Q ∧ R) to represent the latter. By evaluating the truth conditions, we see that both expressions have the same truth conditions (will be true in the same cases), and moreover that any proposition formed by arbitrary conjunctions will have the same truth conditions, regardless of the location of the parentheses. This means that conjunction is associative; however, one should not assume that parentheses never serve a purpose. For instance, the sentence P ∧ (Q ∨ R) does not have the same truth conditions as (P ∧ Q) ∨ R, so they are different sentences distinguished only by the parentheses. One can verify this by the truth-table method referenced above.
Note: For any arbitrary number of propositional constants, we can form a finite number of cases which list their possible truth-values. A simple way to generate this is by truth tables, in which one writes P, Q, ..., Z for any list of k propositional constants, that is to say, any list of propositional constants with k entries. Below this list, one writes 2^k rows, and below P one fills in the first half of the rows with true (or T) and the second half with false (or F). Below Q one fills in one-quarter of the rows with T, then one-quarter with F, then one-quarter with T and the last quarter with F. The next column alternates between true and false for each eighth of the rows, then sixteenths, and so on, until the last propositional constant varies between T and F for each row. This will give a complete listing of cases or truth-value assignments possible for those propositional constants.
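The halving pattern just described is exactly what a Cartesian product over [True, False] produces when True is listed first. A sketch (the function name is our own):

```python
# Generate all 2**k truth-value assignments in the order described above:
# under the first constant the first half of the rows are True, under the
# second constant alternating quarters, and so on.
from itertools import product

def truth_value_rows(constants):
    """All 2**k assignments for k propositional constants, as dicts."""
    return [dict(zip(constants, values))
            for values in product([True, False], repeat=len(constants))]

rows = truth_value_rows(["P", "Q", "R"])
print(len(rows))   # 8 rows for k = 3
print(rows[0])     # {'P': True, 'Q': True, 'R': True}
```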

66.4.2 Argument
The propositional calculus then defines an argument to be a list of propositions. A valid argument is a list of propositions, the last of which follows from, or is implied by, the rest. All other arguments are invalid. The simplest valid argument is modus ponens, one instance of which is the following list of propositions:

1. P → Q
2. P
∴ Q

This is a list of three propositions, each line is a proposition, and the last follows from the rest. The first two lines are called premises, and the last line the conclusion. We say that any proposition C follows from any set of propositions (P1, ..., Pn), if C must be true whenever every member of the set (P1, ..., Pn) is true. In the argument above, for

any P and Q, whenever P → Q and P are true, necessarily Q is true. Notice that, when P is true, we cannot consider cases 3 and 4 (from the truth table). When P → Q is true, we cannot consider case 2. This leaves only case 1, in which Q is also true. Thus Q is implied by the premises.
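The case analysis just given can be verified mechanically: enumerate the four truth-value cases and confirm that Q holds in every case where both premises hold. A sketch (helper name our own):

```python
# Verify modus ponens by brute force over the four truth-value cases.
from itertools import product

def implies(p, q):
    return (not p) or q

# Valid iff the conclusion Q holds in every case where both premises hold.
valid = all(q
            for p, q in product([True, False], repeat=2)
            if implies(p, q) and p)
print(valid)  # True
```

Only case 1 (P true, Q true) survives the filter, so the check succeeds.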
This generalizes schematically. Thus, where φ and ψ may be any propositions at all,

1. φ → ψ
2. φ
∴ ψ

Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such set), modus ponens is sufficient to prove all other argument forms in propositional logic, thus they may be considered to be derivative. Note, this is not true of the extension of propositional logic to other logics like first-order logic. First-order logic requires at least one additional rule of inference in order to obtain completeness.
The significance of argument in formal logic is that one may obtain new truths from established truths. In the first example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another set of propositions. For instance, given the set of propositions A = {P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R}, we can define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed, so P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R ∈ Γ. Also, from the first element of A, the last element, as well as modus ponens, R is a consequence, and so R ∈ Γ. Because we have not included sufficiently complete axioms, though, nothing else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce (P ∨ Q) ↔ (¬P → Q), this one is too weak to prove such a proposition.
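The weak deduction system Γ just described (reiteration plus modus ponens over the three given propositions) can be sketched by computing a closure. Formulas here are flat strings of our own devising, with the antecedent of a conditional matched by literal string equality; since no axioms are built in, only R gets added, as in the text:

```python
# Close a premise set under reiteration and modus ponens.
def close_under_modus_ponens(premises):
    derived = set(premises)                      # reiteration
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            ant, arrow, cons = f.partition(" -> ")
            if arrow and ant in derived and cons not in derived:
                derived.add(cons)                # modus ponens
                changed = True
    return derived

# A = {P v Q, ~Q & R, (P v Q) -> R}, written so the antecedent string
# "P v Q" matches the first premise exactly.
gamma = close_under_modus_ponens({"P v Q", "~Q & R", "P v Q -> R"})
print("R" in gamma)   # True: R follows, as in the text
print(len(gamma))     # 4: the three premises plus R
```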

66.5 Generic description of a propositional calculus


A propositional calculus is a formal system L = L(A, Ω, Z, I), where:

The alpha set A is a finite set of elements called proposition symbols or propositional variables. Syntactically speaking, these are the most basic elements of the formal language L, otherwise referred to as atomic formulas or terminal elements. In the examples to follow, the elements of A are typically the letters p, q, r, and so on.
The omega set Ω is a finite set of elements called operator symbols or logical connectives. The set Ω is partitioned into disjoint subsets as follows:

Ω = Ω0 ∪ Ω1 ∪ ... ∪ Ωj ∪ ... ∪ Ωm.

In this partition, Ωj is the set of operator symbols of arity j.

In the more familiar propositional calculi, Ω is typically partitioned as follows:

Ω1 = {¬},

Ω2 ⊆ {∧, ∨, →, ↔}.

A frequently adopted convention treats the constant logical values as operators of arity zero, thus:

Ω0 = {0, 1}.
66.6. EXAMPLE 1. SIMPLE AXIOM SYSTEM 271

Some writers use the tilde (~), or N, instead of ¬; and some use the ampersand (&), the prefixed K, or · instead of ∧. Notation varies even more for the set of logical values, with symbols like {false, true}, {F, T}, or {⊥, ⊤} all being seen in various contexts instead of {0, 1}.

The zeta set Z is a finite set of transformation rules that are called inference rules when they acquire logical applications.

The iota set I is a finite set of initial points that are called axioms when they receive logical interpretations.

The language of L, also known as its set of formulas, well-formed formulas, is inductively defined by the following rules:

1. Base: Any element of the alpha set A is a formula of L.

2. If p1, p2, ..., pj are formulas and f is in Ωj, then (f(p1, p2, ..., pj)) is a formula.

3. Closed: Nothing else is a formula of L.

Repeated applications of these rules permit the construction of complex formulas. For example:

1. By rule 1, p is a formula.
2. By rule 2, ¬p is a formula.
3. By rule 1, q is a formula.
4. By rule 2, (¬p ∨ q) is a formula.
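The inductive definition can be mirrored directly in code. In this sketch (the encoding and names are our own) an atomic formula is a string and a compound formula is a tuple whose head names an operator:

```python
# The alpha set and an arity table for the operator symbols (our choices),
# plus the three rules of the inductive definition.
ALPHA = {"p", "q", "r"}                       # rule 1: atomic formulas
ARITY = {"not": 1, "and": 2, "or": 2, "implies": 2, "iff": 2}

def is_formula(x):
    """True iff x is a well-formed formula under rules 1-3."""
    if isinstance(x, str):                    # rule 1 (base)
        return x in ALPHA
    if isinstance(x, tuple) and x and x[0] in ARITY:
        op, *args = x                         # rule 2: arity must match
        return len(args) == ARITY[op] and all(is_formula(a) for a in args)
    return False                              # rule 3 (closed)

print(is_formula(("or", ("not", "p"), "q")))  # True, like (not-p or q) above
print(is_formula(("and", "p")))               # False: wrong arity
```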

66.6 Example 1. Simple axiom system


Let L1 = L(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

The alpha set A is a finite set of symbols that is large enough to supply the needs of a given discussion, for example:

A = {p, q, r, s, t, u}.

Of the three connectives for conjunction, disjunction, and implication (∧, ∨, and →), one can be taken as primitive and the other two can be defined in terms of it and negation (¬).[9] Indeed, all of the logical connectives can be defined in terms of a sole sufficient operator. The biconditional (↔) can of course be defined in terms of conjunction and implication, with a ↔ b defined as (a → b) ∧ (b → a).

Ω = Ω1 ∪ Ω2

Ω1 = {¬},
Ω2 = {→}.

An axiom system discovered by Jan Łukasiewicz formulates a propositional calculus in this language as follows. The axioms are all substitution instances of:

(p → (q → p))

((p → (q → r)) → ((p → q) → (p → r)))

((¬p → ¬q) → (q → p))

The rule of inference is modus ponens (i.e., from p and (p → q), infer q). Then a ∨ b is defined as ¬a → b, and a ∧ b is defined as ¬(a → ¬b). This system is used in the Metamath set.mm formal proof database.
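It can be checked by brute force that each axiom schema is a tautology and that the defined connectives behave as expected; a sketch (function names our own, with the axioms transcribed as Python truth functions):

```python
# Check that the three axiom schemata are tautologies, and that the
# defined connectives agree with the usual "or" and "and".
from itertools import product

def imp(a, b):
    return (not a) or b                      # material conditional

ax1 = lambda p, q, r: imp(p, imp(q, p))
ax2 = lambda p, q, r: imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))
ax3 = lambda p, q, r: imp(imp(not p, not q), imp(q, p))

for axiom in (ax1, ax2, ax3):
    assert all(axiom(p, q, r) for p, q, r in product([True, False], repeat=3))

# a v b := ~a -> b, and a & b := ~(a -> ~b), on every assignment:
assert all(imp(not a, b) == (a or b) and (not imp(a, not b)) == (a and b)
           for a, b in product([True, False], repeat=2))
print("axioms are tautologies; defined connectives agree")
```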

66.7 Example 2. Natural deduction system


Let L2 = L(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

The alpha set A is a finite set of symbols that is large enough to supply the needs of a given discussion, for example:

A = {p, q, r, s, t, u}.

The omega set Ω = Ω1 ∪ Ω2 partitions as follows:

Ω1 = {¬},

Ω2 = {∧, ∨, →, ↔}.

In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the
inference rules of a so-called natural deduction system. The particular system presented here has no initial points,
which means that its interpretation for logical applications derives its theorems from an empty axiom set.

The set of initial points is empty, that is, I = ∅.


The set of transformation rules, Z , is described as follows:

Our propositional calculus has ten inference rules. These rules allow us to derive other true formulas given a set of formulas that are assumed to be true. The first nine simply state that we can infer certain well-formed formulas from other well-formed formulas. The last rule however uses hypothetical reasoning in the sense that in the premise of the rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer a certain other formula. Since the first nine rules don't do this they are usually described as non-hypothetical rules, and the last one as a hypothetical rule.
In describing the transformation rules, we may introduce a metalanguage symbol ⊢. It is basically a convenient shorthand for saying "infer that". The format is Γ ⊢ ψ, in which Γ is a (possibly empty) set of formulas called premises, and ψ is a formula called the conclusion. The transformation rule Γ ⊢ ψ means that if every proposition in Γ is a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, considering the rule Conjunction introduction below, whenever Γ has more than one formula we can always safely reduce it to one formula using conjunction. So, for short, from then on we may represent Γ as one formula instead of a set. Another omission for convenience is when Γ is an empty set, in which case Γ may not appear.

Negation introduction: From (p → q) and (p → ¬q), infer ¬p.

That is, {(p → q), (p → ¬q)} ⊢ ¬p.

Negation elimination: From ¬p, infer (p → r).

That is, {¬p} ⊢ (p → r).

Double negation elimination: From ¬¬p, infer p.

That is, ¬¬p ⊢ p.

Conjunction introduction: From p and q, infer (p ∧ q).

That is, {p, q} ⊢ (p ∧ q).

Conjunction elimination: From (p ∧ q), infer p. From (p ∧ q), infer q.

That is, (p ∧ q) ⊢ p and (p ∧ q) ⊢ q.

Disjunction introduction: From p, infer (p ∨ q). From q, infer (p ∨ q).

That is, p ⊢ (p ∨ q) and q ⊢ (p ∨ q).

Disjunction elimination: From (p ∨ q) and (p → r) and (q → r), infer r.

That is, {p ∨ q, p → r, q → r} ⊢ r.

Biconditional introduction: From (p → q) and (q → p), infer (p ↔ q).

That is, {p → q, q → p} ⊢ (p ↔ q).

Biconditional elimination: From (p ↔ q), infer (p → q). From (p ↔ q), infer (q → p).

That is, (p ↔ q) ⊢ (p → q) and (p ↔ q) ⊢ (q → p).

Modus ponens (conditional elimination): From p and (p → q), infer q.

That is, {p, p → q} ⊢ q.

Conditional proof (conditional introduction): From [accepting p allows a proof of q], infer (p → q).

That is, (p ⊢ q) ⊢ (p → q).
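Each non-hypothetical rule can be checked to be truth-preserving by brute force over truth-value assignments. A small sketch for three of the rules (encoding them as Python truth functions is our own device):

```python
# Spot-check that a sample of the non-hypothetical rules above are
# truth-preserving: the conclusion holds on every assignment (p, q, r)
# that makes all of the rule's premises true.
from itertools import product

def imp(a, b):
    return (not a) or b          # material conditional

# (premises, conclusion) encoded as truth functions of (p, q, r)
rules = {
    "negation introduction": (lambda p, q, r: imp(p, q) and imp(p, not q),
                              lambda p, q, r: not p),
    "disjunction elimination": (lambda p, q, r: (p or q) and imp(p, r) and imp(q, r),
                                lambda p, q, r: r),
    "modus ponens": (lambda p, q, r: p and imp(p, q),
                     lambda p, q, r: q),
}

for name, (premises, conclusion) in rules.items():
    assert all(conclusion(p, q, r)
               for p, q, r in product([True, False], repeat=3)
               if premises(p, q, r)), name
print("sampled rules are truth-preserving")
```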

66.8 Basic and derived argument forms

66.9 Proofs in propositional calculus


One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations
of logical equivalence between propositional formulas. These relationships are determined by means of the available
transformation rules, sequences of which are called derivations or proofs.
In the discussion to follow, a proof is presented as a sequence of numbered lines, with each line consisting of a single formula followed by a reason or justification for introducing that formula. Each premise of the argument, that is, an assumption introduced as an hypothesis of the argument, is listed at the beginning of the sequence and is marked as a "premise" in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see proof trees.)

66.9.1 Example of a proof


To be shown that A → A.

One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be arranged as follows:

Interpret A ⊢ A as "Assuming A, infer A". Read ⊢ A → A as "Assuming nothing, infer that A implies A", or "It is a tautology that A implies A", or "It is always true that A implies A".

66.10 Soundness and completeness of the rules


The crucial properties of this set of rules are that they are sound and complete. Informally this means that the rules are correct and that no other rules are required. These claims can be made more formal as follows.
We define a "truth assignment" as a function that maps propositional variables to true or false. Informally such a truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain statements are true and others are not. The semantics of formulas can then be formalized by defining for which state of affairs they are considered to be true, which is what is done by the following definition.
We define when such a truth assignment A satisfies a certain well-formed formula with the following rules:

A satisfies the propositional variable P if and only if A(P) = true

A satisfies ¬φ if and only if A does not satisfy φ

A satisfies (φ ∧ ψ) if and only if A satisfies both φ and ψ

A satisfies (φ ∨ ψ) if and only if A satisfies at least one of either φ or ψ

A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ

A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them
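These clauses translate directly into a recursive evaluator. In the sketch below (an encoding of our own choosing) a truth assignment is a dict and formulas are operator-labelled tuples:

```python
# A recursive evaluator mirroring the satisfaction clauses above.
def satisfies(assignment, formula):
    """Does the truth assignment satisfy the well-formed formula?"""
    if isinstance(formula, str):                     # propositional variable
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not satisfies(assignment, args[0])
    a, b = (satisfies(assignment, x) for x in args)  # binary connectives
    if op == "and":
        return a and b
    if op == "or":
        return a or b
    if op == "implies":
        return (not a) or b
    if op == "iff":
        return a == b
    raise ValueError(f"unknown connective: {op}")

A = {"P": True, "Q": False}
print(satisfies(A, ("implies", "Q", "P")))   # True: Q is false
print(satisfies(A, ("and", "P", "Q")))       # False: Q is false
```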

With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas. Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies) a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it means for the set of inference rules to be sound and complete:
Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ, then S semantically entails φ.
Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ, then S syntactically entails φ.
For the above set of rules this is indeed the case.
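For finitely many formulas, semantic entailment is decidable by enumerating all truth assignments over the variables that occur. A brute-force sketch (the tuple encoding and function names are our own):

```python
# Decide S |= phi by checking every truth assignment over the variables.
from itertools import product

def holds(assignment, formula):
    if isinstance(formula, str):                 # propositional variable
        return assignment[formula]
    op, *args = formula
    vals = [holds(assignment, x) for x in args]
    if op == "not":
        return not vals[0]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    if op == "implies":
        return (not vals[0]) or vals[1]
    raise ValueError(op)

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(x) for x in formula[1:]))

def entails(S, phi):
    """True iff every assignment satisfying all of S also satisfies phi."""
    vs = sorted(variables(phi).union(*(variables(f) for f in S)))
    return all(holds(dict(zip(vs, row)), phi)
               for row in product([True, False], repeat=len(vs))
               if all(holds(dict(zip(vs, row)), f) for f in S))

print(entails([("implies", "P", "Q"), "P"], "Q"))   # True (modus ponens)
print(entails(["P"], "Q"))                          # False
```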

66.10.1 Sketch of a soundness proof


(For most logical systems, this is the comparatively simple direction of proof)
Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B and C range over sentences. For "G syntactically entails A" we write "G proves A". For "G semantically entails A" we write "G implies A".
We want to show: (∀A)(∀G) (if G proves A, then G implies A).
We note that "G proves A" has an inductive definition, and that gives us the immediate resources for demonstrating claims of the form "If G proves A, then ...". So our proof proceeds by induction.

1. Basis. Show: If A is a member of G, then G implies A.

2. Basis. Show: If A is an axiom, then G implies A.

3. Inductive step (induction on n, the length of the proof):

(a) Assume for arbitrary G and A that if G proves A in n or fewer steps, then G implies A.
(b) For each possible application of a rule of inference at step n + 1, leading to a new theorem B, show that
G implies B.

Notice that basis step 2 can be omitted for natural deduction systems because they have no axioms. When used, step 2 involves showing that each of the axioms is a (semantic) logical truth.
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The proof is simple, since the semantic fact that a set implies any of its members is also trivial.) The Inductive step will systematically cover all the further sentences that might be provable, by considering each case where we might reach a logical conclusion using an inference rule, and shows that if a new sentence is provable, it is also logically implied. (For example, we might have a rule telling us that from A we can derive "A or B". In step 3(a) we assume that if A is provable it is implied. We also know that if A is provable then "A or B" is provable. We have to show that then "A or B" too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But any valuation making A true makes "A or B" true, by the defined semantics for "or". So any valuation which makes

all of G true makes "A or B" true. So "A or B" is implied.) Generally, the Inductive step will consist of a lengthy but simple case-by-case analysis of all the rules of inference, showing that each preserves semantic implication.
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or following by a rule; so if all of those are semantically implied, the deduction calculus is sound.

66.10.2 Sketch of completeness proof


(This is usually the much harder direction of proof.)
We adopt the same notational conventions as above.
We want to show: If G implies A, then G proves A. We proceed by contraposition: We show instead that if G does
not prove A then G does not imply A.

1. G does not prove A. (Assumption)


2. If G does not prove A, then we can construct an (infinite) Maximal Set, G∗, which is a superset of G and which also does not prove A.

(a) Place an ordering on all the sentences in the language (e.g., shortest first, and equally long ones in extended alphabetical ordering), and number them (E1, E2, ...)
(b) Define a series Gn of sets (G0, G1, ...) inductively:

i. G0 = G
ii. If Gk ∪ {Ek+1} proves A, then Gk+1 = Gk
iii. If Gk ∪ {Ek+1} does not prove A, then Gk+1 = Gk ∪ {Ek+1}
(c) Define G∗ as the union of all the Gn. (That is, G∗ is the set of all the sentences that are in any Gn.)
(d) It can be easily shown that

i. G∗ contains (is a superset of) G (by (b.i));

ii. G∗ does not prove A (because if it proves A then some sentence was added to some Gn which caused it to prove A; but this was ruled out by definition); and
iii. G∗ is a Maximal Set with respect to A: if any more sentences whatever were added to G∗, it would prove A. (Because if it were possible to add any more sentences, they should have been added when they were encountered during the construction of the Gn, again by definition.)
3. If G∗ is a Maximal Set with respect to A, then it is "truth-like". This means that it contains C only if it does not contain ¬C; if it contains C and contains "If C then B" then it also contains B; and so forth.
4. If G∗ is truth-like there is a "G∗-Canonical" valuation of the language: one that makes every sentence in G∗ true and everything outside G∗ false while still obeying the laws of semantic composition in the language.
5. A G∗-canonical valuation will make our original set G all true, and make A false.
6. If there is a valuation on which G are true and A is false, then G does not (semantically) imply A.

QED

66.10.3 Another outline for a completeness proof


If a formula is a tautology, then there is a truth table for it which shows that each valuation yields the value true for the
formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth
or falsity of the subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional
variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies
S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables
have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the
logic is complete.

66.11 Interpretation of a truth-functional propositional calculus


An interpretation of a truth-functional propositional calculus P is an assignment to each propositional symbol of
P of one or the other (but not both) of the truth values truth (T) and falsity (F), and an assignment to the connective
symbols of P of their usual truth-functional meanings. An interpretation of a truth-functional propositional calculus
may also be expressed in terms of truth tables.[11]
For n distinct propositional symbols there are 2^n distinct possible interpretations. For any particular symbol a, for example, there are 2^1 = 2 possible interpretations:

1. a is assigned T, or
2. a is assigned F.

For the pair a, b there are 2^2 = 4 possible interpretations:

1. both are assigned T,


2. both are assigned F,
3. a is assigned T and b is assigned F, or
4. a is assigned F and b is assigned T.[11]

Since P has ℵ0, that is, denumerably many, propositional symbols, there are 2^ℵ0 = 𝔠, and therefore uncountably many, distinct possible interpretations of P.[11]

66.11.1 Interpretation of a sentence of truth-functional propositional logic


Main article: Interpretation (logic)

If φ and ψ are formulas of P and I is an interpretation of P then:

A sentence of propositional logic is true under an interpretation I iff I assigns the truth value T to that sentence. If a sentence is true under an interpretation, then that interpretation is called a model of that sentence.

φ is false under an interpretation I iff φ is not true under I.[11]

A sentence of propositional logic is logically valid if it is true under every interpretation.

⊨ φ means that φ is logically valid.

A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff there is no interpretation under which φ is true and ψ is false.

A sentence of propositional logic is consistent iff it is true under at least one interpretation. It is inconsistent if it is not consistent.

Some consequences of these definitions:

For any given interpretation a given formula is either true or false.[11]

No formula is both true and false under the same interpretation.[11]

φ is false for a given interpretation iff ¬φ is true for that interpretation; and φ is true under an interpretation iff ¬φ is false under that interpretation.[11]

If φ and (φ → ψ) are both true under a given interpretation, then ψ is true under that interpretation.[11]

If ⊨P φ and ⊨P (φ → ψ), then ⊨P ψ.[11]

¬φ is true under I iff φ is not true under I.

(φ → ψ) is true under I iff either φ is not true under I or ψ is true under I.[11]

A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff (φ → ψ) is logically valid, that is, φ ⊨P ψ iff ⊨P (φ → ψ).[11]

66.12 Alternative calculus


It is possible to define another version of propositional calculus, which defines most of the syntax of the logical operators by means of axioms, and which uses only one inference rule.

66.12.1 Axioms

Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:

THEN-1: φ → (χ → φ)
THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
AND-1: φ ∧ χ → φ
AND-2: φ ∧ χ → χ
AND-3: φ → (χ → (φ ∧ χ))
OR-1: φ → φ ∨ χ
OR-2: χ → φ ∨ χ
OR-3: (φ → ψ) → ((χ → ψ) → (φ ∨ χ → ψ))
NOT-1: (φ → χ) → ((φ → ¬χ) → ¬φ)
NOT-2: φ → (¬φ → χ)
NOT-3: φ ∨ ¬φ

Axiom THEN-2 may be considered to be a "distributive property of implication with respect to implication."

Axioms AND-1 and AND-2 correspond to conjunction elimination. The relation between AND-1 and AND-
2 reects the commutativity of the conjunction operator.

Axiom AND-3 corresponds to conjunction introduction.

Axioms OR-1 and OR-2 correspond to disjunction introduction. The relation between OR-1 and OR-2
reflects the commutativity of the disjunction operator.

Axiom NOT-1 corresponds to reductio ad absurdum.

Axiom NOT-2 says that anything can be deduced from a contradiction.

Axiom NOT-3 is called "tertium non datur" (Latin: a third is not given) and reflects the semantic valuation
of propositional formulas: a formula can have a truth-value of either true or false. There is no third truth-value,
at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.

66.12.2 Inference rule

The inference rule is modus ponens: from φ and φ → ψ, infer ψ.

66.12.3 Meta-inference rule

Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to
the right of the turnstile. Then the deduction theorem can be stated as follows:

If the sequence

φ1, φ2, ..., φn, φ ⊢ ψ

has been demonstrated, then it is also possible to demonstrate the sequence

φ1, φ2, ..., φn ⊢ φ → ψ

This deduction theorem (DT) is not itself formulated with propositional calculus: it is not a theorem of propositional
calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems
about the soundness or completeness of propositional calculus.
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as
another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof
inference rule which is part of the first version of propositional calculus introduced in this article.
The converse of DT is also valid:

If the sequence

φ1, φ2, ..., φn ⊢ φ → ψ

has been demonstrated, then it is also possible to demonstrate the sequence

φ1, φ2, ..., φn, φ ⊢ ψ

in fact, the validity of the converse of DT is almost trivial compared to that of DT:

If

φ1, ..., φn ⊢ φ → ψ

then

(1) φ1, ..., φn, φ ⊢ φ → ψ

(2) φ1, ..., φn, φ ⊢ φ

and from (1) and (2) can be deduced

(3) φ1, ..., φn, φ ⊢ ψ

by means of modus ponens, Q.E.D.

The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example,
the axiom AND-1,

φ ∧ ψ → φ,

can be transformed by means of the converse of the deduction theorem into the inference rule

φ ∧ ψ ⊢ φ,

which is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propositional calculus.

66.12.4 Example of a proof


The following is an example of a (syntactical) demonstration, involving only axioms THEN-1 and THEN-2:
Prove: A → A (Reflexivity of implication).
Proof:

1. (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))

Axiom THEN-2 with φ = A, ψ = B → A, χ = A

2. A → ((B → A) → A)
Axiom THEN-1 with φ = A, ψ = B → A
3. (A → (B → A)) → (A → A)
From (1) and (2) by modus ponens.
4. A → (B → A)
Axiom THEN-1 with φ = A, ψ = B
5. A → A
From (3) and (4) by modus ponens.
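Since the calculus is sound, every line of this demonstration is a tautology, which can be confirmed semantically by brute force. A sketch, with → encoded as material implication:

```python
from itertools import product

def imp(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# The five lines of the proof of A -> A, as functions of A and B.
lines = [
    lambda A, B: imp(imp(A, imp(imp(B, A), A)),
                     imp(imp(A, imp(B, A)), imp(A, A))),  # 1: THEN-2 instance
    lambda A, B: imp(A, imp(imp(B, A), A)),               # 2: THEN-1 instance
    lambda A, B: imp(imp(A, imp(B, A)), imp(A, A)),       # 3: from 1, 2 by MP
    lambda A, B: imp(A, imp(B, A)),                       # 4: THEN-1 instance
    lambda A, B: imp(A, A),                               # 5: from 3, 4 by MP
]

for n, f in enumerate(lines, 1):
    assert all(f(A, B) for A, B in product([False, True], repeat=2))
    print(f"line {n} is a tautology")
```

This checks each line against all four valuations of A and B; it verifies the conclusion semantically, though of course it does not replace the syntactic derivation itself.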

66.13 Equivalence to equational logics


The preceding alternative calculus is an example of a Hilbert-style deduction system. In the case of propositional
systems the axioms are terms built with logical connectives and the only inference rule is modus ponens. Equational
logic as standardly used informally in high school algebra is a different kind of calculus from Hilbert systems. Its
theorems are equations and its inference rules express the properties of equality, namely that it is a congruence on
terms that admits substitution.
Classical propositional calculus as described above is equivalent to Boolean algebra, while intuitionistic propositional
calculus is equivalent to Heyting algebra. The equivalence is shown by translation in each direction of the theorems
of the respective systems. Theorems φ of classical or intuitionistic propositional calculus are translated as equations
φ = 1 of Boolean or Heyting algebra respectively. Conversely theorems x = y of Boolean or Heyting algebra are
translated as theorems (x → y) ∧ (y → x) of classical or intuitionistic calculus respectively, for which x ≡ y is a
standard abbreviation. In the case of Boolean algebra x = y can also be translated as (x ∧ y) ∨ (¬x ∧ ¬y), but this
translation is incorrect intuitionistically.
In both Boolean and Heyting algebra, inequality x ≤ y can be used in place of equality. The equality x = y is
expressible as a pair of inequalities x ≤ y and y ≤ x. Conversely the inequality x ≤ y is expressible as the equality
x ∧ y = x, or as x ∨ y = y. The significance of inequality for Hilbert-style systems is that it corresponds to the
latter's deduction or entailment symbol ⊢. An entailment

φ1, φ2, ..., φn ⊢ ψ

is translated in the inequality version of the algebraic framework as

φ1 ∧ φ2 ∧ ... ∧ φn ≤ ψ

Conversely the algebraic inequality x ≤ y is translated as the entailment

x ⊢ y
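Over the two-element Boolean algebra {0, 1}, where meet is min, join is max, and the lattice order coincides with the numeric order, these translations can be verified exhaustively. A minimal sketch:

```python
from itertools import product

# In the two-element Boolean algebra, meet is min, join is max,
# and the lattice order x <= y is ordinary numeric order on {0, 1}.
for x, y in product([0, 1], repeat=2):
    leq = x <= y
    assert leq == (min(x, y) == x)           # x <= y  iff  x AND y = x
    assert leq == (max(x, y) == y)           # x <= y  iff  x OR  y = y
    assert (x == y) == (x <= y and y <= x)   # equality as a pair of inequalities

print("all translations agree on {0, 1}")
```

The same identities hold in any Boolean or Heyting algebra; the two-element case is just the smallest model in which to see them.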

The difference between implication x → y and inequality or entailment x ≤ y or x ⊢ y is that the former is internal
to the logic while the latter is external. Internal implication between two terms is another term of the same kind.
Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and
is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily
understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.
Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as
described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a
more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as
the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition
in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism
per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any
proof will do and there is no point in distinguishing them.

66.14 Graphical calculi


It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include
many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What's
more, many of these families of formal structures are especially well-suited for use in logic.
For example, there are many families of graphs that are close enough analogues of formal languages that the concept
of a calculus is quite easily and naturally extended to them. Indeed, many species of graphs arise as parse graphs
in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on
formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs,
simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many
advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings
to parse graphs is called parsing and the inverse mapping from parse graphs to strings is achieved by an operation
that is called traversing the graph.

66.15 Other logical calculi


Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways.
(Aristotelian syllogistic calculus, which is largely supplanted in modern logic, is in some ways simpler but in other
ways more complex than propositional calculus.) The most immediate way to develop a more complex logical
calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
First-order logic (a.k.a. first-order predicate logic) results when the atomic sentences of propositional logic are
broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some
new ones introduced. (For example, from "All dogs are mammals" we may infer "If Rover is a dog then Rover is
a mammal".) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit
axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these;
others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of
first-order logic. Thus, it makes sense to refer to propositional logic as "zeroth-order logic", when comparing it with
these logics.
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from
"Necessarily p" we may infer that p. From p we may infer "It is possible that p". The translation between modal
logics and algebraic logics concerns classical and intuitionistic logics but with the introduction of a unary operator on
Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the
case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity
is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and
conjunction.
Many-valued logics are those allowing sentences to have values other than true and false. (For example, "neither" and
"both" are standard "extra values"; "continuum logic" allows each sentence to have any of an infinite number of "degrees
of truth" between true and false.) These logics often require calculational devices quite distinct from propositional
calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values),
many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the
values form an algebra that is not Boolean.

66.16 Solvers

Finding solutions to propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g.,
DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended
the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
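The core branch-and-backtrack idea behind DPLL fits in a few lines. A minimal sketch, with clauses in conjunctive normal form as lists of signed integers (DIMACS-style, -n meaning NOT n) and without the unit-propagation and pure-literal refinements of the real algorithm:

```python
def dpll(clauses):
    """Return True iff the CNF formula (a list of clauses) is satisfiable.
    Each clause is a list of nonzero ints; -n stands for NOT n."""
    if not clauses:
        return True                      # no clauses left: all satisfied
    if any(not c for c in clauses):
        return False                     # an empty clause: conflict
    lit = clauses[0][0]                  # branch on the first literal

    def assign(lit):
        # Drop clauses satisfied by lit; delete the falsified literal -lit.
        return [[l for l in c if l != -lit] for c in clauses if lit not in c]

    return dpll(assign(lit)) or dpll(assign(-lit))

# (P or Q) and (not P or Q) and (not Q)  -- unsatisfiable
print(dpll([[1, 2], [-1, 2], [-2]]))     # False
# (P or Q) and (not P)                   -- satisfiable with Q = true
print(dpll([[1, 2], [-1]]))              # True
```

Production solvers add unit propagation, clause learning, and careful branching heuristics on top of this skeleton, which is what makes them fast on large industrial instances.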

66.17 See also

66.17.1 Higher logical levels


First-order logic
Second-order propositional logic
Second-order logic
Higher-order logic

66.17.2 Related topics

66.18 References
[1] Bobzien, Susanne (1 January 2016). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy via Stanford
Encyclopedia of Philosophy.
[2] Marenbon, John (2007). Medieval philosophy: an historical and philosophical introduction. Routledge. p. 137.
[3] Peckhaus, Volker (1 January 2014). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy via Stanford Encyclopedia of Philosophy.
[4] Hurley, Patrick (2007). A Concise Introduction to Logic 10th edition. Wadsworth Publishing. p. 392.
[5] Beth, Evert W.; Semantic entailment and formal derivability, series: Mededelingen van de Koninklijke Nederlandse
Akademie van Wetenschappen, Afdeling Letterkunde, Nieuwe Reeks, vol. 18, no. 13, Noord-Hollandsche Uitg. Mij.,
Amsterdam, 1955, pp. 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University
Press, 1969
[6] Truth in Frege
[7] Russell: the Journal of Bertrand Russell Studies.
[8] Anellis, Irving H. (2012). "Peirce's Truth-functional Analysis and the Origin of the Truth Table". History and Philosophy
of Logic. 33: 87–97. doi:10.1080/01445340.2011.621702.
[9] Wernick, William (1942) "Complete Sets of Logical Functions", Transactions of the American Mathematical Society 51,
pp. 117–132.
[10] Toida, Shunichi (2 August 2009). "Proof of Implications". CS381 Discrete Structures/Discrete Mathematics Web Course
Material. Department of Computer Science, Old Dominion University. Retrieved 10 March 2010.
[11] Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of
California Press. ISBN 0-520-02356-0.

66.19 Further reading


Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw-Hill, 1970. 2nd edition,
McGraw-Hill, 1978.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press,
Cambridge, UK.
Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.

66.19.1 Related works


Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2.

66.20 External links


Klement, Kevin C. (2006), "Propositional Logic", in James Fieser and Bradley Dowden (eds.), Internet Encyclopedia of Philosophy, Eprint.

Formal Predicate Calculus, contains a systematic formal development along the lines of Alternative calculus

forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sentential logic.

Chapter 2 / Propositional Logic from Logic In Action


Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and
a sequent can be a single formula prefixed with > and having no commas)
Chapter 67

Propositional directed acyclic graph

A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A
Boolean function can be represented as a rooted, directed acyclic graph of the following form:

Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable.

Non-leaves are ∧ (logical and), ∨ (logical or) and ¬ (logical not).

∧- and ∨-nodes have at least one child.

¬-nodes have exactly one child.

Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled
with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which
evaluates to 1 if and only if x = 1. The Boolean function represented by a ∧-node is the one that evaluates to
1, if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ∨-node represents the Boolean
function that evaluates to 1, if and only if the Boolean function of at least one child evaluates to 1. Finally, a ¬-node
represents the complementary Boolean function of its child, i.e. the one that evaluates to 1, if and only if the Boolean
function of its child evaluates to 0.
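These evaluation rules translate directly into a recursive walk over the graph. A minimal sketch (the tuple-based node representation is illustrative, not a standard library):

```python
# Each node is a tuple: ("const", bool), ("var", name),
# ("and", children), ("or", children), or ("not", child).
def evaluate(node, assignment):
    """Evaluate a PDAG node under a dict mapping variable names to booleans."""
    kind = node[0]
    if kind == "const":
        return node[1]                  # leaf labeled true or false
    if kind == "var":
        return assignment[node[1]]      # leaf x is true iff x = 1
    if kind == "and":
        return all(evaluate(c, assignment) for c in node[1])
    if kind == "or":
        return any(evaluate(c, assignment) for c in node[1])
    if kind == "not":
        return not evaluate(node[1], assignment)
    raise ValueError(f"unknown node kind: {kind}")

# (x1 AND x2) OR (NOT x3); shared subgraphs are allowed since this is a DAG.
x1, x2, x3 = ("var", "x1"), ("var", "x2"), ("var", "x3")
g = ("or", [("and", [x1, x2]), ("not", x3)])
print(evaluate(g, {"x1": True, "x2": False, "x3": False}))  # True
```

In a real PDAG implementation shared subgraphs would be evaluated once and memoized; the sketch recomputes them for simplicity.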

67.1 PDAG, BDD, and NNF


Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some
particular properties. The following pictures represent the Boolean function f(x1, x2, x3) = x1 x2 x3 +
x1 x2 + x2 x3:

67.2 See also


Data structure

Boolean satisfiability problem

Proposition

67.3 References
M. Wachter & R. Haenni, Propositional DAGs: a New Graph-Based Language for Representing Boolean
Functions, KR'06, 10th International Conference on Principles of Knowledge Representation and Reasoning,
Lake District, UK, 2006.


M. Wachter & R. Haenni, Probabilistic Equivalence Checking with Propositional DAGs, Technical Report
iam-2006-001, Institute of Computer Science and Applied Mathematics, University of Bern, Switzerland,
2006.

M. Wachter, R. Haenni & J. Jonczy, Reliability and Diagnostics of Modular Systems: a New Probabilistic
Approach, DX'06, 18th International Workshop on Principles of Diagnosis, Pearanda de Duero, Burgos,
Spain, 2006.
Chapter 68

Propositional formula

In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth
value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A
propositional formula may also be called a propositional expression, a sentence, or a sentential formula.
A propositional formula is constructed from simple propositions, such as "five is greater than three" or propositional
variables such as P and Q, using connectives such as NOT, AND, OR, and IMPLIES; for example:

(P AND NOT Q) IMPLIES (P OR Q).

In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a
propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under
discussion, just like an expression such as "x + y" is not a value, but denotes a value. In some contexts, maintaining
the distinction may be of importance.

68.1 Propositions
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be
either simple or compound.[1] Compound propositions are considered to be linked by sentential connectives, some
of the most common of which are AND, OR, IF ... THEN ..., NEITHER ... NOR..., "... IS EQUIVALENT
TO ... . The linking semicolon ";", and connective BUT are considered to be expressions of AND. A sequence
of discrete sentences are considered to be linked by AND"s, and formal analysis applies a recursive parenthesis
rule with respect to sequences of simple propositions (see more below about well-formed formulas).

For example: The assertion: This cow is blue. That horse is orange but this horse here is purple. is
actually a compound proposition linked by AND"s: ( (This cow is blue AND that horse is orange)
AND this horse here is purple ) .

Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular
object of sensation e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there, behind the rocks.").[2] Thus
the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a
subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and
perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.

Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off",
"Tomorrow is Friday".

For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple
sentences, although the result will probably sound stilted.


68.1.1 Relationship between propositional and predicate formulas


The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of
propositions".[3] It breaks a simple sentence down into two parts (i) its subject (the object (singular or plural) of
discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)).
The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing
together) of symbols) into a form with the following blank-subject structure "___|predicate", and the predicate in
turn generalized to all things with that property.

Example: "This blue pig has wings" becomes two sentences in the propositional calculus: "This pig has
wings" AND "This pig is blue", whose internal structure is not considered. In contrast, in the predicate
calculus, the first sentence breaks into "this pig" as the subject, and "has wings" as the predicate. Thus
it asserts that object "this pig" is a member of the class (set, collection) of "winged things". The second
sentence asserts that object "this pig" has an attribute "blue" and thus is a member of the class of "blue
things". One might choose to write the two sentences connected with AND as:

p|W AND p|B

The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that
it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things",
either we find p to be a member of this domain or not. Thus we have a relationship W (wingedness) between p (pig)
and { T, F }: W(p) evaluates to { T, F }. Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F
}. So we now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.:

( B(p) AND W(p) ) evaluates to { T, F }

In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. are treated by the predicate
calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There
exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the
formal validity of the following statement:

"All blue pigs have wings but some pigs have no wings, hence some pigs are not blue."

68.1.2 Identity
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the
propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain
a theory of IDENTITY.[4] Some authors refer to predicate logic with identity to emphasize this extension. See
more about this below.

68.2 An algebra of propositions, the propositional calculus


An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called
variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &,
∨, =, ≡, ∧, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said
to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the
algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than
in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra (a formula) to have some usefulness outside the algebra the
symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula
is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances
or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and
evaluation-methods is usually called the propositional calculus or the sentential calculus.

While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the
commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and
NOT).

68.2.1 Usefulness of propositional formulas

Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and
then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument
sound?

"Given that consciousness is sufficient for an artificial intelligence and only conscious entities can pass
the Turing test, before we can conclude that a robot is an artificial intelligence the robot must pass the
Turing test."

Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction
and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols)
from truth tables. For example, one might write down a truth table for how binary addition should behave given the
addition of variables b and a and "carry_in" ci, and the results "carry_out" co and "sum" Σ:

Example: in row 5, ( (b+a) + ci ) = ( (1+0) + 1 ) = the number 2. Written as a binary number this is 10₂,
where "co"=1 and Σ=0 as shown in the right-most columns.
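The truth table described here is that of a full adder, and the formulas synthesized from it are the standard sum-of-products results: the sum is a three-way XOR and the carry-out is the majority function of the inputs. A sketch:

```python
from itertools import product

def full_adder(b, a, ci):
    """Synthesized from the truth table: sum is a three-way XOR,
    carry-out is the majority function of the three inputs."""
    s = b ^ a ^ ci
    co = (b & a) | (b & ci) | (a & ci)
    return co, s

# Every row of the table agrees with binary addition: b + a + ci = 2*co + s.
for b, a, ci in product([0, 1], repeat=3):
    co, s = full_adder(b, a, ci)
    assert b + a + ci == 2 * co + s

print(full_adder(1, 0, 1))  # the example row: (co, Σ) = (1, 0), i.e. binary 10
```

The assertion inside the loop is exactly the check made in the example row above, applied to all eight rows at once.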

68.2.2 Propositional variables

The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named a, b, or A, B, etc. A propositional variable is intended to
represent an atomic proposition (assertion), such as "It is Saturday" = a (here the symbol = means "... is assigned
the variable named ...") or "I only go to the movies on Monday" = b.

68.2.3 Truth-value assignments, formula evaluations

Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each
variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple
sentences.
Truth values in rhetoric, philosophy and mathematics: The truth values are only two: { TRUTH "T", FALSITY
"F" }. An empiricist puts all propositions into two broad classes: analytic (true no matter what, e.g. tautology) and
synthetic (derived from experience and thereby susceptible to confirmation by third parties: the verification theory of
meaning).[5] Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against
whatever it is that is being asserted. For example, my utterance "That cow is blue!" Is this statement a TRUTH? Truly
I said it. And maybe I am seeing a blue cow; unless I am lying my statement is a TRUTH relative to the object of
my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same
window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue",
and an ability to match the templates against the object of sensation (if indeed there is one).
Truth values in engineering: Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the
final analysis engineers must trust their measuring instruments. In their quest for robustness, engineers prefer to pull
known objects from a small library: objects that have well-defined, predictable behaviors even in large combinations
(hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are
two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN } etc.), and these are put in correspondence with { 0, 1 }.
Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions
must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146%
UP) to digital (e.g. DOWN=0) by use of a comparator.[6]

Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from outside the formula
that represents the behavior of the (usually) compound object. An example is a garage door with two limit switches,
one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection
of the circuit (either the diagram or the actual objects themselves: door, switches, wires, circuit board, etc.) might
reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch SW_D are mechanically in
contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door
is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed").[7] The engineer must define the
meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes
22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to
whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE
or DANGEROUS.

68.3 Propositional connectives


Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional
connectives. Examples of connectives include:

The unary negation connective ¬. If φ is a formula, then ¬φ is a formula.

The classical binary connectives ∧, ∨, →, ↔. Thus, for example, if φ and ψ are formulas, so is (φ → ψ).

Other binary connectives, such as NAND, NOR, and XOR

The ternary connective IF ... THEN ... ELSE ...

Constant 0-ary connectives ⊤ and ⊥ (alternately, constants { T, F }, { 1, 0 } etc. )

The "theory-extension" connective EQUALS (alternately, IDENTITY, or the sign "=" as distinguished from
the "logical connective" ↔)

68.3.1 Connectives of rhetoric, philosophy and mathematics

The following are the connectives common to rhetoric, philosophy and mathematics together with their truth tables.
The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T"
and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g.
the assertion: "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be.).
The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of
these are shown in the table.

68.3.2 Engineering connectives

In general, the engineering connectives are just the same as the mathematics connectives excepting they tend to
evaluate with 1 = T and 0 = F. This is done for the purposes of analysis/minimization and synthesis of
formulas by use of the notion of minterms and Karnaugh maps (see below). Engineers also use the words logical
product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a).[8]

68.3.3 CASE connective: IF ... THEN ... ELSE ...

The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory
and computation theory and is the connective responsible for conditional gotos (jumps, branches). From this one
connective all other connectives can be constructed (see more below). Although "IF c THEN b ELSE a" sounds
like an implication it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of
two alternatives "a" or "b" (hence the name switch statement in the C programming language).[9]
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡):

Engineering symbols have varied over the years, but these are commonplace. Sometimes they appear simply as boxes with symbols
in them. "a" and "b" are called the inputs and "c" is called the output. An output will typically connect to an input (unless it is the
final connective); this accomplishes the mathematical notion of substitution.

1. ( IF 'counter is zero' THEN 'go to instruction b' ELSE 'go to instruction a' )
2. ( (c → b) & (~c → a) ) ≡ "( IF 'counter is zero' THEN 'go to instruction b' ) AND ( IF 'It is NOT the case that
counter is zero' THEN 'go to instruction a' ) "
3. ( (c & b) ∨ (~c & a) ) ≡ "( 'Counter is zero' AND 'go to instruction b' ) OR ( 'It is NOT the case that 'counter
is zero' AND 'go to instruction a' ) "

Thus IF ... THEN ... ELSE, unlike implication, does not evaluate to an ambiguous "TRUTH" when the first
proposition is false, i.e. c = F in (c → b). For example, most people would reject the following compound proposition
as a nonsensical non sequitur because the second sentence is not connected in meaning to the first.[10]

Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' "
evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises
in the east' evaluates as a TRUTH.

In recognition of this problem, the sign of formal implication in the propositional calculus is called material
implication to distinguish it from the everyday, intuitive implication.[11]
The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two objects (the two alternatives b and a), and it selects between them exhaustively and unambiguously.[12] In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a) ). The two formulas are equivalent as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually exclusive outcomes. Electrical engineers call the CASE operator a multiplexer.
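The equivalence of d1 and d2 can be checked by brute force over all eight input combinations. The sketch below (Python; the function names d1, d2, and implies are illustrative, not from the source) also confirms that both forms behave as a 2-way multiplexer selecting b when c holds, else a:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def d1(c, b, a):
    # ( IF c THEN b ) AND ( IF NOT-c THEN a )
    return implies(c, b) and implies(not c, a)

def d2(c, b, a):
    # The fully reduced AND-OR-SELECT form: (c AND b) OR (NOT-c AND a)
    return (c and b) or ((not c) and a)

for c, b, a in product([False, True], repeat=3):
    # Both forms agree, and both act as a switch: output b when c, else a.
    assert d1(c, b, a) == d2(c, b, a) == (b if c else a)
```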

68.3.4 IDENTITY and evaluation


The first table of this section stars *** the entry logical equivalence to note the fact that "Logical equivalence" is not the same thing as identity. For example, most would agree that the assertion That cow is blue is identical to the assertion That cow is blue. On the other hand, logical equivalence sometimes appears in speech as in this example: " 'The sun is shining' means 'I'm biking' " Translated into a propositional formula the words become: " IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining' ":[13]

" IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as
(s ↔ b). As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the
left, the use of the IDENTITY sign = is appropriate:

((s → b) & (b → s)) = (s ↔ b)

Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in Principia Mathematica. For more about the philosophy of the notion of IDENTITY see Leibniz's law.
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, logic is insufficient for mathematics and the deductive sciences. In fact the sign = comes into the propositional calculus when a formula is to be evaluated.[14]
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1 , p2 , p3 , ... } and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens). The result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called the truth values {T, F} ( or {1, 0}, etc.) relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any wffs A and B in his formal statement calculus L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p1 , p2 , p3 in a wff is assigned an arbitrary truth value { T, F }.

The two definitions (i) and (ii) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words " v(A) ≠ v(~A) ". Definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of definition (i). In particular (ii) assigns the value F (or a meaning of F) to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula:
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).

68.4 More complex formulas


As shown above, the CASE (IF c THEN b ELSE a ) connective is constructed either from the 2-argument connectives IF...THEN... and AND or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n) and OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas.
Electrical engineers use drawn symbols and connect them with lines that stand for the mathematical act of substitution and replacement. They then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of combinatorial logic (i.e. connectives without feedback) such as decoders, encoders, multifunction gates, majority logic, binary adders, arithmetic logic units, etc.

68.4.1 Definitions
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The following symbolism =D follows the convention of Reichenbach.[15] Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement.

definition of a new variable: (c & d) =D s


OR: ~(~a & ~b) =D (a ∨ b)
IMPLICATION: (~a ∨ b) =D (a → b)
XOR: (~a & b) ∨ (a & ~b) =D (a ⊕ b)
LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =D ( a ↔ b )
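Each definition can be verified exhaustively against the intended truth table. A minimal Python sketch, using `!=` and `==` on booleans as reference XOR and equivalence:

```python
from itertools import product

for a, b in product([False, True], repeat=2):
    OR  = not ((not a) and (not b))            # ~(~a & ~b)
    IMP = (not a) or b                         # (~a v b)
    XOR = ((not a) and b) or (a and (not b))   # (~a & b) v (a & ~b)
    IFF = ((not a) or b) and ((not b) or a)    # (a -> b) & (b -> a)
    assert OR == (a or b)          # defines inclusive OR
    assert IMP == (b if a else True)  # material implication truth table
    assert XOR == (a != b)         # defines exclusive OR
    assert IFF == (a == b)         # defines logical equivalence
```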

68.4.2 Axiom and definition schemas


The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or schemata), that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below.

Example: In the definition (~a ∨ b) =D (a → b), other variable-symbols such as SW2 and CON1
might be used, i.e. formally:

a =D SW2, b =D CON1, so we would have as an instance of the definition schema (~SW2
∨ CON1) =D (SW2 → CON1)

68.4.3 Substitution versus replacement


Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be
replaced in all instances throughout the overall formula.

Example: (c & d) ∨ (p & ~(c & ~d)), but (q1 & ~q2 ) ≡ d. Now wherever variable d occurs, substitute
(q1 & ~q2 ):

(c & (q1 & ~q2 )) ∨ (p & ~(c & ~(q1 & ~q2 )))

Replacement: (i) the formula to be replaced must be within a tautology, i.e. logically equivalent (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution it is permissible for the replacement to occur only in one place (i.e. for one formula).

Example: Use this set of formula schemas/equivalences:

1. ( (a ∨ 0) ≡ a ).
2. ( (a & ~a) ≡ 0 ).
3. ( (~a ∨ b) =D (a → b) ).
4. ( ~(~a) ≡ a )

1. start with "a": a


2. Use 1 to replace a with (a ∨ 0): (a ∨ 0)
3. Use the notion of schema to substitute b for a in 2: ( (b & ~b) ≡ 0 )
4. Use 2 to replace 0 with (b & ~b): ( a ∨ (b & ~b) )
5. (see below for how to distribute " a ∨ " over (b & ~b), etc.)

68.5 Inductive definition


The classical presentation of propositional logic (see Enderton 2002) uses the connectives ¬, ∧, ∨, →, ↔. The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that:

Each propositional variable in the set is a formula,


(¬φ) is a formula whenever φ is, and
(φ □ ψ) is a formula whenever φ and ψ are formulas and □ is one of the binary connectives ∧, ∨, →, ↔.

This inductive denition can be easily extended to cover additional connectives.


The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let XV denote the set of all strings from an alphabet including symbols in V, left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula building operation, a function from XV to XV:

Given a string z, the operation E¬ (z) returns (¬z) .


Given strings y and z, the operation E∧ (y, z) returns (y ∧ z) . There are similar operations E∨ , E→ , and E↔
corresponding to the other binary connectives.

The set of formulas over V is defined to be the smallest subset of XV containing V and closed under all the formula building operations.

68.6 Parsing formulas


The following laws of the propositional calculus are used to reduce complex formulas. The laws can be easily verified with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2ⁿ combinations of truth-values for its n distinct variables will result in a column of 1s (Ts) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one another.

Example: The following truth table is De Morgan's law for the behavior of NOT over OR: ~(a ∨ b) ≡ (~a &
~b). To the left of the principal connective (yellow column labelled taut) the formula ~(b ∨ a) evaluates
to (1, 0, 0, 0) under the label P. On the right of taut the formula (~(b) & ~(a)) also evaluates to (1, 0, 0,
0) under the label Q. As the two columns have equivalent evaluations, the logical equivalence under taut
evaluates to (1, 1, 1, 1), i.e. P ≡ Q. Thus either formula can be substituted for the other if it appears in a larger
formula.

Enterprising readers might challenge themselves to invent an axiomatic system that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below.
Note that if used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be wffs and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance, ( p ∨ 0 ) ≡ ( 0 ∨ p ) in another instance, ( 1 ∨ q ) ≡ ( q ∨ 1 ) in yet another, etc.

68.6.1 Connective seniority (symbol rank)


In general, to avoid confusion during analysis and evaluation of propositional formulas, make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To well-form a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the scope over which each connective is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness:[16]

≡ (LOGICAL EQUIVALENCE)

→ (IMPLICATION)
& (AND)
∨ (OR)
~ (NOT)
∀x (FOR ALL x)
∃x (THERE EXISTS AN x)
= (IDENTITY)
+ (arithmetic sum)
* (arithmetic multiply)
' (s, arithmetic successor).

Thus the formula can be parsed, but note that, because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory:

Example: " d & c w " rewritten is ( (d & c) w )


Example: " a & a b a & ~a b " rewritten (rigorously) is

has seniority: ( ( a & a b ) ( a & ~a b ) )


has seniority: ( ( a & (a b) ) ( a & ~a b ) )
& has seniority both sides: ( ( ( (a) & (a b) ) ) ( ( (a) & (~a b) ) )
~ has seniority: ( ( ( (a) & (a b) ) ) ( ( (a) & (~(a) b) ) )
check 9 ( -parenthesis and 9 ) -parenthesis: ( ( ( (a) & (a b) ) ) ( ( (a) & (~(a) b)
))

Example:

d & c ∨ p & ~(c & ~d) ≡ c & d ∨ p & c ∨ p & ~d rewritten is ( ( (d & c) ∨ ( p & ~(c & ~(d))
) ) ≡ ( (c & d) ∨ (p & c) ∨ (p & ~(d)) ) )

68.6.2 Commutative and associative laws

Both AND and OR obey the commutative law and associative law:

Commutative law for OR: ( a ∨ b ) ≡ ( b ∨ a )

Commutative law for AND: ( a & b ) ≡ ( b & a )

Associative law for OR: (( a ∨ b ) ∨ c ) ≡ ( a ∨ (b ∨ c) )

Associative law for AND: (( a & b ) & c ) ≡ ( a & (b & c) )

Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g.
NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example:

( (c & d) ∨ (p & c) ∨ (p & ~d) ) above should be written ( ((c & d) ∨ (p & c)) ∨ (p & ~(d) ) ) or possibly
( (c & d) ∨ ( (p & c) ∨ (p & ~(d)) ) )

However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT: While ~(a), where a is a single variable, is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b).

68.6.3 Distributive laws

OR distributes over AND and AND distributes over OR. NOT does not distribute over AND or OR. See below about De Morgan's law:

Distributive law for OR: ( c ∨ ( a & b) ) ≡ ( (c ∨ a) & (c ∨ b) )

Distributive law for AND: ( c & ( a ∨ b) ) ≡ ( (c & a) ∨ (c & b) )

68.6.4 De Morgan's laws

NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table):

De Morgan's law for OR: ~(a ∨ b) ≡ (~a & ~b)

De Morgan's law for AND: ~(a & b) ≡ (~a ∨ ~b)
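Both De Morgan laws, together with the distributive laws of the previous subsection, can be confirmed by an exhaustive truth-table check, sketched here in Python:

```python
from itertools import product

for a, b, c in product([False, True], repeat=3):
    # De Morgan: NOT over OR, and NOT over AND
    assert (not (a or b)) == ((not a) and (not b))
    assert (not (a and b)) == ((not a) or (not b))
    # Distributive laws: OR over AND, and AND over OR
    assert (c or (a and b)) == ((c or a) and (c or b))
    assert (c and (a or b)) == ((c and a) or (c and b))
```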

68.6.5 Laws of absorption

Absorption, in particular the first one, causes the laws of logic to differ from the laws of arithmetic:

Absorption (idempotency) for OR: (a ∨ a) ≡ a

Absorption (idempotency) for AND: (a & a) ≡ a

68.6.6 Laws of evaluation: Identity, nullity, and complement

The sign " = " (as distinguished from logical equivalence , alternately or ) symbolizes the assignment of value or
meaning. Thus the string (a & ~(a)) symbolizes 1, i.e. it means the same thing as symbol 1 ". In some systems
this will be an axiom (denition) perhaps shown as ( (a & ~(a)) =D 1 ); in other systems, it may be derived in the
truth table below:

Commutation of equality: (a = b) = (b = a)

Identity for OR: (a ∨ 0) = a or (a ∨ F) = a

Identity for AND: (a & 1) = a or (a & T) = a

Nullity for OR: (a ∨ 1) = 1 or (a ∨ T) = T

Nullity for AND: (a & 0) = 0 or (a & F) = F

Complement for OR: (a ∨ ~a) = 1 or (a ∨ ~a) = T, law of excluded middle

Complement for AND: (a & ~a) = 0 or (a & ~a) = F, law of contradiction

68.6.7 Double negative (involution)

~(~a) = a

68.7 Well-formed formulas (wffs)

A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives. When formulas are written in infix notation, as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether.
The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form:
<formula> ::= <propositional variable> | ( ¬ <formula> ) | ( <formula> ∧ <formula> ) | ( <formula> ∨ <formula> ) |
( <formula> → <formula> ) | ( <formula> ↔ <formula> )

It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and
any nonempty initial segment of a formula has more left than right parentheses.[17] This fact can be used to give an
algorithm for parsing formulas. For example, suppose that an expression x begins with ( . Starting after the second
symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one
symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used
to generate a recursive descent parser for formulas.
Example of parenthesis counting:
This method locates as 1 the principal connective (the connective under which the overall evaluation of the formula occurs) for the outer-most parentheses (which are often omitted).[18] It also locates the inner-most connective where one would begin evaluation of the formula without the use of a truth table, e.g. at level 6.
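The parenthesis-counting idea can be sketched as a small routine. In this illustrative Python version (the ASCII symbols `&`, `v`, `>` stand in for the connectives, and the function name is an assumption, not from the source), the principal connective of a fully parenthesized formula is the connective found at nesting depth 1:

```python
def principal_connective(formula):
    # Scan a fully parenthesized formula, tracking nesting depth.
    # The principal (outermost) connective is the one found at depth 1.
    depth = 0
    for i, ch in enumerate(formula):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif depth == 1 and ch in '&v>~':
            return i, ch
    return None

# In ((a&b)v(~(c))) the depth-1 connective is the OR ('v'),
# so the overall evaluation of the formula occurs under it.
position, symbol = principal_connective('((a&b)v(~(c)))')
assert symbol == 'v'
```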

68.7.1 Wffs versus valid formulas in inferences

The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: The formula that represents the inference evaluates to truth beneath its principal connective, no matter what truth-values are assigned to its variables, i.e. the formula is a tautology.[19] Quite possibly a formula will be well-formed but not valid. Another way of saying this is: Being well-formed is necessary for a formula to be valid but it is not sufficient. The only way to find out if it is both well-formed and valid is to submit it to verification with a truth table or by use of the laws:

Example 1: What does one make of the following difficult-to-follow assertion? Is it valid? If it's sunny, but if the frog is croaking then it's not sunny, then it's the same as saying that the frog isn't croaking. Convert this to a propositional formula as follows:

" IF (a AND (IF b THEN NOT-a) THEN NOT-a where " a " represents its sunny and
" b " represents the frog is croaking":
( ( (a) & ( (b) ~(a) ) ~(b) )
This is well-formed, but is it valid? In other words, when evaluated will this yield a tautology (all
T) beneath the logical-equivalence symbol ↔ ? The answer is NO, it is not valid. However, if
reconstructed as an implication then the argument is valid.
Saying it's sunny, but if the frog is croaking then it's not sunny, implies that the frog isn't croaking.
Other circumstances may be preventing the frog from croaking: perhaps a crane ate it.

Example 2 (from Reichenbach via Bertrand Russell):

If pigs have wings, some winged animals are good to eat. Some winged animals are good to eat,
so pigs have wings.
( ( ((a) → (b)) & (b) ) → (a) ) is well formed, but an invalid argument as shown by the red evaluation
under the principal implication:

The engineering symbol for the NAND connective (the 'stroke') can be used to build any propositional formula. The notion that truth (1) and falsity (0) can be defined in terms of this connective is shown in the sequence of NANDs on the left, and the derivations of the four evaluations of a NAND b are shown along the bottom. The more common method is to use the definition of the NAND from the truth table.

68.8 Reduced sets of connectives


A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including {∧, ¬} , {∨, ¬} , and {→, ¬} . There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively.[20] Some pairs are not complete, for example {∧, ∨} .

68.8.1 The stroke (NAND)

The binary connective corresponding to NAND is called the Sheffer stroke, and written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence:

~p ≡ p|p
p → q ≡ p|~q
p ∨ q ≡ ~p|~q
p & q ≡ ~(p|q)
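These four equivalences can be machine-checked by writing every right-hand side purely in terms of NAND:

```python
from itertools import product

def nand(p, q):
    # The Sheffer stroke p|q: false only when both inputs are true.
    return not (p and q)

for p, q in product([False, True], repeat=2):
    assert (not p) == nand(p, p)                       # ~p     =  p|p
    assert ((not p) or q) == nand(p, nand(q, q))       # p -> q =  p|~q
    assert (p or q) == nand(nand(p, p), nand(q, q))    # p v q  =  ~p|~q
    assert (p and q) == nand(nand(p, q), nand(p, q))   # p & q  =  ~(p|q)
    assert nand(p, nand(p, p)) is True                 # a|(a|a) is always true
```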

In particular, the zero-ary connectives ⊤ (representing truth) and ⊥ (representing falsity) can be expressed using the stroke:

⊤ ≡ (a|(a|a))

⊥ ≡ (⊤|⊤)

68.8.2 IF ... THEN ... ELSE

This connective together with { 0, 1 }, ( or { F, T } or { ⊥, ⊤ } ) forms a complete set. In the following the
IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d

(c, b, a):
(c, 0, 1) ≡ ~c
(c, b, 1) ≡ (c → b)
(c, c, a) ≡ (c ∨ a)
(c, b, c) ≡ (c & b)
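All four reductions can be checked exhaustively, taking the reduced form (c & b) ∨ (~c & a) as the definition of the relation (the function name `ite` is illustrative):

```python
from itertools import product

def ite(c, b, a):
    # (c, b, a)  =  (c & b) v (~c & a)
    return (c and b) or ((not c) and a)

for c, b, a in product([False, True], repeat=3):
    assert ite(c, False, True) == (not c)        # (c, 0, 1) = ~c
    assert ite(c, b, True) == ((not c) or b)     # (c, b, 1) = (c -> b)
    assert ite(c, c, a) == (c or a)              # (c, c, a) = (c v a)
    assert ite(c, b, c) == (c and b)             # (c, b, c) = (c & b)
```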

Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed; below the proof is its truth-table verification. ( Note: (c → b) is defined to be (~c ∨ b) ):

Begin with the reduced form: ( (c & b) ∨ (~c & a) )


Substitute 1 for a: ( (c & b) ∨ (~c & 1) )
Identity (~c & 1) = ~c: ( (c & b) ∨ (~c) )
Law of commutation for ∨: ( (~c) ∨ (c & b) )
Distribute " ~c ∨ " over (c & b): ( ((~c) ∨ c ) & ((~c) ∨ b ) )
Law of excluded middle ( ((~c) ∨ c ) = 1 ): ( (1) & ((~c) ∨ b ) )
Distribute " (1) & " over ((~c) ∨ b): ( ((1) & (~c)) ∨ ((1) & b) )
Commutativity and Identity ( (1 & ~c) = (~c & 1) = ~c, and (1 & b) = (b & 1) = b ): ( ~c ∨ b )
( ~c ∨ b ) is defined as c → b. Q.E.D.

In the following truth table the column labelled taut for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under taut are 1s, the equivalence indeed represents a tautology.

68.9 Normal forms


An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas
that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form
and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form.

68.9.1 Reduction to normal form


Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or fewer). Some sophisticated tabular methods exist for more complex circuits with multiple outputs, but these are beyond the scope of this article; for more see Quine–McCluskey algorithm.

Literal, term and alterm

In electrical engineering a variable x or its negation ~(x) is lumped together into a single notion called a literal. A string of literals connected by ANDs is called a term. A string of literals connected by ORs is called an alterm. Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication.

Examples
1. a, b, c, d are variables. ((( a & ~(b) ) & ~(c)) & d) is a term. This can be abbreviated as (a & ~b & ~c &
d), or a~b~cd.
2. p, q, r, s are variables. (((p ∨ ~(q) ) ∨ r) ∨ ~(s) ) is an alterm. This can be abbreviated as (p ∨ ~q ∨ r ∨
~s).

Minterms

In the same way that a 2ⁿ-row truth table displays the evaluation of a propositional formula for all 2ⁿ possible values of its variables, n variables produces a 2ⁿ-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produces 2³ = 8 rows and 8 Karnaugh squares; 4 variables produces 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the logical sum (OR) of the active (i.e. 1"- or T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals.
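The passage from a truth table to the OR of active minterms can be done mechanically. The helper below is an illustrative sketch (function and variable names are not from the source), applied to the formula q = (c & d) ∨ (p & ~(c & ~d)) used in the normal-form example later in this section:

```python
from itertools import product

def active_minterms(f, names):
    # Evaluate f on every row of its truth table and collect the rows
    # where it is 1; each such row contributes one minterm (a literal
    # string ANDing every variable or its negation).
    terms = []
    for values in product([False, True], repeat=len(names)):
        if f(*values):
            lits = [n if v else '~' + n for n, v in zip(names, values)]
            terms.append('(' + ' & '.join(lits) + ')')
    return terms

# q = (c & d) v (p & ~(c & ~d)), with variables ordered p, d, c
q = lambda p, d, c: (c and d) or (p and not (c and not d))
terms = active_minterms(q, ['p', 'd', 'c'])
assert len(terms) == 4          # four active minterms
assert '(p & d & ~c)' in terms  # one of them
```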
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits cba, in other words:

Example
cba₂ = c·2² + b·2¹ + a·2⁰ :
cba = (c=1, b=0, a=1) = 101₂ = 1·2² + 0·2¹ + 1·2⁰ = 5₁₀

This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three- and four-dimensional hypercubes called Hasse diagrams where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
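The standard reflected Gray code can be computed as i XOR (i >> 1); for three variables it reproduces the row numbering (0, 1, 3, 2, 6, 7, 5, 4) quoted above, including the wrap-around from the last row back to the first:

```python
def gray(i):
    # Reflected binary Gray code: consecutive codes differ in exactly one bit.
    return i ^ (i >> 1)

order = [gray(i) for i in range(8)]
assert order == [0, 1, 3, 2, 6, 7, 5, 4]

# One-bit adjacency holds around the wrap from the last row back to the first.
for i in range(8):
    assert bin(order[i] ^ order[(i + 1) % 8]).count('1') == 1
```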
When working with Karnaugh maps one must always keep in mind that the top edge wraps around to the bottom edge, and the left edge wraps around to the right edge; the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.

68.9.2 Reduction by use of the map method (Veitch, Karnaugh)


Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d), into numbers.[21] The method proceeds as follows:

Produce the formula's truth table

Produce the formula's truth table. Number its rows using the binary-equivalents of the variables (usually just sequentially 0 through 2ⁿ−1) for n variables.

Technically, the propositional function has been reduced to its (unminimized) disjunctive normal form:
each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized)
disjunctive normal form.

Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in disjunctive normal form is:

( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = q

However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).

Create the formula's Karnaugh map

Use the values of the formula (e.g. p) found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of d for don't care appear in the table, this adds flexibility during the reduction phase.

Reduce minterms

Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number of terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical, even the edges represent abutting squares) lose one literal; four squares in a 4 x 1 rectangle (horizontal or vertical) or 2 x 2 square (even the four corners represent abutting squares) lose two literals; eight squares in a rectangle lose 3 literals; etc. (One seeks out the largest square or rectangles and ignores the smaller squares or rectangles contained totally within it.) This process continues until all abutting squares are accounted for, at which point the propositional formula is minimized.
For example, squares #3 and #7 abut. These two abutting squares can lose one literal (e.g. p from squares #3 and #7), four squares in a rectangle or square lose two literals, eight squares in a rectangle lose 3 literals, etc. (One seeks out the largest square or rectangles.) This process continues until all abutting squares are accounted for, at which point the propositional formula is said to be minimized.
Example: The map method usually is done by inspection. The following example expands the algebraic method to
show the trick behind the combining of terms on a Karnaugh map:

Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced.

Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by the association and distributive laws the variables slated to disappear can be paired, and then made to disappear with the Law of contradiction (x & ~x) = 0. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:

Put the formula in disjunctive normal form with the formula to be reduced:

q = ( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = ( #3
∨ #7 ∨ #6 ∨ #4 )

Idempotency (absorption) [ (A ∨ A) = A ]:

( #3 ∨ [ #7 ∨ #7 ] ∨ [ #6 ∨ #6 ] ∨ #4 )

Associative law (x ∨ (y ∨ z)) = ( (x ∨ y) ∨ z )

( [ #3 ∨ #7 ] ∨ [ #7 ∨ #6 ] ∨ [ #6 ∨ #4 ] )
[ (~p & d & c ) ∨ (p & d & c) ] ∨ [ (p & d & c) ∨ (p & d & ~c) ] ∨ [ (p & d & ~c)
∨ (p & ~d & ~c) ].

Distributive law ( x & (y ∨ z) ) = ( (x & y) ∨ (x & z) ) :

( [ (d & c) ∨ (~p & p) ] ∨ [ (p & d) ∨ (~c & c) ] ∨ [ (p & ~c) ∨ (~d & d) ] )

Commutative law and law of contradiction (x & ~x) = (~x & x) = 0:

( [ (d & c) ∨ (0) ] ∨ [ (p & d) ∨ (0) ] ∨ [ (p & ~c) ∨ (0) ] )

Law of identity ( x ∨ 0 ) = x leading to the reduced form of the formula:

q = ( (d & c) ∨ (p & d) ∨ (p & ~c) )

Verify reduction with a truth table
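The truth-table verification amounts to checking that the unminimized and reduced forms of q agree on all eight rows; a Python sketch (function names illustrative):

```python
from itertools import product

def q_full(p, d, c):
    # Unminimized disjunctive normal form: #3 v #7 v #6 v #4
    return (((not p) and d and c) or (p and d and c)
            or (p and d and (not c)) or (p and (not d) and (not c)))

def q_reduced(p, d, c):
    # Reduced form: (d & c) v (p & d) v (p & ~c)
    return (d and c) or (p and d) or (p and (not c))

for p, d, c in product([False, True], repeat=3):
    assert q_full(p, d, c) == q_reduced(p, d, c)
```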

68.10 Impredicative propositions


Given the following examples-as-denitions, what does one make of the subsequent reasoning:

(1) This sentence is simple. (2) This sentence is complex, and it is conjoined by AND.

Then assign the variable s to the left-most sentence This sentence is simple. Define compound c = not simple = ~s, and assign c = ~s to This sentence is compound; assign j to It [this sentence] is conjoined by AND. The second sentence can be expressed as:

( NOT(s) AND j )

If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. This sentence is complex is a FALSEHOOD (it is simple, by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition, that is, when an object m has a property P, but the object m is defined in terms of property P.[22] The best advice for a rhetorician or one involved in deductive analysis is avoid impredicative definitions but at the same time be on the lookout for them because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.

68.11 Propositional formula with feedback


The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (either axiomatic or truth-table systems of objects and relations) that forbids this from happening.[23]
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's definition depends on itself q as well as on s and the OR connective; this definition of q is thus impredicative. Either of two conditions can result:[24] oscillation or memory.
It helps to think of the formula as a black box. Without knowledge of what is going on inside the formula-box, from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the hidden variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential
circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to
memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one
can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines,
Macintosh computers, etc.).

68.11.1 Oscillation
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: ~(~(p=q)) = q. Analysis of
an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both p=1 and p=0 cases: When
p=1, q=0, this cannot be because p=q; ditto for when p=0 and q=1.
Oscillation with delay: If a delay[25] (ideal or non-ideal) is inserted in the abstract formula between p and q then
p will oscillate between 1 and 0: 101010...101... ad infinitum. If either the delay or the NOT is not abstract (i.e.
not ideal), the type of analysis to be used will depend upon the exact nature of the objects that make up the
oscillator; such things fall outside mathematics and into engineering.
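The delayed NOT loop described here can be simulated directly. The sketch below is a minimal Python model (the function name and the one-step-register convention for the delay are my own), stepping the NOT's output back to its input once per time unit:

```python
def simulate_not_oscillator(initial, steps):
    """Simulate q = NOT(p) where p is q delayed by one time unit."""
    p = initial
    history = []
    for _ in range(steps):
        q = 0 if p == 1 else 1  # ideal NOT
        history.append(q)
        p = q                   # unit delay: next input is current output
    return history

print(simulate_not_oscillator(1, 6))  # → [0, 1, 0, 1, 0, 1]
```

With an ideal delay the output alternates forever, matching the 101010... behavior described above.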
Analysis requires a delay to be inserted and then the loop cut between the delay and the input p. The delay must
be viewed as a kind of proposition that has qd (q-delayed) as output for q as input. This new proposition adds
another column to the truth table. The inconsistency is now between qd and p, as shown in red; two stable states
result:

68.11.2 Memory
Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of delay, this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = qd.
A truth table reveals the rows where inconsistencies occur between p = qd at the input and q at the output. After
breaking the feed-back,[26] the truth table construction proceeds in the conventional manner. But afterwards, in
every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted
(i.e. p=0 together with q=1, or p=1 and q=0); when the "line" is remade both are rendered impossible by the law
of contradiction ~(p & ~p). Rows revealing inconsistencies are either considered transient states or just eliminated
as inconsistent and hence impossible.

Once-flip memory

About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output
q feeds back into p. Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once
when "set" by s=1. Thereafter, output q will sustain q in the "flipped" condition (state q=1). This behavior, now
time-dependent, is shown by the state diagram to the right of the once-flip.
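A minimal simulation of the once-flip, assuming q = (p ∨ s) with output q fed back to p and one evaluation per time step (the helper name is hypothetical):

```python
def once_flip(inputs):
    """Simulate q = p OR s with output q fed back to input p.
    Starts initialized with p = 0; once s = 1 appears, q flips to 1
    and the feedback sustains it there."""
    p = 0
    outputs = []
    for s in inputs:
        q = p | s   # the OR formula
        outputs.append(q)
        p = q       # feed the output back into p
    return outputs

print(once_flip([0, 0, 1, 0, 0]))  # → [0, 0, 1, 1, 1]
```

The single s=1 sets the state, and q stays 1 even after s returns to 0, as the state diagram shows.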

Flip-flop memory

The next simplest case is the "set-reset" flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the
outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when r=1.
An additional complication occurs if both set=1 and reset=1. In this formula, the set=1 forces the output q=1 so
when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal)
instance in which s=1 ⇒ s=0 & r=1 ⇒ r=0 simultaneously, the formula q will be indeterminate (undecidable). Due
to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predictable.
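The set-reset behavior can be sketched the same way. The model below assumes a set-dominant formula q' = s ∨ (q ∧ ¬r), one possible realization consistent with the description that set=1 forces q=1; the function name and input convention are my own:

```python
def sr_flip_flop(pairs, q=0):
    """Set-dominant SR flip-flop sketch: q' = s OR (q AND NOT r),
    evaluated once per (s, r) input pair."""
    outputs = []
    for s, r in pairs:
        q = s | (q & (1 - r))   # set wins when s=1 and r=1 in this model
        outputs.append(q)
    return outputs

# set, hold, reset, hold, then the contested s=1 & r=1 case
print(sr_flip_flop([(1, 0), (0, 0), (0, 1), (0, 0), (1, 1)]))
# → [1, 1, 0, 0, 1]
```

In a real circuit the s=1 & r=1 → s=0 & r=0 transition is the indeterminate case discussed above; the idealized model simply lets set dominate.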

Clocked flip-flop memory

The formula known as "clocked flip-flop" memory ("c" is the clock and "d" is the data) is given below. It works
as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets
through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at
output q. As long as c=0, d can change value without causing q to change.

Examples

1. ( ( c & d ) ∨ ( p & ~( c & ~( d ) ) ) ) = q, but now let p = q:


2. ( ( c & d ) ∨ ( q & ~( c & ~( d ) ) ) ) = q

The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.
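Formula 2 can be stepped through time directly. The sketch below (hypothetical helper name, one evaluation per clock sample) shows d passing through while c=1 and being trapped when c falls to 0:

```python
def clocked_flip_flop(signal, q=0):
    """Transparent D latch from formula 2:
    q' = (c AND d) OR (q AND NOT(c AND NOT d))."""
    outputs = []
    for c, d in signal:
        q = (c & d) | (q & (1 - (c & (1 - d))))
        outputs.append(q)
    return outputs

# while c=1, q tracks d; when c drops to 0, the last d stays trapped
print(clocked_flip_flop([(1, 1), (1, 0), (1, 1), (0, 0), (0, 1)]))
# → [1, 0, 1, 1, 1]
```

The last two samples show d changing while c=0 without disturbing the trapped q=1, exactly the behavior described above.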
302 CHAPTER 68. PROPOSITIONAL FORMULA

68.12 Historical development


Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: "Whatever
is, is", (2) The law of contradiction: "Nothing can both be and not be", and (3) The law of excluded middle:
"Everything must either be or not be."

Example: Here O is an expression about an object's BEING or QUALITY:


1. Law of Identity: O = O
2. Law of contradiction: ~(O & ~(O))
3. Law of excluded middle: (O ∨ ~(O))

The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to
debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a
finite "universe of discourse") -- the members of which can be investigated one after another for the presence or
absence of the assertion -- then the law is considered intuitionistically appropriate. Thus an assertion such as: "This
object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have
this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram.
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to
wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms, John
Locke's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By
1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George
Bentham's work (1827) resulted in the notion of "quantification of the predicate" (nowadays symbolized as ∀ ≡
"for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan inspired
George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847
(Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:

"Boole's principal single innovation was [the] law [ x^n = x ] for logic: it stated that the mental acts of
choosing the property x and choosing x again and again is the same as choosing x once... As consequence
of it he formed the equations x(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of
contradiction and the law of excluded middle" (p. xxvii). For Boole "1" was the universe of discourse
and "0" was "nothing".
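Boole's identities can be checked mechanically on his two-element universe {0, 1}; the loop below is only an illustration of the arithmetic, not anything from Boole's own notation:

```python
# Boole's law x^n = x and the two derived equations, checked on the
# two-element universe {0, 1} (1 = universe of discourse, 0 = nothing).
for x in (0, 1):
    assert x * x == x            # choosing x again and again = choosing x once
    assert x * (1 - x) == 0      # law of contradiction
    assert x + (1 - x) == 1      # law of excluded middle
print("Boole's identities hold for x in {0, 1}")
```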

Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so
daunting that it had little influence except on one person: Bertrand Russell. First as the student of Alfred North
Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1904)
around the problem of an antinomy that he discovered in Frege's treatment (cf. Russell's paradox). Russell's work
led to a collaboration with Whitehead that, in the year 1912, produced the first volume of Principia Mathematica
(PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT
and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → (def.
*1.01: ~p ∨ q), then AND & (def. *3.01: ~(~p ∨ ~q)), then EQUIVALENCE p ≡ q (*4.01: (p → q) & (q → p)).
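The PM definitions can be verified against the familiar truth tables using only NOT and OR as primitives (the function names are illustrative, not PM's notation):

```python
# Verify that PM's definitions reduce to the standard truth tables,
# taking only NOT and OR as primitive connectives.
def NOT(p): return not p
def OR(p, q): return p or q

def IMPLIES(p, q): return OR(NOT(p), q)                    # *1.01: ~p v q
def AND(p, q): return NOT(OR(NOT(p), NOT(q)))              # *3.01: ~(~p v ~q)
def EQUIV(p, q): return AND(IMPLIES(p, q), IMPLIES(q, p))  # *4.01

for p in (False, True):
    for q in (False, True):
        assert AND(p, q) == (p and q)
        assert IMPLIES(p, q) == ((not p) or q)
        assert EQUIV(p, q) == (p == q)
print("PM definitions match the standard connectives")
```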

Henry M. Sheffer (1921) and Jean Nicod demonstrate that only one connective, the "stroke" |, is sufficient to
express all propositional formulas.
Emil Post (1921) develops the truth-table method of analysis in his "Introduction to a general theory of elementary propositions". He notes Nicod's stroke | .
Whitehead and Russell add an introduction to their 1927 re-publication of PM adding, in part, a favorable
treatment of the "stroke".

Computation and switching logic:

William Eccles and F. W. Jordan (1919) describe a trigger relay made from a vacuum tube.
George Stibitz (1937) invents the binary adder using mechanical relays. He builds this on his kitchen table.

Example: Given binary bits a and b and carry-in (c_in), their summation Σ and carry-out (c_out) are:
( ( a XOR b ) XOR c_in ) = Σ;
( ( a & b ) ∨ ( c_in & ( a XOR b ) ) ) = c_out;
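The adder equations can be checked against ordinary arithmetic; the sketch below (helper name is my own) verifies all eight input combinations of the standard one-bit full adder:

```python
def full_adder(a, b, c_in):
    """One-bit full adder: sum = (a XOR b) XOR c_in,
    c_out = (a AND b) OR (c_in AND (a XOR b))."""
    s = (a ^ b) ^ c_in
    c_out = (a & b) | (c_in & (a ^ b))
    return s, c_out

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, c_out = full_adder(a, b, c)
            assert 2 * c_out + s == a + b + c  # matches ordinary addition
print("full adder verified for all 8 input combinations")
```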

Alan Turing builds a multiplier using relays (1937-1938). He has to hand-wind his own relay coils to do this.
Textbooks about "switching circuits" appear in the early 1950s.
Willard Quine 1952 and 1955, E. W. Veitch 1952, and M. Karnaugh (1953) develop map-methods for simplifying propositional functions.
George H. Mealy (1955) and Edward F. Moore (1956) address the theory of sequential (i.e. switching-circuit)
machines.
E. J. McCluskey and H. Shorr develop a method for simplifying propositional (switching) circuits (1962).

68.13 Footnotes
[1] Hamilton 1978:1

[2] PM p. 91 eschews "the" because they require a clear-cut "object of sensation"; they stipulate the use of "this"

[3] (italics added) Reichenbach p.80.

[4] Tarski p.54-68. Suppes calls IDENTITY a further rule of inference and has a brief development around it; Robbin,
Bender and Williamson, and Goodstein introduce the sign and its usage without comment or explanation. Hamilton p. 37
employs two signs and = with respect to the valuation of a formula in a formal calculus. Kleene p. 70 and Hamilton p.
52 place it in the predicate calculus, in particular with regards to the arithmetic of natural numbers.

[5] Empiricists eschew the notion of a priori (built-in, born-with) knowledge. "Radical reductionists" such as John Locke
and David Hume "held that every idea must either originate directly in sense experience or else be compounded of ideas
thus originating"; quoted from Quine reprinted in 1996 The Emergence of Logical Empiricism, Garland Publishing Inc.
http://www.marxists.org/reference/subject/philosophy/works/us/quine.htm

[6] Neural net modelling offers a good mathematical model for a comparator as follows: Given a signal S and a threshold "thr",
subtract "thr" from S and substitute this difference d into a sigmoid function: For large "gains" k, e.g. k=100, 1/(1 + e^(-k*d))
= 1/(1 + e^(-k*(S-thr))) ≈ { 0, 1 }. For example, if "The door is DOWN" means "The door is less than 50% of the way
up", then a threshold thr=0.5 corresponding to 0.5*5.0 = +2.50 volts could be applied to a "linear" measuring-device with
an output of 0 volts when fully closed and +5.0 volts when fully open.
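A quick numerical check of the high-gain sigmoid comparator described in this note (the function name and the gain value k=100 are illustrative):

```python
import math

def comparator(signal, threshold, gain=100.0):
    """High-gain sigmoid approximating a threshold comparator:
    1/(1 + e^(-k*(S - thr))) is ~0 below threshold and ~1 above it."""
    return 1.0 / (1.0 + math.exp(-gain * (signal - threshold)))

# door position sensor: 0 V fully closed, +5 V fully open, threshold +2.5 V
print(round(comparator(0.0, 2.5)))  # door DOWN → 0
print(round(comparator(5.0, 2.5)))  # door UP   → 1
```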

[7] In actuality the digital "1" and "0" are defined over non-overlapping ranges, e.g. { "1" = +5/+0.2/-1.0 volts, 0 = +0.5/-0.2 volts
}. When a value falls outside the defined range(s) the value becomes "u" -- unknown; e.g. +2.3 would be "u".

[8] While the notion of "logical product" is not so peculiar (e.g. 0*0=0, 0*1=0, 1*0=0, 1*1=1), the notion of 1+1=1 is peculiar;
in fact (a "+" b) = (a + (b - a*b)) where "+" is the "logical sum" but + and - are the true arithmetic counterparts. Occasionally
all four notions do appear in a formula: A AND B = 1/2*( A plus B minus ( A XOR B ) ) (cf p. 146 in John Wakerly 1978,
Error Detecting Codes, Self-Checking Circuits and Applications, North-Holland, New York, ISBN 0-444-00259-6 pbk.)
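The mixed arithmetic/logical identity cited here can be verified exhaustively over the four bit combinations:

```python
# Check the identity A AND B = (A + B - (A XOR B)) / 2, which mixes
# logical AND/XOR with true arithmetic + and -.
for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == (a + b - (a ^ b)) // 2
print("identity holds for all four bit combinations")
```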

[9] A careful look at its Karnaugh map shows that IF...THEN...ELSE can also be expressed, in a rather round-about way, in
terms of two exclusive-ORs: ( (b AND (c XOR a)) OR (a AND (c XOR b)) ) = d.

[10] Robbin p. 3.

[11] Rosenbloom p. 30 and p. 54 discusses this problem of implication at some length. Most philosophers and mathematicians
just accept the material denition as given above. But some do not, including the intuitionists; they consider it a form of
the law of excluded middle misapplied.

[12] Indeed, exhaustive selection between alternatives -- mutual exclusion -- is required by the definition that Kleene gives for the
CASE operator (Kleene 1952:229)

[13] The use of quote marks around the expressions is not accidental. Tarski comments on the use of quotes in his "18. Identity
of things and identity of their designations; use of quotation marks" p. 58.

[14] Hamilton p. 37. Bender and Williamson p. 29 state In what follows, we'll replace equals with the symbol " "
(equivalence) which is usually used in logic. We use the more familiar " = " for assigning meaning and values.

[15] Reichenbach p. 20-22 and follows the conventions of PM. The symbol =Df is in the metalanguage and is not a formal
symbol, with the following meaning: "the symbol ' s ' is to have the same meaning as the formula '(c & d)' ".

[16] Rosenbloom 1950:32. Kleene 1952:73-74 ranks all 11 symbols.

[17] cf Minsky 1967:75, section 4.2.3 "The method of parenthesis counting". Minsky presents a state machine that will do the
job, and by use of induction (recursive definition) Minsky proves the "method" and presents a theorem as the result. A
fully generalized "parenthesis grammar" requires an infinite state machine (e.g. a Turing machine) to do the counting.

[18] Robbin p. 7

[19] cf Reichenbach p. 68 for a more involved discussion: If the inference is valid and the premises are true, the inference is
called conclusive.

[20] As well as the first three, Hamilton pp.19-22 discusses logics built from only | (NAND), and ↓ (NOR).

[21] Wickes 1967:36. Wickes offers a good example of 8 of the 2 x 4 (3-variable) maps and 16 of the 4 x 4 (4-variable)
maps. As an arbitrary 3-variable map could represent any one of 2^8 = 256 2x4 maps, and an arbitrary 4-variable map could
represent any one of 2^16 = 65,536 different formula-evaluations, writing down every one is infeasible.

[22] This definition is given by Stephen Kleene. Both Kurt Gödel and Kleene believed that the classical paradoxes are uniformly
examples of this sort of definition. But Kleene went on to assert that the problem has not been solved satisfactorily and
impredicative definitions can be found in analysis. He gives as example the definition of the least upper bound (l.u.b.) u of
M. Given a Dedekind cut of the number line C and the two parts into which the number line is cut, i.e. M and (C - M),
l.u.b. = u is defined in terms of the notion M, whereas M is defined in terms of C. Thus the definition of u, an element
of C, is defined in terms of the totality C and this makes its definition impredicative. Kleene asserts that attempts to argue
this away can be used to uphold the impredicative definitions in the paradoxes. (Kleene 1952:43)

[23] McCluskey comments that "it could be argued that the analysis is still incomplete because the word statement 'The outputs
are equal to the previous values of the inputs' has not been obtained"; he goes on to dismiss such worries because "English
is not a formal language in a mathematical sense, [and] it is not really possible to have a formal procedure for obtaining
word statements" (p. 185).

[24] More precisely, given enough loop gain, either oscillation or memory will occur (cf McCluskey p. 191-2). In abstract
(idealized) mathematical systems adequate loop gain is not a problem.

[25] The notion of delay and the principle of "local causation" as caused ultimately by the speed of light appears in Robin Gandy
(1980), "Church's thesis and Principles for Mechanisms", in J. Barwise, H. J. Keisler and K. Kunen, eds., The Kleene
Symposium, North-Holland Publishing Company (1980) 123-148. Gandy considered this to be the most important of his
principles: "Contemporary physics rejects the possibility of instantaneous action at a distance" (p. 135). Gandy was Alan
Turing's student and close friend.

[26] McCluskey p. 194-5 discusses breaking the loop and inserts amplifiers to do this; Wickes (p. 118-121) discusses inserting
delays. McCluskey p. 195 discusses the problem of "races" caused by delays.

68.14 References
Bender, Edward A. and Williamson, S. Gill, 2005, A Short Course in Discrete Mathematics, Dover Publications,
Mineola NY, ISBN 0-486-43946-1. This text is used in a lower division two-quarter [computer science]
course at UC San Diego.

Enderton, H. B., 2002, A Mathematical Introduction to Logic. Harcourt/Academic Press. ISBN 0-12-238452-0

Goodstein, R. L., (Pergamon Press 1963), 1966, (Dover edition 2007), Boolean Algebra, Dover Publications,
Inc. Mineola, New York, ISBN 0-486-45894-6. Emphasis on the notion of "algebra of classes" with set-theoretic symbols such as ∩, ∪, ' (NOT), ⊂ (IMPLIES). Later Goodstein replaces these with &, ∨, ¬, → (respectively) in his treatment of "Sentence Logic" pp. 76-93.

Ivor Grattan-Guinness and Gérard Bornet 1997, George Boole: Selected Manuscripts on Logic and its Philosophy, Birkhäuser Verlag, Basel, ISBN 978-0-8176-5456-6 (Boston).

A. G. Hamilton 1978, Logic for Mathematicians, Cambridge University Press, Cambridge UK, ISBN 0-521-
21838-1.

E. J. McCluskey 1965, Introduction to the Theory of Switching Circuits, McGraw-Hill Book Company, New
York. No ISBN. Library of Congress Catalog Card Number 65-17394. McCluskey was a student of Willard
Quine and developed some notable theorems with Quine and on his own. For those interested in the history,
the book contains a wealth of references.
Marvin L. Minsky 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc, Englewood Cliffs, N.J..
No ISBN. Library of Congress Catalog Card Number 67-12342. Useful especially for computability, plus good
sources.

Paul C. Rosenbloom 1950, Dover edition 2005, The Elements of Mathematical Logic, Dover Publications, Inc.,
Mineola, New York, ISBN 0-486-44617-4.

Joel W. Robbin 1969, 1997, Mathematical Logic: A First Course, Dover Publications, Inc., Mineola, New
York, ISBN 0-486-45018-X (pbk.).

Patrick Suppes 1957 (1999 Dover edition), Introduction to Logic, Dover Publications, Inc., Mineola, New York.
ISBN 0-486-40687-3 (pbk.). This book is in print and readily available.
On his page 204 in a footnote he references his set of axioms to E. V. Huntington, "Sets of Independent
Postulates for the Algebra of Logic", Transactions of the American Mathematical Society, Vol. 5 (1904) pp.
288-309.

Alfred Tarski 1941 (1995 Dover edition), Introduction to Logic and to the Methodology of Deductive Sciences,
Dover Publications, Inc., Mineola, New York. ISBN 0-486-28462-X (pbk.). This book is in print and readily
available.
Jean van Heijenoort 1967, 3rd printing with emendations 1976, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge, Massachusetts. ISBN 0-674-32449-8 (pbk.)
Translation/reprints of Frege (1879), Russell's letter to Frege (1902) and Frege's letter to Russell (1902),
Richard's paradox (1905), Post (1921) can be found here.
Alfred North Whitehead and Bertrand Russell 1927 2nd edition, paperback edition to *53 1962, Principia
Mathematica, Cambridge University Press, no ISBN. In the years between the first edition of 1912 and the 2nd
edition of 1927, H. M. Sheffer 1921 and M. Jean Nicod (no year cited) brought to Russell's and Whitehead's
attention that what they considered their primitive propositions (connectives) could be reduced to a single |,
nowadays known as the "stroke" or NAND (NOT-AND, NEITHER ... NOR...). Russell-Whitehead discuss
this in their "Introduction to the Second Edition" and make the definitions as discussed above.
William E. Wickes 1968, Logic Design with Integrated Circuits, John Wiley & Sons, Inc., New York. No
ISBN. Library of Congress Catalog Card Number: 68-21185. Tight presentation of engineering's analysis and
synthesis methods, references McCluskey 1965. Unlike Suppes, Wickes' presentation of "Boolean algebra"
starts with a set of postulates of a truth-table nature and then derives the customary theorems from them (p.
18).

A truth table will contain 2^n rows, where n is the number of variables (e.g. three variables p, d, c produce 2^3 = 8 rows).

Steps in the reduction using a Karnaugh map. The final result is the OR (logical sum) of the three reduced terms.

About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output q feeding back
into p. The next simplest is the flip-flop shown below the once-flip. Analysis of these sorts of formulas can be done by either
cutting the feedback path(s) or inserting an (ideal) delay in the path. A cut path and an assumption that no delay occurs anywhere in the
circuit results in inconsistencies for some of the total states (combination of inputs and outputs, e.g. (p=0, s=1, r=1) results in an
inconsistency). When delay is present these inconsistencies are merely transient and expire when the delay(s) expire. The drawings
on the right are called state diagrams.

A clocked flip-flop memory ("c" is the clock and "d" is the data). The data can change at any time when clock c=0; when clock
c=1 the output q "tracks" the value of data d. When c goes from 1 to 0 it "traps" d = q's value and this continues to appear at q no
matter what d does (as long as c remains 0).
Chapter 69

Quine-McCluskey algorithm

The Quine-McCluskey algorithm (or the method of prime implicants) is a method used for minimization of
Boolean functions that was developed by Willard V. Quine and extended by Edward J. McCluskey.[1][2][3] It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms,
and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is
sometimes referred to as the tabulation method.
The method involves two steps:

1. Finding all prime implicants of the function.

2. Use those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as
well as other prime implicants that are necessary to cover the function.

69.1 Complexity
Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine-McCluskey
algorithm also has a limited range of use since the problem it solves is NP-hard: the runtime of the Quine-McCluskey
algorithm grows exponentially with the number of variables. It can be shown that for a function of n variables the
upper bound on the number of prime implicants is 3^n/ln(n). If n = 32 there may be over 6.5 * 10^15 prime implicants.
Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of
which the Espresso heuristic logic minimizer was the de facto standard in 1995.[4]

69.2 Example

69.2.1 Step 1: nding prime implicants


Minimizing an arbitrary function:


f(A, B, C, D) = Σ m(4, 8, 10, 11, 12, 15) + d(9, 14).

This expression says that the output function f will be 1 for the minterms 4, 8, 10, 11, 12 and 15 (denoted by the 'm'
term). But it also says that we don't care about the output for the 9 and 14 combinations (denoted by the 'd' term; 'x'
stands for don't care).
One can easily form the canonical sum of products expression from this table, simply by summing the minterms
(leaving out don't-care terms) where the function evaluates to one:

f(A,B,C,D) = A'BC'D' + AB'C'D' + AB'CD' + AB'CD + ABC'D' + ABCD.


which is not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm table. Don't-care
terms are also added into this table, so they can be combined with minterms:
At this point, one can start combining minterms with other minterms. If two terms differ by only a single digit, that
digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any more
are marked with an asterisk (*). When going from Size 2 to Size 4, treat '-' as a third bit value. For instance, -110
and -100 can be combined, as well as -110 and -11-, but -110 and 011- cannot. (Trick: Match up the '-' first.)
Note: In this example, none of the terms in the size 4 implicants table can be combined any further. Be aware that
this processing should otherwise be continued (size 8 etc.).
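The combining rule for two same-size terms whose dashes line up can be sketched as a small function over implicant strings; this is an illustrative fragment of the tabulation step, not a full implementation:

```python
def combine(t1, t2):
    """Combine two implicant strings (e.g. '1100' and '1000' -> '1-00')
    if they differ in exactly one position and neither term has a dash
    there (so dash patterns stay aligned); otherwise return None."""
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

print(combine('1100', '1000'))  # → '1-00'
print(combine('-110', '-100'))  # → '-1-0'
print(combine('-110', '011-'))  # → None (the dashes don't line up)
```

Repeatedly applying this to every pair in each size class, and starring the terms that never combine, yields the prime implicants.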

69.2.2 Step 2: prime implicant chart


None of the terms can be combined any further than this, so at this point we construct an essential prime implicant
table. Along the side goes the prime implicants that have just been generated, and along the top go the minterms
specified earlier. The don't-care terms are not placed on top -- they are omitted from this section because they are not
necessary inputs.
To find the essential prime implicants, we run along the top row. We have to look for columns with only 1 X. If a
column has only 1 X, this means that the minterm can only be covered by 1 prime implicant. This prime implicant
is essential.
For example: in the first column, with minterm 4, there is only 1 X. This means that m(4,12) is essential. So we
place a star next to it. Minterm 15 also has only 1 X, so m(10,11,14,15) is also essential. Now all columns with 1
X are covered.
The second prime implicant can be 'covered' by the third and fourth, and the third prime implicant can be 'covered'
by the second and first, and neither is thus essential. If a prime implicant is essential then, as would be expected, it is
necessary to include it in the minimized Boolean equation. In some cases, the essential prime implicants do not cover
all minterms, in which case additional procedures for chart reduction can be employed. The simplest additional
procedure is trial and error, but a more systematic way is Petrick's method. In the current example, the essential
prime implicants do not handle all of the minterms, so, in this case, one can combine the essential implicants with
one of the two non-essential ones to yield one equation:

f(A,B,C,D) = BC'D' + AB' + AC [5]

Both of those final equations are functionally equivalent to the original, verbose equation:

f(A,B,C,D) = A'BC'D' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABCD' + ABCD.
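The equivalence claim is easy to machine-check: evaluate the six-term canonical form and the reduced form on all sixteen inputs, letting only the don't-care rows (9 and 14) differ. The function names below are illustrative:

```python
minterms = {4, 8, 10, 11, 12, 15}   # required outputs
dont_care = {9, 14}                 # unconstrained outputs

def verbose(A, B, C, D):
    # canonical terms for minterms 4, 8, 10, 11, 12, 15 (x' = NOT x)
    return ((not A and B and not C and not D) or
            (A and not B and not C and not D) or
            (A and not B and C and not D) or
            (A and not B and C and D) or
            (A and B and not C and not D) or
            (A and B and C and D))

def minimized(A, B, C, D):
    # f = BC'D' + AB' + AC, the reduced result
    return (B and not C and not D) or (A and not B) or (A and C)

for i in range(16):
    bits = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    assert bool(verbose(*bits)) == (i in minterms)
    if i not in dont_care:          # don't-cares are free to differ
        assert bool(minimized(*bits)) == (i in minterms)
print("minimized formula covers the specification")
```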

69.3 See also


Buchberger's algorithm -- analogous algorithm for algebraic geometry
Petrick's method

69.4 References
[1] Quine, Willard Van Orman (October 1952). "The Problem of Simplifying Truth Functions". The American Mathematical
Monthly. 59 (8): 521-531. JSTOR 2308219. doi:10.2307/2308219.
[2] Quine, Willard Van Orman (November 1955). "A Way to Simplify Truth Functions". The American Mathematical Monthly.
62 (9): 627-631. JSTOR 2307285. doi:10.2307/2307285.
[3] McCluskey, Jr., Edward J. (November 1956). "Minimization of Boolean Functions". Bell System Technical Journal. 35
(6): 1417-1444. doi:10.1002/j.1538-7305.1956.tb03835.x. Retrieved 2014-08-24.
[4] Nelson, Victor P.; et al. (1995). Digital Logic Circuit Analysis and Design. Prentice Hall. p. 234. Retrieved 2014-08-26.
[5] Logic Friday program

69.5 External links


Quine-McCluskey Solver, by Hatem Hassan.

Quine-McCluskey algorithm implementation with a search of all solutions, by Frédéric Carpon.


Modified Quine-McCluskey Method, by Vitthal Jadhav, Amar Buchade.

All about Quine-McClusky, article by Jack Crenshaw comparing Quine-McClusky to Karnaugh maps

Karma 3, A set of logic synthesis tools including Karnaugh maps, Quine-McCluskey minimization, BDDs,
probabilities, teaching module and more. Logic Circuits Synthesis Labs (LogiCS) - UFRGS, Brazil.

A. Costa BFunc, QMC based Boolean logic simplifiers supporting up to 64 inputs / 64 outputs (independently)
or 32 outputs (simultaneously)

Python Implementation by Robert Dick, with an optimized version.


Python Implementation for symbolically reducing Boolean expressions.

Quinessence, an open source implementation written in Free Pascal by Marco Caminati.


QCA an open source, R based implementation used in the social sciences, by Adrian Dușa

A series of two articles describing the algorithm(s) implemented in R: first article and second article. The R
implementation is exhaustive and it offers complete and exact solutions. It processes up to 20 input variables.

minBool an implementation by Andrey Popov.


QMC applet, an applet for a step-by-step analysis of the QMC algorithm, by Christian Roth

C++ implementation SourceForge.net C++ program implementing the algorithm.


Perl Module by Darren M. Kulp.

Tutorial Tutorial on Quine-McCluskey and Petricks method (pdf).


Petrick C++ implementation (including Petrick) based on the tutorial above

C program Public Domain console based C program on SourceForge.net.


Tomaszewski, S. P., Celik, I. U., Antoniou, G. E., "WWW-based Boolean function minimization", INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, VOL 13; PART
4, pages 577-584, 2003.
For a fully worked out example visit: http://www.cs.ualberta.ca/~{}amaral/courses/329/webslides/Topic5-QuineMcCluskey/
sld024.htm
An excellent resource detailing each step: Olivier Coudert "Two-level logic minimization: an overview", INTEGRATION, the VLSI journal, 17-2, pp. 97-140, October 1994
The Boolean Bot: A JavaScript implementation for the web: http://booleanbot.com/

open source gui QMC minimizer


Computer Simulation Codes for the Quine-McCluskey Method, by Sourangsu Banerji.
Chapter 70

Random algebra

In set theory, the random algebra or random real algebra is the Boolean algebra of Borel sets of the unit interval
modulo the ideal of measure zero sets. It is used in random forcing to add random reals to a model of set theory.
The random algebra was studied by John von Neumann in 1935 (in work later published as Neumann (1998, p. 253))
who showed that it is not isomorphic to the Cantor algebra of Borel sets modulo meager sets. Random forcing was
introduced by Solovay (1970).

70.1 References
Bartoszyński, Tomek (2010), "Invariants of measure and category", Handbook of set theory, 2, Springer, pp.
491-555, MR 2768686
Bukovský, Lev (1977), "Random forcing", Set theory and hierarchy theory, V (Proc. Third Conf., Bierutowice,
1976), Lecture Notes in Math., 619, Berlin: Springer, pp. 101-117, MR 0485358
Solovay, Robert M. (1970), "A model of set-theory in which every set of reals is Lebesgue measurable", Annals
of Mathematics. Second Series, 92: 1-56, ISSN 0003-486X, JSTOR 1970696, MR 0265151
Neumann, John von (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton
University Press, ISBN 978-0-691-05893-1, MR 0120174

Chapter 71

Read-once function

In mathematics, a read-once function is a special type of Boolean function that can be described by a Boolean
expression in which each variable appears only once.
More precisely, the expression is required to use only the operations of logical conjunction, logical disjunction, and
negation. By applying De Morgan's laws, such an expression can be transformed into one in which negation is used
only on individual variables (still with each variable appearing only once). By replacing each negated variable with a
new positive variable representing its negation, such a function can be transformed into an equivalent positive read-once Boolean function, represented by a read-once expression without negations.[1]

71.1 Examples
For example, for three variables a, b, and c, the expressions

a ∧ b ∧ c

a ∧ (b ∨ c)
(a ∨ b) ∧ c
a ∨ b ∨ c

are all read-once (as are the other functions obtained by permuting the variables in these expressions). However, the
Boolean median operation, given by the expression

(a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c)

is not read-once: this formula has more than one copy of each variable, and there is no equivalent formula that uses
each variable only once.[2]

71.2 Characterization
The disjunctive normal form of a (positive) read-once function is not generally itself read-once. Nevertheless, it
carries important information about the function. In particular, if one forms a co-occurrence graph in which the
vertices represent variables, and edges connect pairs of variables that both occur in the same clause of the conjunctive
normal form, then the co-occurrence graph of a read-once function is necessarily a cograph. More precisely, a positive
Boolean function is read-once if and only if its co-occurrence graph is a cograph, and in addition every maximal clique
of the co-occurrence graph forms one of the conjunctions (prime implicants) of the disjunctive normal form.[3] That
is, when interpreted as a function on sets of vertices of its co-occurrence graph, a read-once function is true for
sets of vertices that contain a maximal clique, and false otherwise. For instance the median function has the same

315
316 CHAPTER 71. READ-ONCE FUNCTION

co-occurrence graph as the conjunction of three variables, a triangle graph, but the three-vertex complete subgraph
of this graph (the whole graph) forms a subset of a clause only for the conjunction and not for the median.[4] Two
variables of a positive read-once expression are adjacent in the co-occurrence graph if and only if their lowest common
ancestor in the expression is a conjunction,[5] so the expression tree can be interpreted as a cotree for the corresponding
cograph.[6]
An alternative characterization of positive read-once functions combines their disjunctive and conjunctive normal forms. A positive function of a given system of variables that uses all of its variables is read-once if and only if every prime implicant of the disjunctive normal form and every clause of the conjunctive normal form have exactly one variable in common.[7]
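The characterization above can be turned into a small brute-force test for positive functions presented by their prime implicants. The sketch below is illustrative only: the helper names are ours, it assumes the function is positive and given as a list of prime implicants, and it takes the edges of the co-occurrence graph to join variables sharing an implicant. It checks cograph-ness by complement reducibility and compares the graph's maximal cliques with the implicants:

```python
from itertools import combinations

def co_occurrence(dnf):
    """Graph whose vertices are the variables and whose edges join
    variables appearing together in some prime implicant."""
    vertices = frozenset().union(*dnf)
    edges = {frozenset(p) for term in dnf for p in combinations(sorted(term), 2)}
    return vertices, edges

def components(vertices, edges):
    """Connected components, by depth-first search."""
    adj = {v: set() for v in vertices}
    for e in edges:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v not in seen:
            stack, comp = [v], set()
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(adj[u] - comp)
            seen |= comp
            comps.append(frozenset(comp))
    return comps

def is_cograph(vertices, edges):
    """Complement reducibility: every induced subgraph on >= 2 vertices
    must be disconnected either in the graph or in its complement."""
    if len(vertices) <= 1:
        return True
    comps = components(vertices, edges)
    if len(comps) > 1:
        return all(is_cograph(c, {e for e in edges if e <= c}) for c in comps)
    co_edges = {frozenset(p) for p in combinations(sorted(vertices), 2)} - edges
    comps = components(vertices, co_edges)
    if len(comps) == 1:
        return False  # connected in both the graph and its complement
    return all(is_cograph(c, {e for e in co_edges if e <= c}) for c in comps)

def maximal_cliques(vertices, edges):
    """Brute force over all vertex subsets; fine for tiny examples."""
    cliques = [frozenset(s)
               for r in range(1, len(vertices) + 1)
               for s in combinations(sorted(vertices), r)
               if all(frozenset(p) in edges for p in combinations(s, 2))]
    return {c for c in cliques if not any(c < d for d in cliques)}

def is_read_once(dnf):
    terms = {frozenset(t) for t in dnf}
    vertices, edges = co_occurrence(dnf)
    return is_cograph(vertices, edges) and maximal_cliques(vertices, edges) == terms
```

With this sketch the conjunction of three variables (one implicant {a, b, c}) tests as read-once, while the median's implicants {a, b}, {a, c}, {b, c} produce a triangle whose only maximal clique, {a, b, c}, is not an implicant, so the median is rejected, in line with the characterization.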

71.3 Recognition
It is possible to recognize read-once functions from their disjunctive normal form expressions in polynomial time.[8]
It is also possible to find a read-once expression for a positive read-once function, given access to the function only
through a black box that allows its evaluation at any truth assignment, using only a quadratic number of function
evaluations.[9]

71.4 Notes
[1] Golumbic & Gurvich (2011), p. 519.

[2] Golumbic & Gurvich (2011), p. 520.

[3] Golumbic & Gurvich (2011), Theorem 10.1, p. 521; Golumbic, Mintz & Rotics (2006).

[4] Golumbic & Gurvich (2011), Examples f₂ and f₃, p. 521.

[5] Golumbic & Gurvich (2011), Lemma 10.1, p. 529.

[6] Golumbic & Gurvich (2011), Remark 10.4, pp. 540–541.

[7] Gurvich (1977); Mundici (1989); Karchmer et al. (1993).

[8] Golumbic & Gurvich (2011), Theorem 10.8, p. 541; Golumbic, Mintz & Rotics (2006); Golumbic, Mintz & Rotics (2008).

[9] Golumbic & Gurvich (2011), Theorem 10.9, p. 548; Angluin, Hellerstein & Karpinski (1993).

71.5 References
Angluin, Dana; Hellerstein, Lisa; Karpinski, Marek (1993), "Learning read-once formulas with queries", Journal of the ACM, 40 (1): 185–210, MR 1202143, doi:10.1145/138027.138061.
Golumbic, Martin C.; Gurvich, Vladimir (2011), "Read-once functions" (PDF), in Crama, Yves; Hammer, Peter L., Boolean Functions, Encyclopedia of Mathematics and its Applications, 142, Cambridge University Press, Cambridge, pp. 519–560, ISBN 978-0-521-84751-3, MR 2742439, doi:10.1017/CBO9780511852008.
Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2006), "Factoring and recognition of read-once functions using cographs and normality and the readability of functions associated with partial k-trees", Discrete Applied Mathematics, 154 (10): 1465–1477, MR 2222833, doi:10.1016/j.dam.2005.09.016.
Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2008), "An improvement on the complexity of factoring read-once Boolean functions", Discrete Applied Mathematics, 156 (10): 1633–1636, MR 2432929, doi:10.1016/j.dam.2008.02.011.
Gurvich, V. A. (1977), "Repetition-free Boolean functions", Uspekhi Matematicheskikh Nauk, 32 (1(193)): 183–184, MR 0441560.
Karchmer, M.; Linial, N.; Newman, I.; Saks, M.; Wigderson, A. (1993), "Combinatorial characterization of read-once formulae", Discrete Mathematics, 114 (1–3): 275–282, MR 1217758, doi:10.1016/0012-365X(93)90372-Z.
Mundici, Daniele (1989), "Functions computed by monotone Boolean formulas with no repeated variables", Theoretical Computer Science, 66 (1): 113–114, MR 1018849, doi:10.1016/0304-3975(89)90150-3.
Chapter 72

Reed–Muller expansion

In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function.
For a Boolean function f(x₁, …, xₙ) we set, with respect to xᵢ:

  f_{xᵢ}(x) = f(x₁, …, xᵢ₋₁, 1, xᵢ₊₁, …, xₙ)
  f_{x̄ᵢ}(x) = f(x₁, …, xᵢ₋₁, 0, xᵢ₊₁, …, xₙ)
  ∂f/∂xᵢ = f_{xᵢ}(x) ⊕ f_{x̄ᵢ}(x)

as the positive and negative cofactors of f, and the Boolean derivation of f, where ⊕ denotes the XOR operator.
Then we have for the Reed–Muller or positive Davio expansion:

  f = f_{x̄ᵢ} ⊕ xᵢ ∂f/∂xᵢ.

This equation is written in a way that resembles a Taylor expansion of f about xᵢ = 0. There is a similar
decomposition corresponding to an expansion about xᵢ = 1 (negative Davio):

  f = f_{xᵢ} ⊕ x̄ᵢ ∂f/∂xᵢ.

Repeated application of the Reed–Muller expansion results in an XOR polynomial in x₁, …, xₙ:

  f = a₁ ⊕ a₂x₁ ⊕ a₃x₂ ⊕ a₄x₁x₂ ⊕ … ⊕ a_{2ⁿ} x₁x₂⋯xₙ

This representation is unique and sometimes also called Reed–Muller expansion.[1]


E.g. for n = 2 the result would be

  f(x₁, x₂) = f_{x̄₁x̄₂} ⊕ (∂f_{x̄₂}/∂x₁) x₁ ⊕ (∂f_{x̄₁}/∂x₂) x₂ ⊕ (∂²f/∂x₁∂x₂) x₁x₂

where

  ∂²f/∂x₁∂x₂ = f_{x₁x₂} ⊕ f_{x₁x̄₂} ⊕ f_{x̄₁x₂} ⊕ f_{x̄₁x̄₂}.

For n = 3 the result would be

  f(x₁, x₂, x₃) = f_{x̄₁x̄₂x̄₃} ⊕ (∂f_{x̄₂x̄₃}/∂x₁) x₁ ⊕ (∂f_{x̄₁x̄₃}/∂x₂) x₂ ⊕ (∂f_{x̄₁x̄₂}/∂x₃) x₃ ⊕ (∂²f_{x̄₃}/∂x₁∂x₂) x₁x₂ ⊕ (∂²f_{x̄₂}/∂x₁∂x₃) x₁x₃ ⊕ (∂²f_{x̄₁}/∂x₂∂x₃) x₂x₃ ⊕ (∂³f/∂x₁∂x₂∂x₃) x₁x₂x₃

where

  ∂³f/∂x₁∂x₂∂x₃ = f_{x₁x₂x₃} ⊕ f_{x₁x₂x̄₃} ⊕ f_{x₁x̄₂x₃} ⊕ f_{x₁x̄₂x̄₃} ⊕ f_{x̄₁x₂x₃} ⊕ f_{x̄₁x₂x̄₃} ⊕ f_{x̄₁x̄₂x₃} ⊕ f_{x̄₁x̄₂x̄₃}.
This n = 3 case can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when
moving along the edge from x̄₁x̄₂x̄₃ to x₁x̄₂x̄₃, XOR up the functions of the two end-vertices of the edge in order
to obtain the coefficient of x₁. To move from x̄₁x̄₂x̄₃ to x₁x₂x̄₃ there are two shortest paths: one is a two-edge path
passing through x₁x̄₂x̄₃ and the other one a two-edge path passing through x̄₁x₂x̄₃. These two paths encompass
four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of x₁x₂. Finally, to
move from x̄₁x̄₂x̄₃ to x₁x₂x₃ there are six shortest paths which are three-edge paths, and these six paths encompass
all the vertices of the cube, therefore the coefficient of x₁x₂x₃ can be obtained by XORing up the functions of all
eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.)
The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve
non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the
Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an
algorithm for obtaining coefficients from a truth table by XORing up values of the function from appropriate rows of a
truth table, even for hyperdimensional cases (n = 4 and above). Between the starting and destination rows of a truth
table, some variables have their values remaining fixed: find all the rows of the truth table such that those variables
likewise remain fixed at those given values, then XOR up their functions and the result should be the coefficient for
the monomial corresponding to the destination row. (In such a monomial, include any variable whose value is 1 at that
row and exclude any variable whose value is 0 at that row, instead of including the negation of the variable whose
value is 0, as in the minterm style.)
Similar to binary decision diagrams (BDDs), where nodes represent Shannon expansions with respect to the corresponding
variable, we can define a decision diagram based on the Reed–Muller expansion. These decision diagrams are called
functional BDDs (FBDDs).

72.1 Derivations
The Reed–Muller expansion can be derived from the XOR-form of the Shannon decomposition, using the identity
x̄ = 1 ⊕ x:

  f = xᵢ f_{xᵢ} ⊕ x̄ᵢ f_{x̄ᵢ}
    = xᵢ f_{xᵢ} ⊕ (1 ⊕ xᵢ) f_{x̄ᵢ}
    = xᵢ f_{xᵢ} ⊕ f_{x̄ᵢ} ⊕ xᵢ f_{x̄ᵢ}
    = f_{x̄ᵢ} ⊕ xᵢ ∂f/∂xᵢ.
Derivation of the expansion for n = 2:

  f = f_{x̄₁} ⊕ x₁ ∂f/∂x₁
    = (f_{x̄₂} ⊕ x₂ ∂f/∂x₂)_{x̄₁} ⊕ x₁ ∂(f_{x̄₂} ⊕ x₂ ∂f/∂x₂)/∂x₁
    = f_{x̄₁x̄₂} ⊕ x₂ ∂f_{x̄₁}/∂x₂ ⊕ x₁ (∂f_{x̄₂}/∂x₁ ⊕ x₂ ∂²f/∂x₁∂x₂)
    = f_{x̄₁x̄₂} ⊕ x₂ ∂f_{x̄₁}/∂x₂ ⊕ x₁ ∂f_{x̄₂}/∂x₁ ⊕ x₁x₂ ∂²f/∂x₁∂x₂.
Derivation of the second-order Boolean derivative:

  ∂²f/∂x₁∂x₂ = ∂/∂x₁ (∂f/∂x₂) = ∂/∂x₁ (f_{x₂} ⊕ f_{x̄₂})
             = (f_{x₂} ⊕ f_{x̄₂})_{x₁} ⊕ (f_{x₂} ⊕ f_{x̄₂})_{x̄₁}
             = f_{x₁x₂} ⊕ f_{x₁x̄₂} ⊕ f_{x̄₁x₂} ⊕ f_{x̄₁x̄₂}.

72.2 See also

Algebraic normal form (ANF)
Ring sum normal form (RSNF)
Zhegalkin normal form
Karnaugh map
Irving Stoy Reed
David Eugene Muller
Reed–Muller code

72.3 References
[1] Kebschull, U.; Schubert, E.; Rosenstiel, W. (1992). "Multilevel logic synthesis based on functional decision diagrams".
Proceedings 3rd European Conference on Design Automation.

72.4 Further reading

Kebschull, U.; Rosenstiel, W. (1993). "Efficient graph-based computation and manipulation of functional decision diagrams". Proceedings 4th European Conference on Design Automation: 278–282.

Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.
Chapter 73

Relation algebra

Not to be confused with relational algebra, a framework for finitary relations and relational databases.

In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution
called converse, a unary operation. The motivating example of a relation algebra is the algebra 2^(X²) of all binary
relations on a set X, that is, subsets of the cartesian square X², with R•S interpreted as the usual composition of
binary relations R and S, and with the converse of R interpreted as the inverse relation.
Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated
in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by
Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free
treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be
conducted without variables.

73.1 Denition
A relation algebra (L, ∧, ∨, ⁻, 0, 1, •, I, ˘) is an algebraic structure equipped with the Boolean operations of conjunction
x∧y, disjunction x∨y, and negation x⁻, the Boolean constants 0 and 1, the relational operations of composition
x•y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations
constituting an axiomatization of relation algebras. Roughly, a relation algebra is to a system of binary relations on a
set containing the empty (0), complete (1), and identity (I) relations and closed under these five operations as a group
is to a system of permutations of a set containing the identity permutation and closed under composition and inverse.
However, the first-order theory of relation algebras is not complete for such systems of binary relations.
Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x◁y = x•y˘ and, dually, x▷y
= x˘•y. Jónsson and Tsinakis showed that I◁x = x▷I, and that both are equal to x˘. Hence a relation algebra can
equally well be defined as an algebraic structure (L, ∧, ∨, ⁻, 0, 1, •, I, ◁, ▷). The advantage of this signature over
the usual one is that a relation algebra can then be defined in full simply as a residuated Boolean algebra for which
I◁x is an involution, that is, I◁(I◁x) = x. The latter condition can be thought of as the relational counterpart of the
equation 1/(1/x) = x for ordinary arithmetic reciprocal, and some authors use reciprocal as a synonym for converse.
Since residuated Boolean algebras are axiomatized with finitely many identities, so are relation algebras. Hence the
latter form a variety, the variety RA of relation algebras. Expanding the above definition as equations yields the
following finite axiomatization.

73.1.1 Axioms
The axioms B1–B10 below are adapted from Givant (2006: 283), and were first set out by Tarski in 1948.[1]
L is a Boolean algebra under binary disjunction, ∨, and unary complementation ()⁻:

B1: A ∨ B = B ∨ A
B2: A ∨ (B ∨ C) = (A ∨ B) ∨ C
B3: (A⁻ ∨ B)⁻ ∨ (A⁻ ∨ B⁻)⁻ = A

This axiomatization of Boolean algebra is due to Huntington (1933). Note that the meet of the implied Boolean
algebra is not the • operator (even though it distributes over ∨ like a meet does), nor is the 1 of the Boolean algebra
the I constant.
L is a monoid under binary composition (•) and nullary identity I:

B4: A•(B•C) = (A•B)•C
B5: A•I = A

Unary converse ()˘ is an involution with respect to composition:

B6: A˘˘ = A
B7: (A•B)˘ = B˘•A˘

Axiom B6 defines conversion as an involution, whereas B7 expresses the antidistributive property of conversion
relative to composition.[2]
Converse and composition distribute over disjunction:

B8: (A∨B)˘ = A˘ ∨ B˘
B9: (A∨B)•C = (A•C) ∨ (B•C)

B10 is Tarski's equational form of the fact, discovered by Augustus De Morgan, that A•B ≤ C⁻ ⟺ A˘•C ≤ B⁻ ⟺ C•B˘ ≤ A⁻:

B10: (A˘•(A•B)⁻) ∨ B⁻ = B⁻

These axioms are ZFC theorems; for the purely Boolean B1–B3, this fact is trivial. After each of the following axioms
is shown the number of the corresponding theorem in Chapter 3 of Suppes (1960), an exposition of ZFC: B4 27, B5
45, B6 14, B7 26, B8 16, B9 23.

73.2 Expressing properties of binary relations in RA


The following table shows how many of the usual properties of binary relations can be expressed as succinct RA
equalities or inequalities. Below, an inequality of the form A ≤ B is shorthand for the Boolean equation A ∨ B = B.
The most complete set of results of this nature is Chapter C of Carnap (1958), where the notation is rather distant
from that of this entry. Chapter 3.2 of Suppes (1960) contains fewer results, presented as ZFC theorems and using a
notation that more resembles that of this entry. Neither Carnap nor Suppes formulated their results using the RA of
this entry, or in an equational manner.

73.3 Expressive power


The metamathematics of RA are discussed at length in Tarski and Givant (1987), and more briefly in Givant (2006).
RA consists entirely of equations manipulated using nothing more than uniform replacement and the substitution of
equals for equals. Both rules are wholly familiar from school mathematics and from abstract algebra generally. Hence
RA proofs are carried out in a manner familiar to all mathematicians, unlike the case in mathematical logic generally.
RA can express any (and up to logical equivalence, exactly the) first-order logic (FOL) formulas containing no more
than three variables. (A given variable can be quantified multiple times and hence quantifiers can be nested arbitrarily
deeply by reusing variables.) Surprisingly, this fragment of FOL suffices to express Peano arithmetic and almost
all axiomatic set theories ever proposed. Hence RA is, in effect, a way of algebraizing nearly all mathematics,
while dispensing with FOL and its connectives, quantifiers, turnstiles, and modus ponens. Because RA can express
Peano arithmetic and set theory, Gödel's incompleteness theorems apply to it; RA is incomplete, incompletable, and
undecidable. (N.B. The Boolean algebra fragment of RA is complete and decidable.)
The representable relation algebras, forming the class RRA, are those relation algebras isomorphic to some relation
algebra consisting of binary relations on some set, and closed under the intended interpretation of the RA
operations. It is easily shown, e.g. using the method of pseudoelementary classes, that RRA is a quasivariety, that
is, axiomatizable by a universal Horn theory. In 1950, Roger Lyndon proved the existence of equations holding in
RRA that did not hold in RA. Hence the variety generated by RRA is a proper subvariety of the variety RA. In
1955, Alfred Tarski showed that RRA is itself a variety. In 1964, Donald Monk showed that RRA has no finite
axiomatization, unlike RA, which is finitely axiomatized by definition.

73.3.1 Q-Relation Algebras


An RA is a Q-relation algebra (QRA) if, in addition to B1–B10, there exist some A and B such that (Tarski and
Givant 1987: 8.4):

Q0: A˘•A ≤ I
Q1: B˘•B ≤ I
Q2: A˘•B = 1

Essentially these axioms imply that the universe has a (non-surjective) pairing relation whose projections are A and
B. It is a theorem that every QRA is a RRA (proof by Maddux, see Tarski & Givant 1987: 8.4(iii)).
Every QRA is representable (Tarski and Givant 1987). That not every relation algebra is representable is a fundamental
way RA differs from QRA and Boolean algebras, which, by Stone's representation theorem for Boolean
algebras, are always representable as sets of subsets of some set, closed under union, intersection, and complement.

73.4 Examples
1. Any Boolean algebra can be turned into a RA by interpreting conjunction as composition (the monoid multiplication
•), i.e. x•y is defined as x∧y. This interpretation requires that converse interpret identity (y˘ = y), and that both
residuals y\x and x/y interpret the conditional y→x (i.e., ¬y∨x).
2. The motivating example of a relation algebra depends on the definition of a binary relation R on a set X as any
subset R ⊆ X², where X² is the Cartesian square of X. The power set 2^(X²) consisting of all binary relations on X is
a Boolean algebra. While 2^(X²) can be made a relation algebra by taking R•S = R∧S, as per example (1) above, the
standard interpretation of • is instead relational composition: x(R•S)z = ∃y: xRy ∧ ySz. That is, the ordered pair (x,z) belongs to the relation R•S
just when there exists y ∈ X such that (x,y) ∈ R and (y,z) ∈ S. This interpretation uniquely determines R\S as consisting
of all pairs (y,z) such that for all x ∈ X, if xRy then xSz. Dually, S/R consists of all pairs (x,y) such that for all z ∈ X,
if yRz then xSz. The translation R˘ = ¬(R\¬I) then establishes the converse R˘ of R as consisting of all pairs (y,x) such
that (x,y) ∈ R.
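The operations of example 2 are easy to experiment with on finite sets. The following sketch (the helper names are our own) implements composition, converse, and the right residual for relations on a finite X, so the definitions can be checked directly:

```python
def compose(R, S):
    """R•S: pairs (x, z) such that some y has (x, y) in R and (y, z) in S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    """The converse of R: every pair reversed."""
    return {(y, x) for (x, y) in R}

def right_residual(R, S, X):
    """R\\S: all pairs (y, z) such that for every x in X, xRy implies xSz."""
    return {(y, z) for y in X for z in X
            if all((x, z) in S for x in X if (x, y) in R)}
```

On small examples one can also confirm axioms numerically, e.g. B7: the converse of a composition equals the composition of the converses in reverse order.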
3. An important generalization of the previous example is the power set 2^E, where E ⊆ X² is any equivalence relation on
the set X. This is a generalization because X² is itself an equivalence relation, namely the complete relation consisting
of all pairs. While 2^E is not a subalgebra of 2^(X²) when E ≠ X² (since in that case it does not contain the relation X²,
the top element 1 being E instead of X²), it is nevertheless turned into a relation algebra using the same definitions
of the operations. Its importance resides in the definition of a representable relation algebra as any relation algebra
isomorphic to a subalgebra of the relation algebra 2^E for some equivalence relation E on some set. The previous
section says more about the relevant metamathematics.
4. Let G be a group. Then the power set 2^G is a relation algebra with the obvious Boolean algebra operations, composition
given by the product of group subsets, the converse by the inverse subset (A⁻¹ = {a⁻¹ | a ∈ A}), and
the identity by the singleton subset {e}. There is a relation algebra homomorphism embedding 2^G in 2^(G×G) which
sends each subset A ⊆ G to the relation R_A = {(g, h) ∈ G × G | h ∈ Ag}. The image of this homomorphism is
the set of all right-invariant relations on G.
5. If group sum or product interprets composition, group inverse interprets converse, group identity interprets I, and
if R is a one-to-one correspondence, so that R˘•R = R•R˘ = I,[3] then L is a group as well as a monoid. B4–B7 become
well-known theorems of group theory, so that RA becomes a proper extension of group theory as well as of Boolean
algebra.

73.5 Historical remarks


De Morgan founded RA in 1860, but C. S. Peirce took it much further and became fascinated with its philosophical
power. The work of De Morgan and Peirce came to be known mainly in the extended and definitive form Ernst
Schröder gave it in Vol. 3 of his Vorlesungen (1890–1905). Principia Mathematica drew strongly on Schröder's RA,
but acknowledged him only as the inventor of the notation. In 1912, Alwin Korselt proved that a particular formula
in which the quantifiers were nested four deep had no RA equivalent.[4] This fact led to a loss of interest in RA until
Tarski (1941) began writing about it. His students have continued to develop RA down to the present day. Tarski
returned to RA in the 1970s with the help of Steven Givant; this collaboration resulted in the monograph by Tarski
and Givant (1987), the definitive reference for this subject. For more on the history of RA, see Maddux (1991, 2006).

73.6 Software
RelMICS / Relational Methods in Computer Science maintained by Wolfram Kahl

Carsten Sinz: ARA / An Automatic Theorem Prover for Relation Algebras

73.7 See also

73.8 Footnotes
[1] Alfred Tarski (1948), "Abstract: Representation Problems for Relation Algebras", Bulletin of the AMS 54: 80.

[2] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. pp. 4 and 8.
ISBN 978-3-211-82971-4.

[3] Tarski, A. (1941), p. 87.

[4] Korselt did not publish his finding. It was first published in Leopold Loewenheim (1915), "Über Möglichkeiten im
Relativkalkül", Mathematische Annalen 76: 447–470. Translated as "On possibilities in the calculus of relatives" in Jean van
Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 228–251.

73.9 References
Rudolf Carnap (1958), Introduction to Symbolic Logic and its Applications. Dover Publications.
Givant, Steven (2006), "The calculus of relations as a foundation for mathematics", Journal of Automated Reasoning, 37: 277–322, doi:10.1007/s10817-006-9062-x.
Halmos, P. R., 1960. Naive Set Theory. Van Nostrand.
Leon Henkin, Alfred Tarski, and Monk, J. D., 1971. Cylindric Algebras, Part 1, and 1985, Part 2. North Holland.
Hirsch, R., and Hodkinson, I., 2002, Relation Algebra by Games, vol. 147 in Studies in Logic and the Foundations of Mathematics. Elsevier Science.
Jónsson, Bjarni; Tsinakis, Constantine (1993), "Relation algebras as residuated Boolean algebras", Algebra Universalis, 30: 469–78, doi:10.1007/BF01195378.
Maddux, Roger (1991), "The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations" (PDF), Studia Logica, 50 (3–4): 421–455, doi:10.1007/BF00370681.
Maddux, Roger, 2006. Relation Algebras, vol. 150 in Studies in Logic and the Foundations of Mathematics. Elsevier Science.
Patrick Suppes, 1960. Axiomatic Set Theory. Van Nostrand. Dover reprint, 1972. Chapter 3.
Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press.
Tarski, Alfred (1941), "On the calculus of relations", Journal of Symbolic Logic, 6: 73–89, JSTOR 2268577, doi:10.2307/2268577.
Tarski, Alfred, and Givant, Steven, 1987. A Formalization of Set Theory without Variables. Providence RI: American Mathematical Society.

73.10 External links


Yohji AKAMA, Yasuo Kawahara, and Hitoshi Furusawa, "Constructing Allegory from Relation Algebra and
Representation Theorems."

Richard Bird, Oege de Moor, Paul Hoogendijk, "Generic Programming with Relations and Functors."
R.P. de Freitas and Viana, "A Completeness Result for Relation Algebra with Binders."

Peter Jipsen:
Relation algebras. In Mathematical structures. If there are problems with LaTeX, see an old HTML
version here.
"Foundations of Relations and Kleene Algebra."
"Computer Aided Investigations of Relation Algebras."
"A Gentzen System And Decidability For Residuated Lattices.
Vaughan Pratt:

"Origins of the Calculus of Binary Relations." A historical treatment.


"The Second Calculus of Binary Relations."
Priss, Uta:

"An FCA interpretation of Relation Algebra."


"Relation Algebra and FCA" Links to publications and software

Kahl, Wolfram, and Schmidt, Gunther, "Exploring (Finite) Relation Algebras Using Tools Written in Haskell."
See homepage of the whole project.
Chapter 74

Residuated Boolean algebra

In mathematics, a residuated Boolean algebra is a residuated lattice whose lattice structure is that of a Boolean
algebra. Examples include Boolean algebras with the monoid taken to be conjunction, the set of all formal languages
over a given alphabet under concatenation, the set of all binary relations on a given set X under relational composition,
and more generally the power set of any equivalence relation, again under relational composition. The original
application was to relation algebras as a finitely axiomatized generalization of the binary relation example, but there
exist interesting examples of residuated Boolean algebras that are not relation algebras, such as the language example.

74.1 Denition
A residuated Boolean algebra is an algebraic structure (L, ∧, ∨, ¬, 0, 1, •, I, \, /) such that

(i) (L, ∧, ∨, •, I, \, /) is a residuated lattice, and
(ii) (L, ∧, ∨, ¬, 0, 1) is a Boolean algebra.

An equivalent signature better suited to the relation algebra application is (L, ∧, ∨, ¬, 0, 1, •, I, ▷, ◁), where the unary
operations x\ and x▷ are intertranslatable in the manner of De Morgan's laws via

x\y = ¬(x▷¬y), x▷y = ¬(x\¬y), and dually /y and ◁y as

x/y = ¬(¬x◁y), x◁y = ¬(¬x/y),

with the residuation axioms in the residuated lattice article reorganized accordingly (replacing z by ¬z) to read

(x▷z)∧y = 0 ⟺ (x•y)∧z = 0 ⟺ (z◁y)∧x = 0

This De Morgan dual reformulation is motivated and discussed in more detail in the section below on conjugacy.
Since residuated lattices and Boolean algebras are each definable with finitely many equations, so are residuated
Boolean algebras, whence they form a finitely axiomatizable variety.

74.2 Examples
1. Any Boolean algebra, with the monoid multiplication • taken to be conjunction and both residuals taken to be
material implication x→y. Of the remaining 15 binary Boolean operations that might be considered in place of
conjunction for the monoid multiplication, only five meet the monotonicity requirement, namely 0, 1, x, y, and
x∨y. Setting y = z = 0 in the residuation axiom y ≤ x\z ⟺ x•y ≤ z, we have 0 ≤ x\0 ⟺ x•0 ≤ 0, which is falsified
by taking x = 1 when x•y = 1, x, or x∨y. The dual argument for z/y rules out x•y = y. This just leaves x•y = 0 (a
constant binary operation independent of x and y), which satisfies almost all the axioms when the residuals are
both taken to be the constant operation x/y = x\y = 1. The axiom it fails is x•I = x = I•x, for want of a suitable
value for I. Hence conjunction is the only binary Boolean operation making the monoid multiplication that of
a residuated Boolean algebra.

2. The power set 2^(X²), made a Boolean algebra as usual with ∩, ∪, and complement relative to X², and made a
monoid with relational composition. The monoid unit I is the identity relation {(x,x) | x ∈ X}. The right residual
R\S is defined by x(R\S)y if and only if for all z in X, zRx implies zSy. Dually, the left residual S/R is defined by
y(S/R)x if and only if for all z in X, xRz implies ySz.
3. The power set 2^(Σ*), made a Boolean algebra as for example 2, but with language concatenation for the monoid.
Here the set Σ is used as an alphabet while Σ* denotes the set of all finite (including empty) words over that
alphabet. The concatenation LM of languages L and M consists of all words uv such that u ∈ L and v ∈ M.
The monoid unit is the language {ε} consisting of just the empty word ε. The right residual M\L consists of
all words w over Σ such that Mw ⊆ L. The left residual L/M is the same with wM in place of Mw.
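For example 3 the residuals range over the infinite set Σ*, so any executable illustration has to restrict attention to a finite universe of candidate words. With that caveat (and with helper names of our own), a sketch:

```python
def concat(L, M):
    """Concatenation LM of two finite languages."""
    return {u + v for u in L for v in M}

def right_residual(M, L, candidates):
    """M\\L, restricted to a finite set of candidate words:
    the words w such that Mw (i.e. {m + w for m in M}) is contained in L."""
    return {w for w in candidates if concat(M, {w}) <= L}

def left_residual(L, M, candidates):
    """L/M, restricted likewise: the words w such that wM is contained in L."""
    return {w for w in candidates if concat({w}, M) <= L}
```

For instance, with M = {"a"} and L = {"ab", "ac"}, the right residual M\L (among short candidate words) is {"b", "c"}, since these are exactly the words w with "a" + w in L.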

74.3 Conjugacy
The De Morgan duals ▷ and ◁ of residuation arise as follows. Among residuated lattices, Boolean algebras are special
by virtue of having a complementation operation ¬. This permits an alternative expression of the three inequalities

y ≤ x\z ⟺ x•y ≤ z ⟺ x ≤ z/y

in the axiomatization of the two residuals in terms of disjointness, via the equivalence x ≤ y ⟺ x∧¬y = 0. Abbreviating
x∧y = 0 to x # y as the expression of their disjointness, and substituting ¬z for z in the axioms, they become with a
little Boolean manipulation

¬(x\¬z) # y ⟺ x•y # z ⟺ ¬(¬z/y) # x

Now ¬(x\¬z) is reminiscent of De Morgan duality, suggesting that x\ be thought of as a unary operation f, defined by
f(y) = x\y, that has a De Morgan dual ¬f(¬y), analogous to ∀xφ(x) = ¬∃x¬φ(x). Denoting this dual operation as x▷,
we define x▷z as ¬(x\¬z). Similarly we define another operation z◁y as ¬(¬z/y). By analogy with x\ as the residual
operation associated with the operation x•, we refer to x▷ as the conjugate operation, or simply conjugate, of x•.
Likewise ◁y is the conjugate of •y. Unlike residuals, conjugacy is an equivalence relation between operations: if f
is the conjugate of g then g is also the conjugate of f, i.e. the conjugate of the conjugate of f is f. Another advantage
of conjugacy is that it becomes unnecessary to speak of right and left conjugates, that distinction now being inherited
from the difference between x• and •x, which have as their respective conjugates x▷ and ◁x. (But this advantage
accrues also to residuals when x\ is taken to be the residual operation to x•.)
All this yields (along with the Boolean algebra and monoid axioms) the following equivalent axiomatization of a
residuated Boolean algebra.

y # x▷z ⟺ x•y # z ⟺ x # z◁y

With this signature it remains the case that this axiomatization can be expressed as finitely many equations.

74.4 Converse
In examples 2 and 3 it can be shown that x▷I = I◁x. In example 2 both sides equal the converse x˘ of x, while
in example 3 both sides are I when x contains the empty word and 0 otherwise. In the former case x˘˘ = x. This is
impossible for the latter because x▷I retains hardly any information about x. Hence in example 2 we can substitute
x˘ for x in x▷I = I◁x and cancel (soundly) to give

x˘▷I = x = I◁x˘.

x˘˘ = x can be proved from these two equations. Tarski's notion of a relation algebra can be defined as a residuated
Boolean algebra having an operation x˘ satisfying these two equations.
The cancellation step in the above is not possible for example 3, which therefore is not a relation algebra, x˘ being
uniquely determined as x▷I.
Consequences of this axiomatization of converse include x˘˘ = x, ¬(x˘) = (¬x)˘, (x∨y)˘ = x˘∨y˘, and (x•y)˘ = y˘•x˘.
328 CHAPTER 74. RESIDUATED BOOLEAN ALGEBRA

74.5 References
Bjarni Jónsson and Constantine Tsinakis, "Relation algebras as residuated Boolean algebras", Algebra Universalis,
30 (1993) 469–478.
Peter Jipsen, Computer aided investigations of relation algebras, Ph.D. Thesis, Vanderbilt University, May 1992.
Chapter 75

Robbins algebra

In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by ∨, and
a single unary operation usually denoted by ¬. These operations satisfy the following axioms:
For all elements a, b, and c:

1. Associativity: a ∨ (b ∨ c) = (a ∨ b) ∨ c
2. Commutativity: a ∨ b = b ∨ a
3. Robbins equation: ¬(¬(a ∨ b) ∨ ¬(a ∨ ¬b)) = a

For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved
in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra".
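One direction of this equivalence, that every Boolean algebra satisfies the Robbins equation, is easy to check by machine. The sketch below (the function name is ours) verifies it exhaustively in the Boolean algebra of subsets of a small finite set, encoded as bitmasks. The hard direction, that the equation forces a Boolean algebra, is McCune's theorem, and no finite check of this kind can establish it.

```python
def check_robbins(n_atoms=3):
    """Exhaustively verify not(not(a or b) or not(a or not b)) == a over
    all subsets of an n_atoms-element set, with bitwise | as join and
    bitwise complement within the universe as negation."""
    universe = (1 << n_atoms) - 1
    neg = lambda x: x ^ universe
    return all(neg(neg(a | b) | neg(a | neg(b))) == a
               for a in range(universe + 1)
               for b in range(universe + 1))
```

Calling `check_robbins()` returns True, as does the check for any other small number of atoms.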

75.1 History
In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above,
plus:

Huntington's equation: ¬(¬a ∨ b) ∨ ¬(¬a ∨ ¬b) = a.

From these axioms, Huntington derived the usual axioms of Boolean algebra.
Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could
be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra: ∨ would
interpret Boolean join and ¬ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from
the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins
algebra".
Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean
algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem,
but failed to find a proof or counterexample.
William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof
of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998)
simplified McCune's machine proof.

75.2 See also


Boolean algebra
Algebraic structure


75.3 References
Dahn, B. I. (1998), Abstract to "Robbins Algebras Are Boolean: A Revision of McCune's Computer-Generated
Solution of Robbins Problem", Journal of Algebra 208(2): 526–32.
Mann, Allen (2003), "A Complete Proof of the Robbins Conjecture".

William McCune, "Robbins Algebras Are Boolean", with links to proofs and other papers.
Chapter 76

Sigma-algebra

"σ-algebra" redirects here. For an algebraic structure admitting a given signature of operations, see Universal algebra.

In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a collection Σ of subsets
of X that includes the empty subset, is closed under complement, and is closed under countable unions and countable
intersections. The pair (X, Σ) is called a measurable space.
A σ-algebra specializes the concept of a set algebra. An algebra of sets needs only to be closed under the union or
intersection of finitely many subsets.[1]
The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which
a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the
foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which
can be assigned probabilities. Also, in probability, σ-algebras are pivotal in the definition of conditional expectation.
In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic,[2] particularly
when the statistic is a function or a random process and the notion of conditional density is not applicable.
If X = {a, b, c, d}, one possible σ-algebra on X is Σ = { ∅, {a, b}, {c, d}, {a, b, c, d} }, where ∅ is the empty set. In
general, a finite algebra is always a σ-algebra.
If {A1, A2, A3, ...} is a countable partition of X then the collection of all unions of sets in the partition (including
the empty set) is a σ-algebra.
A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding
in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite
iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as
the Borel hierarchy).

76.1 Motivation
There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing
partial information characterized by sets.

76.1.1 Measure
A measure on X is a function that assigns a non-negative real number to subsets of X; this can be thought of as making
precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their
individual sizes, even for an infinite sequence of disjoint sets.
One would like to assign a size to every subset of X, but in many natural settings, this is not possible. For example,
the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the
real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers
instead a smaller collection of privileged subsets of X. These subsets will be called the measurable sets. They are


closed under operations that one would expect for measurable sets; that is, the complement of a measurable set is a
measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with
these properties are called σ-algebras.

76.1.2 Limits of sets


Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of
sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on
σ-algebras.

The limit supremum of a sequence A1, A2, A3, ..., each of which is a subset of X, is

lim sup_{n→∞} An = ⋂_{n=1}^{∞} ⋃_{m=n}^{∞} Am.

The limit infimum of a sequence A1, A2, A3, ..., each of which is a subset of X, is

lim inf_{n→∞} An = ⋃_{n=1}^{∞} ⋂_{m=n}^{∞} Am.

If, in fact,

lim inf_{n→∞} An = lim sup_{n→∞} An,

then lim_{n→∞} An exists as that common set.
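For a concrete feel for these formulas, the sketch below (hypothetical names, assuming a finite truncation of the infinite unions and intersections, which matches the true set limits only because the example sequence is periodic) computes both limits for the alternating sequence {1}, {2}, {1}, {2}, ...:

```python
def lim_sup(seq, horizon):
    """Truncated lim sup: intersect, over n < horizon, the union of the tail seq[n:]."""
    tails = [frozenset().union(*seq[n:]) for n in range(horizon)]
    result = tails[0]
    for t in tails[1:]:
        result &= t
    return result

def lim_inf(seq, horizon):
    """Truncated lim inf: union, over n < horizon, of the intersection of the tail seq[n:]."""
    def intersect(sets):
        out = sets[0]
        for s in sets[1:]:
            out &= s
        return out
    return frozenset().union(*(intersect(seq[n:]) for n in range(horizon)))

# Alternating sequence {1}, {2}, {1}, {2}, ...: every element appears
# infinitely often (lim sup = {1, 2}) but none appears eventually always
# (lim inf = empty), so the limit does not exist.
seq = [frozenset({1}), frozenset({2})] * 4
```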

76.1.3 Sub σ-algebras


In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent
only part of all the possible information that can be observed. This partial information can be characterized with a
smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to
and determined only by the partial information. A simple example suffices to illustrate this idea.
Imagine you and another person are betting on a game that involves flipping a coin repeatedly and observing whether
it comes up Heads (H) or Tails (T). Since you and your opponent are each infinitely wealthy, there is no limit to how
long the game can last. This means the sample space Ω must consist of all possible infinite sequences of H or T:

Ω = {H, T}^∞ = {(x1, x2, x3, ...) : xi ∈ {H, T}, i ≥ 1}.

However, after n flips of the coin, you may want to determine or revise your betting strategy in advance of the next
flip. The observed information at that point can be described in terms of the 2^n possibilities for the first n flips.
Formally, since you need to use subsets of Ω, this is codified as the σ-algebra

Gn = {A × {H, T}^∞ : A ⊆ {H, T}^n}.

Observe that then

G1 ⊆ G2 ⊆ G3 ⊆ ⋯ ⊆ G∞,

where G∞ is the smallest σ-algebra containing all the others.
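This chain of σ-algebras can be modeled concretely for finitely many flips. In the sketch below (hypothetical names, not from the text), an event in Gn is identified with the set of length-n prefixes it allows, and embedding into Gm is done by appending all possible continuations:

```python
from itertools import combinations, product

def G(n):
    """Finite model of G_n: every subset of the 2**n length-n prefixes stands
    for the cylinder event 'the first n flips land in that subset'."""
    prefixes = list(product("HT", repeat=n))
    return {frozenset(c) for r in range(len(prefixes) + 1)
            for c in combinations(prefixes, r)}

def lift(event, n, m):
    """Embed an event over n flips into the model over m >= n flips by
    appending every possible continuation; this realizes G_n inside G_m."""
    return frozenset(p + ext for p in event for ext in product("HT", repeat=m - n))

g1, g2 = G(1), G(2)
# |G(1)| = 2**2 = 4 and |G(2)| = 2**4 = 16; every event observable after one
# flip is still observable after two, illustrating G_1 a subset of G_2.
```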

76.2 Denition and properties

76.2.1 Denition
Let X be some set, and let 2^X represent its power set. Then a subset Σ ⊆ 2^X is called a σ-algebra if it satisfies the
following three properties:[3]

1. X is in Σ, and X is considered to be the universal set in the following context.

2. Σ is closed under complementation: If A is in Σ, then so is its complement, X \ A.

3. Σ is closed under countable unions: If A1, A2, A3, ... are in Σ, then so is A = A1 ∪ A2 ∪ A3 ∪ ⋯.

From these properties, it follows that the σ-algebra is also closed under countable intersections (by applying De
Morgan's laws).
It also follows that the empty set ∅ is in Σ, since by (1) X is in Σ and (2) asserts that its complement, the empty
set, is also in Σ. Moreover, since {X, ∅} satisfies condition (3) as well, it follows that {X, ∅} is the smallest possible
σ-algebra on X. The largest possible σ-algebra on X is 2^X.
Elements of the σ-algebra are called measurable sets. An ordered pair (X, Σ), where X is a set and Σ is a σ-algebra
over X, is called a measurable space. A function between two measurable spaces is called a measurable function if
the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the
measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to [0, ∞].
A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem
(below).
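On a finite set the three defining properties can be checked mechanically, since countable unions reduce to pairwise unions. The sketch below (hypothetical helper name) verifies the example collection from the start of the chapter:

```python
def is_sigma_algebra(X, collection):
    """Check the three sigma-algebra properties on a finite set X, where
    closure under countable unions reduces to closure under pairwise unions."""
    X = frozenset(X)
    sigma = {frozenset(s) for s in collection}
    if X not in sigma:                                         # property 1
        return False
    if any(X - A not in sigma for A in sigma):                 # property 2: complements
        return False
    if any(A | B not in sigma for A in sigma for B in sigma):  # property 3: unions
        return False
    return True  # De Morgan then gives closure under intersections for free

X = {"a", "b", "c", "d"}
sigma = [set(), {"a", "b"}, {"c", "d"}, X]  # the example from the text
```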

76.2.2 Dynkin's π-λ theorem


This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties
of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following.

A π-system P is a collection of subsets of X that is closed under finitely many intersections, and
a Dynkin system (or λ-system) D is a collection of subsets of X that contains X and is closed under
complement and under countable unions of disjoint subsets.

Dynkin's π-λ theorem says, if P is a π-system and D is a Dynkin system that contains P, then the σ-algebra σ(P)
generated by P is contained in D. Since certain π-systems are relatively simple classes, it may not be hard to verify
that all sets in P enjoy the property under consideration while, on the other hand, showing that the collection D of all
subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ theorem then implies that
all sets in σ(P) enjoy the property, avoiding the task of checking it for an arbitrary set in σ(P).
One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or
integrals. For example, it is used to equate a probability for a random variable X with the Lebesgue–Stieltjes integral
typically associated with computing the probability:

P(X ∈ A) = ∫_A F(dx) for all A in the Borel σ-algebra on R,

where F(x) is the cumulative distribution function for X, defined on R, while P is a probability measure, defined on
a σ-algebra Σ of subsets of some sample space Ω.

76.2.3 Combining σ-algebras


Suppose {Σα : α ∈ A} is a collection of σ-algebras on a space X.

The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it often
is denoted by:

⋀_{α∈A} Σα.

Sketch of Proof: Let Σ* denote the intersection. Since X is in every Σα, Σ* is not empty. Closure under
complement and countable unions for every Σα implies the same must be true for Σ*. Therefore, Σ* is a
σ-algebra.

The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a
σ-algebra known as the join, which typically is denoted

⋁_{α∈A} Σα = σ(⋃_{α∈A} Σα).

A π-system that generates the join is

P = { A1 ∩ ⋯ ∩ An : Ai ∈ Σαi, αi ∈ A, n ≥ 1 }.

Sketch of Proof: By the case n = 1, it is seen that each Σα ⊆ P, so

⋃_{α∈A} Σα ⊆ P.

This implies

σ(⋃_{α∈A} Σα) ⊆ σ(P)

by the definition of a σ-algebra generated by a collection of subsets. On the other hand,

P ⊆ σ(⋃_{α∈A} Σα)

which, by Dynkin's π-λ theorem, implies

σ(P) ⊆ σ(⋃_{α∈A} Σα).

76.2.4 σ-algebras for subspaces


Suppose Y is a subset of X and let (X, Σ) be a measurable space.

The collection {Y ∩ B : B ∈ Σ} is a σ-algebra of subsets of Y.

Suppose (Y, Λ) is a measurable space. The collection {A ⊆ X : A ∩ Y ∈ Λ} is a σ-algebra of subsets of X.

76.2.5 Relation to σ-ring


A σ-algebra is just a σ-ring that contains the universal set X.[4] A σ-ring need not be a σ-algebra; for example,
measurable subsets of zero Lebesgue measure in the real line are a σ-ring, but not a σ-algebra, since the real line
has infinite measure and thus cannot be obtained by their countable union. If, instead of zero measure, one takes
measurable subsets of finite Lebesgue measure, those are a ring but not a σ-ring, since the real line can be obtained
by their countable union yet its measure is not finite.

76.2.6 Typographic note


σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. Thus (X, Σ) may be
denoted as (X, ℱ) or (X, 𝔉).

76.3 Particular cases and examples

76.3.1 Separable σ-algebras


A separable σ-algebra (or separable σ-field) is a σ-algebra ℱ that is a separable space when considered as a metric
space with metric ρ(A, B) = μ(A △ B) for A, B ∈ ℱ and a given measure μ (and with △ being the symmetric
difference operator).[5] Note that any σ-algebra generated by a countable collection of sets is separable, but the converse
need not hold. For example, the Lebesgue σ-algebra is separable (since every Lebesgue measurable set is equivalent
to some Borel set) but not countably generated (since its cardinality is higher than the continuum).
A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance
between two sets is defined as the measure of the symmetric difference of the two sets. Note that the symmetric
difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not be a true
metric. However, if sets whose symmetric difference has measure zero are identified into a single equivalence class,
the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can
be shown that the corresponding metric space is, too.

76.3.2 Simple set-based examples


Let X be any set.

The family consisting only of the empty set and the set X, called the minimal or trivial σ-algebra over X.
The power set of X, called the discrete σ-algebra.
The collection {∅, A, Aᶜ, X} is a simple σ-algebra generated by the subset A.
The collection of subsets of X which are countable or whose complements are countable is a σ-algebra (which
is distinct from the power set of X if and only if X is uncountable). This is the σ-algebra generated by the
singletons of X. Note: "countable" includes finite or empty.
The collection of all unions of sets in a countable partition of X is a σ-algebra.

76.3.3 Stopping time sigma-algebras


A stopping time τ can define a σ-algebra ℱτ, the so-called stopping time sigma-algebra, which in a filtered probability
space describes the information up to the random time τ in the sense that, if the filtered probability space is interpreted
as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often
repeating it until the time τ is ℱτ.[6]

76.4 σ-algebras generated by families of sets

76.4.1 σ-algebra generated by an arbitrary family


Let F be an arbitrary family of subsets of X. Then there exists a unique smallest σ-algebra which contains every set
in F (even though F may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing
F. (See intersections of σ-algebras above.) This σ-algebra is denoted σ(F) and is called the σ-algebra generated by
F.
If F is empty, then σ(F) = {X, ∅}. Otherwise σ(F) consists of all the subsets of X that can be made from elements of
F by a countable number of complement, union and intersection operations.

For a simple example, consider the set X = {1, 2, 3}. Then the σ-algebra generated by the single subset {1} is
σ({{1}}) = {∅, {1}, {2, 3}, {1, 2, 3}}. By an abuse of notation, when a collection of subsets contains only one
element, A, one may write σ(A) instead of σ({A}); in the prior example, σ({1}) instead of σ({{1}}). Indeed, using
σ(A1, A2, ...) to mean σ({A1, A2, ...}) is also quite common.
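On a finite set, the generated σ-algebra can be computed by iterating the closure operations to a fixed point. The sketch below (hypothetical helper name, complements and pairwise unions only, which suffices in the finite case) reproduces the example just given:

```python
def generated_sigma_algebra(X, F):
    """sigma(F) on a finite set X: start from F plus {X, {}} and iterate
    closure under complement and pairwise union until a fixed point."""
    X = frozenset(X)
    sigma = {frozenset(), X} | {frozenset(s) for s in F}
    while True:
        new = {X - A for A in sigma} | {A | B for A in sigma for B in sigma}
        if new <= sigma:
            return sigma
        sigma |= new

# The example from the text: sigma({1}) on X = {1, 2, 3}
result = generated_sigma_algebra({1, 2, 3}, [{1}])
```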
There are many families of subsets that generate useful σ-algebras. Some of these are presented here.

76.4.2 σ-algebra generated by a function


If f is a function from a set X to a set Y and B is a σ-algebra of subsets of Y, then the σ-algebra generated by
the function f, denoted by σ(f), is the collection of all inverse images f⁻¹(S) of the sets S in B, i.e.

σ(f) = {f⁻¹(S) | S ∈ B}.

A function f from a set X to a set Y is measurable with respect to a σ-algebra Σ of subsets of X if and only if σ(f) is
a subset of Σ.
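For finite sets this collection can be enumerated directly. The sketch below (hypothetical names, not from the text) computes σ(f) for the parity map on {1, 2, 3, 4}:

```python
def sigma_of_f(X, f, B):
    """sigma(f) = {f^{-1}(S) : S in B}, enumerated directly for finite sets."""
    return {frozenset(x for x in X if f(x) in S) for S in B}

X = {1, 2, 3, 4}
f = lambda x: x % 2                   # maps into Y = {0, 1}
B = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]  # power set of Y
sf = sigma_of_f(X, f, B)
# sigma(f) = { {}, {2, 4}, {1, 3}, {1, 2, 3, 4} }; f is measurable with respect
# to a sigma-algebra on X exactly when it contains these four sets.
```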
One common situation, and understood by default if B is not specified explicitly, is when Y is a metric or topological
space and B is the collection of Borel sets on Y.
If f is a function from X to Rⁿ then σ(f) is generated by the family of subsets which are inverse images of
intervals/rectangles in Rⁿ:

σ(f) = σ({f⁻¹((a1, b1] × ⋯ × (an, bn]) : ai, bi ∈ R}).

A useful property is the following. Assume f is a measurable map from (X, ΣX) to (S, ΣS) and g is a measurable
map from (X, ΣX) to (T, ΣT). If there exists a measurable map h from (T, ΣT) to (S, ΣS) such that f(x) = h(g(x))
for all x, then σ(f) ⊆ σ(g). If S is finite or countably infinite or, more generally, (S, ΣS) is a standard Borel space
(e.g., a separable complete metric space with its associated Borel sets), then the converse is also true.[7] Examples of
standard Borel spaces include Rⁿ with its Borel sets and R^∞ with the cylinder σ-algebra described below.

76.4.3 Borel and Lebesgue σ-algebras


An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or,
equivalently, by the closed sets). Note that this σ-algebra is not, in general, the whole power set. For a non-trivial
example that is not a Borel set, see the Vitali set or Non-Borel sets.
On the Euclidean space Rⁿ, another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra
contains more sets than the Borel σ-algebra on Rⁿ and is preferred in integration theory, as it gives a complete measure
space.

76.4.4 Product σ-algebra


Let (X1, Σ1) and (X2, Σ2) be two measurable spaces. The σ-algebra for the corresponding product space X1 × X2
is called the product σ-algebra and is defined by

Σ1 × Σ2 = σ({B1 × B2 : B1 ∈ Σ1, B2 ∈ Σ2}).

Observe that {B1 × B2 : B1 ∈ Σ1, B2 ∈ Σ2} is a π-system.

The Borel σ-algebra for Rⁿ is generated by half-infinite rectangles and by finite rectangles. For example,

B(Rⁿ) = σ({(−∞, b1] × ⋯ × (−∞, bn] : bi ∈ R}) = σ({(a1, b1] × ⋯ × (an, bn] : ai, bi ∈ R}).

For each of these two examples, the generating family is a π-system.



76.4.5 σ-algebra generated by cylinder sets


Suppose

X ⊆ R^T = {f : f(t) ∈ R, t ∈ T}

is a set of real-valued functions. Let B(R) denote the Borel subsets of R. A cylinder subset of X is a finitely restricted
set defined as

C_{t1,...,tn}(B1, ..., Bn) = {f ∈ X : f(ti) ∈ Bi, 1 ≤ i ≤ n}.

Each

{C_{t1,...,tn}(B1, ..., Bn) : Bi ∈ B(R), 1 ≤ i ≤ n}

is a π-system that generates a σ-algebra Σ_{t1,...,tn}. Then the family of subsets

ℱX = ⋃_{n=1}^{∞} ⋃_{ti ∈ T, i ≤ n} Σ_{t1,...,tn}

is an algebra that generates the cylinder σ-algebra for X. This σ-algebra is a subalgebra of the Borel σ-algebra
determined by the product topology of R^T restricted to X.
An important special case is when T is the set of natural numbers and X is a set of real-valued sequences. In this
case, it suffices to consider the cylinder sets

Cn(B1, ..., Bn) = (B1 × ⋯ × Bn × R^∞) ∩ X = {(x1, x2, ..., xn, xn+1, ...) ∈ X : xi ∈ Bi, 1 ≤ i ≤ n},

for which

Σn = σ({Cn(B1, ..., Bn) : Bi ∈ B(R), 1 ≤ i ≤ n})

is a non-decreasing sequence of σ-algebras.

76.4.6 σ-algebra generated by random variable or vector


Suppose (Ω, Σ, P) is a probability space. If Y : Ω → Rⁿ is measurable with respect to the Borel σ-algebra on Rⁿ
then Y is called a random variable (n = 1) or random vector (n > 1). The σ-algebra generated by Y is

σ(Y) = {Y⁻¹(A) : A ∈ B(Rⁿ)}.

76.4.7 σ-algebra generated by a stochastic process


Suppose (Ω, Σ, P) is a probability space and R^T is the set of real-valued functions on T. If Y : Ω → X ⊆ R^T is
measurable with respect to the cylinder σ-algebra σ(ℱX) (see above) for X then Y is called a stochastic process or
random process. The σ-algebra generated by Y is

σ(Y) = {Y⁻¹(A) : A ∈ σ(ℱX)} = σ({Y⁻¹(A) : A ∈ ℱX}),

the σ-algebra generated by the inverse images of cylinder sets.



76.5 See also


Join (sigma algebra)

Measurable function
Sample space

Sigma ring

Sigma additivity

76.6 References
[1] "Probability, Mathematical Statistics, Stochastic Processes". Random. University of Alabama in Huntsville, Department
of Mathematical Sciences. Retrieved 30 March 2016.

[2] Billingsley, Patrick (2012). Probability and Measure (Anniversary ed.). Wiley. ISBN 978-1-118-12237-2.

[3] Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.

[4] Vestrup, Eric M. (2009). The Theory of Measures and Integration. John Wiley & Sons. p. 12. ISBN 978-0-470-31795-2.

[5] Džamonja, Mirna; Kunen, Kenneth (1995). "Properties of the class of measure separable compact spaces" (PDF). Fundamenta
Mathematicae: 262. "If μ is a Borel measure on X, the measure algebra of (X, μ) is the Boolean algebra of all Borel sets
modulo μ-null sets. If μ is finite, then such a measure algebra is also a metric space, with the distance between the two
sets being the measure of their symmetric difference. Then, we say that μ is separable iff this metric space is separable as
a topological space."

[6] Fischer, Tom (2013). "On simple representations of stopping times and stopping time sigma-algebras". Statistics and
Probability Letters. 83 (1): 345–349. doi:10.1016/j.spl.2012.09.024.

[7] Kallenberg, Olav (2001). Foundations of Modern Probability (2nd ed.). Springer. p. 7. ISBN 0-387-95313-2.

76.7 External links


Hazewinkel, Michiel, ed. (2001) [1994], "Algebra of sets", Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
"Sigma Algebra" from PlanetMath.
Chapter 77

Stone functor

In mathematics, the Stone functor is a functor S: Topᵒᵖ → Bool, where Top is the category of topological spaces
and Bool is the category of Boolean algebras and Boolean homomorphisms. It assigns to each topological space X
the Boolean algebra S(X) of its clopen subsets, and to each morphism fᵒᵖ: X → Y in Topᵒᵖ (i.e., a continuous map
f: Y → X) the homomorphism S(f): S(X) → S(Y) given by S(f)(Z) = f⁻¹[Z].

77.1 See also


Stone's representation theorem for Boolean algebras
Pointless topology

77.2 References
Abstract and Concrete Categories: The Joy of Cats. Jiří Adámek, Horst Herrlich, George E. Strecker.

Peter T. Johnstone, Stone Spaces. (1982) Cambridge University Press. ISBN 0-521-23893-5

Chapter 78

Stone space

In topology, and related areas of mathematics, a Stone space is a non-empty compact totally disconnected Hausdorff
space.[1] Such spaces are also called profinite spaces.[2] They are named after Marshall Harvey Stone.
A form of Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to the
Boolean algebra of clopen sets of a Stone space. This isomorphism forms a category-theoretic duality between the
categories of Boolean algebras and Stone spaces.

78.1 References
[1] Hazewinkel, Michiel, ed. (2001) [1994], "Stone space", Encyclopedia of Mathematics, Springer Science+Business Media
B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

[2] "Stone space" in nLab

Chapter 79

Stone's representation theorem for
Boolean algebras

In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic
to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged
in the first half of the 20th century. The theorem was first proved by Marshall H. Stone (1936). Stone was led to it
by his study of the spectral theory of operators on a Hilbert space.

79.1 Stone spaces


Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in
S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The
topology on S(B) is generated by a (closed) basis consisting of all sets of the form

{x ∈ S(B) | b ∈ x},

where b is an element of B. This is the topology of pointwise convergence of nets of homomorphisms into the two-
element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone
spaces (also profinite spaces). Conversely, given any topological space X, the collection of subsets of X that are clopen
(both closed and open) is a Boolean algebra.

79.2 Representation theorem


A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra
of clopen subsets of its Stone space S(B). The isomorphism sends an element b ∈ B to the set of all ultrafilters that
contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
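For a finite Boolean algebra, the topology is discrete and the construction can be computed outright. The sketch below (hypothetical helper names, not from the text) enumerates the ultrafilters on the power set of a two-element set; in the finite case they are exactly the principal filters at the atoms, so the Stone space here has two points:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Finite Boolean algebra B: the power set of X = {1, 2}
X = frozenset({1, 2})
B = powerset(X)

def is_ultrafilter(U):
    """On a finite Boolean algebra: excludes the bottom element, is closed
    under meet, and contains exactly one of b and its complement for every b."""
    U = set(U)
    if frozenset() in U:
        return False
    if any(a & b not in U for a in U for b in U):
        return False
    return all((b in U) != (X - b in U) for b in B)

ultrafilters = [U for U in powerset(B) if is_ultrafilter(U)]
# Two ultrafilters, the principal filters at the atoms {1} and {2}; the map
# sending b to the set of ultrafilters containing b is the isomorphism of the
# theorem, here onto the power set of a two-point discrete space.
```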
Restating the theorem using the language of category theory, the theorem states that there is a duality between the
category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the corre-
spondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a
Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is
a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial
duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces
and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the
Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.


An extension of the classical Stone duality to the category of Boolean spaces (= zero-dimensional locally compact
Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by
H. P. Doctor) (see the references below).

79.3 See also


Field of sets

List of Boolean algebra topics


Stonean space

Stone functor
Pronite group

Representation theorem

79.4 References
Halmos, Paul, and Givant, Steven (1998) Logic as Algebra. Dolciani Mathematical Expositions No. 21. The
Mathematical Association of America.
Johnstone, Peter T. (1982) Stone Spaces. Cambridge University Press. ISBN 0-521-23893-5.

Stone, Marshall H. (1936) "The Theory of Representations of Boolean Algebras," Transactions of the American
Mathematical Society 40: 37–111.

Dimov, G. D. (2012) "Some generalizations of the Stone Duality Theorem." Publ. Math. Debrecen 80: 255–293.
Doctor, H. P. (1964) "The categories of Boolean lattices, Boolean rings and Boolean spaces." Canad. Math.
Bulletin 7: 245–252.

A monograph available free online:

Burris, Stanley N., and Sankappanavar, H. P. (1981) A Course in Universal Algebra. Springer-Verlag.
ISBN 3-540-90578-2.
Chapter 80

Suslin algebra

In mathematics, a Suslin algebra is a Boolean algebra that is complete, atomless, countably distributive, and satisfies
the countable chain condition. They are named after Mikhail Yakovlevich Suslin.
The existence of Suslin algebras is independent of the axioms of ZFC, and is equivalent to the existence of Suslin
trees or Suslin lines.

80.1 References
Jech, Thomas (2003). Set theory (third millennium (revised and expanded) ed.). Springer-Verlag. ISBN 3-
540-44085-2. OCLC 174929965. Zbl 1007.03002.

Chapter 81

Symmetric Boolean function

In mathematics, a symmetric Boolean function is a Boolean function whose value does not depend on the permutation
of its input bits, i.e., it depends only on the number of ones in the input.[1]
It follows from the definition that there are 2^(n+1) symmetric n-ary Boolean functions. This implies that instead of the truth
table, traditionally used to represent Boolean functions, one may use a more compact representation for an n-variable
symmetric Boolean function: the (n + 1)-vector, whose i-th entry (i = 0, ..., n) is the value of the function on an input
vector with i ones.
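The compact vector representation described above can be sketched directly (hypothetical helper name, not from the source):

```python
def make_symmetric(value_vector):
    """Build a symmetric n-ary Boolean function from its (n + 1)-vector:
    entry i is the function's value on inputs with exactly i ones."""
    n = len(value_vector) - 1
    def f(*bits):
        assert len(bits) == n
        return value_vector[sum(bits)]
    return f

# 3-ary majority: value 1 iff at least two of the three inputs are 1,
# so its vector is (0, 0, 1, 1)
maj3 = make_symmetric([0, 0, 1, 1])
```

Because the function only looks at the sum of the bits, symmetry under permutation of inputs holds by construction.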

81.1 Special cases


A number of special cases are recognized.[1]

Threshold functions: their value is 1 on input vectors with k or more ones, for a fixed k.
Exact-value functions: their value is 1 on input vectors with exactly k ones, for a fixed k.
Counting functions: their value is 1 on input vectors with the number of ones congruent to k mod m, for fixed
k, m.
Parity functions: their value is 1 if the input vector has an odd number of ones.
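Each special case is easy to state as an (n + 1)-value vector in the representation from the previous section (hypothetical helper names, not from the source):

```python
def threshold(n, k):
    """Vector of the n-ary threshold function: 1 on k or more ones."""
    return [1 if i >= k else 0 for i in range(n + 1)]

def exact_value(n, k):
    """Vector of the exact-value function: 1 on exactly k ones."""
    return [1 if i == k else 0 for i in range(n + 1)]

def counting(n, k, m):
    """Vector of the counting function: 1 when the number of ones is k mod m."""
    return [1 if i % m == k % m else 0 for i in range(n + 1)]

def parity(n):
    """Vector of the parity function: 1 on an odd number of ones."""
    return [i % 2 for i in range(n + 1)]
```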

81.2 References
[1] Ingo Wegener, "The Complexity of Symmetric Boolean Functions", in: Computation Theory and Logic, Lecture Notes in
Computer Science, vol. 270, 1987, pp. 433–442

81.3 See also


Majority function

Chapter 82

True quantified Boolean formula

In computational complexity theory, the language TQBF is a formal language consisting of the true quantified
Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every
variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence.
Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to
true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT).

82.1 Overview
In computational complexity theory, the quantified Boolean formula problem (QBF) is a generalization of the
Boolean satisfiability problem in which both existential quantifiers and universal quantifiers can be applied to each
variable. Put another way, it asks whether a quantified sentential form over a set of Boolean variables is true or false.
For example, the following is an instance of QBF:

∀x ∃y ∃z ((x ∨ z) ∧ y)

QBF is the canonical complete problem for PSPACE, the class of problems solvable by a deterministic or nonde-
terministic Turing machine in polynomial space and unlimited time.[1] Given the formula in the form of an abstract
syntax tree, the problem can be solved easily by a set of mutually recursive procedures which evaluate the formula.
Such an algorithm uses space proportional to the height of the tree, which is linear in the worst case, but uses time
exponential in the number of quantifiers.
Provided that MA ⊊ PSPACE, which is widely believed, QBF cannot be solved, nor can a given solution even be
verified, in either deterministic or probabilistic polynomial time (in fact, unlike the satisfiability problem, there's no
known way to specify a solution succinctly). It is trivial to solve using an alternating Turing machine in linear time,
which is no surprise since in fact AP = PSPACE, where AP is the class of problems alternating machines can solve
in polynomial time.[2]
When the seminal result IP = PSPACE was shown (see interactive proof system), it was done by exhibiting an
interactive proof system that could solve QBF by solving a particular arithmetization of the problem.[3]
QBF formulas have a number of useful canonical forms. For example, it can be shown that there is a polynomial-
time many-one reduction that will move all quantifiers to the front of the formula and make them alternate between
universal and existential quantifiers. There is another reduction that proved useful in the IP = PSPACE proof where
no more than one universal quantifier is placed between each variable's use and the quantifier binding that variable.
This was critical in limiting the number of products in certain subexpressions of the arithmetization.

82.2 Prenex normal form


A fully quantified Boolean formula can be assumed to have a very specific form, called prenex normal form. It has two
basic parts: a portion containing only quantifiers and a portion containing an unquantified Boolean formula usually
denoted as φ. If there are n Boolean variables, the entire formula can be written as

∃x1 ∀x2 ∃x3 ⋯ Qn xn φ(x1, x2, x3, ..., xn)

where every variable falls within the scope of some quantifier. By introducing dummy variables, any formula in
prenex normal form can be converted into a sentence where existential and universal quantifiers alternate. Using the
dummy variable y1,

∃x1 ∃x2 φ(x1, x2) ↦ ∃x1 ∀y1 ∃x2 φ(x1, x2)

The second sentence has the same truth value but follows the restricted syntax. Assuming fully quantified Boolean
formulas to be in prenex normal form is a frequent feature of proofs.

82.3 Solving
There is a simple recursive algorithm for determining whether a QBF is in TQBF (i.e. is true). Given some QBF

Q1 x1 Q2 x2 ⋯ Qn xn φ(x1, x2, ..., xn).

If the formula contains no quantifiers, we can just return the formula. Otherwise, we take off the first quantifier and
check both possible values for the first variable:

A = Q2 x2 ⋯ Qn xn φ(0, x2, ..., xn),

B = Q2 x2 ⋯ Qn xn φ(1, x2, ..., xn).

If Q1 = ∃, then return A ∨ B. If Q1 = ∀, then return A ∧ B.
How fast does this algorithm run? For every quantifier in the initial QBF, the algorithm makes two recursive calls on
only a linearly smaller subproblem. This gives the algorithm an exponential runtime O(2ⁿ).
How much space does this algorithm use? Within each invocation of the algorithm, it needs to store the intermediate
results of computing A and B. Every recursive call takes off one quantifier, so the total recursive depth is linear in the
number of quantifiers. Formulas that lack quantifiers can be evaluated in space logarithmic in the number of variables.
The initial QBF was fully quantified, so there are at least as many quantifiers as variables. Thus, this algorithm uses
O(n + log n) = O(n) space. This makes the TQBF language part of the PSPACE complexity class.
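The recursive procedure just described can be written out directly. In the sketch below (hypothetical names, assuming the quantifier-free matrix φ is represented as a Python predicate rather than a syntax tree), "E" stands for ∃ and "A" for ∀:

```python
def eval_qbf(quantifiers, phi, assignment=()):
    """Recursive TQBF decision from the text: strip the first quantifier, try
    both truth values for its variable, and combine the results with 'or'
    (existential) or 'and' (universal)."""
    if not quantifiers:
        return phi(*assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    a = eval_qbf(rest, phi, assignment + (False,))   # first remaining variable = 0
    b = eval_qbf(rest, phi, assignment + (True,))    # first remaining variable = 1
    return (a or b) if q == "E" else (a and b)

# The instance from the overview: forall x exists y exists z. (x or z) and y
result = eval_qbf(["A", "E", "E"], lambda x, y, z: (x or z) and y)
```

As the text notes, the recursion depth (and hence the space) is linear in the number of quantifiers, while the running time is exponential.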

82.4 PSPACE-completeness
The TQBF language serves in complexity theory as the canonical PSPACE-complete problem. Being PSPACE-
complete means that a language is in PSPACE and that the language is also PSPACE-hard. The algorithm above
shows that TQBF is in PSPACE. Showing that TQBF is PSPACE-hard requires showing that any language in the
complexity class PSPACE can be reduced to TQBF in polynomial time. I.e.,

∀L ∈ PSPACE, L ≤p TQBF.

This means that, for a PSPACE language L, whether an input x is in L can be decided by checking whether f(x) is in
TQBF, for some function f that is required to run in polynomial time (relative to the length of the input). Symbolically,

x ∈ L ⟺ f(x) ∈ TQBF.

Proving that TQBF is PSPACE-hard requires specification of f.



So, suppose that L is a PSPACE language. This means that L can be decided by a polynomial space deterministic
Turing machine (DTM). This is very important for the reduction of L to TQBF, because the configurations of any
such Turing machine can be represented as Boolean formulas, with Boolean variables representing the state of the
machine as well as the contents of each cell on the Turing machine tape, with the position of the Turing machine head
encoded in the formula by the formula's ordering. In particular, our reduction will use the variables c1 and c2, which
represent two possible configurations of the DTM for L, and a natural number t, in constructing a QBF φ_{c1,c2,t} which
is true if and only if the DTM for L can go from the configuration encoded in c1 to the configuration encoded in c2 in
no more than t steps. The function f, then, will construct from the DTM for L a QBF φ_{c_start,c_accept,T}, where c_start
is the DTM's starting configuration, c_accept is the DTM's accepting configuration, and T is the maximum number of
steps the DTM could need to move from one configuration to the other. We know that T = O(exp(n)), where n is the
length of the input, because this bounds the total number of possible configurations of the relevant DTM. Of course,
it cannot take the DTM more steps than there are possible configurations to reach c_accept unless it enters a loop, in
which case it will never reach c_accept anyway.
At this stage of the proof, we have already reduced the question of whether an input formula w (encoded, of course,
in c_start) is in L to the question of whether the QBF φ_{c_start,c_accept,T}, i.e., f(w), is in TQBF. The remainder of
this proof proves that f can be computed in polynomial time.
For t = 1, computation of φ_{c1,c2,t} is straightforward: either one of the configurations changes to the other in one
step or it does not. Since the Turing machine that our formula represents is deterministic, this presents no problem.
For t > 1, computation of φ_{c1,c2,t} involves a recursive evaluation, looking for a so-called "middle point" m1. In
this case, we rewrite the formula as follows:

φ_{c1,c2,t} = ∃m1 (φ_{c1,m1,t/2} ∧ φ_{m1,c2,t/2}).

This converts the question of whether c1 can reach c2 in t steps to the question of whether c1 reaches a middle point
m1 in t/2 steps, which itself reaches c2 in t/2 steps. The answer to the latter question of course gives the answer to
the former.
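The same divide-and-conquer idea can be shown directly on configuration reachability; this is the Savitch-style recursion that the formula encodes. A sketch, assuming an explicit finite set of configurations and a deterministic one-step function (both names are illustrative):

```python
def reaches(step, configs, c1, c2, t):
    """Can configuration c1 reach c2 in at most t steps of the deterministic
    machine whose one-step transition function is `step`?"""
    if t <= 1:
        # Base case: stay put or take a single step.
        return c1 == c2 or step(c1) == c2
    # Guess a midpoint m and check both halves recursively.
    return any(reaches(step, configs, c1, m, t // 2) and
               reaches(step, configs, m, c2, t - t // 2)
               for m in configs)

# Toy machine: configurations 0..7, each step moves one position right.
step = lambda c: min(c + 1, 7)
print(reaches(step, range(8), 0, 4, 4))  # True: 4 steps suffice
print(reaches(step, range(8), 0, 4, 2))  # False: 2 steps do not
```

Only O(log t) recursive frames are live at any moment; the QBF construction achieves the same reuse syntactically with the universally quantified pair instead of two separate recursive calls.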
Now, t is only bounded by T, which is exponential (and so not polynomial) in the length of the input. Additionally,
each recursive layer virtually doubles the length of the formula. (The variable m1 is only one midpoint; for greater t,
there are more "stops" along the way, so to speak.) So the time required to recursively evaluate φ_{c1,c2,t} in this manner
could be exponential as well, simply because the formula could become exponentially large. This problem is solved
by universally quantifying using variables c3 and c4 over the configuration pairs (e.g., {(c1, m1), (m1, c2)}), which
prevents the length of the formula from expanding due to recursive layers. This yields the following interpretation of
φ_{c1,c2,t}:

φ_{c1,c2,t} = ∃m1 ∀(c3, c4) ∈ {(c1, m1), (m1, c2)} (φ_{c3,c4,t/2}).

This version of the formula can indeed be computed in polynomial time, since any one instance of it can be computed
in polynomial time. The universally quantified ordered pair simply tells us that whichever choice of (c3, c4) is made,
φ_{c1,c2,t} ⟹ φ_{c3,c4,t/2}.
Thus, ∀L ∈ PSPACE, L ≤p TQBF, so TQBF is PSPACE-hard. Together with the above result that TQBF is in
PSPACE, this completes the proof that TQBF is a PSPACE-complete language.
(This proof follows Sipser 2006, pp. 310–313 in all essentials. Papadimitriou 1994 also includes a proof.)

82.5 Miscellany
One important subproblem in TQBF is the Boolean satisfiability problem. In this problem, you wish to know
whether a given Boolean formula can be made true with some assignment of variables. This is equivalent to
the TQBF using only existential quantifiers:

∃x1 ⋯ ∃xn φ(x1, . . . , xn)

This is also an example of the larger result NP ⊆ PSPACE, which follows directly from the observation
that a polynomial time verifier for a proof of a language accepted by an NTM (non-deterministic Turing
machine) requires polynomial space to store the proof.

Any class in the polynomial hierarchy (PH) has TQBF as a hard problem. In other words, for the class com-
prising all languages L for which there exists a poly-time TM V, a verifier, such that for all input x and some
constant i,

x ∈ L ⟺ ∃y1 ∀y2 ⋯ Qi yi V(x, y1, y2, . . . , yi) = 1

which has a specific QBF formulation that is given as

∃x⃗1 ∀x⃗2 ⋯ Qi x⃗i φ(x⃗1, x⃗2, . . . , x⃗i) = 1,

where the x⃗i are vectors of Boolean variables.

It is important to note that while TQBF the language is defined as the collection of true quantified Boolean
formulas, the abbreviation TQBF is often used (even in this article) to stand for a totally quantified Boolean
formula, often simply called a QBF ("quantified Boolean formula", understood as "fully" or "totally" quanti-
fied). It is important to distinguish contextually between the two uses of the abbreviation TQBF in reading the
literature.

A TQBF can be thought of as a game played between two players, with alternating moves. Existentially quan-
tified variables are equivalent to the notion that a move is available to a player at a turn. Universally quantified
variables mean that the outcome of the game does not depend on what move a player makes at that turn. Also, a
TQBF whose first quantifier is existential corresponds to a formula game in which the first player has a winning
strategy.

A TQBF for which the quantified formula is in 2-CNF may be solved in linear time, by an algorithm involving
strong connectivity analysis of its implication graph. The 2-satisfiability problem is a special case of TQBF for
these formulas, in which every quantifier is existential.[4][5]

There is a systematic treatment of restricted versions of quantified Boolean formulas (giving Schaefer-type
classifications) provided in an expository paper by Hubie Chen.[6]

82.6 Notes and references


[1] M. Garey & D. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman,
San Francisco, California. ISBN 0-7167-1045-5.

[2] A. Chandra, D. Kozen, and L. Stockmeyer (1981). "Alternation". Journal of the ACM. 28 (1): 114–133. doi:10.1145/322234.322243.

[3] Adi Shamir (1992). "IP = PSPACE". Journal of the ACM. 39 (4): 869–877. doi:10.1145/146585.146609.

[4] Krom, Melven R. (1967). "The Decision Problem for a Class of First-Order Formulas in Which all Disjunctions are Binary".
Zeitschrift für Mathematische Logik und Grundlagen der Mathematik. 13: 15–20. doi:10.1002/malq.19670130104.

[5] Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979). "A linear-time algorithm for testing the truth of certain
quantified boolean formulas" (PDF). Information Processing Letters. 8 (3): 121–123. doi:10.1016/0020-0190(79)90002-4.

[6] Chen, Hubie (December 2009). "A Rendezvous of Logic, Complexity, and Algebra". ACM Computing Surveys. ACM. 42
(1): 1. doi:10.1145/1592451.1592453.

Fortnow & Homer (2003) provides some historical background for PSPACE and TQBF.

Zhang (2003) provides some historical background of Boolean formulas.



Arora, Sanjeev. (2001). COS 522: Computational Complexity. Lecture Notes, Princeton University. Retrieved
October 10, 2005.
Fortnow, Lance & Steve Homer. (2003, June). A short history of computational complexity. The Computational
Complexity Column, 80. Retrieved October 9, 2005.
Papadimitriou, C. H. (1994). Computational Complexity. Reading: Addison-Wesley.

Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology.
Zhang, Lintao. (2003). Searching for truth: Techniques for satisability of boolean formulas. Retrieved
October 10, 2005.

82.7 See also


Cook–Levin theorem, stating that SAT is NP-complete
Generalized geography

82.8 External links


The Quantied Boolean Formulas Library (QBFLIB)
DepQBF - a search-based solver for quantified Boolean formulas

International Workshop on Quantied Boolean Formulas


Chapter 83

Truth table

A truth table is a mathematical table used in logic, specifically in connection with Boolean algebra, boolean func-
tions, and propositional calculus, which sets out the functional values of logical expressions on each of their func-
tional arguments, that is, for each combination of values taken by their logical variables (Enderton, 2001). In partic-
ular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that
is, logically valid.
A truth table has one column for each input variable (for example, P and Q), and one final column showing all of
the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth
table contains one possible configuration of the input variables (for instance, P=true Q=false), and the result of the
operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited
with inventing the truth table in his Tractatus Logico-Philosophicus,[1] though it appeared at least a year earlier in a
paper on propositional logic by Emil Leon Post.[2]

83.1 Unary operations


There are 4 unary operations:

Always true

Never true, unary falsum

Unary Identity

Unary negation

83.1.1 Logical true

The output value is always true, regardless of the input value of p.

83.1.2 Logical false

The output value is never true: that is, always false, regardless of the input value of p.

83.1.3 Logical identity

Logical identity is an operation on one logical value p, for which the output value remains p.
The truth table for the logical identity operator is as follows:


83.1.4 Logical negation


Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of
true if its operand is false and a value of false if its operand is true.
The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:

83.2 Binary operations


There are 16 possible truth functions of two binary variables:

83.2.1 Truth table for all binary logical operators


Here is an extended truth table giving definitions of all possible truth functions of two Boolean variables P and Q:[note 1]
where

T = true.
F = false.
The Com row indicates whether an operator, op, is commutative: P op Q = Q op P.
The L id row shows the operator's left identities, if it has any: values I such that I op Q = Q.
The R id row shows the operator's right identities, if it has any: values I such that P op I = P.[note 2]

The four combinations of input values for p, q are read by row from the table above. The output function for each p,
q combination can be read, by row, from the table.
Key:
The following table is oriented by column, rather than by row. There are four columns rather than four rows, to
display the four combinations of p, q, as input.
p: T T F F
q: T F T F
There are 16 rows in this key, one row for each binary function of the two binary variables, p, q. For example, in
row 2 of this key, the value of converse nonimplication ('↚') is solely T, for the column denoted by the unique
combination p=F, q=T; while in row 2, the value of that '↚' operation is F for the three remaining columns of p, q.
The output row for ↚ is thus
2: F F T F
and the 16-row[3] key is
Logical operators can also be visualized using Venn diagrams.
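The keyed rows can be reproduced mechanically: with the column order p,q = TT, TF, FT, FF, each of the 16 binary truth functions is just a 4-bit output pattern. A sketch; the function names and the bit-pattern numbering convention here are illustrative assumptions:

```python
# Column order used by the key above: p,q = TT, TF, FT, FF.
columns = [(True, True), (True, False), (False, True), (False, False)]

def output_row(op):
    """Return the T/F output pattern of a binary connective, column by column."""
    return ["T" if op(p, q) else "F" for p, q in columns]

# Converse nonimplication: true only when p is false and q is true.
converse_nonimplication = lambda p, q: (not p) and q
print(output_row(converse_nonimplication))  # ['F', 'F', 'T', 'F']

# All 16 binary truth functions, one per 4-bit output pattern.
all_rows = {bits: [("T" if (bits >> i) & 1 else "F") for i in range(4)]
            for bits in range(16)}
print(len(all_rows))  # 16 distinct functions
```

The printed pattern matches the "F F T F" row quoted above for converse nonimplication.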

83.2.2 Logical conjunction (AND)


Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if both of its operands are true.
The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p · q) is as follows:
In ordinary language terms, if both p and q are true, then the conjunction p q is true. For all other assignments of
logical values to p and to q the conjunction p q is false.
It can also be said that if p, then p q is q, otherwise p q is p.

83.2.3 Logical disjunction (OR)


Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if at least one of its operands is true.

The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:


Stated in English, if p, then p ∨ q is p, otherwise p ∨ q is q.

83.2.4 Logical implication

Logical implication and the material conditional are both associated with an operation on two logical values, typically
the values of two propositions, which produces a value of false if the first operand is true and the second operand is
false, and a value of true otherwise.
The truth table associated with the logical implication p implies q (symbolized as p ⇒ q, or more rarely Cpq) is as
follows:
The truth table associated with the material conditional if p then q (symbolized as p → q) is as follows:
It may also be useful to note that p ⇒ q and p → q are equivalent to ¬p ∨ q.

83.2.5 Logical equality

Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if both operands are false or both operands are true.
The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:
So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth
values.

83.2.6 Exclusive disjunction

Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if one but not both of its operands is true.
The truth table for p XOR q (also written as p ⊕ q, Jpq, p ⊻ q, or p ≢ q) is as follows:
For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).

83.2.7 Logical NAND

The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a
value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands
is false.
The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:
It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or
composed from other operations. Many such compositions are possible, depending on the operations that are taken
as basic or primitive and the operations that are taken as composite or derivative.
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction: ¬(p ∧ q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:

83.2.8 Logical NOR

The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value
of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is
true. ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a sole sufficient operator.
The truth table for p NOR q (also written as p ↓ q, or Xpq) is as follows:
The negation of a disjunction ¬(p ∨ q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:

Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional
arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q)
as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted
for each other in all contexts that pertain solely to their logical values.
This equivalence is one of De Morgan's laws.

83.3 Applications
Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that p ⇒ q is logically equivalent to ¬p ∨ q.

83.3.1 Truth table for most commonly used logical operators


Here is a truth table that gives definitions of the 6 most commonly used out of the 16 possible truth functions of two
Boolean variables P and Q:
where

T = true
F = false
∧ = AND (logical conjunction)
∨ = OR (logical disjunction)
⊕ = XOR (exclusive or)
≡ = XNOR (exclusive nor)
→ = conditional "if-then"
← = conditional "then-if"
↔ = biconditional "if-and-only-if".

83.3.2 Condensed truth tables for binary operators


For binary operators, a condensed form of truth table is also used, where the row headings and the column headings
specify the operands and the table cells specify the result. For example, Boolean logic uses this condensed truth table
notation:
This notation is useful especially if the operations are commutative, although one can additionally specify that the
rows are the first operand and the columns are the second operand. This condensed notation is particularly useful
in discussing multi-valued extensions of logic, as it significantly cuts down on combinatoric explosion of the number
of rows otherwise needed. It also provides for quickly recognizable characteristic "shape" of the distribution of the
values in the table which can assist the reader in grasping the rules more quickly.

83.3.3 Truth tables in digital logic


Truth tables are also used to specify the function of hardware look-up tables (LUTs) in digital logic circuitry. For
an n-input LUT, the truth table will have 2^n values (or rows in the above tabular format), completely specifying
a boolean function for the LUT. By representing each boolean value as a bit in a binary number, truth table values
can be efficiently encoded as integer values in electronic design automation (EDA) software. For example, a 32-bit
integer can encode the truth table for a LUT with up to 5 inputs.
When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a bit
index k based on the input values of the LUT, in which case the LUT's output value is the kth bit of the integer. For
example, to evaluate the output value of a LUT given an array of n boolean input values, the bit index of the truth table's
output value can be computed as follows: if the ith input is true, let V_i = 1, else let V_i = 0. Then the kth bit of the

binary representation of the truth table is the LUT's output value, where k = V_0·2^0 + V_1·2^1 + V_2·2^2 + ⋯ + V_n·2^n.
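The bit-index computation can be written out directly. A sketch, assuming the LSB-first input ordering of the formula above; the 3-input majority table 0xE8 is a standard worked example, not taken from the article:

```python
def lut_output(table, inputs):
    """Evaluate an n-input LUT whose truth table is encoded as the integer
    `table`: input i contributes 2**i to the bit index k."""
    k = sum((1 << i) for i, v in enumerate(inputs) if v)
    return (table >> k) & 1

# 0xE8 = 0b11101000 encodes majority-of-3: bits 3, 5, 6, 7 are set,
# i.e. exactly the indices whose binary expansion has two or more 1s.
MAJ3 = 0xE8
print(lut_output(MAJ3, [True, True, False]))   # 1: two inputs true
print(lut_output(MAJ3, [True, False, False]))  # 0: only one input true
```

This is exactly how a 32-bit integer, as mentioned above, can serve as the configuration word for a 5-input LUT.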
Truth tables are a simple and straightforward way to encode boolean functions; however, given the exponential growth
in size as the number of inputs increases, they are not suitable for functions with a large number of inputs. Other
representations which are more memory efficient are text equations and binary decision diagrams.

83.3.4 Applications of truth tables in digital electronics


In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be
used to reduce basic boolean operations to simple correlations of inputs to outputs, without the use of logic gates or
code. For example, a binary addition can be represented with the truth table:

A B | C R
1 1 | 1 0
1 0 | 0 1
0 1 | 0 1
0 0 | 0 0

where A = First Operand, B = Second Operand, C = Carry, R = Result
This truth table is read left to right:

Value pair (A,B) equals value pair (C,R).

Or for this example, A plus B equal result R, with the Carry C.

Note that this table does not describe the logic operations necessary to implement this operation, rather it simply
specifies the function of inputs to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically
equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types
of values one can have on the inputs increases, the size of the truth table will increase.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or
one. The number of combinations of these two values is 2², or four. So the result is four possible outputs of C and
R. If one were to use base 3, the size would increase to 3², or nine possible outputs.
The first addition example above is called a half-adder. A full-adder is when the carry from the previous operation
is provided as input to the next adder. Thus, a truth table of eight rows would be needed to describe a full adder's
logic:
A B C* | C R
0 0 0  | 0 0
0 1 0  | 0 1
1 0 0  | 0 1
1 1 0  | 1 0
0 0 1  | 0 1
0 1 1  | 1 0
1 0 1  | 1 0
1 1 1  | 1 1

Same as previous, but C* = Carry from previous adder
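Both tables can be checked against the usual gate-level formulas: the result is the XOR of the inputs and the carry is their majority. These are the standard definitions, used here as a sketch rather than taken from the text:

```python
def half_adder(a, b):
    """Return (carry, result) for one-bit addition of a and b."""
    return (a & b, a ^ b)

def full_adder(a, b, c_in):
    """Return (carry, result) when a carry from the previous adder is included."""
    result = a ^ b ^ c_in
    carry = (a & b) | (c_in & (a ^ b))
    return (carry, result)

# Reproduce the half-adder table row by row.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", *half_adder(a, b))

# Spot-check two full-adder rows: 1+1+1 = 11 and 1+0+1 = 10 in binary.
print(full_adder(1, 1, 1))  # (1, 1)
print(full_adder(1, 0, 1))  # (1, 0)
```

Chaining a half-adder into full-adders in this way is how multi-bit ripple-carry addition is built.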

83.4 History
Irving Anellis has done the research to show that C.S. Peirce appears to be the earliest logician (in 1893) to devise a
truth table matrix.[4] From the summary of his paper:

In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell's
1912 lecture on "The Philosophy of Logical Atomism", truth table matrices. The matrix for negation is
Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein.
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth
table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An
unpublished manuscript by Peirce identified as having been composed in 1883–84 in connection with
the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation"
that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth
table for the conditional.

83.5 Notes
[1] Information about notation may be found in Bocheński (1959), Enderton (2001), and Quine (1982).

[2] The operators here with equal left and right identities (XOR, AND, XNOR, and OR) are also commutative monoids because
they are also associative. While this distinction may be irrelevant in a simple discussion of logic, it can be quite important
in more advanced mathematics. For example, in category theory an enriched category is described as a base category
enriched over a monoid, and any of these operators can be used for enrichment.

83.6 See also


Boolean domain

Boolean-valued function

Espresso heuristic logic minimizer

Excitation table

First-order logic

Functional completeness

Karnaugh maps

Logic gate

Logical connective

Logical graph

Method of analytic tableaux

Propositional calculus

Truth function

83.7 References
[1] Georg Henrik von Wright (1955). "Ludwig Wittgenstein, A Biographical Sketch". The Philosophical Review. 64 (4):
527–545 (p. 532, note 9). JSTOR 2182631. doi:10.2307/2182631.

[2] Emil Post (July 1921). "Introduction to a general theory of elementary propositions". American Journal of Mathematics.
43 (3): 163–185. JSTOR 2370324. doi:10.2307/2370324.

[3] Ludwig Wittgenstein (1922) Tractatus Logico-Philosophicus, Proposition 5.101

[4] Anellis, Irving H. (2012). "Peirce's Truth-functional Analysis and the Origin of the Truth Table". History and Philosophy
of Logic. 33: 87–97. doi:10.1080/01445340.2011.621702.

83.8 Further reading


Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, Dordrecht, South Holland: D. Reidel.

Enderton, H. (2001). A Mathematical Introduction to Logic, second edition, New York: Harcourt Academic
Press. ISBN 0-12-238452-0

Quine, W.V. (1982), Methods of Logic, 4th edition, Cambridge, MA: Harvard University Press.

83.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], Truth table, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Truth Tables, Tautologies, and Logical Equivalence

"Peirce's Truth-functional Analysis and the Origin of Truth Tables" by Irving H. Anellis
Converting truth tables into Boolean expressions
Chapter 84

Two-element Boolean algebra

In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set
(or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so
that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed
here.

84.1 Denition
B is a partially ordered set and the elements of B are also its bounds.
An operation of arity n is a mapping from B^n to B. Boolean algebra consists of two binary operations and unary
complementation. The binary operations have been named and notated in various ways. Here they are called 'sum'
and 'product', and notated by infix '+' and '·', respectively. Sum and product commute and associate, as in the usual
algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '·' precedes '+'.
Hence A·B + C is parsed as (A·B) + C and not as A·(B + C). Complementation is denoted by writing an overbar
over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a
Boolean algebra is a ⟨B, +, ·, ¯, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩.
Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form,
with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '·' as AND, and vice versa if 1 is read
as False.

84.2 Some basic identities


2 can be seen as grounded in the following trivial Boolean arithmetic:

1 + 1 = 1 + 0 = 0 + 1 = 1
0 + 0 = 0
0·0 = 0·1 = 1·0 = 0
1·1 = 1
1̄ = 0
0̄ = 1

Note that:

'+' and '·' work exactly as in numerical arithmetic, except that 1 + 1 = 1. '+' and '·' are derived by analogy from
numerical arithmetic; simply set any nonzero number to 1.

Swapping 0 and 1, and '+' and '·', preserves truth; this is the essence of the duality pervading all Boolean algebras.

357
358 CHAPTER 84. TWO-ELEMENT BOOLEAN ALGEBRA

This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible
assignment of 0s and 1s to each variable (see decision procedure).
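This decision procedure is easy to carry out mechanically: define '+', '·', and the overbar on {0, 1} and test every assignment. A minimal sketch; the function names are illustrative:

```python
# Boolean arithmetic of 2: '+' saturates at 1, '.' is ordinary product.
bsum = lambda a, b: 1 if (a + b) else 0   # 1 + 1 = 1
bprod = lambda a, b: a * b
comp = lambda a: 1 - a                     # overbar (complement)

bits = (0, 1)
for A in bits:
    assert bsum(A, A) == A and bprod(A, A) == A   # idempotence
    assert bsum(A, 0) == A and bsum(A, 1) == 1 and bprod(A, 0) == 0
    assert comp(comp(A)) == A                     # double complementation
# Each operation distributes over the other.
for A in bits:
    for B in bits:
        for C in bits:
            assert bprod(A, bsum(B, C)) == bsum(bprod(A, B), bprod(A, C))
            assert bsum(A, bprod(B, C)) == bprod(bsum(A, B), bsum(A, C))
print("all identities verified")
```

Since every variable ranges over just {0, 1}, an equation in N variables needs only 2^N assignment checks, which is exactly the exponential-time decision procedure discussed in the Metatheory section below.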
The following equations may now be veried:

A + A = A
A·A = A
A + 0 = A
A + 1 = 1
A·0 = 0
A̿ = A

Each of '+' and '·' distributes over the other:

A·(B + C) = A·B + A·C;

A + (B·C) = (A + B)·(A + C).

That '·' distributes over '+' agrees with elementary algebra, but not '+' over '·'. For this and other reasons, a sum of
products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR
synthesis).
Each of '+' and '·' can be defined in terms of the other and complementation:

A·B = (Ā + B̄)̄

A + B = (Ā·B̄)̄.

We only need one binary operation, and concatenation suffices to denote it. Hence concatenation and overbar suffice
to notate 2. This notation is also that of Quine's Boolean term schemata. Letting (X) denote the complement of X
and "()" denote either 0 or 1 yields the syntax of the primary algebra.
A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be de-
rived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only
concatenation and overbar is:

1. ABC = BCA (Concatenation commutes, associates)

2. ĀA = 1 (2 is a complemented lattice, with an upper bound of 1)

3. A0 = A (0 is the lower bound).

4. A(AB)̄ = AB̄ (2 is a distributive lattice)

Where concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true. (Overbar is
negation in both cases.)
If 0 = 1, (1)–(3) are the axioms for an abelian group.
(1) only serves to prove that concatenation commutes and associates. First assume that (1) associates from either the
left or the right, then prove commutativity. Then prove association from the other direction. Associativity is simply
association from the left and right combined.
This basis makes for an easy approach to proof, called "calculation", that proceeds by simplifying expressions to 0 or
1, by invoking axioms (2)–(4), and the elementary identities AA = A, A̿ = A, 1 + A = 1, and the distributive law.

84.3 Metatheory
De Morgan's theorem states that if one does the following, in the given order, to any Boolean function:

Complement every variable;

Swap '+' and '·' operators (taking care to add brackets to ensure the order of operations remains the same);

Complement the result,

the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of
a function can be used to drive all complements down to the individual variables.
A powerful and nontrivial metatheorem states that any theorem of 2 holds for all Boolean algebras.[1] Conversely,
an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all the mathematical content
of Boolean algebra is captured by 2. This theorem is useful because any equation in 2 can be verified by a decision
procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps
that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there
exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture.

84.4 See also


Boolean algebra

Bounded set
Lattice (order)

Order theory

84.5 References
[1] Givant, S., and Halmos, P. (2009) Introduction to Boolean Algebras, Springer Verlag. Theorem 9.

84.6 Further reading


Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best
of the lot, and one still in print, is:

Mendelson, Elliot, 1970. Schaum's Outline of Boolean Algebra. McGraw–Hill.

The following items reveal how the two-element Boolean algebra is mathematically nontrivial.

Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.


Burris, Stanley N., and H.P. Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag.
ISBN 3-540-90578-2.
Chapter 85

Vector logic

Vector logic[1][2] is an algebraic model of elementary logic based on matrix algebra. Vector logic assumes that the
truth values map on vectors, and that the monadic and dyadic operations are executed by matrix operators.

85.1 Overview
Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two
(dyadic) variables. In the binary set, the value 1 corresponds to true and the value 0 to false. A two-valued vector
logic requires a correspondence between the truth-values true (t) and false (f), and two q-dimensional normalized
column vectors composed by real numbers s and n, hence:

t ↦ s and f ↦ n

(where q ≥ 2 is an arbitrary natural number, and "normalized" means that the length of the vector is 1; usually s
and n are orthogonal vectors). This correspondence generates a space of vector truth-values: V₂ = {s, n}. The basic
logical operations defined using this set of vectors lead to matrix operators.
The operations of vector logic are based on the scalar product between q-dimensional column vectors: u^T v = ⟨u, v⟩:
the orthonormality between vectors s and n implies that ⟨u, v⟩ = 1 if u = v, and ⟨u, v⟩ = 0 if u ≠ v.

85.1.1 Monadic operators


The monadic operators result from the application Mon: V₂ → V₂, and the associated matrices have q rows and q
columns. The two basic monadic operators for this two-valued vector logic are the identity and the negation:

Identity: A logical identity ID(p) is represented by matrix I = ss^T + nn^T. This matrix operates as follows:
Ip = p, p ∈ V₂; due to the orthogonality of s with respect to n, we have Is = ss^T s + nn^T s = s⟨s, s⟩ + n⟨n, s⟩ = s,
and conversely In = n.

Negation: A logical negation ¬p is represented by matrix N = ns^T + sn^T. Consequently, Ns = n and Nn = s.

The involutory behavior of the logical negation, namely that ¬(¬p) equals p, corresponds with the fact that N²
= I. It is important to note that this vector logic identity matrix is not generally an identity matrix in the sense of
matrix algebra.
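With q = 2 and the orthonormal choice s = (1,0)^T, n = (0,1)^T, the matrices I and N can be built and checked directly. A sketch in plain Python; the basis choice is an assumption for illustration (any orthonormal pair works):

```python
# s and n as 2-dimensional orthonormal column vectors.
s, n = [1, 0], [0, 1]

def outer(u, v):   # u v^T
    return [[a * b for b in v] for a in u]
def madd(A, B):    # matrix sum
    return [[x + y for x, y in zip(r, t)] for r, t in zip(A, B)]
def matvec(M, v):  # matrix-vector product
    return [sum(x * y for x, y in zip(row, v)) for row in M]
def matmul(A, B):  # matrix-matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

I = madd(outer(s, s), outer(n, n))   # identity operator s s^T + n n^T
N = madd(outer(n, s), outer(s, n))   # negation operator n s^T + s n^T

assert matvec(I, s) == s and matvec(I, n) == n
assert matvec(N, s) == n and matvec(N, n) == s
assert matmul(N, N) == I             # involution: N^2 = I
print(N)  # [[0, 1], [1, 0]]
```

For this particular basis I happens to coincide with the 2×2 identity matrix, which illustrates the caveat above: that coincidence is a property of the basis, not of the operator in general.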

85.1.2 Dyadic operators


The 16 two-valued dyadic operators correspond to functions of the type Dyad: V₂ ⊗ V₂ → V₂; the dyadic matrices
have q rows and q² columns. The matrices that execute these dyadic operations are based on the properties of the
Kronecker product.
Two properties of this product are essential for the formalism of vector logic:


1. The mixed-product property

If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD

2. Distributive transpose. The operation of transposition is distributive over the Kronecker product:

(A ⊗ B)^T = A^T ⊗ B^T.

Using these properties, expressions for dyadic logic functions can be obtained:

Conjunction. The conjunction (p∧q) is executed by a matrix that acts on two vector truth-values: C(u ⊗ v).
This matrix reproduces the features of the classical conjunction truth table in its formulation:

C = s(s ⊗ s)^T + n(s ⊗ n)^T + n(n ⊗ s)^T + n(n ⊗ n)^T

and verifies

C(s ⊗ s) = s,

C(s ⊗ n) = C(n ⊗ s) = C(n ⊗ n) = n.
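As an illustration (again a sketch assuming NumPy and q = 2), the conjunction matrix can be assembled from Kronecker products and checked against the classical truth table:

```python
import numpy as np

s = np.array([1.0, 0.0])   # true
n = np.array([0.0, 1.0])   # false

# C = s(s⊗s)^T + n(s⊗n)^T + n(n⊗s)^T + n(n⊗n)^T, a 2 x 4 matrix for q = 2.
C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))

# C acts on the Kronecker product of its two arguments.
assert np.allclose(C @ np.kron(s, s), s)      # true AND true = true
for u, v in [(s, n), (n, s), (n, n)]:
    assert np.allclose(C @ np.kron(u, v), n)  # every other case = false
```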

Disjunction. The disjunction (p∨q) is executed by the matrix

D = s(s ⊗ s)^T + s(s ⊗ n)^T + s(n ⊗ s)^T + n(n ⊗ n)^T,

D(s ⊗ s) = D(s ⊗ n) = D(n ⊗ s) = s

D(n ⊗ n) = n.

Implication. The implication corresponds in classical logic to the expression p → q ≡ ¬p ∨ q. The vector logic
version of this equivalence leads to a matrix that represents this implication in vector logic: L = D(N ⊗ I).
The explicit expression for this implication is:

L = s(s ⊗ s)^T + n(s ⊗ n)^T + s(n ⊗ s)^T + s(n ⊗ n)^T,

and the properties of classical implication are satisfied:


L(s ⊗ s) = L(n ⊗ s) = L(n ⊗ n) = s and
L(s ⊗ n) = n.

Equivalence and Exclusive or. In vector logic the equivalence p ≡ q is represented by the following matrix:

E = s(s ⊗ s)^T + n(s ⊗ n)^T + n(n ⊗ s)^T + s(n ⊗ n)^T



This matrix satisfies

E(s ⊗ s) = E(n ⊗ n) = s

E(s ⊗ n) = E(n ⊗ s) = n.

The exclusive or (p ⊕ q) is the negation of the equivalence:

X = NE

X = n(s ⊗ s)^T + s(s ⊗ n)^T + s(n ⊗ s)^T + n(n ⊗ n)^T,

X(s ⊗ s) = X(n ⊗ n) = n

X(s ⊗ n) = X(n ⊗ s) = s.

NAND and NOR

The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively:

S = NC

P = ND

85.1.3 De Morgan's law

In the two-valued logic, the conjunction and the disjunction operations satisfy De Morgan's law: p∧q ≡ ¬(¬p∨¬q),
and its dual: p∨q ≡ ¬(¬p∧¬q). For the two-valued vector logic this law is also verified:

C(u ⊗ v) = ND(Nu ⊗ Nv), where u and v are two logic vectors.

The Kronecker product implies the following factorization:

C(u ⊗ v) = ND(N ⊗ N)(u ⊗ v).

Then it can be proved that in the two-dimensional vector logic De Morgan's law is a law involving operators, and
not only a law concerning operations:[3]

C = ND(N ⊗ N)
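This operator form of De Morgan's law is easy to confirm numerically; the sketch below (NumPy, q = 2) checks C = ND(N ⊗ N) and, by the same argument, its dual D = NC(N ⊗ N):

```python
import numpy as np

s = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)

C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))
D = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))

# De Morgan's law as an identity between operators, not just operations.
assert np.allclose(C, N @ D @ np.kron(N, N))
# The dual law holds as well.
assert np.allclose(D, N @ C @ np.kron(N, N))
```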

85.1.4 Law of contraposition


In the classical propositional calculus, the Law of Contraposition p → q ≡ ¬q → ¬p is proved because the equivalence
holds for all the possible combinations of truth-values of p and q.[4] Instead, in vector logic, the law of contraposition
emerges from a chain of equalities within the rules of matrix algebra and Kronecker products, as shown in what
follows:

L(u ⊗ v) = D(N ⊗ I)(u ⊗ v) = D(Nu ⊗ v) = D(Nu ⊗ NNv) =

D(NNv ⊗ Nu) = D(N ⊗ I)(Nv ⊗ Nu) = L(Nv ⊗ Nu)

This result is based on the fact that D, the disjunction matrix, represents a commutative operation.

85.2 Many-valued two-dimensional logic


Many-valued logic was developed by many researchers, particularly Jan Łukasiewicz, and allows extending logical
operations to truth-values that include uncertainties.[5] In the case of two-valued vector logic, uncertainties in the
truth values can be introduced using vectors with s and n weighted by probabilities.
Let f = αs + βn, with α, β ∈ [0, 1] and α + β = 1, be this kind of probabilistic vector. Here, the many-valued
character of the logic is introduced a posteriori via the uncertainties introduced in the inputs.[1]

85.2.1 Scalar projections of vector outputs


The outputs of this many-valued logic can be projected on scalar functions and generate a particular class of
probabilistic logic with similarities with the many-valued logic of Reichenbach.[6][7][8] Given two vectors u = αs + (1 − α)n
and v = βs + (1 − β)n and a dyadic logical matrix G, a scalar probabilistic logic is provided by the projection over
vector s:

Val(scalars) = s^T G(vectors)

Here are the main results of these projections:

NOT(α) = s^T Nu = 1 − α
OR(α, β) = s^T D(u ⊗ v) = α + β − αβ
AND(α, β) = s^T C(u ⊗ v) = αβ
IMPL(α, β) = s^T L(u ⊗ v) = 1 − α(1 − β)
XOR(α, β) = s^T X(u ⊗ v) = α + β − 2αβ

The associated negations are:

NOR(α, β) = 1 − OR(α, β)
NAND(α, β) = 1 − AND(α, β)
EQUI(α, β) = 1 − XOR(α, β)

If the scalar values belong to the set {0, ½, 1}, this many-valued scalar logic is, for many of the operators, almost
identical to the 3-valued logic of Łukasiewicz. Also, it has been proved that when the monadic or dyadic operators
act over probabilistic vectors belonging to this set, the output is also an element of this set.[3]
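A quick numerical check of these projections (a sketch with NumPy, q = 2; the weights 0.7 and 0.4 are arbitrary choices for illustration):

```python
import numpy as np

s = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])
N = np.outer(n, s) + np.outer(s, n)
C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))
D = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))

a, b = 0.7, 0.4                      # arbitrary probabilistic weights
u = a * s + (1 - a) * n
v = b * s + (1 - b) * n

# Projection over s yields the scalar probabilistic operations.
assert np.isclose(s @ (N @ u), 1 - a)                      # NOT
assert np.isclose(s @ (C @ np.kron(u, v)), a * b)          # AND
assert np.isclose(s @ (D @ np.kron(u, v)), a + b - a * b)  # OR
```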

85.3 History
The approach has been inspired by neural network models based on the use of high-dimensional matrices and vectors.[9][10]
Vector logic is a direct translation into a matrix-vector formalism of the classical Boolean polynomials.[11] This kind
of formalism has been applied to develop a fuzzy logic in terms of complex numbers.[12] Other matrix and vector
approaches to logical calculus have been developed in the framework of quantum physics, computer science
and optics.[13][14][15] Early attempts to use linear algebra to represent logic operations go back to Peirce and
Copilowish.[16] The Indian biophysicist G.N. Ramachandran developed a formalism using algebraic matrices and
vectors to represent many operations of classical Jain logic, known as Syad and Saptabhangi (Indian logic).[17] It
requires independent affirmative evidence for each assertion in a proposition, and does not make the assumption of
binary complementation.

85.4 Boolean polynomials


George Boole established the development of logical operations as polynomials.[11] For the case of monadic operators
(such as identity or negation), the Boolean polynomials look as follows:

f(x) = f(1)x + f(0)(1 − x)

The four different monadic operations result from the different binary values for the coefficients. The identity operation
requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1. For the 16 dyadic operators, the Boolean
polynomials are of the form:
polynomials are of the form:

f(x, y) = f(1, 1)xy + f(1, 0)x(1 − y) + f(0, 1)(1 − x)y + f(0, 0)(1 − x)(1 − y)

The dyadic operations can be translated to this polynomial format when the coefficients f take the values indicated in
the respective truth tables. For instance, the NAND operation requires that:

f(1, 1) = 0 and f(1, 0) = f(0, 1) = f(0, 0) = 1.

These Boolean polynomials can be immediately extended to any number of variables, producing a large potential
variety of logical operators. In vector logic, the matrix-vector structure of logical operators is an exact translation to
the format of linear algebra of these Boolean polynomials, where x and 1 − x correspond to vectors s and n respectively
(the same for y and 1 − y). In the example of NAND, f(1,1) = n and f(1,0) = f(0,1) = f(0,0) = s, and the matrix version
becomes:

S = n(s ⊗ s)^T + s[(s ⊗ n)^T + (n ⊗ s)^T + (n ⊗ n)^T]
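The polynomial-to-matrix translation can be sketched directly (NumPy, q = 2): each coefficient f(x, y) becomes the vector s or n, each product of x, y factors becomes a Kronecker product of basis vectors, and the resulting matrix reproduces the NAND truth table:

```python
import numpy as np

s = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])
vec = {1: s, 0: n}   # scalar truth value -> truth-value vector

# NAND coefficients: f(1,1) = 0, f(1,0) = f(0,1) = f(0,0) = 1.
f = {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 1}

# Matrix translation: S = sum over (x, y) of vec[f(x,y)] (vec[x] ⊗ vec[y])^T.
S = sum(np.outer(vec[w], np.kron(vec[x], vec[y])) for (x, y), w in f.items())

# The matrix version agrees with the Boolean polynomial on every input.
for (x, y), w in f.items():
    assert np.allclose(S @ np.kron(vec[x], vec[y]), vec[w])
```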

85.5 Extensions
Vector logic can be extended to include many truth values since large dimensional vector spaces allow the creation of
many orthogonal truth values and the corresponding logical matrices.[2]
Logical modalities can be fully represented in this context, with a recursive process inspired by neural models.[2][18]
Some cognitive problems about logical computations can be analyzed using this formalism, in particular recursive
decisions. Any logical expression of classical propositional calculus can be naturally represented by a
tree structure.[4] This fact is retained by vector logic, and has been partially used in neural models focused on
the investigation of the branched structure of natural languages.[19][20][21][22][23][24]

Computation via reversible operations such as the Fredkin gate can be implemented in vector logic. Such
implementations provide explicit expressions for matrix operators that produce the input format and the output
filtering necessary for obtaining computations.[2][3]
Elementary cellular automata can be analyzed using the operator structure of vector logic; this analysis leads
to a spectral decomposition of the laws governing their dynamics.[25][26]
In addition, based on this formalism, a discrete differential and integral calculus has been developed.[27]

85.6 See also


Fuzzy logic
Quantum logic
Boolean algebra
Propositional calculus
George Boole
Jan Łukasiewicz

85.7 References
[1] Mizraji, E. (1992). Vector logics: the matrix-vector representation of logical calculus. Fuzzy Sets and Systems, 50, 179–185, 1992

[2] Mizraji, E. (2008) Vector logic: a natural algebraic representation of the fundamental logical gates. Journal of Logic and Computation, 18, 97–121, 2008

[3] Mizraji, E. (1996) The operators of vector logic. Mathematical Logic Quarterly, 42, 27–39

[4] Suppes, P. (1957) Introduction to Logic, Van Nostrand Reinhold, New York.

[5] Łukasiewicz, J. (1980) Selected Works. L. Borkowski, ed., pp. 153–178. North-Holland, Amsterdam, 1980

[6] Rescher, N. (1969) Many-Valued Logic. McGraw-Hill, New York

[7] Blanché, R. (1968) Introduction à la Logique Contemporaine, Armand Colin, Paris

[8] Klir, G.J., Yuan, G. (1995) Fuzzy Sets and Fuzzy Logic. Prentice-Hall, New Jersey

[9] Kohonen, T. (1977) Associative Memory: A System-Theoretical Approach. Springer-Verlag, New York

[10] Mizraji, E. (1989) Context-dependent associations in linear distributed memories. Bulletin of Mathematical Biology, 50, 195–205

[11] Boole, G. (1854) An Investigation of the Laws of Thought, on which are Founded the Theories of Logic and Probabilities. Macmillan, London, 1854; Dover, New York Reedition, 1958

[12] Dick, S. (2005) Towards complex fuzzy logic. IEEE Transactions on Fuzzy Systems, 15, 405–414, 2005

[13] Mittelstaedt, P. (1968) Philosophische Probleme der Modernen Physik, Bibliographisches Institut, Mannheim

[14] Stern, A. (1988) Matrix Logic: Theory and Applications. North-Holland, Amsterdam

[15] Westphal, J., Hardy, J. (2005) Logic as a vector system. Journal of Logic and Computation, 15, 751–765

[16] Copilowish, I.M. (1948) Matrix development of the calculus of relations. Journal of Symbolic Logic, 13, 193–203

[17] Jain, M.K. (2011) Logic of evidence-based inference propositions, Current Science, 100, 1663–1672

[18] Mizraji, E. (1994) Modalities in vector logic. Notre Dame Journal of Formal Logic, 35, 272–283

[19] Mizraji, E., Lin, J. (2002) The dynamics of logical decisions. Physica D, 168–169, 386–396

[20] beim Graben, P., Potthast, R. (2009). Inverse problems in dynamic cognitive modeling. Chaos, 19, 015103

[21] beim Graben, P., Pinotsis, D., Saddy, D., Potthast, R. (2008). Language processing with dynamic fields. Cogn. Neurodyn., 2, 79–88

[22] beim Graben, P., Gerth, S., Vasishth, S. (2008) Towards dynamical system models of language-related brain potentials. Cogn. Neurodyn., 2, 229–255

[23] beim Graben, P., Gerth, S. (2012) Geometric representations for minimalist grammars. Journal of Logic, Language and Information, 21, 393–432.

[24] Binazzi, A. (2012) Cognizione logica e modelli mentali. Studi sulla formazione, 1-2012, pp. 69–84

[25] Mizraji, E. (2006) The parts and the whole: inquiring how the interaction of simple subsystems generates complexity. International Journal of General Systems, 35, pp. 395–415.

[26] Arruti, C., Mizraji, E. (2006) Hidden potentialities. International Journal of General Systems, 35, 461–469.

[27] Mizraji, E. (2015) Differential and integral calculus for logical operations. A matrix-vector approach. Journal of Logic and Computation 25, 613–638, 2015
Chapter 86

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh
introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery
of Allan Marquand's 1881 logical diagram[4] aka Marquand diagram[3] but with a focus now set on its utility for
switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as
Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition
capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

86.1 Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the
Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean
variables A, B, C, D, and their inverses.


f(A, B, C, D) = Σmᵢ, i ∈ {6, 8, 9, 10, 11, 12, 13, 14}, where mᵢ are the minterms to map (i.e., rows that
have output 1 in the truth table).

f(A, B, C, D) = ΠMᵢ, i ∈ {0, 1, 2, 3, 4, 5, 7, 15}, where Mᵢ are the maxterms to map (i.e., rows that have
output 0 in the truth table).

86.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows,
and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.


         AB
         00  01  11  10
  CD 00   0   0   1   1
     01   0   0   1   1
     11   0   0   0   1
     10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD' + AD'
F = (A+B)(A+C)(B'+C'+D')(A+D')

An example Karnaugh map. This image actually shows two Karnaugh maps: for the function f, using minterms (colored rectangles), and for its complement, using maxterms (gray rectangles). Here Σm() signifies a sum of minterms, denoted in the article as Σmᵢ.

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray
code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of
adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output
for that combination of inputs.
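The Gray-code labels can be generated with the standard i XOR (i >> 1) construction; this sketch (plain Python, illustrative) produces the 2-bit sequence used for the row and column indices:

```python
def gray_labels(nbits):
    """Gray-code bit strings: adjacent entries differ in exactly one bit."""
    return [format(i ^ (i >> 1), "0{}b".format(nbits)) for i in range(1 << nbits)]

labels = gray_labels(2)
assert labels == ["00", "01", "11", "10"]
# The wrap-around neighbours also differ in one bit, matching the toroidal map.
assert sum(a != b for a, b in zip(labels[0], labels[-1])) == 1
```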
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical
form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify
the expression. The minterms ('minimal' terms) for the final expression are found by encircling groups of 1s in the
map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm
rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one
larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and
green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For
example, AD would mean a cell which covers the 2 × 2 area where A and D are true, i.e. the cells numbered 13, 9,
15, 11 in the construction diagram below. On the other hand, AD' would mean the cells where A is true and D is false
(that is, D' is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells
on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those
at the bottom. Therefore, AD' can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to
include cells 10 and 14), as is B'D', which includes the four corners.

K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.

K-map construction. Instead of containing output values, the following diagram shows the numbers of the outputs
(the minterm indices), therefore it is not a Karnaugh map:

         AB
         00  01  11  10
  CD 00   0   4  12   8
     01   1   5  13   9
     11   3   7  15  11
     10   2   6  14  10

In three dimensions, one can bend a rectangle into a torus.

86.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic
minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation
of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC'.


For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before
it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with
the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of

f(A,B,C,D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' +

AB'CD + ABC'D' + ABC'D + ABCD'
= AC' + AB' + BCD'
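The simplification can be confirmed by brute force; this sketch (plain Python, illustrative) checks that AC' + AB' + BCD' agrees with the original minterm list on all 16 input combinations:

```python
ONES = {6, 8, 9, 10, 11, 12, 13, 14}   # minterms with output 1

def f_truth_table(a, b, c, d):
    # Interpret ABCD as a 4-bit minterm index.
    return (a * 8 + b * 4 + c * 2 + d) in ONES

def f_simplified(a, b, c, d):
    # AC' + AB' + BCD'
    return bool((a and not c) or (a and not b) or (b and c and not d))

assert all(f_truth_table(a, b, c, d) == f_simplified(a, b, c, d)
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
```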

         AB
         00  01  11  10
  CD 00   0   0   1   1
     01   0   0   1   1
     11   0   0   0   1
     10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')

Diagram showing two K-maps. The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond to minterms. The brown region is an overlap of the red 2 × 2 square and the green 4 × 1 rectangle. The K-map for the inverse of f is shown as gray rectangles, which correspond to maxterms.

It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but
the time it takes to do that grows exponentially with the number of terms.

86.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:

brown: A'B'

gold: A'C'

blue: BCD

This yields the inverse:

f'(A, B, C, D) = A'B' + A'C' + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A, B, C, D) = (A'B' + A'C' + BCD)'
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')

86.1.4 Don't cares

         AB
         00  01  11  10
  CD 00   0   0   1   1
     01   0   0   1   1
     11   0   0   X   1
     10   0   1   1   1

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = A + BCD'
F = (A+B)(A+C)(A+D')

The value of f for ABCD = 1111 is replaced by a don't care. This removes the green term completely and allows the red term to be larger. It also allows the blue inverse term to shift and become larger.

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example shown above is the same as the previous example, but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD'

Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle);
simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following
section on race hazards).
The inverse case is simplied as follows:

f'(A, B, C, D) = A'B' + A'C' + A'D
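The effect of the don't care can also be checked by brute force (plain Python, illustrative): F = A + BCD' must reproduce the required outputs on every cell except 15, where either value is acceptable:

```python
ONES = {6, 8, 9, 10, 11, 12, 13, 14}   # required 1s; cell 15 is a don't care

def f_dontcare(a, b, c, d):
    # A + BCD'
    return bool(a or (b and c and not d))

for m in range(16):
    if m == 15:
        continue   # don't care: either output is acceptable here
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f_dontcare(a, b, c, d) == (m in ONES)
```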

86.2 Race hazards

86.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a
Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions
circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained
above - we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1
to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged
at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a
momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B
are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch
wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue
output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom
to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often
needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying
De Morgan's laws creates another product of sums expression for f, but with a new factor of (A + D').

86.2.2 2-variable map examples


The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each is the minterms as a function
of Σm() and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression
that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical
interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These
expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to
be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge
of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and
horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right
are adjacent, and the top and bottom are adjacent. K-Maps for 4 variables must be depicted as a donut or torus shape.
The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables
and more.

Race hazards are present in this diagram (the same example map as above):

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
The above diagram with consensus terms added to avoid race hazards:

f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD' + AD'
F = (A+B)(A+C)(B'+C'+D')(A+D')

With the four cells denoted m(1) = A'B', m(2) = AB', m(3) = A'B and m(4) = AB, the sixteen maps and their
race-hazard-free minimum equations are:

Σm();        K = 0;          K' = 1
Σm(1);       K = A'B';       K' = A + B
Σm(2);       K = AB';        K' = A' + B
Σm(3);       K = A'B;        K' = A + B'
Σm(4);       K = AB;         K' = A' + B'
Σm(1,2);     K = B';         K' = B
Σm(1,3);     K = A';         K' = A
Σm(1,4);     K = A'B' + AB;  K' = AB' + A'B
Σm(2,3);     K = AB' + A'B;  K' = A'B' + AB
Σm(2,4);     K = A;          K' = A'
Σm(3,4);     K = B;          K' = B'
Σm(1,2,3);   K = A' + B';    K' = AB
Σm(1,2,4);   K = A + B';     K' = A'B
Σm(1,3,4);   K = A' + B;     K' = AB'
Σm(2,3,4);   K = A + B;      K' = A'B'
Σm(1,2,3,4); K = 1;          K' = 0

86.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]


Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[9][1][10][11]
Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]
Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]
Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]
Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

86.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
Quine–McCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Reed-Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

86.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers, part I. 72 (9): 593–599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Conference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburgh). New York, USA: ACM: 127–133.
doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266–270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 48–49. ISBN
0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The first section
says that they are labeled with a code that changes only one bit between entries and the second section says that such a code
is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps - Rules of Simplification. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Wright-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Geoffrey Cumberlege Oxford University Press (London). pp. preface, 50–67. Retrieved
2017-04-16. [...] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [...] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [...] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 75–83. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonín (1956). Graficko-mechanické pomůcky užívané při analyse a synthese kontaktových obvodů [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracování informací [Symposium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 9–21.

[13] Svoboda, Antonín (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fachberichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122–123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonín; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (retyped electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Händler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Dissertation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Händler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium über Schaltkreis- und Schaltwerk-Theorie - Vortragsauszüge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut für Angewandte Mathematik, Universität Saarbrücken, Rheinisch-Westfälisches Institut für Instrumentelle Mathematik: Springer Basel AG / Birkhäuser Verlag Basel. pp. 169–198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Händler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 1034–1035, 1036, 1038. LCCN 67-21079. Title No. 1036. "[…] Übersichtlich ist die Darstellung nach Händler, die sämtliche Punkte, numeriert nach dem Gray-Code […], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. […]" [Händler's illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily comprehensible. It needs, however, a lot of space.]

[19] Hotz, Günter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter & Co. p. 117. ISBN 3-11-002050-5. "[…] Der Kreisgraph von Händler ist für das Auffinden von Primimplikanten gut brauchbar. Er hat den Nachteil, daß er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die Verwendung von Schablonen verringern. […]" [The circle graph by Händler is well suited to find prime implicants. A disadvantage is that it is difficult to draw. This can be remedied using stencils.]

[20] "Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Händler.)

[21] "Informatik Sammlung Erlangen (ISER) - Impressum" (in German). Erlangen, Germany: Friedrich-Alexander Universität. 2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by Händler.)

[22] Zemanek, Heinz (2013) [1990]. "Geschichte der Schaltalgebra" [History of circuit switching algebra]. In Broy, Manfred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 43–72. ISBN 9783642766770. "Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Händler in seiner Dissertation […] mit einem Kreisdiagramm. […]" (NB. Collection of papers at a colloquium held at the Bayerische Akademie der Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg: Springer-Verlag. pp. 54–56, 71, 112–113, 138–139. ISBN 978-3-540-52974-3. "[…] handelt es sich um ein Händler-Diagramm […], mit den Würfelecken als Ecken eines 2^m-Gons. […] Abb. […] zeigt auch Gegenstücke für andere Dimensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden; durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45°-Linien und 135°-Linien solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Händler-Diagramme wird angeführt, daß sie viel Platz beanspruchen. […]"

[24] Kortum, Herbert (1965). "Minimierung von Kontaktschaltungen durch Kombination von Kürzungsverfahren und Graphenmethoden". messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421–425.

[25] Kortum, Herbert (1966). "Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 9–12.

[26] Kortum, Herbert (1966). "Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96–102.

[27] Kortum, Herbert (1966). "Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen". messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (5): 151–157.

[28] Kortum, Herbert (1967). "Über zweckmäßige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Aufgabenstellungen". messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208–211.

[29] Tafel, Hans Jörg (1971). "4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen". Written at RWTH, Aachen, Germany. Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Munich, Germany: Carl Hanser Verlag. pp. 98–105, 107–113. ISBN 3-446-10569-7.

86.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Company. pp. 70–85. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. "Karnaugh Maps". Switching Theory: Insight Through Predicate Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 57–76. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons. pp. 36–49. LCCN 68-21185. "A refinement of the Venn diagram in that circles are replaced by squares and arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s and 0s to the squares and their labels and deduced the numbering scheme in common use."

Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.

86.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.

Using Karnaugh maps in practical applications, circuit design project to control traffic lights.

K-Map Tutorial for 2, 3, 4 and 5 variables

Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka – George E. Antoniou


Chapter 87

Wolfram axiom

The Wolfram axiom is the result of a computer exploration undertaken by Stephen Wolfram[1] in his A New Kind of
Science looking for the shortest single axiom equivalent to the axioms of Boolean algebra (or propositional calculus).
The result[2] of his search was an axiom with six Nands and three variables equivalent to Boolean algebra:

((a|b) | c) | (a | ((a|c) | a)) = c

Here the vertical bar represents the Nand logical operation (also known as the Sheffer stroke), with the following meaning: p Nand q is true if and only if not both p and q are true. It is named for Henry M. Sheffer, who proved that all the usual operators of Boolean algebra (Not, And, Or, Implies) could be expressed in terms of Nand. This means that logic can be set up using a single operator.
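Since the axiom quantifies over only three variables, it can be machine-checked by exhausting all eight truth assignments. A minimal Python sketch of that check (the function names are illustrative, not from the text):

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: true unless both p and q are true."""
    return not (p and q)

def wolfram_axiom_holds(a, b, c):
    """((a|b)|c) | (a|((a|c)|a)) == c, reading | as Nand."""
    lhs = nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a)))
    return lhs == c

# The identity holds under every one of the 2^3 truth assignments:
assert all(wolfram_axiom_holds(a, b, c)
           for a, b, c in product([False, True], repeat=3))
```

Exhaustive checking only confirms the identity as a tautology; that the axiom generates all of Boolean algebra is the deeper result established in Wolfram's proof.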
Wolfram's 25 candidates are precisely the set of Sheffer identities of length less than or equal to 15 elements (excluding mirror images) that have no noncommutative models of size less than or equal to 4 (variables).[3]
Researchers have known for some time that single equational axioms (i.e., 1-bases) exist for Boolean algebra, including representation in terms of disjunction and negation and in terms of the Sheffer stroke. Wolfram proved that there were no 1-basis candidates smaller than the axiom he found, using the techniques described in his NKS book. The proof is given in two pages (in 4-point type) in Wolfram's book. Wolfram's axiom is, therefore, the single simplest axiom by number of operators and variables needed to reproduce Boolean algebra.
Sheffer identities were independently obtained by different means and reported in a technical memorandum[4] in June 2000, acknowledging correspondence with Wolfram in February 2000 in which Wolfram disclosed that he had found the axiom in 1999 while preparing his book. Reference [5] also shows that a pair of equations (conjectured by Stephen Wolfram) is equivalent to Boolean algebra.

87.1 See also


Boolean algebra

87.2 References
[1] Stephen Wolfram, A New Kind of Science, 2002, pp. 808–811 and 1174.

[2] Rudy Rucker, A review of NKS, The Mathematical Association of America, Monthly 110, 2003.

[3] William McCune, Robert Veroff, Branden Fitelson, Kenneth Harris, Andrew Feist and Larry Wos, "Short Single Axioms for Boolean Algebra", J. Automated Reasoning, 2002.

[4] Robert Veroff and William McCune, "A Short Sheffer Axiom for Boolean Algebra", Technical Memorandum No. 244

[5] Robert Veroff, "Short 2-Bases for Boolean Algebra in Terms of the Sheffer Stroke". Tech. Report TR-CS-2000-25, Computer Science Department, University of New Mexico, Albuquerque, NM


87.3 External links


Stephen Wolfram, 2002, "A New Kind of Science", online.

Weisstein, Eric W. "Wolfram Axiom". MathWorld.

http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html

Weisstein, Eric W. "Boolean Algebra". MathWorld.

Weisstein, Eric W. "Robbins Axiom". MathWorld.

Weisstein, Eric W. "Huntington Axiom". MathWorld.


Chapter 88

Zhegalkin polynomial

Zhegalkin (also Zegalkin or Gegalkine) polynomials form one of many possible representations of the operations of Boolean algebra. Introduced by the Russian mathematician I. I. Zhegalkin in 1927, they are the polynomials of ordinary high school algebra interpreted over the integers mod 2. The resulting degeneracies of modular arithmetic result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents. Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic mod 2, x^2 = x. Hence a polynomial such as 3x^2y^5z is congruent to, and can therefore be rewritten as, xyz.

88.1 Boolean equivalent


Prior to 1927 Boolean algebra had been considered a calculus of logical values with logical operations of conjunction, disjunction, negation, etc. Zhegalkin showed that all Boolean operations could be written as ordinary numeric polynomials, thinking of the logical constants 0 and 1 as integers mod 2. The logical operation of conjunction is realized as the arithmetic operation of multiplication xy, and logical exclusive-or as arithmetic addition mod 2 (written here as x⊕y to avoid confusion with the common use of + as a synonym for inclusive-or). Logical complement ¬x is then derived from 1 and ⊕ as x⊕1. Since ∧ and ⊕ form a sufficient basis for the whole of Boolean algebra, meaning that all other logical operations are obtainable as composites of these basic operations, it follows that the polynomials of ordinary algebra can represent all Boolean operations, allowing Boolean reasoning to be performed reliably by appealing to the familiar laws of high school algebra without the distraction of the differences from high school algebra that arise with disjunction in place of addition mod 2.
An example application is the representation of the Boolean 2-out-of-3 threshold or median operation as the Zhegalkin polynomial xy ⊕ yz ⊕ zx, which is 1 when at least two of the variables are 1 and 0 otherwise.
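Evaluated over GF(2), with multiplication as logical AND and ⊕ as XOR, the 2-out-of-3 claim can be checked directly; a small Python sketch (the function name is illustrative):

```python
from itertools import product

def median3(x, y, z):
    """The Zhegalkin polynomial xy ⊕ yz ⊕ zx over GF(2):
    multiplication is AND, addition mod 2 is XOR."""
    return (x & y) ^ (y & z) ^ (z & x)

# The polynomial is 1 exactly when at least two of the inputs are 1:
for x, y, z in product((0, 1), repeat=3):
    assert median3(x, y, z) == (1 if x + y + z >= 2 else 0)
```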

88.2 Formal properties


Formally a Zhegalkin monomial is the product of a finite set of distinct variables (hence square-free), including the empty set whose product is denoted 1. There are 2^n possible Zhegalkin monomials in n variables, since each monomial is fully specified by the presence or absence of each variable. A Zhegalkin polynomial is the sum (exclusive-or) of a set of Zhegalkin monomials, with the empty set denoted by 0. A given monomial's presence or absence in a polynomial corresponds to that monomial's coefficient being 1 or 0 respectively. The Zhegalkin monomials, being linearly independent, span a 2^n-dimensional vector space over the Galois field GF(2) (NB: not GF(2^n), whose multiplication is quite different). The 2^(2^n) vectors of this space, i.e. the linear combinations of those monomials as unit vectors, constitute the Zhegalkin polynomials. The exact agreement with the number of Boolean operations on n variables, which exhaust the n-ary operations on {0,1}, furnishes a direct counting argument for completeness of the Zhegalkin polynomials as a Boolean basis.
This vector space is not equivalent to the free Boolean algebra on n generators because it lacks complementation (bitwise logical negation) as an operation (equivalently, because it lacks the top element as a constant). This is not to say that the space is not closed under complementation or lacks top (the all-ones vector) as an element, but rather that the linear transformations of this and similarly constructed spaces need not preserve complement and top.


Those that do preserve them correspond to the Boolean homomorphisms, e.g. there are four linear transformations
from the vector space of Zhegalkin polynomials over one variable to that over none, only two of which are Boolean
homomorphisms.
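The counting argument can be illustrated for n = 2: the 2^(2^2) = 16 subsets of the four monomials {1, x, y, xy} evaluate to 16 distinct truth tables, i.e. every 2-ary Boolean operation. A sketch of that check (the representation and names below are our own, not from the text):

```python
from itertools import chain, combinations, product

# The four Zhegalkin monomials in two variables: 1, x, y, xy.
MONOMIALS = ((), ('x',), ('y',), ('x', 'y'))

def eval_poly(poly, env):
    """Evaluate a polynomial (a collection of monomials, summed by XOR);
    a monomial is a tuple of variables, multiplied by AND."""
    total = 0
    for mono in poly:
        term = 1
        for var in mono:
            term &= env[var]
        total ^= term
    return total

def powerset(items):
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))

# Each of the 16 polynomials yields a distinct 2-variable truth table,
# so together they realize all 2^(2^2) = 16 Boolean operations:
tables = set()
for poly in powerset(MONOMIALS):
    tables.add(tuple(eval_poly(poly, {'x': x, 'y': y})
                     for x, y in product((0, 1), repeat=2)))
assert len(tables) == 16
```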

88.3 Method of computation

Computing the Zhegalkin polynomial for an example function P by the table method

There are three known methods generally used for the computation of the Zhegalkin polynomial.

Using the method of indeterminate coefficients


By constructing the canonical disjunctive normal form
By using tables

88.3.1 The method of indeterminate coecients


Using the method of indeterminate coefficients, a linear system consisting of all the tuples of the function and their values is generated. Solving the linear system gives the coefficients of the Zhegalkin polynomial.
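One way to realize this over GF(2): the unknown coefficients, one per monomial, satisfy f(a) = XOR of the coefficients of the monomials whose variables are all 1 under the assignment a. Processing the assignments in order of increasing support solves the system by forward substitution. A hedged Python sketch (the representation and names are illustrative, not from the text):

```python
from itertools import product

def zhegalkin_coefficients(f, n):
    """Solve for one coefficient per monomial, indexing each monomial by
    the frozenset of variable positions it contains. Each truth-table row
    gives the equation f(a) = XOR of c_S over all S within the support of
    a; visiting rows by increasing support isolates one new unknown each
    time."""
    coeff = {}
    for a in sorted(product((0, 1), repeat=n), key=sum):
        support = frozenset(i for i, bit in enumerate(a) if bit)
        acc = 0
        for s, c in coeff.items():
            if s < support:          # proper subsets are already solved
                acc ^= c
        coeff[support] = f(*a) ^ acc  # the remaining unknown
    return coeff

# Example: f(x, y) = x OR y has Zhegalkin polynomial x ⊕ y ⊕ xy.
assert zhegalkin_coefficients(lambda x, y: x | y, 2) == {
    frozenset(): 0, frozenset({0}): 1,
    frozenset({1}): 1, frozenset({0, 1}): 1}
```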

88.3.2 Using the canonical disjunctive normal form


Using this method, the canonical disjunctive normal form (a fully expanded disjunctive normal form) is computed first. Then the negations in this expression are replaced by an equivalent expression using the mod 2 sum of the variable and 1. The disjunction signs are changed to addition mod 2, the brackets are opened, and the resulting Boolean expression is simplified. This simplification results in the Zhegalkin polynomial.
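These steps can be mechanized by representing a polynomial as a set of monomials (each a set of variables): XOR is symmetric difference, products expand with mod-2 cancellation, a negation ¬x becomes x⊕1, and, because the minterms of a canonical DNF are mutually exclusive, their disjunction may be replaced directly by addition mod 2. A sketch under these conventions (all names illustrative):

```python
from itertools import product

def poly_mul(p, q):
    """Multiply two Zhegalkin polynomials (sets of monomials).
    Since x*x = x, each pairwise product is the union of variable
    sets; duplicate monomials cancel mod 2 (symmetric difference)."""
    result = set()
    for m1 in p:
        for m2 in q:
            result ^= {m1 | m2}
    return result

def zhegalkin_from_cdnf(f, n):
    """Canonical-DNF route: for each satisfying row build the minterm as
    a product of x_i (bit 1) or x_i ⊕ 1 (bit 0), then sum the mutually
    exclusive minterms mod 2 and let like monomials cancel."""
    poly = set()                      # the zero polynomial
    for row in product((0, 1), repeat=n):
        if f(*row):
            minterm = {frozenset()}   # the polynomial 1
            for i, bit in enumerate(row):
                x_i = frozenset({i})
                literal = {x_i} if bit else {x_i, frozenset()}  # x_i or x_i⊕1
                minterm = poly_mul(minterm, literal)
            poly ^= minterm           # disjoint minterms: OR equals XOR
    return poly

# Example: x OR y simplifies to x ⊕ y ⊕ xy.
assert zhegalkin_from_cdnf(lambda x, y: x | y, 2) == {
    frozenset({0}), frozenset({1}), frozenset({0, 1})}
```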

88.4 Related work


In the same year as Zhegalkin's paper (1927) the American mathematician E. T. Bell published a sophisticated arithmetization of Boolean algebra based on Dedekind's ideal theory and general modular arithmetic (as opposed to arithmetic mod 2). The much simpler arithmetic character of Zhegalkin polynomials was first noticed in the West (independently, communication between Soviet and Western mathematicians being very limited in that era) by the American mathematician Marshall Stone in 1936 when he observed while writing up his celebrated Stone duality theorem that the supposedly loose analogy between Boolean algebras and rings could in fact be formulated as an exact equivalence holding for both finite and infinite algebras, leading him to substantially reorganize his paper.

88.5 See also


Ivan Ivanovich Zhegalkin

Algebraic normal form (ANF)


Ring sum normal form (RSNF)

Reed-Muller expansion
Boolean algebra (logic)

Boolean domain
Boolean function

Boolean-valued function
Karnaugh map

88.6 References
Bell, Eric Temple (1927). "Arithmetic of Logic". Transactions of the American Mathematical Society. 29 (3): 597–611. JSTOR 1989098. doi:10.2307/1989098.

Gindikin, S. G. (1972). Algebraic Logic. Moscow: Nauka (English translation Springer-Verlag 1985). ISBN 0-387-96179-8.

Stone, Marshall (1936). "The Theory of Representations for Boolean Algebras". Transactions of the American Mathematical Society. 40 (1): 37–111. ISSN 0002-9947. JSTOR 1989664. doi:10.2307/1989664.

Zhegalkin, Ivan Ivanovich (1927). "On the Technique of Calculating Propositions in Symbolic Logic". Matematicheskii Sbornik. 43: 9–28.
88.7. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 387

88.7 Text and image sources, contributors, and licenses


88.7.1 Text
2-valued morphism Source: https://en.wikipedia.org/wiki/2-valued_morphism?oldid=626107757 Contributors: BD2412, Trovatore,
Gzabers, Pietdesomere, SmackBot, Bluebot, Epasseto, Iridescent, CmdrObot, Ste4k, David Eppstein, Onopearls, Hans Adler, PhiRho,
Jesse V. and Anonymous: 3
Absorption law Source: https://en.wikipedia.org/wiki/Absorption_law?oldid=759178643 Contributors: Cyp, Schneelocke, Charles Matthews,
David Newton, Dcoetzee, Dysprosia, Tea2min, Lethe, Macrakis, Bob.v.R, Chalst, Szquirrel, Bookandcoee, Kbdank71, Awis, Chobot,
NTBot~enwiki, Trovatore, Poulpy, SmackBot, RDBury, Mhss, Octahedron80, 16@r, BranStark, CRGreathouse, Gregbard, Cydebot,
PamD, David Eppstein, Policron, Jamelan, SieBot, JackSchmidt, Denisarona, Hans Adler, Skarebo, Addbot, Erik9bot, Constructive
editor, Trappist the monk, Dcirovic, ClarCharr and Anonymous: 14
Algebraic normal form Source: https://en.wikipedia.org/wiki/Algebraic_normal_form?oldid=776413263 Contributors: Michael Hardy,
Charles Matthews, Olathe, CyborgTosser, Macrakis, Mairi, Oleg Alexandrov, Linas, Ner102, GBL, Jon Awbrey, CBM, Salgueiro~enwiki,
Magioladitis, JackSchmidt, Hans Adler, Uscitizenjason, Legobot, Yobot, Omnipaedista, Klbrain, Matthiaspaul, Jiri 1984, Jochen Burghardt,
YiFeiBot, Anarchyte and Anonymous: 5
Analysis of Boolean functions Source: https://en.wikipedia.org/wiki/Analysis_of_Boolean_functions?oldid=795936394 Contributors:
Michael Hardy, Magioladitis, Dodger67, Onel5969, Yuval Filmus, Kbabej, The garmine, DrStrauss and TheSandDoctor
Balanced boolean function Source: https://en.wikipedia.org/wiki/Balanced_boolean_function?oldid=749850455 Contributors: Bearcat,
Xezbeth, Allens, Fetchcomms, Uncle Milty, Addbot, HRoestBot, Gzorg, AvicAWB, Quondum, ClueBot NG, M.r.ebraahimi, Snow Bliz-
zard, Prakhar098, Qetuth, Govind285, Ryanguyaneseboy, Lobopizza, Int80 and Anonymous: 2
Bent function Source: https://en.wikipedia.org/wiki/Bent_function?oldid=791120153 Contributors: Phil Boswell, Rich Farmbrough,
Will Orrick, Rjwilmsi, Marozols, Yahya Abdal-Aziz, Ntsimp, Headbomb, David Eppstein, Anonymous Dissident, Watchduck, Yobot,
AdmiralHood, Nageh, Citation bot 1, Tom.Reding, Trappist the monk, Gzorg, EmausBot, Quondum, Ebehn, Wcherowi, Helpful Pixie
Bot, ChrisGualtieri, Pintoch, Zieglerk, Monkbot, Cryptowarrior, InternetArchiveBot and Anonymous: 5
Binary decision diagram Source: https://en.wikipedia.org/wiki/Binary_decision_diagram?oldid=792679397 Contributors: Taw, Heron,
Michael Hardy, Charles Matthews, Greenrd, Furrykef, David.Monniaux, Rorro, Michael Snow, Gtrmp, Laudaka, Andris, Ryan Clark,
Sam Hocevar, McCart42, Andreas Kaufmann, Jkl, Kakesson, Uli, EmilJ, AshtonBenson, Mdd, Dirk Beyer~enwiki, Sade, IMeowbot,
Ruud Koot, YHoshua, Bluemoose, GregorB, Matumio, Qwertyus, Kinu, Brighterorange, YurikBot, KSchutte, Trovatore, Mikeblas, Jess
Riedel, SmackBot, Jcarroll, Karlbrace, LouScheer, Derek farn, CmdrObot, [email protected], Jay.Here, Wikid77, Headbomb, Ajo Mama,
Bobke, Hermel, Nouiz, Karsten Strehl, David Eppstein, Hitanshu D, Boute, Rohit.nadig, Aaron Rotenberg, Karltk, Rdhettinger, AMC-
Costa, Trivialist, Pranavashok, Sun Creator, Addbot, YannTM, Zorrobot, Yobot, Amirobot, Jason Recliner, Esq., SailorH, Twri, J04n,
Bigfootsbigfoot, Britlak, MondalorBot, Sirutan, Onel5969, Dewritech, EleferenBot, Ort43v, Elaz85, Tijfo098, Helpful Pixie Bot, Cal-
abe1992, BG19bot, Solomon7968, Happyuk, Divy.dv, ChrisGualtieri, Denim1965, Lone boatman, Mark viking, Behandeem, Melcous,
Thibaut120094, Damonamc, Cjdrake1, JMP EAX, InternetArchiveBot, Magic links bot and Anonymous: 91
Bitwise operation Source: https://en.wikipedia.org/wiki/Bitwise_operation?oldid=800409914 Contributors: AxelBoldt, Tim Starling,
Wapcaplet, Delirium, MichaelJanich, Ahoerstemeier, Dwo, Dcoetzee, Furrykef, Lowellian, Tea2min, Giftlite, DavidCary, AlistairMcMil-
lan, Nayuki, Nathan Hamblen, Vadmium, Ak301, Jimwilliams57, Andreas Kaufmann, Discospinster, Xezbeth, Paul August, Bender235,
ESkog, ZeroOne, Plugwash, Pilatus, RoyBoy, Spoon!, JeR, Hooperbloob, Cburnett, Suruena, Voxadam, Forderud, Brookie, Jonathan
de Boyne Pollard, Distalzou, Bluemoose, Kevin.dickerson, Turnstep, Tslocum, Qwertyus, Kbdank71, Sjakkalle, XP1, Moskvax, Math-
bot, GnniX, Quuxplusone, Visor, YurikBot, Wavelength, NTBot~enwiki, FrenchIsAwesome, Locke411, Rsrikanth05, Troller Trolling
Rodriguez, Trovatore, Sekelsenmat, Mikeblas, Klutzy, Cedar101, Cmglee, Amalthea, SmackBot, Incnis Mrsi, Mr link, Rmosler2100,
Plainjane335, @modi, Oli Filth, Baa, Torzsmokus, Teehee123, BlindWanderer, Loadmaster, Optakeover, Glen Pepicelli, Hu12, Yageroy,
Amniarix, Ceran, CRGreathouse, RomanXNS, ClearQ, Thijs!bot, Kubanczyk, N5iln, Acetate~enwiki, Widefox, Guy Macon, Chico75,
Snjrdn, Altamel, JAnDbot, ZZninepluralZalpha, Hackster78, David Eppstein, Gwern, Numbo3, Ian.thomson, Owlgorithm, NewEng-
landYankee, Rbakker99, Seraphim, Quindraco, Sillygwailo, Yintan, SimonTrew, ClueBot, Panchoy, Watchduck, Heckledpie, RedYeti,
Dthomsen8, Dsimic, Addbot, WQDW412, Jarble, Luckas-bot, Yobot, Tubybb, Fraggle81, Timeroot, AnomieBOT, Xqbot, Jayeshsen-
jaliya, Cal27, Rimcob, Frosted14, Perplextase, Intelliproject, 0x0309, Tilkax, Jopazhani, D'ohBot, Vareyn, Kmdouglass, Guillefc, Zvn,
Jveldeb, EmausBot, Set theorist, Noloader, Dewritech, Mateen Ulhaq, Scgtrp, Da Scientist, ZroBot, Dennis714, Jajabinks97, Sbmeirow,
Wikiloop, ClueBot NG, Matthiaspaul, Jcgoble3, Sharkqwy, Episcophagus, Helpful Pixie Bot, Iste Praetor, BG19bot, SimonZucker-
braun, AtrumVentus, PartTimeGnome, Johnny honestly, BattyBot, Oalders, Sfarney, Tagremover, FoCuSandLeArN, Fuebar, Poka-
janje, Jochen Burghardt, Zziccardi, Ben-Yeudith, Artoria2e5, Edgarphs, Franois Robere, Zenzhong8383, JoseEN, Zeenamoe, Xxiggy,
CoolOppo, AdityaKPorwal, Kajhutan, Eavestn, User000name, Boehm, JaimeGallego, JustSomeRandomPersonWithAComputer, Ttt74,
Alokaabasan123, Fmadd, NoToleranceForIntolerance, Peacecop kalmer and Anonymous: 229
Booles expansion theorem Source: https://en.wikipedia.org/wiki/Boole{}s_expansion_theorem?oldid=791896639 Contributors: Michael
Hardy, SebastianHelm, Charles Matthews, Giftlite, SamB, Macrakis, McCart42, Photonique, Qwertyus, Siddhant, Trovatore, Closed-
mouth, SmackBot, Javalenok, Bwgames, Freewol, Harrigan, AndrewHowse, Hamaryns, Plm209, DAGwyn, Cebus, LOTRrules, Denis-
arona, Addbot, Loz777, Yobot, Omnipaedista, AManWithNoPlan, BabbaQ, KLBot2, Muammar Gadda, Dwija Prasad De, Bernatis123,
Engheta, InternetArchiveBot, Bender the Bot, Xhan0o, Zensei2x and Anonymous: 19
Boolean algebra Source: https://en.wikipedia.org/wiki/Boolean_algebra?oldid=800235639 Contributors: William Avery, Michael Hardy,
Dan Koehl, Tacvek, Hyacinth, Dimadick, Tea2min, Thorwald, Paul August, Bender235, ESkog, El C, EmilJ, Coolcaesar, Wtmitchell,
Mindmatrix, Michiel Helvensteijn, BD2412, Rjwilmsi, Pleiotrop3, GnniX, Jrtayloriv, Rotsor, Wavelength, Trovatore, MacMog, Arthur
Rubin, [email protected], Caballero1967, Sardanaphalus, SmackBot, Incnis Mrsi, Gilliam, Tamfang, Lambiam, Wvbailey, Khazar,
Iridescent, Vaughan Pratt, CBM, Neelix, Widefox, QuiteUnusual, Magioladitis, David Eppstein, TonyBrooke, Glrx, Jmajeremy, Nwbee-
son, Cebus, Hurkyl, JohnBlackburne, Oshwah, Tavix, Jackfork, PericlesofAthens, CMBJ, Waldhorn, Soler97, Jruderman, Francvs,
Binksternet, Bruceschuman, Excirial, Hugo Herbelin, Johnuniq, Pgallert, Fluernutter, Favonian, Yobot, AnomieBOT, Danielt998, Ma-
terialscientist, Citation bot, MetaNest, Kivgaen, Pinethicket, Minusia, Oxonienses, Gamewizard71, Trappist the monk, Jordgette, It-
sZippy, Rbaleksandar, Jmencisom, Winner 42, Dcirovic, D.Lazard, Sbmeirow, Pun, Tijfo098, SemanticMantis, LZ6387, ClueBot
NG, LuluQ, Matthiaspaul, Abecedarius, Delusion23, Jiri 1984, Calisthenis, Helpful Pixie Bot, Shantnup, BG19bot, Northamerica1000,
388 CHAPTER 88. ZHEGALKIN POLYNOMIAL

Ivannoriel, Supernerd11, Robert Thyder, LanaEditArticles, Brad7777, Wolfmanx122, Proxyma, Soa karampataki, Muammar Gadda,
Cerabot~enwiki, Fuebar, Telfordbuck, Ruby Murray, Rlwood1, Shevek1981, Seppi333, The Rahul Jain, Matthew Kastor, LarsHugo,
Happy Attack Dog, Abc 123 def 456, Trax support, Lich counter, Mathematical Truth, Ksarnek, LukasMatt, Anjana Larka, Petr.savicky,
Myra Gul, DiscantX, Striker0614, Masih.bist, KasparBot, Jamieddd, Da3mon, MegaManiaZ, Bawb131, Prayasjain7, Simplexity22,
Striker0615, Integrvl, Fmadd, Pioniepl, Bender the Bot, 72, Wikishovel, Thbreacker, Antgaucho, Neehalsharrma1419 and Anonymous:
152
Boolean algebra (structure) Source: https://en.wikipedia.org/wiki/Boolean_algebra_(structure)?oldid=791495665 Contributors: Ax-
elBoldt, Mav, Bryan Derksen, Zundark, Tarquin, Taw, Jeronimo, Ed Poor, Perry Bebbington, XJaM, Toby Bartels, Heron, Camem-
bert, Michael Hardy, Pit~enwiki, Shellreef, Justin Johnson, GTBacchus, Ellywa, , DesertSteve, Samuel~enwiki, Charles
Matthews, Timwi, Dcoetzee, Dysprosia, Jitse Niesen, OkPerson, Maximus Rex, Imc, Fibonacci, Mosesklein, Sandman~enwiki, John-
leemk, JorgeGG, Robbot, Josh Cherry, Fredrik, Romanm, Voodoo~enwiki, Robinh, Ruakh, Tea2min, Ancheta Wis, Giftlite, Markus
Krtzsch, Lethe, MSGJ, Elias, Eequor, Pvemburg, Macrakis, Gauss, Ukexpat, Eduardoporcher, Barnaby dawson, Talkstosocks, Poccil,
Guanabot, Cacycle, Slipstream, Ivan Bajlo, Mani1, Paul August, Bunny Angel13, Plugwash, Elwikipedista~enwiki, Chalst, Nortexoid,
Jojit fb, Wrs1864, Masashi49, Msh210, Andrewpmk, ABCD, Water Bottle, Cburnett, Alai, Klparrot, Woohookitty, Linas, Igny, Uncle G,
Kzollman, Graham87, Magister Mathematicae, Ilya, Qwertyus, SixWingedSeraph, Rjwilmsi, Isaac Rabinovitch, MarSch, KamasamaK,
Staecker, GOD, Salix alba, Yamamoto Ichiro, FlaBot, Mathbot, Alhutch, Celestianpower, Scythe33, Chobot, Visor, Nagytibi, Yurik-
Bot, RobotE, Hairy Dude, [email protected], KSmrq, Joebeone, Archelon, Wiki alf, Trovatore, Yahya Abdal-Aziz, Bota47, Ott2,
Kompik, StuRat, Cullinane, , Arthur Rubin, JoanneB, Ilmari Karonen, Bsod2, SmackBot, FunnyYetTasty, Incnis Mrsi, Uny-
oyega, SaxTeacher, Btwied, Srnec, ERcheck, Izehar, Ciacchi, Cybercobra, Clarepawling, Jon Awbrey, Lambiam, Cronholm144, Meco,
Condem, Avantman42, Zero sharp, Vaughan Pratt, Makeemlighter, CBM, Gregbard, Sopoforic, [email protected], Sommacal alfonso, Julian
Mendez, Tawkerbot4, Thijs!bot, Sagaciousuk, Colin Rowat, Tellyaddict, KrakatoaKatie, AntiVandalBot, JAnDbot, MER-C, Magioladitis,
Albmont, Omicron18, David Eppstein, Honx~enwiki, Dai mingjie, Samtheboy, Policron, Fylwind, Pleasantville, Enoksrd, Anonymous
Dissident, Plclark, The Tetrast, Fwehrung, Escher26, Wing gundam, CBM2, WimdeValk, He7d3r, Hans Adler, Andreasabel, Hugo
Herbelin, Aguitarheroperson, Download, LinkFA-Bot, ., Jarble, Yobot, Ht686rg90, 2D, AnomieBOT, RobertEves92, Citation
bot, ArthurBot, LilHelpa, Gonzalcg, RibotBOT, SassoBot, Charvest, Constructive editor, FrescoBot, Irmy, Citation bot 1, Microme-
sistius, DixonDBot, EmausBot, Faceless Enemy, KHamsun, Dcirovic, Thecheesykid, D.Lazard, Sbmeirow, Tijfo098, ChuispastonBot,
Anita5192, ClueBot NG, Delusion23, Jiri 1984, Widr, Helpful Pixie Bot, Solomon7968, CitationCleanerBot, ChrisGualtieri, Tagre-
mover, Freeze S, Dexbot, Kephir, Jochen Burghardt, Cosmia Nebula, GeoreyT2000, JMP EAX, Bender the Bot, Magic links bot and
Anonymous: 156
Boolean algebras canonically dened Source: https://en.wikipedia.org/wiki/Boolean_algebras_canonically_defined?oldid=782969258
Contributors: Zundark, Michael Hardy, Tea2min, Pmanderson, D6, Paul August, EmilJ, Woohookitty, Linas, BD2412, Rjwilmsi, Salix
alba, Hairy Dude, KSmrq, Robertvan1, Arthur Rubin, Bluebot, Jon Awbrey, Lambiam, Assulted Peanut, Vaughan Pratt, CBM, Gregbard,
Barticus88, Headbomb, Nick Number, Magioladitis, Srice13, David Eppstein, STBot, R'n'B, Jeepday, Michael.Deckers, The Tetrast,
Wing gundam, Fratrep, Dabomb87, Hans Adler, DOI bot, Bte99, Yobot, Pcap, AnomieBOT, Citation bot, RJGray, FrescoBot, Irmy,
Citation bot 1, Set theorist, Klbrain, Tommy2010, Tijfo098, Helpful Pixie Bot, Solomon7968, Zeke, the Mad Horrorist, Fuebar, JJMC89,
Magic links bot and Anonymous: 11
Boolean conjunctive query Source: https://en.wikipedia.org/wiki/Boolean_conjunctive_query?oldid=799407316 Contributors: Trylks,
Tizio, Cedar101, Gregbard, Alaibot, DOI bot, Cdrdata, Dcirovic and Anonymous: 5
Boolean data type Source: https://en.wikipedia.org/wiki/Boolean_data_type?oldid=799640491 Contributors: CesarB, Nikai, Charles
Matthews, Greenrd, Furrykef, Grendelkhan, Robbot, Sbisolo, Wlievens, Mattaschen, Mfc, Tea2min, Enochlau, Jorge Stol, Mboverload,
Fanf, SamSim, Zondor, Bender235, The Noodle Incident, Spoon!, Causa sui, Grick, R. S. Shaw, A-Day, Rising~enwiki, Minority Report,
Metron4, Cburnett, Tony Sidaway, RainbowOfLight, DanBishop, Chris Mason, Tabletop, Marudubshinki, Josh Parris, Rjwilmsi, Salix
alba, Darguz Parsilvan, AndyKali, Vlad Patryshev, Mathbot, Spankthecrumpet, DevastatorIIC, NevilleDNZ, RussBot, Arado, Trovatore,
Thiseye, Mikeblas, Ospalh, Bucketsofg, Praetorian42, Max Schwarz, Wknight94, Richardcavell, Andyluciano~enwiki, HereToHelp, JLa-
Tondre, SmackBot, Melchoir, Renku, Mscuthbert, Sam Pointon, Gilliam, Cyclomedia, Gracenotes, Royboycrashfan, Kindall, Kcordina,
Jpaulm, BIL, Cybercobra, Decltype, Paddy3118, Richard001, Henning Makholm, Lambiam, Doug Bell, Amine Brikci N, Hiiiiiiiiii-
iiiiiiiiiii, Mirag3, EdC~enwiki, Jafet, Nyktos, SqlPac, Jokes Free4Me, Sgould, Torc2, Thijs!bot, Epbr123, JustAGal, AntiVandalBot,
Guy Macon, Seaphoto, JAnDbot, Db099221, Datrukup, Martinkunev, Bongwarrior, Soulbot, Tonyfaull, Burbble, Gwern, MwGamera,
Rettetast, Gah4, Zorakoid, Letter M, Raise exception, Adammck, Joeinwap, Philip Trueman, Qwayzer, Technopat, Hqb, Ashrentum,
Mcclarke, Jamelan, Jamespurs, LOTRrules, Kbrose, SieBot, Jerryobject, SimonTrew, SoulComa, Ctxppc, ClueBot, Hitherebrian, Pointil-
list, AmirOnWiki, DumZiBoT, Cmcqueen1975, Nickha0, Addbot, Mortense, Btx40, Lightbot, Gail, Luckas-bot, Yobot, Wonder, Gtz,
NeatNit, Xqbot, Miym, GrouchoBot, Shirik, Mark Renier, Zero Thrust, Machine Elf 1735, HRoestBot, MastiBot, FoxBot, Staaki, Vrena-
tor, RjwilmsiBot, Ripchip Bot, EmausBot, WikitanvirBot, Ahmed Fouad(the lord of programming), Thecheesykid, Donner60, Tijfo098,
ClueBot NG, Kylecbenton, Rezabot, Masssly, BG19bot, Hasegeli, MusikAnimal, Chmarkine, Aaron613, The Anonymouse, Lesser Car-
tographies, Monkbot, Zahardzhan, Eurodyne, FiendYT, Milos996, Blahblehblihblohbluh27, KolbertBot and Anonymous: 162
Boolean domain Source: https://en.wikipedia.org/wiki/Boolean_domain?oldid=788112863 Contributors: Toby Bartels, Asparagus, El
C, Cje~enwiki, Versageek, Jerey O. Gustafson, Salix alba, DoubleBlue, Closedmouth, Bibliomaniac15, SmackBot, Incnis Mrsi, C.Fred,
Mhss, Bluebot, Octahedron80, Nbarth, Ccero, NickPenguin, Jon Awbrey, JzG, Coredesat, Slakr, KJS77, CBM, AndrewHowse, Gogo
Dodo, Pascal.Tesson, Xuanji, Hut 8.5, Brigit Zilwaukee, Yolanda Zilwaukee, Hans Dunkelberg, Matthi3010, Boute, Jamelan, Maelgwn-
bot, Francvs, Cli, Wolf of the Steppes, Doubtentry, Icharus Ixion, Hans Adler, Buchanans Navy Sec, Overstay, Marsboat, Viva La
Information Revolution!, Autocratic Uzbek, Poke Salat Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Chester
County Dude, Southeast Penna Poppa, Delaware Valley Girl, Theonlydavewilliams, Addbot, Yobot, Erik9bot, EmausBot, ClueBot NG,
BG19bot, Monkbot and Anonymous: 8
Boolean expression Source: https://en.wikipedia.org/wiki/Boolean_expression?oldid=798497825 Contributors: Patrick, Andreas Kauf-
mann, Bobo192, Giraedata, Eclecticos, BorgHunter, YurikBot, Michael Slone, Arado, Trovatore, StuRat, SmackBot, Rune X2, Miguel
Andrade, Sct72, Tamfang, Timcrall, David Eppstein, R'n'B, ClueBot, R000t, Addbot, Ptbotgourou, Materialscientist, Xqbot, Erik9bot,
Serols, Gryllida, NameIsRon, ClueBot NG, BG19bot, Solomon7968, Pratyya Ghosh, Jaqoc, Bender the Bot and Anonymous: 19
Boolean function Source: https://en.wikipedia.org/wiki/Boolean_function?oldid=791935592 Contributors: Patrick, Michael Hardy, Ci-
phergoth, Charles Matthews, Hyacinth, Michael Snow, Giftlite, Matt Crypto, Neilc, Gadum, Clemwang, Murtasa, Arthena, Oleg Alexan-
drov, Mindmatrix, Jok2000, CharlesC, Waldir, Qwertyus, Ner102, RobertG, Gene.arboit, NawlinWiki, Trovatore, TheKoG, SDS, Smack-
Bot, Mhss, Jon Awbrey, Poa, Bjankuloski06en~enwiki, Loadmaster, Eassin, Gregbard, Ntsimp, [email protected], Shyguy92, Steveprutz, David
88.7. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 389

Eppstein, BigrTex, Trusilver, AltiusBimm, TheSeven, Policron, Tatrgel, TreasuryTag, TXiKiBoT, Spinningspark, Kumioko (renamed),
ClueBot, Watchduck, Farisori, Hans Adler, Addbot, Liquidborn, Luckas-bot, Amirobot, AnomieBOT, Chillout003, Twri, Quebec99,
Ayda D, Xqbot, Omnipaedista, Erik9bot, Nageh, Theorist2, EmausBot, Sivan.rosenfeld, ClueBot NG, Jiri 1984, Rezabot, WikiPuppies,
Allanthebaws, Int80, Nigellwh, Hannasnow, Anaszt5, InternetArchiveBot and Anonymous: 31
Boolean prime ideal theorem Source: https://en.wikipedia.org/wiki/Boolean_prime_ideal_theorem?oldid=784473773 Contributors:
AxelBoldt, Michael Hardy, Chinju, Eric119, AugPi, Charles Matthews, Dfeuer, Dysprosia, Aleph4, Tea2min, Giftlite, Markus Krtzsch,
MarkSweep, Vivacissamamente, Mathbot, Trovatore, Hennobrandsma, Ott2, Mhss, Jon Awbrey, CRGreathouse, CBM, Myasuda, Head-
bomb, RobHar, Trioculite, David Eppstein, Kope, TexD, Geometry guy, Alexey Muranov, Hugo Herbelin, Addbot, Xqbot, RoodyAlien,
Gonzalcg, FrescoBot, Citation bot 1, Tkuvho, Bomazi, Helpful Pixie Bot, PhnomPencil, Avsmal, Absolutelypuremilk and Anonymous:
16
Boolean ring Source: https://en.wikipedia.org/wiki/Boolean_ring?oldid=792365063 Contributors: AxelBoldt, Michael Hardy, Takuya-
Murata, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Fredrik, Giftlite, Jason Quinn, AshtonBenson, Oleg Alexandrov, Ruud Koot,
Rjwilmsi, Salix alba, R.e.b., Michael Slone, Hwasungmars, Trovatore, Vanished user 1029384756, Crasshopper, Cedar101, NSLE,
Mhss, Valley2city, Bluebot, Nbarth, Amazingbrock, Rschwieb, Vaughan Pratt, TAnthony, Albmont, Jeepday, TXiKiBoT, Wing gundam,
JackSchmidt, Watchduck, Hans Adler, Alexey Muranov, Addbot, Download, Zorrobot, Luckas-bot, Yobot, Ptbotgourou, JackieBot, The
Banner, Aliotra, Tom.Reding, Jakito, Dcirovic, Jasonanaggie, Anita5192, Matthiaspaul, Toploftical, Jochen Burghardt, Paul2520 and
Anonymous: 20
Boolean satisfiability algorithm heuristics Source: https://en.wikipedia.org/wiki/Boolean_satisfiability_algorithm_heuristics?oldid=
765344495 Contributors: Michael Hardy, Teb728, Postcard Cathy, Malcolmxl5, John of Reading, BG19bot, Surt91, Iordantrenkov and
Anonymous: 3
Boolean satisfiability problem Source: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem?oldid=795334382 Contributors:
Damian Yerrick, LC~enwiki, Brion VIBBER, The Anome, Ap, Jan Hidders, ChangChienFu, B4hand, Dwheeler, Twanvl, Nealmcb,
Michael Hardy, Tim Starling, Cole Kitchen, Chinju, Zeno Gantner, Karada, Michael Shields, Schneelocke, Timwi, Dcoetzee, Dysprosia,
Wik, Markhurd, Jimbreed, Jecar, Hh~enwiki, David.Monniaux, Naddy, MathMartin, Nilmerg, Alex R S, Saforrest, Snobot, Giftlite, Ev-
eryking, Dratman, Elias, Mellum, Bacchiad, Gdr, Karl-Henner, Sam Hocevar, Creidieki, McCart42, Mjuarez, Leibniz, FT2, Night Gyr,
DcoetzeeBot~enwiki, Bender235, ZeroOne, Ben Standeven, Clement Cherlin, Chalst, Obradovic Goran, Quaternion, Diego Moya, Sur-
realWarrior, Cdetrio, Sl, Twiki-walsh, Zander, Drbreznjev, Linas, Queerudite, Jok2000, Graham87, BD2412, Mamling, Rjwilmsi, Tizio,
Mathbot, YurikBot, Hairy Dude, Canadaduane, Msoos, Jpbowen, Hohohob, Cedar101, Janto, RG2, SmackBot, Doomdayx, FlashSheri-
dan, DBeyer, Cachedio, Chris the speller, Javalenok, Mhym, LouScheer, Zarrapastro, Localzuk, Jon Awbrey, SashatoBot, Wvbailey,
J. Finkelstein, Igor Markov, Mets501, Dan Gluck, Tawkerbot2, Ylloh, CRGreathouse, CBM, Gregbard, AndrewHowse, Julian Mendez,
Oerjan, Electron9, Alphachimpbot, Wasell, Mmn100, Robby, A3nm, David Eppstein, Gwern, Andre.holzner, R'n'B, CommonsDelinker,
Vegasprof, Enmiles, Naturalog, VolkovBot, Dejan Jovanovi, LokiClock, Maghnus, Bovineboy2008, Jobu0101, Luuva, Piyush Sriva, Lo-
gan, Hattes, Fratrep, Svick, Fancieryu, PerryTachett, PsyberS, DFRussia, Simon04, Mutilin, DragonBot, Oliver Kullmann, Bender2k14,
Hans Adler, Rswarbrick, Max613, DumZiBoT, Arlolra, ~enwiki, Alexius08, Yury.chebiryak, Sergei, Favonian, Legobot,
Yobot, Mqasem, AnomieBOT, Erel Segal, Kingpin13, Citation bot, ArthurBot, Weichaoliu, Miym, Nameless23, Vaxquis, FrescoBot,
Artem M. Pelenitsyn, Milimetr88, Ahalwright, Guarani.py, Tom.Reding, Skyerise, Cnwilliams, Daniel.noland, MrX, Yaxy2k, Jowa
fan, Siteswapper, John of Reading, Wiki.Tango.Foxtrot, GoingBatty, PoeticVerse, Dcirovic, Rafaelgm, Dennis714, Zephyrus Tavvier,
Tigerauna, Orange Suede Sofa, Tijfo098, Chaotic iak, Musatovatattdotnet, Helpful Pixie Bot, Taneltammet, Saragh90, Cyberbot II,
ChrisGualtieri, JYBot, Dexbot, Girondaniel, Adrians wikiname, Jochen Burghardt, Natematic, Me, Myself, and I are Here, Sravan11k,
SiraRaven, Feydun, Juliusbier, Jacob irwin, Djhulme, RaBOTnik, 22merlin, Monkbot, Song of Spring, JeremiahY, KevinGoedecke,
EightTwoThreeFiveOneZeroSevenThreeOne, Marketanova984, InternetArchiveBot, Inblah, GreenC bot, Blue Edits, Bender the Bot,
EditingGirae and Anonymous: 177
Boolean-valued function Source: https://en.wikipedia.org/wiki/Boolean-valued_function?oldid=773507247 Contributors: Toby Bartels,
MathMartin, Giftlite, El C, Versageek, Oleg Alexandrov, Jerey O. Gustafson, Linas, BD2412, Qwertyus, Salix alba, DoubleBlue, Titoxd,
Trovatore, Closedmouth, Arthur Rubin, Bibliomaniac15, SmackBot, C.Fred, Jim62sch, Mhss, Chris the speller, Bluebot, NickPenguin,
Jon Awbrey, JzG, Coredesat, Slakr, Tawkerbot2, CRGreathouse, CBM, Sdorrance, Gregbard, Gogo Dodo, Thijs!bot, Dougher, Hut
8.5, Brigit Zilwaukee, Yolanda Zilwaukee, Seb26, Jamelan, Maelgwnbot, WimdeValk, Wolf of the Steppes, Doubtentry, Icharus Ixion,
DragonBot, Hans Adler, Buchanans Navy Sec, Overstay, Marsboat, Viva La Information Revolution!, Autocratic Uzbek, Poke Salat
Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Chester County Dude, Southeast Penna Poppa, Delaware Valley
Girl, AnomieBOT, Samppi111, Omnipaedista, Cnwilliams, Tijfo098, Frietjes, BG19bot, AK456, Kephir and Anonymous: 9
Boolean-valued model Source: https://en.wikipedia.org/wiki/Boolean-valued_model?oldid=782969282 Contributors: Michael Hardy,
TakuyaMurata, Charles Matthews, Tea2min, Giftlite, Marcos, Ryan Reich, BD2412, R.e.b., Mathbot, Trovatore, SmackBot, Mhss,
Ligulembot, Mets501, Zero sharp, CBM, Gregbard, Cydebot, Newbyguesses, Alexey Muranov, Addbot, Lightbot, Citation bot, DrilBot,
Kiefer.Wolfowitz, Dewritech, Radegast, Magic links bot and Anonymous: 7
Booleo Source: https://en.wikipedia.org/wiki/Booleo?oldid=794759598 Contributors: Furrykef, 2005, Mindmatrix, Gregbard, ImageR-
emovalBot, Mild Bill Hiccup, Yobot, AnomieBOT, Nemesis63, LilHelpa, Jesse V., Wcherowi, Jranbrandt, Zeke, the Mad Horrorist,
Ducknish, MrNiceGuy1113, Shevek1981 and Anonymous: 2
Canonical normal form Source: https://en.wikipedia.org/wiki/Canonical_normal_form?oldid=794763438 Contributors: Michael Hardy,
Ixfd64, Charles Matthews, Dysprosia, Sanders muc, Giftlite, Gil Dawson, Macrakis, Mako098765, Discospinster, ZeroOne, MoraSique,
Bookandcoee, Bkkbrad, Joeythehobo, Gurch, Fresheneesz, Bgwhite, YurikBot, Wavelength, Trovatore, Modify, SDS, SmackBot, Mhss,
MK8, Cybercobra, Jon Awbrey, Wvbailey, Freewol, Eric Le Bigot, Myasuda, Odwl, Pasc06, MER-C, Phosphoricx, Henriknordmark,
Jwh335, Gmoose1, AlnoktaBOT, AlleborgoBot, WereSpielChequers, Gregie156, WheezePuppet, WurmWoode, Sarbruis, Mhayeshk,
Hans Adler, Addbot, Douglaslyon, SpBot, Mess, Linket, Wrelwser43, Bluere 000, Sumurai8, Xqbot, Hughbs, Zvn, Reach Out to the
Truth, Semmendinger, Iamnitin, Tijfo098, ClueBot NG, Matthiaspaul, BG19bot, ChrisGualtieri, Enterprisey, NottNott, Sammitysam
and Anonymous: 56
Cantor algebra Source: https://en.wikipedia.org/wiki/Cantor_algebra?oldid=747432762 Contributors: R.e.b. and Bender the Bot
Chaff algorithm Source: https://en.wikipedia.org/wiki/Chaff_algorithm?oldid=792773626 Contributors: Stephan Schulz, McCart42,
Andreas Kaufmann, Tizio, Salix alba, NavarroJ, Banes, Jpbowen, SmackBot, Bsilverthorn, Mets501, CBM, Pgr94, Cydebot, Alaibot,
Widefox, Hermel, TAnthony, David Eppstein, Fraulein451 and Anonymous: 4
Cohen algebra Source: https://en.wikipedia.org/wiki/Cohen_algebra?oldid=683514051 Contributors: Michael Hardy, R.e.b. and David
Eppstein
Collapsing algebra Source: https://en.wikipedia.org/wiki/Collapsing_algebra?oldid=615789032 Contributors: R.e.b., David Eppstein
and Deltahedron
Complete Boolean algebra Source: https://en.wikipedia.org/wiki/Complete_Boolean_algebra?oldid=712984941 Contributors: Michael
Hardy, Silversh, Charles Matthews, Giftlite, AshtonBenson, Jemiller226, R.e.b., Mathbot, Scythe33, Shanel, Trovatore, Closedmouth,
SmackBot, Melchoir, Mhss, Mets501, Zero sharp, Vaughan Pratt, CBM, Cydebot, Headbomb, Noobeditor, Tim Ayeles, Hans Adler, Ad-
dbot, Angelobear, Citation bot, VictorPorton, Qm2008q, Citation bot 1, Dcirovic, Helpful Pixie Bot, BG19bot, K9re11 and Anonymous:
7
Consensus theorem Source: https://en.wikipedia.org/wiki/Consensus_theorem?oldid=783547962 Contributors: AugPi, Macrakis, Rich
Farmbrough, Trovatore, Firetrap9254, Gregbard, Sabar, Kruckenberg.1, Magioladitis, Success dreamer89, Wiae, AlleborgoBot, Niceguyedc,
Addbot, Yobot, Pcap, Erik9bot, Merlion444, EmausBot, Kenathte, ZroBot, Wcherowi, Matthiaspaul, Solomon7968, Dexbot, Darcourse,
Hiswoundedwarriors2, Deacon Vorbis, Magic links bot and Anonymous: 9
Correlation immunity Source: https://en.wikipedia.org/wiki/Correlation_immunity?oldid=783587365 Contributors: Michael Hardy,
Apokrif, Ner102, Rjwilmsi, Intgr, Pascal.Tesson, Thijs!bot, Magioladitis, Addbot, DOI bot, Yobot, Monkbot, Cryptowarrior, Magic
links bot and Anonymous: 4
Davis–Putnam algorithm Source: https://en.wikipedia.org/wiki/Davis%E2%80%93Putnam_algorithm?oldid=776063534 Contributors:
Michael Hardy, Silversh, Stephan Schulz, Gdm, Ary29, McCart42, Andreas Kaufmann, C S, Fawcett5, Iannigb, Rgrig, Linas, Tizio,
Mathbot, Algebraist, Jpbowen, SmackBot, Acipsen, Freak42, Jon Awbrey, CRGreathouse, CBM, Pgr94, Myasuda, Simeon, Gregbard,
Cydebot, Alaibot, Liquid-aim-bot, Salgueiro~enwiki, Magioladitis, David Eppstein, R'n'B, Botx, Fuenfundachtzig, AlleborgoBot, Hans
Adler, Addbot, DOI bot, Luckas-bot, Yobot, DemocraticLuntz, Omnipaedista, Citation bot 1, Trappist the monk, RjwilmsiBot, Mo ainm,
Tijfo098, Helpful Pixie Bot, Jochen Burghardt, MostlyListening, Ugog Nizdast, Omg panda bear, Monkbot, Starswager18, Dorybadboy
and Anonymous: 9
De Morgan's laws Source: https://en.wikipedia.org/wiki/De_Morgan%27s_laws?oldid=798093722 Contributors: The Anome, Tarquin,
Jeronimo, Mudlock, Michael Hardy, TakuyaMurata, Ihcoyc, Ijon, AugPi, DesertSteve, Charles Matthews, Dcoetzee, Dino, Choster, Dys-
prosia, Xiaodai~enwiki, Hyacinth, David Shay, SirPeebles, Fredrik, Dor, Hadal, Giftlite, Starblue, DanielZM, Guppynsoup, Smimram,
Bender235, ESkog, Chalst, Art LaPella, EmilJ, Scrutcheld, Linj, Alphax, Boredzo, Larryv, Jumbuck, Smylers, Oleg Alexandrov, Linas,
Mindmatrix, Bkkbrad, Btyner, Graham87, Sj, Miserlou, The wub, Marozols, Mathbot, Subtractive, DVdm, YurikBot, Wavelength,
RobotE, Hairy Dude, Michael Slone, BMAH07, Cori.schlegel, Saric, Cdiggins, Lt-wiki-bot, Rodrigoq~enwiki, SmackBot, RDBury,
Gilliam, MooMan1, Mhss, JRSP, DHN-bot~enwiki, Ebertek, DHeyward, Coolv, Cybercobra, Jon Awbrey, Vina-iwbot~enwiki, Petrejo,
Gobonobo, Darktemplar, 16@r, Loadmaster, Drae, MTSbot~enwiki, Adambiswanger1, Nutster, JForget, Gregbard, Kanags, Thijs!bot,
Epbr123, Jojan, Helgus, Futurebird, AntiVandalBot, Widefox, Hannes Eder, MikeLynch, JAnDbot, Jqavins, Nitku, Stdazi, Gwern,
General Jazza, Snood1205, R'n'B, Bongomatic, Fruits Monster, Javawizard, Kratos 84, Policron, TWiStErRob, VolkovBot, TXiKiBoT,
Oshwah, Drake Redcrest, Ttennebkram, Epgui, Smoseson, SieBot, Squelle, Fratrep, Melcombe, Into The Fray, Mx. Granger, Clue-
Bot, B1atv, Mild Bill Hiccup, Cholmeister, PixelBot, Alejandrocaro35, Hans Adler, Cldoyle, Rror, Alexius08, Addbot, Mitch feaster,
Tide rolls, Luckas-bot, Yobot, Linket, KamikazeBot, Eric-Wester, AnomieBOT, Joule36e5, Materialscientist, DannyAsher, Obersach-
sebot, Xqbot, Capricorn42, Boongie, Action ben, JascalX, Omnipaedista, Jsorr, Mfwitten, Rapsar, Stpasha, RBarryYoung, DixonDBot,
Teknad, EmausBot, WikitanvirBot, Mbonet, Vernonmorris1, Donner60, Chewings72, Davikrehalt, Llightex, ClueBot NG, Wcherowi,
BarrelProof, Benjgil, Widr, Helpful Pixie Bot, David815, Sylvier11, Waleed.598, ChromaNebula, Jochen Burghardt, Epicgenius, Blue-
mathman, G S Palmer, 7Sidz, Idonei, Scotus12, Loraof, LatinAddict, Danlarteygh, Luis150902, Robert S. Barlow, Gomika, Bender the
Bot, Wanliusa, Deacon Vorbis, Magic links bot and Anonymous: 169
Derivative algebra (abstract algebra) Source: https://en.wikipedia.org/wiki/Derivative_algebra_(abstract_algebra)?oldid=628688481
Contributors: Giftlite, EmilJ, Oleg Alexandrov, Trovatore, Mhss, Bluebot, Mets501, CBM, Davyzhu, Addbot, Unara, Brad7777 and
Anonymous: 4
DiVincenzo's criteria Source: https://en.wikipedia.org/wiki/DiVincenzo%27s_criteria?oldid=797507338 Contributors: Rjwilmsi, Magi-
oladitis, BG19bot, GrammarFascist, BattyBot, Mhhossein, GeoreyT2000, Reetssydney, QI Explorations 2016, KolbertBot and Anony-
mous: 2
Evasive Boolean function Source: https://en.wikipedia.org/wiki/Evasive_Boolean_function?oldid=782036740 Contributors: Michael
Hardy, David Eppstein, Mild Bill Hiccup, Watchduck, Certes, Addbot, , Yobot, AnomieBOT, MuedThud, Xnn, Sivan.rosenfeld
and Dewritech
Field of sets Source: https://en.wikipedia.org/wiki/Field_of_sets?oldid=798201434 Contributors: Charles Matthews, David Shay, Tea2min,
Giftlite, William Elliot, Rich Farmbrough, Paul August, Touriste, DaveGorman, Kuratowskis Ghost, Bart133, Oleg Alexandrov, Salix
alba, YurikBot, Trovatore, Mike Dillon, Arthur Rubin, That Guy, From That Show!, SmackBot, Mhss, Gala.martin, Stotr~enwiki,
Marek69, Mathematrucker, R'n'B, Lamro, BotMultichill, VVVBot, Hans Adler, Addbot, DaughterofSun, Jarble, AnomieBOT, Cita-
tion bot, Kiefer.Wolfowitz, Yahia.barie, EmausBot, Tijfo098, ClueBot NG, MerlIwBot, BattyBot, Deltahedron, Mohammad Abubakar
and Anonymous: 14
Formula game Source: https://en.wikipedia.org/wiki/Formula_game?oldid=662182480 Contributors: Michael Hardy, Bearcat, Alai,
BD2412, ForgeGod, Bluebot, Gregbard, Complex (de) and Deutschgirl
Free Boolean algebra Source: https://en.wikipedia.org/wiki/Free_Boolean_algebra?oldid=784766886 Contributors: Zundark, Chris-
martin, Charles Matthews, CSTAR, Chalst, Oleg Alexandrov, BD2412, R.e.b., Mathbot, Trovatore, Arthur Rubin, SmackBot, Mhss,
Vaughan Pratt, CBM, Gregbard, R'n'B, Output~enwiki, Watchduck, Addbot, Daniel Brown, AnomieBOT, LilHelpa, Jiri 1984, Helpful
Pixie Bot, Magic links bot and Anonymous: 6
Functional completeness Source: https://en.wikipedia.org/wiki/Functional_completeness?oldid=795193008 Contributors: Slrubenstein,
Michael Hardy, Paul Murray, Ancheta Wis, Kaldari, Guppynsoup, EmilJ, Nortexoid, Domster, CBright, LOL, Paxsimius, Qwertyus,
Kbdank71, MarSch, Nihiltres, Jameshsher, R.e.s., Cedar101, RichF, SmackBot, InverseHypercube, CBM, Gregbard, Cydebot, Krauss,
Swpb, Sergey Marchenko, Joshua Issac, FMasic, Saralee Arrowood Viognier, Francvs, Hans Adler, Cnoguera, Dsimic, Addbot, Yobot,
AnomieBOT, TechBot, Infvwl, Citation bot 1, Abazgiri, Dixtosa, ZroBot, Tijfo098, ClueBot NG, Helpful Pixie Bot, BG19bot, Wck000
and Anonymous: 29
George Boole Source: https://en.wikipedia.org/wiki/George_Boole?oldid=799946153 Contributors: Mav, Tarquin, Deb, Karen John-
son, William Avery, Heron, Hirzel, Hephaestos, Michael Hardy, Dcljr, Ellywa, Ahoerstemeier, Stan Shebs, Poor Yorick, BRG, Charles
Matthews, RickK, Reddi, Doradus, Markhurd, Hyacinth, Grendelkhan, Samsara, Proteus, Lumos3, Dimadick, Frisket, Robbot, Jaredwf,
Fredrik, Altenmann, Romanm, Smallweed, Pingveno, Blainster, Wereon, Alan Liefting, Snobot, Ancheta Wis, Giftlite, Inter, Tom har-
rison, Peruvianllama, Jason Quinn, Zoney, Djegan, Isidore, Gadum, Antandrus, PeterMKehoe, DragonySixtyseven, Pmanderson,
Icairns, Almit39, Zfr, Jackiespeel, TiMike, TonyW, Babelsch, Lucidish, D6, Discospinster, Zaheen, Xezbeth, Bender235, Djordjes,
Elwikipedista~enwiki, Kaszeta, El C, Rgdboer, Chalst, Kwamikagami, Mwanner, Kyz, Bobo192, Ruszewski, Smalljim, Bollar, Roy da
Vinci, Jumbuck, Arthena, Andrew Gray, ABCD, Orelstrigo, Wtmitchell, Notjim, Alai, Umapathy, Oleg Alexandrov, Nuker~enwiki,
Woohookitty, FeanorStar7, MattGiuca, Scjessey, Mandarax, RichardWeiss, Graham87, Lastorset, BD2412, Ketiltrout, Rjwilmsi, Crazy-
nas, The wub, MarnetteD, Yamamoto Ichiro, FlaBot, Emarsee, RexNL, Gurch, Goeagles4321, Wgfcrafty, Sodin, Introvert, Chobot,
Jaraalbe, Guliolopez, Peterl, YurikBot, Hairy Dude, RussBot, SpuriousQ, CambridgeBayWeather, Rsrikanth05, TheGrappler, Wiki alf,
Trovatore, Bayle Shanks, Banes, Samir, Tomisti, Nikkimaria, CWenger, ArielGold, Caballero1967, Katieh5584, JDspeeder1, Finell,
Tinlv7, SmackBot, Derek Andrews, Incnis Mrsi, Hydrogen Iodide, C.Fred, Jagged 85, Renesis, Eskimbot, HalfShadow, Gilliam, Slaniel,
Skizzik, Irlchrism, Bluebot, Keegan, Da nuke, Djln, MalafayaBot, DHN-bot~enwiki, Can't sleep, clown will eat me, Tamfang, DRahier,
Ww2censor, Mhym, Addshore, SundarBot, Fuhghettaboutit, Ghiraddje, Studentmrb, Jon Awbrey, Ske2, Bejnar, Kukini, Ged UK, Ohcon-
fucius, Lilhinx, SashatoBot, EDUCA33E, Breno, AppaBalwant, IronGargoyle, Ckatz, A. Parrot, Grumpyyoungman01, Timmy1, Naa-
man Brown, Ojan, Gernch, Tawkerbot2, Amniarix, Xcentaur, Tanthalas39, Ale jrb, CBM, Mr. Science, Fordmadoxfraud, Gregbard,
Logicus, Doctormatt, Cydebot, Grahamec, JFreeman, ST47, ArcherOne, DumbBOT, Nabokov, Alaibot, Wdspann, Malleus Fatuorum,
Thijs!bot, Jan Blanicky, Brainboy109, Tapir Terric, Adw2000, Pfranson, Escarbot, AntiVandalBot, RobotG, Seaphoto, Deective,
MER-C, Matthew Fennell, Db099221, Alpinu, TAnthony, Cgilmer, .anacondabot, Acroterion, Magioladitis, Bongwarrior, VoABot
II, Dr.h.friedman, Rivertorch, Waacstats, Jim Douglas, Illuminismus, SandStone, Animum, David Eppstein, Spellmaster, Edward321,
DGG, MartinBot, Genghiskhanviet, Rettetast, Keith D, R'n'B, Shellwood, J.delanoy, Skeptic2, Sageofwisdom, AltiusBimm, Alphapeta,
Chiswick Chap, JonMcLoone, KylieTastic, WJBscribe, Mxmsj, Kolja21, Inwind, CA387, Squids and Chips, Mateck, Idioma-bot, Lights,
VolkovBot, ABF, Pleasantville, Je G., Ryan032, Philip Trueman, Martinevans123, TXiKiBoT, Oshwah, Dwight666, A4bot, Hqb, Mi-
randa, GcSwRhIc, Akramm1, Vanished user ikijeirw34iuaeolaseric, Anna Lincoln, Ontoraul, The Tetrast, TedColes, BotKung, Dun-
can.Hull, Softlavender, Sylent, Brianga, Logan, EmxBot, Steven Weston, BlueBart, Cj1340, Newbyguesses, Cwkmail, Xenophon777,
Anglicanus, Monegasque, Meigan Way, Lightmouse, OKBot, Kumioko (renamed), Msrasnw, Adam Cuerden, Denisarona, Randy Kryn,
Heureka!, Loren.wilton, Martarius, Sfan00 IMG, ClueBot, Fyyer, Professorial, Gylix, Supertouch, Arakunem, CounterVandalismBot, Ot-
tawahitech, Excirial, Ketchup1147, El bot de la dieta, Thingg, Tvwatch~enwiki, LincsPaul, XLinkBot, RogDel, YeIrishJig, Chanakal, Ne-
penthes, Little Mountain 5, Alexius08, Addbot, Wsvlqc, Logicist, Ashishlohorung, Any820, Ironholds, Chimes12, Gzhanstong, LPChang,
Chzz, Favonian, Uscitizenjason, Ehrenkater, BOOLE1847, Zorrobot, Margin1522, Luckas-bot, Yobot, OrgasGirl, The Grumpy Hacker,
PMLawrence, 2008CM, ValBaz, AnomieBOT, DemocraticLuntz, Kingpin13, Materialscientist, Puncakes, Bob Burkhardt, LilHelpa,
Xqbot, Addihockey10, Capricorn42, 4twenty42o, Jmundo, Miym, Omnipaedista, Shubinator, BSTemple, A.amitkumar, GreenC, Ob-
mijtinokcus, FrescoBot, Boldstep, Atlantia, 444x1, Plucas58, Martinvl, At-par, Pmokeefe, Serols, Tyssul, Cnwilliams, 19cass20, Lotje,
Fox Wilson, Abcpathros, LilyKitty, JamAKiska, Sideways713, DARTH SIDIOUS 2, Mandolinface, John of Reading, WikitanvirBot,
IncognitoErgoSum, Mallspeeps, Dcirovic, K6ka, Kkm010, PBS-AWB, Daonguyen95, Knight1993, Luisrock2008, Rcsprinter123, Don-
ner60, Chris857, Peter Karlsen, TYelliot, Petrb, ClueBot NG, Jnorton7558, Derick1259, Goose friend, Yashowardhani, MickyDripping,
Widr, Australopithecus2, Helpful Pixie Bot, Indiangrove, BG19bot, MusikAnimal, Prashantgonarkar, Payppp, Solomon7968, Snow
Blizzard, Lekro, Rjparsons, Toploftical, Ninmacer20, FoCuSandLeArN, Webclient101, Lugia2453, VIAFbot, Grembleben, Jassson-
pet, Jochen Burghardt, Cadillac000, Blankslate8, Nimetapoeg, Red-eyed demon, Tentinator, Eric Corbett, Noyster, OccultZone, Crow,
NABRASA, Hypotune, POLY1956, Melcous, Vieque, BethNaught, Prisencolin, Trax support, Kinetic37, TheWarLizard, Wolverne,
MRD2014, JC713, RyanTQuinn, Slugsayshi, Anonimeco, Lewismason1lm, Mihai.savastre, Eteethan, GB200UCC, DatWillFarrLad,
PennywisePedantry, Sicostel, Tmould1, GeneralizationsAreBad, KasparBot, ProprioMe OW, Feminist, CyberWarfare, MBlaze Light-
ning, Erfasser, CLCStudent, Lockedsmith, Spli Joint Blunt, Dusti1000, Wobwob8888, Dicldicldicl, Chrissymad, Highly Ridiculous,
Pictomania, Doyen786, Bender the Bot, KAP03, Suede Cat, Verideous, Prasanthk18 and Anonymous: 514
Goodman–Nguyen–van Fraassen algebra Source: https://en.wikipedia.org/wiki/Goodman%E2%80%93Nguyen%E2%80%93van_
Fraassen_algebra?oldid=753394249 Contributors: Michael Hardy, Trovatore, CharlotteWebb, Knorlin, Psinu, Good Olfactory, Yobot,
Worldbruce, DrilBot, Marcocapelle, Brad7777, Mark viking and Anonymous: 1
Implicant Source: https://en.wikipedia.org/wiki/Implicant?oldid=799907636 Contributors: Michael Hardy, Charles Matthews, Jmabel,
Macrakis, McCart42, Svdb, Mailer diablo, Pako, Mathbot, Fresheneesz, YurikBot, Buster79, Pwoestyn, HopeSeekr of xMule, Smack-
Bot, Mhss, Chendy, Jon Awbrey, Courcelles, Nviladkar, Odwl, Sri go, Genuineleather, Squids and Chips, VolkovBot, Ra2007, Addbot,
MrOllie, Materialscientist, Portisere, DrilBot, Fcdesign, Matthiaspaul, Ceklock and Anonymous: 25
Implication graph Source: https://en.wikipedia.org/wiki/Implication_graph?oldid=745821013 Contributors: Altenmann, Vadmium,
PWilkinson, GregorB, CBM, Thisisraja, David Eppstein, DavidCBryant, R0uge, DOI bot, Twri, Dcirovic, ClueBot NG, BG19bot, 0a.io
and Anonymous: 3
Inclusion (Boolean algebra) Source: https://en.wikipedia.org/wiki/Inclusion_(Boolean_algebra)?oldid=567164022 Contributors: Macrakis
Interior algebra Source: https://en.wikipedia.org/wiki/Interior_algebra?oldid=798387132 Contributors: Zundark, Michael Hardy, Charles
Matthews, Hyacinth, Giftlite, Kuratowskis Ghost, Oleg Alexandrov, Linas, Mathbot, Trovatore, SmackBot, Mhss, Bejnar, Mets501,
Gregbard, Gogo Dodo, MetsBot, R'n'B, LokiClock, Aspects, Hans Adler, Jarble, Yobot, Omnipaedista, EmausBot, Dewritech and Anony-
mous: 11
Join (sigma algebra) Source: https://en.wikipedia.org/wiki/Join_(sigma_algebra)?oldid=786309626 Contributors: Tsirel, Linas, Magic
links bot and Anonymous: 1
Karnaugh map Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark, LA2,
PierreAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch, Bog-
dangiusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Tex-
ture, Paul Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Dis-
cospinster, Caesar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phy-
zome, Jumbuck, Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki,
Jake Wartenberg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki,
Bgwhite, YurikBot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohan-
mittal, Cedar101, Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181,
Gilliam, Bluebot, Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6,
SashatoBot, Wvbailey, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar
Kirk, Tkynerd, Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou,
Carrige, R'n'B, Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, Mar-
tinPackerIBM, Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki,
WimdeValk, Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt,
Jmanigold, Tullywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit,
Luckas-bot, Kartano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, Arthur-
Bot, Pnettle, Miym, GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket,
RedBot, The gulyan89, SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rows-
dower, Norlik, Njoutram, Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr,
Danim, Jk2q3jrklse, Spudpuppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Elec-
tricmun11, EuroCarGT, Yaxinr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton,
GreenWeasel11, Loraof, Scipsycho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni
barath, NoahB123, Ngonz424, Arun8277 and Anonymous: 279
Laws of Form Source: https://en.wikipedia.org/wiki/Laws_of_Form?oldid=790730653 Contributors: Zundark, Michael Hardy, Qaz,
Charles Matthews, Timwi, Imc, Blainster, Giftlite, Lupin, Supergee, Sigfpe, Ebear422, Creidieki, Sam, CALR, Rich Farmbrough, Leib-
niz, John Vandenberg, PWilkinson, Arthena, Rodw, Suruena, Bluemoose, Waldir, Rjwilmsi, Salix alba, FayssalF, Chobot, Bgwhite,
Hairy Dude, Cyferx, RussBot, IanManka, Gaius Cornelius, Grafen, Trovatore, Mike Dillon, Reyk, SmackBot, Lavintzin, Scdevine,
AustinKnight, Jpvinall, Commander Keane bot, Chris the speller, Autarch, Concerned cynic, Ernestrome, Tompsci, Jon Awbrey, Robosh,
Mets501, Rschwieb, Nehrams2020, Paul Foxworthy, Philip ea, CBM, Gregbard, Chris83, AndrewHowse, Cydebot, M a s, PamD,
Nick Number, Abracadab, Leolaursen, Magioladitis, Pdturney, Ccrummer, EagleFan, David Eppstein, JaGa, Gwern, R'n'B, Kingding,
N4nojohn, Adavidb, Station1, The Tetrast, Nerketur, Sapphic, Newbyguesses, Paradoctor, Gerold Broser, Randy Kryn, Kai-Hendrik,
Dutton Peabody, Hans Adler, SchreiberBike, Ospix, Palnot, XLinkBot, Addbot, CountryBot, Yobot, Denispir, AnomieBOT, Daniel-
gschwartz, Citation bot, LilHelpa, CXCV, J04n, Omnipaedista, FrescoBot, Citation bot 1, Skyerise, EmausBot, The Nut, RANesbit,
Tijfo098, Wcherowi, NULL, Helpful Pixie Bot, BG19bot, PhnomPencil, CitationCleanerBot, Jochen Burghardt, BruceME, Eyesnore,
Dirkbaecker, GreenC bot, Jmcgnh, Bender the Bot, Deacon Vorbis and Anonymous: 49
List of Boolean algebra topics Source: https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics?oldid=744472575 Contributors:
Michael Hardy, Charles Matthews, Michael Snow, Neilc, ZeroOne, Oleg Alexandrov, FlaBot, Mathbot, Rvireday, Scythe33, YurikBot,
Trovatore, StuRat, Mhss, GBL, Fplay, MichaelBillington, Jon Awbrey, Igor Markov, Syrcatbot, Gregbard, Cydebot, [email protected], The
Transhumanist, Kruckenberg.1, The Tetrast, WimdeValk, Joaopitacosta, Niceguyedc, Hans Adler, Addbot, Verbal, Sz-iwbot, Gamewiz-
ard71, CaroleHenson, Zeke, the Mad Horrorist, Sam Sailor, Liz, Matthew Kastor and Anonymous: 10
Logic alphabet Source: https://en.wikipedia.org/wiki/Logic_alphabet?oldid=787895463 Contributors: Michael Hardy, Topbanana, Giftlite,
Rich Farmbrough, Kenyon, Oleg Alexandrov, BD2412, Cactus.man, Trovatore, EAderhold, Gregbard, PamD, Nick Number, Danger,
Smerdis, R'n'B, Romicron, Algotr, Slysplace, ImageRemovalBot, Alksentrs, Watchduck, ResidueOfDesign, Saeed.Veradi, WilliamB-
Hall, Ettrig, DrilBot, LittleWink, Gamewizard71, Masssly, Bender the Bot and Anonymous: 10
Logic optimization Source: https://en.wikipedia.org/wiki/Logic_optimization?oldid=800573937 Contributors: Zundark, Michael Hardy,
Abdull, Simon Fenney, Diego Moya, Wtshymanski, SmackBot, Sct72, Cydebot, MarshBot, Hitanshu D, NovaSTL, WimdeValk, Dekart,
Delaszk, AnomieBOT, Quebec99, Klbrain, Tijfo098, Matthiaspaul, Masssly, InternetArchiveBot, GreenC bot, PrimeBOT and Anony-
mous: 4
Logic redundancy Source: https://en.wikipedia.org/wiki/Logic_redundancy?oldid=645028636 Contributors: Michael Hardy, Nurg, Giftlite,
Cburnett, SmackBot, Chris the speller, Mart22n, WimdeValk, TeamX, The Thing That Should Not Be, Addbot, Yobot and Anonymous:
5
Logical matrix Source: https://en.wikipedia.org/wiki/Logical_matrix?oldid=795046783 Contributors: AugPi, Carlossuarez46, Paul Au-
gust, El C, Oleg Alexandrov, Jerey O. Gustafson, BD2412, RxS, Rjwilmsi, DoubleBlue, Nihiltres, TeaDrinker, BOT-Superzerocool,
Wknight94, Closedmouth, SmackBot, InverseHypercube, C.Fred, Aksi great, Octahedron80, MaxSem, Jon Awbrey, Lambiam, JzG,
Slakr, Mets501, Happy-melon, CBM, , Jheiv, Hut 8.5, Brusegadi, Catgut, David Eppstein, Brigit Zilwaukee,
Yolanda Zilwaukee, Policron, Cerberus0, TXiKiBoT, Seb26, ClueBot, Cli, Blanchardb, RABBU, REBBU, DEBBU, DABBU, BAB-
BU, RABBU, Wolf of the Steppes, REBBU, Doubtentry, DEBBU, Education Is The Basis Of Law And Order, -Midorihana-,
Bare In Mind, Preveiling Opinion Of Dominant Opinion Group, Buchanans Navy Sec, Overstay, Marsboat, Unco Guid, Poke Salat An-
nie, Flower Mound Belle, Mrs. Lovetts Meat Puppets, Addbot, Breggen, Floquenbeam, Erik9bot, FrescoBot, Kimmy007, EmausBot,
Quondum, Tijfo098, Masssly, Deyvid Setti, Helpful Pixie Bot, Jochen Burghardt, Suelru, Zeiimer, Pyrrhonist05 and Anonymous: 15
Lupanov representation Source: https://en.wikipedia.org/wiki/Lupanov_representation?oldid=692960339 Contributors: Michael Hardy,
Oleg Alexandrov, Welsh, A3nm, AnomieBOT, Alvin Seville, RobinK, Maalosh and Anonymous: 1
Maharam algebra Source: https://en.wikipedia.org/wiki/Maharam_algebra?oldid=745941092 Contributors: Finlay McWalter, R.e.b.,
David Eppstein, Deltahedron and Anonymous: 1
Majority function Source: https://en.wikipedia.org/wiki/Majority_function?oldid=742283469 Contributors: Tobias Hoevekamp, Jz-
cool, Michael Hardy, Ckape, Robbot, DavidCary, ABCD, Bluebot, Radagast83, Lambiam, J. Finkelstein, Gregbard, Pascal.Tesson, Al-
phachimpbot, Magioladitis, Vanish2, David Eppstein, Ilyaraz, Alexei Kopylov, TFCforever, DOI bot, Balabiot, Legobot, Luckas-bot,
Yobot, Rubinbot, Citation bot, Citation bot 1, , Jesse V., Monkbot, EDickenson and Anonymous: 7
Marquand diagram Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark,
LA2, PierreAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch,
Bogdangiusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Tex-
ture, Paul Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Dis-
cospinster, Caesar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phy-
zome, Jumbuck, Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki,
Jake Wartenberg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki,
Bgwhite, YurikBot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohan-
mittal, Cedar101, Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181,
Gilliam, Bluebot, Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6,
SashatoBot, Wvbailey, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar
Kirk, Tkynerd, Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou,
Carrige, R'n'B, Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, Mar-
tinPackerIBM, Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki,
WimdeValk, Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt,
Jmanigold, Tullywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit,
Luckas-bot, Kartano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, Arthur-
Bot, Pnettle, Miym, GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket,
RedBot, The gulyan89, SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rows-
dower, Norlik, Njoutram, Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr,
Danim, Jk2q3jrklse, Spudpuppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Elec-
tricmun11, EuroCarGT, Yaxinr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton,
GreenWeasel11, Loraof, Scipsycho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni
barath, NoahB123, Ngonz424, Arun8277 and Anonymous: 279
Modal algebra Source: https://en.wikipedia.org/wiki/Modal_algebra?oldid=787289589 Contributors: EmilJ, Mhss, Addbot and Prime-
BOT
Monadic Boolean algebra Source: https://en.wikipedia.org/wiki/Monadic_Boolean_algebra?oldid=623204166 Contributors: Michael
Hardy, Charles Matthews, Kuratowski's Ghost, Oleg Alexandrov, Trovatore, Mhss, Gregbard, R'n'B, Safek, Hans Adler, Alexey Muranov,
Addbot, Tijfo098, JMP EAX and Anonymous: 4
Parity function Source: https://en.wikipedia.org/wiki/Parity_function?oldid=786594379 Contributors: Michael Hardy, Giftlite, Eyre-
land, Qwertyus, Ott2, Cedar101, Ylloh, CBM, R'n'B, The enemies of god, M gol, Addbot, Luckas-bot, Twri, Amilevy, Helpful Pixie Bot,
Drqwertysilence, Bender the Bot and Anonymous: 1
Petrick's method Source: https://en.wikipedia.org/wiki/Petrick%27s_method?oldid=776095422 Contributors: Michael Hardy, Willem,
Kulp, Paul August, Oleg Alexandrov, Mindmatrix, Tizio, Fresheneesz, Trovatore, SmackBot, Andrei Stroe, Jay Uv., MystBot, Addbot,
Luckas-bot, Timeroot, ArthurBot, Harry0xBd, Njoutram, Matthiaspaul, Wolfmanx122 and Anonymous: 8
Poretsky's law of forms Source: https://en.wikipedia.org/wiki/Poretsky%27s_law_of_forms?oldid=762099079 Contributors: Bearcat,
Macrakis, Malcolma and WikiWhatthe
Product term Source: https://en.wikipedia.org/wiki/Product_term?oldid=786524106 Contributors: Alai, Oleg Alexandrov, Trovatore,
Mets501, CBM, Yobot, Materialscientist, Erik9bot, ClueBot NG, Brirush and Anonymous: 3
Propositional calculus Source: https://en.wikipedia.org/wiki/Propositional_calculus?oldid=798963524 Contributors: The Anome, Tar-
quin, Jan Hidders, Tzartzam, Michael Hardy, JakeVortex, Kku, Justin Johnson, Minesweeper, Looxix~enwiki, AugPi, Rossami, Ev-
ercat, BAxelrod, Charles Matthews, Dysprosia, Hyacinth, Ed g2s, UninvitedCompany, BobDrzyzgula, Robbot, Benwing, MathMartin,
Rorro, GreatWhiteNortherner, Marc Venot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Gubbubu, Gadum, LiDaobing, Beland, Grauw,
Almit39, Kutulu, Creidieki, Urhixidur, PhotoBox, EricBright, Extrapiramidale, Rich Farmbrough, Guanabot, FranksValli, Paul August,
Glenn Willen, Elwikipedista~enwiki, Tompw, Chalst, BrokenSegue, Cmdrjameson, Nortexoid, Varuna, Red Winged Duck, ABCD, Xee,
Nightstallion, Bookandcoee, Oleg Alexandrov, Japanese Searobin, Joriki, Linas, Mindmatrix, Ruud Koot, Trevor Andersen, Waldir,
Graham87, BD2412, Qwertyus, Kbdank71, Porcher, Koavf, PlatypeanArchcow, Margosbot~enwiki, Kri, No Swan So Fine, Roboto
de Ajvol, Hairy Dude, Russell C. Sibley, Gaius Cornelius, Ihope127, Rick Norwood, Trovatore, TechnoGuyRob, Jpbowen, Cruise,
Voidxor, Jerome Kelly, Arthur Rubin, Cedar101, Reyk, Teply, GrinBot~enwiki, SmackBot, Michael Meyling, Imz, Incnis Mrsi, Srnec,
Mhss, Bluebot, Cybercobra, Clean Copy, Jon Awbrey, Andeggs, Ohconfucius, Lambiam, Wvbailey, Scientizzle, Loadmaster, Mets501,
Pejman47, JulianMendez, Adriatikus, Zero sharp, JRSpriggs, George100, Harold f, 8754865, Vaughan Pratt, CBM, ShelfSkewed,
Sdorrance, Gregbard, Cydebot, Julian Mendez, Taneli HUUSKONEN, EdJohnston, Applemeister, GeePriest, Salgueiro~enwiki, JAnD-
bot, Thenub314, Hut 8.5, Magioladitis, Paroswiki, MetsBot, JJ Harrison, Epsilon0, Santiago Saint James, Anaxial, R'n'B, N4nojohn,
Wideshanks, TomS TDotO, Created Equal, The One I Love, Our Fathers, STBotD, Mistercupcake, VolkovBot, JohnBlackburne, TXiKi-
BoT, Lynxmb, The Tetrast, Philogo, Wiae, General Reader, Jmath666, VanishedUserABC, Sapphic, Newbyguesses, SieBot, Iamthedeus,
, Jimmycleveland, OKBot, Svick, Huku-chan, Francvs, ClueBot, Unica111, Wysprgr2005, Garyzx, Niceguyedc,
Thinker1221, Shivakumar2009, Estirabot, Alejandrocaro35, Reuben.cornel, Hans Adler, MilesAgain, Djk3, Lightbearer, Addbot, Rdan-
neskjold, Legobot, Yobot, Tannkrem, Stefan.vatev, Jean Santeuil, AnomieBOT, LlywelynII, Materialscientist, Ayda D, Doezxcty, Cwchng,
Omnipaedista, SassoBot, January2009, Thehelpfulbot, FrescoBot, LucienBOT, Xenfreak, HRoestBot, Dinamik-bot, Steve03Mills, Emaus-
Bot, John of Reading, 478jjjz, Chharvey, Chewings72, Bomazi, Tijfo098, ClueBot NG, Golden herring, MrKoplin, Frietjes, Helpful
Pixie Bot, BG19bot, Llandale, Brad7777, Wolfmanx122, Hanlon1755, Khazar2, Jochen Burghardt, Mark viking, Mrellisdee, Christian
Nassif-Haynes, Matthew Kastor, Marco volpe, Jwinder47, Mario Casteln Castro, Eavestn, SiriusGR, CLCStudent, Quiddital, DIYeditor,
CaryaSun, BobU, Nicolai uy and Anonymous: 166
Propositional directed acyclic graph Source: https://en.wikipedia.org/wiki/Propositional_directed_acyclic_graph?oldid=561376261
Contributors: Selket, Andreas Kaufmann, BD2412, Brighterorange, Trovatore, Bluebot, Nbarth, CmdrObot, RUN, MetsBot, Aagtbdfoua,
DRap, Mvinyals, Dennis714, Tijfo098 and Anonymous: 2
Propositional formula Source: https://en.wikipedia.org/wiki/Propositional_formula?oldid=799359014 Contributors: Michael Hardy,
Hyacinth, Timrollpickering, Tea2min, Filemon, Giftlite, Golbez, PWilkinson, Klparrot, Bookandcoee, Woohookitty, Linas, Mindma-
trix, Tabletop, BD2412, Kbdank71, Rjwilmsi, Bgwhite, YurikBot, Hairy Dude, RussBot, Open2universe, Cedar101, SmackBot, Hmains,
Chris the speller, Bluebot, Colonies Chris, Tsca.bot, Jon Awbrey, Muhammad Hamza, Lambiam, Wvbailey, Wizard191, Iridescent,
Happy-melon, ChrisCork, CBM, Gregbard, Cydebot, Julian Mendez, Headbomb, Nick Number, Arch dude, Djihed, R'n'B, Raise ex-
ception, Wiae, Billinghurst, Spinningspark, WRK, Maelgwnbot, Jaded-view, Mild Bill Hiccup, Neuralwarp, Addbot, Yobot, Adelpine,
AnomieBOT, Neurolysis, LilHelpa, The Evil IP address, Kwiki, John of Reading, Klbrain, ClueBot NG, Kevin Gorman, Helpful Pixie
Bot, BG19bot, PhnomPencil, Wolfmanx122, Dexbot, Jochen Burghardt, Mark viking, Knife-in-the-drawer, JJMC89 and Anonymous:
21
Quine–McCluskey algorithm Source: https://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey_algorithm?oldid=793370921 Con-
tributors: Michael Hardy, TakuyaMurata, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Roachmeister, Noeckel, Lithiumhead, Giftlite,
Gzornenplatz, Two Bananas, Simoneau, McCart42, Revision17, Ralph Corderoy, Jkl, Bender235, James Foster, Alansohn, RJFJR,
Kobold, Oleg Alexandrov, Woohookitty, Ruud Koot, Mendaliv, Rjwilmsi, Wikibofh, Dar-Ape, Mathbot, Ysangkok, Fresheneesz, Anti-
matter15, CiaPan, Chobot, YurikBot, Pi Delport, Andrew Bunn, Trovatore, Hv, Hgomersall, Cedar101, Gulliveig, Modify, Skryskalla,
Looper5920, Gilliam, Mhss, Durova, Allan McInnes, Cybercobra, Akshaysrinivasan, Jon Awbrey, Romanski, Tlesher, Dfass, Huntscor-
pio, Pqrstuv, Iridescent, Chetvorno, Gregbard, Elanthiel, QuiteUnusual, Salgueiro~enwiki, BBar, Johnbibby, Narfanator, Gwern, Infran-
gible, Jim.henderson, OneWorld22, Potopi, LordAnubisBOT, Andionita, VolkovBot, AlnoktaBOT, Jay Uv., W1k13rh3nry, WimdeValk,
ClueBot, AMCCosta, Dusa.adrian, Ra2007, Addbot, AgadaUrbanit, Luckas-bot, Yobot, Ipatrol, Sz-iwbot, RedLunchBag, Gulyan89,
DixonDBot, EmausBot, John of Reading, Dusadrian, Alessandro.goulartt, Njoutram, Clementina, Matthiaspaul, Ceklock, Citation-
CleanerBot, CARPON, Wh1chwh1tch, HHadavi, BattyBot, MatthewIreland, Cyberbot II, ChrisGualtieri, Snilan, Monkbot, Jakobjb,
LuckyBulldog, Srdrucker, GreenC bot, Zernity and Anonymous: 138
Random algebra Source: https://en.wikipedia.org/wiki/Random_algebra?oldid=747624533 Contributors: R.e.b., Moonraker, Bender
the Bot and Anonymous: 1
Read-once function Source: https://en.wikipedia.org/wiki/Read-once_function?oldid=723729058 Contributors: David Eppstein
Reed–Muller expansion Source: https://en.wikipedia.org/wiki/Reed%E2%80%93Muller_expansion?oldid=794983448 Contributors:
Michael Hardy, AugPi, Macrakis, Macha, DavidCBryant, Cebus, Fyrael, Legobot, Yobot, Jason Recliner, Esq., RobinK, Klbrain, Matthi-
aspaul and Anonymous: 4
Relation algebra Source: https://en.wikipedia.org/wiki/Relation_algebra?oldid=799654562 Contributors: Zundark, Michael Hardy, AugPi,
Charles Matthews, Tea2min, Lethe, Fropuff, Mboverload, D6, Elwikipedista~enwiki, Giraffedata, AshtonBenson, Woohookitty, Paul Car-
penter, BD2412, Rjwilmsi, Koavf, Tillmo, Wavelength, Ott2, Cedar101, Mhss, Concerned cynic, Nbarth, Jon Awbrey, Lambiam, Physis,
Mets501, Vaughan Pratt, CBM, Gregbard, Sam Staton, King Bee, JustAGal, Balder ten Cate, David Eppstein, R'n'B, Leyo, Ramsey2006,
Plasticup, JohnBlackburne, The Tetrast, Linelor, Hans Adler, Addbot, QuadrivialMind, Yobot, AnomieBOT, Nastor, LilHelpa, Xqbot,
Samppi111, Charvest, FrescoBot, Irmy, Sjcjoosten, SchreyP, Seabuoy, BG19bot, CitationCleanerBot, Brad7777, Khazar2, Lerutit, RPI,
JaconaFrere, Narky Blert, SaltHerring, Some1Redirects4You and Anonymous: 41
Residuated Boolean algebra Source: https://en.wikipedia.org/wiki/Residuated_Boolean_algebra?oldid=777417470 Contributors: Tea2min,
Gracefool, PWilkinson, Cedar101, Mhss, Vaughan Pratt, Ctxppc, Addbot, Yobot, Charvest and Anonymous: 2
Robbins algebra Source: https://en.wikipedia.org/wiki/Robbins_algebra?oldid=722337346 Contributors: Tea2min, Giftlite, Nick8325,
Zaslav, John Vandenberg, Qwertyus, Salix alba, Trovatore, Christian75, Jdvelasc, SieBot, Thehotelambush, Addbot, Lightbot, Pcap, Spiros
Bousbouras, Xqbot, Irbisgreif, Afteread, Shishir332, Rcsprinter123 and Anonymous: 14
Sigma-algebra Source: https://en.wikipedia.org/wiki/Sigma-algebra?oldid=800163136 Contributors: AxelBoldt, Zundark, Tarquin, Iwn-
bap, Miguel~enwiki, Michael Hardy, Chinju, Karada, Stevan White, Charles Matthews, Dysprosia, Vrable, AndrewKepert, Fibonacci,
McKay, Robbot, Romanm, Aetheling, Ruakh, Tea2min, Giftlite, Lethe, MathKnight, Mboverload, Gubbubu, Gauss, Barnaby dawson, Vi-
vacissamamente, William Elliot, ArnoldReinhold, Paul August, Bender235, Zaslav, Elwikipedista~enwiki, MisterSheik, EmilJ, SgtThroat,
Jung dalglish, Tsirel, Passw0rd, Msh210, Jheald, Cmapm, Ultramarine, Oleg Alexandrov, Linas, Graham87, Salix alba, FlaBot, Mathbot,
Jrtayloriv, Chobot, Jayme, YurikBot, Lucinos~enwiki, Archelon, Trovatore, Mindthief, Solstag, Crasshopper, Dinno~enwiki, Nielses,
SmackBot, Melchoir, JanusDC, Object01, Dan Hoey, MalafayaBot, RayAYang, Nbarth, DHN-bot~enwiki, Javalenok, Gala.martin,
Henning Makholm, Lambiam, Dbtfz, Jim.belk, Mets501, Stotr~enwiki, Madmath789, CRGreathouse, CBM, David Cooke, Mct mht,
Blaisorblade, Xantharius, , Thijs!bot, Marek69, Escarbot, Keith111, Forgetfulfunctor, Quentar~enwiki, MSBOT, Magioladitis, Ro-
gierBrussee, Paartha, Joeabauer, Hippasus, Policron, Cerberus0, Digby Tantrum, Jmath666, Alfredo J. Herrera Lago, StevenJohnston,
Ocsenave, Tcamps42, SieBot, Melcombe, MicoFils~enwiki, Andrewbt, The Thing That Should Not Be, Mild Bill Hiccup, BearMa-
chine, 1ForTheMoney, DumZiBoT, Addbot, Luckas-bot, Yobot, Li3939108, Amirmath, Godvjrt, Xqbot, RibotBOT, Charvest, Fres-
coBot, BrideOfKripkenstein, AstaBOTh15, Stpasha, RedBot, Soumyakundu, Wikiborg4711, Stj6, TjBot, Max139, KHamsun, Ra5749,
Mikhail Ryazanov, ClueBot NG, Marcocapelle, Thegamer 298, QuarkyPi, Brad7777, AntanO, Shalapaws, Crawfoal, Dexbot, Y256,
Jochen Burghardt, A koyee314, Limit-theorem, Mark viking, NumSPDE, Moyaccercchi, Lewis Goudy, Killaman slaughtermaster, Daren-
Cline, 5D2262B74, Byyourleavesir, Amateur bert, KolbertBot and Anonymous: 99
Stone functor Source: https://en.wikipedia.org/wiki/Stone_functor?oldid=786531729 Contributors: Bearcat, Porton, John Z, TenPound-
Hammer, CBM, Shawn in Montreal, Blanchardb, AtheWeatherman, IkamusumeFan and Anonymous: 5
Stone space Source: https://en.wikipedia.org/wiki/Stone_space?oldid=770379543 Contributors: Michael Hardy, Charles Matthews, Markus
Krötzsch, David Eppstein, GeoffreyT2000 and Anonymous: 1
Stone's representation theorem for Boolean algebras Source: https://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_
Boolean_algebras?oldid=800707600 Contributors: Zundark, Michael Hardy, Chinju, AugPi, Smack, Naddy, Tea2min, Giftlite, Markus
Krötzsch, Fropuff, Vivacissamamente, Pjacobi, Chalst, Porton, Blotwell, Tsirel, Kuratowski's Ghost, Aleph0~enwiki, Linas, R.e.b., Yurik-
Bot, Trovatore, SmackBot, BeteNoir, Sharpcomputing, Mhss, Rschwieb, CBM, Christian75, Thijs!bot, JanCK, David Eppstein, Falcor84,
R'n'B, StevenJohnston, YonaBot, Alexbot, Beroal, EEng, Addbot, Luckas-bot, Yobot, GrouchoBot, Tkuvho, Slawekb, Nosuchforever,
Dexbot, TheCoeeAddict, Larry Eaglet, KolbertBot and Anonymous: 17
Suslin algebra Source: https://en.wikipedia.org/wiki/Suslin_algebra?oldid=616031609 Contributors: R.e.b. and David Eppstein
Symmetric Boolean function Source: https://en.wikipedia.org/wiki/Symmetric_Boolean_function?oldid=742464417 Contributors: Michael
Hardy, Watchduck, Addbot, Luckas-bot, Twri, HamburgerRadio, DixonDBot, ZroBot, Mark viking and Anonymous: 1
True quantified Boolean formula Source: https://en.wikipedia.org/wiki/True_quantified_Boolean_formula?oldid=741858594 Con-
tributors: Edward, Michael Hardy, Charles Matthews, Neilc, Creidieki, EmilJ, Spug, Giraffedata, Kusma, Twin Bird, Cedar101, Smack-
Bot, Karmastan, ForgeGod, Radagast83, Lambiam, Drae, Gregbard, Michael Fourman, David Eppstein, TXiKiBoT, Ilia Kr., Mis-
terspock1, Addbot, DOI bot, Citation bot, MauritsBot, Xqbot, Miym, Milimetr88, Citation bot 1, RobinK, RjwilmsiBot, EmausBot,
Dcirovic, KYLEMONGER, ChuispastonBot, Helpful Pixie Bot, Khazar2, Jochen Burghardt and Anonymous: 13
Truth table Source: https://en.wikipedia.org/wiki/Truth_table?oldid=800454606 Contributors: Mav, Bryan Derksen, Tarquin, Larry
Sanger, Webmaestro, Heron, Hephaestos, Stephen pomes, Bdesham, Patrick, Michael Hardy, Wshun, Liftarn, Ixfd64, Justin Johnson,
Delirium, Jimfbleak, AugPi, Andres, DesertSteve, Charles Matthews, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Pakaran, Banno, Rob-
bot, RedWolf, Snobot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Vadmium, Lst27, Antandrus, JimWae, Schnits, Creidieki, Joyous!,
Rich Farmbrough, ArnoldReinhold, Paul August, CanisRufus, Gershwinrb, Robotje, Billymac00, Nortexoid, Photonique, Jonsafari, Mdd,
LutzL, Alansohn, Gary, Noosphere, Cburnett, Crystalllized, Bookandcoee, Oleg Alexandrov, Mindmatrix, Bluemoose, Abd, Graham87,
BD2412, Kbdank71, Xxpor, Rjwilmsi, JVz, Koavf, Tangotango, Bubba73, FlaBot, Maustrauser, Fresheneesz, Aeroknight, Chobot,
DVdm, Bgwhite, YurikBot, Wavelength, SpuriousQ, Trovatore, Sir48, Kyle Barbour, Cheese Sandwich, Pooryorick~enwiki, Rofthorax,
Cedar101, CWenger, LeonardoRob0t, Cmglee, KnightRider~enwiki, SmackBot, InverseHypercube, KnowledgeOfSelf, Vilerage, Can-
thusus, The Rhymesmith, Gilliam, Mhss, Gaiacarra, Deli nk, Can't sleep, clown will eat me, Chlewbot, Cybercobra, Uthren, Gschadow,
Jon Awbrey, Antonielly, Nakke, Dr Smith, Parikshit Narkhede, Beetstra, Dicklyon, Mets501, Iridescent, Richardcook, Danlev, CR-
Greathouse, CBM, WeggeBot, Gregbard, Slazenger, Starylon, Cydebot, Flowerpotman, Julian Mendez, Lee, Letranova, Oreo Priest, Anti-
VandalBot, Kitty Davis, Quintote, Vantelimus, K ganju, JAnDbot, Avaya1, Olaf, Holly golightly, Johnbrownsbody, R27smith200245, San-
tiago Saint James, Sevillana~enwiki, CZ Top, Aston Martian, On This Continent, LordAnubisBOT, Bergin, NewEnglandYankee, Policron,
Lights, VolkovBot, The Tetrast, Wiae, AlleborgoBot, Logan, SieBot, Paradoctor, Krawi, Djayjp, Flyer22 Reborn, WimdeValk, Francvs,
Someone the Person, JP.Martin-Flatin, Officer781, ParisianBlade, Hans Adler, XTerminator2000, Wstorr, Vegetator, Aitias, Qwfp,
Staticshakedown, Addbot, Ghettoblaster, AgadaUrbanit, Ehrenkater, Kiril Simeonovski, C933103, Clon89, Luckas-bot, Yobot, Nallim-
bot, Fox89, Materialscientist, Racconish, Quad4rax, Xqbot, Addihockey10, RibotBOT, Jonesey95, Tom.Reding, MastiBot, Fixer88,
TBloemink, Onel5969, Mean as custard, K6ka, ZroBot, Tijfo098, ChuispastonBot, ClueBot NG, Smtchahal, Akuindo, Wcherowi,
Millermk, WikiPuppies, Jk2q3jrklse, Wbm1058, Bmusician, Ceklock, Joydeep, Supernerd11, CitationCleanerBot, Annina.kari, Achal
Singh, Wolfmanx122, La marts boys, JYBot, Darcourse, Seppi333, UY Scuti, Richard Kohar, ChamithN, Swashski, Aichotoitinhyeu97,
KasparBot, Adam9007, NoToleranceForIntolerance, The devil dipak, Deacon Vorbis, United Massachusetts and Anonymous: 296
Two-element Boolean algebra Source: https://en.wikipedia.org/wiki/Two-element_Boolean_algebra?oldid=786518528 Contributors:
Zundark, Michael Hardy, GTBacchus, Julesd, Nurg, Giftlite, Plugwash, Oleg Alexandrov, Linas, Igny, Salix alba, Trovatore, SmackBot,
Incnis Mrsi, Mhss, Nbarth, Nakon, NickPenguin, Lambiam, CmdrObot, CBM, Gregbard, Pjvpjv, Nick Number, Avaya1, David Epp-
stein, Gwern, R'n'B, WimdeValk, Classicalecon, Ngebendi, Hans Adler, Palnot, Addbot, Luckas-bot, AnomieBOT, FrescoBot, Jordgette,
Gernot.salzer, Tijfo098, Matthiaspaul, Tagremover, MCAllen91, Kephir, Assiliza, Fuebar, Bobanobahoba and Anonymous: 11
Vector logic Source: https://en.wikipedia.org/wiki/Vector_logic?oldid=743447485 Contributors: Michael Hardy, Chris the speller, Mya-
suda, Almadana, Paradoctor, Yobot, FrescoBot, Josve05a, Frietjes, BG19bot, DPL bot and Anonymous: 9
Veitch chart Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark, LA2, Pier-
reAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch, Bogdan-
giusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Texture, Paul
Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Discospinster, Cae-
sar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phyzome, Jumbuck,
Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki, Jake Warten-
berg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki, Bgwhite, Yurik-
Bot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohanmittal, Cedar101,
Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181, Gilliam, Bluebot,
Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6, SashatoBot, Wvbai-
ley, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar Kirk, Tkynerd,
Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou, Carrige, R'n'B,
Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, MartinPackerIBM,
Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki, WimdeValk,
Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt, Jmanigold, Tul-
lywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit, Luckas-bot, Kar-
tano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, ArthurBot, Pnettle, Miym,
GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket, RedBot, The gulyan89,
SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rowsdower, Norlik, Njoutram,
Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr, Danim, Jk2q3jrklse, Spud-
puppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Electricmun11, EuroCarGT, Yax-
inr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton, GreenWeasel11, Loraof, Scipsy-
cho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni barath, NoahB123, Ngonz424,
Arun8277 and Anonymous: 279
Wolfram axiom Source: https://en.wikipedia.org/wiki/Wolfram_axiom?oldid=780584081 Contributors: Michael Hardy, Bearcat, Greg-
bard, Nick Number, Magioladitis, Pleasantville, Addbot, FrescoBot, EmausBot, Bourbaki78, BG19bot, G McGurk and Anonymous:
6
Zhegalkin polynomial Source: https://en.wikipedia.org/wiki/Zhegalkin_polynomial?oldid=776268342 Contributors: Michael Hardy,
Macrakis, Bkkbrad, Rjwilmsi, GBL, Vaughan Pratt, CRGreathouse, Myasuda, Gregbard, Alaibot, Towopedia, Dougher, Jeepday, Hans
Adler, Addbot, DOI bot, Legobot, Luckas-bot, Yobot, Citation bot, Citation bot 1, Klbrain, Matthiaspaul, Jochen Burghardt, Nwezeakunel-
son and Anonymous: 1
88.7.2 Images
File:0001_0001_0001_1110_nonlinearity.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/0001_0001_0001_1110_
nonlinearity.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:2010-05-26_at_18-05-02.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/2010-05-26_at_18-05-02.jpg License:
CC BY 3.0 Contributors: Own work Original artist: Marcovanhogan
File:3_Pottergate_-_geograph.org.uk_-_657140.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/3_Pottergate_-_
geograph.org.uk_-_657140.jpg License: CC BY-SA 2.0 Contributors: From geograph.org.uk Original artist: Richard Croft
File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public do-
main Contributors: Own work based on: Ambox scales.svg Original artist: Dsmurat, penubag
File:Associatividadecat.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Associatividadecat.svg License: Public domain Contributors: This file was derived from Associatividadecat.png
Original artist: Associatividadecat.png: Campani
File:BDD.png Source: https://upload.wikimedia.org/wikipedia/commons/9/91/BDD.png License: CC-BY-SA-3.0 Contributors: Trans-
ferred from en.wikipedia to Commons. Original artist: The original uploader was IMeowbot at English Wikipedia
File:BDD2pdag.png Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/BDD2pdag.png License: CC-BY-SA-3.0 Contrib-
utors: Transferred from en.wikipedia to Commons. Original artist: RUN at English Wikipedia
File:BDD2pdag_simple.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/90/BDD2pdag_simple.svg License: CC-BY-
SA-3.0 Contributors: Self made from BDD2pdag_simple.png (here and on English Wikipedia) Original artist: User:Selket and User:RUN
(original)
File:BDD_Variable_Ordering_Bad.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/BDD_Variable_Ordering_Bad.
svg License: CC-BY-SA-3.0 Contributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool for graph
layout Original artist: Dirk Beyer
File:BDD_Variable_Ordering_Good.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/BDD_Variable_Ordering_
Good.svg License: CC-BY-SA-3.0 Contributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool
for graph layout Original artist: Dirk Beyer
File:BDD_simple.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/14/BDD_simple.svg License: CC-BY-SA-3.0 Con-
tributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool for graph layout Original artist: Dirk
Beyer
File:Bloch_Sphere.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/Bloch_Sphere.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Glosser.ca
File:BoolePlacque.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/BoolePlacque.jpg License: Public domain Con-
tributors: Own work Original artist: Logicus
File:BoolePlaque2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/BoolePlaque2.jpg License: Public domain Con-
tributors: Own work Original artist: Logicus
File:BooleWindow(bottom_third).jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/BooleWindow%28bottom_third%
29.jpg License: Public domain Contributors: Own work Original artist: Logicus
File:Boole_House_Cork.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/fb/Boole_House_Cork.jpg License: CC0 Contribu-
tors:
self-made
Original artist:
SandStone
File:Boolean_functions_like_1000_nonlinearity.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/21/Boolean_functions_
like_1000_nonlinearity.svg License: Public domain Contributors: Own work Original artist: Lipedia
File:Boolean_satisfiability_vs_true_literal_counts.png Source: https://upload.wikimedia.org/wikipedia/commons/4/42/Boolean_satisfiability_
vs_true_literal_counts.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:CardContin.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/CardContin.svg License: Public domain Contrib-
utors: en:Image:CardContin.png Original artist: en:User:Trovatore, recreated by User:Stannered
File:Circuit-minimization.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Circuit-minimization.svg License: CC
BY-SA 3.0 Contributors: Self-made, based on public-domain raster image Circuit-minimization.jpg, by user Uoft ftw, from Wikipedia.
Original artist: Steaphan Greene (talk)
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Orig-
inal artist: ?
File:Crystal_Clear_app_kedit.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e8/Crystal_Clear_app_kedit.svg License:
LGPL Contributors: Own work Original artist: w:User:Tkgd, Everaldo Coelho and YellowIcon
File:DeMorganGates.GIF Source: https://upload.wikimedia.org/wikipedia/commons/3/3a/DeMorganGates.GIF License: CC BY 3.0
Contributors: Own work Original artist: Vaughan Pratt
File:DeMorgan_Logic_Circuit_diagram_DIN.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/DeMorgan_Logic_
Circuit_diagram_DIN.svg License: Public domain Contributors: Own work Original artist: MichaelFrey
File:Demorganlaws.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Demorganlaws.svg License: CC BY-SA 4.0
Contributors: Own work Original artist: Teknad
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although
minimally).
File:Emoji_u1f510.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Emoji_u1f510.svg License: Apache License
2.0 Contributors: https://github.com/googlei18n/noto-emoji/blob/f2a4f72/svg/emoji_u1f510.svg Original artist: Google
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:Four-Bit_Majority_Circuit.png Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Four-Bit_Majority_Circuit.png
License: CC BY-SA 4.0 Contributors: Own work Original artist: EDickenson
File:Free-Boolean-algebra-unit-sloppy.png Source: https://upload.wikimedia.org/wikipedia/commons/7/7e/Free-Boolean-algebra-unit-sloppy.
png License: Public domain Contributors: LaTeXiT Original artist: Daniel Brown
File:Free-Boolean-algebra-unit.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Free-Boolean-algebra-unit.png
License: Public domain Contributors: LaTeXiT Original artist: Daniel Brown
File:Free-boolean-algebra-hasse-diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/Free-boolean-algebra-hasse-diagram.
svg License: CC0 Contributors: Own work Original artist: Chris-martin
File:Greyfriars,_Lincoln_-_geograph.org.uk_-_106215.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/fe/Greyfriars%
2C_Lincoln_-_geograph.org.uk_-_106215.jpg License: CC BY-SA 2.0 Contributors: From geograph.org.uk Original artist: Dave Hitch-
borne
File:Hasse2Free.png Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Hasse2Free.png License: Public domain Contrib-
utors: ? Original artist: ?
File:Hasse_diagram_of_powerset_of_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Hasse_diagram_of_powerset_
of_3.svg License: CC-BY-SA-3.0 Contributors: self-made using graphviz's dot. Original artist: KSmrq
File:Implication_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Implication_graph.svg License: Public do-
main Contributors: Own work Original artist: David Eppstein
File:Internet_map_1024.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg License: CC BY
2.5 Contributors: Originally from the English Wikipedia; description page is/was here. Original artist: The Opte Project
File:K-map_2x2_1,2,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/K-map_2x2_1%2C2%2C3%2C4.svg Li-
cense: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/48/K-map_2x2_1%2C2%2C3.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/42/K-map_2x2_1%2C2%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c0/K-map_2x2_1%2C2.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/77/K-map_2x2_1%2C3%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/K-map_2x2_1%2C3.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/K-map_2x2_1%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/K-map_2x2_1.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/K-map_2x2_2%2C3%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/K-map_2x2_2%2C3.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/K-map_2x2_2%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/K-map_2x2_2.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/55/K-map_2x2_3%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/K-map_2x2_3.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_4.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/K-map_2x2_4.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_none.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f5/K-map_2x2_none.svg License: CC-BY-SA-
3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/K-map_6%2C8%2C9%2C10%
2C11%2C12%2C13%2C14.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist:
en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14_anti-race.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/02/K-map_6%2C8%2C9%
2C10%2C11%2C12%2C13%2C14_anti-race.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape.
Original artist: en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14_don't_care.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/00/K-map_6%2C8%
2C9%2C10%2C11%2C12%2C13%2C14_don%27t_care.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with
Inkscape. Original artist: en:User:Cburnett
File:K-map_minterms_A.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1a/K-map_minterms_A.svg License: CC-
BY-SA-3.0 Contributors: en:User:Cburnett - modification of Image:K-map_minterms.svg Original artist: Werneuchen
398 CHAPTER 88. ZHEGALKIN POLYNOMIAL
File:Karnaugh6.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Karnaugh6.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:KleinBottle-01.png Source: https://upload.wikimedia.org/wikipedia/commons/4/46/KleinBottle-01.png License: Public domain
Contributors: ? Original artist: ?
File:LAlphabet_AND.jpg Source: https://upload.wikimedia.org/wikipedia/en/7/73/LAlphabet_AND.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_AND_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c1/LAlphabet_AND_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_F.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/LAlphabet_F.jpg License: Cc-by-sa-3.0 Contributors: ?
Original artist: ?
File:LAlphabet_FI.jpg Source: https://upload.wikimedia.org/wikipedia/en/d/d2/LAlphabet_FI.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_FI_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a3/LAlphabet_FI_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_F_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/6/6f/LAlphabet_F_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_IFF.jpg Source: https://upload.wikimedia.org/wikipedia/en/2/26/LAlphabet_IFF.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_IFF_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/3/30/LAlphabet_IFF_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_IFTHEN.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c8/LAlphabet_IFTHEN.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_IFTHEN_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0d/LAlphabet_IFTHEN_table.jpg License:
Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NAND.jpg Source: https://upload.wikimedia.org/wikipedia/en/6/6e/LAlphabet_NAND.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NAND_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c7/LAlphabet_NAND_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NFI.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/f6/LAlphabet_NFI.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_NFI_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/ba/LAlphabet_NFI_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_NIF.jpg Source: https://upload.wikimedia.org/wikipedia/en/4/49/LAlphabet_NIF.jpg License: Cc-by-sa-3.0 Contrib-
utors: ? Original artist: ?
File:LAlphabet_NIF_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/3/35/LAlphabet_NIF_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOR.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/b9/LAlphabet_NOR.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/1/12/LAlphabet_NOR_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOTP.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a0/LAlphabet_NOTP.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOTP_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/9/92/LAlphabet_NOTP_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOTQ.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0d/LAlphabet_NOTQ.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOTQ_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/e/e1/LAlphabet_NOTQ_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_OR.jpg Source: https://upload.wikimedia.org/wikipedia/en/9/99/LAlphabet_OR.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_OR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/09/LAlphabet_OR_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_P.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/bd/LAlphabet_P.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_P_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0a/LAlphabet_P_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_Q.jpg Source: https://upload.wikimedia.org/wikipedia/en/1/13/LAlphabet_Q.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_Q_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/4/47/LAlphabet_Q_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_T.jpg Source: https://upload.wikimedia.org/wikipedia/en/d/d4/LAlphabet_T.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
88.7. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 399
File:LAlphabet_T_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/b4/LAlphabet_T_table.jpg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_XOR.jpg Source: https://upload.wikimedia.org/wikipedia/en/2/22/LAlphabet_XOR.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_XOR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/8/82/LAlphabet_XOR_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LampFlowchart.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/LampFlowchart.svg License: CC-BY-SA-
3.0 Contributors: vector version of Image:LampFlowchart.png Original artist: svg by Booyabazooka
File:Laws_of_Form_-_a_and_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/e/e0/Laws_of_Form_-_a_and_b.gif License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_a_or_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/36/Laws_of_Form_-_a_or_b.gif Li-
cense: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_cross.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Laws_of_Form_-_cross.gif License:
CC-BY-SA-3.0 Contributors: Sam (talk) (Uploads) Original artist: Sam (talk) (Uploads)
File:Laws_of_Form_-_double_cross.gif Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Laws_of_Form_-_double_cross.
gif License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_if_a_then_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Laws_of_Form_-_if_a_then_
b.gif License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_not_a.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/09/Laws_of_Form_-_not_a.gif License:
CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Lebesgue_Icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Lebesgue_Icon.svg License: Public domain
Contributors: w:Image:Lebesgue_Icon.svg Original artist: w:User:James pic
File:Lock-green.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Lock-green.svg License: CC0 Contributors: en:
File:Free-to-read_lock_75.svg Original artist: User:Trappist the monk
File:Logic.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Logic.svg License: CC BY-SA 3.0 Contributors: Own
work Original artist: It Is Me Here
File:LogicGates.GIF Source: https://upload.wikimedia.org/wikipedia/commons/4/41/LogicGates.GIF License: CC BY 3.0 Contribu-
tors: Own work Original artist: Vaughan Pratt
File:Logic_portal.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Logic_portal.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Logical_connectives_Hasse_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Logical_connectives_Hasse_diagram.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Merge-arrow.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/Merge-arrow.svg License: Public domain Con-
tributors: ? Original artist: ?
File:Nicolas_P._Rougier's_rendering_of_the_human_brain.png Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Nicolas_P._Rougier%27s_rendering_of_the_human_brain.png License: GPL Contributors: http://www.loria.fr/~rougier Original artist:
Nicolas Rougier
File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_
mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola
apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG convertion); bayo (color)
File:P_vip.svg Source: https://upload.wikimedia.org/wikipedia/en/6/69/P_vip.svg License: PD Contributors: ? Original artist: ?
File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: Open-
Clipart Original artist: OpenClipart
File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors:
? Original artist: ?
File:Propositional_formula_3.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Propositional_formula_3.png Li-
cense: CC-BY-SA-3.0 Contributors: Drawn by wvbailey in Autosketch then imported into Adobe Acrobat and exported as .png. Original
artist: User:Wvbailey
File:Propositional_formula_NANDs.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Propositional_formula_NANDs.
png License: CC-BY-SA-3.0 Contributors: Own work Original artist: User:Wvbailey
File:Propositional_formula_connectives_1.png Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Propositional_formula_
connectives_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_flip_flops_1.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Propositional_formula_
flip_flops_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_maps_1.png Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/Propositional_formula_maps_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_maps_2.png Source: https://upload.wikimedia.org/wikipedia/commons/9/90/Propositional_formula_maps_
2.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_oscillator_1.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e3/Propositional_formula_
oscillator_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Rotate_left.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/09/Rotate_left.svg License: CC-BY-SA-3.0 Contrib-
utors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_left_logically.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Rotate_left_logically.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_left_through_carry.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Rotate_left_through_carry.svg Li-
cense: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Rotate_right.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_arithmetically.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Rotate_right_arithmetically.svg
License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_logically.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Rotate_right_logically.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_through_carry.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/27/Rotate_right_through_carry.svg
License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rubik's_cube_v3.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/Rubik%27s_cube_v3.svg License: CC-BY-SA-3.0 Contributors: Image:Rubik's cube v2.svg Original artist: User:Booyabazooka, User:Meph666 modified by User:Niabot
File:Sat_reduced_to_Clique_from_Sipser.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Sat_reduced_to_Clique_
from_Sipser.svg License: CC BY-SA 3.0 Contributors: Own work (Original text: I (Thore Husfeldt (talk)) created this work entirely by
myself.) Original artist: Thore Husfeldt (talk)
File:Schaefer's_3-SAT_to_1-in-3-SAT_reduction.gif Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Schaefer%
27s_3-SAT_to_1-in-3-SAT_reduction.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:Stone_functor.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Stone_functor.svg License: CC BY-SA 4.0 Con-
tributors: Own work Original artist: IkamusumeFan
File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC
BY-SA 2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
File:T_30.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/T_30.svg License: CC0 Contributors: Own work Original
artist: Mini-oh
File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_
with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
File:Torus_from_rectangle.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Torus_from_rectangle.gif License: Pub-
lic domain Contributors: Own work Original artist: Kieff
File:Translation_to_english_arrow.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Translation_to_english_arrow.
svg License: CC-BY-SA-3.0 Contributors: Own work, based on :Image:Translation_arrow.svg. Created in Adobe Illustrator CS3 Original
artist: tkgd2007
File:Venn_0000_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Venn_0000_0001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0000_1010.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/Venn_0000_1010.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Venn_0001_0001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Venn_0001_1011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Pub-
lic domain Contributors: Own work Original artist: Cepheus
File:Vennandornot.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/Vennandornot.svg License: Public domain Con-
tributors: Own work Original artist: Watchduck
File:Wiki_letter_w.svg Source: https://upload.wikimedia.org/wikipedia/en/6/6c/Wiki_letter_w.svg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.
svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikibooks-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikibooks-logo.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikidata-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Wikidata-logo.svg License: Public domain Con-
tributors: Own work Original artist: User:Planemad
File:Wikinews-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Wikinews-logo.svg License: CC BY-SA 3.0
Contributors: This is a cropped version of Image:Wikinews-logo-en.png. Original artist: Vectorized by Simon 01:05, 2 August 2006
(UTC) Updated by Time3000 17 April 2007 to use official Wikinews colours and appear correctly on dark backgrounds. Originally
uploaded by Simon.
File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain
Contributors: Own work Original artist: Rei-artur
File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA
3.0 Contributors: Rei-artur Original artist: Nicholas Moreau
File:Wikiversity-logo-Snorky.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License:
CC BY-SA 3.0 Contributors: Own work Original artist: Snorky
File:Wikiversity-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License: CC BY-
SA 3.0 Contributors: Own work Original artist: Snorky
File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/en/0/06/Wiktionary-logo-v2.svg License: CC-BY-SA-
3.0 Contributors: ? Original artist: ?
File:Преобразование_таблицы_истинности_в_полином_Жегалкина_методом_треугольника.gif Source: https://upload.wikimedia.org/wikipedia/commons/d/df/%D0%9F%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_%D1%82%D0%B0%D0%B1%D0%BB%D0%B8%D1%86%D1%8B_%D0%B8%D1%81%D1%82%D0%B8%D0%BD%D0%BD%D0%BE%D1%81%D1%82%D0%B8_%D0%B2_%D0%BF%D0%BE%D0%BB%D0%B8%D0%BD%D0%BE%D0%BC_%D0%96%D0%B5%D0%B3%D0%B0%D0%BB%D0%BA%D0%B8%D0%BD%D0%B0_%D0%BC%D0%B5%D1%82%D0%BE%D0%B4%D0%BE%D0%BC_%D1%82%D1%80%D0%B5%D1%83%D0%B3%D0%BE%D0%BB%D1%8C%D0%BD%D0%B8%D0%BA%D0%B0.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: AdmiralHood (talk) 07:35, 6 June 2011 (UTC)
88.7.3 Content license

Creative Commons Attribution-Share Alike 3.0