Module Notes
Remarks 1.6. (1) The notation and terminology for digraphs are much the same as for
graphs, but with obvious variations.
(2) Edges of a digraph G are elements of V (G) × V (G), i.e., they are ordered pairs
of vertices. Note the edge e = (u, v) has end vertices u and v. We can still write
e = uv, but now vu may not be an edge, and even if it is, uv ≠ vu (unless v = u).
(3) We say that the edge e = uv is in the direction u to v, or that e is from (or out of) u
to (or into) v.
Definition 1.7. A simple digraph is a digraph with no multiple edges or loops. That
is, an edge uv ∈ E appears only once in E (though vu may be an edge) and there are no
edges of the form uu.
Remark 1.8. In this course the term graph always means undirected graph. Some
texts differ and use the term graph to cover both directed and undirected graphs. Notation
and terminology in graph theory is not fully standardized. For instance, the terms node
and arc are sometimes used for vertex and edge, respectively.
Definition 1.9. Two graphs (or digraphs) G = (V, E) and G′ = (V ′ , E ′ ) are isomorphic
if there is a bijection ϕ : V → V ′ such that uv ∈ E iff ϕ(u)ϕ(v) ∈ E ′ , and uv and
ϕ(u)ϕ(v) have the same multiplicity in G and G′ , respectively. In this case, ϕ is called an
isomorphism. If in addition G = G′ then we say that ϕ is an automorphism.
Remark 1.10. We will frequently consider isomorphic graphs as being equal.
Definitions 1.11. (1) In a graph G, the degree δ(u) of a vertex u is the number
of edges incident with u (i.e., the number of edges of which u is an end vertex),
with the proviso that a loop on u is counted as being incident twice with u and
therefore contributes 2 to the degree of u. We shall say that a vertex is odd
(even) if its degree is odd (even).
(2) A graph G is regular of degree r if every vertex of G has degree r. We say that
G is regular if it is regular of degree r for some r.
(3) A vertex is isolated if its degree is zero.
(4) In a digraph, we define the out-degree δ⁺(u) as the number of edges out of u. The
in-degree δ⁻(u) is the number of edges into u. (Note that each loop on a vertex u
contributes its multiplicity to each of δ⁺(u) and δ⁻(u).)
There is a natural choice of graph to associate with a given digraph. We can also
associate a digraph with a given graph, but we need to take some care.
Definitions 1.12. (1) The underlying graph (UG), of a digraph is obtained by
ignoring the directions of edges.
(2) From any graph G we can form a digraph G′ by replacing each edge uv of G by
two directed edges uv and vu. Whenever we say ‘a graph can be considered
as a digraph’, this is what we mean.
Remarks 1.13. (1) The degree of a vertex in the UG is the sum of its out-degree and
in-degree in the digraph.
(2) If u is a vertex of degree d in G, then the vertex u has out-degree d and in-degree
d in G′ .
(3) The number of edges in G′ is twice that in G.
Notation 1.14. If G = (V, E) is a graph or digraph, the number of vertices in G is
denoted by |V | and the number of edges by |E|.
Theorem 1.15. If G = (V, E) is a digraph, then $\sum_{v \in V} \delta^+(v) = \sum_{v \in V} \delta^-(v) = |E|$.
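For illustration, the identity can be checked computationally. The short Python sketch below is not part of the notes; the digraph, its edge list and all identifiers are made up for the example.

```python
from collections import Counter

# A small digraph on vertices 1..4 given as a list of directed edges (u, v);
# multiple edges are allowed and the loop (3, 3) counts once towards each sum.
edges = [(1, 2), (1, 3), (2, 3), (3, 3), (4, 1), (4, 1)]

out_deg = Counter(u for u, v in edges)   # delta^+(u): number of edges out of u
in_deg = Counter(v for u, v in edges)    # delta^-(v): number of edges into v

# Both degree sums equal |E| = 6, as Theorem 1.15 asserts.
print(sum(out_deg.values()), sum(in_deg.values()), len(edges))
```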
(1) The complete graph Kn on n vertices is the simple graph in which every pair
of distinct vertices is an edge.
(Drawings of K1, K2, K3 and K4.)
(2) The null graph Nn on n vertices is the graph with n vertices and no edges.
(Drawings of N1, N2, N3 and N4.)
(3) A bipartite graph G is a graph whose vertices can be partitioned into two non-
empty sets, A and B say, called the parts, such that every edge has one vertex
in A and the other in B. That is, no two vertices in A are adjacent, and
similarly for B. The example below has |A| = 4 and |B| = 3 (note that one vertex
in part A has degree zero – not every vertex must be incident with an edge).
(4) The complete bipartite graph Km,n is the simple bipartite graph with parts A
and B such that |A| = m, |B| = n and every pair of vertices with a vertex in A
and a vertex in B is an edge. (Drawing of an example.)
(5) The complete digraph on n vertices is the simple digraph in which every
ordered pair of distinct vertices is an edge. The cases n = 1, 2, 3 are drawn below:
(2) A trail is a path if all its vertices are different, except that the first and
last vertices may be the same if the trail is closed.
Remark 3.8. In a graph or digraph, a walk from a vertex u to a vertex v with distinct
vertices (except that we allow u = v if the walk is closed) is necessarily a trail and therefore
a path (and therefore a cycle if it is closed).
Definition 3.9. A closed path of length r ≥ 1 in a graph or digraph is called a cycle or
an r-cycle.
Definitions 3.10. (1) In a graph, two vertices u and v are connected if u = v or
there is a walk connecting u and v.
(2) A connected graph is a graph in which any two vertices are connected.
(3) A connected digraph is a digraph whose UG is connected.
(4) A strongly connected digraph is a digraph in which there is a walk from any
given vertex u to any other distinct vertex.
Definition 3.11. A component C of a graph G is a subgraph of G satisfying:
(1) C is a connected graph; and
(2) if H is a connected subgraph of G such that C is a subgraph of H then H = C.
Remark 3.12. (1) A component, being itself a graph, must contain at least one vertex.
(2) A component is a maximal connected subgraph.
Theorem 3.13. Let G be a graph.
(1) The components of G partition G (i.e., each vertex and each edge are in exactly
one component).
(2) Any walk forms a subgraph of some component of G.
(3) G is connected if and only if G has only one component.
Proof. (To be revealed). □
Theorem 3.14. In a graph or digraph that has exactly n vertices, the length of any path
is at most n − 1, except that if the path is closed (and therefore a cycle) its length is at
most n.
Proof. (To be revealed). □
As the next theorem confirms, we may replace walk with path in the definition of two
vertices being connected.
Theorem 3.15. In a graph or digraph, if there is a walk from a vertex u to another vertex
v (u ̸= v) then there is a path from u to v.
Proof. (To be revealed). □
Definition 3.16. Consider a walk u0 e1 u1 · · · em um in a graph. The walk is said to
be backtracking if ei = ei+1 for some 1 ≤ i < m, otherwise the walk is said to be
non-backtracking.
A walk of length zero in a graph or digraph is called a trivial closed walk.
Theorem 3.17.
(1) A digraph that contains a non-trivial closed walk contains a cycle.
(2) A graph that contains a non-trivial non-backtracking closed walk contains a cycle.
Proof. (To be revealed). □
Corollary 3.18. If a graph or digraph has a non-trivial circuit, then it contains a cycle.
Corollary 3.19. If a digraph is strongly connected and has more than one vertex, then
every vertex is on a cycle.
(3) if the graph is simple then A has zeros on the diagonal and its only entries
are 0 or 1.
Properties of the adjacency matrix A of a digraph
(1) A is not necessarily symmetric;
(2) for each i, $\delta^+(u_i) = \sum_{j=1}^{n} A_{ij}$ and $\delta^-(u_i) = \sum_{j=1}^{n} A_{ji}$;
(3) if the digraph is simple then A has zeros on the diagonal and its only
entries are 0 or 1.
Theorem 4.1. Let G = (V, E) be a digraph with exactly m vertices u1 , . . . , um , and let
A be its adjacency matrix. For any integer r ≥ 1, the (i, j) entry of $A^r$ is the number of
walks of length r from ui to uj .
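Theorem 4.1 is easy to check numerically. The sketch below is illustrative only (the three-vertex digraph and the use of numpy are assumptions, not part of the notes).

```python
import numpy as np

# Adjacency matrix of a small digraph on vertices u1, u2, u3 with
# edges u1->u2, u1->u3, u2->u3, u3->u1.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)

# (A^2)[0, 2] counts walks of length 2 from u1 to u3: here only u1->u2->u3.
print(A2[0, 2])   # 1
# (A^3)[0, 0] counts closed walks of length 3 at u1: here only u1->u2->u3->u1.
print(A3[0, 0])   # 1
```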
The Adjacency Matrix Row Algorithm (also known as the Fusion Algorithm).
The algorithm below finds components of a graph. Let A be the adjacency matrix of a
graph G = (V, E) and let u1 , u2 , . . . , un be the vertices of G. There is no loss of generality
in our assuming that G is simple.
In what follows, we will use so-called Boolean addition: all positive integers
are treated as being equal to 1, so that 1 + 1 = 1, 1 + 0 = 1, 0 + 0 = 0.
(1) Choose any vertex, say vertex uk . Record uk . (For faster convergence, choose a
row with the greatest number of ones.)
(2) Add all rows i to row k for which the ith entry in row k is 1. Record all vertices
ui corresponding to such rows.
(3) If new non-zero entries appear in row k, repeat Step 2 for the corresponding rows.
(4) Otherwise, the recorded vertices and the edges incident with them form a component
(the component containing vertex uk ).
(5) If unrecorded vertices remain, then, ignoring the recorded vertices and the rows
corresponding to them, go back to Step 1.
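The algorithm translates directly into code. The following Python sketch is illustrative (the function name, the 0-indexed vertices and the example matrix are assumptions); it records the vertex set of each component rather than the component itself, and uses Boolean addition as above.

```python
import numpy as np

def components_by_row_fusion(A):
    """Vertex sets of the components of a simple graph, found from its 0/1
    adjacency matrix A by Boolean row addition (vertices are 0, ..., n-1)."""
    A = (np.array(A) != 0).astype(int)
    n = A.shape[0]
    unrecorded = set(range(n))
    components = []
    while unrecorded:
        k = next(iter(unrecorded))           # Step 1: choose any unrecorded vertex
        row = A[k].copy()
        row[k] = 1                           # ensure isolated vertices are handled
        while True:
            ones = [i for i in unrecorded if row[i] == 1]
            new_row = row.copy()
            for i in ones:                   # Step 2: add rows with a 1 in the row
                new_row = np.minimum(new_row + A[i], 1)   # Boolean addition
            if np.array_equal(new_row, row): # Step 3: no new non-zero entries
                break
            row = new_row
        comp = {i for i in range(n) if row[i] == 1}
        components.append(sorted(comp))      # Step 4: the recorded vertices
        unrecorded -= comp                   # Step 5: ignore them and repeat
    return components

# An example graph with two components: {0, 1, 2} and {3, 4}.
A = [[0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]
print(components_by_row_fusion(A))
```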
(3) If δ(x) > 1, choose an edge xy that is not a bridge. Delete the edge xy from G
and adjoin it to the list L.
(4) If δ(x) = 1 and xy is the edge incident with x, delete the edge xy and vertex x
from G and adjoin xy to L.
(5) If there are any edges left incident with y, return to Step 3 with y as the ‘new’ x.
Otherwise - STOP.
(6) If L = E, then L gives the list of edges in order of an Eulerian circuit or trail;
otherwise G does not have an Eulerian circuit or trail.
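Steps (1)–(2) of this algorithm are not reproduced in this excerpt; the Python sketch below assumes the usual starting rule (begin at an odd vertex if the graph has exactly two odd vertices, otherwise at any vertex). The adjacency-list representation, the bridge test by counting reachable vertices, and all identifiers are assumptions made for illustration.

```python
from copy import deepcopy

def _reachable(adj, start):
    """Number of vertices reachable from start by a depth-first search."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def eulerian_trail(adj):
    """Sketch of the edge-listing algorithm above for an undirected multigraph
    given as adjacency lists {u: [neighbours]}. Returns the list L of edges of
    an Eulerian circuit or trail, or None if L != E at the end."""
    adj = deepcopy(adj)
    odd = [u for u in adj if len(adj[u]) % 2 == 1]
    if len(odd) not in (0, 2):
        return None
    x = odd[0] if odd else next(iter(adj))          # starting vertex
    total_edges = sum(len(ns) for ns in adj.values()) // 2
    L = []
    while adj[x]:
        for y in list(adj[x]):
            if len(adj[x]) == 1:
                break                               # Step 4: the only edge at x
            # Step 3: prefer an edge xy that is not a bridge; test by comparing
            # how many vertices are reachable from x before and after deleting xy.
            before = _reachable(adj, x)
            adj[x].remove(y); adj[y].remove(x)
            still = _reachable(adj, x)
            adj[x].append(y); adj[y].append(x)
            if still == before:
                break                               # xy is not a bridge
        adj[x].remove(y); adj[y].remove(x)          # delete edge xy and adjoin it to L
        L.append((x, y))
        x = y                                       # Step 5: continue from y
    return L if len(L) == total_edges else None     # Step 6

# A square with one diagonal: exactly two odd vertices (1 and 3), so the
# algorithm returns an Eulerian trail between them.
adj = {1: [2, 4, 3], 2: [1, 3], 3: [2, 4, 1], 4: [3, 1]}
print(eulerian_trail(adj))   # [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
```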
6.1. A test for a graph NOT to be Hamiltonian. The contrapositive of the above
theorem gives the following test for a graph not to be Hamiltonian:
Let G be a graph. If for some integer k and some choice of k vertices, deleting these
k vertices leaves a subgraph of G with more than k components, then G is not Hamiltonian.
6.2. Tests for a graph to be Hamiltonian. The following results give sufficient
conditions for a simple graph to be Hamiltonian.
Theorem 6.2.1. Let G be a simple graph with n ≥ 3 vertices. If G has the property that
δ(u) + δ(v) ≥ n
for every pair of non-adjacent vertices u, v, then G is Hamiltonian.
Corollary 6.2.2. If G is a simple graph with n ≥ 3 vertices in which every vertex
has degree at least n/2, then G is Hamiltonian.
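The hypothesis of Theorem 6.2.1 can be tested mechanically. The Python sketch below is illustrative (the adjacency-set representation, the function name and the example graph are assumptions); remember that the condition is sufficient only, so a graph failing the test may still be Hamiltonian.

```python
from itertools import combinations

def satisfies_degree_condition(adj):
    """Check the condition of Theorem 6.2.1 for a simple graph given as
    adjacency sets {u: set of neighbours}: every pair of non-adjacent vertices
    u, v must satisfy deg(u) + deg(v) >= n (and n >= 3)."""
    n = len(adj)
    if n < 3:
        return False
    for u, v in combinations(adj, 2):
        if v not in adj[u] and len(adj[u]) + len(adj[v]) < n:
            return False
    return True

# A 4-cycle plus one diagonal: the only non-adjacent pair has degree sum 4 = n,
# so the theorem guarantees a Hamiltonian cycle.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(satisfies_degree_condition(adj))   # True
```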
7. Trees
Definition 7.1. Let G = (V, E) be a graph. Then
• G is said to be acyclic if G has no cycles;
• G is a tree if G is connected and acyclic;
• an acyclic graph is also called a forest;
• the trivial tree is N1 (i.e., a single vertex); it is the smallest tree; and
• a digraph is a tree if its underlying graph is a tree.
Theorem 7.2 (Alternative Characterisations of Trees).
(1) A graph with at least 2 vertices and no loops is a tree if and only if for every pair
of distinct vertices there exists a unique path connecting them.
(2) A connected graph is a tree if and only if every edge is a bridge.
(3) An acyclic graph is a tree if and only if it has the property that adjoining a new
edge creates a unique cycle in the extended graph, in which case the unique cycle
contains the new edge.
Proof. (To be revealed). □
Definition 7.3. A leaf in a tree is a vertex of degree 1.
Lemma 7.4. Every non-trivial tree has at least 2 leaves.
Proof. (To be revealed). □
Lemma 7.5. A tree with exactly n vertices has exactly n − 1 edges.
Proof. (To be revealed). □
8. Labelled trees
Let G = (V, E) be a graph or digraph with |V | = n. Let S be a set with |S| = n. Given
a bijection θ : V → S, we say that G is labelled with labelling set S. The triple (G, θ, S)
forms a labelled graph.
A vertex u is said to be labelled θ(u).
The default labelling set will be taken to be {1, 2, . . . , n}.
Let (G, θ, S) and (H, ψ, S) be labelled graphs with the same labelling set S. A graph
isomorphism ϕ : G → H is called a labelled graph isomorphism if in addition ϕ satisfies
ψ ◦ ϕ = θ, i.e.,
ψ(ϕ(u)) = θ(u) for all u ∈ V (G).
The Prüfer Code of a Labelled Tree. Let T be a labelled tree with n vertices
labelled 1, 2, . . . , n (n > 1). Then its Prüfer code is a sequence of n − 2 integers (not
necessarily distinct) between 1 and n obtained as follows:
If n = 2 then the Prüfer code is the empty sequence. Otherwise
(1) Choose the unique leaf, x say, with least label. If the vertex adjacent to x is
labelled y, the code starts with y. Delete the vertex x and the edge xy.
(2) If there is more than one edge left, repeat Step 1 on the residual labelled tree to
find the next term of the code; otherwise stop.
The algorithm stops when only one edge remains; or equivalently, the code has exactly
n − 2 numbers.
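The construction of the code translates directly into a short program. The Python sketch below is illustrative (the edge-list input and the function name are assumptions).

```python
def prufer_code(edges, n):
    """Prüfer code of a labelled tree on vertices 1..n (n > 1), given as a list
    of edges (x, y). Returns a list of n - 2 labels."""
    adj = {v: set() for v in range(1, n + 1)}
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)
    code = []
    for _ in range(n - 2):
        x = min(v for v in adj if len(adj[v]) == 1)   # the leaf with least label
        y = next(iter(adj[x]))                        # its unique neighbour
        code.append(y)                                # next term of the code
        adj[y].discard(x)                             # delete vertex x and edge xy
        del adj[x]
    return code

# The labelled tree with edges 12, 13, 34, 35 has Prüfer code [1, 3, 3].
print(prufer_code([(1, 2), (1, 3), (3, 4), (3, 5)], 5))
```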
(1) Choose any vertex u. Initially T consists of u only. Delete the row of M corre-
sponding to u.
(2) If there are no undeleted rows left in M then STOP. Otherwise scan all columns
of M corresponding to vertices that are already in T . Choose a least entry, say
the (x, y)-th entry of M (if there is a choice, it does not matter which is chosen).
Adjoin the edge xy and the vertex x to T (the vertex y is already in T ). Delete
the x-th row of M .
(3) Repeat Step 2 until the algorithm stops.
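The matrix algorithm above (a formulation in the style of Prim's algorithm) can be sketched as follows; the 0-indexed vertices, the choice of starting vertex, the use of math.inf for missing edges and the example weight matrix are assumptions made for illustration.

```python
import math

def min_spanning_tree_matrix(M):
    """Sketch of the matrix algorithm above. M is an n x n symmetric weight
    matrix with math.inf where there is no edge; vertices are 0, ..., n - 1.
    Returns the tree edges as triples (x, y, weight), or None if G is not connected."""
    n = len(M)
    in_tree = {0}                      # Step 1: T initially consists of vertex 0
    deleted_rows = {0}                 # delete the row of the starting vertex
    tree_edges = []
    while len(deleted_rows) < n:       # Step 2: undeleted rows remain
        best = (math.inf, None, None)
        for x in range(n):             # scan the undeleted rows ...
            if x in deleted_rows:
                continue
            for y in in_tree:          # ... in the columns of vertices already in T
                if M[x][y] < best[0]:
                    best = (M[x][y], x, y)
        w, x, y = best
        if x is None:                  # no usable entry: the graph is not connected
            return None
        tree_edges.append((x, y, w))   # adjoin edge xy and vertex x to T
        in_tree.add(x)
        deleted_rows.add(x)            # delete the x-th row of M
    return tree_edges

INF = math.inf
# A small weighted graph on 4 vertices (weights are illustrative).
M = [[INF, 1, 4, INF],
     [1, INF, 2, 5],
     [4, 2, INF, 1],
     [INF, 5, 1, INF]]
print(min_spanning_tree_matrix(M))   # [(1, 0, 1), (2, 1, 2), (3, 2, 1)]
```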
Proof that Kruskal’s Algorithm constructs a MinWST in a simple connected weighted graph.
□
The Algorithm
(1) Label the origin u with (−, 0) and declare it permanently labelled. All other
vertices are labelled (u, ∞).
(2) If all vertices are permanently labelled then STOP.
Otherwise, suppose j ∗ is the latest vertex to be permanently labelled and con-
sider all vertices y for which j ∗ y is an edge in G. If W (j ∗ y) + D(j ∗ ) < D(y) then
change the label of y so that now:
P (y) = j ∗ and D(y) = W (j ∗ y) + D(j ∗ ).
Otherwise, the label of y is unchanged.
(3) Examine all non-permanently labelled vertices. From all such vertices choose one,
say v, with the least estimated shortest distance D(v). Declare v to be permanently
labelled.
(4) Go back to Step 2.
Proof that Dijkstra’s Algorithm Works. □
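A Python sketch of the labelling algorithm is given below; the dictionary representation of the weighted graph, the function name and the small example are assumptions made for illustration.

```python
import math

def dijkstra(adj, u):
    """Dijkstra's algorithm as described above. adj maps each vertex to a dict
    {neighbour: edge weight W}; u is the origin. Returns {vertex: (P, D)} where
    P is the predecessor and D the shortest distance from u."""
    P = {v: u for v in adj}                # Step 1: all vertices labelled (u, inf)
    D = {v: math.inf for v in adj}
    P[u], D[u] = None, 0                   # the origin is labelled (-, 0)
    permanent = {u}
    j_star = u
    while len(permanent) < len(adj):       # Step 2
        for y, w in adj[j_star].items():
            if y not in permanent and w + D[j_star] < D[y]:
                P[y], D[y] = j_star, w + D[j_star]
        # Step 3: permanently label a non-permanent vertex of least estimated distance.
        v = min((x for x in adj if x not in permanent), key=lambda x: D[x])
        permanent.add(v)
        j_star = v                         # Step 4: go back to Step 2
    return {v: (P[v], D[v]) for v in adj}

# An illustrative weighted graph (undirected edges listed in both directions).
adj = {'u': {'a': 2, 'b': 5}, 'a': {'u': 2, 'b': 1, 'c': 4},
       'b': {'u': 5, 'a': 1, 'c': 1}, 'c': {'a': 4, 'b': 1}}
print(dijkstra(adj, 'u'))   # shortest distances: a: 2, b: 3, c: 4
```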
Then $W^{(k)}_{ij}$ is the distance of a shortest path from vertex i to vertex j (or ∞ if there is
no path from i to j) where all the vertices of the path, excluding i and j, are restricted
to {1, . . . , k}.
Thus, in the last matrix, $W^{(n)}_{ij}$ is the shortest distance from vertex i to vertex j, or ∞
if there is no path from i to j.
The Optimum Policy Matrix. The Optimum Policy Matrix (OPM) correspond-
ing to the digraph G is an n × n matrix Z, which can be:
(a) computed alongside the matrices $W^{(i)}$; or
(b) computed at the end of the Warshall-Floyd algorithm.
In case (a), Z has – in every position in which the weight matrix W has ∞. Elsewhere,
Zij = j. Z is updated at each stage as follows.
If, when $W^{(t)}$ is being computed, the (i, j) entry changes (so it is different from that of
$W^{(t-1)}$), then change the (i, j) entry of Z to t.
Do this for t = 1, 2, . . . , n. The final Z is the OPM.
In case (b), the final OPM Z can be constructed after the Warshall-Floyd algorithm
has stopped.
This is done by comparing the first matrix $W^{(0)}$ with the last one, $W^{(n)}$.
(1) Put – in every position in Z where the last matrix $W^{(n)}$ has entry 0 or ∞.
(2) If the (i, j) entries of $W^{(0)}$ and $W^{(n)}$ are equal (but not 0 or ∞), then put Zij = j.
(3) Otherwise, if they are different and the (i, j) entry of $W^{(n)}$ first appears in $W^{(t)}$,
then put Zij = t.
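The case (a) computation can be sketched as follows. The sketch updates the distance matrix in place (which yields the same final matrix as the staged computation of $W^{(1)}, \ldots, W^{(n)}$); the use of math.inf for ∞, the character '-' for –, the assumed zero diagonal and the example weight matrix are all illustrative assumptions.

```python
import math

def warshall_floyd_with_opm(W):
    """Warshall-Floyd algorithm with the Optimum Policy Matrix computed
    alongside (case (a)). W is the n x n weight matrix with math.inf where
    there is no edge; vertices are labelled 1..n. Returns (distances, Z)."""
    n = len(W)
    D = [row[:] for row in W]
    # Z has '-' wherever W has infinity; elsewhere Z[i][j] = j (a 1-based label).
    Z = [['-' if D[i][j] == math.inf else j + 1 for j in range(n)]
         for i in range(n)]
    for t in range(n):                      # stages t = 1, 2, ..., n (0-based here)
        for i in range(n):
            for j in range(n):
                if D[i][t] + D[t][j] < D[i][j]:
                    D[i][j] = D[i][t] + D[t][j]
                    Z[i][j] = t + 1         # the (i, j) entry changed at stage t
    return D, Z

INF = math.inf
# A small weighted digraph on 4 vertices (weights are illustrative; the
# diagonal is taken to be 0). Rows/columns are 0-indexed in Python, so the
# printed entry (1, 3) refers to vertices 2 and 4.
W = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
D, Z = warshall_floyd_with_opm(W)
print(D[1][3], Z[1][3])   # shortest distance from 2 to 4 is 3, via vertex 3
```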
Constructing shortest paths from the OPM. To find the shortest path, as given
by the OPM, from vertex i to vertex j (if paths exist), the following algorithm is used.
(1) If Zij = –, there is no path from i to j. Otherwise, initially S = ij.
(2) Suppose
u1 u2 . . . um
is the current S, where u1 = i and um = j. Then for each 1 ≤ k < m, if
$Z_{u_k, u_{k+1}} = x$, where x ≠ –, uk , uk+1 , place vertex x between uk and uk+1 , i.e., the
new S is u1 . . . uk x uk+1 . . . um .
(3) If no new vertices are adjoined to the current S then STOP. Otherwise, go back
to Step 2.
When the algorithm stops, S is the shortest path from i to j given by the OPM.
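A Python sketch of this reconstruction is given below; the function name and the small hand-computed OPM used in the example (for the digraph with edges 1→2 and 2→3 of weight 1 and 1→3 of weight 5) are assumptions made for illustration.

```python
def path_from_opm(Z, i, j):
    """Reconstruct the shortest path from i to j (1-based labels) from an
    Optimum Policy Matrix Z. Returns the list of vertices, or None if there
    is no path."""
    if Z[i - 1][j - 1] == '-':
        return None                               # Step 1: no path from i to j
    S = [i, j]                                    # initially S = ij
    while True:                                   # Step 2
        new_S = [S[0]]
        for u, v in zip(S, S[1:]):
            x = Z[u - 1][v - 1]
            if x not in ('-', u, v):
                new_S += [x, v]                   # place x between u and v
            else:
                new_S.append(v)
        if new_S == S:                            # Step 3: no new vertices adjoined
            return S
        S = new_S

# OPM of the digraph with edges 1->2 (weight 1), 2->3 (weight 1), 1->3 (weight 5):
# the shortest route from 1 to 3 goes via vertex 2.
Z = [[1, 2, 2],
     ['-', 2, 3],
     ['-', '-', 3]]
print(path_from_opm(Z, 1, 3))   # [1, 2, 3]
```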
(2) Suppose there are vertices remaining. If there are no vertices of zero in-degree in
the residual graph, STOP.
Otherwise choose a vertex of zero in-degree and label it m + 1 if the previous
label given to a vertex was m. Delete the vertex m + 1 and all edges on it (they
will all be edges out of the vertex labelled m + 1).
(3) Repeat Step 2 until the algorithm stops.
If when the algorithm stops there are unlabelled vertices, then the digraph G is not acyclic
and therefore cannot be topologically ordered. Otherwise, if all vertices are labelled, then
G has been topologically ordered (and is therefore acyclic).
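The Topological Ordering Algorithm (whose Step (1) is not shown in this excerpt) can be sketched as follows; the representation of the digraph, the tie-breaking rule (the notes allow any choice of vertex of zero in-degree) and the example are assumptions made for illustration.

```python
def topological_order(vertices, edges):
    """Sketch of the Topological Ordering Algorithm. vertices is a list and
    edges a list of ordered pairs (u, v). Returns {vertex: label} with labels
    1, 2, ..., or None if unlabelled vertices remain (G is not acyclic)."""
    remaining = set(vertices)
    in_deg = {v: 0 for v in vertices}
    for u, v in edges:
        in_deg[v] += 1
    labels, m = {}, 0
    while remaining:                               # Step 2
        zero = [v for v in remaining if in_deg[v] == 0]
        if not zero:
            return None                            # no vertex of zero in-degree: STOP
        v = min(zero)          # choose a vertex of zero in-degree (any choice is allowed)
        m += 1
        labels[v] = m                              # label it one more than the previous label
        remaining.remove(v)                        # delete the vertex ...
        for x, y in edges:                         # ... and all edges out of it
            if x == v:
                in_deg[y] -= 1
    return labels

# An illustrative acyclic digraph.
print(topological_order(['a', 'b', 'c', 'd'],
                        [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')]))
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}
```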
The Longest Path Algorithm
Let G be a weighted digraph with n vertices (where the weights are non-negative). Let
u be the origin.
In this algorithm each vertex x is assigned a label (P (x), D(x)), where
• P (x) is a predecessor vertex on a longest path from u to the vertex x,
• D(x) is the length of a longest path from u to the vertex x.
The Algorithm
(1) Apply the topological ordering algorithm to G. If the algorithm fails, then G has
cycles and the LPA will not work, in which case STOP.
(2) Otherwise, suppose the vertices of G are topologically ordered. Without loss of
generality we may assume the vertices of G are in topological order 0, 1, 2, . . . , n−1,
where we identify the vertices of G with the labels 0, 1, . . . , n − 1, and u = 0.
(3) Assign the label (–, 0) to u.
(4) Suppose the vertices 0, 1, . . . , x − 1 have already been labelled, where x ≥ 1.
If x = n, then STOP. Otherwise, consider all vertices y for which yx is an edge.
Then y is already labelled (since y < x). From all such vertices y, choose one for
which
D(y) + W (yx)
is a maximum. Then label the vertex x with (y, D(y) + W (yx)).
(5) If all vertices have been labelled – STOP. Otherwise go back to Step 4.
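A Python sketch of the LPA is given below, assuming Steps (1)–(2) have already been carried out, so that the vertices are 0, 1, . . . , n − 1 in topological order with origin u = 0; the edge-dictionary representation and the example are illustrative assumptions.

```python
def longest_path_algorithm(n, edges):
    """Sketch of the LPA for a weighted acyclic digraph whose vertices
    0, ..., n - 1 are already in topological order, with origin u = 0.
    edges maps (y, x) to the weight W(yx). Returns {x: (P(x), D(x))}."""
    labels = {0: (None, 0)}                       # Step 3: label u with (-, 0)
    for x in range(1, n):                         # Step 4: label vertices in order
        best = None
        for (y, z), w in edges.items():
            if z == x and y in labels:            # y < x, so y is already labelled
                d = labels[y][1] + w              # D(y) + W(yx)
                if best is None or d > best[1]:
                    best = (y, d)                 # choose y maximising D(y) + W(yx)
        if best is not None:
            labels[x] = best
        # (a vertex with no labelled in-neighbour is left unlabelled in this sketch)
    return labels

# An illustrative weighted acyclic digraph on vertices 0..3, topologically ordered.
edges = {(0, 1): 2, (0, 2): 4, (1, 2): 3, (1, 3): 6, (2, 3): 1}
print(longest_path_algorithm(4, edges))
# {0: (None, 0), 1: (0, 2), 2: (1, 5), 3: (1, 8)}
```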
Theorem 11.2. Applying the Topological Ordering Algorithm to a digraph G results in
either a topological ordering of G if G is acyclic, or some vertices left unlabelled otherwise.
Proof. (To be revealed). □
Corollary 11.3. A digraph G is acyclic if and only if G can be topologically ordered.
Proof. (To be revealed). □
Projects. Activity networks are used to model projects. A project consists of activi-
ties. Each activity is given a time, the time required to complete that activity.
A crucial part of planning the project is to identify the immediate predecessors
(IPs) of each activity A; i.e., the activities that must be completed before A can be
started. (The term immediate refers to the assumption that it is immediately obvious
that X must be completed before A can start.)
There are two notional activities called start and finish, denoted by s and t, respec-
tively; each has time 0.
It is stipulated that:
• the start activity s has no IPs;
• the finish activity t is not an IP of any activity;
• if an activity X has no IPs, then it is assigned notionally s as an IP;
• if an activity X is not the IP of any activity, then it is defined notionally to be an
IP of t.
The project is said to be feasible if all its activities can be completed.
Associated Digraph. Associated with a given project is its weighted associated di-
graph G. The vertices of G are the activities of the project and its edges are ordered
pairs of activities XY , where X is an IP of Y . The length of the edge XY is defined
to be the time of the activity X. (Thus all edges out of X have the same length: the
time of X.) An activity X is said to be a predecessor of an activity Y , and that Y is a
successor of X, if there is a non-trivial path in G from X to Y .
Lemma 12.2. Let G be the digraph associated to a feasible project. Then:
(1) G is an activity network;
(2) every activity is on a path from s to t in G;
(3) every activity, other than s, is a successor of s; and
(4) every activity, other than t, is a predecessor of t.
Proof. (To be revealed). □
The length of any edge from an activity is the time needed to complete that activity.
The following statement is an easy consequence of this fact:
the time needed to complete all but the last activity on a path is the sum of the
edge lengths of the path, i.e., the length of the path.
Thus an activity A cannot start until all its predecessors are finished; or equivalently
(by Lemma 12.2), until every activity on every path from s to A is finished.
It follows that the earliest start time (EST) of A, denoted by EST(A), is the
length of a longest path from s to A.
Any longest path from s to t is called a critical path and all activities on a critical
path are called critical activities.
The earliest completion time (ECT) of the project is the shortest time in which
all its activities can be completed. It is clear that
ECT = EST(t) = Length of a critical path.
(In that time all predecessors of t, i.e., (by Lemma 12.2) all activities, will be finished.)
Theorem 12.3. A project is feasible if and only if its associated digraph is an activity
network.
Proof. (To be revealed). □
The latest start time (LST) of an activity A, denoted by LST(A), is the latest time
that A can be scheduled to start and the project still completed in its ECT.
All successors of A, as well as activity A, need to be completed and (by Lemma 12.2) they
all lie on paths from A to t. Therefore, the minimum time needed to complete all successors
of A is the length of a longest path from A to t. During that time all successors of A can
be completed. It follows that
LST(A) = ECT − (the length of a longest path from A to t).
Finding a critical path and the ECT, ESTs and LSTs. Applying the Longest Path
Algorithm (LPA) to the activity network G of a feasible project will give all ESTs, a
critical path and the ECT. This is known as the forward scan. (The origin is necessarily
s, the only vertex of in-degree zero.)
Since the length of a longest path from an activity A to t is equal to the length of
a longest path from t to A in the reverse activity network, we compute this length for
all activities by applying the LPA to the reverse digraph of G (necessarily with t as the
origin). This is known as the reverse scan.
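The forward and reverse scans can be sketched as follows. The toy project, the edge-dictionary representation and the helper name are assumptions made for illustration, and the topological orders are supplied by hand rather than computed.

```python
def longest_distances(order, edges):
    """Longest-path distances from the first vertex of `order` in a weighted
    acyclic digraph; `order` is a topological order and edges maps (y, x) to
    the length W(yx)."""
    D = {order[0]: 0}
    for x in order[1:]:
        cands = [D[y] + w for (y, z), w in edges.items() if z == x and y in D]
        if cands:
            D[x] = max(cands)
    return D

# A toy project: s -> A -> B -> t and s -> B, with times s = 0, A = 3, B = 2, t = 0.
# Each edge length is the time of its initial activity.
edges = {('s', 'A'): 0, ('s', 'B'): 0, ('A', 'B'): 3, ('B', 't'): 2}
order = ['s', 'A', 'B', 't']

EST = longest_distances(order, edges)                       # forward scan
rev_edges = {(v, u): w for (u, v), w in edges.items()}
rev = longest_distances(list(reversed(order)), rev_edges)   # reverse scan
ECT = EST['t']
LST = {a: ECT - rev[a] for a in order}       # LST(A) = ECT - longest length A -> t
print(EST)   # {'s': 0, 'A': 0, 'B': 3, 't': 5}
print(LST)   # {'s': 0, 'A': 0, 'B': 3, 't': 5}  (every activity is critical here)
```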
Theorem 13.2. The value of a flow equals the total flow into the sink.
Cuts. Recall that a TN is connected, which means its underlying graph (UG) is con-
nected. A set C of edges of a TN G is called a cut if deletion of these edges from G would
disconnect the UG of G into two components X and Y , such that x ∈ X and y ∈ Y .
(That is, deleting C disconnects, in the UG of G, the source from the sink.)
Edges of C from X to Y (i.e., with initial vertex in X and final vertex in Y ) are called
forward edges, while edges of C from Y to X are reverse edges.
A TN always has a cut, e.g. the cut consisting of all edges out of the source.
The capacity of the cut C is the sum of the capacities of its forward edges (thus the
reverse edge capacities contribute nothing to the capacity of C).
Theorem 13.3. The value of any flow is less than or equal to the capacity of any cut.
A max-flow is a flow of maximum value in a TN. A min-cut is a cut of minimum
capacity in a TN.
Corollary 13.4. A flow is a max-flow in a TN if, across some cut, all forward edges are
saturated and all reverse edges have zero flow.
Conversely, a cut is a min-cut if, for some flow, all its forward edges are saturated
and all its reverse edges have zero flow.
Theorem 13.5 (The Max-Flow-Min-Cut Theorem). The maximum flow value in any
TN equals the minimum cut capacity.
The celebrated ‘Max-Flow-Min-Cut Theorem’ was proved in 1956 by Ford and Fulker-
son. Crucial to proving this theorem is the following Lemma:
Lemma 13.6. If in a TN, the value of some flow f is equal to the capacity of some cut
C, then f is a max-flow and C is a min-cut.
The proof of the Max-Flow-Min-Cut Theorem uses an algorithm (the Ford-Fulkerson Algorithm)
that constructs a flow and a cut such that the value of the flow is the capacity of the cut.
The flow is therefore a max-flow. The resulting max-flow ‘saturates’ each forward edge
of the cut (i.e., the flow along the edge equals the edge’s capacity) and there is zero flow
along each reverse edge.
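A Python sketch in the spirit of the Ford-Fulkerson Algorithm is given below, with augmenting paths found by breadth-first search (the Edmonds-Karp refinement); the dictionary representation of capacities, the function name and the example network are assumptions made for illustration.

```python
from collections import defaultdict, deque

def max_flow_value(capacity, s, t):
    """Augmenting-path sketch of the Ford-Fulkerson method. capacity is a dict
    {(u, v): c} of edge capacities; returns the value of a max-flow from the
    source s to the sink t."""
    residual = defaultdict(int)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c                  # forward residual capacity
    vertices = {x for e in capacity for x in e}
    value = 0
    while True:
        # Breadth-first search for an augmenting path of positive residual capacity.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in vertices:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                    # no augmenting path: the flow is maximum
            return value
        # Bottleneck residual capacity along the path, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push           # allow flow to be cancelled later
        value += push

# A small transportation network with source 's' and sink 't'; the cut of all
# edges out of the source has capacity 5, and a flow of value 5 is achievable.
capacity = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
            ('a', 't'): 2, ('b', 't'): 3}
print(max_flow_value(capacity, 's', 't'))   # 5
```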