Topic 5 - Dynamic Programming Method for Graph Problems
Definition:
• δ(u,v) = weight of the shortest path(s) from u to v
Well-Definedness:
• Negative-weight cycle in the graph: some shortest paths may not be defined
• Argument: we can always get a shorter path by going around the cycle again

[Figure: a path from s to v passing through a cycle of total weight < 0]
Dealing With Negative-weight Edges
• No problem, as long as no negative-weight cycles are reachable from the source
• Otherwise, we can just keep going around such a cycle, and get δ(s, v) = −∞ for all v on the cycle
Graph Algorithms: Single Source Shortest Path
With Negative Edge Weights
Single-Source All-Destinations Shortest Paths
With Possible Negative Weights
• Premises:
• Directed weighted graph
• Edges may have negative weight/cost
• No cycle whose cost is less than zero
• Objective: To find a shortest path from a given source vertex s to each of the
vertices of a directed graph that follows the above premises
• Solution: To apply the Bellman-Ford Algorithm
• The algorithm was first proposed by Alfonso Shimbel (1955), but is instead
named after Richard Bellman and Lester Ford Jr., who published it in 1958
and 1956, respectively
• It is also sometimes called the Bellman–Ford–Moore algorithm as Edward F.
Moore also published a variation of the algorithm in 1959
• It is slower than Dijkstra's algorithm for the same problem, but more versatile
Bellman-Ford Algorithm: Few Key Points
• Bellman-Ford Algorithm assumes:
• Directed graph
• Possible presence of negative edge weights
• No cycle having negative weight
Bellman-Ford Algorithm: Example 1

[Figure series: a 5-vertex directed graph with source z and vertices u, v, x, y (edge weights 5, 6, −2, −3, 8, 7, −4, 2, 7, 9); successive slides show the distance estimates after each relaxation pass, converging to z = 0, u = 2, v = 4, x = 7, y = −2]
Bellman-Ford Algorithm: Another Look
Note: This is essentially a Dynamic Programming algorithm!!!
Let d(i, j) = cost of the shortest path from s to i using at most j edges

d(i, j) = 0                                                          if i = s and j = 0
d(i, j) = ∞                                                          if i ≠ s and j = 0
d(i, j) = min( {d(k, j−1) + w(k, i) : i ∈ Adj(k)} ∪ {d(i, j−1)} )    if j > 0

              z    u    v    x    y
         i =  1    2    3    4    5
j = 0         0    ∞    ∞    ∞    ∞
j = 1         0    6    ∞    7    ∞
j = 2         0    6    4    7    2
j = 3         0    2    4    7    2
j = 4         0    2    4    7   −2
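The table above can be filled in bottom-up, one row of j at a time. A minimal Python sketch; note that the edge list below is inferred from the worked example's table (an assumption, since the original figure was garbled in extraction):

```python
from math import inf

# Edge list inferred from the worked example (source z); an assumption
# reconstructed from the distance table, not taken verbatim from the figure.
edges = [("z", "u", 6), ("z", "x", 7), ("u", "v", 5), ("u", "x", 8),
         ("u", "y", -4), ("v", "u", -2), ("x", "v", -3), ("x", "y", 9),
         ("y", "v", 7), ("y", "z", 2)]
vertices = ["z", "u", "v", "x", "y"]
s = "z"

n = len(vertices)
# d[(i, j)] = weight of a shortest path from s to i using at most j edges
d = {(i, 0): (0 if i == s else inf) for i in vertices}
for j in range(1, n):
    for i in vertices:
        best = d[(i, j - 1)]                 # option: reuse path with <= j-1 edges
        for (k, t, w) in edges:
            if t == i:                       # option: extend a path ending at k
                best = min(best, d[(k, j - 1)] + w)
        d[(i, j)] = best

print([d[(i, n - 1)] for i in vertices])     # -> [0, 2, 4, 7, -2]
```

The printed row reproduces the j = 4 row of the table.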
Bellman-Ford Algorithm: Example 2

[Figure: a second worked example; lost in extraction]
Bellman-Ford Algorithm: Time Complexity
Bellman-Ford(G, w, s)
1. Initialize-Single-Source(G, s)            // O(|V|)
2. for i := 1 to |V| − 1 do
3.     for each edge (u, v) ∈ E do           // lines 2–4: O(|V||E|)
4.         Relax(u, v, w)
5. for each edge (u, v) ∈ E do               // O(|E|)
6.     if d[v] > d[u] + w(u, v)
7.         then return False                 // there is a negative cycle
8. return True

Overall running time: O(|V||E|)
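The pseudocode above translates directly into Python. A minimal sketch (the three-vertex graph at the bottom is a hypothetical example, not from the slides):

```python
from math import inf

def bellman_ford(vertices, edges, source):
    """Single-source shortest paths with negative edge weights allowed.

    edges: list of (u, v, w) triples of a directed graph.
    Returns (dist, True), or (dist, False) if a negative-weight
    cycle is reachable from the source.
    """
    d = {v: inf for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):       # |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:                    # one extra pass over all edges:
        if d[u] + w < d[v]:                  # any improvement means a
            return d, False                  # negative cycle is reachable
    return d, True

# Tiny hypothetical usage example
dist, ok = bellman_ford(["s", "a", "b"],
                        [("s", "a", 1), ("a", "b", -2), ("s", "b", 4)], "s")
# dist == {"s": 0, "a": 1, "b": -1}, ok == True
```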
Case 1: G does not contain negative cycles that are reachable from s
• Assume the shortest path from s to v is p = ⟨v0, v1, . . . , vk⟩, where s = v0, v = vk, and k ≤ |V| − 1
• By induction on i: once d[vi−1] = δ(s, vi−1), relaxing edge (vi−1, vi) with weight w gives
  d[vi] ≤ d[vi−1] + w = δ(s, vi−1) + w = δ(s, vi)
• Hence after |V| − 1 passes, d[v] = δ(s, v) for every vertex v reachable from s, no further relaxation succeeds, and the algorithm returns True

Case 2: G contains a negative-weight cycle c = ⟨v0, v1, . . . , vk⟩ (v0 = vk) reachable from s
• Proof by contradiction: suppose the algorithm returns a solution (True). Then d[vi] ≤ d[vi−1] + w(vi−1, vi) for i = 1, . . . , k. Summing these inequalities around the cycle, the d-values cancel, giving 0 ≤ Σ w(vi−1, vi) = w(c) < 0. Contradiction!
Graph Algorithms: All Pair Shortest Path
All-Pairs Shortest-Paths Problem
• Problem: Given a directed graph G = (V, E) and a weight function w: E → R, for each pair of vertices u, v, compute the shortest path weight δ(u, v), and a shortest path if one exists.
• Output:
• A |V| × |V| matrix D = (dij), where dij contains the shortest path weight from vertex i to vertex j
• A |V| × |V| matrix Π = (πij), where πij is NIL if either i = j or there is no path from i to j, and otherwise πij is the predecessor of j on some shortest path from i
Structure of a Shortest Path
Suppose W = (wij) is the adjacency matrix such that

wij = 0                        if i = j
wij = weight of edge (i, j)    if i ≠ j and (i, j) ∈ E
wij = ∞                        if i ≠ j and (i, j) ∉ E

Consider a shortest path P from i to j, and suppose that P has at most m edges. Then,
• if i = j, P has weight 0 and no edges;
• if i ≠ j, we can decompose P into i →(P′) k → j, where P′ is a shortest path from i to k using at most m − 1 edges.
Matrix Multiplication: Min-Plus Product
Let lij(m) be the minimum weight of any path from i to j that contains at most m edges
• For m = 0:
• lij(0) = 0 if i = j, and ∞ otherwise
• For m ≥ 1:
• lij(m) = min over 1 ≤ k ≤ n of { lik(m−1) + wkj }
• Computing L(n−1) this way takes O(n^4) time: n − 2 min-plus "matrix products", each costing O(n^3)
• Note: We noticed earlier that the principle of optimality applies to shortest path problems: if vertices i and j are distinct and we decompose a shortest path p into a shortest path p′ from i to k and an edge (k, j), then δ(i, j) = δ(i, k) + wkj
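The min-plus recurrence can be sketched as repeated "matrix multiplication" where + replaces × and min replaces +. A minimal sketch; the 3-vertex weight matrix W at the bottom is taken from the Floyd-Warshall worked example later in these notes:

```python
from math import inf

def min_plus(A, B):
    """Min-plus product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_by_min_plus(W):
    """L(m) = L(m-1) min-plus W, for m = 2 .. n-1.

    n - 2 products of O(n^3) each, i.e. the plain O(n^4) scheme
    described above (repeated squaring would give O(n^3 log n)).
    """
    n = len(W)
    L = [row[:] for row in W]        # L(1) = W
    for _ in range(n - 2):
        L = min_plus(L, W)
    return L

# 3-vertex weight matrix (0 on the diagonal, inf = no edge)
W = [[0, 4, 5], [2, 0, inf], [inf, -3, 0]]
L = all_pairs_by_min_plus(W)
# L == [[0, 2, 5], [2, 0, 7], [-1, -3, 0]]
```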
The Graph and The Weight Matrix

[Figure: a 5-vertex example digraph on v1, …, v5 and its weight matrix W (0 on the diagonal, ∞ where there is no edge); garbled in extraction]
All-Pair Shortest Paths: Dynamic-programming Algorithm
1. Characterize the structure of an optimal solution
• Let Dk[i, j] = weight of a shortest path from i to j whose intermediate vertices all come from {1, …, k}
• D(0) = W
• D(n) = D, which is the goal matrix
• Recurrence: Dk[i, j] = min( Dk−1[i, j], Dk−1[i, k] + Dk−1[k, j] )
• Each time a shorter path from i to j is found, the k that provided the minimum is saved (the highest-index node on the path from i to j)

Example (k = 1):

          1  2  3
P =   1 | 0  0  0        D1[3,2] = min( D0[3,2], D0[3,1] + D0[1,2] )
      2 | 0  0  1                = min( −3, ∞ )
      3 | 0  0  0                = −3
Example: Floyd Warshall's Algorithm

[Figure: 3-vertex digraph with edges 1→2 (weight 4), 1→3 (weight 5), 2→1 (weight 2), 3→2 (weight −3)]

           1   2   3
D1 =  1 |  0   4   5
      2 |  2   0   7
      3 |  ∞  −3   0

D2[1,3] = min( D1[1,3], D1[1,2] + D1[2,3] ) = min( 5, 4 + 7 ) = 5
D2[3,1] = min( D1[3,1], D1[3,2] + D1[2,1] ) = min( ∞, −3 + 2 ) = −1

           1   2   3
D2 =  1 |  0   4   5
      2 |  2   0   7
      3 | −1  −3   0

           1   2   3
P =   1 |  0   0   0
      2 |  0   0   1
      3 |  2   0   0
Example: Floyd Warshall’s Algorithm
1 5 D2 = 1 2 3
1 0 4 5
4 2 3
2 2 0 7
-3
2 3 -1 -3 0
1 2 3
1 0 2 5 D3[1,2] = min(D2[1,2], D2[1,3]+D2[3,2] )
D3 =
2 2 0 7 = min (4, 5+(-3))
3 -1 -3 0 =2
1 2 3
1 0 3 0 D3[2,1] = min(D2[2,1], D2[2,3]+D2[3,1] )
P= 2 0 0 1 = min (2, 7+ (-1))
3 2 0 0 =2
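The P matrix is what lets us recover actual paths: since P[i, j] holds the highest-index intermediate vertex k, the intermediate vertices of a shortest path i → j are those of i → k, then k, then those of k → j. A minimal recursive sketch using the final P matrix of the worked example:

```python
def path(P, i, j):
    """Intermediate vertices on a shortest path from i to j (1-based),
    given a P matrix where 0 means the path is the single edge (i, j)."""
    k = P[i - 1][j - 1]
    if k == 0:
        return []
    return path(P, i, k) + [k] + path(P, k, j)

# Final P matrix from the worked example
P = [[0, 3, 0], [0, 0, 1], [2, 0, 0]]
print(path(P, 1, 2))   # -> [3]: the shortest path 1 -> 2 goes through vertex 3
```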
Floyd-Warshall's Algorithm Using n+1 D matrices

Floyd-Warshall(W)   // Computes shortest distance between all pairs of
                    // nodes, and saves matrix P to enable finding shortest paths
1. D0 ← W                    // initialize D array to W
2. P ← 0                     // initialize P array to all zeros
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             if ( Dk−1[i, j] > Dk−1[i, k] + Dk−1[k, j] )
7.                 then Dk[i, j] ← Dk−1[i, k] + Dk−1[k, j]
8.                      P[i, j] ← k
9.                 else Dk[i, j] ← Dk−1[i, j]
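A Python sketch of the pseudocode, updating D in place rather than keeping n+1 matrices (a standard space saving; the entries being overwritten at step k are exactly the ones the recurrence no longer needs). The 3-vertex W below is the worked example from these slides:

```python
from math import inf

def floyd_warshall(W):
    """All-pairs shortest paths.

    W: n x n weight matrix (0 on the diagonal, inf for missing edges).
    Returns (D, P): D[i][j] is the shortest-path weight, and P[i][j]
    is the highest-numbered intermediate vertex (1-based, as in the
    slides), or 0 if the shortest path is the single edge (i, j).
    """
    n = len(W)
    D = [row[:] for row in W]                # D0 = W
    P = [[0] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k + 1          # record intermediate vertex
    return D, P

# The 3-vertex example from the slides
W = [[0, 4, 5], [2, 0, inf], [inf, -3, 0]]
D, P = floyd_warshall(W)
# D == [[0, 2, 5], [2, 0, 7], [-1, -3, 0]]
# P == [[0, 3, 0], [0, 0, 1], [2, 0, 0]]
```

The three nested loops give the O(n^3) running time; note that k (the set of allowed intermediate vertices) must be the outermost loop.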