Algorithms
(Introduction to Algorithms)
Professor Ruo-Wei Hung(洪若偉)
Algorithms by R.-W. Hung, Dept. CSIE, CYUT, Taiwan 11
Course Introduction
Instructor
洪若偉 (Prof. Ruo-Wei Hung)
E-mail: [email protected]
Web-site: http://www.cyut.edu.tw/~rwhung
Tel: 04-2332-3000 ext. 7758
Office Hours:
Tuesday 13:30~15:30 (Office E724)
Thursday 10:30~12:30 (Office E724)
Personal Information
。One wife, one son, one daughter
。One house, one plot of land, four vehicles (including a bicycle and a scooter), one mortgage, one fund
R.-W. Hung 2014, Introduction to Algorithms
Text & Reference books
Textbook:
1. R.C.T. Lee (李家同), R.C. Chang (張瑞川), S.S. Tseng (曾憲雄), and Y.T. Tsai (蔡英德), "Introduction to the Design and Analysis of Algorithms: A Strategic Approach", McGraw-Hill, Taipei, 2005. (Distributed in Taiwan by 旗標.)
Course Outline
Chapter 1: Introduction
Chapter 2: The Complexity of Algorithms and Lower Bounds of
Problems
Chapter 3: The Greedy Method
Chapter 4: The Divide-and-Conquer Strategy
Chapter 5: Tree Searching Strategies
Chapter 6: Prune-and-Search Strategy
Chapter 7: Dynamic Programming
Chapter 8: The Theory of NP-complete
Chapter 9: Approximation Algorithms
____________________________________________________________
Chapter 10:Amortized analysis (Option)
Chapter 11:Randomized algorithms (Option)
Chapter 12:On-line algorithms (Option)
How to solve a problem or study a new topic?
The 5W2H analysis method
2W-1H-1E:
。What?
。Why?
。How?
。Extension!
Why do we need to study Algorithms ?
End of Course Introduction
Chapter 1
Introduction
Topic Overview
1.1 What is an Algorithm ?
1.2 Why should We Study Algorithms ?
1.3 Algorithm Specification
1.4 Varieties of Algorithms
1.5 Goals of Learning Algorithms
1.6 Analysis of Algorithms
1.1 What is an algorithm ?
Definition
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In addition, every algorithm must satisfy the following criteria:
。Input (zero or more inputs)
。Output (at least one output)
。Definiteness (each instruction is clear and unambiguous)
。Finiteness (terminates after a finite number of steps in every case)
。Effectiveness (each instruction is basic and executable)
A program is the expression of an algorithm in a
programming language.
Algorithm + Data Structure = Program
Ex 1.1
1.2 Why should we study algorithms ?
Confused concept:
。It is commonly believed that high-speed computation requires only a high-speed computer.
。That is not entirely true.
。A good algorithm implemented on a slow computer may perform much better than a bad algorithm implemented on a fast computer.
[Figure: performance comparison of a bad algorithm on a better computer vs. a good algorithm on a worse computer]
Example: Sorting
。Problem definition:
Input: a sequence of unsorted data elements
Output: the sequence sorted in increasing order
。Two algorithms:
Insertion sort
Quick sort
Example: Sorting (Cont.)
。Insertion sort
Examine the elements from left to right one by one; each element is inserted into its appropriate place in the already-sorted prefix. (Like sorting a hand of playing cards.)
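The card-style insertion just described can be sketched in Python. This is our own sketch (the slides give no code, and the function name is ours):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # shift the larger elements of the sorted prefix one place right
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```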
[Figure 1-1: running time in seconds vs. number of data items (200 to 1800), comparing insertion sort on a VAX 8800 with quicksort on a PC/XT]
A fast computer with a poor algorithm may perform
worse than a slow computer with a good algorithm. Ex 1.4
1.3 Algorithm specification
Description:
。Natural language (not precise enough)
。Flowchart (practical only for small programs)
。Pseudocode
。A combination of natural language and pseudocode
Bonus problem #1; Ex 1.5
1.6 Analysis of algorithms
• Measure the goodness of algorithms
。efficiency
。asymptotic notations, e.g., O(n²)
。worst case
。average case
。amortized
• Measure the difficulty of problems
。NP-complete
。undecidable
。lower bound
• Is the algorithm optimal?
0/1 Knapsack Problem
Problem definition:
Input: a set of n items, where each item Pi has a value Vi and a weight Wi, and a limit M on the total weight
Output: a subset of the items whose total weight does not exceed M and whose total value is maximized
(A bag can hold total weight M; select items of maximum total value to put into the bag so that the sum of their weights is at most M.)
• Example input:
Item:    P1  P2  P3  P4  P5  P6  P7  P8
Value:   10   5   1   9   3   4  11  17
Weight:   7   3   3  10   1   9  22  15
。M (weight limit) = 14
。Best solution: P1, P2, P3, P5 (optimal)
。This problem is NP-complete.
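Because the problem is NP-complete, exhaustively trying all 2^n subsets is a legitimate (if exponential) solver for tiny instances like the one above. A sketch of our own, with names of our choosing:

```python
from itertools import combinations

def knapsack_01(values, weights, limit):
    """Exhaustive search over all subsets; fine for small n only."""
    n = len(values)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            # keep the subset only if it fits and improves the value
            if w <= limit and v > best_value:
                best_value, best_subset = v, subset
    return best_value, best_subset
```

On the example above (M = 14) this returns the optimal value 19 for {P1, P2, P3, P5}.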
Traveling Salesperson Problem
Problem definition:
Input: a set of n planar points
Output: a closed tour which includes all points exactly once
such that its total length is minimized.
[Figure: a set of planar points P1 to P8 and a shortest closed tour through all of them]
Partition Problem
Problem definition:
Input: a set S of positive integers
Output: two subsets S1 and S2 of S such that S1 ∩ S2 = ∅, S1 ∪ S2 = S, and the sum over S1 equals the sum over S2
• Example:
。S = {1, 7, 10, 9, 5, 8, 3, 13}
。S1 = {1, 10, 9, 8} and S2 = {7, 5, 3, 13} (both sums are 28)
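A brute-force partition check can be sketched as follows (our own illustration; the problem is NP-complete, so this only suits small sets):

```python
from itertools import combinations

def partition(s):
    """Return (S1, S2) with equal sums, or None if no partition exists."""
    total = sum(s)
    if total % 2:
        return None  # an odd total can never split evenly
    for r in range(len(s) + 1):
        for idx in combinations(range(len(s)), r):
            if sum(s[i] for i in idx) == total // 2:
                s1 = [s[i] for i in idx]
                s2 = [s[i] for i in range(len(s)) if i not in idx]
                return s1, s2
    return None
```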
Art Gallery Problem
Problem definition:
Input: an art gallery
Output: min # of guards and their placements such that the
entire art gallery can be monitored.
• Example:
Minimal Spanning Trees
Spanning tree:
。Given a weighted graph G, a spanning tree T is a tree where all
vertices of G are vertices of T and if an edge of T connects vi and vj,
its weight is the weight of e(vi, vj) in G.
。A minimal spanning tree of G is a spanning tree of G whose total
weight is minimized.
Problem definition:
Input: a weighted graph G with weights on edges
Output: a minimal spanning tree of G.
• Example:
。# of possible spanning trees of a complete graph on n points: n^(n−2)
。n = 10 → 10^8; n = 100 → 100^98 = 10^196
。Solvable by the greedy method
Convex Hull
Problem definition:
Input: a set of planar points
Output: the smallest convex polygon that contains all of the points
• Example:
。Solvable by the prune-and-search strategy
Ex 1.6
Many strategies, such as the greedy approach, the
divide-and-conquer approach and so on, will be
introduced in this course.
End of Chapter 1
Chapter 2
The Complexity of Algorithms and
the Lower Bounds of Problems
Topic Overview
2.0 Performance Measurement
2.1 The Time Complexity of an Algorithm
2.2 The Best-, Average-, Worst-Case Analysis of
Algorithms
2.3 The Lower Bound of a Problem
2.4 The Worst-Case Lower Bound of Sorting
2.5 Heap Sort: Optimal Sorting in Worst Cases
*2.6 The Average-Case Lower Bound of Sorting (optional)
*2.7 Improving a Lower Bound through Oracles (optional)
*2.8 Finding the Lower Bound by Problem Transformation
2.0 Performance Measurement
Definition (complexity)
。The space complexity of an algorithm is the amount of
memory it needs to run to completion.
。The time complexity of an algorithm is the amount of
computer (CPU) time it needs to run to completion.
Time complexity
Notes:
。When analyzing an algorithm, wall-clock time is not a suitable measure, because every computer runs at a different speed.
。Instead, we express a program's running time in program steps: each statement counts as one step.
• For instance, return (a + b + b*c + (a+b*c)/(a+b) + 4.0); counts as one program step.
。By adding a counter (a global variable) at each statement, we can actually count the number of steps the algorithm executes.
• Example:
• Example:
Adding a counter to an algorithm
float Sum(float a[], int n) {
    float s = 0.0; count++;          // count is global; counts s = 0.0
    for (int j = 1; j <= n; j++) {   // assumes a[1..n]
        count++;                     // counts the test j <= n
        s += a[j]; count++;
    }
    count++;                         // the final, failing test j <= n
    return s;
}
Measurement of the goodness of an algorithm
Measurement of the difficulty of a problem
• NP-complete? (Chapter 8)
• Is the algorithm the best?
。optimal (algorithm)
• Lower bound of a problem
• Example:
。We can use the number of comparisons to measure a sorting
algorithm
2.1 The Time Complexity of an Algorithm
Asymptotic notations
Definition (Big-O)
。f(n) = O(g(n)): "at most", an upper bound
There exist c > 0 and n0 such that |f(n)| ≤ c·|g(n)| for all n ≥ n0
(n0 is the smallest input size from which the bound holds)
Example:
f(n) = 3n² + 2, g(n) = n²: take n0 = 2, c = 4
⇒ f(n) = O(n²)
f(n) = n³ + n = O(n³)
f(n) = 3n² + 2 = O(n³) or O(n¹⁰⁰), but O(n²) is the tightest bound
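The witnesses c and n0 in the definition can be spot-checked numerically. A small sketch of our own (a finite check, not a proof):

```python
def holds(f, g, c, n0, upto=1000):
    """Check |f(n)| <= c*|g(n)| for all n0 <= n <= upto."""
    return all(abs(f(n)) <= c * abs(g(n)) for n in range(n0, upto + 1))

# 3n^2 + 2 = O(n^2), witnessed by c = 4, n0 = 2
f = lambda n: 3 * n * n + 2
g = lambda n: n * n
```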
Definition (Ω)
。f(n) = Ω(g(n)): "at least", a lower bound
There exist c > 0 and n0 such that |f(n)| ≥ c·|g(n)| for all n ≥ n0
。Example:
f(n) = 3n² + 2 = Ω(n²), and also Ω(n)
Definition (Θ)
。f(n) = Θ(g(n)): "exactly"
There exist c1, c2 > 0 and n0 such that c1·|g(n)| ≤ |f(n)| ≤ c2·|g(n)| for all n ≥ n0
。Example:
f(n) = 3n² + 2 = Θ(n²)
Advanced illustration of the asymptotic notations
。Examples:
• 3n+2 = Θ(n)          • 6·2ⁿ + n² = Ω(2ⁿ)
• 3n+3 = Θ(n)          • 6·2ⁿ + n² = Ω(n)
• 100n+6 = Θ(n)        • 6·2ⁿ + n² = Ω(1)
• 10n²+4n+2 = Θ(n²)    • 3n+2 ≠ Θ(n²)
• 3n²+3 = Ω(n)         • 10n²+4n+2 ≠ Θ(n³)
When f(n) = Ω(g(n)), g(n) is only one lower bound of f(n); among lower bounds, the larger (tighter) one is the better.
Theta (Θ)
。f(n) = Θ(g(n)) (read as "f of n is theta of g of n") if and only if there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0
Diagram (Θ, O, Ω)
Upper Bound and Lower Bound
Time complexity function
[Figure: running time f(n) as a function of the input size n]
Comparison among time functions
。O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!) < O(nⁿ)
Ex 2.3 Ex 2.2
Algorithm A: O(n³); algorithm B: O(n).
。Does Algorithm B always run faster than Algorithm A?
Not necessarily.
。But, it is true when n is large enough!
Bonus problem #2
2.2 The Best-, Average-, Worst-Case
Analysis of Algorithms
。Best case: the easiest to analyze
。Worst case
。Average case: the hardest to analyze
Notes:
。Best-case analysis is the easiest, worst-case analysis is the second easiest, and average-case analysis is the hardest.
。Many open problems concern average-case analysis.
The examples concerning the three cases analysis
。Straight Insertion Sort
。Binary Search
。Quick Sort
。2-D Ranking Finding* (follows Sec. 4.1)
Example 2-1: Straight Insertion Sort
Inversion-Table Analysis of the Number of Movements
。(a1, a2, ..., an): a permutation of n numbers
。(d1, d2, ..., dn): its inversion table, where dj is the number of elements to the left of aj that are greater than aj
。Example:
permutation (7 5 1 4 3 2 6) → inversion table (0 1 2 2 3 4 1)
(e.g., d4 = 2 is the number of inner-loop movements needed when inserting the element 4)
permutation (7 6 5 4 3 2 1) → inversion table (0 1 2 3 4 5 6)
。M, the number of data movements made by straight insertion sort (inserting aj costs dj inner-loop movements plus 2 outer-loop movements):
M = Σ_{j=2..n} (2 + dj) = 2(n−1) + Σ_{j=2..n} dj.
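The inversion table in the definition above is easy to compute directly. A one-line sketch of our own:

```python
def inversion_table(perm):
    """d_j = number of elements to the left of a_j that are greater than a_j."""
    return [sum(1 for x in perm[:j] if x > perm[j]) for j in range(len(perm))]
```

It reproduces both slide examples.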
Analysis by the Inversion Table
Worst case (input in decreasing order): dj = j−1 for every j, so
M = 2(n−1) + Σ_{j=2..n} (j−1) = 2(n−1) + n(n−1)/2 = O(n²).
Average case:
。The probability that xj is the i-th largest among the first j elements is 1/j.
。Expected number of movements for inserting xj:
(2 + 3 + ... + (j+1)) / j = (j+3)/2.
。Therefore
M = Σ_{j=2..n} (j+3)/2 = (n+8)(n−1)/4 = O(n²).
Ex 2.4
Example 2-2: Binary Search
Average case analysis for binary search
k = ⌊log₂ n⌋ + 1 levels; 2^(i−1) elements are found after exactly i comparisons.
Claim: Σ_{i=1..k} i·2^(i−1) = 2^k (k−1) + 1.
Proof. (By mathematical induction on k)
k = 1: Σ_{i=1..1} i·2^(i−1) = 1 = 2¹(1−1) + 1.
Assume the claim holds for k = m, m ≥ 1: Σ_{i=1..m} i·2^(i−1) = 2^m (m−1) + 1.
Let k = m+1. Then
Σ_{i=1..m+1} i·2^(i−1) = Σ_{i=1..m} i·2^(i−1) + (m+1)·2^m
= [2^m (m−1) + 1] + (m+1)·2^m
= 2^m · 2m + 1
= 2^(m+1) · m + 1 = 2^(m+1) ((m+1) − 1) + 1.
By induction, the equation holds true.
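The closed form proved above can also be sanity-checked numerically (our own sketch):

```python
def lhs(k):
    """Sum of i * 2^(i-1) for i = 1..k."""
    return sum(i * 2 ** (i - 1) for i in range(1, k + 1))

def rhs(k):
    """Closed form 2^k * (k - 1) + 1."""
    return 2 ** k * (k - 1) + 1
```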
Average case analysis for binary search (Cont.)
Ex 2.5
Example 2-4: Quick Sort
Basic Concept
。 Quick Sort is based upon the divide-and-conquer strategy (see
Chapter 4).
。 Divide a problem into two subproblems, solve the two subproblems individually and independently, and then merge the results.
Idea
。 Pick an element x as the pivot. Put the elements < x in the left set L and the elements > x in the right set R.
。 Apply quicksort to solve the sorting problem on L and R recursively.
[Diagram: after one round the list looks like L (< x), x, R (> x)]
Example (one round, pivot = 11):
11  5 24  2 31  7  8 26 10 15
11  5 10  2 31  7  8 26 24 15   (swap 24 and 10)
11  5 10  2  8  7 31 26 24 15   (swap 31 and 8)
 7  5 10  2  8 11 31 26 24 15   (the pointers cross; swap the pivot into place)
|← elements < 11 →|  |← elements > 11 →|
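The rounds above can be expressed compactly in Python. This functional sketch (our own; the slides use an in-place two-pointer partition) uses the first element as the pivot:

```python
def quicksort(a):
    """Sort by partitioning around the first element, then recursing."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]    # the set L
    right = [x for x in rest if x >= pivot]  # the set R
    return quicksort(left) + [pivot] + quicksort(right)
```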
Best Case: O(n log n)
。A list is split into two sublists with almost equal size.
Worst Case: O(n²)
。In each round, the pivot is either the smallest or the largest element, so each round places only one element.
。Total time T(n) = (n−1) + (n−2) + ... + 2 + 1 = n(n−1)/2 = O(n²).
Telescoping the recurrence T(n) = T(n−1) + cn:
T(n) = T(n−1) + cn
     = T(n−2) + c(n−1) + cn
     = ...
     = T(1) + c(2 + 3 + ... + n) = O(n²).
Average Case: O(n log n)
T(n) = (1/n) Σ_{s=1..n} (T(s) + T(n−s)) + cn
     = (1/n) (T(1)+T(n−1) + T(2)+T(n−2) + ... + T(n)+T(0)) + cn, where T(0) = 0
     = (1/n) (2T(1) + 2T(2) + ... + 2T(n−1) + T(n)) + cn.
Average Case: (Cont.)
T(n) = (1/n)(2T(1) + 2T(2) + ... + 2T(n−1) + T(n)) + cn
⇒ nT(n) = 2T(1) + 2T(2) + ... + 2T(n−1) + T(n) + cn²
⇒ (n−1)T(n) = 2T(1) + 2T(2) + ... + 2T(n−1) + cn²  .............. Eq. (1)
Average Case: (Cont.)
。Subtracting the corresponding equation for n−1 from Eq. (1) gives, recursively,
T(n)/n = T(n−1)/(n−1) + c(1/n + 1/(n−1))
T(n−1)/(n−1) = T(n−2)/(n−2) + c(1/(n−1) + 1/(n−2))
T(n−2)/(n−2) = T(n−3)/(n−3) + c(1/(n−2) + 1/(n−3))
...
T(2)/2 = T(1)/1 + c(1/2 + 1/1).
。Summing (the terms telescope), we then have
T(n)/n = T(1) + c Σ_{j=2..n} (1/j + 1/(j−1))
       = T(1) + c[(H_n − 1) + (H_n − 1/n)]
       = T(1) + c(2H_n − 1 − 1/n) ≤ 2cH_n.
Average Case: (Cont.)
。Harmonic number [Knuth 1986]:
H_n = 1 + 1/2 + 1/3 + ... + 1/n
    = ln n + γ + 1/(2n) − 1/(12n²) + 1/(120n⁴) − ε, where 0 < ε < 1/(252n⁶) and γ ≈ 0.5772156649... is Euler's constant.
Hence H_n = O(log n).
。Finally, from T(n)/n ≤ 2cH_n we have
T(n) ≤ 2cnH_n − c(n+1) = O(n log n) ≈ 1.40 n log n.
Ex 2.6
2.3 The Lower Bound of a Problem
What ?
。How do we measure the difficulty of a problem?
Notes:
• The Ω notation is used to describe lower bounds.
• The lower bound of a problem is not unique.
• Ω(1), Ω(n), and Ω(n log n) are all lower bounds for sorting (Ω(1) and Ω(n) are trivial).
• A higher lower bound is obtained by theoretical analysis, not by pure guessing.
Categories of Lower bound
。Worst-case lower bound
▫ When the worst-case time complexity is used, the bound is called the worst-case lower bound.
。Average-case lower bound
▫ When the average-case time complexity is used, the bound is called the average-case lower bound.
Explanation on lower bound
Summary of lower bounds
The lower bound of a problem is the minimum time complexity that any algorithm solving the problem must spend.
A higher lower bound is more informative.
If the lower bound of a problem P is lower than the time complexity of the best known algorithm A for P, then either the lower bound can be raised, or a better algorithm for P may exist.
If the lower bound of P equals the time complexity of the best algorithm OPT for P, then neither can be improved, and OPT is an optimal algorithm for P.
Ex 2.7
2.4 The Worst-Case Lower Bound of Sorting
Permutation for data elements
。6 permutations for 3 data elements a1, a2, a3
a1 a2 a3
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
Straight Insertion Sort:
Decision Tree for Straight Insertion Sort:
[Decision-tree figure: internal nodes compare pairs such as a1:a2 and a2:a3; each leaf corresponds to one original input ordering. If we sort a list in increasing order, then (ai, aj) are exchanged when ai > aj.]
Figure 2-7: The binary decision tree describing the bubble sort
Lower Bound of Sorting
For n = 3, the decision tree has 3! = 6 leaf nodes.
Approximating log n!
Method 1:
log n! = log(n·(n−1)···2·1)
       = log n + log(n−1) + ... + log 2 + log 1
       = Σ_{i=1..n} log i
Method 2: (Stirling's approximation)
。When n is very large, we have from calculus that
n! ≈ S_n = √(2πn) · (n/e)^n, where S_n is called the Stirling approximation.
。Then
log n! ≈ log(√(2πn) · (n/e)^n)
       = (1/2) log 2π + (1/2) log n + n log(n/e)
       = n log n − 1.44n + O(log n) = Θ(n log n).
n      n!             S_n
1      1              0.922
2      2              1.919
3      6              5.825
4      24             23.447
5      120            118.02
6      720            707.39
10     3,628,800      3,598,600
20     2.433×10^18    2.423×10^18
100    9.333×10^157   9.328×10^157
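The table above is easy to regenerate. A small sketch of our own using the formula just stated:

```python
import math

def stirling(n):
    """S_n = sqrt(2*pi*n) * (n/e)^n, the Stirling approximation to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n
```

For example, stirling(5) reproduces the 118.02 entry, and the relative error shrinks as n grows.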
Theorem 2.1
Any comparison-based sorting algorithm must perform at least ⌈log(n!)⌉ comparisons in the worst case.
Thus, the worst-case lower bound of any comparison-based sorting algorithm is Ω(n log n).
2.5 Heap Sort: Optimal Sorting in Worst Cases
Heap
A heap is a binary tree satisfying the following conditions:
。it is a complete binary tree:
▫ if its height is h, leaves appear only at level h or h−1
▫ all leaves at level h are as far to the left as possible
。every parent ≥ its children (so the root holds the maximum)
Phase 1: Construction of a heap
。Input data: 4, 37, 26, 15, 48 (place them into a complete binary tree in order)
Phase 2: Output the maximum and restore the heap
(recursively)
Implementation of heap sort
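The two phases can be implemented as follows. This is a sketch of our own (array-based max-heap; `sift_down` is our name for the restore step):

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at start, within a[0:end]."""
    root = start
    while 2 * root + 1 < end:
        child = 2 * root + 1
        if child + 1 < end and a[child + 1] > a[child]:
            child += 1               # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heap_sort(a):
    n = len(a)
    # Phase 1: build a max-heap bottom-up (O(n))
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n)
    # Phase 2: repeatedly move the maximum to the end and restore (O(n log n))
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
    return a
```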
Time complexity of heap sort
。Phase 1 (building the heap): using the identity Σ_{L=0..k} L·2^(L−1) = 2^k(k−1) + 1 (see the binary-search analysis above), the total number of comparisons for building a heap of n elements bottom-up is
cn − 2 log n − 4, where 2 ≤ c ≤ 4,
which is O(n).
Phase 2: Output the maximum and restore the heap
。After deleting the root of a heap in which i elements remain, at most 2⌈log i⌉ comparisons are needed to restore the heap.
。Therefore, the number of comparisons needed to delete all nodes is
Σ_{i=1..n−1} 2⌈log i⌉ ≈ 2(log 1 + log 2 + ... + log(n−1))
≤ 2 log(n!) = O(n log n)
(more precisely, at most 2n log n − 4cn + 4 comparisons, where 2 ≤ c ≤ 4).
Heap sort is optimal for sorting in the worst case.
Ex 2.8
*2.6 The Average-Case Lower Bound of Sorting
Unbalanced tree: external path length = 4·3 + 1 = 13
Balanced tree: external path length = 2·3 + 3·2 = 12
Compute the minimum external path length
。External path length of a binary decision tree with c leaves, all on levels d−1 and d:
M = x1·(d−1) + x2·d, with x1 = 2^d − c leaves on level d−1 and x2 = 2(c − 2^(d−1)) leaves on level d
  = (2^d − c)(d−1) + 2(c − 2^(d−1))·d
  = cd + c − 2^d, where d = ⌈log c⌉
  ≥ c log c + c − 2^⌈log c⌉.
。With c = n! leaves:
M ≥ n! log n! + n! − 2^⌈log n!⌉ ≥ n! log n! − n!.
The average-case time complexity is therefore at least
M / n! ≥ log n! − 1 = Ω(n log n).
Heap Sort
。Worst case time complexity of heap sort is O(n log n).
。Average case time complexity of heap sort is O(n log n).
。Average case lower bound of sorting: (n log n).
。Worst case lower bound of sorting: (n log n).
Heap sort is optimal in the average case and worst case.
*2.7 Improving a Lower Bound through Oracles
(1) Binary decision tree:
。There are C(m+n, n) ways to merge a sorted list A of m elements into a sorted list B of n elements,
so the decision tree has C(m+n, n) leaf nodes and the lower bound is ⌈log C(m+n, n)⌉ comparisons.
。When m = n:
log C(2m, m) = log((2m)! / (m!)²) = log(2m)! − 2 log(m!) ≈ 2m − O(log m).
(2) Oracle:
The very hard case: a1 < b1 < a2 < b2 < ... < am < bm
(2) Oracle: (Cont.)
。We must compare:
a1 : b1
b1 : a2
a2 : b2
...
b(m−1) : am
am : bm
Otherwise, we may get a wrong result for some input data.
e.g., if b1 and a2 are not compared, we cannot distinguish
a1 < b1 < a2 < b2 < ... < am < bm from
a1 < a2 < b1 < b2 < ... < am < bm.
(Which one of b1 or a2 should be placed next to a1?)
。Thus, at least 2m−1 comparisons are required.
The conventional (linear) merging algorithm is therefore optimal for m = n.
*2.8 Finding the Lower Bound by Problem
Transformation
Problem A reduces to problem B (denoted by A ∝ B)
。 iff A can be solved by using any algorithm which solves B.
。 If A ∝ B, then B is at least as difficult as A.
。 Diagram:
instance of A → [transformation, time T(tr1)] → instance of B → [solver of B, time T(B)] → answer of B → [transformation, time T(tr2)] → answer of A
Note:
(1) T(tr1) + T(tr2) < T(B)
(2) T(A) ≤ T(tr1) + T(tr2) + T(B) = O(T(B))
The lower bound of the Convex Hull problem
。Sorting reduces to the convex hull problem (Figure 2-17):
time of solving Sorting = time of solving Convex Hull + transformation time,
so lower bound of Convex Hull ≥ lower bound of Sorting − transformation time.
∵ The lower bound of Sorting is Ω(n log n) and the transformation takes O(n)
∴ The lower bound of Convex Hull = Ω(n log n) − O(n) = Ω(n log n).
2.9 Exercises
1. (a) Describe an O(n log n)-time algorithm that, given a set S of n
integers and another integer x, determines whether or not there exist
two elements in S whose sum is exactly x. (b) Please give an example
to illustrate your algorithm.
2. Is 2^(n+1) = O(2^n)? Is 2^(2n) = O(2^n)?
3. What are the minimum and maximum numbers of elements in a heap
of height h?
4. Show that an n-element heap has height ⌊log n⌋.
5. Show that quicksort's best-case running time is Ω(n log n).
Bonus problems #3 and #4
End of Chapter 2
Chapter 3
The Greedy Method
Topic Overview
3.0 The Greedy Method
3.1 Kruskal’s Method for Minimal Spanning Trees
3.2 Prim’s Method for Minimal Spanning Trees
3.3 The Single-Source Shortest Path Problem
3.4 The 2-Way Merging Problem
*3.5 The Minimal Cycle Basis Problem
*3.6 The 2-Terminal One-to-Any Special Channel Routing Problem
3.7 The Fractional Knapsack Problem
3.0 The Greedy Method
What?
。Suppose that a problem can be solved by a sequence of decisions. The greedy method makes each decision locally optimally, and these locally optimal choices are intended to add up to a globally optimal solution.
How?
。Construct a "feasible" solution step by step.
。At each step, "select" the best feasible candidate.
Notes:
。Useful for constructing heuristic or approximation algorithms.
。Only a few optimization problems can be solved exactly by the greedy method.
。"Heuristic" comes from the Greek heuriskein, "to discover"; a heuristic algorithm usually finds an answer that is not optimal.
Structure
Algorithm Greedy(A, n)
// A: a set of n input data items
// S: the solution
S ← ∅;
for i := 1 to n do
    x ← select the "best" candidate in the set A;   // selection follows some fixed order
    if (S ∪ {x}) is feasible
        then S ← S ∪ {x};   // add x only if the solution stays feasible
    A ← A − {x};
end for
end Greedy
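The structure above can be rendered as a generic Python skeleton. This is our own sketch; `key` and `feasible` are parameters we introduce to stand for the selection order and the feasibility test:

```python
def greedy(items, key, feasible):
    """Generic greedy skeleton: take candidates best-first and keep one
    only if the partial solution stays feasible."""
    solution = []
    for x in sorted(items, key=key, reverse=True):
        if feasible(solution + [x]):
            solution.append(x)
    return solution
```

For instance, picking the k = 3 largest numbers: greedy(A, key=lambda x: x, feasible=lambda s: len(s) <= 3).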
A Simple Example
Problem: Given a set A of n numbers, we are asked to pick out k numbers such that the sum of these k numbers is the largest among all possible ways of picking out k numbers.
Greedy Algorithm:
for i = 1 to k
    select the largest number of A and delete it from A.
(Compare this with the algorithm that sorts A first; see Figure 3-1.)
Figure 3-1
Shortest path on a multi-stage graph
。Problem: Find a shortest path from v0 to v3 in the multi-stage graph of Figure 3-2.
[Figure 3-2: greedily taking the cheapest edge at each stage, e.g., 5 + d_min(v1,3, v3) vs. 7 + d_min(v1,4, v3), need not give the overall shortest path.]
Illustrations of the Greedy Method
Remarks
。The greedy method usually aims at an optimal solution.
。From n input data items we form feasible solutions and, according to some objective function, look for the feasible solution that maximizes or minimizes it.
。The distinguishing feature of the greedy method is that it considers one input item at a time, in some fixed order. Before a new item is considered there is already a best partial solution; if adding the item does not keep the solution feasible, the item is simply not added.
。This implicitly assumes that the current best partial solution is part of the final optimal solution. (No looking ahead.)
。The greedy method does not work for every problem, but it is the most intuitive way to attack one.
Ex 3.2, Ex 3.1
3.1 Kruskal's Method for Minimal Spanning Trees
Definition (Spanning Tree)
。Given a weighted connected undirected graph G = (V, E), a spanning tree of G is an undirected tree T = (V, E') with E' ⊆ E. That is, T covers all vertices of G.
Example:
。A weighted graph G vs. a spanning tree of G
[Figure: a 4-vertex graph on {a, b, c, d} with weighted edges, and one of its spanning trees]
Two famous algorithms for the MST problem:
Kruskal’s algorithm: O(|E| log|E|)
Prim’s algorithm: O(|E|+|V|log|V|)
Both are greedy algorithms.
Kruskal’s Algorithm
Example 1:
[Figure: Kruskal's algorithm on the 4-vertex example graph; edges are added in increasing order of weight, skipping any edge that would form a cycle.]
Example 2:
The details of constructing an MST
。How do we check whether a cycle is formed when a new edge is added?
• By the SET UNION-FIND method.
• Each tree in the current forest represents a SET.
• If (u, v) ∈ E and u, v are in the same set, then adding (u, v) would form a cycle.
• If (u, v) ∈ E, u ∈ S1 and v ∈ S2 with S1 ≠ S2, then perform the UNION of S1 and S2.
Each operation runs in nearly O(1) amortized time.
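Kruskal's method with the union-find test can be sketched in Python as follows (our own sketch; we use path compression only, which already gives near-constant amortized finds):

```python
def kruskal(n, edges):
    """n vertices (0..n-1); edges is a list of (weight, u, v).
    Returns (total weight, list of MST edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    total, tree = 0, []
    for w, u, v in sorted(edges):          # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) forms no cycle
            parent[ru] = rv                # UNION of the two sets
            total += w
            tree.append((u, v))
    return total, tree
```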
The details of constructing an MST (cont.)
。How do we check whether a cycle is formed when a new edge is added? (Cont.)
• Union and Find operations (Figure 3-9: a spanning forest with sets S1 and S2):
Find(3) = S1 and Find(4) = S1: if edge (3, 4) is added, 3 and 4 belong to the same set, so a cycle would be formed.
Find(4) = S1 and Find(5) = S2: if edge (4, 5) is added, 4 and 5 belong to different sets, so a new tree on the vertices (1, 2, ..., 9) is created by the union of S1 and S2.
Thus we are always performing the union of two sets.
Time Complexity of Kruskal’s Algorithm
Ex 3.3
Correctness of Kruskal’s Algorithm
3.2 Prim's Method for Minimal Spanning Trees
Based upon the vertices of the graph
Algorithm Prim
Input: A weighted, connected and undirected graph G = (V, E).
Output: A minimum spanning tree T for G.
Method.
1. Let x be any vertex of G; T = {x} and Y = V − {x};
2. While Y is not empty do
(2.1) Choose an edge (u, v) from E such that u ∈ T, v ∈ Y, and (u, v) has the smallest weight among all edges between T and Y;
(2.2) Connect u to v;
(2.3) T = T ∪ {(u, v)}; Y = Y − {v};
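The while-loop above maps naturally onto a priority queue of boundary edges. A sketch of our own using Python's heapq (we return only the total weight for brevity):

```python
import heapq

def prim(graph, start):
    """graph: {u: [(weight, v), ...]} adjacency list of a connected undirected graph.
    Grows the tree from start, always via the lightest edge between T and Y."""
    visited = {start}
    heap = list(graph[start])
    heapq.heapify(heap)
    total = 0
    while heap:
        w, v = heapq.heappop(heap)
        if v in visited:
            continue            # edge no longer crosses the cut
        visited.add(v)
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total
```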
Example 1:
[Figure: Prim's algorithm on the 4-vertex example graph; the tree T grows one vertex at a time, always via the lightest edge between T and Y.]
Example 2:
Time Complexity of Prim’s Algorithm
Ex 3.4
3.3 The Single-Source Shortest Path Problem
Definition (SSSP Problem)
。Given a directed (or undirected) graph G = (V, E) with non-negative weights on its edges and a vertex x in V, find the shortest paths from x to all other vertices.
Dijkstra's Algorithm
Method
。Construct a set S ⊆ V such that the shortest path from v0 to each vertex in S lies wholly within S.
Algorithm Dijkstra
Input: A weighted connected graph G = (V, E) and a source v0.
Output: The shortest paths from v0 to all other vertices in G.
Method.
S = {v0};
/* initialization; |V| = n+1 */
for i = 1 to n do
    if (v0, vi) ∈ E then L(vi) = cost(v0, vi) else L(vi) = ∞;
/* greedy step */
for i = 1 to n do
    choose u from V−S such that L(u) is the smallest;
    S = S ∪ {u};   /* put u into S */
    for all w in V−S do
        L(w) = min{L(w), L(u) + cost(u, w)};
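The algorithm above can be implemented with a priority queue instead of a linear scan for the minimum. A sketch of our own (lazy deletion: stale heap entries are skipped):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(cost, v), ...]} with non-negative costs.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale entry
        for cost, v in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v] = nd  # L(v) = min{L(v), L(u) + cost(u, v)}
                heapq.heappush(heap, (nd, v))
    return dist
```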
Illustration of Dijkstra's Algorithm
[Figure 3-17: the two vertex sets S and V−S. For each vertex w in V−S,
L(w) = min{L(w), L(u) + cost(u, w)},
where L(u) + cost(u, w) is the length of the shortest path from v0 through u to w.]
Example 1: Shortest paths from v0 to all destinations
。S: the set of vertices x for which the shortest path from v0 to x has already been finalized
。L(vi): the length of the currently best path from v0 to vi
。[Figure: the example graph; the greedy loop repeatedly moves the vertex of smallest L from V−S into S.]
Time Complexity of Dijkstra’s Algorithm
Ex 3.5
3.4 The 2-Way Merging Problem
Definition (2-Way Merging Problem)
。Merge k sorted lists (k ≥ 2) into one sorted list.
How to solve the 2-way merge problem?
。If more than two sorted lists are to be merged, we can still
apply the linear merge algorithm, which merges two sorted
lists, repeatedly.
。These merging processes are called 2-way merge because
each merging step only merges two sorted lists.
。Merging k sorted lists into one sorted list: let the lengths of (L1, L2, L3, L4, L5) be (20, 5, 8, 7, 4). Merging left to right:
• Merge L1 and L2 into Z1: 20+5 = 25 comparisons
• Merge Z1 and L3 into Z2: 25+8 = 33
• Merge Z2 and L4 into Z3: 33+7 = 40
• Merge Z3 and L5 into Z4: 40+4 = 44
Total = 25+33+40+44 = 142 comparisons
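The two merging orders can be costed with a few lines of Python. Our own sketch, charging p + q for merging lists of lengths p and q (the slides' cost model):

```python
import heapq

def merge_cost_sequential(lengths):
    """Merge the lists left to right, as in the listing above."""
    total, acc = 0, lengths[0]
    for n in lengths[1:]:
        acc += n
        total += acc
    return total

def merge_cost_optimal(lengths):
    """Always merge the two currently shortest lists (the greedy rule)."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        z = heapq.heappop(heap) + heapq.heappop(heap)
        total += z
        heapq.heappush(heap, z)
    return total
```

On (20, 5, 8, 7, 4) these give 142 and 92, the two merge trees shown next.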
2-way merge tree
。The merging pattern can be represented as a binary tree:
[Left tree: merge in the given order; internal nodes Z1..Z4 have weights 25, 33, 40, 44, so comparisons = 25+33+40+44 = 142.]
[Right tree: always merge the two shortest lists first; internal nodes have weights 9, 15, 24, 44, so comparisons = 9+15+24+44 = 92. Better.]
Optimal 2-Way Merge Tree
Algorithm 2-way-merge
Input: k sorted lists
Output: an optimal 2-way merge tree
Method:
Step 1: Generate k trees, each tree has a node with weight ni (the
length of the list);
Step 2: Choose two trees T1 and T2 with minimum weights;
Step 3: Create a new tree T whose root has T1 and T2 as its subtrees, and whose weight is the sum of the weights of T1 and T2;
Step 4: Replace T1 and T2 by T;
Step 5: If there is only one tree left then stop; else goto Step 2.
Example: Input 6 sorted lists with lengths 2, 3, 5, 7, 11 and 13.
Step 1: six single-node trees with weights 2, 3, 5, 7, 11, 13.
Steps 2 to 5: repeatedly combine the two lightest trees:
2+3 = 5; 5+5 = 10; 7+10 = 17; 11+13 = 24; 17+24 = 41 (the root).
Time Complexity of 2-Way Merge Algorithm
Time complexity
。Time complexity for generating an optimal 2-way merge tree:
O(n log n)
。n is the number of merged lists
Ex 3.6
Huffman codes
Huffman code
。In telecommunication, how do we represent a set of messages,
each with an access frequency, by a sequence of 0’s and 1’s?
。To minimize the transmission and decoding costs, we may
use short strings to represent more frequently used messages.
。This problem can be solved by using the 2-way merge algorithm.
Example:
。Symbols: A, B, C, D, E, F, G
Frequencies: 2, 3, 5, 8, 13, 15, 18
。Huffman codes (read off the Huffman tree of Figure 3-22):
A: 10100  B: 10101  C: 1011
D: 100    E: 00     F: 01
G: 11
Figure 3-22: A Huffman code tree
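The code lengths in the example can be reproduced by the 2-way-merge rule itself. A sketch of our own that tracks only depths (a tie-break counter keeps the heap from comparing dicts):

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for an optimal prefix code (>= 2 symbols)."""
    tick = count()
    heap = [(f, next(tick), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # the two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]
```

On the frequencies above it yields lengths 5, 5, 4, 3, 2, 2, 2 for A..G, matching the codes in Figure 3-22.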
Ex 3.7
*3.5 The Minimal Cycle Basis Problem
Definition: Cycle operation (ring sum, exclusive-or)
。For two cycles C1 and C2, the ring-sum operation on them, denoted by C = C1 ⊕ C2, is defined as C = (C1 ∪ C2) − (C1 ∩ C2).
。This operation combines two cycles into another cycle.
。Example (three cycles C1, C2, and C3 = C1 ⊕ C2):
Cycle bases: {C1, C2}, {C1, C3}, or {C2, C3}
The minimum cycle basis: {C1, C2}, with weight 30
A greedy algorithm for finding a minimal cycle
basis
Step 1: Determine the size of the minimal cycle basis, denoted
as k.
Step 2: Find all of the cycles. Sort all cycles by weights.
Step 3: Add cycles to the cycle basis one by one.
Check if the added cycle is a combination of some cycles
already existing in the basis. If yes, delete this cycle.
Step 4: Stop if the cycle basis has k cycles.
42
Detailed Steps for the Minimal Cycle Basis Problem
Step 1 :
。A cycle basis corresponds to the fundamental set of cycles
with respect to a spanning tree.
[Figure: a graph, a spanning tree of it, and the corresponding
fundamental set of cycles]
# of cycles in a cycle basis:
k = |E| − (|V| − 1) = |E| − |V| + 1
43
Step 2 :
。Find all of the cycles. Sort all cycles by weights.
How to find all cycles in a graph?
[Reingold, Nievergelt and Deo 1977]
How many cycles are there in a graph in the worst case?
In a complete digraph of n vertices and n(n−1) edges:
Σ_{i=2}^{n} C(n, i)·(i−1)! > (n−1)!
44
Step 3 :
。Add cycles to the cycle basis one by one.
How to check if a cycle is a linear combination of some
cycles?
Using Gaussian elimination (see Linear Algebra)
45
Step 3 : (Gaussian elimination)
Two cycles C1 and C2 are represented by a 0/1 matrix, e.g.
        e1 e2 e3 e4 e5
  C1     1  1  1
  C2           1  1  1
Add C3:
        e1 e2 e3 e4 e5
  C1     1  1  1
  C2           1  1  1
  C3     1  1     1  1
Exclusive-or operation on rows 1 and 3, then exclusive-or operation
on rows 2 and 3, reduces row C3 to all zeros.
Thus, C3 is a linear combination of C1 and C2.
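The dependence check in Step 3 can be sketched as XOR elimination over GF(2). The cycles below are an assumed instance (C1 = {e1, e2, e3}, C2 = {e3, e4, e5}), and the helper assumes the basis rows are kept reduced with distinct leading 1's:

```python
def in_span_gf2(basis_rows, candidate):
    """Check whether `candidate` (a 0/1 edge-incidence vector) is a
    linear combination over GF(2) of the rows in `basis_rows`.

    Gaussian elimination over GF(2) is just repeated XOR: reduce the
    candidate by each basis row whose leading 1 it still contains.
    Assumes each basis row has a distinct leading 1 (echelon form).
    """
    v = list(candidate)
    for row in basis_rows:
        pivot = row.index(1)          # leading 1 of this basis row
        if v[pivot] == 1:
            v = [a ^ b for a, b in zip(v, row)]
    return all(bit == 0 for bit in v)

# Edges e1..e5; an assumed instance matching the table above.
C1 = [1, 1, 1, 0, 0]
C2 = [0, 0, 1, 1, 1]
C3 = [a ^ b for a, b in zip(C1, C2)]   # C3 = C1 (+) C2
print(in_span_gf2([C1, C2], C3))       # True: C3 is dependent
print(in_span_gf2([C1, C2], [0, 1, 0, 0, 1]))   # False: independent
```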
46
A polynomial time algorithm to find the minimal cycle
basis
Algorithm
Step 1: Find all pairs shortest paths.
Step 2: For each vertex v ∈ V and for each edge (x, y) ∈ E, the
cycle consisting of the shortest path (v, x), the edge (x, y) and the
shortest path (y, v) is a candidate. Compute the weight of each
candidate.
47
Time complexity = O(n7)
。|V| = n, |E| = m (m = O(n2) in the worst case).
。Step 1: O(n3)
n runs of Dijkstra's algorithm (O(n2) each), or the
Floyd algorithm: O(n3)
。Step 2: O(mn2), finding mn cycles and their weights
。Step 3: O(mn log(mn)), sorting mn cycles
。Step 4: O(m·s·mn), s = m−n+1 (the size of the basis), = O(m3n)
48
*3.6 The 2-Terminal One to Any Special
Channel Routing Problem
Definition: (track)
。Given two sets of terminals on the upper and lower rows,
respectively, each marked upper-row terminal must be
connected to a marked lower-row terminal in a one-to-one
fashion. It is required that no two lines intersect, and
all the lines are either vertical or horizontal. Each horizontal
line corresponds to a track; a line turns at a via.
Figure 3-29: Two feasible solutions for the problem instance in Figure 3-28
50
Definition: (density)
。Redraw the solution of 2-terminal one to any channel routing
problem by connecting upper terminals to lower terminals
directly. Then, we use a vertical scan line to scan these
connection lines. The density of the solution is the maximum
number of connection lines the scan line intersects.
51
[Figure: (a) an optimal solution with density = 1; (b) a solution
with density = 4]
53
Algorithm 2T1-A-Special
Input: Upper row terminals, Lower row terminals, and
minimum density d (finding d is not the topic in this chapter)
Output: A connection with density d
Method.
Step 1 : P1 is connected to Q1.
Step 2 : After Pi is connected to Qj, we check whether Pi+1 can
be connected to Qj+1. If the density would increase to d+1, try to
connect Pi+1 to Qj+2 instead.
Step 3 : Repeat Step 2 until all Pi's are connected.
54
Figure 3-31: The problem instance in Figure 3-28 solved by the greedy algorithm, where d=1 is first discovered
55
3.7 Fraction Knapsack Problem
Definition: (Fraction Knapsack Problem)
。n objects, each with a weight wi > 0 and a profit pi > 0
。capacity of a knapsack: M
。The fraction knapsack problem is to
maximize Σ_{1≤i≤n} pi xi
subject to Σ_{1≤i≤n} wi xi ≤ M, where 0 ≤ xi ≤ 1 and 1 ≤ i ≤ n.
57
Algorithm Greedy-Knapsack
Input: n objects, each with a weight wi > 0 and a profit pi > 0, and the
capacity M of a knapsack
Output: xi, 1 ≤ i ≤ n, such that Σ_{1≤i≤n} pi xi is maximum with Σ_{1≤i≤n} wi xi ≤ M.
Method.
// Assume the objects are already sorted in decreasing order of pi / wi
int j;
for (j = 1; j <= n; j++) x[j] = 0.0;
U = M;
for (j = 1; j <= n; j++) {
    if (w[j] > U) break;
    x[j] = 1.0;                   // Time complexity = O(n)
    U = U - w[j];
}
if (j <= n) x[j] = U / w[j];
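A minimal runnable rendering of Algorithm Greedy-Knapsack; unlike the pseudocode, this sketch sorts the items by profit density itself rather than assuming they arrive pre-sorted (the instance below is illustrative, not from the slides):

```python
def greedy_knapsack(profits, weights, M):
    """Fractional knapsack: take items greedily by profit density p/w."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    remaining = M
    for i in order:
        if weights[i] <= remaining:      # item fits entirely
            x[i] = 1.0
            remaining -= weights[i]
        else:                            # take the largest fitting fraction
            x[i] = remaining / weights[i]
            break
    return x, sum(p * xi for p, xi in zip(profits, x))

x, profit = greedy_knapsack([60, 100, 120], [10, 20, 30], 50)
print(profit)   # 60 + 100 + 120*(2/3) = 240 (up to float rounding)
```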
58
Theorem:
If p1/w1 ≥ p2/w2 ≥ ... ≥ pn/wn, then Algorithm
GreedyKnapsack generates an optimal solution to the
given instance of the fraction knapsack problem.
Proof.
Let (x1, x2, …, xn) be the solution computed by GreedyKnapsack, where x1 to xj−1
all equal 1, 0 < xj < 1, xj+1 to xn all equal 0, and Σwixi = M. Let (y1, y2, …, yn) be an
optimal solution that differs from (x1, x2, …, xn), and let yk be the first entry with yk ≠ xk.
If k < j, then yk < xk, since xk = 1.
If k = j, then yk < xk; otherwise Σwiyi > M.
The case k > j is impossible, since it would force Σwiyi > M.
Since Σwiyi = M, increasing yk to xk in (y1, y2, …, yn) (decreasing later entries
accordingly) does not decrease the total profit. Repeating this argument makes every
differing entry equal, which shows that (x1, x2, …, xn) is also an optimal solution.
Ex 3.8
59
Definition: (0/1 Knapsack Problem)
。n objects, each with a weight wi > 0 and a profit pi > 0
。capacity of a knapsack: M
。The 0/1 knapsack problem is to
maximize Σ_{1≤i≤n} pi xi
subject to Σ_{1≤i≤n} wi xi ≤ M, where xi = 0 or 1 and 1 ≤ i ≤ n.
60
3.8 Exercises (練習題)
1. (a) Show why Dijkstra’s algorithm will not work properly when the considered graph
contains negative cost edges. (b) Modify Dijkstra’s algorithm so that it can compute the
shortest path from source node to each node in an arbitrary graph with negative cost
edges, but no negative cycles.
2. 【Node Cover】Let G = (V, E) be an undirected graph. A node cover of G is a subset U
of the vertex set V such that every edge in E is incident to at least one vertex in U. A
minimum node cover is one with the fewest number of vertices. Consider the following
greedy algorithm for this problem: (the degree of a vertex is the number of vertices
adjacent to it)
Algorithm Cover(int V[], int E[])
{
    U = ∅;
    do {
        Let q be a vertex from V of maximum degree;
        Add q to U; eliminate q from V;
        E = E − {(x, y) | x = q or y = q};
    } while (E ≠ ∅)
    Output U;    // U is the node cover
}
[Figure: a graph on vertices 1–8]
(a) Please use the above algorithm to find a node cover of the above graph. (b) Does the
above algorithm always generate a minimum node cover?
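The exercise's greedy cover can be sketched directly from the pseudocode; `greedy_cover` and the star-graph instance below are illustrative assumptions, not the figure's graph:

```python
def greedy_cover(vertices, edges):
    """Greedy node cover: repeatedly pick a vertex of maximum degree,
    add it to the cover, and delete all edges incident to it."""
    edges = set(edges)
    cover = []
    while edges:
        # degree of v = number of remaining edges incident to v
        degree = {v: sum(v in e for e in edges) for v in vertices}
        q = max(vertices, key=lambda v: degree[v])
        cover.append(q)
        vertices = [v for v in vertices if v != q]
        edges = {e for e in edges if q not in e}
    return cover

# A star: every edge touches vertex 1, so one vertex covers everything.
print(greedy_cover([1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)]))   # [1]
```

As part (b) asks, this heuristic is not always minimum; constructing a bad instance is the point of the exercise.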
61
3.8 Exercises (Cont.)
3. 【0/1 Knapsack Problem】(a) What is the 0/1 Knapsack Problem? (b) Please give a
greedy algorithm, which is heuristic, to solve the 0/1 Knapsack Problem. (Note that the
solution of your algorithm may not be optimal) (c) Please give an example to show that it
does not always yield an optimal solution.
4. Given the graph shown in the following figure, please use Dijkstra's algorithm to find the
shortest path from u0 to v.
[Figure: a weighted graph on vertices u0, a, b, c, d, v with edge weights 1–5]
5. (a) What is an optimal Huffman code for the following set of frequencies, based on the
first 8 Fibonacci numbers? a:1 b:1 c:2 d:3 e:5 f:8 g:13 h:21
(b) Can you generalize your answer to find the optimal code when the frequencies are
the first n Fibonacci numbers?
62
End of Chapter 3
63
Chapter 4
The Divide-and-Conquer Strategy
(各個擊破法)
4-1
Algorithms by R.W. Hung, Dept. CSIE, CYUT, Taiwan 82
Topic Overview
4.0 The Divide-and-Conquer Strategy
4.1 2-D Maxima Finding Problem
4.2 The Closest Pair Problem
4.3 The Convex Hull Problem
*4.4 The Voronoi Diagram Problem
*4.5 Applications of the Voronoi Diagrams
*4.6 Fast Fourier Transform (FFT)
4.7 Techniques for Solving Recurrence
2
4.0 The Divide-and-Conquer Strategy
How ?
。The basic idea of divide-and-conquer is to split a problem into
k (k ≥ 2) smaller subproblems of the same kind, obtain the
answer to each subproblem, and then combine these answers into
the answer to the original problem.
。If a subproblem is still too large, split it again in the same way.
。Algorithms designed with this idea are often recursive.
。A subproblem too small to split further is solved directly.
。Analyzing the complexity usually requires solving a recurrence of
the form:
T(n) = T(1), if n ≤ 1;
T(n) = aT(n/b) + f(n), if n > 1.
Recursive Programming:
Bonus problem #5: computing Fibonacci numbers
3
Method
1. Divide: Partition the problem into “independent”
subproblems recursively until the subproblem’s size is small
enough.
2. Conquer: solve the independent “small” subproblems.
3. Combine (Merge): merge the subsolutions into a solution.
4
Method (Illustration)
1. Divide: if the problem is small, solve it directly; else, split it.
5
Method (Illustration)
2. Recursively divide until enough small:
6
Method (Illustration)
3. Conquer and Merge:
7
General Structure (Recursive Structure)
。S(P): solution of problem P.
。small(P): decide whether P is small or not.
Algorithm D_and_C(P)
if small(P) then return S(P)
else
    divide P into subproblems P1, P2, ..., Pk, k > 1;
    D_and_C(Pi), for 1 ≤ i ≤ k;    // recursive calls
    return Merge(S(P1), S(P2), ..., S(Pk));
End D_and_C.
8
A Simple Example
9
Time complexity:
T(n) = 2T(n/2) + 1, if n ≥ 2;
T(n) = 1, if n < 2.
。Calculation of T(n):
• Assume n = 2^k,
T(n) = 2T(n/2) + 1
     = 2(2T(n/4) + 1) + 1 = 4T(n/4) + 2 + 1
     = ...
     = 2^(k−1) T(2) + 2^(k−2) + 2^(k−3) + ... + 4 + 2 + 1
     = 2^(k−1) + 2^(k−2) + 2^(k−3) + ... + 4 + 2 + 1
     = 2^k − 1 = n − 1.
(What is the result if the problem is split into 3 subproblems instead?)
10 Ex 4.2 Ex 4.1
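The content of the "simple example" slide did not survive extraction; finding the maximum of an array is one divide-and-conquer example that matches the recurrence T(n) = 2T(n/2) + 1 exactly (an assumption, with `dc_max` as an illustrative name). Each merge costs one comparison, so n elements need n − 1 comparisons:

```python
def dc_max(a, lo, hi, counter):
    """Maximum of a[lo..hi] by divide-and-conquer; counter[0] counts
    comparisons, matching the solution T(n) = n - 1 derived above."""
    if lo == hi:                             # small enough: solve directly
        return a[lo]
    mid = (lo + hi) // 2
    left = dc_max(a, lo, mid, counter)       # conquer left half
    right = dc_max(a, mid + 1, hi, counter)  # conquer right half
    counter[0] += 1                          # one comparison to merge
    return left if left > right else right

data = [3, 7, 2, 9, 4, 1, 8, 5]
comparisons = [0]
print(dc_max(data, 0, len(data) - 1, comparisons))  # 9
print(comparisons[0])                               # n - 1 = 7
```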
A General Divide-and-Conquer Algorithm
Step 1:
。If the problem size is small, solve this problem directly;
otherwise, split the original problem into k (≥ 2) sub-
problems with equal sizes.
Step 2:
。Recursively solve these k sub-problems by applying this
algorithm.
Step 3:
。Merge the solutions of the k sub-problems into a solution of
the original problem.
11
• Time complexity of the general algorithm:
T(n) = kT(n/k) + S(n) + M(n), if n ≥ c;
T(n) = b, if n < c.
12
4.1 2-D Maxima Finding Problem
Definition (2-D Maxima Finding Problem)
。A point (x1, y1) dominates (x2, y2) if x1 > x2 and y1 > y2. A
point is called a maxima if no other point dominates it.
。Given a set of n planar points, the 2-D maxima finding
problem is to find all of the maxima points of the set.
14
Divide-and-Conquer for Maxima Finding
15
The divide-and-conquer algorithm
。Input: A set S of n planar points.
。Output: The maximal points of S.
。Method:
Step 1: If S contains only one point, return it as the maxima.
Otherwise, find a line L perpendicular to the X-axis which
separates S into SL and SR, with equal sizes.
(initial and divide step)
Step 2: Recursively find the maximal points of SL and SR.
(conquer step)
Step 3: Find the largest y-value of SR, denoted as yR. Discard
each of the maximal points of SL if its y-value is less than yR.
(merge step) Please think: why does this step
produce all maximal points!
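The three steps above can be sketched directly; this assumes the points have distinct x-values (`maxima` is an illustrative name):

```python
def maxima(points):
    """Divide-and-conquer 2-D maxima finding: split at the median
    x-value, recurse, then discard left maximal points whose y-value
    is below the largest y-value of the right half."""
    pts = sorted(points)                   # sort by x once; split by median
    def solve(s):
        if len(s) == 1:
            return s
        mid = len(s) // 2
        left = solve(s[:mid])              # maximal points of SL
        right = solve(s[mid:])             # maximal points of SR
        y_r = max(y for _, y in s[mid:])   # largest y-value in SR
        # a left maximal point survives only if no SR point dominates it
        return [p for p in left if p[1] > y_r] + right
    return solve(pts)

print(maxima([(2, 2), (1, 4), (4, 1), (3, 3)]))
# [(1, 4), (3, 3), (4, 1)]: (2, 2) is dominated by (3, 3)
```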
16
Time complexity: T(n)
。Step 1: O(n)
Step 2: 2T(n/2)
Step 3: O(n)
。Thus,
T(n) = 2T(n/2) + O(n) + O(n), if n > 1;
T(n) = 1, if n = 1.
By the substitution method, assuming n = 2^k, we get T(n) = O(n log n).
Definition (dominate)
Let A = (a1, a2), B = (b1, b2) be two points in 2-dimensional
space.
。A dominates B ⇔ a1 > b1 and a2 > b2.
Definition (rank)
Given a set S of n points in 2-dimensional space, the rank of a
point x is the number of points dominated by x.
[Figure: five points A, B, C, D, E with
rank(A)=0; rank(B)=1; rank(C)=1; rank(D)=3; rank(E)=0]
18
Definition (the ranking problem)
Given a set S of n points in 2-dimensional space, find the rank
of each point in S.
A Straightforward Strategy
。Conduct an exhaustive comparison of all pairs of points.
。For instance, in Figure 2-2, to find the rank of point A, compare
the pairs (A, B), (A, C), (A, D), (A, E) and find that its rank is 0.
Figure 2-2
Algorithm Straightforward_rank_finding
Input: A set of n planar points P1 = (x1, y1), P2 = (x2, y2), ..., Pn = (xn, yn)
Output: Rank ri of point Pi for 1 ≤ i ≤ n
Method.
1. for i = 1 to n do
2.     ri = 0;    // store the rank of point Pi
3.     for j = 1 to n do
4.         if xi > xj and yi > yj then ri++;
5. Output ri for 1 ≤ i ≤ n.
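The straightforward O(n^2) algorithm translates directly; the coordinates below are assumed stand-ins for the points A..E of Figure 2-2:

```python
def ranks(points):
    """Straightforward rank finding: compare all pairs of points.

    rank(P) = number of points dominated by P (both coordinates
    strictly smaller), exactly as in the pseudocode above."""
    r = []
    for (xi, yi) in points:
        r.append(sum(1 for (xj, yj) in points if xi > xj and yi > yj))
    return r

# Five points playing the roles of A..E (coordinates assumed):
print(ranks([(1, 1), (2, 3), (3, 2), (4, 4), (5, 0)]))  # [0, 1, 1, 3, 0]
```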
20
Divide-and-Conquer 2-D Ranking Finding (more efficient)
Step 1: Split the points along the median line L into A and B.
(Divide S into 2 equal spaces A and B by a straight line L)
21
Step 1: Split the points along the median line L into A and B.
(Divide S into 2 equal spaces A and B by a straight line L)
[Figure: S divided by the median line L into A and B]
22
Step 2: Find ranks of points in A and ranks of points in B,
recursively. (Local Ranking of A and B)
[Figure: local ranks computed within A and within B]
23
Step 3: Sort points in A and B according to their y-values.
Update the ranks of points in B. (Modification of ranks)
[Figure: ranks of points in B updated by adding, for each, the number
of points in A it dominates, e.g., 1+4, 2+4, 1+3, 0+2]
24
Time complexity
。Step 1 = O(n) (finding median)
。Step 3 = O(n log n) (sorting)
。Total time : T(n) = 2T(n/2) + O(n) + O(n log n)
T(n) = 2T(n/2) + O(n) + O(n log n)
     = 4T(n/4) + 2·O(n/2) + 2·O((n/2) log (n/2)) + O(n) + O(n log n)
     = ...
     = 2^k T(n/2^k) + Σ_{i=0}^{k−1} 2^i·O(n/2^i)
       + Σ_{i=0}^{k−1} 2^i·O((n/2^i) log (n/2^i)), where n = 2^k
⇒ T(n) = O(n log^2 n)
27
The divide-and-conquer algorithm
。Input: A set S of n planar points.
。Output: The distance between two closest points.
。Method:
Step 1: Sort points in S according to their y-values.
Step 2: If S contains only one point, return infinity as its
distance.
Step 3: Find a median line L perpendicular to the X-axis to
divide S into SL and SR, with equal sizes. (split step)
Step 4: Recursively apply Steps 2 and 3 to solve the closest
pair problems of SL and SR. Let dL (resp. dR) denote the
distance between the closest pair in SL (resp. SR).
Let d = min(dL, dR). (conquer step)
28
The divide-and-conquer algorithm (Cont.)
Step 5: For a point P in the half-slab bounded by L−d and L,
let its y-value be denoted as yP. For each such P, find all
points in the half-slab bounded by L and L+d whose y-values
fall within yP+d and yP−d. If the distance d' between P and a
point in the other half-slab is less than d, let d = d'. The final
value of d is the answer. (merge step)
[Figure: for each P, the maximum region to be examined is a
d × 2d rectangle between the lines L−d, L and L+d]
30
Time complexity: O(n log n)
。Step 1: O(n log n)
Steps 2 to 5:
T(n) = 2T(n/2) + O(n) + O(n), if n > 1;
T(n) = 1, if n = 1.
⇒ T(n) = O(n log n).
31 Ex 4.5
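The five steps can be sketched as follows; for brevity this sketch re-sorts the slab by y inside the recursion (costing an extra log factor) instead of maintaining the presorted y-order of Step 1:

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair (returns the distance only)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(pts):                       # pts sorted by x
        if len(pts) <= 3:                 # small case: brute force
            ds = [dist(p, q) for i, p in enumerate(pts)
                  for q in pts[i + 1:]]
            return min(ds, default=float("inf"))
        mid = len(pts) // 2
        x_l = pts[mid][0]                 # median line L
        d = min(solve(pts[:mid]), solve(pts[mid:]))
        # merge step: only points within distance d of L matter
        slab = sorted((p for p in pts if abs(p[0] - x_l) < d),
                      key=lambda p: p[1])
        for i, p in enumerate(slab):
            for q in slab[i + 1:]:
                if q[1] - p[1] >= d:      # outside the y-window of p
                    break
                d = min(d, dist(p, q))
        return d

    return solve(sorted(points))

print(closest_pair([(0, 0), (5, 5), (1, 1), (9, 9)]))  # sqrt(2) ≈ 1.414
```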
4.3 The Convex Hull Problem
Definition (Convex polygon)
。A convex polygon (凸多邊形) is a polygon with the property
that any line segment connecting two points inside the
polygon must itself lie inside the polygon.
[Figure: a convex polygon vs. a concave polygon]
32
Definition (The Convex Hull Problem)
。The convex hull of a set of planar points is the smallest
convex polygon containing all of the points.
(在平面上的一組點,用最小面積的凸多邊形將所有點包起來)
。Given a set S of n planar points, the convex hull problem is to
find a convex hull of S.
34
A simple algorithm (Cont.)
How to test whether a point (x, y) falls within the interior of △ABC, where
A = (xa, ya), B = (xb, yb), and C = (xc, yc)?
(1) Area of △ABC = ½ |AB × AC| (half the cross product of vectors AB and AC)
          | xa ya 1 |
  = ½ |det| xb yb 1 ||  = ½ |det(ABC)|
          | xc yc 1 |
35
A simple algorithm (Cont.)
How to test whether a point (x, y) falls within the interior of △ABC, where
A = (xa, ya), B = (xb, yb), and C = (xc, yc)? (Cont.)
(4) Procedure TestLocation
Input: X = (x, y) and △ABC
Output: Yes if X is in △ABC; otherwise, No
Method.
1. if det(ABX)=0 or det(ACX)=0 or det(BCX)=0 then return 'No';
2. if area(△ABX) + area(△ACX) + area(△BCX) = area(△ABC)
3. then return 'Yes';
4. else return 'No'.
36
A simple algorithm (Cont.)
Algorithm Straightforward_Convex-Hull
Input: A set of n planar points P1 = (x1, y1), P2 = (x2, y2), ..., Pn = (xn, yn)
Output: A set S of points to form a convex hull
Method.
1. S = ;
2. for i = 1 to n do
3. corneri = true; // indicate whether Pi is a corner point
4. for every three other points Px, Py, Pz do
5.     if TestLocation(Pi, △PxPyPz) = Yes then corneri = false;
6. if corneri = true then S = S {Pi};
7. Output S.
37
Divide-and-Conquer Algorithm
Algorithm Convex-Hull
。Input: A set S of n planar points.
。Output: A convex hull for S.
。Method:
Step 1: If S contains no more than five points, use exhaustive
searching to find the convex hull and return.
Step 2: Find a median line perpendicular to the X-axis which
divides S into SL and SR, with equal sizes.
Step 3: Recursively construct convex hulls for SL and SR,
denoted as Hull(SL) and Hull(SR), respectively.
Step 4: Apply the merging procedure to merge Hull(SL) and
Hull(SR) together to form a convex hull. (Graham Scan or
upper-lower tangents Algorithm)
38
The convex hull algorithm using divide-conquer
L
SL SR
upper tangent
lower tangent
X
39
Find the lower tangent
Procedure LowerTangent(SL, SR)
1. let l be the rightmost point of SL;
2. let r be the leftmost point of SR;
3. while lr is not a lower tangent for SL and SR do
4.     while lr is not a lower tangent for SL do l = l − 1;
5.     while lr is not a lower tangent for SR do r = r + 1;
6. return lr.
40
Graham Scan Algorithm
。Input: A set S of planar points.
。Output: A convex hull for S.
。Method:
Step 1: Select an interior point as the origin.
Step 2: Each other point forms a polar angle with the origin;
sort all points by polar angle.
Step 3: Eliminate the points that cause reflexive (clockwise) turns.
Step 4: The remaining points are the convex hull vertices.
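The elimination test of Step 3 is the signed determinant shown on the next slides. The sketch below uses that same turn test inside Andrew's monotone chain, a common Graham-scan variant that sorts by x-coordinate instead of polar angle (a design substitution, not the slide's exact algorithm):

```python
def cross(o, a, b):
    """Turn test for O -> A -> B: > 0 counterclockwise, < 0 clockwise,
    = 0 collinear (the 3x3 determinant of the slides)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull via Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop points that would cause a clockwise (reflexive) turn
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]     # join halves, dropping endpoints

print(convex_hull([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]))
# [(0, 0), (4, 0), (4, 4), (0, 4)]: the interior point is eliminated
```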
41
Graham Scan Algorithm (Cont.)
。Illustration (sort by polar angle)
[Figure: points P0 ... P5 sorted by polar angle around an interior
point P]
42
Graham Scan Algorithm (Cont.)
。Illustration (eliminate points)
Let the coordinates of P0, P1, P2 be (x0, y0), (x1, y1), (x2, y2).
      | x0 y0 1 |
det = | x1 y1 1 |
      | x2 y2 1 |
    = x0 y1 + x1 y2 + x2 y0 − x2 y1 − x1 y0 − x0 y2
If det < 0 then P0 → P1 → P2 is a clockwise turn
If det > 0 then P0 → P1 → P2 is a counterclockwise turn
If det = 0 then the three points are collinear
[Figure: eliminating points among P1 ... P5 that cause clockwise turns]
43
Use Graham Scan Algorithm to merge two convex hulls
[Figure: the hulls of SL and SR with vertices a–k, seen from an
interior point p; pick the vertex whose polar angle is the largest
one smaller than π/2, and the vertex whose polar angle is the
smallest one larger than 3π/2]
44
Use Graham Scan Algorithm to merge two convex
hulls (Cont.) – sort the angles and merge
[Figure: the vertices of both hulls sorted by polar angle and merged]
45
Time complexity of Algorithm Convex-Hull
。Step 1: O(1)
。Step 2: O(n) (finding the median)
。Step 3: 2T(n/2)
。Step 4: O(n), merging by the Graham scan
(tangent-finding) algorithm
。Thus, T(n) = 2T(n/2) + O(n)
⇒ T(n) = O(n log n)
46 Ex 4.6
*4.4 The Voronoi Diagram Problem
Definition (Voronoi diagram for 2 points)
。Let P1 and P2 be two planar points.
。Line L12 is a perpendicular bisector of the line connecting P1
and P2
。L12 is the Voronoi diagram for P1 and P2
[Figure: L12, the perpendicular bisector of the segment P1P2]
47
Definition (Voronoi diagram for 3 points)
。Let P1, P2, and P3 be three planar points.
。Line Lij is a perpendicular bisector of the line connecting Pi
and Pj for 1 i < j 3
。L12, L13, and L23 construct the Voronoi diagram for P1, P2,
and P3
Any point X located in region Ri is closer to Pi
[Figure 4-13: A Voronoi diagram for three points: L12, L13 and L23
bound the regions R1, R2 and R3 of P1, P2 and P3]
48
Definition (Voronoi polygon(多邊形))
。Given two points Pi, Pj ∈ S (a set of n points), let H(Pi, Pj)
denote the half plane containing the set of points closer to Pi.
。The Voronoi polygon associated with Pi is a convex polygonal
region having at most n − 1 sides, defined by
V(Pi) = ∩_{i≠j} H(Pi, Pj).
Example (three points):
V(P1) = H(P1, P2) ∩ H(P1, P3)
V(P2) = H(P2, P1) ∩ H(P2, P3)
V(P3) = H(P3, P1) ∩ H(P3, P2)
[Figure: the bisectors L12, L13, L23 of P1, P2, P3]
49
[Figure: the Voronoi polygon V(Pi) of a point Pi]
50
Definition (Voronoi diagram)
。Given a set S of n points, the Voronoi diagram consists of all
the Voronoi polygons of these points.
51
A Delaunay triangulation
。The straight-line dual of a Voronoi diagram is called the Delaunay
triangulation.
。Each Lij is the perpendicular bisector of the segment PiPj.
。Lij connects Pi and Pj in the dual ⇔ the Voronoi polygons of Pi
and Pj share an edge.
。The three perpendicular bisectors of a triangle must intersect at
a single point.
Algorithm D_C_Voronoi-Diagram
。Input: A set S of n planar points.
。Output: The Voronoi diagram of S.
。Method:
Step 1: If S contains only one point, return.
Step 2: Find a median line L perpendicular to the X-axis
which divides S into SL and SR, with equal sizes.
Step 3: Construct Voronoi diagrams of SL and SR recursively.
Denote these Voronoi diagrams by VD(SL) and VD(SR).
Step 4: Construct a dividing piece-wise linear hyperplane HP
which is the locus of points simultaneously closest to a point
in SL and a point in SR. Discard all segments of VD(SL)
which lie to the right of HP and all segments of VD(SR) that
lie to the left of HP. The resulting graph is the Voronoi
diagram of S.
(The merging step will be introduced in slide Ch4 p.55)
53
Example
56
Merge two Voronoi diagrams
Example:
。Construct the convex hulls for these two Voronoi diagrams
57
Example: (Cont.)
。Find the upper and lower tangent lines (all points are in one
side of the line)
58
Example: (Cont.)
。 Construct the perpendicular bisector L15 of P1P5
。 Find the point b at which L15 first intersects a Voronoi edge
59
Example: (Cont.)
。 Construct the perpendicular bisector L14 of P1P4
。 Find the point c at which L14 first intersects a Voronoi edge
60
Example: (Cont.)
。 Continue to find the points (d, e, ...) of HP until the lower tangent line occurs
61
Example: (Cont.)
。 Continue to find the points (d, e, ...) of HP until the lower tangent line occurs
62
Example: (Cont.)
。 Discard all segments of VD(SL) which lie to the right of HP and all
segments of VD(SR) that lie to the left of HP.
63
Example: (Cont.)
。Final Voronoi diagram.
64
Merges Two Voronoi Diagrams into One Voronoi
Diagram
Algorithm Merge-2-Voronoi-Diagrams
。Input: (a) SL and SR where SL and SR are divided by a
perpendicular line L; and (b) VD(SL) and VD(SR).
。Output: VD(S), where S = SL ∪ SR.
。Method:
Step 1: Find the convex hulls of SL and SR, denoted as Hull(SL)
and Hull(SR), respectively. (A special algorithm for finding a
convex hull in this case will be given later.)
Step 2: Find segments PaPb and PcPd which join Hull(SL) and
Hull(SR) into a convex hull (Pa and Pc belong to SL, and Pb and
Pd belong to SR).
Assume that PaPb lies above PcPd.
Let x = a, y = b, SG = PaPb and HP = ∅.
[Figure: the upper tangent PaPb and the lower tangent PcPd]
65
Algorithm Merge-2-Voronoi-Diagrams (Cont.)
Step 3: Find the perpendicular bisector of SG. Denote it by
BS. Let HP = HP∪{BS}. If SG = PcPd, go to Step 5;
otherwise, go to Step 4.
Step 4: The ray from VD(SL) and VD(SR) which BS first
intersects with must be a perpendicular bisector of either PxPz
or PyPz for some z. If this ray is the perpendicular bisector
of PyPz, then let SG = PxPz; otherwise, let SG = PyPz. Go to
Step 3.
Step 5: Discard the edges of VD(SL) which extend to the right
of HP and discard the edges of VD(SR) which extend to the
left of HP. The resulting graph is the Voronoi diagram of S =
SL∪SR.
66
Construct a Convex Hull from a Voronoi Diagram
68
69
Properties of Voronoi Diagrams
Definition
。Given a point p and a set S of points, the distance between p
and S is the distance between p and Pi which is the nearest
neighbor of p in S.
70
# of Voronoi Edges
。# of edges of a Voronoi diagram ≤ 3n − 6, where n is # of
given planar points.
。Reasoning:
(1) # of edges of a planar graph with n vertices ≤ 3n − 6.
(2) A Delaunay triangulation is a planar graph.
(3) Edges in the Delaunay triangulation correspond one-to-one
to edges in the Voronoi diagram.
71
# of Voronoi Vertices
。# of Voronoi vertices ≤ 2n − 4, where n is # of given planar
points.
。Reasoning:
(1) Let f, e and v denote # of faces, edges and vertices of a planar
graph.
Euler's relation: f = e − v + 2.
(2) A Delaunay triangulation is a planar graph.
(3) In a Delaunay triangulation,
v = n, e ≤ 3n − 6
⇒ f = e − v + 2 ≤ 3n − 6 − n + 2 = 2n − 4.
72
Time Complexity
73
The Lower Bound of the Voronoi Diagram
Problem
The lower bound of the Voronoi diagram problem is
Ω(n log n).
Figure 4-26: The Voronoi diagram for a set of points on a straight line
74
4.7 Techniques for Solving Recurrence
Substitution method
Example: T(n) = 2T(n/2) + n, if n > 1;
         T(n) = 2, if n = 1.
T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = ...
     = 2^k T(n/2^k) + kn
     = nT(1) + n log n
     = 2n + n log n
     = O(n log n).
75
Substitution method (Cont.)
Example: T(n) = 2T(n/2) + n, if n > 1;
         T(n) = 2, if n = 1.
Let n = 2^k. Then, k = log n.
   T(n)                 = 2T(n/2) + n
   2T(n/2)              = 2^2 T(n/2^2) + n
   2^2 T(n/2^2)         = 2^3 T(n/2^3) + n
   ...
+) 2^(k−1) T(n/2^(k−1)) = 2^k T(n/2^k) + n
--------------------------------------------
   T(n) = 2^k T(n/2^k) + kn
        = 2^k T(1) + kn
        = 2n + n log n
        = O(n log n)
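The closed form derived above, T(n) = 2n + n log2 n, can be checked numerically against the recurrence for powers of 2:

```python
def T(n):
    """Evaluate T(n) = 2*T(n/2) + n with T(1) = 2, the recurrence
    solved by substitution above, for n a power of 2."""
    return 2 if n == 1 else 2 * T(n // 2) + n

# Compare with the closed form 2n + n*log2(n) = 2*2^k + k*2^k.
for k in range(4):
    n = 2 ** k
    print(n, T(n), 2 * n + k * n)   # the last two columns agree
```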
76
Theorem:
Assume that a, b, c ≥ 0 and k is a constant.
Let T(n) = aT(n/c) + b·n^k, if n > 1;
    T(n) = b, if n = 1.
Then,
T(n) = O(n^k),          if a < c^k;
T(n) = O(n^k log n),    if a = c^k;
T(n) = O(n^(log_c a)),  if a > c^k.
77
Examples: please solve the following recurrences
。T(n) = 4T(n/2) + 3n, if n > 1;
  T(n) = 2, if n = 1.
。T(n) = 2T(n/4) + 3n, if n > 1;
  T(n) = 2, if n = 1.
78
Transformation
Example: T(n) = 2T(√n) + log n, if n > 1;
         T(n) = 1, if n = 1.
Let n = 2^k. Then, k = log n.
T(2^k) = 2T(2^(k/2)) + k.
Assume that S(m) = T(2^m). Then, S(m/2) = T(2^(m/2)).
S(k) = 2S(k/2) + k.
⇒ S(k) = O(k log k)
∵ k = log n
∴ T(n) = S(k) = O(log n · log log n)
79 Ex 4.8 Ex 4.7
Recursion Tree
Example: T(n) = 2T(n/2) + n^2, if n > 1;
         T(n) = 1, if n = 1.
Cost n^2 is paid to divide T(n) into 2 subproblems of size n/2.
Summing the recursion tree level by level:
T(n) = n^2 + n^2/2 + n^2/4 + ...
     ≤ 2n^2
     = O(n^2).
80
4.8 Exercises (練習題)
1. (a) Does binary search use the divide-and-conquer strategy? Please give your
reason. (b) Prove that the time complexity of the binary search algorithm in the
average case is O(log n). [Hint: Σ_{i=1}^{k} i·2^(i−1) = 2^k (k−1) + 1.]
2. Prove that in the Quick sort algorithm, the maximum stack needed is O(log n).
3. Let T(n^r) = nT(n) + bn^2, where r is an integer and r > 1. Please find T(n).
4. Solve the following recurrence by using the recursion tree method.
T(n) = 2T(n/2) + n^3, if n > 1;
T(n) = 1, if n = 1.
5. Show that the solution of T(n) = T(n/2)+1 is O(log n).
6. Give asymptotic upper and lower bounds for T(n) in each of the following
recurrences. Assume that T(n) is constant for n ≤ 2. Make your bounds as
tight as possible, and justify your answers. (a) T(n) = 2T(n/2) + n^3. (b) T(n) =
T(n−1) + n. (c) T(n) = 2T(n/4) + √n. (d) T(n) = T(√n) + 1.
81
End of Chapter 4
82
Chapter 5
The Tree Searching Strategy
(樹狀搜尋策略)
5-1
Algorithms by R.W. Hung, Dept. CSIE, CYUT, Taiwan 72
Topic Overview
5.0 The Tree Searching Strategy
5.1 The Breadth-First Search (BFS)
5.2 The Depth-First Search (DFS)
5.3 The Hill Climbing
5.4 The Best-First Search Strategy
5.5 Branch-and-Bound Strategy
*5.6 The Personnel Assignment Problem
5.7 The Traveling Salesperson Optimization Problem
5.8 The 0/1 Knapsack Problem
*5.9 The Job Scheduling Problem
5.10 A* Algorithm
*5.11 The Channel Routing Problem
*5.12 The Linear Block Code Decoding Problem
2
5.0 The Tree Searching Strategy
Concept
。Search (branch) more cleverly by using a bound derived from one
feasible solution. (branch-and-bound strategy)
。Generally used to solve NP-complete problems while avoiding
exhaustive search. (Note: only helpful in the average case)
。Certainly, it can also solve problems in general.
3
Tree Searching Problems
。The solutions of many problems may be represented by trees.
。These problems can be solved by tree searching strategy.
4
Examples of tree searching problems
5
# of possible assignments
If there are n variables x1, x2, ..., xn, then there are 2^n possible
assignments.
Example: A formula consisting of 3 variables has 2^3 = 8
assignments, from (F, F, F) to (T, T, T).
[Figure 5-1: Tree representation of the 8 assignments, branching on
x1, then x2, then x3]
6
Classify each class of assignments
。An instance:
x1 ................ (1)
¬x1 ............... (2)
x2 ∨ ¬x5 .......... (3)
x3 ................ (4)
¬x2 ............... (5)
[Figure 5-2: A partial tree to determine the SAT problem: the branch
x1 = F falsifies clause (1), and the branch x1 = T falsifies clause (2)]
7
2. the Hamiltonian cycle (circuit) problem (HC):
。A Hamiltonian cycle is a round trip path along n edges of G
= (V, E) which visits every vertex once and returns to its
starting vertex, where n = |V|.
。Hamiltonian cycle problem:
• Given a graph G, determine whether a Hamiltonian circuit
exists in G or not
• This is an NP-complete problem
8
Example:
• Given the following graphs G1 and G2, G1 has two Hamiltonian
cycles, 1→2→3→4→7→5→6→1 and
1→6→5→7→4→3→2→1, while G2 has no Hamiltonian cycle.
[Figure: G1 on vertices 1–7 and G2 on vertices 1–5]
9
The tree representation for the solutions of the HC problem:
• Given the graph G1 above, the solution tree enumerates the ways
to extend a path starting at vertex 1 until it returns to vertex 1.
[Figure: the solution tree of the HC problem for G1, rooted at
vertex 1]
5.1 The Breadth-First Search (BFS, 先寬)
Method
。Traverse the tree representation in a breadth-first manner and
test whether a solution has been obtained. (level by level)
。A queue can be used to guide BFS.
Example:
[Figure: the HC solution tree of G1 with its nodes numbered 1–30
in the order BFS visits them]
11
5.2 The Depth-First Search (DFS, 先深)
Method
。Traverse the tree representation in a depth-first manner and
test whether a solution has been obtained. (top-down with backtracking)
。A stack can be used to guide DFS.
Example:
[Figure: the HC solution tree of G1 with its nodes numbered 1–12
in the order DFS visits them]
12 Ex 5.2 Ex 5.1
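The queue/stack duality of the two strategies can be sketched on a generic solution tree; the tree shape below is made up, not the HC tree from the slides:

```python
from collections import deque

def bfs_order(tree, root):
    """Traverse a solution tree level by level, guided by a queue."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()          # FIFO: oldest node first
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

def dfs_order(tree, root):
    """Traverse the same tree depth-first, guided by a stack."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()              # LIFO: newest node first
        order.append(node)
        stack.extend(reversed(tree.get(node, [])))  # keep child order
    return order

tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
print(bfs_order(tree, 1))   # [1, 2, 3, 4, 5, 6, 7]  (level by level)
print(dfs_order(tree, 1))   # [1, 2, 4, 5, 3, 6, 7]  (with backtracking)
```

Swapping the queue for a stack is the only change between the two traversals, which is why the two slides differ only in the data structure used.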
The sum of subset problem
。Given a set S and a number k, determine whether there exists
a subset S' of S such that the sum of S' equals k.
Example: S = {7, 5, 1, 2, 10} and k = 9
[Figure 5-11: A sum of subset problem solved by depth-first search:
from the root 0, the branch x1=7 gives 7; the branches x2=5, x2=1,
x2=2 give 12, 8 and 9; the node 9 is the goal node (7 + 2 = 9), while
the path through 8 leads to dead ends 10 and 18]
13 Ex 5.3
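The depth-first search with backtracking for the sum-of-subsets decision problem can be sketched as below; pruning a branch as soon as the remaining target goes negative matches the cut made at node 12 in Figure 5-11 (this assumes all elements are positive).

```python
def sum_of_subsets(s, k):
    """DFS/backtracking: return a subset of s summing to k, or None."""
    def dfs(i, remaining, chosen):
        if remaining == 0:
            return list(chosen)                # found a goal node
        if i == len(s) or remaining < 0:       # prune: overshoot or exhausted
            return None
        chosen.append(s[i])                    # branch 1: take s[i]
        found = dfs(i + 1, remaining - s[i], chosen)
        if found is not None:
            return found
        chosen.pop()                           # backtrack
        return dfs(i + 1, remaining, chosen)   # branch 2: skip s[i]
    return dfs(0, k, [])
```

On the slide's instance, S = {7, 5, 1, 2, 10} and k = 9, the search takes 7, prunes 7 + 5, backtracks, and ends at the goal subset {7, 2}.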
5.3 The Hill Climbing Strategy
DFS
。In DFS, one problem is how to select which child of the present node to expand!
Hill Climbing
。A variant of depth-first search
。The method selects the locally optimal node to expand.
Example: 8-puzzle problem
• Given an initial arrangement and a goal state, the problem is to determine whether there exists a sequence of moves from the initial state to the goal state, where each tile can be moved only horizontally or vertically into the empty spot.
8-puzzle problem
。Given an initial arrangement and a goal state, the problem is to determine whether there exists a sequence of moves from the initial state to the goal state, where each tile can be moved only horizontally or vertically into the empty spot.
。Example:
  initial state:    goal state:
    2 8 3             1 2 3
    1 _ 4             8 _ 4
    7 6 5             7 6 5
Hill Climbing strategy for the 8-puzzle problem
。Evaluation function f(n) = d(n) + w(n), where d(n) is the depth of node n and w(n) is the number of misplaced tiles in node n.
。The hill climbing strategy expands the child of the present node with the least f(n).
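The evaluation function f(n) = d(n) + w(n) can be sketched directly. Boards are assumed (for illustration only) to be flattened row-by-row into a list, with 0 standing for the empty spot; the blank is not counted as a misplaced tile.

```python
def misplaced(state, goal):
    """w(n): the number of non-blank tiles out of place (blank = 0)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f(state, goal, depth):
    """Evaluation f(n) = d(n) + w(n) used by the hill-climbing strategy."""
    return depth + misplaced(state, goal)
```

For the slide's initial board (2 8 3 / 1 _ 4 / 7 6 5) against the goal (1 2 3 / 8 _ 4 / 7 6 5), tiles 2, 8, and 1 are misplaced, so the root scores f = 0 + 3 = 3, matching the annotation (3) in Figure 5-15.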
Hill Climbing strategy for the 8-puzzle problem (Cont.)
Figure 5-15: An 8-puzzle problem solved by the hill climbing method. Each node shows a board state annotated with its f(n) value; nodes are numbered 1-8 in expanding order, and the goal node is reached at step 8. (tree drawing omitted)
5.4 The Best-First Search Strategy
Method
。Combines depth-first search and breadth-first search
。Selects the node with the best estimated cost among all generated but unexpanded nodes
。This method has a global view
Best-First Search Scheme
Step 1: Form a one-element list consisting of the root node.
Step 2: Remove the first element from the list and expand it. If one of its descendants is a goal node, stop; otherwise, add the descendants to the list.
Step 3: Sort the entire list by the values of some estimation function.
Step 4: If the list is empty, report failure. Otherwise, go to Step 2.
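The four-step scheme above can be sketched with a binary heap, which keeps the open list ordered so that Step 3's explicit sort is unnecessary; `children` and `estimate` are caller-supplied functions in this illustrative sketch.

```python
import heapq

def best_first_search(root, children, estimate, is_goal):
    """Best-first search: always expand the open node with the smallest
    estimated cost.  A heap of (estimate, node) pairs replaces the
    sorted list of the textbook scheme (Steps 1-4)."""
    heap = [(estimate(root), root)]
    while heap:
        _, node = heapq.heappop(heap)        # globally best open node
        if is_goal(node):
            return node
        for child in children(node):
            heapq.heappush(heap, (estimate(child), child))
    return None
```

Unlike hill climbing, which only compares the children of the current node, the heap compares all open nodes, which is exactly the "global view" noted above.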
Best-First search scheme for the 8-puzzle problem
Figure 5-16: An 8-puzzle problem solved by the best-first search scheme. Each node shows a board state annotated with its f(n) value; nodes are numbered 1-11 in expanding order, and the goal node is reached at step 11. (tree drawing omitted; Ex 5.4)
Summaries
Comparison
。BFS
。DFS
。Hill Climbing
。Best-First Search
Usefulness
。In general, these strategies are only used for solving decision problems
。For optimization problems, we can use the branch-and-bound strategy or the A* algorithm
Note: Decision Problems vs. Optimization Problems
Decision problem
。Any problem for which the answer is either zero or one is
called a decision problem. An algorithm for a decision
problem is termed a decision algorithm.
Optimization problem
。Any problem that involves the identification of an optimal
(either minimum or maximum) value of a given cost function
is known as an optimization problem. An optimization
algorithm is used to solve an optimization problem.
5.5 The Branch-and-Bound Strategy
What ?
。 The previous strategies apply to decision problems only
。 The branch-and-bound strategy can be used to solve optimization problems
Branch-and-bound strategy
• It maintains a feasible solution as an upper bound for a minimization problem (a lower bound for a maximization problem). All nodes with cost > upper bound are discarded.
• When we expand a node, we select as the next expanded node the one whose feasible solution is the least (best-first search strategy). This feasible solution becomes a new upper bound for the minimization problem.
• How to obtain a "bound" (one feasible solution): use DFS, hill climbing, or best-first search to find it. The upper bound, the cost of the best feasible solution found so far for the minimization problem, is used to reduce the branching space.
A multi-stage graph searching problem
。Given a multi-stage graph and a source s and a destination t
in it, the problem is to find the shortest path from s to t.
。The problem cannot be solved by the greedy method (see Chapter 3), but it can be solved by dynamic programming (Chapter 7)
The branch-and-bound strategy
• An instance: a four-stage graph from source v0 to destination v3, with stage nodes v1,* and v2,* and edge costs as labelled (for example, costs 1, 2, 3 on the edges leaving v0). Following the cheapest edges first, a feasible solution is found whose cost is 5. (graph drawing omitted)
• An illustration of the branch-and-bound strategy (Cont.)
– Use the hill climbing scheme as the searching strategy. Expanding the remaining stage-2 branches yields bounds x > 6, x > 7, x > 6, and x > 9 on their best possible costs, so all of them are bounded away.
Figure 5-19: An illustration of the branch-and-bound strategy. An optimal solution is found whose cost is 5. (graph drawing omitted; Ex 5.5)
• Any solution with cost > 5 cannot be an optimal solution.
• Any node with cost > 5 will be terminated.
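The pruning rule illustrated in Figure 5-19 can be sketched as follows: keep the best feasible cost found so far as an upper bound and cut any partial path that already costs at least as much. The graph encoding (node → list of (successor, cost) pairs) is an assumption for illustration.

```python
def branch_and_bound_shortest(graph, source, target):
    """Branch-and-bound for a shortest source-to-target path."""
    best = [float("inf")]                 # current upper bound

    def expand(node, cost):
        if cost >= best[0]:               # bound: cannot beat the best tour
            return
        if node == target:
            best[0] = cost                # feasible solution -> tighter bound
            return
        for nxt, w in graph.get(node, []):
            expand(nxt, cost + w)         # branch into each successor

    expand(source, 0)
    return best[0]
```

The first feasible path found plays the role of the cost-5 solution on the slide: every later branch whose partial cost reaches the bound is terminated without being fully expanded.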
*5.6 The Personnel Assignment Problem
What ?
。 A linearly ordered set of persons P = {P1, P2, ..., Pn}, where P1 < P2 < ... < Pn
。 A partially ordered set of jobs J = {J1, J2, ..., Jn}
。 Suppose that Pi and Pj are assigned to jobs f(Pi) and f(Pj), respectively.
• The assignment is feasible only if f(Pi) ≤ f(Pj) implies Pi ≤ Pj.
• Cost Cij is the cost (pay) of assigning Pi to Jj. We want to find a feasible assignment with the minimum cost (pay), i.e., let Xij = 1 if Pi is assigned to Jj and Xij = 0 otherwise, and
Minimize Σi,j Cij Xij
Topological sorting
。Let S = {S1, S2, ..., Sn} be a partially ordered set. A linear sequence S1, S2, ..., Sn is topologically sorted w.r.t. S if Si ≤ Sj implies that Si is located before Sj.
Example:
Figure 5-20: A partial ordering on elements 1-9 (drawing omitted)
• A topologically sorted sequence is 1, 3, 7, 4, 9, 2, 5, 8, 6
• The topologically sorted sequence is not unique
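Topological sorting as described above can be sketched with Kahn's algorithm: repeatedly output an element with no remaining predecessors. The vertex list and the set of (u, v) pairs meaning u ≤ v are illustrative encodings, not the textbook's.

```python
from collections import deque

def topological_sort(vertices, edges):
    """Return one topologically sorted sequence, or None if a cycle exists."""
    indeg = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in edges:                    # u must come before v
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)                   # u has no unprinted predecessor
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order if len(order) == len(vertices) else None
```

Whenever several elements are simultaneously "ready", any of them may be emitted next, which is why the topologically sorted sequence is not unique.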
The Personnel Assignment Problem
。Let P1→Jk1, P2→Jk2, ..., Pn→Jkn be a feasible solution.
Then, Jk1, Jk2, ..., Jkn must be a topologically sorted sequence.
Example: Consider P = {P1, P2, P3, P4} and J = {J1, J2, J3, J4}
Figure 5-21: A partial ordering of jobs (drawing omitted)
• After topological sorting, one of the following topologically sorted sequences will be generated:
  J1, J2, J3, J4
  J1, J2, J4, J3
  J1, J3, J2, J4
  J2, J1, J3, J4
  J2, J1, J4, J3
• A feasible assignment: P1→J1, P2→J2, P3→J3, P4→J4
A Solution Tree
。All possible solutions can be represented by a solution tree: level i of the tree assigns a job to person i, and each root-to-leaf path is a topologically sorted job sequence. Each node is labelled with the accumulated cost of the partial assignment. (tree drawing omitted)
Cost matrix:
        J1  J2  J3  J4
  P1    29  19  17  12
  P2    32  30  26  28
  P3     3  21   7   9
  P4    18  13  10  15
With these raw costs, branch-and-bound on the solution tree prunes only one node away.
Reduced Cost Matrix
  Original:                    Reduced:
        J1  J2  J3  J4               J1  J2  J3  J4
  P1    29  19  17  12    (-12)  P1  17   4   5   0
  P2    32  30  26  28    (-26)  P2   6   1   0   2
  P3     3  21   7   9     (-3)  P3   0  15   4   6
  P4    18  13  10  15    (-10)  P4   8   0   0   5
                                    (column J2: -3)
How to obtain a reduced cost matrix
。A reduced cost matrix is obtained by subtracting a constant from each row and each column, respectively, such that each row and each column contains at least one zero. The total amount subtracted (here 12 + 26 + 3 + 10 + 3 = 54) is a lower bound on the cost of any feasible assignment.
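The row-then-column reduction can be sketched as below; on the slide's personnel cost matrix it reproduces both the reduced matrix and the lower bound of 54.

```python
def reduce_matrix(m):
    """Subtract each row's minimum, then each column's minimum.

    Returns (reduced matrix, total subtracted).  The total is a lower
    bound on the cost of any assignment (or tour) over the matrix.
    """
    m = [row[:] for row in m]                 # work on a copy
    total = 0
    for row in m:                             # row reduction
        lo = min(row)
        total += lo
        for j in range(len(row)):
            row[j] -= lo
    for j in range(len(m[0])):                # column reduction
        lo = min(row[j] for row in m)
        total += lo
        for row in m:
            row[j] -= lo
    return m, total
```

Subtracting a constant from a whole row (or column) lowers every complete assignment by the same amount, so the optimal solution is unchanged and the running total is a valid lower bound.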
Branch-and-Bound for the Personnel Assignment Problem
Reduced cost matrix:
        J1  J2  J3  J4
  P1    17   4   5   0
  P2     6   1   0   2
  P3     0  15   4   6
  P4     8   0   0   5
The root of the solution tree carries the lower bound 54 (the total reduction); branch-and-bound then expands the tree, accumulating reduced costs along each branch. (solution tree drawing omitted)
5.7 The Traveling Salesperson Optimization Problem
What ?
。 Given a directed graph with weights on edges, determine a minimum-cost closed tour (Hamiltonian cycle) if one exists
。 An NP-complete problem
Table 5-3: A cost matrix for a traveling salesperson problem (entry (i, j) is the cost of arc i→j; the diagonal is ∞):
  i\j    1    2    3    4    5    6    7
  1      ∞    3   93   13   33    9   57
  2      4    ∞   77   42   21   16   34
  3     45   17    ∞   36   16   28   25
  4     39   90   80    ∞   56    7   91
  5     28   46   88   33    ∞   25   57
  6      3   88   18   46   92    ∞    7
  7     44   26   33   27   84   39    ∞
(Ex 5.6)
Compute the reduced cost matrix
Subtract the minimum of each row (total reduced: 84):
  i\j    1    2    3    4    5    6    7   reduced
  1      ∞    0   90   10   30    6   54      3
  2      0    ∞   73   38   17   12   30      4
  3     29    1    ∞   20    0   12    9     16
  4     32   83   73    ∞   49    0   84      7
  5      3   21   63    8    ∞    0   32     25
  6      0   85   15   43   89    ∞    4      3
  7     18    0    7    1   58   13    ∞     26
Table 5-4: A reduced cost matrix for that in Table 5-3
Subtracting a constant from each row does not affect which tour is optimal.
Compute the reduced cost matrix (Cont.)
Then subtract the minimum of each column (columns 3, 4, 7 are reduced by 7, 1, 4; total reduced: 84 + 12 = 96, a lower bound on the cost of any tour):
  i\j    1    2    3    4    5    6    7
  1      ∞    0   83    9   30    3   50
  2      0    ∞   66   37   17   12   26
  3     29    1    ∞   19    0   12    5
  4     32   83   66    ∞   49    0   80
  5      3   21   56    7    ∞    0   28
  6      0   85    8   42   89    ∞    0
  7     18    0    0    0   58   13    ∞
(Table 5-5)
Branch-and-bound strategy
。Split the solutions into two groups:
• one group includes a particular arc
• the other excludes this arc
Example: (Table 5-5)
• include arc (4, 6): lower bound = 96
• exclude arc (4, 6): lower bound = 96 + 32 (since every tour in this group must include some arc (4, x) with x ≠ 6 and some arc (y, 6) with y ≠ 4, and the least costs of such arcs are 32 and 0)
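The extra cost of the "exclude" branch can be computed mechanically from the reduced matrix: the tour must still leave city i and enter city j by some other arc, so the bound grows by the cheapest remaining entry of row i plus the cheapest remaining entry of column j (32 + 0 when excluding arc (4, 6) on the slide). A minimal sketch, using a made-up 3×3 matrix in the test rather than Table 5-5:

```python
INF = float("inf")

def exclusion_penalty(m, i, j):
    """Lower-bound increase from excluding arc (i, j) in a reduced matrix.

    The tour still needs an arc (i, x) with x != j and an arc (y, j)
    with y != i; the cheapest such pair bounds the extra cost.
    """
    row_min = min(m[i][k] for k in range(len(m)) if k != j)   # leave i otherwise
    col_min = min(m[k][j] for k in range(len(m)) if k != i)   # enter j otherwise
    return row_min + col_min
```

Branching always picks the arc whose exclusion penalty is largest, so that one of the two children gets a bound as high as possible and can be pruned early.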
Branch-and-bound strategy (Cont.)
Example: (Table 5-5) (Cont.)
• The reduced cost matrix if arc (4, 6) is included in the solution.
• Arc (6, 4) is changed to infinity since it cannot be included in the solution. Note that row 4 and column 6 are eliminated.
  i\j    1    2    3    4    5    7
  1      ∞    0   83    9   30   50
  2      0    ∞   66   37   17   26
  3     29    1    ∞   19    0    5
  5      3   21   56    7    ∞   28
  6      0   85    8    ∞   89    0
  7     18    0    0    0   58    ∞
Table 5-6: A reduced cost matrix if arc (4, 6) is included
Branch-and-bound strategy (Cont.)
Example: (Table 5-5) (Cont.)
• The reduced cost matrix for all solutions with arc (4, 6): row 5 no longer contains a zero, so 3 more is subtracted from it, giving total reduced 96 + 3 = 99, a new lower bound.
  i\j    1    2    3    4    5    7
  1      ∞    0   83    9   30   50
  2      0    ∞   66   37   17   26
  3     29    1    ∞   19    0    5
  5      0   18   53    4    ∞   25
  6      0   85    8    ∞   89    0
  7     18    0    0    0   58    ∞
Branch-and-bound strategy (Cont.)
Figure 5-26: A branch-and-bound solution of a traveling salesperson problem. A node is expanded only while its lower bound is below the current upper bound; once a complete tour (an upper bound) is found, every node whose lower bound is not smaller is terminated. (tree drawing omitted)
5.8 The 0/1 Knapsack Problem
What ?
。 Given
• positive integers P1, P2, ..., Pn (profits of putting objects into the knapsack)
• W1, W2, ..., Wn (object weights)
• M (capacity of the knapsack)
。 Find X1, X2, ..., Xn, each Xi either 0 or 1, such that
maximize Σ_{i=1..n} Pi Xi
subject to Σ_{i=1..n} Wi Xi ≤ M.
(Equivalently, minimize −Σ Pi Xi; the bounds below are stated for this minimization form.)
Example: objects sorted so that Pi/Wi ≥ Pi+1/Wi+1:
  i    1   2   3   4   5   6
  Pi   6  10   4   5   6   4
  Wi  10  19   8  10  12   8
。A feasible solution: X1 = 1, X2 = 1, X3 = X4 = X5 = X6 = 0
• −(P1 + P2) = −(6 + 10) = −16 is an upper bound
• Any solution with value higher than −16 cannot be an optimal solution.
Relax the Restriction
。Relax the 0/1 restriction to 0 ≤ Xi ≤ 1 (fractions allowed). Let −Σ_{i=1..n} Pi Xi = Y' be an optimal solution for the relaxed problem, and let Y be an optimal solution for the original 0/1 knapsack problem. Then Y' ≤ Y, so the relaxed optimum is a lower bound for the 0/1 problem.
Upper Bound and Lower Bound
Lower bound
。We can use the greedy method to find an optimal solution of the relaxed knapsack problem (recall Section 3.7):
• M = 34
• X1 = 1, X2 = 1, X3 = 5/8, X4 = X5 = X6 = 0
• −(P1 + P2 + (5/8)P3) = −18.5 (lower bound)
−18 is our lower bound (profits are integers, so the bound can be rounded)
  i    1   2   3   4   5   6
  Pi   6  10   4   5   6   4
  Wi  10  19   8  10  12   8
Upper bound
。Consider the case X1 = 1, X2 = 0, X3 = 0, X4 = 0:
• Let X5 = 1, X6 = 1.
• Then an upper bound is obtained: −(P1 + P5 + P6) = −16
−16 is our initial upper bound
  i    1   2   3   4   5   6
  Pi   6  10   4   5   6   4
  Wi  10  19   8  10  12   8
Optimal solution
。−18 ≤ optimal solution ≤ −16
。optimal solution: X1 = 1, X2 = 0, X3 = 0, X4 = 1, X5 = 1, X6 = 0, with −(P1 + P4 + P5) = −17
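The greedy bound from the relaxed problem can be sketched as follows. The code returns the relaxed profit as a positive number (18.5 on the slide's instance); the slides negate it to −18.5 when phrasing the knapsack as a minimization problem.

```python
def fractional_bound(profits, weights, capacity):
    """Greedy bound for 0/1 knapsack via the relaxed (fractional) problem.

    Assumes items are pre-sorted by profit density Pi/Wi, descending.
    Fill greedily, taking a fraction of the first item that does not fit;
    the result bounds the profit of every 0/1 solution.
    """
    total = 0.0
    for p, w in zip(profits, weights):
        if w <= capacity:
            capacity -= w
            total += p                    # whole item fits
        else:
            total += p * capacity / w     # fractional part of one item
            break
    return total
```

Since profits are integers, the bound can be tightened by rounding: a relaxed profit of 18.5 means no 0/1 solution can exceed 18, which is the −18 bound on the slide.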
The Branching Mechanism
。Each node of the solution tree branches on one variable: Xi = 1 on one side and Xi = 0 on the other. (tree drawing omitted)
The Branch-and-Bound strategy
Expand the node with the best
lower bound
Step 2: Expand node A
(search tree drawing omitted; A's children D and E are generated via edges d (cost 2) and e (cost 3))
g(D) = 4                 g(E) = 5
h(D) = min{3, 1} = 1     h(E) = min{2, 2} = 2
f(D) = 4 + 1 = 5         f(E) = 5 + 2 = 7
Step 3: Expand node C
(search tree drawing omitted; C's children F and G are generated via edges f and k, each of cost 2)
g(F) = 5                 g(G) = 5
h(F) = min{3, 1} = 1     h(G) = min{5} = 5
f(F) = 5 + 1 = 6         f(G) = 5 + 5 = 10
Step 4: Expand node D
(search tree drawing omitted; D's children H and I are generated via edges h (cost 1) and i (cost 3))
g(H) = 5                 g(I) = 7
h(H) = min{5} = 5        h(I) = 0
f(H) = 5 + 5 = 10        f(I) = 7 + 0 = 7
Step 5: Expand node B
(search tree drawing omitted; B's child J is generated via edge g (cost 2))
g(J) = 6
h(J) = min{5} = 5
f(J) = 6 + 5 = 11
Step 6: Expand node F
(search tree drawing omitted; F's children K and L are generated via edges h (cost 1) and i (cost 3))
g(K) = 6                 g(L) = 8
h(K) = min{5} = 5        h(L) = 0
f(K) = 6 + 5 = 11        f(L) = 8 + 0 = 8
Step 7: Expand node I
(search tree drawing omitted)
Since node I is a goal node, we stop and return the optimal solution.
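The step-by-step expansion above, always picking the node with the least f(n) = g(n) + h(n), can be sketched as an A* skeleton. The graph and heuristic in the test are a made-up toy instance, not the slides' example; the heuristic is assumed never to overestimate the remaining cost.

```python
import heapq

def a_star(start, neighbors, h, is_goal):
    """A*: expand the open node with the least f(n) = g(n) + h(n).

    g(n) is the cost accumulated from the start; h(n) is an admissible
    estimate of the cost still to go.  Returns the optimal cost.
    """
    heap = [(h(start), 0, start)]            # entries: (f, g, node)
    best_g = {start: 0}
    while heap:
        f, g, node = heapq.heappop(heap)
        if is_goal(node):
            return g                         # optimal because h is admissible
        for nxt, w in neighbors(node):
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng             # found a cheaper way to nxt
                heapq.heappush(heap, (ng + h(nxt), ng, nxt))
    return None
```

With h ≡ 0 this degenerates to uniform-cost branch-and-bound; a tighter admissible h (like min over outgoing edges in the slides) prunes more nodes while still stopping at an optimal goal.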
*5.11 The Channel Routing Problem
Ordinary A*
。 When a goal node is selected for expansion, the algorithm stops.
A Specialized A*
。 When a goal node is first reached, the algorithm stops, since at a goal node t the estimate is exact: h(t) = h*(t).
。 Used to solve the channel routing problem
(diagram omitted: a path from the root to a goal node t with cost g(t) and h(t) = h*(t))
The Channel Routing Problem
A channel specification (figure omitted)
The channel routing problem
。Input: a channel, two rows of terminals, and a set of nets
。Output: a layout with the minimum number of tracks
。A layout must not contain illegal connections (horizontal and vertical constraints)
An Optimal Routing (figure omitted)
Observations
Horizontal constraints
。If nets x = (a, b) and y = (c, d) are in the same track and a < c, then net x must be wired to the left of net y.
。A horizontal constraint graph (HCG) records these constraints. (figure omitted)
Vertical constraint and Horizontal constraint graphs
。While the VCG gives us information about which nets can be assigned next, the HCG informs us which nets can be assigned to the same track.
Example:
• From Figure 5-43 (VCG), we know that nets 1, 3, 5, 7, and 8 may be assigned.
• Consulting Figure 5-42 (HCG), we note that among nets 1, 3, 5, 7, and 8 there are 3 maximal cliques: {1, 8}, {1, 3, 7}, and {5, 7}. Each maximal clique can be assigned to a track.
Maximal clique
。A clique of a graph is a subgraph in which every pair of vertices is connected.
。A maximal clique is a clique whose size cannot be made larger.
。The maximum clique problem is to determine the size of a largest clique in a graph; it is NP-complete (Exercise 8.7).
Example (graph drawing omitted):
  maximal cliques: {a, b}, {a, c, d}, {c, d, e, f}
  maximum (largest) clique: {c, d, e, f}
A* algorithm
。Expand the subtree in Figure 5-44:
• After nets 7, 3, 1 are assigned, consulting the VCG in Figure 5-43, the nets which can be assigned become nets 2, 4, 5, 8
• After nets 8, 1 are assigned, consulting the VCG in Figure 5-43, nets 2, 3, 5, 7 can be assigned
A* algorithm (Cont.)
。Expand the subtree in Figure 5-44: (Cont.)
• Then, after nets 7, 3, 1 are assigned, consulting the HCG in Figure 5-42, the maximal cliques in {2, 4, 5, 8} are {4, 2}, {8, 2}, and {5}.
• Thus, if we only expand node {7, 3, 1}, the solutions become those shown in Figure 5-45: after assigning a track, the children are {4, 2}, {8, 2}, and {5}.
A* algorithm (Cont.)
Figure 5-46: A partial solution tree for the channel routing problem using the A* algorithm (drawing omitted)
A* algorithm (Cont.)
An optimal routing (figure omitted)
End of Chapter 5