COMP20007 Design of Algorithms


Analysis of Algorithms

Lars Kulik

Lecture 4

Semester 1, 2022

Establishing Growth Rate

In the last lecture we proved t(n) ∈ O(g(n)) for some cases of t and g, using the definition of O directly:

$$n > n_0 \Rightarrow t(n) < c \cdot g(n)$$

for some c and n₀. A more common approach uses the limit

$$\lim_{n\to\infty} \frac{t(n)}{g(n)} = \begin{cases} 0 & \text{implies } t \text{ grows asymptotically slower than } g \\ c & \text{implies } t \text{ and } g \text{ have the same order of growth} \\ \infty & \text{implies } t \text{ grows asymptotically faster than } g \end{cases}$$

Use this to show that 1000n ∈ O(n²).
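One possible working, applying the limit criterion above:

$$\lim_{n\to\infty} \frac{1000n}{n^2} = \lim_{n\to\infty} \frac{1000}{n} = 0$$

so 1000n grows asymptotically slower than n², and in particular 1000n ∈ O(n²).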


L’Hôpital’s Rule

Often it is helpful to use L’Hôpital’s rule:

$$\lim_{n\to\infty} \frac{t(n)}{g(n)} = \lim_{n\to\infty} \frac{t'(n)}{g'(n)}$$

where t′ and g′ are the derivatives of t and g.



For example, we can show that log₂ n grows slower than √n:

$$\lim_{n\to\infty} \frac{\log_2 n}{\sqrt{n}} = \lim_{n\to\infty} \frac{(\log_2 e)\,\frac{1}{n}}{\frac{1}{2\sqrt{n}}} = 2\log_2 e \cdot \lim_{n\to\infty} \frac{1}{\sqrt{n}} = 0$$

Induction Trap (Polya)

• A(n): All horses are the same colour


• Base case: A(1) is trivially true (only one horse)
• Assume in a set of n horses, all are the same colour
• For a set of n + 1 horses, take the subsets {1, . . . , n} and
{2, . . . , n + 1}.
• Both subsets are of size n, so all horses are the same colour in
each subset (by the inductive hypothesis).
• Since n − 1 of the horses are the same in both sets, the horses
in both sets must be all the same colour, hence all n + 1 horses
are the same colour.
• What went wrong?

Example: Finding the Largest Element in a List

function MaxElement(A[0..n − 1])
    max ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > max then
            max ← A[i]
    return max

We count the number of comparisons executed for a list of size n:


$$C(n) = \sum_{i=1}^{n-1} 1 = n - 1 = \Theta(n)$$
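A direct Python transcription of MaxElement (a sketch; the comparison counter is an illustrative addition, used only to confirm the count above):

def max_element(A):
    """Return (largest element of non-empty list A, number of key comparisons)."""
    max_val = A[0]
    comparisons = 0
    for i in range(1, len(A)):
        comparisons += 1            # one key comparison per loop iteration
        if A[i] > max_val:
            max_val = A[i]
    return max_val, comparisons

# e.g. max_element([3, 1, 4, 1, 5]) == (5, 4), i.e. n - 1 = 4 comparisons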

Example: Selection Sort

function SelSort(A[0..n − 1])
    for i ← 0 to n − 2 do
        min ← i
        for j ← i + 1 to n − 1 do
            if A[j] < A[min] then
                min ← j
        swap A[i] and A[min]

We count the number of comparisons executed for a list of size n:


$$C(n) = \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1 = \sum_{i=0}^{n-2} (n - 1 - i) = (n-1)^2 - \sum_{i=0}^{n-2} i$$

$$= (n-1)^2 - \frac{(n-2)(n-1)}{2} = \frac{n(n-1)}{2} = \Theta(n^2)$$
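A Python version of SelSort (a sketch; the counter is added only to confirm the comparison count derived above):

def sel_sort(A):
    """Sort A in place by selection sort; return the number of key comparisons."""
    n = len(A)
    comparisons = 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1                   # one key comparison
            if A[j] < A[min_idx]:
                min_idx = j
        A[i], A[min_idx] = A[min_idx], A[i]    # swap A[i] and A[min]
    return comparisons

# e.g. sel_sort([5, 2, 4, 1]) == 6, matching n(n - 1)/2 for n = 4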
Example: Matrix Multiplication

function MatrixMult(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
    for i ← 0 to n − 1 do
        for j ← 0 to n − 1 do
            C[i, j] ← 0.0
            for k ← 0 to n − 1 do
                C[i, j] ← C[i, j] + A[i, k] · B[k, j]
    return C

The number of multiplications executed for two n × n matrices is:


$$M(n) = \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} 1 = n^3 = \Theta(n^3)$$
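A Python version of MatrixMult (a sketch; the multiplication counter is an illustrative addition):

def matrix_mult(A, B):
    """Multiply n x n matrices given as lists of lists; return (C, multiplications)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1         # one scalar multiplication per innermost step

    return C, mults

# For n = 2: matrix_mult([[1, 0], [0, 1]], [[2, 3], [4, 5]]) returns
# ([[2.0, 3.0], [4.0, 5.0]], 8), and 8 = 2^3 as predicted.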
Analysing Recursive Algorithms

Let us start with a simple example:


function F(n)
    if n = 0 then return 1
    else return F(n − 1) · n

The basic operation here is the multiplication.

We express the cost recursively as well:

M(0) = 0
M(n) = M(n − 1) + 1 for n > 0

To find a closed form, that is, one without recursion, we usually try
“telescoping”, or “backward substitutions” in the recursive part.
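A recursive Python sketch of F that also returns the multiplication count (pairing the value with the count is an illustrative addition, used to check the recurrence):

def F(n):
    """Return (n!, number of multiplications performed)."""
    if n == 0:
        return 1, 0                     # base case: M(0) = 0
    value, mults = F(n - 1)
    return value * n, mults + 1         # one multiplication: M(n) = M(n-1) + 1

# e.g. F(5) == (120, 5), matching the closed form M(n) = n derived by
# telescoping below.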

Telescoping

The recursive equation was:

M(n) = M(n − 1) + 1 (for n > 0)

Use the fact M(n − 1) = M(n − 2) + 1 to expand the right-hand side:

$$M(n) = [M(n - 2) + 1] + 1 = M(n - 2) + 2$$

and keep going:

$$\cdots = [M(n - 3) + 1] + 2 = M(n - 3) + 3 = \cdots = M(n - n) + n = n$$

where we used the base case M(0) = 0 to finish.

A Second Example: Binary Search in Sorted Array

function BinSearch(A[], lo, hi, key)
    if lo > hi then return −1
    mid ← lo + ⌊(hi − lo)/2⌋
    if A[mid] = key then return mid
    else
        if A[mid] > key then
            return BinSearch(A, lo, mid − 1, key)
        else return BinSearch(A, mid + 1, hi, key)

The basic operation is the key comparison. The cost, recursively, in the worst case:

C(0) = 0
C(n) = C(n/2) + 1 for n > 0
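A recursive Python sketch of BinSearch (the comparison counter is an illustrative addition; the = and > tests at one level are counted as a single three-way key comparison, as in the recurrence):

def bin_search(A, lo, hi, key):
    """Search sorted A[lo..hi] for key; return (index or -1, comparisons)."""
    if lo > hi:
        return -1, 0
    mid = lo + (hi - lo) // 2
    if A[mid] == key:
        return mid, 1
    if A[mid] > key:
        idx, c = bin_search(A, lo, mid - 1, key)
    else:
        idx, c = bin_search(A, mid + 1, hi, key)
    return idx, c + 1       # one three-way comparison at this level

# e.g. bin_search([1, 3, 5, 7, 9], 0, 4, 7) == (3, 2)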

Telescoping

A smoothness rule allows us to assume that n is a power of 2.


The recursive equation was:
C (n) = C (n/2) + 1 (for n > 0)

Use the fact C(n/2) = C(n/4) + 1 to expand, and keep going:

$$\begin{aligned}
C(n) &= C(n/2) + 1 \\
&= [C(n/4) + 1] + 1 \\
&= [[C(n/8) + 1] + 1] + 1 \\
&\ \ \vdots \\
&= \underbrace{[[\cdots[[C(0) + 1] + 1] + \cdots + 1] + 1]}_{1 + \log_2 n \text{ times}}
\end{aligned}$$

Hence C(n) = Θ(log n).


Logarithmic Functions Have Same Rate of Growth

In O-expressions we can just write "log" for any logarithmic function, no matter what its base is.

Asymptotically, all logarithmic behaviour is the same, since

$$\log_a x = (\log_a b)(\log_b x)$$

So, for example, if ln is the natural logarithm then

$$\log_2 n \in O(\ln n) \qquad \text{and} \qquad \ln n \in O(\log_2 n)$$

Also note that since log n^c = c · log n, we have, for all constants c,

$$\log n^c = O(\log n)$$

Summarising Reasoning with Big-Oh

O(f (n)) + O(g (n)) = O(max{f (n), g (n)})

c · O(f (n)) = O(f (n))

O(f (n)) · O(g (n)) = O(f (n) · g (n)).

The first equation justifies throwing smaller summands away.

The second says that constants can be thrown away too.

The third may be used with some nested loops. Suppose we have a
loop which is executed O(f (n)) times, and each execution takes
time O(g (n)). Then the execution of the loop takes time
O(f (n) · g (n)).
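For instance, here is a hypothetical loop nest in Python (the function and its step counter are illustrative, not from the lecture): the outer loop executes O(n) times and each run of the inner loop takes O(log n) time, so the third rule gives O(n log n) overall.

def nested_example(n):
    """Illustrates O(f(n)) * O(g(n)): outer O(n) iterations, inner O(log n)."""
    steps = 0
    for i in range(n):        # executed O(n) times
        j = n
        while j > 0:          # about log2(n) + 1 iterations
            steps += 1
            j //= 2           # halve j each time
    return steps              # roughly n * (log2(n) + 1) in total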
Some Useful Formulas

From Stirling's formula:

$$n! = O\left(n^{n + \frac{1}{2}}\right)$$

Some useful sums:

$$\sum_{i=0}^{n} i^2 = \frac{n}{3}\left(n + \frac{1}{2}\right)(n + 1)$$

$$\sum_{i=0}^{n} (2i + 1) = (n + 1)^2$$

$$\sum_{i=1}^{n} \frac{1}{i} = O(\log n)$$
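A quick numeric check of the two exact closed forms above (an illustrative Python snippet):

# Verify the sum formulas for a small n.
n = 10
assert sum(i**2 for i in range(n + 1)) == n * (n + 0.5) * (n + 1) / 3
assert sum(2 * i + 1 for i in range(n + 1)) == (n + 1) ** 2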

See also Levitin’s Appendix A.

Levitin’s Appendix B is a tutorial on recurrence relations.

The Road Ahead

You will become much more familiar with asymptotic analysis as we use it on algorithms that we meet.

We shall begin the study of algorithms by looking at brute force approaches.
