MCS-031 Introduction to Algorithmics

Elementary Algorithmics

                                                  Page Nos.
1.0   Introduction                                     7
1.1   Objectives                                       9
1.2   Example of an Algorithm                          9
1.3   Problems and Instances                          10
1.4   Characteristics of an Algorithm                 12
1.5   Problems, Available Tools & Algorithms          14
1.6   Building Blocks of Algorithms                   17
1.7   Outline of Algorithmics                         27
1.8   Summary                                         32
1.9   Solutions/Answers                               33
1.10  Further Readings                                40
1.0 INTRODUCTION
We are constantly involved in solving problem. The problems may concern our
survival in a competitive and hostile environment, may concern our curiosity to know
more and more of various facets of nature or may be about any other issues of interest
to us. Problem may be a state of mind of a living being, of not being satisfied with
some situation. However, for our purpose, we may take the unsatisfactory/
unacceptable/ undesirable situation itself, as a problem.
One way of looking at a possible solution of a problem is as a sequence of activities (if such a sequence exists at all) that, if carried out using allowed/available tools, leads us from the unsatisfactory (initial) position to an acceptable, satisfactory or desired position. For example, the solution of the problem of baking a delicious pudding may be thought of as a sequence of activities that, when carried out, gives us the pudding (the desired state) from the raw materials that may include sugar, flour and water (constituting the initial position), using cooking gas, an oven and some utensils etc. (the tools). The sequence of activities, when carried out, gives rise to a process.
Technically, the statement or description, in some notation, of the process is called an algorithm, the raw materials are called the inputs and the resulting entity (in the above case, the pudding) is called the output. In view of the importance of the concept of algorithm, we repeat: an algorithm is a description, in some notation, of a definite sequence of activities that transforms the given inputs into the desired outputs.
1.1 OBJECTIVES
After going through this Unit, you should be able to:
E2. {Is r zero?} If r = 0, then stop; the current value of n is the required g.c.d.
E3. {Interchange}. Let the new value of m be the current value of n and the new value of n be the current value of r. Go back to Step E1.
The termination of the above method is guaranteed, as the value of n strictly decreases in each iteration, so that r must become zero in a finite number of repetitions of Steps E1, E2 and E3.
The great Greek mathematician Euclid, sometime between the fourth and third century BC, at least knew, and may have been the first to suggest, the above algorithm. The algorithm is considered to be among the first non-trivial algorithms. However, the word algorithm itself came into usage quite late. The word is derived from the name of the Persian mathematician Mohammed al-Khwarizmi, who lived during the ninth century A.D. The word al-Khwarizmi, when written in Latin, became Algorismus, from which algorithm is a small step away.
In order to familiarise ourselves with the notation usually used to express algorithms, next, we express Euclid's Algorithm in a pseudo-code notation which is closer to a programming language.
Algorithm GCD-Euclid (m, n)
{This algorithm computes the greatest common divisor of two given positive integers}
begin {of algorithm}
while n ≠ 0 do
begin {of while loop}
  r ← m mod n;
  {a new variable r is used to store the remainder which is obtained by dividing m by n, with 0 ≤ r < n}
  m ← n;
  {the value of n is assigned as the new value of m; but at this stage the value of n remains unchanged}
  n ← r;
  {the value of r becomes the new value of n; the value of r remains unchanged}
end {of while loop}
return (m).
end; {of algorithm}
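The pseudo-code above translates almost line for line into a modern programming language; a minimal Python sketch (the function name is ours):

```python
def gcd_euclid(m: int, n: int) -> int:
    """Greatest common divisor of two positive integers,
    following the while-loop of GCD-Euclid above."""
    while n != 0:
        r = m % n   # remainder of dividing m by n, with 0 <= r < n
        m = n       # interchange: the old n becomes the new m ...
        n = r       # ... and the remainder becomes the new n
    return m        # when n reaches 0, m holds the g.c.d.

print(gcd_euclid(48, 18))  # 6
```

Note that when the loop terminates (n = 0), the g.c.d. is the value left in m.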
The general quadratic equation is

ax² + bx + c = 0, with a ≠ 0,                          (1.3.1)

whose roots are given by the well-known formula

x = (− b ± √(b² − 4ac)) / 2a.                          (1.3.2)

For the particular equation

3x² + 4x + 1 = 0,                                      (1.3.3)

the formula gives

x = (− 4 ± √(4² − 4 · 3 · 1)) / (2 · 3) = (− 4 ± 2) / 6, i.e., − 1/3 or − 1.
With reference to the above discussion, the issue of finding the roots of the general quadratic equation ax² + bx + c = 0, with a ≠ 0, is called a problem, whereas the issue of finding the roots of the particular equation
3x² + 4x + 1 = 0
is called a question or an instance of the (general) problem.
In general, a problem may have a large, possibly infinite, number of instances. The above-mentioned problem of finding the roots of the quadratic equation
ax² + bx + c = 0
with a ≠ 0, and b and c as real numbers, has infinitely many instances, each obtained by giving some specific real values to a, b and c, taking care that the value assigned to a is not zero. However, not all problems are of this generic nature. For some problems, there may be only one instance/question corresponding to each problem. For example, the problem of finding out the largest integer that can be stored in, or arithmetically operated on in, a given computer, is a single-instance problem. Many of the interesting problems, like the ones given below, are just single-instance problems.
Problem (i): Crossing a river in a boat which can carry, at one time, along with the boatman, only one of a wolf, a horse and a bundle of grass, in such a way that neither does the wolf harm the horse nor does the horse eat the grass. In the presence of the boatman, neither does the wolf attack the horse, nor does the horse attempt to eat the grass.
Problem (ii): The Four-Colour Problem∗, which requires us to find out whether a political map of the world can be drawn using only four colours, so that no two adjacent countries get the same colour.
The problem may be further understood through the following explanation. Suppose we are preparing a coloured map of the world and we use green colour for the terrestrial part of India. Another country is a neighbour of a given country if it has some boundary in common with it. For example, according to this definition, Pakistan, Bangladesh and Myanmar (or Burma) are some of the countries which are India's neighbours. Then, in the map, for all the neighbours of India, including Pakistan, Bangladesh and Myanmar, we cannot use green colour. The problem is to show that the minimum number of colours required is four, so that we are able to colour the map of the world under the restrictions of the problem.
Problem (iii): Fermat's Last Theorem: which requires us to show that there do not exist positive integers a, b, c and n such that
aⁿ + bⁿ = cⁿ
with n ≥ 3.
The problem has a very fascinating history. Its origin lies in the simple observation that the equation
x² + y² = z²
has a number of solutions in which x, y and z all are integers. For example, for x = 3, y = 4, z = 5, the equation is satisfied. The fact was also noticed by the great mathematician Pierre de Fermat (1601–1665). But, like all great intellectuals, he looked at the problem from a different perspective. Fermat felt and claimed that for all integers n ≥ 3, the equation
xⁿ + yⁿ = zⁿ
has no non-trivial# solution in which x, y and z are all positive integers. And he jotted down the above claim in a margin of a book without any details of the proof.
However, for more than the next 300 years, mathematicians could not produce any convincing proof of Fermat's then conjecture, now a theorem. Ultimately, the proof was given by Andrew Wiles in 1994, and it is based on sophisticated modern mathematics.
Problem (iv): On the basis of another generalisation of the problem of finding integral solutions of x² + y² = z², the great Swiss mathematician Leonhard Euler conjectured that for n ≥ 3, the sum of (n − 1) number of nth powers of positive integers cannot be an nth power of an integer. For a long time the conjecture could neither be refuted nor proved. However, in 1966, L.J. Lander and T.R. Parkin found a counter-example for n = 5, by showing that 27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵.

∗ The origin of the Four-Colour Conjecture may be traced to an observation by Francis Guthrie, a student of Augustus De Morgan (of De Morgan's Theorem fame), who noticed that all the counties (sort of parliamentary constituencies in our country) of England could be coloured using four colours so that no adjacent counties were assigned the same colour. De Morgan publicised the problem throughout the mathematical community. Leaving aside the problem of the parallel postulate and the problem in respect of Fermat's Last Theorem, this problem has perhaps been the most fascinating and tantalising one for mathematicians, remaining unsolved for more than one hundred years. Ultimately, the problem was solved in 1976 by two American mathematicians, Kenneth Appel and Wolfgang Haken. However, the proof is based on a computer program written for the purpose, that took 1000 hours of computer time (in 1976). Hence, the solution generated, among mathematicians, a controversy in the sense that many mathematicians feel such a long program, requiring 1000 hours of computer time in execution, may have logical and other bugs and hence cannot be a reliable basis for the proof of a conjecture.

# One solution, of course, is given by x = 0 = y = z, though x, y and z, being zero, are not positive.
Coming back to the problem of finding the roots of a quadratic equation, it can be easily seen that in finding the roots of a quadratic equation, the only operations that have been used are plus, minus, multiplication and division of numbers, along with the operation of finding out the square root of a number. Using only these operations, it is also possible, through a step-by-step method, to find the roots of a cubic equation over the real numbers, which, in general, is of the form
ax³ + bx² + cx + d = 0,
where a ≠ 0.
Further, using only the set of operations mentioned above, it is also possible, through a step-by-step method, to solve a biquadratic equation over the real numbers, which, in general, is of the form
ax⁴ + bx³ + cx² + dx + e = 0,
where a ≠ 0.
However, the problem of finding the roots of a general equation of degree five or more cannot be solved, using only the operations mentioned above, through a step-by-step method, i.e., cannot be solved algorithmically.
In such cases, we may attempt some non-algorithmic methods, including solutions based on numerical methods, which may not give exact but reasonably good approximate solutions to such problems. Or we may just use some hit-and-trial method, e.g., one consisting of guessing a possible root and then verifying the guess as a possible solution, by actually substituting the guessed root in the given equation. A hit-and-trial method is not an algorithm, because we cannot guarantee the termination of the method, whereas, as discussed later, termination is one of the characteristic properties of an algorithm.
It may be noted that though a (general) problem, like finding the roots of an equation of degree 5 or more, may not be solvable algorithmically, i.e., through some step-by-step method, it is still possible for some (particular) instances of the problem to have algorithmic solutions. For example, the roots of the equation
x⁵ − 32 = 0
are easily available through a step-by-step method. Also, the roots of the equation
2x⁶ − 3x³ + 1 = 0
can be easily found through a method in which, to begin with, we may take y = x³, reducing the problem to that of solving the quadratic equation 2y² − 3y + 1 = 0.
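The substitution can be checked numerically; a Python sketch (restricted to the real roots) using only the quadratic formula and a cube root:

```python
import math

# Solve 2x^6 - 3x^3 + 1 = 0 by the substitution y = x^3,
# which reduces it to the quadratic 2y^2 - 3y + 1 = 0.
a, b, c = 2.0, -3.0, 1.0
disc = b * b - 4 * a * c                      # discriminant = 1
ys = [(-b + math.sqrt(disc)) / (2 * a),       # y = 1
      (-b - math.sqrt(disc)) / (2 * a)]       # y = 1/2
# each real y gives one real root x, the (real) cube root of y
xs = [math.copysign(abs(y) ** (1 / 3), y) for y in ys]
print(xs)   # approximately [1.0, 0.7937]
```

Substituting either value of x back into 2x⁶ − 3x³ + 1 gives 0, up to floating-point error.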
Ex. 1) Give at least three examples of problems, each one of which has only finitely
many instances.
Hint: Structures over Boolean set {0, 1} may be good sources for such examples.
{in the above, the symbol ← denotes that the value on its R.H.S. is assigned to the variable on its L.H.S. Detailed discussion under (i) of Section 1.6.1}
All the steps, like tossing the coin etc., can be (effectively) carried out. However, the method is not definite, as two different executions may yield different outputs.
3. Inputs: An algorithm has zero or more, but only a finite, number of inputs.
Examples of algorithms requiring zero inputs:
(i) Print the largest integer, say MAX, representable in the computer system being used.
(ii) Print the ASCII code of each of the letters in the alphabet of the computer system being used.
(iii) Find the sum S of the form 1 + 2 + 3 + …, where S is the largest integer less than or equal to MAX defined in Example (i) above.
4. Output: An algorithm has one or more outputs. The requirement of at least one output is obviously essential, because, otherwise, we cannot know the answer/solution provided by the algorithm.
The outputs have a specific relation to the inputs, where the relation is defined by the algorithm.
5. Effectiveness: An algorithm should be effective. This means that each of the operations to be performed in an algorithm must be sufficiently basic that it can, in principle, be done exactly and in a finite length of time, by a person using pencil and paper. It may be noted that the FINITENESS condition is a special case of EFFECTIVENESS: if a sequence of steps is not finite, then it cannot be effective either.
A method may be designed which is a definite sequence of actions but is not finite (and hence not effective).
Example 1.4.2: If the following instruction is a part of an algorithm:
Find the exact value of e using the formula
e = 1 + 1/1! + 1/2! + 1/3! + …
and add it to x,
then the method is not effective, because finding the exact value of e requires adding infinitely many terms, which cannot be done in a finite length of time.

∗ There are some methods which are not definite, but are still called algorithms, viz., Monte Carlo algorithms in particular and probabilistic algorithms in general. However, we restrict our algorithms to those methods which are definite, along with the other four characteristics. In other cases, the full name of the method, viz., probabilistic algorithm, is used.
Ex. 2) For each of the following, give one example of a method which is not an algorithm, because
(i) the method is not finite;
(ii) the method is not definite;
(iii) the method is not effective but is finite.
For this algorithm, we assume that the only arithmetic capabilities the system is
endowed with, are
(i)
(ii)
Step 2: For each new mark on the First Portion, make m new marks on the Second
Portion.
Elementary Algorithmics
Step 3: Count the number of marks in First Portion. If the count equals n, then count
the number of all marks in the Second Portion and return the last count as the result.
However, if the count in the First Portion is less than n, then make one more mark in
the First Portion and go to Step 2.
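Steps 2 and 3 above can be simulated directly; a Python sketch, assuming (as the elided Step 1 presumably states) that both portions start empty, and using nothing beyond counting by one:

```python
def multiply_by_marks(m: int, n: int) -> int:
    """Multiply two positive integers using only counting,
    mimicking the marks-on-two-portions method above."""
    first = 0     # marks in the First Portion
    second = 0    # marks in the Second Portion
    while first < n:          # Step 3: stop once the count equals n
        first += 1            # make one more mark in the First Portion
        for _ in range(m):    # Step 2: m new marks in the Second Portion
            second += 1
    return second             # total marks in the Second Portion = m * n

print(multiply_by_marks(4, 3))  # 12
```

Each of the n marks in the First Portion contributes m marks to the Second Portion, so the final count is m · n.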
Third Algorithm:
If m is odd, then
m · n = ((m − 1) + 1) · n = (m − 1) · n + n = ((m − 1)/2) · (2n) + n,
where (m − 1)/2 is an integer, as m is an odd integer.
For example, for m = 7 and n = 11,
m · n = 7 · 11 = ((7 − 1) + 1) · 11 = (7 − 1) · 11 + 11 = ((7 − 1)/2) · (2 · 11) + 11 = 3 · 22 + 11 = 77.
Therefore, if at some stage m is even, we halve m and double n and multiply the two numbers so obtained and repeat the process. But, if m is odd at some stage, then we halve (m − 1), double n and multiply the two numbers so obtained, and then add, to the product so obtained, the value of n.
Next, we describe the à la russe method/algorithm.
The algorithm, which uses four variables, viz., First, Second, Remainder and Partial-Result, may be described as follows:
Step 1: Initialize the variables First, Second and Partial-Result respectively with m (the first given number), n (the second given number) and 0.
Step 2: If First or Second is zero, return Partial-Result as the final result and then stop.
Else, set the value of the Remainder as 1 if First is odd, else set Remainder as 0. If Remainder is 1, then add Second to Partial-Result to get the new value of Partial-Result.
Step 3: The new value of First is the quotient obtained on (integer) division of the current value of First by 2. The new value of Second is obtained by multiplying Second by 2. Go to Step 2.
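Steps 1 to 3 can be sketched directly in Python (the function name is ours):

```python
def ala_russe(m: int, n: int) -> int:
    """Multiplication a la russe, following Steps 1-3 above."""
    first, second, partial_result = m, n, 0   # Step 1: initialise
    while first != 0 and second != 0:         # Step 2: stop on zero
        if first % 2 == 1:                    # Remainder is 1: First is odd
            partial_result += second          # add Second to Partial-Result
        first //= 2                           # Step 3: halve First ...
        second *= 2                           # ... and double Second
    return partial_result

print(ala_russe(45, 19))  # 855
```

The only arithmetic used is halving, doubling, testing for oddness and addition, which is what makes the method attractive on very limited hardware.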
Example 1.5.1: The logic behind the à la russe method, consisting of Step 1, Step 2 and Step 3 given above, may be better understood, in addition to the argument given in the box above, through the following trace for First = 45 and Second = 19:

                First    Second    Remainder    Partial-Result
Initially:        45        19                        0
Step 2: As the value of First ≠ 0, continue; First is odd, so Remainder = 1 and Partial-Result = 0 + 19 = 19
Step 3:           22        38
Step 2: Value of First ≠ 0, continue; First is even, so Remainder = 0
Step 3:           11        76
Step 2: Value of First ≠ 0, continue; First is odd, so Remainder = 1 and Partial-Result = 19 + 76 = 95
Step 3:            5       152
Step 2: Value of First ≠ 0, continue; First is odd, so Remainder = 1 and Partial-Result = 95 + 152 = 247
Step 3:            2       304
Step 2: Value of First ≠ 0, continue; First is even, so Remainder = 0
Step 3:            1       608
Step 2: Value of First ≠ 0, continue; First is odd, so Remainder = 1 and Partial-Result = 247 + 608 = 855
Step 3:            0      1216

Step 2: As the value of First is 0, the value 855 of Partial-Result is returned as the result, and the algorithm stops.
Ex. 3) A system is endowed with only the following arithmetic capabilities:
(i) that of counting,
(ii) that of comparing two integers w.r.t. the 'less than or equal to' relation, and
(iii) those of both multiplying and dividing by 2 as well as by 3.
Design an algorithm that multiplies two integers, and fully exploits the capabilities of the system. Using the algorithm, find the product.
1.6.1
j ← 2 * i + j − r;
It is assumed that each of the variables occurring on R.H.S. of the above statement,
has a value associated with it before the execution of the above statement. The
association of a value to a variable, whether occurring on L.H.S or on R.H.S, is made
according to the following rule:
For each variable name, say i, there is a unique location, say loc(i), in the main memory. Each location loc(i), at any point of time, contains a unique value, say v(i). Thus the value v(i) is associated with the variable i.
Using these values, the expression on R.H.S. is evaluated. The value so obtained is
the new value of the variable on L.H.S. This value is then stored as a new value of the
variable (in this case, j) on L.H.S. It may be noted that the variable on L.H.S (in this
case, j) may also occur on R.H.S of the assignment symbol.
In such cases, the value corresponding to the occurrence on R.H.S. (of j, in this case) is finally replaced by the new value obtained by evaluating the expression on R.H.S. (in this case, 2 * i + j − r).
The values of the other variables, viz., i and r, remain unchanged due to the assignment statement.
(ii) The next basic action is to read values of variables i, j, etc. from some secondary storage device, the identity of which is (implicitly) assumed here, by a statement of the form
read (i, j, …);
The values corresponding to the variables i, j, … in the read statement are, due to the read statement, stored in the corresponding locations loc(i), loc(j), …, in the main memory. The values are supplied either, by default, through the keyboard by the user, or from some secondary or external storage. In the latter case, the identity of the secondary or external storage is also specified in the read statement.
(iii) The last of the three basic actions is to deliver/write values of some variables, say i, j, etc., to the monitor or to an external secondary storage, by a statement of the form
write (i, j, …);
1.6.2
one after the other on successive lines, or even on the same line if there is enough space on a line, and
separated by some statement separator, say semi-colons, and
in the order of intended execution.
where A and B are instructions, which may even be composite instructions obtained by applying these structuring rules recursively to other instructions.
Further, in some situations the action B is null, i.e., if Q is false, then no action is stated. This new situation may be denoted by
If Q then do A
In this case, if Q is true, A is executed. If Q is not true, then the remaining part of the instruction is ignored, and the next instruction, if any, in the program is considered for execution.
Also, there are situations when Q is not just a Boolean variable, i.e., a variable which can assume either a true or a false value only. Rather, Q is some variable capable of assuming some finite number of values, say a, b, c, d, e, f. Further, suppose that, depending upon the value of Q, the corresponding intended action is as given by the following table:

Value    Action
a        A
b        A
c        B
d        NO ACTION
e        D
f        NO ACTION
For example, suppose the grade (G) to be awarded is determined by the marks (M) obtained, according to the following table:

grade (G)    marks (M)
A            80 ≤ M
B            60 ≤ M < 80
C            50 ≤ M < 60
D            40 ≤ M < 50
F            M < 40

Then the corresponding notation may be:
Case M of
80 . . 100 : A
60 . . 79  : B
50 . . 59  : C
40 . . 49  : D
0 . . 39   : F
where M is an integer variable.
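In a language without a Pascal-style case statement over ranges, the same multi-way selection can be sketched with chained conditionals; a Python version of the grade table above (the function name is ours):

```python
def grade(m: int) -> str:
    """Mark-to-grade mapping from the 'Case M of ...' notation above."""
    if 80 <= m <= 100:
        return "A"
    elif 60 <= m:      # 60 .. 79
        return "B"
    elif 50 <= m:      # 50 .. 59
        return "C"
    elif 40 <= m:      # 40 .. 49
        return "D"
    else:              # 0 .. 39
        return "F"

print(grade(72))  # B
```

Because the conditions are tested top to bottom, each branch only needs to state its lower bound; the earlier branches have already excluded the higher ranges.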
Example 1.6.2.2: We are required to find out the sum (SUM) of the first n natural numbers. Let a variable x be used to store an integer less than or equal to n; then the algorithm for the purpose may be of the form:

algorithm Sum_First_N_1
begin
  read (n); {assuming the value of n is an integer ≥ 1}
  x ← 1; SUM ← 1;
  while (x < n) do                     (c1)
  begin
    x ← x + 1;
    SUM ← SUM + x
  end; {of while loop}                 (d1)
  write ('The sum of the first', n, 'natural numbers is', SUM)
end. {of algorithm}
Explanation of the algorithm Sum_First_N_1:
Initially, an integer value of the variable n is read. Just to simplify the argument, we assume that the integer n ≥ 1. The next two statements assign the value 1 to each of the variables x and SUM. Next, we come to the execution of the while-loop. The while-loop extends from the statement (c1) to (d1), both inclusive. Whenever we enter the loop, the condition x < n is (always) tested. If the condition x < n is true, then the whole of the remaining portion up to (d1) (inclusive) is executed. However, if the condition is false, then all the remaining statements of the while-loop, i.e., all statements up to and including (d1), are skipped.
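The while-loop of Sum_First_N_1 carries over directly; a Python sketch (the function name is ours):

```python
def sum_first_n_1(n: int) -> int:
    """Sum of the first n natural numbers, mirroring Sum_First_N_1 (n >= 1)."""
    x = 1
    total = 1                 # SUM starts at 1, accounting for x = 1
    while x < n:              # the test corresponding to (c1)
        x += 1
        total += x            # SUM <- SUM + x
    return total

print(sum_first_n_1(3))  # 6
```

As in the pseudo-code, the test is made before each execution of the body, so for n = 1 the body is never entered and the initial value 1 is returned.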
x ← x − 1;
SUM ← SUM − x;
end
Here S is called the body of the do … while loop. It may be noted that here S is not surrounded by the brackets, viz., begin and end. It is because of the fact that do and while enclose S.
Again, consider the example given above, of finding out the sum of the first n natural numbers. The algorithm, using the do … while statement, may be of the form given below.
The above instruction is denoted in the programming language Pascal by
Repeat S until (not Q)
Example 1.6.2.3: Algorithm Sum_First_N_2
begin
  read (n);
  x ← 0;
  SUM ← 0;
  do                                   (c4)
    x ← x + 1;
    SUM ← SUM + x
  while (x < n)                        (d4)
end {of algorithm}.
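Python has no do … while construct, but the post-tested loop of Sum_First_N_2 can be sketched with an explicit break (the function name is ours):

```python
def sum_first_n_2(n: int) -> int:
    """Sum of the first n natural numbers with a do...while-style loop,
    as in Sum_First_N_2: the body executes before the test."""
    x = 0
    total = 0
    while True:          # emulate do...while: run the body first
        x += 1
        total += x
        if not (x < n):  # the 'while (x < n)' test, made after the body
            break
    return total

print(sum_first_n_2(3))  # 6
```

Note the defining property of the construct: the body is executed at least once, before the condition is ever tested.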
for v ← i to i + (n − 1) do
begin S end;
where v is some integer variable assuming the initial value i and increasing by 1 after each execution of S, and the final execution of S takes place after v assumes the value i + (n − 1). Then the execution of the for-loop is terminated. Again, begin S end is called the body of the for-loop. The variable v is called the index variable of the for-loop.
Example 1.6.2.4: Again, consider the problem of finding the sum of the first n natural numbers. An algorithm using for may be of the form:

algorithm Sum_First_N_3
begin
  read (n);
  SUM ← 0;
  for x ← 1 to n do                    (c3)
  begin
    SUM ← SUM + x
  end;                                 (d3)
  write ('The sum of the first', n, 'natural numbers is', SUM)
end. {of the algorithm}
In the algorithm Sum_First_N_3, there is only one statement in the body of the for-loop. Therefore, the bracket words begin and end need not be used in the for-loop. In this algorithm, also, it may be noted that only the variable SUM is initialized explicitly. The variable x is not initialized explicitly; it is implicitly initialised to 1 through the construct 'for x varying from 1 to n do'. And, after each execution of the body of the for-loop, x is implicitly incremented by 1.
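The implicit initialisation and incrementing of the index variable is exactly what a Python for-loop over a range provides; a sketch of Sum_First_N_3 (the function name is ours):

```python
def sum_first_n_3(n: int) -> int:
    """Sum of the first n natural numbers with a for-loop,
    as in Sum_First_N_3: x is initialised and incremented implicitly."""
    total = 0                    # only SUM needs explicit initialisation
    for x in range(1, n + 1):    # for x <- 1 to n do
        total += x               # SUM <- SUM + x
    return total

print(sum_first_n_3(3))  # 6
```

For n = 0 the range is empty, the body is never executed, and 0 is returned; contrast this with the do … while version, whose body always runs at least once.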
A noticeable feature of the constructs (structuring rules) viz., sequencing, selection
and iteration, is that each defines a control structure with a single entry point and
single exit point. It is this property that makes them simple but powerful building
blocks for more complex control structures, and ensures that the resultant structure
remains comprehensible to us.
Structured Programming, a programming style, allows only those structuring rules
which follow single entry, single exit rule.
Ex. 4) Write an algorithm that finds the real roots, if any, of a quadratic equation ax² + bx + c = 0 with a ≠ 0, and b, c as real numbers.

Ex. 5) Extend your algorithm of Ex. 4 above to find the roots of equations of the form ax² + bx + c = 0, in which a, b, c may be arbitrary real numbers, including 0.

Ex. 6)
(i) Explain how the algorithm Sum_First_N_2 finds the sum of the first 3 natural numbers.
(ii) Explain how the algorithm Sum_First_N_3 finds the sum of the first 3 natural numbers.

1.6.3 Procedure and Recursion

Procedure
Among the number of terms that are used instead of procedure are subprogram and even function. These terms may have shades of difference in their usage in different programming languages. However, the basic idea behind these terms is the same, and is explained next.
It may happen that a sequence of instructions occurs frequently, either repeatedly in different parts of the same algorithm, or in different algorithms. In such cases, writing the same sequence repeatedly is a wasteful activity. Procedure is a mechanism that provides a method of checking this wastage.
Under this mechanism, the sequence of instructions expected to be repeatedly used in one or more algorithms is written only once, outside and independent of the algorithms of which the sequence could otherwise have been a part. There may be many such sequences and hence there is a need for an identification of each such sequence. For this purpose, each sequence is prefaced by statements in the following format:
Procedure <name> (<parameter-list>) [: <type>]
  <declarations>
  <sequence of instructions expected to occur repeatedly>
end;                                   (1.6.3.1)
where <name>, <parameter-list> and the other expressions within angular brackets are place-holders for suitable values that are to be substituted in their places. For example, suppose finding the sum of squares of two variables is a frequently required activity; then we may write the code for this activity independent of the algorithm of which it would otherwise have formed a part. And then, in (1.6.3.1), <name> may be replaced by sum-square and <parameter-list> by the two-element sequence x, y. Variables like x and y, when used in the definition of a procedure, are called formal parameters or simply parameters. Further, whenever the code which now forms a part of a procedure, say sum-square, is required at any place in an algorithm, then in place of the intended code, a statement of the form
sum-square (a, b);                     (1.6.3.2)
is written, where the values of a and b are defined before the location of the statement (1.6.3.2) within the algorithm.
Further, the pair of brackets in [: <type>] indicates that : <type> is optional. If the procedure passes some value computed by it to the calling program, then : <type> is used, and <type> in (1.6.3.1) is replaced by the type of the value to be passed; in this case, integer.
In the case of procedures which pass a value to the calling program, another basic construct (in addition to assignment, read and write), viz., return (x), is used, where x is a variable used for the value to be passed by the procedure.
There are various mechanisms by which the values of a and b are respectively associated with, or transferred to, x and y. Variables like a and b, defined in the calling algorithm to pass data to the procedure (i.e., the called algorithm), which the procedure may use in solving the particular instance of the problem, are called actual parameters or arguments.
Also, there are different mechanisms by which a statement of the form sum-square (a, b) in an algorithm is associated with the code of the procedure for which the statement is used. However, all these mechanisms are known as calling the procedure. The main algorithm may be called the calling algorithm and the procedure may be called the called algorithm. A discussion of these mechanisms may be found in any book on concepts of programming languages.
In order to explain the ideas involved, let us consider the following simple examples of a procedure and a program that calls the procedure. In order to simplify the discussion, in the following, we assume that the inputs etc. are always of the required types only, and make other simplifying assumptions.
Example 1.6.3.1.1

Procedure sum-square (a, b : integer) : integer;
{denotes that the inputs a and b are integers and the output is also an integer}
S : integer;
{to store the required number}
begin
  S ← a² + b²;
  return (S)
end;
Program Diagonal-Length
{the program finds the lengths of the diagonals (hypotenuses) of right-angled triangles whose sides' lengths are given as integers. The program terminates when the length of any side is not a positive integer}
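The body of Program Diagonal-Length is not reproduced here, but it can be reconstructed from the walk-through that follows; a Python sketch (the function names are ours, and the procedure square-root is taken to be the built-in math.sqrt):

```python
import math

def sum_square(a: int, b: int) -> int:
    """The procedure sum-square above: returns a^2 + b^2."""
    return a * a + b * b

def diagonal_length() -> None:
    """Reconstruction of Program Diagonal-Length: reads pairs of side
    lengths until a non-positive one appears."""
    l1, l2 = int(input()), int(input())
    while l1 > 0 and l2 > 0:                # loop entered only for positive sides
        d = math.sqrt(sum_square(l1, l2))   # square-root assumed built in
        print(f"For sides of given lengths {l1} and {l2}, "
              f"the required diagonal length is {d:.1f}")
        l1, l2 = int(input()), int(input())

print(sum_square(4, 5))  # 41
```

For the inputs 4 and 5 traced below, sum_square returns 41 and the square root is approximately 6.4.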
First Step: In the program Diagonal-Length, through the statement read (L1, L2), we read L1 as 4 and L2 as 5. As L1 > 0 and L2 > 0, the program enters the while-loop. Next, the program, in order to compute the value of the diagonal, calls the procedure sum-square, associating with a the value of L1, i.e., 4, and with b the value of L2, i.e., 5. After these associations, the procedure sum-square takes control of the computations. The procedure computes S as 41 = 16 + 25. The procedure returns 41 to the program. At this point, the program again takes control of further execution. The program uses the value 41 in place of sum-square (L1, L2). The program calls the procedure square-root, which is supposed to be built into the computer system, and which temporarily takes control of execution. The procedure square-root returns the value 6.4 (approximately, as √41 ≈ 6.4) and also returns control of execution to the program Diagonal-Length, which in turn assigns this value to D and prints the statement:
For sides of given lengths 4 and 5, the required diagonal length is 6.4.
The program under the while-loop again expects values of L1 and L2 from the user. If the values supplied by the user are positive integers, the whole process is repeated after entering the while-loop. However, if either L1 ≤ 0 or L2 ≤ 0, then the while-loop is not entered and the program terminates.
We summarise the above discussion about procedures as follows: a procedure is written once, independently of the algorithms that use it; it is invoked through its name along with actual parameters; and, if required, it passes a computed value back to the calling program through a return statement.

Recursion

Next, we consider recursion. The factorial of a natural number may be defined recursively as follows:
factorial (1) = 1
factorial (n) = n * factorial (n − 1), for n > 1.      (1.6.3.2.1)
For those who are familiar with recursive definitions like the one given above for factorial, it is easy to understand how the value of (n!) is obtained from the above definition of the factorial of a natural number. However, for those who are not familiar with recursive definitions, let us compute factorial (4) using the above definition.
By definition,
factorial (4) = 4 * factorial (3).
Again, by the definition,
factorial (3) = 3 * factorial (2)
Similarly,
factorial (2) = 2 * factorial (1)
And, by definition,
factorial (1) = 1
Substituting back the values of factorial (1), factorial (2) etc., we get
factorial (4) = 4 · 3 · 2 · 1 = 24, as desired.
This definition suggests the following procedure/algorithm for computing the factorial of a natural number n:
In the following procedure factorial (n), let fact be the variable which is used to pass the value computed by the procedure factorial to a calling program. The variable fact is initially assigned the value 1, which is the value of factorial (1).

Procedure factorial (n)
fact : integer;
begin
  fact ← 1;
  if n equals 1 then return (fact)
  else
  begin
    fact ← n * factorial (n − 1);
    return (fact)
  end;
end;
In order to compute factorial (n − 1), the procedure factorial is called by itself, but this time with the (simpler) argument (n − 1). The repeated calls with simpler arguments continue until factorial is called with argument 1. Successive multiplications of the partial results with 2, 3, …, up to n, finally deliver the desired result.
Though it has already been mentioned, in view of the significance of the matter, it is repeated below. Each procedure call defines a variable fact; however, the various variables fact defined by the different calls are different from each other. In our discussions, we may use the names fact1, fact2, fact3 etc. However, if there is no possibility of confusion, then we may use the name fact only, throughout.
Let us consider how the procedure executes for n = 4 compute the value of
factorial (4).
Initially, 1 is assigned to the variable fact. Next the procedure checks whether the
argument n equals 1. This is not true (as n is assumed to be 4). Therefore, the next
line with n = 4 is executed i.e.,
fact is to be assigned the value of 4* factorial (3).
Now n, the parameter in the heading of procedure factorial (n) is replaced by 3. Again
as n 1, therefore the next line with n = 3 is executed i.e.,
fact = 3 * factorial (2)
On similar grounds, we get fact as 2 * factorial (1), and at this stage n = 1. The
value 1 of fact is returned by the last call of the procedure factorial. And here lies the
difficulty in understanding how the desired value 24 is returned. After this stage, the
recursive procedure under consideration executes as follows. When the factorial
procedure is called with n = 1, the value 1 is assigned to fact and this value is
returned. However, this value of factorial (1) is passed to the statement fact ← 2 *
factorial (1), which on execution assigns the value 2 to the variable fact. This value is
passed to the statement fact ← 3 * factorial (2), which on execution gives the value 6
to fact. And finally this value of fact is passed to the statement fact ← 4 * factorial (3),
which in turn gives the value 24 to fact. Finally, this value 24 is returned as the value
of factorial (4).
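The call-and-return behaviour traced above can be mirrored in Python; the function below is a direct transcription of the pseudocode procedure, with each invocation getting its own local variable fact:

```python
def factorial(n):
    # Each call defines its own local variable fact,
    # just as each call of the procedure defines its own fact.
    fact = 1
    if n == 1:
        return fact
    # Recursive call with the simpler argument (n - 1).
    fact = n * factorial(n - 1)
    return fact

print(factorial(4))  # the traced instance: 4 * 3 * 2 * 1 = 24
```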
Coming back from the definition and procedure for computing factorial (n), let us
return to the general discussion.
(i)
There must be an in-built mechanism in the computer system that supports the
calling of a procedure by itself; e.g., there may be in-built stack operations on a
set of stack registers.
(ii)
(iii) The arguments in successive calls should be simpler in the sense that each
succeeding argument takes us towards the conditions mentioned in (ii).
In view of the significance of the concept of procedure, and especially of the concept of
recursive procedure, in solving some complex problems, we discuss another recursive
algorithm for the problem of finding the sum of the first n natural numbers, discussed
earlier. For the discussion, we assume n is a non-negative integer.
procedure SUM (n : integer) : integer
s : integer;
begin
if n = 0 then return (0)
else begin
s ← n + SUM (n − 1);
return (s)
end;
end;
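A Python rendering of the procedure (a direct transcription of the pseudocode, shown only as an illustration) may look like:

```python
def SUM(n):
    # SUM(0) = 0; SUM(n) = n + SUM(n - 1) for n > 0,
    # mirroring the recursive procedure above.
    if n == 0:
        return 0
    s = n + SUM(n - 1)
    return s

print(SUM(5))  # 5 + 4 + 3 + 2 + 1 + 0 = 15
```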
Ex.7)
Explain how SUM (5) computes the sum of the first five natural numbers.
(i) Sorting problems
(ii) Searching problems
(iii) Linear programming problems
(iv) Number-theory problems
(v) String processing problems
(vi) Graph problems
(vii) Geometric problems
(viii) Numerical problems.
Study of these specific types of problems may provide useful help and guidance in
solving new problems, possibly of other problem types.
Next, we enumerate and briefly discuss the sequence of steps, which generally,
one goes through for designing algorithms for solving (algorithmic) problems,
and analyzing these algorithms.
1.7.2 Analyzing the Problem
∫ (7x⁴ − sin⁴x cos⁴x) dx
may be evaluated by separately evaluating
(i) ∫ 7x⁴ dx
and
(ii) ∫ sin⁴x cos⁴x dx
i.e., irrecoverable by the (very) nature of the problem. Depending upon the
nature of the problem, the solution strategy has to be decided or modified. For
example,
a)
b)
by sliding, any one of the digits from a cell adjacent to the blank cell, to the blank cell.
Then a wrong step cannot be ignored but has to be recovered. By recoverable, we
mean that we are allowed to move back to the earlier state from which we came to the
current state, if the current state seems less desirable than the earlier state. The
8-puzzle problem has recoverable steps, or, we may say, the problem is a recoverable
problem.
c)
However, if we are playing chess, then a wrong step may not even be
recoverable. In other words, we may not be in a position, because of the
adversary's move, to move back to the earlier state. Such a problem is called an
irrecoverable-step problem.
For example, for ignorable-step problems, simple control structures for sequencing
and iteration may be sufficient. However, if the problem additionally has recoverable-step
possibilities, then facilities like backtracking, as are available in the programming
language PROLOG, may have to be used. Further, if the problem additionally has
irrecoverable-step possibilities, then planning tools should be available in the
computer system, so that the entire sequence of steps may be analyzed in advance to find
out where the sequence may lead, before the first step is actually taken.
There are many other characteristics of a problem viz.,
which can be known through analyzing the problem under consideration, and
the knowledge of which, in turn, may help us in determining or guessing a correct
sequence of actions for solving the problem under consideration
1.7.3
We have already discussed the importance of this step in Section 1.5, where we
noticed how, because of a change in computational capabilities, a totally different
algorithm has to be designed to solve the same problem (e.g., that of multiplying two
natural numbers).
Most of the computer systems used for educational purposes are PCs based on
Von Neumann architecture. Algorithms that are designed to be executed on such
machines are called sequential algorithms.
However, new, more powerful machines based on parallel/distributed architectures
are also increasingly becoming available. Algorithms that exploit such additional
facilities are called parallel/distributed algorithms; such algorithms may not
have much resemblance to the corresponding sequential algorithms for the same
problem.
However, we restrict ourselves to sequential algorithms only.
1.7.4
For some problems, like finding the square root of a given natural number n, it may
not be possible to find the exact value for all n (e.g., n = 3). We have to determine in
advance what approximation is acceptable; e.g., in this case, the acceptable error may
be, say, less than 0.01.
Also, there are problems, for which finding the exact solutions may be possible, but
the cost (or complexity, to be defined later) may be too much.
In the case of such problems, unless it is absolutely essential, it is better to use an
alternative algorithm which gives a reasonably good approximate solution, even though
it may not be exact. For example, consider the Travelling Salesperson Problem: a
salesperson has a list of, say, n cities, each of which he must visit exactly once. There
are direct roads between each pair of the given cities. Find the shortest possible route
that takes the salesperson on a round trip, starting and finishing in any one of the n
cities and visiting each other city exactly once.
In order to find the shortest path, one has to find the cost of covering each of the
n! different paths covering the n given cities. Even for a problem of visiting 10
cities, n!, the number of possible distinct paths, is more than 3 million. In a
country like India, a travelling salesperson may be expected to visit even more than 10
cities. To find the exact solution in such cases, though possible, is very time
consuming. In such cases, a reasonably good approximate solution may be more
desirable.
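The factorial growth can be made concrete with a small brute-force sketch (a hypothetical illustration, not a recommended algorithm): it examines every ordering of the cities, so the work grows factorially with n. The function name and the sample distance matrix below are invented for the example.

```python
from itertools import permutations

def shortest_round_trip(dist):
    """Brute force: try every ordering of cities 1..n-1, starting and
    ending at city 0; dist[i][j] is the road length from city i to j."""
    n = len(dist)
    best_cost, best_route = float("inf"), None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(dist[route[i]][route[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_route = cost, route
    return best_cost, best_route

# 4 cities: only 3! = 6 orderings are examined here, but for 10 cities
# there would already be 9! = 362880, and the count grows factorially.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_round_trip(d))  # → (18, (0, 1, 3, 2, 0))
```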
1.7.5
1.7.6
We have already enumerated various design techniques and also various problem
domains which have been rigorously pursued for computational solutions. For each
problem domain, a particular set of techniques has been found more useful, though
other techniques may also be gainfully employed. A major part of the material of the
course deals with the study of various techniques and their suitability for various
types of problem domains. Such a study can be a useful guide for solving new
problems or new problem types.
1.7.7
(ii)
1.7.8
1.7.9
Analyzing an Algorithm
Subsection 1.7.2 was concerned with analyzing the problem in order to find
special features of the problem, which may be useful in designing an algorithm that
solves the problem. Here, we assume that one or more algorithms are already designed
to solve the problem. The purpose of analysis of algorithms is to determine the
requirement of computer resources for each algorithm. And then, if there is more
than one algorithm that solves the problem, the analysis is also concerned with
choosing the better one on the basis of a comparison of the resource requirements of
the available algorithms. The lesser the requirement of resources, the better the
algorithm. Generally, the resources that are taken into consideration for analysing
algorithms include
(i)
(ii)
(iii) Sometimes, the man-hours or man-months taken by the team developing the
program are also taken as a resource for the purpose.
The concerned issues will be discussed from place to place throughout the course
material.
1.8 SUMMARY
1.
In this unit the following concepts have been formally or informally defined
and discussed:
Problem, Solution of a Problem, Algorithm, Program, Process (all in
Section 1.1); Instance of a Problem (Section 1.2)
2.
are explained
3.
4.
(i) Finiteness
(ii) Definiteness
(iii) Inputs
(iv) Outputs
(v) Effectiveness
5.
In order to emphasize the significant role that available tools play in the
design of an algorithm, the problem of multiplication of two natural numbers
is solved in three different ways, each using a different set of available tools.
(Section 1.5)
6.
7.
8.
In Section 1.7, the following issues, which play an important role in designing,
developing and choosing an algorithm for solving a given problem, are
discussed:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
(viii)
(ix)
1.9 SOLUTIONS/ANSWERS
Ex.1)
Example Problem 1: Find the roots of the Boolean equation
ax² + bx + c = 0,
where x, a, b, c ∈ {0, 1} and a ≠ 0, i.e., of one of the equations:
x² + x + 1 = 0 (for a = 1 = b = c)
x² + x = 0 (for a = 1 = b, c = 0)
x² + 1 = 0 (for a = 1 = c, b = 0)
x² = 0 (for a = 1, b = 0 = c)
S ← S + 1/n
n ← n + 1
end;
end.
(ii) A method which is not definite
Method Not-Definite
Read (x)
{Let an urn contain four balls of different colours, viz., black, white, blue and
red. Before taking the next step, take a ball out of the urn without looking at the
ball.}
Begin
else
x ← x + 4;
end.
Then we can see that for the same value of x, the method Not-Definite may return
different values in its different executions.
(iii) If any of the following is a part of a method then the method is not
effective but finite.
(a)
(b)
If the statement:
If the word Heterological is heterological**
is true
then x ← 3
else x ← 4
Ex.3)
It is not possible to tell whether the speaker is actually telling lies or not. Because, if the
speaker is telling lies, then the statement "I am telling lies" should be false; hence the speaker
is not telling lies. Similarly, if the speaker is not telling lies, then the statement "I am telling
lies" should be true; therefore, the speaker is telling lies. Hence, it is not possible to tell
whether the statement is true or not. Thus, the part of the method, and hence the method itself,
is not effective. But this part requires only a finite amount of time to come to the conclusion that
the method is not effective.
**
A word is heterological if it is an adjective and the property denoted by the word does not
apply to the word itself. For example, the word "monosyllabic" is not monosyllabic. Similarly,
"long" is not a long word. "German" is not a German (language) word. "Double" is not a double
word. Thus, each of the words monosyllabic, long, German and double is heterological.
But, if we consider the word "heterological", which is an adjective, in respect of the matter of
determining whether it is heterological or not, then it is not possible to make either of the two
statements:
(i) "heterological" is heterological
(ii) "heterological" is not heterological.
The reason being that either of these assertions, along with the definition of heterological, leads
to the assertion of the other. However, both of (i) and (ii) above cannot be asserted
simultaneously. Thus it is not possible to tell whether the word heterological is heterological
or not.
The required algorithm which uses variables First, Second, Remainder and
Partial-Result, may be described as follows:
Step 1: Initialise the variables First, Second and Partial-Result respectively with
m (the first given number), n (the second given number) and 0.
Step 2: If First or Second is zero, then return Partial-Result as the final result
and stop. Else
Step 3:
{For computing Partial-Result1, replace First by First1, Second by Second1, and
Partial-Result by Partial-Result1 in Step 2, and repeat Step 2}
**
First = 3 · First1 + Remainder1, with 0 ≤ Remainder1 < 3,
Second1 = 3 · Second,
Partial-Result1 = First1 · Second1.
Then
Partial-Result = First · Second = (3 · First1 + Remainder1) · Second
= (3 · First1) · Second + Remainder1 · Second
= First1 · (3 · Second) + Remainder1 · Second
= First1 · Second1 + Remainder1 · Second
= Partial-Result1 + Remainder1 · Second,
where 0 ≤ Remainder1 < 3.
Thus, at every stage, we are multiplying and dividing, if required, by at most 3.
Initially: First = 52, Second = 19, Partial-Result = 0.
Step 2: As the value of First ≠ 0, hence continue.
Step 3: Remainder on division of First by 3 is 1; Partial-Result = 1 × 19 + 0 = 19; First = 17, Second = 57.
Step 2: Value of First ≠ 0, continue.
Step 3: Remainder = 2; Partial-Result = 2 × 57 + 19 = 133; First = 5, Second = 171.
Step 2: Value of First ≠ 0, continue.
Step 3: Remainder = 2; Partial-Result = 2 × 171 + 133 = 475; First = 1, Second = 513.
Step 2: Value of First ≠ 0, continue.
Step 3: Remainder = 1; Partial-Result = 1 × 513 + 475 = 988; First = 0.
Step 2: As the value of First is 0, the value 988 of Partial-Result is returned as the result,
and stop.
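The trace above can be checked with a short Python version of the method (a sketch following the steps as reconstructed here; the function name is chosen for the example):

```python
def multiply_base3(first, second):
    # Multiply two natural numbers using only division by 3,
    # multiplication by 3, and addition, as in the steps above.
    partial_result = 0
    while first != 0:
        remainder = first % 3            # Remainder on division by 3: 0, 1 or 2
        partial_result += remainder * second
        first //= 3                      # First1 = floor(First / 3)
        second *= 3                      # Second1 = 3 * Second
    return partial_result

print(multiply_base3(52, 19))  # the traced instance: 988
```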
Ex. 4)
Algorithm Real-Roots-Quadratic
{of algorithm}
read (a);
while (a = 0) do {so that finally a ≠ 0}
read (a);
read (b, c);
temp ← b*b − 4*a*c
If temp < 0 then
begin
write ('The quadratic equation has no real roots.')
STOP
end;
else
if temp = 0
then
begin
first ← −b/(2*a)
write ('The two roots of the quadratic equation are
equal. The root is', first)
end;
{as first is outside the quotes, therefore, the value of first will
be given as output, and not the word first}
else
{i.e., when temp > 0, i.e., when the two roots are distinct}
begin
temp ← sq-root (temp);
first ← (−b + temp)/(2*a);
second ← (−b − temp)/(2*a);
end;
{of case when temp>0}
end; {of algorithm}
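The algorithm may be rendered in Python as follows (a sketch: input is taken as function arguments rather than read statements, and a ≠ 0 is assumed):

```python
import math

def real_roots_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 (a != 0),
    mirroring the three cases on the discriminant."""
    temp = b * b - 4 * a * c
    if temp < 0:
        return None                    # no real roots
    if temp == 0:
        return (-b / (2 * a),)         # the two roots are equal
    temp = math.sqrt(temp)
    first = (-b + temp) / (2 * a)
    second = (-b - temp) / (2 * a)
    return (first, second)

print(real_roots_quadratic(1, -5, 6))  # x^2 - 5x + 6 = 0 → (3.0, 2.0)
```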
Ex. 5)
Algorithm Real-Roots-General-Quadratic
Initially, the value of variable n is read as 3. Each of the variables x and SUM is
assigned the value 0. Then, without any condition, the algorithm enters the do…while
loop. The value of x is incremented to 1, and the execution of the statement
SUM ← SUM + x makes SUM equal to 1.
Next the condition x < n is tested, which is true, because x = 1 and n = 3. As
the condition is true, the body of the do…while loop is entered again and executed a
second time. By this execution of the loop, x becomes 2 and SUM becomes
1 + 2 = 3.
As x = 2 < 3 = n, the body of the do…while loop is again entered and executed a
third time. In this round/iteration, x becomes 3 and SUM becomes 3 + 3 = 6.
Again the condition x < n is tested. But now x = 3 = n; therefore, x < n is false.
Hence the body of the do…while loop is not executed any more, i.e., the loop is
terminated. Next, the write statement gives the following output:
is executed to give the value 1 to SUM. After executing the body of the
for-loop once, the value of the index variable x is automatically incremented by 1 to
become 2.
After each increment in the index variable, the value of the index variable is
compared with the final value, which in this case is n, equal to 3. If the index
variable is less than or equal to n (in this case), then the body of the for-loop is executed
once again.
As x = 2 ≤ 3 = n, hence SUM ← SUM + x is executed once more, making SUM
equal to 1 + 2 = 3. Again the index variable x is incremented by 1 to become 3.
As 3 ≤ n (= 3), therefore once again the body of the for-loop, containing the only
statement SUM ← SUM + x, is executed, making 6 the value of SUM. Next x is
automatically incremented by 1 to make x equal to 4. But as 4 is not less than or equal to n (= 3),
the for-loop is terminated. Next, the write statement gives the output:
The sum of the first 3 natural numbers is 6.
The last statement, consisting of end followed by a dot, indicates that the
algorithm is to be terminated. Hence, the algorithm terminates.
Ex.7)
At this stage n = 0, and accordingly, the algorithm returns the value 0. Substituting
the value 0 of SUM (0), we get
S1 = 1 + 0 = 1, which is returned by SUM (1).
Substituting this value, we get S2 = 3. Continuing like this, we get S3 = 6, S4 = 10
and S5 = 15.
Page Nos.
2.0 Introduction 41
2.1 Objectives 41
2.2 Some Useful Mathematical Functions & Notations 42
    2.2.1
    2.2.2
2.3 Mathematical Expectation 49
2.4 Principle of Mathematical Induction 50
2.5 Concept of Efficiency of an Algorithm 52
2.6 Well Known Asymptotic Functions & Notations 56
    2.6.1
    2.6.2
    2.6.3
    2.6.4
    2.6.5
    2.6.6
2.7 Summary 66
2.8 Solutions/Answers 67
2.9 Further Readings 70
2.0 INTRODUCTION
We have already mentioned that there may be more than one algorithm that solves a
given problem. In Section 3.3, we shall discuss eight algorithms to sort a given list of
numbers, each algorithm having its own merits and demerits. Analysis of algorithms,
the basics of which we study in Unit 3, is an essential tool for making a well-informed
decision in order to choose the most suitable algorithm, out of the available ones, if
any, for the problem or application under consideration.
A number of mathematical and statistical tools, techniques and notations form an
essential part of the baggage for the analysis of algorithms. We discuss some of these
tools and techniques and introduce some notations in Section 2.2. However, for
detailed discussion of some of these topics, one should refer to the course material of
MCS-013.
Also, in this unit, we will study a number of well-known approximation functions.
These approximation functions, which calculate approximate values of the quantities
under consideration, prove quite useful in many situations where some of the
involved quantities are calculated just for comparison with each other, and the
correct result of the comparison can be obtained even with approximate
values of the involved quantities. In such situations, the advantage is that the
approximate values may be calculated much more efficiently than the actual
values.
2.1 OBJECTIVES
After going through this Unit, you should be able to:
define and use a number of concepts like function, 1-1 function, onto function,
monotonic function, floor and ceiling functions, mod function, exponentiation,
logarithm functions, etc.;
define and use Mathematical Expectation;
use the Principle of Mathematical Induction in establishing the truth of infinitely
many statements; and
(i) O:
(ii) Ω:
(iii) Θ:
(iv) o:
(v) ω:
(i) Summation:
The expression
a₁ + a₂ + … + aᵢ + … + aₙ
may be denoted in shorthand as
  n
  ∑ aᵢ
 i=1
(ii) Product:
The expression
a₁ a₂ … aᵢ … aₙ
may be denoted in shorthand as
  n
  ∏ aᵢ
 i=1
Definition 2.2.1.2:
Function:
For two given sets A and B (which need not be distinct, i.e., A may be the same as B),
a rule f which associates with each element of A a unique element of B is called a
function from A to B. If f is a function from a set A to a set B, then we denote this fact
by f: A → B. Also, for x ∈ A, f(x) is called the image of x in B. Then, A is called the
domain of f and B is called the codomain of f.
Example 2.2.1.3:
Let f: I → I be defined such that
f(x) = x² for all x ∈ I.
Then
f maps 4 to 16,
f maps 0 to 0,
f maps −5 to 25.
Remark 2.2.1.4:
We may note the following:
(i)
By putting the restriction on a function f that, for each y ∈ Y, there must be at least
one element x of X s.t. f(x) = y, we get special functions, called onto or surjective
functions, which shall be defined soon.
Next, we discuss some important functions.
Definition 2.2.1.5:
1-1 or Injective Function:
A function f: A → B is said to be 1-1, or injective, if
for x, y ∈ A,
if f(x) = f(y)
then x = y.
We have already seen that the function defined in Example 2.2.1.3 is not 1-1.
However, by changing the domain, though defined by the same rule, f becomes a
1-1 function.
Example 2.2.1.6:
In this particular case, if we change the domain from I to N = {1, 2, 3, …}, then we can
easily check that the function*

* Some authors write 1-to-1 instead of 1-1. However, other authors call a function 1-to-1 if
f is both 1-1 and onto (to be defined in a short while).
f: N → I,
defined as
f(x) = x², for all x ∈ N,
is 1-1. Because, in this case, for each x ∈ N, its negative −x ∉ N; hence f(x) = f(y)
implies x = y. For example, if f(x) = 4, then there is only one value of x, viz., x = 2, s.t.
f(2) = 4.
Definition 2.2.1.7:
Onto/Surjective function: A function f: X → Y is said to be onto, or surjective, if to
every element y of Y, the codomain of f, there is an element x ∈ X s.t. f(x) = y.
We have already seen that the function defined in Example 2.2.1.3 is not onto.
However, in this case, either by changing the codomain Y or by changing the rule (or
both), we can make f onto.
Example 2.2.1.8: (Changing the codomain)
Let X = I = {…, −3, −2, −1, 0, 1, 2, 3, …}, but we change Y to
Y = {0, 1, 4, 9, …} = {y : y = n² for n ∈ X};
then it can be seen that
f: X → Y defined by
f(x) = x² for all x ∈ X is onto.
Example 2.2.1.9: (Changing the rule)
Here, we keep X = Y = I = {…, −3, −2, −1, 0, 1, 2, 3, …}, but f: X → Y is defined as
f(x) = x + 3 for x ∈ X.
Then, for any y ∈ Y,
f(x) = y
iff x + 3 = y
iff x = y − 3,
and y − 3 ∈ X; hence every element of Y has a pre-image, so f is onto.
Monotonic Functions: Consider a function
f: R → R,
where R is the set of real numbers.*
A function f is said to be monotonically increasing if, for x, y ∈ R and x ≤ y, we have
f(x) ≤ f(y).
In other words, as x increases, the value of its image f(x) also increases for a
monotonically increasing function.
Further, f is said to be strictly monotonically increasing, if x < y then f(x) < f(y).

* Monotonic functions f: X → Y may be defined even when each of X and Y, instead of being R,
is any ordered set; but such a general definition is not required for our purpose.
Example 2.2.1.11:
Let f: R → R be defined as f(x) = x + 3, for x ∈ R. Then, for x ≤ y, we have
f(x) = x + 3 ≤ y + 3 = f(y); hence f is monotonically increasing.
Similarly, f is said to be monotonically decreasing if, for x, y ∈ R and x ≤ y, we have
f(x) ≥ f(y).
Further, f is said to be strictly monotonically decreasing, if x < y then f(x) > f(y).
Example 2.2.1.12:
Let f: R → R be defined as
f(x) = −x + 3.
If x1 ≤ x2, then −x1 ≥ −x2, implying −x1 + 3 ≥ −x2 + 3,
which further implies f(x1) ≥ f(x2).
Hence, f is monotonically decreasing.
Next, we define Floor and Ceiling functions which map every real number to an
integer.
Definition 2.2.1.13:
Floor Function: maps each real number x to the integer which is the greatest of all
integers less than or equal to x. The image of x is then denoted by ⌊x⌋.
Instead of ⌊x⌋, the notation [x] is also used.
For example: ⌊2.5⌋ = 2, ⌊−2.5⌋ = −3, ⌊6⌋ = 6.
Definition 2.2.1.14:
Ceiling Function: maps each real number x to the integer which is the least of all
integers greater than or equal to x. The image of x is then denoted by ⌈x⌉.
For example: ⌈2.5⌉ = 3, ⌈−2.5⌉ = −2, ⌈6⌉ = 6.
For any real number x, we have
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.
Example 2.2.1.15:
Each of the floor function and the ceiling function is a monotonically increasing function,
but not a strictly monotonically increasing function. Because, for real numbers x and y,
if x ≤ y then y = x + k for some k ≥ 0.
Therefore,
⌊y⌋ = ⌊x + k⌋ = integral part of (x + k) ≥ integral part of x = ⌊x⌋.
Similarly,
⌈y⌉ = ⌈x + k⌉ = least integer greater than or equal to x + k ≥ least integer
greater than or equal to x = ⌈x⌉.
But each of the floor and ceiling functions is not strictly increasing, because
⌊2.5⌋ = ⌊2.7⌋ = ⌊2.9⌋ = 2
and
⌈2.5⌉ = ⌈2.7⌉ = ⌈2.9⌉ = 3.
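Both functions are available directly in Python's math module, which can be used to check the examples:

```python
import math

# Floor: greatest integer <= x; Ceiling: least integer >= x.
print(math.floor(2.5), math.floor(-2.5), math.floor(6))   # 2 -3 6
print(math.ceil(2.5), math.ceil(-2.5), math.ceil(6))      # 3 -2 6

# Monotonically increasing, but not strictly:
assert math.floor(2.5) == math.floor(2.7) == math.floor(2.9) == 2
assert math.ceil(2.5) == math.ceil(2.7) == math.ceil(2.9) == 3
```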
(ii)
If it is the 5th day (i.e., Friday) of a week, then after 4 days, it will be the 2nd day
(i.e., Tuesday), and not the 9th day, of course of another, week (whenever the number of
the day exceeds 7, we subtract n = 7 from the number; we are taking here Sunday as the 7th day,
instead of the 0th day).
(iii) If it is the 6th month (i.e., June) of a year, then after 8 months, it will be the 2nd month
(i.e., February) of, of course, another year (whenever the number of the month exceeds
12, we subtract n = 12).
In general, for integers b and n with n > 0, b mod n is the unique integer r satisfying
b = ⌊b/n⌋ · n + r and 0 ≤ r < n.
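The day-of-week example can be checked with Python's % operator (a sketch; the numbering convention, days 1 to 7 with Sunday as the 7th day, is the one assumed in the example, so we shift to a 0-based numbering before taking mod 7):

```python
def day_after(day, k):
    # day in 1..7 (Sunday taken as the 7th day); returns the day
    # number after k days, using mod 7 on a 0-based numbering.
    return (day - 1 + k) % 7 + 1

print(day_after(5, 4))  # 5th day (Friday) + 4 days -> 2nd day (Tuesday)
```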
Definition 2.2.2.2:
46
Definition 2.2.2.3:
Exponentiation Function Exp: is a function of two variables x and n, where x is any
non-negative real number and n is an integer (though n can be taken as non-integer
also, we restrict here to integers only).
Exp (x, n), denoted by xⁿ, is defined recursively as follows:
For n = 0,
Exp (x, 0) = x⁰ = 1.
For n > 0,
Exp (x, n) = x · Exp (x, n − 1),
i.e.,
xⁿ = x · xⁿ⁻¹.
For n < 0, let n = −m for m > 0; then
xⁿ = x⁻ᵐ = 1/xᵐ.
For example, if x = 1.5, n = −3, then
Exp (1.5, −3) = (1.5)⁻³ = 1/(1.5)³ = 1/3.375 ≈ 0.296.
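The recursive definition translates directly into Python (a sketch handling the three cases n = 0, n > 0 and n < 0):

```python
def exp(x, n):
    # Exp(x, 0) = 1; Exp(x, n) = x * Exp(x, n - 1) for n > 0;
    # Exp(x, -m) = 1 / Exp(x, m) for m > 0.
    if n == 0:
        return 1
    if n > 0:
        return x * exp(x, n - 1)
    return 1 / exp(x, -n)

print(exp(1.5, 3))   # (1.5)^3 = 3.375
print(exp(2, -2))    # 1 / 2^2 = 0.25
```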
For two integers m and n and a real number b, the following identities hold:
(bᵐ)ⁿ = bᵐⁿ
(bᵐ)ⁿ = (bⁿ)ᵐ
bᵐ · bⁿ = bᵐ⁺ⁿ
Definition 2.2.2.4:
Polynomial: A polynomial in n of degree k, where k is a non-negative integer, over
R, the set of real numbers, denoted by P(n), is of the form
Pk(n) = aₖ nᵏ + aₖ₋₁ nᵏ⁻¹ + … + a₁ n + a₀,
where aₖ ≠ 0 and aᵢ ∈ R, i = 0, 1, …, k.
Using the summation notation,
  k
Pk(n) = ∑ aᵢ nⁱ,  aₖ ≠ 0, aᵢ ∈ R.
 i=0
Notation: Though 0⁰ is not defined, yet, unless otherwise mentioned, we will
take 0⁰ = 1. The following is a very useful result relating the exponentials and
polynomials.
Result 2.2.2.5: For any constants b and c with b > 1,
      nᶜ
Lim  ───  = 0.
n→∞   bⁿ
The result, in non-mathematical terms, states that for any given constants b and c, but
with b > 1, the terms in the sequence
3ᶜ/b³, 4ᶜ/b⁴, 5ᶜ/b⁵, …, kᶜ/bᵏ, …
gradually decrease and approach zero. Which further means that for constants b and c, and integer
variable n, the exponential term bⁿ, for b > 1, increases at a much faster rate than
the polynomial term nᶜ.
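The behaviour can be observed numerically; the constants b = 2 and c = 3 below are chosen only for illustration:

```python
def ratio(k, b=2, c=3):
    # k-th term of the sequence k^c / b^k
    return k ** c / b ** k

terms = [ratio(k) for k in range(10, 60, 10)]
print(terms)  # the terms shrink towards 0

# Eventually decreasing and approaching zero:
assert all(terms[i] > terms[i + 1] for i in range(len(terms) - 1))
assert terms[-1] < 1e-9
```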
Definition 2.2.2.6:
The number e is defined by
e = 1 + 1/1! + 1/2! + 1/3! + 1/4! + 1/5! + …,
and is taken as the base of the natural logarithm function; then for all real numbers x,
we define the exponential function
                                          ∞
eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + … = ∑ xⁱ/i!.
                                         i=0
For all real x,
eˣ ≥ 1 + x.
Further, if |x| ≤ 1, then
1 + x ≤ eˣ ≤ 1 + x + x².
Result 2.2.2.7: Lim (1 + x/n)ⁿ = eˣ.
               n→∞
Definition 2.2.2.8:
Logarithm: The concept of logarithm is defined indirectly through the definition of
the exponential defined earlier. If a > 0, b > 0 and c > 0 are three real numbers such that
c = aᵇ,
then b = log_a c (read as log of c to the base a).
Then a is called the base of the logarithm.
Generally, two bases, viz., 2 and e, are very common in scientific and computing
fields, and hence the following special notations for these bases are used:
(i) lg n denotes log₂ n (base 2);
(ii) ln n denotes logₑ n (base e);
where the letter l in ln denotes logarithm and the letter n in ln denotes natural.
The following important properties of logarithms can be derived from the properties
of exponents. However, we just state the properties without proof.
Result 2.2.2.9:
For n, a natural number, and real numbers a, b and c, all greater than 0, the following
identities are true:
(i) log_a (bc) = log_a b + log_a c
(ii) log_a (bⁿ) = n log_a b
(iii) log_b a = (log_c a)/(log_c b)
(iv) log_a (1/b) = −log_a b
(v) log_a b = 1/(log_b a)
(vi) a^(log_b c) = c^(log_b a)
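These identities can be spot-checked numerically with math.log, which takes the base as its second argument (sample values chosen arbitrarily):

```python
import math

a, b, c, n = 2.0, 8.0, 5.0, 3

# (i) log_a(bc) = log_a b + log_a c
assert math.isclose(math.log(b * c, a), math.log(b, a) + math.log(c, a))
# (ii) log_a(b^n) = n log_a b
assert math.isclose(math.log(b ** n, a), n * math.log(b, a))
# (v) log_a b = 1 / log_b a
assert math.isclose(math.log(b, a), 1 / math.log(a, b))
# (vi) a^(log_b c) = c^(log_b a)
assert math.isclose(a ** math.log(c, b), c ** math.log(a, b))

print("all sampled identities hold")
```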
Range of marks     Percentage of students who scored in the range
0% to 20%          08
20% to 40%         20
40% to 60%         57
60% to 80%         09
80% to 100%        06
If a student is picked up randomly from the set of students under consideration, what
is the % of marks expected of such a student? After scanning the table given above,
we intuitively expect the student to score around the 40% to 60% class, because more
than half of the students have scored marks in and around this class.
Assuming that marks within a class are uniformly scored by the students in the class,
the above table may be approximated by the following more concise table:
% marks    Percentage of students scoring the % marks
10         08
30         20
50         57
70         09
90         06
The expected % of marks of a student picked up at random is then
10 × 8/100 + 30 × 20/100 + 50 × 57/100 + 70 × 9/100 + 90 × 6/100 = 47.
The finally calculated expected value of 47 is roughly in line with our intuition that
the expected marks should be around 50.
We generalize and formalize these ideas in the form of the following definition.
Mathematical Expectation
For a given set S of items, let each item have associated with it one of the n values, say,
v1, v2, …, vn. Let the probability of the occurrence of an item with value vi be pi. If an
item is picked up at random, then its expected value E(v) is given by
        n
E(v) =  ∑ pᵢ vᵢ = p₁·v₁ + p₂·v₂ + … + pₙ·vₙ.
       i=1
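The computation of the expected marks follows directly from the definition (values taken from the concise table above; integer percentages are used so the arithmetic is exact):

```python
values = [10, 30, 50, 70, 90]   # midpoint marks of each class (the v_i)
percent = [8, 20, 57, 9, 6]     # percentage of students in each class (100 * p_i)

expected = sum(v * p for v, p in zip(values, percent)) / 100
print(expected)  # 47.0
```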
1.
2.
3.
Let us consider the following sequence, in which the nth term S(n) is the sum of the
first n powers of 2, starting with 2⁰, e.g.,
S(1) = 2⁰ = 2¹ − 1
S(2) = 2⁰ + 2¹ = 2² − 1
S(3) = 2⁰ + 2¹ + 2² = 2³ − 1.
We (intuitively) feel that
S(n) = 2⁰ + 2¹ + … + 2ⁿ⁻¹ should be 2ⁿ − 1 for all n ≥ 1.
We may establish the correctness of the intuition, i.e., the correctness of all the infinitely
many statements
S(n) = 2ⁿ − 1 for all n ≥ 1,
by the Principle of Mathematical Induction, as follows.
Base Step: S(1) = 2⁰ = 1 = 2¹ − 1 is correct.
Induction Hypothesis: Assume, for some k > base-value (= 1, in this case), that
S(k) = 2ᵏ − 1.
Induction Step: Using the two steps above, establish that (in this case)
S(k + 1) = 2ᵏ⁺¹ − 1.    (A)
In order to establish (A), we use the definition of S(n) and the steps above.
By definition,
S(k + 1) = 2⁰ + 2¹ + … + 2⁽ᵏ⁺¹⁾⁻¹ = (2⁰ + 2¹ + … + 2ᵏ⁻¹) + 2ᵏ.    (B)
But, by definition,
2⁰ + 2¹ + … + 2ᵏ⁻¹ = S(k).    (C)
Using (C) in (B), we get
S(k + 1) = S(k) + 2ᵏ,    (D)
and, using the Induction Hypothesis in (D),
S(k + 1) = (2ᵏ − 1) + 2ᵏ = 2ᵏ⁺¹ − 1.    (E)
This completes the induction.
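Induction proves the statement for every n at once; a numerical check of the first few cases is still reassuring (checking finitely many cases is, of course, not a proof):

```python
def S(n):
    # S(n) = 2^0 + 2^1 + ... + 2^(n-1)
    return sum(2 ** i for i in range(n))

for n in range(1, 11):
    assert S(n) == 2 ** n - 1

print(S(3), 2 ** 3 - 1)  # 7 7
```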
Ex.1)
Ex.2)
Let us assume that we have unlimited supply of postage stamps of Rs. 5 and
Rs. 6 then
n,
2.5 CONCEPT OF EFFICIENCY OF AN ALGORITHM
If a problem is algorithmically solvable, then it may have more than one algorithmic
solution. In order to choose the best out of the available solutions, there are criteria
for making such a choice. The complexity/efficiency measures or criteria are based
on the requirement of computer resources by each of the available solutions. The solution
which takes the least resources is taken as the best solution and is generally chosen for
solving the problem. However, it is very difficult to even enumerate all possible
computer resources, e.g., the time taken by the designer of the solution and the time taken
by the programmer to encode the algorithm.
Mainly, the two computer resources taken into consideration for efficiency measures
are the time and space requirements for executing the program corresponding to the
solution/algorithm. Until mentioned otherwise, we will restrict ourselves to the time
complexities of algorithms for the problems.
In order to understand the complexity/efficiency of an algorithm, it is very important
to understand the notion of the size of an instance of the problem under consideration
and the role of size in determining complexity of the solution.
It is easy to realize that, given an algorithm for multiplying two n × n matrices, the
time required by the algorithm for finding the product of two 2 × 2 matrices is
expected to be much less than the time taken by the same algorithm for
multiplying, say, two 100 × 100 matrices. This explains intuitively the notion of the
size of an instance of a problem and also the role of size in determining the (time)
complexity of an algorithm. If the size (to be considered formally later) of a general
instance is n, then the time complexity of the algorithm solving the problem (not just
the instance) under consideration is some function of n.
In view of the above explanation, the notion of size of an instance of a problem plays
an important role in determining the complexity of an algorithm for solving the
problem under consideration. However, it is difficult to define precisely the concept
of size in general, for all problems that may be attempted for algorithmic solutions.
Formally, one of the definitions of the size of an instance of a problem may be taken
as the number of bits required in representing the instance.
However, for all types of problems, this does not serve properly the purpose for which
the notion of size is taken into consideration. Hence different measures of the size of an
instance of a problem are used for different types of problems. For example,
(i)
(ii)
In the case of solving polynomial equations or while dealing with the algebra
of polynomials, the degrees of polynomial instances, may be taken as the
sizes of the corresponding instances.
There are two approaches for determining the complexity (or time required) for
executing an algorithm, viz.,
(i) the empirical (or a posteriori) approach, and
(ii) the theoretical (or a priori) approach.
The theoretical approach has, among others, the following advantages:
(i)
The approach does not depend on the programming language in which the
algorithm is coded and on how it is coded in the language,
(ii)
The approach does not depend on the computer system used for executing (a
programmed version of) the algorithm.
(iii)
(iv)
Remark 2.5.1:
In view of the advantages of the theoretical approach, we are going to use it as
the only approach for computing the complexities of algorithms. As mentioned earlier,
in this approach no particular computer is taken into consideration for calculating time
complexity. But different computers have different execution speeds. However, the
speed of one computer is generally some constant multiple of the speed of another.
Therefore, this fact of differences in the speeds of computers by constant
multiples is taken care of, in the complexity function t for general instance size
n, by writing the complexity function as c·t(n), where c is an arbitrary constant.
An important consequence of the above discussion is that if the time taken by one
machine in executing a solution of a problem is a polynomial (or exponential)
function in the size of the problem, then the time taken by every machine is also a
polynomial (respectively, exponential) function in the size of the problem. Thus, functions
differing from each other by constant factors, when treated as time complexities,
should not be treated as different, i.e., they should be treated as complexity-wise
equivalent.
Introduction to
Algorithmics
Remark 2.5.2:
Asymptotic Considerations:
Computers are generally used to solve problems involving complex solutions. The
complexity of solutions may be either because of the large number of involved
computational steps and/or because of large size of input data. The plausibility of the
claim apparently follows from the fact that, when required, computers are generally used not to find the product of two 2 × 2 matrices but to find the product of two
n × n matrices for large n, running into hundreds or even thousands.
Similarly, computers, when required, are generally used not only to find roots of
quadratic equations but for finding roots of complex equations including polynomial
equations of degrees more than hundreds or sometimes even thousands.
The above discussion leads to the conclusion that when considering time complexities
f1(n) and f2(n) of (computer) solutions of a problem of size n, we need to consider and
compare the behaviours of the two functions only for large values of n. If the relative
behaviours of two functions for smaller values conflict with the relative behaviours
for larger values, then we may ignore the conflicting behaviour for smaller values.
For example, if the earlier considered two functions
f1(n) = 1000n²    and    f2(n) = 5n⁴
represent time complexities of two solutions of a problem of size n, then, despite the fact that
f1(n) ≥ f2(n) for n ≤ 14,
we would still prefer the solution having f1(n) as time complexity, because
f1(n) ≤ f2(n) for all n ≥ 15.
This explains the reason for the presence of the phrase "n ≥ k" in the definitions of the various measures of complexities and approximation functions, discussed below:
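The crossover point between two such complexity functions can be probed numerically. The following Python sketch (the code and its names are ours, added for illustration; the original text uses no programming language here) finds the smallest n from which one function stays below the other:

```python
def crossover(f1, f2, n_max=100):
    # Smallest n from which f1(m) <= f2(m) holds for every m up to n_max
    # (an empirical probe of "for all n >= k", not a proof).
    for n in range(1, n_max + 1):
        if all(f1(m) <= f2(m) for m in range(n, n_max + 1)):
            return n
    return None

f1 = lambda n: 1000 * n ** 2
f2 = lambda n: 5 * n ** 4
print(crossover(f1, f2))  # → 15
```

For these two functions the probe reports 15, matching the hand calculation above: 1000n² ≤ 5n⁴ exactly when n² ≥ 200, i.e., from n = 15 onwards.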
Remark 2.5.3:
Comparative Efficiencies of Algorithms: Linear, Quadratic, Polynomial and Exponential
Suppose, for a given problem P, we have two algorithms say A1 and A2 which solve
the given problem P. Further, assume that we also know time-complexities T1(n) and
T2 (n) of the two algorithms for problem size n. How do we know which of the two
algorithms A1 and A2 is better?
The difficulty in answering the question arises from the difficulty in comparing the time complexities T1(n) and T2(n). For example, if T1(n) = 1000n² and T2(n) = 5n⁴, then
T1(n) ≥ T2(n) for n ≤ 14, and
T1(n) ≤ T2(n) for n ≥ 15.
The issue will be discussed in more detail in Unit 3. However, here we may mention that, in view of the fact that we generally use computers to solve problems of large sizes, in the above case the algorithm A1 with time-complexity T1(n) = 1000n² is preferred over the algorithm A2 with time-complexity T2(n) = 5n⁴, because
T1(n) ≤ T2(n) for all n ≥ 15.
In general, if a problem is solved by two algorithms, say B1 and B2, with time-complexities BT1(n) and BT2(n) respectively, then,
(i) if BT1(n) ≤ BT2(n) for all sufficiently large n,
then the algorithm B1 is assumed to be more efficient and is preferred over B2.
Certain complexity functions occur so frequently that special names commensurate with their usage may be given to such functions. For example, the complexity function c·n is called linear time complexity, and the corresponding algorithm is called a linear algorithm.
Similarly, the terms quadratic and polynomial time complexity functions and algorithms are used when the involved complexity functions are respectively of the forms c·n² and c₁nᵏ + … + cₖ.
In the next section we find examples of linear and quadratic algorithms.
Remark 2.5.4:
For all practical purposes, the use of c in (c·t(n)) as a time complexity measure offsets properly the effect of differences in the speeds of computers. However, we need to be on our guard, because in some rarely occurring situations, neglecting the effect of c may be misleading.
For example, suppose two algorithms A1 and A2 respectively take n² days and n³ seconds for execution of an instance of size n of a particular problem. A day is a constant multiple of a second; therefore, as per our conventions, we may take the two complexities as C2·n² and C3·n³ for some constants C2 and C3. As we will discuss later, the algorithm A1 taking C2·n² time is theoretically preferred over the algorithm A2 with time complexity C3·n³. The preference is based on the asymptotic behaviour of the complexity functions of the algorithms. However, in this case, only for instances requiring millions of years does the algorithm A1 requiring C2·n² time actually outperform the algorithm A2 requiring C3·n³ time.
Remark 2.5.5:
Unit of Size for Space Complexity: Most of the literature discusses the complexity of an algorithm only in terms of expected time of execution, generally neglecting the space complexity. However, space complexity has one big advantage over time complexity.
Ex.3) For a problem, two algorithms A1 and A2 have time complexities
T1(n) = 4n⁵ + 3n    and
T2(n) = 2500n³ + 4n
respectively. Find the range for n, the size of an instance of the given problem, for which A1 is more efficient than A2.
2.6.1
The five well-known asymptotic growth rate notations are:
(i) O (big-oh)
(ii) Ω (big-omega)
(iii) Θ (big-theta)
(iv) o (little-oh)
(v) ω (little-omega)
For example, if the functions
f, g: N → N
are given by
f(n) = n² − 5n    and    g(n) = n²,
then we may write
O(f(n)) = g(n), or
O(n² − 5n) = n².
In the discussion of any one of the five notations, generally two functions, say f and g, are involved. The functions have their domains and codomains as N, the set of natural numbers, i.e.,
f: N → N and g: N → N.
These functions may also be considered as having domain and codomain as R.
2.6.2 The Notation O
Provides an asymptotic upper bound for a given function. Let f(x) and g(x) be two functions, each from the set of natural numbers or set of positive real numbers to positive real numbers.
Then f(x) is said to be O(g(x)) (pronounced as big-oh of g of x) if there exist two positive integer/real number constants C and k such that
f(x) ≤ C·g(x) for all x ≥ k.    (A)
Example: For the function f(x) = 2x³ + 3x² + 1, show that
(i) f(x) = O(x³)
(ii) f(x) = O(x⁴)
(iii) x³ = O(f(x))
(iv) x⁴ ≠ O(f(x))
(v) f(x) ≠ O(x²)
Solution:
Part (i)
Consider
f(x) = 2x³ + 3x² + 1 ≤ 2x³ + 3x³ + x³ = 6x³ for all x ≥ 1.
Therefore, with C = 6 and k = 1,
f(x) ≤ C·x³ for all x ≥ k,
and hence f(x) = O(x³).
Part (ii)
As x³ ≤ x⁴ for all x ≥ 1, Part (i) gives f(x) ≤ 6x⁴ for all x ≥ 1; hence f(x) = O(x⁴).
However, we may also, by computing some values of f(x) and x⁴, find C and k as follows:
f(1) = 2 + 3 + 1 = 6;            (1)⁴ = 1
f(2) = 2·2³ + 3·2² + 1 = 29;     (2)⁴ = 16
f(3) = 2·3³ + 3·3² + 1 = 82;     (3)⁴ = 81
Hence C = 2 and k = 3 also serve as witnesses, since f(x) ≤ 2x⁴ for all x ≥ 3.
Part (iii)
For C = 1 and k = 1, x³ ≤ 2x³ + 3x² + 1 = f(x) for all x ≥ k; hence x³ = O(f(x)).
Part (iv)
We prove the result by contradiction. Let there exist positive constants C and k such that
x⁴ ≤ C(2x³ + 3x² + 1) for all x ≥ k
⟹ x⁴ ≤ C(2x³ + 3x³ + x³) = 6C·x³ for all x ≥ max {1, k}
⟹ x⁴ ≤ 6C·x³ for all x ≥ max {1, k},
implying x ≤ 6C for all x ≥ max {1, k}.
But for x = max {6C + 1, k}, the previous statement is not true.
Hence the proof.
Part (v)
Again by contradiction: if there existed C and k with f(x) ≤ C·x² for all x ≥ k, then 2x³ ≤ C·x², i.e., 2x ≤ C, for all x ≥ k; but this fails for x = max {C, k}. Hence f(x) ≠ O(x²).

The big-oh notation can be used to estimate Sn, the sum of the first n positive integers:
Sn = 1 + 2 + 3 + … + n ≤ n + n + … + n = n².
Therefore, Sn = O(n²).
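A claimed big-oh witness pair (C, k) can be sanity-checked over a finite range. The following Python sketch is ours (added for illustration; a finite check is evidence, not a proof):

```python
def holds_O(f, g, C, k, x_max=1000):
    # Checks f(x) <= C * g(x) for k <= x <= x_max — a finite sanity
    # check of a claimed big-oh witness pair (C, k), not a proof.
    return all(f(x) <= C * g(x) for x in range(k, x_max + 1))

f = lambda x: 2 * x ** 3 + 3 * x ** 2 + 1
print(holds_O(f, lambda x: x ** 3, C=6, k=1))  # → True
print(holds_O(f, lambda x: x ** 4, C=2, k=3))  # → True
```

Both witness pairs derived above, (C, k) = (6, 1) for x³ and (2, 3) for x⁴, pass the check.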
Remark 2.6.2.2:
It can easily be seen that, for given functions f(x) and g(x), if there exists one pair of C and k with f(x) ≤ C·g(x) for all x ≥ k, then there exist infinitely many pairs (Ci, ki) which satisfy
f(x) ≤ Ci·g(x) for all x ≥ ki,
viz., any Ci ≥ C and any ki ≥ k.

2.6.3 The Notation Ω
Provides an asymptotic lower bound for a given function. Let f(x) and g(x) be two functions, each from the set of natural numbers or set of positive real numbers to positive real numbers.
Then f(x) is said to be Ω(g(x)) (pronounced as big-omega of g of x) if there exist two positive integer/real number constants C and k such that
f(x) ≥ C·g(x) whenever x ≥ k.
Example 2.6.3.1: For the functions f(x) = 2x³ + 3x² + 1 and h(x) = 2x³ − 3x² + 2, show that
(i) f(x) = Ω(x³)
(ii) h(x) = Ω(x³)
(iii) h(x) = Ω(x²)
(iv) x³ = Ω(h(x))
(v) x² ≠ Ω(h(x))
Solutions:
Part (i)
For C = 1 and k = 1, f(x) = 2x³ + 3x² + 1 ≥ x³ for all x ≥ 1; hence f(x) = Ω(x³).
Part (ii)
We need C and k such that h(x) = 2x³ − 3x² + 2 ≥ C·x³ for all x ≥ k. Taking C = 1, we have
h(x) − x³ = x³ − 3x² + 2 ≥ 0 for all x ≥ 3.
Hence h(x) = Ω(x³), with C = 1 and k = 3.
Part (iii)
As h(x) ≥ x³ ≥ x² for all x ≥ 3, we also have h(x) = Ω(x²).
Part (iv)
With C = 1/2 and k = 1,
x³ ≥ (1/2)(2x³ − 3x² + 2) = C·h(x) for all x ≥ 1,
because (3/2)x² − 1 ≥ 0 for x ≥ 1. Hence x³ = Ω(h(x)).
Part (v)
By contradiction: if x² ≥ C(2x³ − 3x² + 2) for all x ≥ k, then, as 2x³ − 3x² + 2 ≥ x³ for x ≥ 3, we get x² ≥ C·x³, i.e., 1 ≥ C·x, for all x ≥ max {3, k}; but this fails for x > 1/C. Hence x² ≠ Ω(h(x)).
2.6.4 The Notation Θ
Provides simultaneously both an asymptotic lower bound and an asymptotic upper bound for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive real numbers to positive real numbers. Then f(x) is said to be Θ(g(x)) (pronounced as big-theta of g of x) if there exist positive constants C1, C2 and k such that
C2·g(x) ≤ f(x) ≤ C1·g(x) for all x ≥ k.
(Note the last inequalities represent two conditions to be satisfied simultaneously, viz.,
C2·g(x) ≤ f(x) and f(x) ≤ C1·g(x).)
We state the following theorem without proof, which relates the three notations O, Ω, Θ.
Theorem: For any two functions f(x) and g(x), f(x) = Θ(g(x)) if and only if
(i) f(x) = O(g(x)), and (ii) f(x) = Ω(g(x)).

Example: For the function f(x) = 2x³ + 3x² + 1, show that
(i) f(x) = Θ(x³)
(ii) f(x) ≠ Θ(x²)
(iii) f(x) ≠ Θ(x⁴)

Solutions
Part (i)
For C1 = 3, C2 = 1 and k = 4, we have
C2·x³ ≤ f(x) ≤ C1·x³ for all x ≥ 4.
Hence f(x) = Θ(x³).
Part (ii)
It is enough to show that no C1 and k exist with f(x) ≤ C1·x² for all x ≥ k. If such C1 and k existed, then 2x³ ≤ C1·x², i.e., 2x ≤ C1, for all x ≥ k; but this is false for x = max {C1 + 1, k}. Hence f(x) ≠ Θ(x²).
Part (iii)
Again by contradiction: let there exist positive constants C2 and k ≥ 1 such that
C2·x⁴ ≤ (2x³ + 3x² + 1) for all x ≥ k.
As 2x³ + 3x² + 1 ≤ 6x³ for all x ≥ 1, this implies
C2·x ≤ 6 for all x ≥ k.
But for x = max {6/C2 + 1, k},
the above inequality is false. Hence, proof of the claim by contradiction.
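As with big-oh, the Θ-witnesses found above can be verified numerically over a finite range. This Python sketch is ours (a sanity check under the stated witnesses, not a proof):

```python
def holds_theta(f, g, C1, C2, k, x_max=1000):
    # Checks C2*g(x) <= f(x) <= C1*g(x) for k <= x <= x_max — a finite
    # sanity check of claimed big-theta witnesses, not a proof.
    return all(C2 * g(x) <= f(x) <= C1 * g(x) for x in range(k, x_max + 1))

f = lambda x: 2 * x ** 3 + 3 * x ** 2 + 1
print(holds_theta(f, lambda x: x ** 3, C1=3, C2=1, k=4))  # → True
```

With C1 = 3, C2 = 1 and k = 4 the check succeeds, matching Part (i); the same check against x⁴ fails for any fixed C2, consistent with Part (iii).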
2.6.5 The Notation o
The asymptotic upper bound provided by the big-oh notation may or may not be tight, in the sense that if f(x) = 2x³ + 3x² + 1, then for f(x) = O(x³), though there exist C and k such that
f(x) ≤ C·x³ for all x ≥ k,
yet there may also be some values of x for which the equality
f(x) = C·x³
holds. However, if we consider
f(x) = O(x⁴),
then there cannot exist a positive constant C such that
f(x) = C·x⁴ for all x ≥ k.
The case of f(x) = O(x⁴) provides an example for the next notation of small-oh.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive real numbers to positive real numbers.
Further, let C > 0 be any number; then f(x) = o(g(x)) (pronounced as little-oh of g of x) if there exists a natural number k satisfying
f(x) < C·g(x) for all x ≥ k ≥ 1.    (B)
Here we may note the following:
(i) In the case of little-oh the constant C does not depend on the two functions f(x) and g(x); rather, we can arbitrarily choose C > 0.
(ii) The inequality (B) is strict, whereas the inequality (A) of big-oh is not necessarily strict.
Example 2.6.5.1: For f(x) = 2x³ + 3x² + 1, show that
(i) f(x) = o(xⁿ) for any n ≥ 4, and
(ii) f(x) ≠ o(xⁿ) for n ≤ 3.
Solutions:
Part (i)
Let C > 0 be given; we are to find k satisfying the requirement of little-oh. Consider
2x³ + 3x² + 1 < C·xⁿ,
i.e. (dividing throughout by x³),
2 + 3/x + 1/x³ < C·x^(n−3).

Case when n = 4: the inequality becomes
2 + 3/x + 1/x³ < C·x.
If we take k = max {9/C, 3}, then for all x ≥ k we have C·x ≥ 9 > 2 + 3/x + 1/x³; therefore,
2x³ + 3x² + 1 < C·x⁴ for all x ≥ k.
Further, as xⁿ ≥ x⁴ for n ≥ 4 and x ≥ 1, we get
2x³ + 3x² + 1 < C·xⁿ for all n ≥ 4 and all x ≥ k,
with k = max {9/C, 3}.
Part (ii)
We prove the result by contradiction. Let, if possible, f(x) = o(xⁿ) for some n ≤ 3. Then, for every C > 0, there exists a positive constant k such that
2x³ + 3x² + 1 < C·xⁿ for all x ≥ k.
Dividing by x³ throughout, we get
2 + 3/x + 1/x³ < C·x^(n−3) for n ≤ 3 and x ≥ k.
As C is arbitrary, we take C = 1; then the above inequality reduces to
2 + 3/x + 1/x³ < x^(n−3) for n ≤ 3 and x ≥ k ≥ 1.
Also, x^(n−3) ≤ 1 for n ≤ 3 and x ≥ 1, implying
2 + 3/x + 1/x³ < 1 for x ≥ k.
However, the last inequality is not true. Therefore, the proof by contradiction is complete.
Generalising the above example, we get the following theorem:
Theorem 2.6.5.3: Let f(x) and g(x) be functions as in the definition of small-oh notation. Then f(x) = o(g(x)) if and only if
lim (x→∞) f(x)/g(x) = 0.
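Theorem 2.6.5.3 suggests a rough numerical check: when f(x) = o(g(x)), the ratios f(x)/g(x) should shrink toward 0 as x grows. The following Python sketch is ours (a heuristic on a few sample points, not a proof):

```python
def ratio_tends_to_zero(f, g, xs=(10, 100, 1000, 10000)):
    # Heuristic: the ratios f(x)/g(x) at increasing sample points should
    # be strictly decreasing and end up very small.
    r = [f(x) / g(x) for x in xs]
    return all(a > b for a, b in zip(r, r[1:])) and r[-1] < 0.01

f = lambda x: 2 * x ** 3 + 3 * x ** 2 + 1
print(ratio_tends_to_zero(f, lambda x: x ** 4))  # → True
print(ratio_tends_to_zero(f, lambda x: x ** 3))  # → False (ratio tends to 2)
```

Against x⁴ the ratios fall toward 0 (so f = o(x⁴)), while against x³ they approach the nonzero constant 2 (so f ≠ o(x³)), matching Parts (i) and (ii) above.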
Next, we introduce the last asymptotic notation, namely, small-omega. The relation of
small-omega to big-omega is similar to what is the relation of small-oh to big-oh.
2.6.6 The Notation ω
Again, the asymptotic lower bound Ω may or may not be tight. However, the asymptotic bound ω cannot be tight. The formal definition of ω is as follows:
Let f(x) and g(x) be two functions, each from the set of natural numbers or the set of positive real numbers to the set of positive real numbers.
Further, let C > 0 be any number; then
f(x) = ω(g(x))
if there exists a positive integer k such that
f(x) > C·g(x) for all x ≥ k.
Example 2.6.6.1: If f(x) = 2x³ + 3x² + 1, then
f(x) = ω(x) and also f(x) = ω(x²).
Solution:
Let C be any positive constant. Consider
2x³ + 3x² + 1 > C·x.
To find k ≥ 1 satisfying the conditions of the bound ω, divide both sides by x:
2x² + 3x + 1/x > C.
Let k be an integer with k ≥ C + 1. Then, for all x ≥ k,
2x² + 3x + 1/x ≥ 2x² + 3x > x ≥ k ≥ C + 1 > C.
Hence f(x) = ω(x).
Again, dividing both sides by x², for all x ≥ k with k ≥ C + 1 we get
2x + 3 + 1/x² > x ≥ C + 1 > C.
Hence f(x) = ω(x²).
In general, we have the following two theorems (stated without proof).
Theorem 2.6.6.3: Let f(x) and g(x) be functions as in the definition of little-omega. Then f(x) = ω(g(x)) if and only if
lim (x→∞) f(x)/g(x) = ∞;
equivalently, if and only if
lim (x→∞) g(x)/f(x) = 0.
Ex.4)
Ex.5)
Ex.6)
2.7 SUMMARY
In this unit, first of all, a number of mathematical concepts are defined. We defined
the concepts of function, 1-1 function, onto function, ceiling and floor functions, mod
function, exponentiation function and log function. Also, we introduced some
mathematical notations.
In Section 2.3, the concept of mathematical expectation is introduced, which is useful
in average-case analysis. Formally, mathematical expectation is defined as follows:
Mathematical Expectation
For a given set S of items, let each item have associated with it one of the n values, say, v1, v2, …, vn. Let the probability of the occurrence of an item with value vi be pi. If an item is picked up at random, then its expected value E(v) is given by
E(v) = Σ (i = 1 to n) pi·vi = p1·v1 + p2·v2 + … + pn·vn.
(i) empirical
(ii) theoretical.
These approaches are briefly discussed in Section 2.5. However, in the rest of the
course only theoretical approach is used for the analysis and computations of
complexity of algorithms. Also, it is mentioned that analysis and computation of
complexity of an algorithm may be considered in terms of
(i) time expected to be taken or
(ii) space expected to be required for executing the algorithm.
Again, we will throughout consider only the time complexity. Next, the concepts of
linear, quadratic, polynomial and exponential time complexities and algorithms, are
discussed.
Next, five Well Known Asymptotic Growth Rate functions are defined and
corresponding notations are introduced. Some important results involving these
are stated and/or proved
The Notation Ω
Let f(x) and g(x) be two functions, each from the set of natural numbers or set of positive real numbers to positive real numbers.
Then f(x) is said to be Ω(g(x)) (pronounced as big-omega of g of x) if there exist two positive integer/real number constants C and k such that
f(x) ≥ C·g(x) whenever x ≥ k.
The Notation Θ
Provides simultaneously both an asymptotic lower bound and an asymptotic upper bound for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive real numbers to positive real numbers. Then f(x) is said to be Θ(g(x)) (pronounced as big-theta of g of x) if there exist positive constants C1, C2 and k such that
C2·g(x) ≤ f(x) ≤ C1·g(x) for all x ≥ k.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive real numbers to positive real numbers.
Further, let C > 0 be any number; then f(x) = o(g(x)) (pronounced as little-oh of g of x) if there exists a natural number k satisfying
f(x) < C·g(x) for all x ≥ k ≥ 1.
The Notation ω
Again, the asymptotic lower bound Ω may or may not be tight. However, the asymptotic bound ω cannot be tight. The formal definition of ω is as follows:
Let f(x) and g(x) be two functions, each from the set of natural numbers or the set of positive real numbers to the set of positive real numbers.
Further, let C > 0 be any number; then
f(x) = ω(g(x))
if there exists a positive integer k such that
f(x) > C·g(x) for all x ≥ k.
2.8 SOLUTIONS/ANSWERS
Ex. 1)
Consider (k+1)3
(ii)
Part (i):
5 = 1×5 + 0×6,  6 = 0×5 + 1×6                                   (each using 1 stamp)
10 = 2×5 + 0×6,  11 = 1×5 + 1×6,  12 = 0×5 + 2×6                 (each using 2 stamps)
15 = 3×5 + 0×6,  16 = 2×5 + 1×6,  17 = 1×5 + 2×6,  18 = 0×5 + 3×6   (each using 3 stamps)
19 is not possible
20 = 4×5 + 0×6,  21 = 3×5 + 1×6,  22 = 2×5 + 2×6,  23 = 1×5 + 3×6,  24 = 0×5 + 4×6   (each using 4 stamps)
25 = 5×5 + 0×6,  26 = 4×5 + 1×6,  27 = 3×5 + 2×6,  28 = 2×5 + 3×6,  29 = 1×5 + 4×6,  30 = 0×5 + 5×6   (each using 5 stamps)
Part (ii):
Base Step:
Induction Hypothesis:
Induction Step:
Case (i):
Case (ii):
Ex. 3) Algorithm A1 is more efficient than A2 for those values of n for which
4n⁵ + 3n = T1(n) ≤ T2(n) = 2500n³ + 4n,
i.e. (dividing throughout by n),
4n⁴ + 3 ≤ 2500n² + 4,
i.e.,
4n² − 2500 ≤ 1/n².
Consider first 4n² − 2500 = 0; then n = 50/2 = 25.
(i) For n ≤ 25, 4n² ≤ 4·(25)² = 2500, so that 4n² − 2500 ≤ 0 < 1/n², and hence the required inequality holds.
(ii) For n ≥ 26, 4n² − 2500 ≥ 4·(26)² − 2500 = 2704 − 2500 = 204 > 1 > 1/n², and hence the required inequality fails.
Therefore, A1 is more efficient than A2 precisely for n ≤ 25.
1.
Discrete Mathematics and Its Applications (Fifth Edition), Kenneth H. Rosen: Tata McGraw-Hill (2003).
2.
Basics of Analysis
                                                  Page Nos.
3.0 Introduction                                  71
3.1 Objectives                                    72
3.2 Analysis of Algorithms ─ Simple Examples      72
3.3 Well Known Sorting Algorithms                 75
    3.3.1 Insertion Sort
    3.3.2 Bubble Sort
    3.3.3 Selection Sort
    3.3.4 Shell Sort
    3.3.5 Heap Sort
    3.3.6 Divide and Conquer Technique
    3.3.7 Merge Sort
    3.3.8 Quick Sort
    3.3.9 Comparison of Sorting Algorithms
3.4 …                                             97
3.5 …                                             100
    3.5.1 Sequencing
    3.5.2 For Construct
    3.5.3 While and Repeat Constructs
3.6 Recursive Constructs                          105
3.7 Solving Recurrences                           107
3.8 …                                             110
3.9 Summary                                       114
3.10 Solutions/Answers                            114
3.11 Further Readings                             126
3.0 INTRODUCTION
Analysis of algorithms is an essential tool for making well-informed decisions in order to choose the most suitable algorithm, out of the available ones, if any, for the problem or application under consideration. For such a choice of an algorithm, which is based on some efficiency measures relating to computing resources required by the algorithm, there is no systematic method. To a large extent, it is a matter of judgment and experience. However, there are some basic techniques and principles that help and guide us in analysing algorithms. These techniques are mainly for
(i) …, and
(ii) solving recurrence relations, which arise if the algorithm involves recursive structures.
In this unit, we mainly discuss models, techniques and principles for analyzing
algorithms.
Also, sorting algorithms, which form good sources for learning how to design and
analyze algorithms, are discussed in detail in this unit.
3.1 OBJECTIVES
After going through this unit, you should be able to:
explain and use any one of the several well-known algorithms discussed in the text, for sorting a given array of numbers.

3.2 ANALYSIS OF ALGORITHMS ─ SIMPLE EXAMPLES
Analysis of First-Prefix-Averages:
Step 1: The initialization step for setting up of the array A[1..n] takes constant time, say C1, in view of the fact that for the purpose, only the address of A (or of A[1]) is to be passed. Also, after all the values of B[1..n] are computed, returning the array B[1..n] also takes constant time, say C2, again for the same reason.
Step 2: The body of the algorithm has two nested for-loops; the outer one, called the first for-loop, is controlled by i and is executed n times. Hence the second for-loop, along with its body, which forms a part of the first for-loop, is also entered n times. Further, each construct within the second for-loop, controlled by j, is executed i times for each value of i = 1, 2, …, n, because of the iteration of the second for-loop.
Step 3: In addition, each controlling variable i and j is incremented by 1 after each
iteration of i or j as the case may be. Also, after each increment in the control
variable, it is compared with the (upper limit + 1) of the loop to stop the further
execution of the for-loop.
Thus, the first for-loop makes n additions (to reach (n+1)) and n comparisons with (n+1).
The second for-loop makes, for each value of i = 1, 2, …, n, i additions and (i + 1) comparisons for its controlling variable j. Thus, the total number of each of the additions and comparisons done just for the controlling variable j is at most
(1 + 2 + … + n) + n = n(n + 3)/2.
Step 4: Using the explanation of Step 2, we count below the number of times the various operations are executed:
(i) number of additions (Sum ← Sum + A[j]): 1 + 2 + … + n = n(n + 1)/2;
(ii) number of assignments in (B[i] ← Sum/i): n.
Assuming each of the operations counted above takes some constant number of unit
operations, then total number of all operations is a quadratic function of n, the size
of the array A[1..n].
Next, we show that a minor modification of First-Prefix-Averages may lead to an algorithm, to be called Second-Prefix-Averages and defined below, of linear complexity only.
The main change in the new algorithm is that the sum
A[1] + A[2] + … + A[i]
is computed only once, in the variable Sum, in Second-Prefix-Averages. The new algorithm is given below:
Algorithm Second-Prefix-Averages (A[1..n]);
The algorithm requires:
2n additions,
n comparisons,
(2n + 1) assignments,
n divisions, and
C1 and C2, constants for initialization and return.
As each of the operations, viz., addition, comparison, assignment and division, takes a constant number of units of time, the total time taken is C·n for some constant C.
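The quadratic and linear versions can be sketched in Python as follows (the code and the function names are ours, added for illustration; the text itself uses pseudocode):

```python
def first_prefix_averages(A):
    # O(n^2): the prefix sum A[1] + ... + A[i] is recomputed from
    # scratch for every i, as in First-Prefix-Averages.
    n = len(A)
    B = []
    for i in range(1, n + 1):
        s = 0
        for j in range(i):
            s += A[j]
        B.append(s / i)
    return B

def second_prefix_averages(A):
    # O(n): the running sum is carried over between iterations,
    # as in Second-Prefix-Averages.
    B, s = [], 0
    for i, a in enumerate(A, start=1):
        s += a
        B.append(s / i)
    return B

A = [80, 32, 31, 110, 50, 40]
print(first_prefix_averages(A) == second_prefix_averages(A))  # → True
```

Both functions return the same averages; only the number of additions differs (quadratic versus linear in n).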
Insertion Sort
Bubble Sort
Selection Sort
Shell Sort
Heap Sort
Merge Sort
Quick Sort
For the discussion on Sorting Algorithms, let us recall the concept of Ordered Set.
We know that given two integers, say n1 and n2, we can always say whether n1 ≤ n2 or n2 ≤ n1. Similarly, if we are given two rational numbers or real numbers, say n1 and n2, then it is always possible to tell whether n1 ≤ n2 or n2 ≤ n1.
Ordered Set: Any set S with a relation, say, ≤, is said to be ordered if for any two elements x and y of S, either x ≤ y or y ≤ x is true. Then, we may also say that (S, ≤) is an ordered set.
Thus, if I, Q and R respectively denote the set of integers, the set of rational numbers and the set of real numbers, and if ≤ denotes the less than or equal to relation, then each of (I, ≤), (Q, ≤) and (R, ≤) is an ordered set. However, it can be seen that the set C = {x + iy : x, y ∈ R and i² = −1} of complex numbers is not ordered w.r.t. ≤. For example, it is not possible to tell for at least one pair of complex numbers, say 3 + 4i and 4 + 3i, whether 3 + 4i ≤ 4 + 3i, or 4 + 3i ≤ 3 + 4i.
Just to facilitate understanding, we take the list to be sorted as a list of numbers. However, the following discussion about sorting is equally valid for a list of elements from an arbitrary ordered set. In that case, we use the word "key" instead of "number" in our discussion.
The general treatment of each sorting algorithm is as follows:
1.
2.
3.
4.
All the sorting algorithms discussed in this section, are for sorting numbers in
increasing order.
Next, we discuss sorting algorithms, which form a rich source for algorithms. Later,
we will have occasions to discuss general polynomial time algorithms, which of
course include linear and quadratic algorithms.
One of the important applications for studying Sorting Algorithms is the area of
designing efficient algorithms for searching an item in a given list. If a set or a list is
already sorted, then we can have more efficient searching algorithms, which include
binary search and B-Tree based search algorithms, each taking (c. log (n)) time,
where n is the number of elements in the list/set to be searched.
{Pick up the next number 32 from the list and place it at correct position
relative to 80, so that the sublist considered so far is sorted}.
32
80
{We may note in respect of the above sorted sublist, that in order to insert 32
before 80, we have to shift 80 from first position to second and then insert 32
in first position.
The task can be accomplished as follows:
1. First 32 is copied in a location, say m.
2. 80 is copied in the location A[2] (= 32) so that we have
   A[1] = 80,  A[2] = 80,  m = 32
3. 32 is copied in A[1] from m so that we have
   A[1] = 32,  A[2] = 80,  m = 32
thus we get the sorted sublist given above}
32   80
{Next number 31 is picked up, compared first with 80 and then (if required)
with 32. in order to insert 31 before 32 and 80, we have to shift 80 to third
position and then 32 to second position and then 31 is placed in the first
position}.
The task can be accomplished as follows:
1. First 31 is copied in a location, say m.
2. 80 is copied in the location A[3] (= 31) so that we have
   A[1] = 32,  A[2] = 80,  A[3] = 80,  m = 31
3. 32 is copied in A[2] from A[1] so that we have
   A[1] = 32,  A[2] = 32,  A[3] = 80,  m = 31
4. The value 31 from m is copied in A[1] so that we get
   A[1] = 31,  A[2] = 32,  A[3] = 80,  m = 31
thus we get the sorted sublist}
31   32   80
{Next 110 is picked up, compared with 80. As 110>80, therefore, no shifting
and no more comparisons. 110 is placed in the first position after 80}.
31
32
80
110
{Next, number 50 is picked up. First compared with 110, found less; next
compared with 80, again found less; again compared with 32. The correct
position for 50 is between 32 and 80 in the sublist given above. Thus, each of
110 and 80 is shifted one place to the right to make space for 50 and then 50
is placed over there.
The task can be accomplished as follows:
1. First 50 is copied in a location, say m.
2. 110 is copied in the location A[5] (= 50) so that we have
   A[1] = 31,  A[2] = 32,  A[3] = 80,  A[4] = 110,  A[5] = 110,  m = 50
3. 80 is copied in A[4] from A[3] so that we have
   A[1] = 31,  A[2] = 32,  A[3] = 80,  A[4] = 80,  A[5] = 110,  m = 50
4. The value 50 from m is copied in A[3] so that we get
   A[1] = 31,  A[2] = 32,  A[3] = 50,  A[4] = 80,  A[5] = 110,  m = 50
thus we get the following sorted sublist}
31   32   50   80   110
{Next in order to place 40 after 32 and before 50, each of the values 50, 80
and 110 need to be shifted one place to the right as explained above.
However, values 31 and 32 are not to be shifted. The process of inserting 40
at correct place is similar to the ones explained earlier}.
31
32
40
50
80
110
The idea of Insertion Sort as explained above, may be implemented through procedure
Insertion-Sort given below. It is assumed that the numbers to be sorted are stored in
an array A[1..n].
Procedure Insertion-Sort (A[1..n ] : real)
begin {of procedure}
if n = 1
then
write ('A list of one element is already sorted')
else
begin {of the case when n 2}
for j 2 to n do
{to find out the correct relative position for A[ j] and insert it there among
the already sorted elements A[1] to A [j 1]}
begin {of for loop}
if A[j] < A[j − 1] then
begin {We shift entries only if A[j] < A[j − 1]}
i ← j − 1; m ← A[j]
{In order to find the correct relative position, we store A[j] in m and start with the last element A[j − 1] of the already sorted part. If m is less than A[j − 1], then we move towards the left and compare again with the next element of the array. The process is repeated until either m ≥ some element of the array or we reach the left-most element A[1]}.
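The insertion procedure traced above can be rendered compactly in Python (an illustrative sketch; the language and identifiers are ours, not the text's):

```python
def insertion_sort(A):
    # In-place insertion sort, mirroring the shifting steps traced above:
    # m holds the value being inserted, larger entries shift one place right.
    for j in range(1, len(A)):
        m = A[j]
        i = j - 1
        while i >= 0 and A[i] > m:
            A[i + 1] = A[i]   # shift the larger element one place right
            i -= 1
        A[i + 1] = m          # insert the saved value at its correct position
    return A

print(insertion_sort([80, 32, 31, 110, 50, 40]))  # → [31, 32, 40, 50, 80, 110]
```

Running it on the example list 80 32 31 110 50 40 reproduces the sorted list obtained in the trace.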
3.3.2
Bubble Sort
The Bubble Sort algorithm for sorting n numbers, represented by an array A[1..n], proceeds by scanning the array from left to right. At each stage, it compares the adjacent pair of numbers at positions A[i] and A[i+1] and, whenever a pair of adjacent numbers is found to be out of order, the positions of the numbers are swapped. The algorithm then repeats the process for the numbers at positions A[i+1] and A[i+2].
Thus in the first pass, after scanning once all the numbers in the given list, the largest number will reach its destination, but the other numbers in the array may not be in order. In each subsequent pass, one more number reaches its destination.
3.3.2.1 Example
In the following, in each line, pairs of adjacent numbers, shown in bold, are
compared. And if the pair of numbers are not found in proper order, then the
positions of these numbers are exchanged.
The list to be sorted has n = 6 as shown in the first row below:
iteration number i = 1
80  32  31  110  50  40
32  80  31  110  50  40    (j = 1)
32  31  80  110  50  40    (j = 2)
32  31  80  110  50  40    (j = 3)
32  31  80  50  110  40    (j = 4)
32  31  80  50  40  110    (j = 5)
In the first pass traced above, the maximum number 110 of the list reaches the rightmost position (i.e., the 6th position). In the next pass, only the list of the remaining (n − 1) = 5 elements, as shown in the first row below, is taken into consideration. Again pairs of numbers in bold, in a row, are compared and exchanged, if required.
iteration number i = 2
32  31  80  50  40
31  32  80  50  40    (j = 1)
31  32  80  50  40    (j = 2)
31  32  50  80  40    (j = 3)
31  32  50  40  80    (j = 4)
In the second pass, the next-to-maximum element of the list, viz., 80, reaches the 5th position from the left. In the next pass, the list of the remaining (n − 2) = 4 elements is taken into consideration.
iteration number i = 3
31  32  50  40
31  32  50  40    (j = 1)
31  32  50  40    (j = 2)
31  32  40  50    (j = 3)
In the third pass, 50 reaches the 4th position from the left.
iteration number i = 4
31  32  40
31  32  40    (j = 1)
31  32  40    (j = 2)
iteration number i = 5
31  32
These elements are compared and found in proper order. The process terminates.
Procedure bubblesort (A[1..n])
begin
for i ← 1 to n − 1 do
for j ← 1 to (n − i) do
{in each new iteration, as explained above, one less number of elements is taken into consideration; this is why j varies up to only (n − i)}
if A[j] > A[j+1] then interchange A[j] and A[j+1]
end
{A[1..n] is in increasing order}
Note: As there is only one statement in the scope of each of the two for-loops,
therefore, no begin and end pair is used.
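The procedure above translates directly into Python; the version below also counts comparisons and swaps, which is handy for exercises such as Ex. 2 (the code and names are ours, not the text's):

```python
def bubble_sort(A):
    # Sorts a copy of A; also counts comparisons and swaps.
    A = A[:]
    n = len(A)
    comparisons = swaps = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):   # one fewer pair is examined in each pass
            comparisons += 1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swaps += 1
    return A, comparisons, swaps

result, c, s = bubble_sort([15, 10, 13, 9, 12, 17])
print(result, c, s)  # → [9, 10, 12, 13, 15, 17] 15 7
```

For a list of 6 elements the comparison count is always (n−1) + (n−2) + … + 1 = 15, while the swap count depends on the initial order.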
Ex. 2) Sort the following sequence of numbers using Bubble Sort:
15,
10,
13,
9,
12,
17.
Further, find the number of comparisons and assignments required by the
algorithm in sorting the list.
3.3.3
Selection Sort
80  32  31  110  50  40    {given initially}
Initially, MAX ← 80 and MAX-POS ← 1.
After one iteration, finally, MAX ← 110 and MAX-POS = 4. Therefore, the entries 110 and 40 are exchanged to get
80  32  31  40  50  110
In the next iteration, over the sublist 80 32 31 40 50, initially and finally MAX ← 80 and MAX-POS ← 1. Therefore, the entries 80 and 50 are exchanged to get
50  32  31  40  80  110
In the next iteration, over the sublist 50 32 31 40, initially and finally MAX ← 50 and MAX-POS ← 1. Therefore, the entries 50 and 40 are exchanged to get
40  32  31  50  80  110
In the next iteration, over the sublist 40 32 31, initially and finally MAX ← 40 and MAX-POS ← 1. Therefore, the entries 40 and 31 are exchanged to get
31  32  40  50  80  110
In the next iteration, over the sublist 31 32, finally MAX ← 32 and MAX-POS ← 2, which is already the right-most position of the sublist; hence no exchange is required.
The next sublist is a single-element list. Hence, no more iterations; the algorithm terminates. This completes the sorting of the given list.
Next, we formalize the method used in sorting the list.
Procedure Selection-Sort (A[1..n])
begin {of procedure}
for i ← 1 to (n − 1) do
begin {of i for-loop}
MAX ← A[1];
MAX-POS ← 1
for j ← 2 to (n − i + 1) do
begin {of j for-loop}
if MAX < A[j] then
begin
MAX-POS ← j;
MAX ← A[j]
end {of if}
end {of j for-loop}
A[MAX-POS] ← A[n − i + 1];
A[n − i + 1] ← MAX
{the ith maximum value is stored in the ith position from the right, or equivalently, in the (n − i + 1)th position of the array}
end {of i for-loop}
end {of procedure}
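The same MAX / MAX-POS bookkeeping can be written in Python as follows (an illustrative sketch; the identifiers are ours):

```python
def selection_sort(A):
    # Each pass scans the not-yet-sorted prefix A[0 .. n-i-1] for its
    # maximum (the MAX / MAX-POS bookkeeping of the procedure above)
    # and swaps it to the right end of that prefix.
    A = A[:]
    n = len(A)
    for i in range(n - 1):
        max_pos = 0
        for j in range(1, n - i):
            if A[j] > A[max_pos]:
                max_pos = j
        A[max_pos], A[n - i - 1] = A[n - i - 1], A[max_pos]
    return A

print(selection_sort([80, 32, 31, 110, 50, 40]))  # → [31, 32, 40, 50, 80, 110]
```

Applied to the traced example, each pass moves one maximum to the right end, exactly as in the exchanges shown above.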
3.3.4
Shell Sort
The sorting algorithm is named in honour of D.L. Shell (1959), who suggested it. Shell Sort is also called diminishing-increment sort. The essential idea behind Shell Sort is to apply one of the other sorting algorithms (generally Insertion Sort) to each of several interleaved sublists of the given list of numbers to be sorted. In successive iterations, the sublists are formed by stepping through the file with an increment INCi taken from some pre-defined decreasing sequence of step-sizes INC1 > INC2 > … > INCi > …, which must terminate in 1.
Example 3.3.4.2: Let the list of numbers to be sorted be
13  3  4  12  14  10  5  1  8  2  7  9  11  6      (n = 14)
First, take INC = 5. Taking the sublist of elements at the 1st, 6th and 11th positions, viz., the sublist of values 13, 10 and 7, after sorting these values we get the sorted sublist
7  10  13
Taking the sublist of elements at the 2nd, 7th and 12th positions, viz., the sublist of values 3, 5 and 9, after sorting these values we get the sorted sublist
3  5  9
Taking the sublist of elements at the 3rd, 8th and 13th positions, viz., the sublist of values 4, 1 and 11, after sorting these values we get the sorted sublist
1  4  11
Similarly, we get the sorted sublists
6  8  12    and    2  14
{Note that, in this case, the last sublist has only two elements, as it is the 5th sublist and n = 14 is less than 2·INC + 5 (= 15), where INC = 5.}
After merging or interleaving the entries from the sublists, while maintaining the initial relative positions, we get the New List:
7  3  1  6  2  10  5  4  8  14  13  9  11  12
Next, take INC = 3 and repeat the process; we get the sorted sublists
5  6  7  11  14,
2  3  4  12  13    and
1  8  9  10
After merging the entries from the sublists, while maintaining the initial relative positions, we get the New List:
5  2  1  6  3  8  7  4  9  11  12  10  14  13
Taking INC = 1 and repeating the process, we get the sorted list
1  2  3  4  5  6  7  8  9  10  11  12  13  14
Note: Sublists should not be chosen at distances which are multiples of each other, e.g., 8, 4, 2, etc.; otherwise, the same elements may be compared again and again.
Procedure Shell-Sort (A[1..n]: real)
k: integer;          {to store the number of increments}
INC[1..k]: integer;  {to store the values of the various increments}
begin {of procedure}
read (k);
for i ← 1 to (k − 1) do
read (INC[i]);
INC[k] ← 1           {last increment must be 1}
for i ← 1 to k do
{this i has no relation with the previous occurrence of i}
begin
j ← INC[i];
r ← ⌊n/j⌋;
for t ← 1 to j do
{for selection of the sublist starting at position t}
begin
if n < r·j + t
then s ← r − 1
else s ← r;
Insertion-Sort (A[t, t + j, …, t + s·j])
end {of t-for-loop}
end {of i-for-loop}
end {of procedure}
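A Python rendering of the same idea, with the gap-wise insertion sort written inline (an illustrative sketch; the code and names are ours):

```python
def shell_sort(A, increments=(5, 3, 1)):
    # For each increment, run an insertion sort over every interleaved
    # sublist A[start], A[start+inc], A[start+2*inc], ...
    # The last increment must be 1, which finishes the sort.
    A = A[:]
    n = len(A)
    for inc in increments:
        for start in range(inc):
            for i in range(start + inc, n, inc):
                m, j = A[i], i - inc
                while j >= start and A[j] > m:
                    A[j + inc] = A[j]   # shift within the gapped sublist
                    j -= inc
                A[j + inc] = m
    return A

print(shell_sort([13, 3, 4, 12, 14, 10, 5, 1, 8, 2, 7, 9, 11, 6]))
```

With increments (5, 3, 1), the intermediate states match the interleaved sublists of Example 3.3.4.2.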
Ex. 4) Sort the following sequence of numbers, using Shell Sort:
15,
10,
13,
9,
12
7
Further, find the number of comparisons and copy/assignment operations
required by the algorithm in sorting the list.
Also, initial INC = 3 and final INC = 1 for selecting interleaved sublists. For
sorting sublists use Insertion Sort.
3.3.5
Heap Sort
1.
It may be noted that the above definition is a recursive definition, in the sense
that definition of binary tree is given in its own terms (i.e., binary tree). In
Unit 1, we discussed other examples of recursive definitions.
2.
The following are all the distinct binary trees having two nodes:

{Figure: the two binary trees with two nodes, viz. a root with only a left child, and a root with only a right child}

The following are all the distinct binary trees having three nodes:

{Figure: the five distinct binary trees with three nodes}
Heap: is defined as a binary tree with keys assigned to its nodes (one key per node)
such that the following conditions are satisfied:

(i) The binary tree is essentially complete (or simply complete), i.e., all its levels
are full, except possibly the last level, where only some rightmost leaves may be
missing.
(ii) The key at each node is greater than or equal to the keys at its children.
{Figure: an example of a Heap, with 10 at the root}

However, the following is not a heap, because the value 6 in a child node is more
than the value 5 in the parent node:

{Figure: a binary tree with 10 at the root, in which a node with value 5 has a child with value 6}

Also, the following is not a heap, because some leaves (e.g., the right child of 5), in
between two other leaves (viz. 4 and 1), are missing:

{Figure: a binary tree with 10 at the root, in which the right child of 5 is missing while the leaves 4 and 1 are present}
Equivalently, a Heap is an array H[1..n] in which every element in position i (the parent) in the first
half of the array is greater than or equal to the elements in positions 2i and 2i + 1 (the
children).
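The array formulation of the Heap condition can be checked directly; the following is a minimal Python sketch of such a check (the function name is chosen for illustration):

```python
def is_heap(h):
    """Check the Heap property for an array H[1..n] stored in a 0-based
    Python list: every parent at position i (1-based, in the first half)
    must be >= its children at positions 2i and 2i + 1."""
    n = len(h)
    for i in range(1, n // 2 + 1):          # parents occupy the first half
        for child in (2 * i, 2 * i + 1):
            if child <= n and h[i - 1] < h[child - 1]:
                return False
    return True
```

For instance, is_heap([120, 80, 110, 32, 50, 31, 40]) holds for the final Heap constructed in Example 3.3.5.1 below, while is_heap([80, 32, 31, 110]) fails, since 110 > 32.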
HEAP SORT is a three-step algorithm, as discussed below:

(i) (Heap construction) Construct a Heap from the given list of values.
(ii) (Maximum deletion) Copy the root value (which is the maximum of all the values in
the Heap) to the right-most yet-to-be-occupied location of the array used to store the
sorted values, and copy the value in the last node of the tree (or of the
corresponding array) to the root.
(iii) Consider the binary tree (which is not necessarily a Heap now) obtained from
the Heap through the modifications of Step (ii) above and by removing the
current last node from further consideration. Convert the binary tree into a
Heap by suitable modifications.
Example 3.3.5.1:

Let us consider applying Heap Sort to the sorting of the list 80 32 31 110 50 40 120,
represented by an array A[1..7].
Step 1: Construction of a Heap for the given list

First, we create the tree having the root as the only node, with node-value 80:

{Tree: 80}

(In what follows, a tree is shown within braces level by level, with the levels separated by semicolons.)

Next, the value 32 is attached as the left child of the root:

{Tree: 80; 32}

As 32 < 80, the heap property is satisfied. Hence, no modification of the tree.

Next, value 31 is attached as the right child of the node 80, as shown below:

{Tree: 80; 32, 31}
Again, as 31 < 80, the heap property is not disturbed. Therefore, no modification of the
tree.

Next, value 110 is attached as the left child of 32, as shown below:

{Tree: 80; 32, 31; 110}

However, 110 > 32, i.e., the value in the child node is more than the value in the parent node.
Hence the tree is modified by exchanging the values in the two nodes, so that we get
the following tree:

{Tree: 80; 110, 31; 32}

Again, as 110 > 80, the value in the child node is more than the value in the parent node.
Hence the tree is modified by exchanging the values in the two nodes, so that we get
the following tree:

{Tree: 110; 80, 31; 32}

This is a Heap.
Next, the number 50 is attached as the right child of 80, so that the new tree is as given below:

{Tree: 110; 80, 31; 32, 50}

As the tree satisfies all the conditions of a Heap, we insert the next number 40 as the left
child of 31 to get the tree:

{Tree: 110; 80, 31; 32, 50, 40}
As the new insertion violates the condition of a Heap, the values 40 and 31 are
exchanged to get the tree, which is a heap:

{Tree: 110; 80, 40; 32, 50, 31}

Next, we insert the last value 120 as the right child of 40 to get the tree:

{Tree: 110; 80, 40; 32, 50, 31, 120}
The last insertion violates the conditions for a Heap. Hence 40 and 120 are exchanged
to get the tree:

{Tree: 110; 80, 120; 32, 50, 31, 40}

Again, due to the movement of 120 upwards, the Heap property is disturbed at the nodes
110 and 120. Again 120 is moved up to get the following tree, which is a heap:

{Tree: 120; 80, 110; 32, 50, 31, 40}
Next, the root value 120 is copied into B[7], the last location of the array B used to store
the sorted values; the value 40 of the last node is moved to the root and the node containing
40 is removed from further consideration, to get the following binary tree, which is not a Heap:

{Tree: 40; 80, 110; 32, 50, 31}

In order to restore the above tree as a Heap, the value 40 is exchanged with the
maximum of the values of its two children. Thus 40 and 110 are exchanged to get the
tree, which is a Heap:

{Tree: 110; 80, 40; 32, 50, 31}
Again, 110 is copied to B[6] and 31, the last value of the tree, is shifted to the root; the
last node is removed from further consideration, to get the following tree, which is not
a Heap:

{Tree: 31; 80, 40; 32, 50}

Again the root value is exchanged with the maximum of its children's
values, i.e., with the value 80, to get the following tree, which again is not a Heap:

{Tree: 80; 31, 40; 32, 50}

Again the value 31 is exchanged with the maximum of the values of its children, i.e.,
with 50, to get the tree, which is a heap:

{Tree: 80; 50, 40; 32, 31}
Again 80 is copied into B[5] and 31, the value of the last node, replaces 80 in the root;
the last node is removed from further consideration, to get the tree, which is not a
heap:

{Tree: 31; 50, 40; 32}

Again, 50, the maximum of the two children's values, is exchanged with the value 31 of
the root, to get the tree, which is still not a heap:

{Tree: 50; 31, 40; 32}

Again, 31 is exchanged with 32, the value of its only child, to get the tree, which is a Heap:

{Tree: 50; 32, 40; 31}

Next, 50 is copied into B[4] and 31, the value of the last node, replaces the value in the
root; the last node is removed from further consideration, to get the tree, which is not
a Heap:

{Tree: 31; 32, 40}

Again 40, the maximum of the values of the children, is exchanged with 31, the value in
the root. We get the Heap:

{Tree: 40; 32, 31}
Again 40 is copied into B[3]. The value in the last node of the tree, viz. 31, replaces the
value in the root and the last node is removed from further consideration, to get the
tree, which is not a Heap:

{Tree: 31; 32}

Again 32, the value of its only child, is exchanged with the value of the root to get the
Heap:

{Tree: 32; 31}

Next, 32 is copied into B[2] and 31, the value in the last node, is copied into the root and
the last node is deleted, to get the tree, which is a Heap:

{Tree: 31}

This value is copied into B[1], and the Heap Sort algorithm terminates with the sorted
list 31, 32, 40, 50, 80, 110, 120 in B[1..7].
Next, we consider the two procedures

1. Build-Heap, and
2. Delete-Root-n-Rebuild,

which constitute the Heap-Sort algorithm. The list L of n numbers to be sorted is
represented by an array A[1..n].

The Heap-Sort algorithm reads, one by one, the values from the given to-be-sorted
array A[1..n] and gradually builds a Heap, by calling the procedure
Build-Heap. For building the Heap, an array H[1..n] is used for storing the elements of
the Heap. Once the Heap is built, the elements of A[1..n] are already stored in H[1..n],
and hence A may be used for storing the elements of the finally sorted list, for which we
earlier used the array B. Then the following three steps are repeated n times (n being the number
of elements in the array); in the i-th iteration:

(i) The root element H[1] is copied into the location A[n - i + 1] of the given array A.
The first time, the root element is stored in A[n]; the next time, the root element is
stored in A[n - 1]; and so on.

(ii) The last element of the array, H[n - i + 1], is copied into the root of the Heap, i.e.,
into H[1], and H[n - i + 1] is removed from further consideration. In other words,
in the next iteration, only the array H[1..(n - i)] (which may not be a Heap) is
taken into consideration.

(iii) The array H[1..(n - i)] is converted into a Heap by suitable modifications (the
Delete-Root-n-Rebuild step).
The following procedure takes an array B[1..m] of size m and builds it into a Heap:

Procedure Build-Heap (B[1..m]: real)
begin {of procedure}
for j ← 2 to m do
  begin {of for-loop}
    location ← j;
    while (location > 1) do
      begin
        parent ← ⌊location/2⌋;
        if B[location] ≤ B[parent] then exit {i.e., quit the while loop}
        else {i.e., if B[location] > B[parent], then}
          begin
            exchange B[location] and B[parent];
            location ← parent
          end
      end {of while loop}
  end {of for-loop}
end {of procedure}.
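Assuming the sift-up completion of the while-loop as given above, Build-Heap and the delete-root-and-rebuild step may be sketched together in Python (the helper names are illustrative):

```python
def build_heap(h):
    """Procedure Build-Heap: sift each element at (1-based) position
    j = 2..m up toward the root until its parent is at least as large."""
    for j in range(1, len(h)):              # 0-based positions 1..m-1
        loc = j
        while loc > 0:
            parent = (loc - 1) // 2         # 0-based analogue of [location/2]
            if h[loc] <= h[parent]:
                break                       # heap property holds; quit while loop
            h[loc], h[parent] = h[parent], h[loc]
            loc = parent

def heap_sort(a):
    """Heap Sort: build a Heap in H, then repeatedly delete the root
    (the maximum) into the output array and rebuild the remaining Heap."""
    h = list(a)
    out = [None] * len(h)
    build_heap(h)
    for size in range(len(h), 0, -1):
        out[size - 1] = h[0]                # root -> right-most free slot
        h[0] = h[size - 1]                  # last value replaces the root
        # rebuild: sift the new root value down within h[0 .. size-2]
        loc, end = 0, size - 1
        while True:
            largest = loc
            for child in (2 * loc + 1, 2 * loc + 2):
                if child < end and h[child] > h[largest]:
                    largest = child
            if largest == loc:
                break
            h[loc], h[largest] = h[largest], h[loc]
            loc = largest
    return out
```

heap_sort([80, 32, 31, 110, 50, 40, 120]) reproduces the sorted list of Example 3.3.5.1.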
There are two well-known Divide-and-Conquer methods for sorting, viz.:

(i) Merge Sort, and
(ii) Quick Sort.

3.3.7
Merge Sort
In this method, we recursively chop the list into two sublists of almost equal sizes
and when we get lists of sizes one, then start sorted merging of lists in the reverse
order in which these lists were obtained through chopping. The following example
clarifies the method.
Example 3.3.7.1 of Merge Sort:
Given List: 4 6 7 5 2 1 3
Chop the list to get two sublists, viz.

((4, 6, 7, 5), (2, 1, 3)),

where the parentheses mark the two sublists.

Again chop each of the sublists to get two sublists for each, viz.

(((4, 6), (7, 5)), ((2), (1, 3)))
Again repeating the chopping operation on each of the lists of size two or more
obtained in the previous round of chopping, we get lists of size 1 each viz 4 and 6, 7
and 5, 2, 1 and 3. In terms of our notations, we get
((((4), (6)), ((7), (5))), ((2), ((1), (3))))
At this stage, we start merging the sublists in the reverse order in which chopping was
applied. However, during merging the lists are sorted.
Starting to merge after sorting, we get sorted lists of at most two elements, viz.

(((4, 6), (5, 7)), ((2), (1, 3)))

Merging two consecutive lists, each of at most two elements, we get the sorted lists

((4, 5, 6, 7), (1, 2, 3))
Finally, merging two consecutive lists of at most 4 elements each, we get the sorted
list: (1, 2, 3, 4, 5, 6, 7).
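The chop-and-merge process described above may be sketched in Python as follows (a sketch, not the unit's formal procedure):

```python
def merge_sort(a):
    """Recursively chop the list into two (almost equal) halves, then
    merge the sorted halves in the reverse order of the chopping."""
    if len(a) <= 1:
        return list(a)
    mid = (len(a) + 1) // 2                 # (4, 6, 7, 5) / (2, 1, 3) for 7 elements
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # sorted merging of the two sublists
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

merge_sort([4, 6, 7, 5, 2, 1, 3]) reproduces the example above.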
3.3.8
Quick Sort
Quick Sort is also a divide and conquer method of sorting. It was designed by
C.A.R. Hoare, one of the pioneers of Computer Science and also Turing Award
Winner for the year 1980. This method does more work in the first step of
partitioning the list into two sublists. Then combining the two lists becomes trivial.
To partition the list, we first choose some value from the list for which, we hope,
about half the values will be less than the chosen value and the remaining values will
be more than the chosen value.
Division into sublists is done through the choice and use of a pivot value, which is a
value in the given list so that all values in the list less than the pivot are put in one list
and rest of the values in the other list. The process is applied recursively to the sublists
till we get sublists of lengths one.
Remark 3.3.8.1:
The choice of pivot has significant bearing on the efficiency of Quick-Sort algorithm.
Sometimes, the very first value is taken as the pivot.

However, the first value of a given list may be a poor choice, especially when the
given list is already ordered or nearly ordered, because then one of the sublists may
be empty. For example, for the list

7 6 4 3 2 1

the choice of the first value as pivot yields the list of values greater than 7, the pivot, as
empty.
Generally, some middle value is chosen as a pivot.
Even the choice of the middle value as pivot may turn out to be a very poor choice; e.g., for
the list

2 4 6 7 3 1 5

the choice of the middle value, viz. 7, is not good, because 7 is the maximum value of the
list. Hence, with this choice, one of the sublists will be empty.
A better method for the choice of the pivot position is to use a random number generator
to generate a number j between 1 and n for the position of the pivot, where n is the
size of the list to be sorted. Some simpler methods take the median of a
sample of values at positions between 1 and n as the pivot position. For example, the median of the
values at the first, last and middle (or one of the middle, if n is even) positions may be taken
as the pivot.
Example 3.3.8.1 of Quick Sort
We use two indices (viz i & j, this example) one moving from left to right and other
moving from right to left , using the first element as pivot
In each iteration, the index i while moving from left to the right, notes the position of
the value first from the left that is greater than pivot and stops movement for the
iteration. Similarly, in each iteration, the index j while moving from right to left notes
the position of the value first from the right that is less than the pivot and stops
movement for the iteration. The values at positions i and j are exchanged. Then, the
next iteration starts with current values of i and j onwards.
5 3 1 9 8 2 4 7 {Given list; the first element 5 is the pivot, i is at 9 and j is at 4}

{The value 9 is the first value from the left that is greater than the pivot, viz. 5, and the value 4 is the
first value from the right that is less than the pivot. These values are exchanged to get the following list:}

5 3 1 4 8 2 9 7

{Moving i toward the right and j toward the left, i stops if it reaches a value greater than the pivot and j
stops if it meets a value less than the pivot. Also, both stop if j ≤ i. Now i is at 8 and j is at 2, and these
values are exchanged to get the list:}

5 3 1 4 2 8 9 7

{The next moves of i to the right and j to the left make j < i, with j at the value 2 and i at the value 8;
this indicates the completion of one iteration of movements of i and j.}

{At this stage, we exchange the pivot with the value at position j, i.e., 2 and 5 are exchanged, so that the
pivot occupies almost the middle position:}

2 3 1 4 5 8 9 7

{It may be noted that all the values to the left of 5 are less than 5 and all the values to the right of 5 are
greater than 5. Then the two sublists, viz. (2, 3, 1, 4) and (8, 9, 7), are sorted independently by the same
process.}
For the sublist 2 3 1 4, with 2 as pivot, i stops at 3 and j stops at 1; these values are
exchanged to get 2 1 3 4. The next moves of i and j make j < i, with j at the value 1, and
the pivot 2 is exchanged with 1 to get 1 2 3 4; the remaining sublists to be sorted are the
empty list and (3, 4).

For the sublist 8 9 7, with 8 as pivot, i stops at 9 and j stops at 7; these values are
exchanged to get 8 7 9. The next moves make j < i, with j at the value 7, and the pivot 8
is exchanged with 7 to get 7 8 9; the remaining sublists, viz. (7) and (9), are singletons
and hence already sorted.

For the sublist 3 4, with 3 as pivot, the pivot position turns out to be 1 = j. As the pivot
position is neglected, we need to sort the remaining lists, which are respectively empty
and (4), and which are already sorted.

This completes the sorting. Merging does not require any extra time, as the
entries are already in sorted order.
Procedure Quick-Sort (A[min..max]: real)
{min is the lower index and max the upper index of the array to be sorted using Quick Sort}
begin
if min < max then
  begin
    p ← Partition (A[min..max]);
    {p is the position such that, for min ≤ i ≤ p - 1, A[i] ≤ A[p], and, for j ≥ p + 1, A[j] ≥ A[p]}
    Quick-Sort (A[min..(p - 1)]);
    Quick-Sort (A[(p + 1)..max])
  end
end;

{In the procedure Partition, the first element is taken as the pivot; the index i moves from left to right
to find the first position v1 from the left such that A[v1] > A[1]. Similarly, j moves from right to left and
finds the first position v2 from the right such that A[v2] < A[1]. If j > i, then A[i] and A[j] are exchanged
and the scans continue. If j ≤ i, then A[1] and A[j] are exchanged and Partition returns j.}

{Each exchange operation involves three assignments, viz., temp ← A[i]; A[i] ← A[j]; A[j] ← temp, where
temp is a new variable.}
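Assuming the first element as pivot and the i/j scanning scheme described above, Quick Sort may be sketched in Python as follows:

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[lo]: i moves right and stops
    at the first value greater than the pivot, j moves left and stops at
    the first value less than the pivot; stopped values are exchanged
    until j <= i, and finally the pivot moves to position j."""
    pivot = a[lo]
    i, j = lo + 1, hi
    while True:
        while i <= hi and a[i] <= pivot:
            i += 1
        while a[j] > pivot:                 # a[lo] == pivot guards j >= lo
            j -= 1
        if j <= i:
            break
        a[i], a[j] = a[j], a[i]             # exchange the stopped values
    a[lo], a[j] = a[j], a[lo]               # pivot to its final position
    return j

def quick_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:                             # lists of length <= 1 are sorted
        p = partition(a, lo, hi)
        quick_sort(a, lo, p - 1)
        quick_sort(a, p + 1, hi)
    return a
```

quick_sort([5, 3, 1, 9, 8, 2, 4, 7]) follows exactly the exchanges traced in the example above.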
The basic actions involved in executing the sorting algorithms are of two types:

(i) comparison of keys, and
(ii) assignment of values.

The following table gives the time complexity requirements, in terms of the size n of the
list to be sorted, for these two types of actions. Unless
mentioned otherwise, the complexities are for the average-case behaviour of the
algorithms:
Name of the algorithm    Comparison of Keys                       Assignments

Selection Sort           0.5 n² + O(n)                            3.0 n + O(1)

Insertion Sort           0.25 n² + O(n) (average);                0.25 n² + O(n) (average);
                         0.5 n² + O(n) (worst)                    0.5 n² + O(n) (worst)

Shell Sort

Bubble Sort

Heap Sort                n log n + O(n)

Quick Sort               1.39 n log n (average)                   0.69 n log n (average)

Merge Sort               n log n to (n log n - 1.583 n + 1)       n log n
                         (for linked lists)                       (for contiguous lists)
Merge Sort is good for linked lists but poor for contiguous lists. On the other hand,
Quick Sort is good for contiguous lists but is poor for linked lists.
In context of complexities of a sorting algorithm, we state below an important
theorem without proof.
Theorem: Any algorithm, based on comparison of keys, that sorts a list of n
elements must, in its average case, perform at least log₂(n!) (≈ n log n + O(n))
comparisons of keys.
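The bound in the theorem can be checked numerically; the following sketch compares log₂(n!) with n log₂ n:

```python
import math

def comparisons_lower_bound(n):
    """log2(n!), the information-theoretic lower bound on the average
    number of key comparisons needed to sort n elements."""
    return math.log2(math.factorial(n))
```

For example, log₂(n!) lies between n·log₂(n) - 1.45·n and n·log₂(n), which is the sense of the n log n + O(n) estimate.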
The operations performed are the four assignments

j ← 2;
i ← j - 1;
m ← A[j];
A[i + 1] ← m;

together with the single comparison m < A[i] in the while loop.
We can see that these are the minimum operations performed by Insertion Sort, irrespective
of the value of A[2]. Further, had A[2] been less than A[1], then more operations
would have been performed, as we shall see in the next example.
To conclude, if A[2] ≥ A[1] (as is the case here, because A[2] = 3 and A[1] = 1), then
Insertion Sort performs 4 assignments and 1 comparison. In general,
if A[l + 1] ≥ A[l], then we require (exactly) 4 additional assignments and 1 comparison
to place the value A[l + 1] in its correct position, viz. the (l + 1)-th.

Thus, in order to use Insertion Sort on an already properly sorted list of n
elements, we need 4(n - 1) assignments and (n - 1) comparisons.
Further, we notice, that these are the minimum operations required to sort a list
of n elements by Insertion Sort.
Hence in the case of array TS[1..n], the Insertion-Sort algorithm takes linear time for
an already sorted array of n elements. In other words, Insertion-Sort has linear
Best-Case complexity for TS[1..n].
Next, we discuss the number of operations required by Insertion Sort for sorting
TR[1..n], which is sorted in reverse order. For this purpose, let us consider the sorting
of the list {9, 7, 5, 3, 1}, stored as A[1] = 9, A[2] = 7, A[3] = 5, A[4] = 3 and A[5] = 1.

Let m denote the variable in which the value to be compared with other values of the
array is stored. As discussed above in the case of an already properly sorted list, 4
assignments and one comparison, of comparing A[k + 1] with A[k], are essentially
required to start the process of putting each new entry A[k + 1] in place, after the list
A[1] to A[k] has already been sorted. However, as 7 = A[2] = m < A[1] = 9, A[1] is copied to
A[2] and m is copied to A[1]. Thus Insertion Sort requires one more assignment
(viz. A[2] ← A[1]).
At this stage A[1] = 7 and A[2] = 9. It is easily seen that for a list of two elements, to
be sorted in proper order using Insertion-Sort, at most one comparison and
5 (= 4 +1) assignments are required.
Next, m ← A[3] = 5, and m is first compared with A[2] = 9; as m < A[2], A[2] is
copied to A[3]. So, at this stage, both A[2] and A[3] equal 9.

Next, m = 5 is compared with A[1] = 7 and, as m < A[1], A[1] is copied to
A[2], so that, at this stage, both A[1] and A[2] contain 7. Next, 5, the value in m, is copied to
A[1]. Thus, at this stage, A[1] = 5, A[2] = 7 and A[3] = 9. During this round,
we made two comparisons and two additional assignments
(viz. A[3] ← A[2] and A[2] ← A[1]), and hence in total (4 + 2) assignments were made.
Thus, so far we have made 1+ 2 comparisons and 5 + 6 assignments.
Continuing like this, for placing A[4] = 3 at the right place in the sorted list {5, 7, 9}, we
make 3 comparisons and 7 assignments. And for placing A[5] = 1 at the right place in the
sorted list {3, 5, 7, 9}, we make 4 comparisons and 8 assignments. Thus, in the case of
a list of 5 elements in reverse order, the algorithm makes 1 + 2 + 3 + 4 = 10
comparisons and 5 + 6 + 7 + 8 = 26 = 4 · 4 + (1 + 2 + 3 + 4) assignments.

In general, for a list of n elements sorted in reverse order, the Insertion Sort
algorithm makes

1 + 2 + … + (n - 1) = (1/2)(n² - n) comparisons

and

4(n - 1) + (1 + 2 + … + (n - 1)) = 4(n - 1) + (1/2)(n² - n) assignments.
Again, it can be easily seen that, for any list of n elements, the Insertion-Sort algorithm
makes at most (1/2)(n² - n) comparisons and 4(n - 1) + (1/2)(n² - n) assignments.
Thus, the total number of operations is at most

(1/2)(n² - n) + [4(n - 1) + (1/2)(n² - n)] = n² + 3n - 4.
If we assume that the time required for a comparison is a constant multiple of the time
taken by an assignment, then the time-complexity of Insertion-Sort, in the case of a
reverse-sorted list, is quadratic in n.
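The operation counts derived above can be checked by instrumenting Insertion Sort; the counting convention below (4 fixed assignments per insertion, plus one assignment per shift, plus one comparison per test m < A[i]) follows the discussion above:

```python
def insertion_sort_counts(a):
    """Insertion Sort that counts key comparisons and assignments,
    following the text's convention: each insertion costs 4 fixed
    assignments (j, i, m and the final A[i+1] <- m), plus one
    assignment per shift, plus one comparison per test m < A[i]."""
    a = list(a)
    comparisons = assignments = 0
    for j in range(1, len(a)):
        i, m = j - 1, a[j]
        assignments += 4
        while i >= 0:
            comparisons += 1                # the test m < A[i]
            if m < a[i]:
                a[i + 1] = a[i]             # shift one position to the right
                assignments += 1
                i -= 1
            else:
                break
        a[i + 1] = m
    return a, comparisons, assignments
```

For an already sorted list of n elements this gives (n - 1) comparisons and 4(n - 1) assignments, while for the reverse-sorted list {9, 7, 5, 3, 1} it gives the 10 comparisons and 26 assignments computed above.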
When actually implemented for n = 5000, the Insertion-Sort algorithm took about 1000
times more time to sort TR[1..n], the array sorted in reverse order, than to sort
TS[1..n], the array already sorted in the required order.
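The observation above can be reproduced, on a smaller scale, with a simple timing experiment; the value n = 2000 is chosen arbitrarily for this sketch:

```python
import time

def insertion_sort(a):
    a = list(a)
    for j in range(1, len(a)):
        m, i = a[j], j - 1
        while i >= 0 and a[i] > m:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = m
    return a

n = 2000
ts = list(range(n))                   # already sorted: linear behaviour
tr = list(range(n, 0, -1))            # reverse sorted: quadratic behaviour

t0 = time.perf_counter(); insertion_sort(ts); t1 = time.perf_counter()
t2 = time.perf_counter(); insertion_sort(tr); t3 = time.perf_counter()
print(f"sorted input: {t1 - t0:.6f}s  reverse input: {t3 - t2:.6f}s")
```

The reverse-sorted run is dramatically slower than the already-sorted one, in line with the quadratic versus linear analysis.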
3.4.1

We discuss worst-case analysis and best-case analysis in this section, and
average-case analysis is discussed later, in Section 3.8.
3.4.2
Worst-Case Analysis

Worst-case analysis of an algorithm for a given problem involves finding the longest
of all the times that can (theoretically) be taken by the various instances of a given
size, say n, of the problem. In the case of Insertion Sort discussed above:

(i) the inputs of size n for which the algorithm runs slowest are the lists of n
elements sorted in reverse order; and
(ii) if the size of such a list is n, then the longest time should correspond to
(1/2)(n² - n) comparisons and 4(n - 1) + (1/2)(n² - n) assignments.
3.4.3
Best-Case Analysis
Best-Case analysis of an algorithm for a given problem involves finding the
shortest of all the times that can (theoretically) be taken by the various instances of a
given size, say n, of the problem. In other words, the best-case analysis is concerned
with

(i) finding the instances (types) for which the algorithm runs fastest, and then
(ii) finding the running time of the algorithm for such instances (types).
If C-best (n) denotes the best-case complexity for instances of size n, then by
definition, it is guaranteed that no instance of size n, of the problem, shall take less
time than
C-best (n).
The best-case analysis is not as important as that of the worst-case analysis.
However, the best-case analysis may be a useful guide for application to situations
which need not necessarily correspond to the instance types taking the shortest time, yet
which are close to such instance types. For example, a telephone directory of a
metropolitan city like Delhi contains millions of entries, already properly sorted.
Each month, if a few thousand new entries are to be made, and if these entries are put
in the beginning of the directory, then, without further processing, the
directory is a nearly sorted list. The Insertion-Sort algorithm, which makes only
(n - 1) comparisons and no shifts for an already properly sorted list of n elements sorted in
the required order, may be usefully applied to this slightly out-of-order new list
obtained after the addition of a few thousand entries in the beginning of the earlier sorted
list of millions of entries.
In general, it can be shown that

(i) for an already sorted list of n elements, sorted in the required order, the
Insertion-Sort algorithm will make (n - 1) comparisons and 4(n - 1) assignments; and
(ii) these are the minimum numbers of operations that Insertion Sort performs on any
list of n elements.

Thus the best-case time complexity of the Insertion Sort algorithm is a linear polynomial in the
size of the problem instance.
3.5.1
Sequencing
Let F1 and F2 be two program fragments, with t1 and t2 respectively the time required
for executing F1 and F2. Let the program fragment F1 ; F2 be obtained by sequencing
the two given program fragments, i.e, by writing F1 followed by F2.
Then sequencing rule states that the time required for the program fragment
F1 ; F2 is t1 + t2.
Word of Caution: The sequencing rule, mentioned above, is valid only under the
assumption that no instruction in fragment F2 depends on any instruction in Fragment
F1. Otherwise, instead of t1 + t2, the time required for executing the fragment F1 ; F2
may be some more complex function of t1 and t2, depending upon the type of
dependency of instruction(s) of F2 on instructions of F1. Next, we consider the various
iterative or looping structures, starting with For loops.
3.5.2
For Construct
In order to understand better the ideas involved, let us first consider the following two
simple examples involving for construct.
Example 3.5.2.1: The following program fragment may be used for computing sum
of
first n natural numbers:
for i = 1 to n do
sum = sum + i.
The example above shows that the instruction sum = sum + i depends upon the loop
variable i. Thus, if we write
P(i) : sum = sum + i
then the above-mentioned for loop may be rewritten as
for i= 1 to n do
P (i),
end {for}
where i in P(i) indicates that the program fragment P depends on the loop variable i.
Example 3.5.2.2: The following program fragment may be used to find the sum of n
numbers, each of which is to be supplied by the user:
for i = 1 to n do
read (x);
sum = sum + x;
end {for}.
In the latter example, the program fragment P, consisting of two instructions viz.,
read (x) and sum = sum + x, do not involve the loop variable i. But still, there is
nothing wrong if we write P as P(i). This is in view of the fact that a function f of a
variable x, given by
f(x) = x² may also be considered as a function of the two variables x and y, because

f(x, y) = x² + 0 · y.
Remark 3.5.2.3:
The case when n = 0 in the loop for i = 1 to n do P (i) would not be treated as an error.
The case n = 0 shall be interpreted that P(i) is not executed even once.
Let us now calculate the time required for executing the loop
for i = 1 to n do
P(i).
end for
For this purpose, we use the expanded definition considered under the Remark and, in
addition, the following notations:

a : the time for one assignment to the loop variable i,
c : the time for one comparison, i.e., for one test "i ≤ n",
s : the time for one addition, i.e., for incrementing i by 1,
t : the time for one execution of the program fragment P(i).

Executing the loop involves one assignment for i ← 1 and n assignments of the
incremented values of i, i.e., (n + 1) assignments in all; (n + 1) tests "i ≤ n" (the test
succeeds n times and fails once); n additions for incrementing i; and n executions of
P(i). Hence, if fl denotes the time for executing the loop, then

fl = (n + 1) a + (n + 1) c + n s + n t.
We may note that

(i) the quantity on the R.H.S. is bounded below by n t, i.e., by n times the time of execution of
P(i); and
(ii) if t, the time for executing P(i), is much larger than each of a, c and s, then fl is
approximately n t.
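The count fl = (n + 1)a + (n + 1)c + n s + n t can be verified by executing the loop while counting each kind of operation:

```python
def loop_counts(n):
    """Execute "for i = 1 to n do P(i)" while counting: assignments to i
    (a), tests i <= n (c), increments (s) and executions of P(i) (t)."""
    a = c = s = t = 0
    i = 1; a += 1                     # i <- 1
    while True:
        c += 1                        # one test "i <= n"
        if not (i <= n):
            break
        t += 1                        # one execution of P(i)
        i = i + 1                     # increment ...
        s += 1; a += 1                # ... one addition, one assignment
    return a, c, s, t
```

loop_counts(n) returns (n + 1, n + 1, n, n), in agreement with the formula; for n = 0 the body P(i) is, as noted in the Remark, not executed even once.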
3.5.3
While and Repeat Constructs

From the point of view of time complexity, the analysis of while or repeat loops is
more difficult than that of for loops. This is in view of the fact that, in the
case of while or repeat loops, we do not know in advance how many times the loop
is going to be executed; in the case of for loops, however, we can easily know in advance
the number of times the loop is going to be executed.
One of the frequently used techniques for analysing while/repeat loops is to define a
function, say f, of the involved loop variables, in such a way that

(i) the value of f decreases with each successive execution of the body of the loop;
(ii) the value of f remains non-negative; and
(iii) the value of f reaches some minimum non-negative value, when the loop is to
terminate, of course, only if the loop under consideration is a terminating loop.
Once, such a function f, if it exists, is found, the analysis of the while/repeat loop gets
simplified and can be accomplished just by close examination of the sequence of
successive values of f.
We illustrate the techniques for computing the time complexity of a while loop
through an example given below. The repeat loop analysis can be handled on the
similar lines.
Example 3.5.3.1:
Let us analyze the following Bin-Search algorithm that finds the location of a value v
in an already sorted array A[1..n], where it is given that v occurs in A[ 1.. n ]
The Bin-Search algorithm, as defined below, is, in its rough version, intuitively applied by us in finding
the meaning of a given word in a dictionary, or in finding the telephone number of a person from the
telephone directory when the name of the person is given. In the case of dictionary search, if the word
to be searched is, say, CARTOON, then, in view of the fact that the word starts with the letter C, which is near
the beginning of the sequence of the letters in the alphabet, we open the dictionary near
its beginning. However, if we are looking for the meaning of the word REDUNDANT, then, as R is the 18th
letter out of the 26 letters of the English alphabet, we generally open the dictionary after its
middle.
However, in the case of search for a known value v in a given sorted array, the values
of the array are not known to us. Hence we do not know the relative position of v. This is
why we find the value at the middle position, ⌊(1 + n)/2⌋, of the given sorted array
A[1..n], and then compare v with the value A[⌊(1 + n)/2⌋]. Three cases arise:

(i) If v = A[⌊(1 + n)/2⌋], then the search is successful and we stop.
(ii) If v < A[⌊(1 + n)/2⌋], then we search only the part A[1..(⌊(1 + n)/2⌋ - 1)] of the given
array.
(iii) If v > A[⌊(1 + n)/2⌋], then we search only the part A[(⌊(1 + n)/2⌋ + 1)..n] of the array.

And we repeat the process. The explanation for searching v in a sorted array is
formalized below as the function Bin-Search.
Function Bin-Search (A[1..n], v)
begin
i ← 1; j ← n;
while i < j do
  {i.e., while A[i] ≤ v ≤ A[j]}
  begin
    k ← ⌊(i + j)/2⌋;
    case
      v < A[k]: j ← k - 1;
      v = A[k]: return k;
      v > A[k]: i ← k + 1
    end case
  end
end while;
return i
{at this stage i = j and, as v is known to occur in A[1..n], A[i] = v}
end function;
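The function Bin-Search may be rendered in Python as follows (using 0-based list indexing internally, while reporting 1-based positions as in the text):

```python
def bin_search(a, v):
    """Bin-Search on a sorted list a; returns the 1-based position of v,
    assuming, as in the text, that v occurs in a."""
    i, j = 1, len(a)
    while i < j:
        k = (i + j) // 2              # k <- [(i + j) / 2]
        if v < a[k - 1]:
            j = k - 1
        elif v == a[k - 1]:
            return k
        else:
            i = k + 1
    return i                          # here i = j and A[i] = v
```

On the 12-element array of the example below (with the first two values taken as 1 and 4 purely for illustration), searching for 11 returns position 5.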
Example: Let us find the location of the value v = 11 in a sorted array A[1..12] whose
3rd to 12th values are

7 9 11 15 18 21 23 24 27 30

(so that A[3] = 7, A[4] = 9, A[5] = 11 and A[6] = 15, the first two values being smaller than 7).

To begin with, i = 1 and j = 12; hence

k = ⌊(1 + 12)/2⌋ = 6, and A[6] = 15.

As v = 11 < 15 = A[6], therefore, for the next iteration,

j = 6 - 1 = 5, i = 1 (unchanged); hence k = ⌊(1 + 5)/2⌋ = 3, and A[3] = 7.

As v = 11 > 7 = A[3], for the next iteration i becomes (k + 1) = 4, and j remains
unchanged at 5. Therefore the new value of k = ⌊(4 + 5)/2⌋ = 4, and A[4] = 9.

As v = 11 > 9 = A[4], i becomes k + 1 = 5, while j remains at 5. Now i = j = 5, the
while-loop terminates, and the function returns i = 5; indeed, A[5] = 11 = v.
To analyse the while loop of Bin-Search, let f denote the number (j - i + 1) of elements
of the array currently under consideration; initially, f = n. If, in some iteration, v < A[k],
then the new value of f satisfies

fnew = (k - 1) - iold + 1 = ⌊(iold + jold)/2⌋ - iold ≤ (jold - iold + 1)/2 = fold/2.

Similarly, if v > A[k], then again fnew ≤ fold/2. Thus, after t iterations, the number of
elements under consideration is at most n/2^t, and the loop must terminate by the time this
number is reduced to 1, i.e., after at most t iterations, where n = 2^t, or t = log₂ n. Hence
Bin-Search requires O(log₂ n) time.
Example 3.6.1: The following function computes n! recursively.

Function factorial (n)
begin
if n = 0 then return 1
else return (n * factorial (n - 1))
end factorial
Analysis of the above recursive algorithm
We take n, the input, as the size of an instance of the problem of computing factorial
of n. From the above (recursive) algorithm, it is clear that multiplication is its basic
operation. Let M(n) denote the number of multiplications required by the
algorithm in computing factorial (n). The algorithm uses the formula
factorial (n) = n * factorial (n - 1) for n > 0.

Therefore, the algorithm uses one more multiplication for computing factorial (n) than
for computing factorial (n - 1). Therefore,

M(n) = M(n - 1) + 1 for n > 0. (3.6.1)

Also, for computing factorial (0), no multiplication is required, as we are given
factorial (0) = 1. Hence

M(0) = 0. (3.6.2)
The equation (3.6.1) does not define M(n) explicitly, but defines it implicitly through
M(n - 1). Such an equation is called a recurrence relation/equation. The equation
(3.6.2) is called an initial condition. The equations (3.6.1) and (3.6.2) together form
a system of recurrences.
By solution of a system of recurrences, we mean an explicit formula, say for M(n) in
this case, free from recurrences in terms of n only, and not involving the function to
be defined, i.e., M in this case, directly or indirectly.
We shall discuss in the next section, in some detail, how to solve system of
recurrences. Briefly, we illustrate a method of solving such systems, called Method of
Backward Substitution, through solving the above system of recurrences viz.
M(n) = M(n - 1) + 1 (3.6.1)
and
M(0) = 0. (3.6.2)

By backward substitution, we get

M(n) = M(n - 1) + 1
     = [M(n - 2) + 1] + 1 = M(n - 2) + 2
     = [M(n - 3) + 1] + 2 = M(n - 3) + 3
     ⋮
     = M(n - i) + i
     ⋮
     = M(n - n) + n = M(0) + n = 0 + n,

i.e.,

M(n) = n. (3.6.3)
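The result M(n) = n can be confirmed by counting the multiplications actually performed; the counter argument is an implementation device of this sketch:

```python
def factorial(n, counter):
    """Recursive factorial; counter[0] accumulates the number of
    multiplications performed (a device of this sketch)."""
    if n == 0:
        return 1
    counter[0] += 1                   # one multiplication per recursive step
    return n * factorial(n - 1, counter)
```

For n = 5, the function returns 120 after exactly 5 multiplications, matching M(5) = 5.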
3.7.1
Method of Forward Substitution

Consider, for example, the system of recurrences

F(n) = 2 F(n - 1) + 1 for n > 1 (3.7.1)
F(1) = 1. (3.7.2)

The first few terms of the sequence ⟨F(n)⟩ are as given below:

f(1) = 1
f(2) = 2 · f(1) + 1 = 2 · 1 + 1 = 3
f(3) = 2 · f(2) + 1 = 2 · 3 + 1 = 7
f(4) = 2 · f(3) + 1 = 2 · 7 + 1 = 15

These values suggest that F(n) = 2^n - 1 for n = 1, 2, 3, 4. We attempt to establish, by
induction, that the formula holds for all n.

(i) For n = 1, F(1) = 2^1 - 1 = 1, which is given.

(ii) Assume that, for some k ≥ 1, F(k) = 2^k - 1.

(iii) For showing F(k + 1) = 2^(k+1) - 1, consider, by definition,

F(k + 1) = 2 F(k + 1 - 1) + 1
         = 2 F(k) + 1
         = 2 (2^k - 1) + 1 (by Step (ii))
         = 2^(k+1) - 2 + 1
         = 2^(k+1) - 1.

Therefore, by the Principle of Mathematical Induction, our feeling that F(n) = 2^n - 1 for
all n ≥ 1 is mathematically correct.
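The closed form F(n) = 2^n - 1 can also be checked directly against the recurrence:

```python
def F(n):
    """F(1) = 1; F(n) = 2 F(n - 1) + 1 for n > 1."""
    return 1 if n == 1 else 2 * F(n - 1) + 1
```

The first few values 1, 3, 7, 15 agree with the table above, and F(n) equals 2^n - 1 for every n checked.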
3.7.2
Solving Linear Second-Order Recurrences with Constant Coefficients

Recurrences of the form

a F(n) + b F(n - 1) + c F(n - 2) = g(n), (3.7.2.1)

where a, b, c are real numbers, a ≠ 0, are called linear second-order recurrences with
constant coefficients. Further, if g(n) = 0, then the recurrence is called homogeneous;
otherwise, it is called inhomogeneous. Such systems of recurrences can be solved
neither by the backward substitution method nor by the forward substitution method.

In order to solve recurrences of the form (3.7.2.1),
first we consider only the homogeneous case, i.e., when g(n) = 0. The recurrence
becomes

a F(n) + b F(n - 1) + c F(n - 2) = 0. (3.7.2.2)

The above equation has infinitely many solutions, except in the case when both b = 0
and c = 0.

With equation (3.7.2.2) we associate an auxiliary quadratic equation, called the
characteristic equation, given by

a x² + b x + c = 0. (3.7.2.3)

Then the solutions of the recurrence (3.7.2.2) are given by the following theorem,
which we state without proof.
Theorem 3.7.2.1:

Case I: If the roots x₁ and x₂ of the characteristic equation (3.7.2.3) are real and
distinct, then the solutions of (3.7.2.2) are given by

F(n) = c x₁^n + d x₂^n, (3.7.2.4)

where c and d are arbitrary real constants.

Case II: If the roots x₁ and x₂ are real but x₁ = x₂, then the solutions of (3.7.2.2) are
given by

F(n) = c x₁^n + d n x₁^n, (3.7.2.5)

where, again, c and d are arbitrary real constants.

Case III: If x₁ and x₂ are complex conjugates given by u ± iv, where u and v are real
numbers, then the solutions of (3.7.2.2) are given by

F(n) = r^n (c cos nθ + d sin nθ), (3.7.2.6)

where r = √(u² + v²), θ = arctan(v/u), and c and d are arbitrary real constants.
Example 3.7.2.2: Consider the homogeneous recurrence

F(n) - 4 F(n - 1) + 4 F(n - 2) = 0. (3.7.2.7)

Its characteristic equation is x² - 4x + 4 = 0, i.e., (x - 2)² = 0, which has the equal real
roots x₁ = x₂ = 2. Hence, by Case II of Theorem 3.7.2.1, its solutions are given by
F(n) = c 2^n + d n 2^n.

Next, the general solution of the inhomogeneous recurrence

a F(n) + b F(n - 1) + c F(n - 2) = g(n) (3.7.2.8)

can be obtained as the sum of the general solution of the corresponding homogeneous equation
a F(n) + b F(n - 1) + c F(n - 2) = 0
and a particular solution of (3.7.2.8).

The method of finding a particular solution of (3.7.2.8), and then a general solution of
(3.7.2.8), is explained through the following example.
Example 3.7.2.4: Consider the recurrence

F(n) - 4 F(n - 1) + 4 F(n - 2) = 3.

If F(n) = c is a particular solution of the recurrence, then replacing F(n), F(n - 1) and
F(n - 2) by c in the recurrence given above, we get

c - 4c + 4c = 3, i.e., c = 3.

Also, the general solution of the corresponding homogeneous recurrence, viz.

F(n) - 4 F(n - 1) + 4 F(n - 2) = 0,

is (as obtained in Example 3.7.2.2) c₁ 2^n + d n 2^n. Hence the general solution of the
given inhomogeneous recurrence is

F(n) = c₁ 2^n + d n 2^n + 3,

where c₁ and d are arbitrary real constants.
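That F(n) = c · 2^n + d · n · 2^n + 3 satisfies the recurrence for arbitrary constants c and d can be checked numerically:

```python
def general_solution(n, c, d):
    """General solution of F(n) - 4 F(n-1) + 4 F(n-2) = 3, viz.
    c * 2**n + d * n * 2**n (homogeneous part) + 3 (particular part)."""
    return c * 2 ** n + d * n * 2 ** n + 3
```

Substituting this expression into the left-hand side of the recurrence yields 3 for every n and every choice of c and d tried.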
3.8.1
Average-Case Analysis
In Section 3.4, we mentioned that efficiency of an algorithm may not be the same for
all inputs, even for the same problem-size n. In this context, we discussed Best-Case
analysis and Worst-Case analysis of an algorithm.
However, these two analyses may not give any idea about the behaviour of an
algorithm on a typical or random input. In this respect, average-case analysis is more
informative than its two just-mentioned counter-parts, particularly, when the
algorithm is to be used frequently and on varied types of inputs. In order to get really
useful information from the average-case analysis of an algorithm, we must explicitly
mention the properties of the set of inputs, obtained either through empirical evidence
or on the basis of theoretical analysis. We explain the ideas through the analysis of the
following algorithm that, for a given element K and an array A[1..n], returns the
index i in A[1..n] if A[i] = K, and otherwise returns 0.
Algorithm Sequential_Search (A[1..n], K)
begin
    i ← 1
    while (i ≤ n and A[i] ≠ K) do
        i ← i + 1
    if i ≤ n
        return i
    else
        return 0
end;
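A direct C++ transcription of the algorithm (0-based array, with the 1-based answer of the text; the function name is ours) might look like this:

```cpp
#include <cassert>

// Sequential_Search: return the 1-based position of the first occurrence
// of K in A[0..n-1], or 0 if K does not occur at all.
int sequential_search(const int A[], int n, int K) {
    int i = 0;
    while (i < n && A[i] != K)   // one key comparison per iteration
        ++i;
    return (i < n) ? i + 1 : 0;
}
```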
Some of the assumptions, that may be made, in the case of the above algorithm, for the
purpose of average-case analysis, are
(i) the probability p of a successful search is known, with 0 ≤ p ≤ 1, and
(ii) in the case of a successful search, the probability that K first occurs in the ith
position of A[1..n] is the same (viz., p/n) for all i = 1, 2, …, n.
With these two assumptions, we make average-case analysis of the Algorithm
Sequential Search given above as follows:
C_avg(n) = [1 · (p/n) + 2 · (p/n) + … + i · (p/n) + … + n · (p/n)] + n · (1 − p),

where the term i · (p/n) is the contribution of the i comparisons that have been made
when the while-loop is executed i times, i being the least index with A[i] = K, after
which the while-loop terminates.

Also, the last term n · (1 − p) is the contribution of the case in which the
while-loop is executed n times, after which it is found that A[i] ≠ K for
i = 1, 2, …, n.

Simplifying the R.H.S. of the above equation, we get

C_avg(n) = (p/n) [1 + 2 + … + i + … + n] + n (1 − p)
         = (p/n) · (n (n + 1) / 2) + n (1 − p)
         = p (n + 1) / 2 + n (1 − p).
As can be seen from the above discussion, the average-case analysis is more difficult
than the best-case and worst-case analyses.
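The closed form just derived can be checked mechanically. The sketch below (illustrative C++; the names are ours, not the text's) computes the expectation directly from the probability model and compares it with p(n + 1)/2 + n(1 − p):

```cpp
#include <cassert>
#include <cmath>

// Expected comparisons of sequential search, computed directly from the model:
// K first occurs at position i with probability p/n (costing i comparisons),
// and is absent with probability 1-p (costing n comparisons).
double cavg_direct(int n, double p) {
    double e = 0.0;
    for (int i = 1; i <= n; ++i)
        e += i * (p / n);
    return e + n * (1.0 - p);
}

// The closed form derived in the text.
double cavg_closed(int n, double p) {
    return p * (n + 1) / 2.0 + n * (1.0 - p);
}
```

For p = 1 (search always successful) both give (n + 1)/2, the familiar "half the array on average".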
Through the above example we have obtained an idea of how we may proceed to find
average-case complexity of an algorithm. Next we outline the process of finding
average-case complexity of any algorithm, as follows:
(i) First, categorize all possible input instances into classes in such a way that
inputs in the same class require, or are expected to require, the execution of the
same number of the basic operation(s) of the algorithm.
(ii) Next, the probability distribution of the inputs over the different classes is
obtained empirically or is assumed on some theoretical grounds.
3.8.2 Amortized Analysis

(i) We have seen that the worst-case and best-case analyses of an algorithm may not
give a good idea about the behaviour of the algorithm on a typical or random input.
(ii)
Another important fact that needs our attention is the fact that most of the operations,
including the most time-consuming operations, on a data structure (used for solving a
problem) do not occur in isolation, but different operations, with different time
complexities, occur as a part of a sequence of operations. Occurrences of a
particular operation in a sequence of operations are dependent on the occurrences of
other operations in the sequence. As a consequence, it may happen that the most
time-consuming operation occurs only rarely, or only rarely consumes its
theoretically determined maximum time. We will support our claim later through
an example. But, we continue with our argument in support of the need for another
type of analysis, viz., amortized analysis, for better evaluation of the behaviour of an
algorithm. However, this fact of dependence of both the occurrences and complexity
of an operation on the occurrences of other operations, is not taken into consideration
in the earlier mentioned analyses. As a consequence, the complexity of an algorithm
is generally over-evaluated.
Next, we give an example in support of our claim that the frequencies of occurrences
of operations and their complexities, are generally interdependent and hence the
impact of the most time-consuming operation may not be as bad as it is assumed or
appears to be.
Example 3.8.2.1:
We define a new data structure say MSTACK, which like the data structure STACK
has the usual operations of PUSH and POP. In addition there is an operation
MPOP(S, k), where S is a given stack and k is a non-negative integer. Then
MPOP(S, k) removes top k elements of the stack S, if S has at least k elements in the
stack. Otherwise it removes all the elements of S. MPOP (S, k) may be formally
defined as
Procedure MPOP (S, k);
begin
    while (not Empty(S) and k > 0) do
        POP (S);
        k ← k − 1
    endwhile;
end MPOP.
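A minimal sketch of MSTACK on top of std::stack (illustrative only; the return value records the actual cost of the call, i.e., the number of elements really popped):

```cpp
#include <cassert>
#include <stack>

// MPOP(S, k): pop at most k elements; return the number actually popped,
// which is the actual cost of the operation.
int mpop(std::stack<int>& S, int k) {
    int popped = 0;
    while (!S.empty() && k > 0) {   // as in Procedure MPOP
        S.pop();
        --k;
        ++popped;
    }
    return popped;
}
```

Note that the cost of any MPOP is bounded by the current stack size, which is the observation the amortized argument below rests on.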
For example, if at some stage the stack S has the elements 35, 40, 27, 18, 11
(with 35 at the TOP and 11 at the BOTTOM), then after MPOP(S, 4) only the element 11
remains in S (now both TOP and BOTTOM).
(i) If m is the number of elements currently on the stack S, then

Cost (MPOP(S, k)) = k, if k ≤ m, and m otherwise.

(ii)
If we start with an empty stack S, then at any stage, the number of elements
that can be POPed off the stack either through a POP or MPOP, can not
exceed the total number of preceding PUSHes.
(ii a)
If we start with an empty stack S, then, at any stage, the number of elements
that can be popped off the stack through all the POPs and MPOPs can not
exceed the number of all the earlier PUSHes.
For example, if Si denotes the ith PUSH and Mj denotes the jth POP/MPOP, and if we
have a sequence of PUSH/POP/MPOP operations, say,
S1 S2 S3 M1 S4 S5 M2 S6 S7 M3 S8 S9 S10 S11 M4
then, in view of (i) above,

Cost (M1) ≤ Cost (S1 S2 S3) = 3
Cost (M1) + Cost (M2) ≤ Cost (S1 S2 S3) + Cost (S4 S5) = 5
Cost (M1) + Cost (M2) + Cost (M3) ≤ Cost (S1 S2 S3) + Cost (S4 S5) + Cost (S6 S7) = 7

In general, if we have a sequence of PUSH/POP/MPOP operations, n in total, of the form

S11 S12 … S1i1 M1 S21 S22 … S2i2 M2 … St1 St2 … Stit Mt,

where each Mj is either a POP or an MPOP, then

Cost (M1) + Cost (M2) + … + Cost (Mt) ≤ total number of PUSHes ≤ n.        (3.8.2.1)
The above discussion motivates the concept and study of AMORTIZED ANALYSIS.
3.9 SUMMARY
In this unit, the emphasis is on the analysis of algorithms, though in the process, we
have also defined a number of algorithms. Analysis of an algorithm generally leads to
computational complexity/efficiency of the algorithm.
It is shown that general analysis of algorithms may not be satisfactory for all types of
situations and problems, specially, in view of the fact that the same algorithm may
have vastly differing complexities for different instances, though, of the same size.
This leads to the discussion of worst-case and best-case analyses in Section 3.4 and of
average-case analysis and amortized analysis in Section 3.8.
In, Section 3.2, we discuss two simple examples of algorithms for solving the same
problem, to illustrate some simple aspects of design and analysis of algorithms.
Specially, it is shown here, how a minor modification in an algorithm may lead to
major efficiency gain.
In Section 3.3, the following sorting algorithms are defined and illustrated with
suitable examples:
(i) Insertion Sort
(ii) Bubble Sort
(iii) Selection Sort
(iv) Shell Sort
(v) Heap Sort
(vi) Merge Sort
3.10 SOLUTIONS/ANSWERS
Ex. 1)
Iteration (i): For placing A[2] at its correct relative position w.r.t A[1]
in the finally sorted array, we need the following operations:
(i) m ← A[2] = 10 (one assignment)
(ii) m = 10 is compared with A[1] = 15 (one comparison)
(iii) as 10 < 15, 15 = A[1] is copied to A[2] (one assignment)
(iv) 10 = m is copied to A[1] (one assignment)
Iteration (ii): For the correct place for A[3] = 13 w.r.t. A[1] and A[2], the
following operations are performed:
(i) m ← A[3] = 13 (one assignment)
(ii) 13 = m is compared with A[2] = 15 (one comparison)
(iii) as 13 < 15, 15 = A[2] is copied to A[3] (one assignment)
(iv) 13 = m is compared with A[1] = 10 (one comparison); as 10 < 13, 13 = m is
copied to A[2] (one assignment)

Iteration (iii): For the correct place for A[4] = 9 w.r.t. A[1], A[2] and A[3],
9 = m is compared, one by one, with A[3] = 15, A[2] = 13 and A[1] = 10
(3 comparisons); each, being greater than 9, is copied to the location on its right
(3 assignments), and finally 9 = m is copied to A[1] (which, together with
m ← A[4], makes 5 assignments in all). After this iteration,
A[1] = 9,  A[2] = 10,  A[3] = 13,  A[4] = 15
Iteration IV: For the correct place for A[5] = 12, the following operations are
performed:
(i) m ← A[5] = 12 (one assignment)
In view of the earlier discussion, and in view of the fact that the number 12 (the
contents of A[5]) occurs between A[2] = 10 and A[3] = 13, the algorithm needs to
perform 3 comparisons (with 15, 13 and 10) and 2 copyings (of 15 and 13, to the
locations on their right), followed by the copying of 12 = m into A[3].
Iteration V: For the correct place for A[6] = 17, the following operations are
performed:
(i) 17 = A[6] is copied to m (one assignment)
(ii) m is compared with A[5] = 15 and, as A[5] = 15 < 17 = m, no more comparisons
and no copying of the elements A[1] to A[5] to the locations on their right are
needed.
(iii) (though this step appears to be redundant, the algorithm executes it)
17 = m is copied to A[6].
Thus, in all:
In the I iteration, 1 comparison and 3 assignments;
In the II iteration, 2 comparisons and 3 assignments;
In the III iteration, 3 comparisons and 5 assignments;
In the IV iteration, 3 comparisons and 4 assignments;
In the V iteration, 1 comparison and 2 assignments
were performed.
Iteration I: Starting from the list 15, 10, 13, 9, 12, 17, adjacent elements are
compared and, where necessary, exchanged, so that at the end of the iteration the
largest elements have moved towards the end of the list. In this iteration,
5 comparisons and 5 exchanges, i.e., 15 assignments, were performed. The subsequent
iterations similarly move the next-largest elements to their final positions.
There will be five iterations in all. In each of the five iterations at least,
the following operations are performed:
Next we explain the various iterations and for each iteration, count
the operations in addition to these 4 assignments.
Iteration 1: MAX 15
MAX_ POS 1
MAX is compared with successively 10, 13, 9, 12, and 17, one at a time
i.e, 5 comparisons are performed.
Iteration 4: The list now left to be sorted is 10, 9, 12. Again, 2 comparisons of
12 with 10 and 9 are performed. No additional assignments are made beyond the
normal 4.
Iteration 5: The last element 12, is dropped from further consideration.
The list to be sorted is 10, 9. MAX 10 and MAX_POS 1. One
comparison of MAX = 10 with 9. No additional assignment over and
above the normal 4.
Finally, the list to be sorted is: 9 and the process of sorting terminates.
The number of operations performed iteration-wise, is given below:
In Iteration I: 5 comparisons;
In Iteration II: 4 comparisons;
In Iteration III: 3 comparisons;
In Iteration IV: 2 comparisons;
In Iteration V: 1 comparison;
each along with the normal 4 assignments.
(A) t ← 1 (one assignment)
The comparison 6 = n < 2 + 3 + 1 is performed, which returns true; hence
s ← r − 1 = 2
Next, to calculate the position of A[t + 3s], one multiplication and one addition
are performed. Thus, for the selection of a particular sublist, 2 assignments, one
comparison, one subtraction, one addition and one multiplication are performed.
Thus, just for the selection of all the three sublists, the following operations are
performed:
6 assignments, 3 comparisons, 3 subtractions, 3 additions and 3 multiplications.
(iii) (a)
i ← 1
m ← A[2] = 9 (one assignment)
Next, the comparisons m = 9 < A[1] = 15 and i = 1 > 0 are performed
(two comparisons), both of which return true. Therefore,
15 = A[1] is copied to A[2] (one assignment)
and i ← 1 − 1 = 0 (one assignment).
Next, one comparison, viz. i > 0, is again performed, which is false; hence the
comparison 9 = m < A[0] is not performed.
Then 9 = m is copied to A[1] (one assignment).
(iii) (b)
To sort the list 15, 10, 13, 9, 12, 17 stored in A[1..6] using Heap Sort, first
build a heap for the list and then repeatedly delete the root and restore the heap.
Step I (building the heap):

(a) For j = 2:
location = 2 > 1 is tested, which is true (one comparison). Hence
parent ← ⌊location/2⌋ = 1 (one assignment)
and A[location] = 10 is compared with A[parent] = 15 (one comparison); as 10 < 15,
no exchange takes place.
(b) For j = 3: similarly, location = 3 > 1 is tested (one comparison),
parent ← 1 (one assignment), and A[3] = 13 is compared with A[1] = 15
(one comparison); as 13 < 15, no exchange takes place.

(c) For j = 4: location = 4 > 1 is tested (one comparison), parent ← 2
(one assignment), and A[4] = 9 is compared with A[2] = 10 (one comparison);
as 9 < 10, no exchange takes place.
(d) For j = 5:
The comparison location = 5 > 1 is performed, which is true (one comparison).
Therefore,
parent ← ⌊5/2⌋ = 2 (one assignment)
is performed. A[5] = 12 is compared with A[2] = 10 (one comparison); as 12 > 10,
A[5] and A[2] are exchanged (3 assignments), and location ← 2. Then
location = 2 > 1 is tested (one comparison), parent ← 1 is performed, and
A[2] = 12 is compared with A[1] = 15 (one comparison); as 12 < 15, no further
exchange takes place. The list at this stage is 15, 12, 13, 9, 10.
(e) For j = 6:
The comparison location = 6 > 1 is performed, which is true (one comparison).
Therefore,
parent ← ⌊6/2⌋ = 3 (one assignment)
is performed, and A[6] = 17 is compared with A[3] = 13 (one comparison); as 17 > 13,
A[6] and A[3] are exchanged (3 assignments). Next,
location ← 3 (one assignment)
and (location > 1) is performed (one comparison), and
parent ← ⌊3/2⌋ = 1 (one assignment)
is performed. Further,
A[location] = 17 < 15 = A[parent] is performed (one comparison),
which is false. Hence the following operations are further performed:
A[1] and A[3] are exchanged (3 assignments).
The heap finally obtained is 17, 12, 15, 9, 10, 13.
Step II: The following three sub-steps are repeated 5 times:
(i) exchange the root with the last node still under consideration;
(ii) drop the last node from further consideration;
(iii) restore the heap property for the remaining nodes.
The sub-steps (i) and (ii) are performed 5 times each, which contribute
10 assignments.
Iteration (i): After the first two sub-steps (17, the maximum, is removed and the
value of the last node is copied into the root), the heap becomes the tree with
root 13 and remaining values 12, 15, 9, 10.
The root node is compared, one by one, with the values of its children
(2 comparisons).
The variable MAX stores 15 and MAX_POS stores the index of the right child
(two assignments).
The root value and the value at MAX_POS are exchanged (3 assignments), restoring the
heap 15, 12, 13, 9, 10.

Iteration (ii): 15 is removed, the last value 10 is copied into the root, and the
heap is restored as 13, 12, 10, 9.
Iteration (iii): Again, 13 of the root node is removed and 9 of the last node is
copied into the root, giving the tree with values 9, 12, 10, which is restored as
the heap 12, 9, 10.

Iteration (iv): 12 is removed and the value 10 in the last node is copied into the
root, giving the list 10, 9; 10 is compared with 9 (one comparison) and no
assignments are needed.
The sequence 15, 10, 13, 9, 12, 17 is thus finally sorted as
9, 10, 12, 13, 15, 17.
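For completeness, the whole trace above corresponds to the following compact heap sort (a C++ sketch in the same spirit: sift each new element up to build the heap, then repeatedly delete the root and restore the heap by sifting down; all names are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Restore the max-heap property for the prefix A[0..n-1], starting at root.
void sift_down(std::vector<int>& A, int root, int n) {
    while (2 * root + 1 < n) {                 // while root has a child
        int child = 2 * root + 1;
        if (child + 1 < n && A[child + 1] > A[child]) ++child;
        if (A[root] >= A[child]) break;
        std::swap(A[root], A[child]);
        root = child;
    }
}

void heap_sort(std::vector<int>& A) {
    int n = (int)A.size();
    // Step I: build the heap, sifting each element j up towards its parent.
    for (int j = 1; j < n; ++j)
        for (int loc = j; loc > 0 && A[loc] > A[(loc - 1) / 2]; loc = (loc - 1) / 2)
            std::swap(A[loc], A[(loc - 1) / 2]);
    // Step II: move the maximum to the end, shrink, restore the heap.
    for (int last = n - 1; last > 0; --last) {
        std::swap(A[0], A[last]);
        sift_down(A, 0, last);
    }
}
```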
3.11 FURTHER READINGS

1. Discrete Mathematics and Its Applications, K.N. Rosen (Fifth Edition), Tata
McGraw-Hill (2003).
Divide-And-Conquer
UNIT 1 DIVIDE-AND-CONQUER
Structure                                        Page Nos.
1.0   Introduction                                      5
1.1   Objectives                                        5
1.2   General Issues in Divide-and-Conquer              6
1.3   Integer Multiplication                            8
1.4   Binary Search                                    12
1.5   Sorting                                          13
1.6   Randomization Quicksort                          17
1.7   Finding the Median                               19
1.8   Matrix Multiplication                            22
1.9   Exponentiation                                   23
1.10  Summary                                          25
1.11  Solutions/Answers                                25
1.12  Further Readings                                 28
1.0 INTRODUCTION
We have already mentioned that solving (a general) problem, with or without
computers, is quite a complex and difficult task. We also mentioned that a large
number of problems, which we may encounter even in a formal discipline like
Mathematics, may not have any algorithmic/computer solutions. Out of the problems which
theoretically can be solved algorithmically, designing a solution for such a problem is,
in general, quite difficult. In view of this difficulty, a number of standard techniques,
which are found to be helpful in solving problems, have become popular in computer
science. Out of these techniques Divide-and-Conquer is probably the most wellknown one.
The general plan for Divide-and-Conquer technique has the following three major
steps:
Step 1: Split the given instance of the problem into smaller sub-instances of the
same problem.
Step 2: Solve each of the sub-instances (recursively, if necessary).
Step 3: Combine the solutions of the sub-instances into a solution of the original
instance.
1.1 OBJECTIVES
After going through this Unit, you should be able to:
explain the essential idea behind the Divide-and-Conquer strategy for solving
problems with the help of a computer, and
use Divide-and-Conquer strategy for solving problems.
Design Techniques-I
Suppose our original algorithm alpha is used to carry out Step 2, viz., to solve the
sub-instances. If alpha takes time T(alpha)(n) = cn², and the new algorithm beta
solves an instance of size n by solving three sub-instances of size n/2 with alpha
and splitting/combining in time dn, then

T(alpha)(n) = cn²

and

T(beta)(n) = 3 T(alpha)(n/2) + dn = (3/4)(cn²) + dn.

So if dn < (cn²)/4 (i.e., 4d/c < n), then beta is faster than alpha.
In particular, for all large enough n (viz., for n > 4d/c, a constant), beta is
faster than alpha.
The algorithm beta improves upon the algorithm alpha by just a constant factor. But
if the problem size n is large enough such that for some i > 1, we have
n > 4d/c,  and also  n/2 > 4d/c,  and even  n/2^i > 4d/c,

which suggests that using beta instead of alpha for Step 2 repeatedly, until the
sub-sub-…-sub-instances are of size n0 ≤ 4d/c, will yield a still faster algorithm.
So consider the following new algorithm for instances of size n:

Procedure gamma (n : problem size);
begin
    if n ≤ n0 then
        solve the instance directly using alpha
    else
        begin
            split the instance into three sub-instances of size n/2;
            solve each sub-instance by gamma;
            combine the sub-solutions in time dn
        end
end;

Its running time satisfies

T(gamma)(n) = cn², if n ≤ n0,
T(gamma)(n) = 3 T(gamma)(n/2) + dn, otherwise.

We shall show how relations of this form can be estimated; later in the course, with
these methods, it can be shown that

T(gamma)(n) = O(n^(log₂ 3)) = O(n^1.59).

This is a significant improvement upon algorithms alpha and beta, in view of the
fact that, as n becomes larger, the difference between the values of n^1.59 and n²
becomes larger and larger.
The improvement that results from applying algorithm gamma is due to the fact that it
maximizes the savings achieved through beta. The (relatively) inefficient method
alpha is applied only to small problem sizes.
The precise form of a divide-and-conquer algorithm is characterised by:
(i) the threshold input size, n0, below which the problem size is not further
sub-divided;
(ii) the size of the sub-instances into which an instance is split;
(iii) the number of such sub-instances;
(iv) the method to be used for combining the sub-solutions.

In (ii), it is more usual to consider the ratio of the initial problem size to the
sub-instance size. In our example, the ratio was 2. The threshold in (i) is
sometimes called the (recursive) base value. In summary, the generic form of a
divide-and-conquer algorithm is:
Procedure D-and-C (n : input size);
begin
    read (n0);    { read the threshold value }
    if n ≤ n0 then
        solve the problem without further sub-division
    else
        begin
            split into r sub-instances, each of size n/k;
            for each of the r sub-instances do
                D-and-C (n/k);
            combine the resulting sub-solutions to produce the solution to the
            original problem
        end
end D-and-C;
Such algorithms are naturally and easily realised as recursive procedures in (suitable)
high-level programming languages.
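As an illustration of the generic form, here is a small recursive realisation for a toy problem, the maximum of A[lo..hi] (all names are ours, not from the text): the threshold n0 is 1, each instance splits into r = 2 sub-instances of half the size, and the combine step is a single comparison.

```cpp
#include <cassert>

// Divide-and-conquer maximum of A[lo..hi].
int dc_max(const int A[], int lo, int hi) {
    if (lo == hi)                  // n <= n0: solve without sub-division
        return A[lo];
    int mid = (lo + hi) / 2;       // split into two sub-instances
    int left = dc_max(A, lo, mid);
    int right = dc_max(A, mid + 1, hi);
    return (left > right) ? left : right;   // combine the sub-solutions
}
```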
Let x and y be two n-digit decimal numbers with digit sequences

x = x_(n−1) x_(n−2) … x_1 x_0  and  y = y_(n−1) y_(n−2) … y_1 y_0,

and let their product be z = x * y, with digit sequence
z = z_(2n−1) z_(2n−2) … z_1 z_0.
Note: The algorithm given below works for any number base, e.g., binary, decimal,
hexadecimal, etc. We use decimal simply for convenience.
The classical algorithm for multiplication requires O(n2) steps to multiply two n-digit
numbers.
A step is regarded as a single operation involving two single digit numbers, e.g.,
5+6, 3* 4, etc.
In 1962, A. A. Karatsuba discovered an asymptotically faster algorithm for
multiplying two numbers by using a divide-and-conquer approach.
The values of the numbers

x = x_(n−1) x_(n−2) … x_1 x_0  and  y = y_(n−1) y_(n−2) … y_1 y_0

are clearly given by

x = Σ_(i=0)^(n−1) x_i * 10^i  and  y = Σ_(i=0)^(n−1) y_i * 10^i,

and hence

z = Σ_(i=0)^(2n−1) z_i * 10^i
  = (Σ_(i=0)^(n−1) x_i * 10^i) * (Σ_(i=0)^(n−1) y_i * 10^i).
For example:

581 * 602 = 349762.
Let us denote

a = x_(n−1) … x_[n/2],   b = x_([n/2]−1) … x_1 x_0,
c = y_(n−1) … y_[n/2],   d = y_([n/2]−1) … y_1 y_0.

Then, if a, b, c and d are the numbers whose decimal representations are as above,

x = a * 10^[n/2] + b
y = c * 10^[n/2] + d
For example, if n = 4, x = 1026 and y = 7329 then a = 10, b = 26, c = 73 and d = 29,
and,
x
y
=
=
1026
7329
=
=
10 * 102 + 26
73 * 102 + 29
=
=
a * 102 + b
c * 102 + d
From this we also know that the result of multiplying x and y (i.e., z) is

z = x * y
  = (a * 10^[n/2] + b) * (c * 10^[n/2] + d)
  = (a * c) * 10^(2[n/2]) + (a * d + b * c) * 10^[n/2] + (b * d),

where

2[n/2] = n, if n is even, and n − 1, if n is odd.
Each of the terms (a* c), (a * d), ( b* c) and (b* d) is a product of two [n/2]-digit
numbers.
Thus the expression for the multiplication of x and y in terms of the numbers
a, b, c and d tells us that:
1. If n = 1, the product is obtained by a single multiplication of two one-digit
numbers.
2. If n > 1, then the product of two n-digit numbers can be expressed in terms of
4 products of two (roughly) [n/2]-digit numbers (Divide-and-Conquer stage)*.

* For a given n-digit number, whenever we divide the sequence of digits into two
subsequences, one of which has [n/2] digits, the other subsequence has
n − [n/2] digits, which is (n + 1)/2 if n is odd. However, for convenience, we call
both of them [n/2]-digit sequences/numbers.
3.
Given the four returned products, the calculation of the result of multiplying
x and y involves only additions (can be done in O(n) steps) and multiplications
by a power of 10 (also can be done in O(n) steps, since it only requires placing
the appropriate number of 0s at the end of the number). (Combine stage).
Further, with a, b, c and d as above, define

U = a * c
V = b * d
W = (a + b) * (c + d)

Then

W = a * c + a * d + b * c + b * d
  = U + a * d + b * c + V.

Therefore,

a * d + b * c = W − U − V.

Therefore,

z = x * y
  = (a * 10^[n/2] + b) * (c * 10^[n/2] + d)
  = (a * c) * 10^(2[n/2]) + (a * d + b * c) * 10^[n/2] + b * d
  = U * 10^(2[n/2]) + (W − U − V) * 10^[n/2] + V.
This algorithm is formalised through the following function:

function Karatsuba (x, y : n-digit integer; n : integer)
    a, b, c, d : [n/2]-digit integers;
    U, V, W : n-digit integers;
begin
    if n = 1 then
        return x_0 * y_0
    else
        begin
            a := x_(n−1) … x_[n/2];
            b := x_([n/2]−1) … x_0;
            c := y_(n−1) … y_[n/2];
            d := y_([n/2]−1) … y_0;
            U := Karatsuba (a, c, [n/2]);
            V := Karatsuba (b, d, [n/2]);
            W := Karatsuba (a + b, c + d, [n/2]);
            return U * 10^(2[n/2]) + (W − U − V) * 10^[n/2] + V
        end
end;
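The function can be sketched in runnable form for machine-size integers (splitting on powers of 10 as in the text; this is an illustration under that assumption, not a big-integer implementation):

```cpp
#include <cassert>
#include <cstdint>

// Helpers: 10^k and the number of decimal digits of x.
uint64_t pow10(int k) { uint64_t p = 1; while (k-- > 0) p *= 10; return p; }
int digits(uint64_t x) { int d = 1; while (x >= 10) { x /= 10; ++d; } return d; }

// Karatsuba's three-multiplication scheme, as in the text.
uint64_t karatsuba(uint64_t x, uint64_t y) {
    if (x < 10 || y < 10) return x * y;    // base case: a single-digit factor
    int n = (digits(x) > digits(y)) ? digits(x) : digits(y);
    uint64_t p = pow10(n / 2);             // split at 10^[n/2]
    uint64_t a = x / p, b = x % p;         // x = a * 10^[n/2] + b
    uint64_t c = y / p, d = y % p;         // y = c * 10^[n/2] + d
    uint64_t U = karatsuba(a, c);
    uint64_t V = karatsuba(b, d);
    uint64_t W = karatsuba(a + b, c + d);
    return U * p * p + (W - U - V) * p + V; // only 3 recursive multiplications
}
```

For instance, karatsuba(1026, 7329) reproduces the worked example's split a = 10, b = 26, c = 73, d = 29.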
One of the reasons why we study analysis of algorithms is that, if there is more
than one algorithm that solves a given problem, then, through analysis, we can find
the running times of the various available algorithms, and we may then choose the
one which takes the least running time.
This is useful in allowing comparisons between the performances of two algorithms to
be made.
For Divide-and-Conquer algorithms the running time is mainly affected by 3 criteria:
(i) the number of sub-instances (alpha) into which a problem is split;
(ii) the ratio of the initial problem size to the sub-problem size (beta);
(iii) the time (gamma(n)) taken in the divide and combine steps.

These give rise to a recurrence of the form

T_P(n) = constant (the recursive base value), for n ≤ n0, and
T_P(n) = alpha T_P(n/beta) + gamma(n), otherwise.

In the case when alpha and beta are both constants (as in all the examples we have
given), there is a general method that can be used to solve such recurrence
relations in order to obtain an asymptotic bound for the running time T_P(n). These
methods were discussed in Block 1.

In general, the solution of

T(n) = alpha T(n/beta) + O(n^gamma)

is

T(n) = O(n^gamma),             if alpha < beta^gamma,
T(n) = O(n^gamma log n),       if alpha = beta^gamma,
T(n) = O(n^(log_beta alpha)),  if alpha > beta^gamma.
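The three cases can be captured in a tiny helper (illustrative C++, not from the text):

```cpp
#include <cassert>
#include <cmath>
#include <cstring>

// Classify the asymptotic solution of T(n) = alpha*T(n/beta) + O(n^gamma)
// according to the three cases stated above.
const char* master_case(double alpha, double beta, double gamma) {
    double t = std::pow(beta, gamma);
    if (alpha < t) return "O(n^gamma)";
    if (alpha > t) return "O(n^(log_beta alpha))";
    return "O(n^gamma log n)";
}
```

For example, merge sort (alpha = beta = 2, gamma = 1) falls in the middle case, while Karatsuba-style recursion (alpha = 3, beta = 2, gamma = 1) falls in the third.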
Ex. 1) Using Karatsuba's method, find the value of the product 1026732 × 732912.
int binary_search(int A[], int low, int high, int value)
{
    int mid;
    while (low <= high)
    {
        mid = (low + high) / 2;
        if (value == A[mid])
            return mid;
        else if (value < A[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}
It takes as parameter the array A, in which the value is to be searched. It also takes
the lower and upper bounds of the array as parameters viz., low and high respectively.
At each step of the iteration of the while loop, the algorithm reduces the number of
elements of the array to be searched by half. If the value is found, then its index
is returned. However, if the value is not found, the loop terminates when the value
of low exceeds the value of high; there are then no more items to be searched, and
hence the function returns a negative value to indicate that the item is not found.
Analysis
As mentioned earlier, each step of the algorithm divides the block of items being
searched in half. The presence or absence of an item in an array of n elements, can be
established in at most lg n steps.
Thus the running time of a binary search is proportional to lg n, and we say this is
an O(lg n) algorithm.
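Put together with a signature (which the text omits), the search can be exercised as follows:

```cpp
#include <cassert>

// Binary search for value in the sorted array A[low..high];
// returns the index if found, -1 otherwise.
int binary_search(const int A[], int low, int high, int value)
{
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (value == A[mid])
            return mid;          // found
        else if (value < A[mid])
            high = mid - 1;      // discard the upper half
        else
            low = mid + 1;       // discard the lower half
    }
    return -1;                   // not found
}
```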
Divide-And-Conquer
Ex. 2) Explain how the Binary Search method finds, or fails to find, each of the
following values in the given sorted array:

8, 12, 26, 35, 48, 57, 75, 78, 86, 93, 97, 108, 135, 168, 201

(i) 15    (ii) 93    (iii) 43
1.5 SORTING
We have already discussed the two sorting algorithms, viz., Merge Sort and Quick
Sort. The purpose of repeating these algorithms here is mainly to discuss not the
design but the analysis part.

1.5.1 Merge Sort

The recursion stops when the subarray has just 1 element, so that it is trivially
sorted. Below is the Merge Sort function in C++.
void merge_sort(int A[], int p, int r)
{
    if (p < r)                   // Check for base case
    {
        int q = (p + r) / 2;     // Divide step
        merge_sort(A, p, q);     // Conquer step
        merge_sort(A, q + 1, r); // Conquer step
        merge(A, p, q, r);       // Combine step
    }
}
Next, we define the merge function, which is called by merge_sort. At this stage, we
have an array A and indices p, q, r such that p ≤ q < r. Subarray A[p..q] is sorted
and subarray A[q+1..r] is sorted, and, by the restrictions on p, q, r, neither
subarray is empty. We want the two subarrays to be merged into a single sorted
subarray in A[p..r]. We will implement it so that it takes O(n) time, where
n = r − p + 1 is the number of elements being merged.
Let us consider two piles of cards. Each pile is sorted and placed face-up on a table
with the smallest card on top of each pile. We will merge these into a single sorted
pile, face-down on the table. A basic step will be to choose the smaller of the two top
cards, remove it from its pile, thereby exposing a new top card and then placing the
chosen card face-down onto the output pile. We will repeatedly perform these basic
steps until one input becomes empty. Once one input pile empties, just take the
remaining input pile and place it face-down onto the output pile. Each basic step
should take constant time, since we check just the two top cards and there are n basic
steps, since each basic step removes one card from the input piles, and we started with
n cards in the input piles. Therefore, this procedure should take O(n) time. We
don't actually need to check whether a pile is empty before each basic step.
Instead, we will put on the bottom of each input pile a special sentinel card. It
contains a special value that we use to simplify the code. We know in advance that
there are exactly r − p + 1 non-sentinel cards, so we stop once we have performed
r − p + 1 basic steps. Below is the function merge, which runs in O(n) time.
void merge(int A[], int p, int q, int r)
{
    int n1 = q - p + 1;
    int n2 = r - q;
    int* L = new int[n1 + 2];
    int* R = new int[n2 + 2];
    for (int i = 1; i <= n1; i++)
        L[i] = A[p + i - 1];
    for (int j = 1; j <= n2; j++)
        R[j] = A[q + j];
    L[n1 + 1] = INT_MAX;    // sentinel: positive infinity
    R[n2 + 1] = INT_MAX;    // sentinel: positive infinity
    int i = 1, j = 1;
    for (int k = p; k <= r; k++)
        if (L[i] <= R[j])
        {
            A[k] = L[i];
            i += 1;
        }
        else
        {
            A[k] = R[j];
            j += 1;
        }
    delete[] L;
    delete[] R;
}
The running time of merge sort satisfies the recurrence

T(n) = O(1), if n = 1,
T(n) = 2 T(n/2) + O(n), if n ≥ 2.
Solving the merge-sort recurrence: by the master theorem, this recurrence has the
solution T(n) = O(n lg n). Compared to insertion sort (O(n²) worst-case time), merge
sort is faster: trading a factor of n for a factor of lg n is a good deal. On small
inputs, insertion sort may be faster, but for large enough inputs merge sort will
always be faster, because its running time grows more slowly than insertion sort's.
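The two routines above can be assembled into one self-contained sketch (1-based indices p..r on a std::vector, with a positive-infinity sentinel as in the text):

```cpp
#include <cassert>
#include <climits>
#include <vector>

void merge(std::vector<int>& A, int p, int q, int r) {
    int n1 = q - p + 1, n2 = r - q;
    std::vector<int> L(n1 + 2), R(n2 + 2);
    for (int i = 1; i <= n1; ++i) L[i] = A[p + i - 1];
    for (int j = 1; j <= n2; ++j) R[j] = A[q + j];
    L[n1 + 1] = INT_MAX;            // sentinel card
    R[n2 + 1] = INT_MAX;            // sentinel card
    int i = 1, j = 1;
    for (int k = p; k <= r; ++k) {  // take the smaller "top card" each step
        if (L[i] <= R[j]) A[k] = L[i++];
        else              A[k] = R[j++];
    }
}

void merge_sort(std::vector<int>& A, int p, int r) {
    if (p < r) {
        int q = (p + r) / 2;        // divide
        merge_sort(A, p, q);        // conquer
        merge_sort(A, q + 1, r);    // conquer
        merge(A, p, q, r);          // combine
    }
}
```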
1.5.2
Quick Sort
As mentioned earlier, the purpose of discussing the Quick Sort algorithm here is to
discuss its analysis.
QUICKSORT (A, p, r)
    IF p < r THEN
        q = PARTITION (A, p, r)
        QUICKSORT (A, p, q − 1)
        QUICKSORT (A, q + 1, r)
    END IF

The algorithm PARTITION, which is called by QUICKSORT, is defined after a short
while. Then, in order to sort an array A of n elements, we call QUICKSORT with the
three parameters A, 1 and n: QUICKSORT (A, 1, n).
If PARTITION always produced q = n/2 and ran in Θ(n) time then, with T(n) denoting
the time taken by QUICKSORT in sorting an array of n elements, we would again get
the recurrence

T(n) = 2 T(n/2) + Θ(n),

and, after solving the recurrence, the running time T(n) = Θ(n log n). The problem
is that it is hard to develop a partition algorithm which always divides A into
two halves.
PARTITION (A, p, r)
    x = A[r]
    i = p − 1
    FOR j = p TO r − 1 DO
        IF A[j] ≤ x THEN
            i = i + 1
            Exchange A[i] and A[j]
        END IF
    END DO
    Exchange A[i + 1] and A[r]
    RETURN i + 1
QUICKSORT correctness:
Example: PARTITION on the array 2, 8, 7, 1, 3, 5, 6, 4 (pivot x = A[r] = 4):

i = 0, j = 1:   2  8  7  1  3  5  6  4
i = 1, j = 2:   2  8  7  1  3  5  6  4
i = 1, j = 3:   2  8  7  1  3  5  6  4
i = 1, j = 4:   2  8  7  1  3  5  6  4
i = 2, j = 5:   2  1  7  8  3  5  6  4
i = 3, j = 6:   2  1  3  8  7  5  6  4
i = 3, j = 7:   2  1  3  8  7  5  6  4
i = 3, j = 8:   2  1  3  8  7  5  6  4
q = 4:          2  1  3  4  7  5  6  8
The natural question: what is the average-case running time of QUICKSORT? Is it
close to the worst case Θ(n²), or to the best case Θ(n lg n)? The average time
depends on the distribution of inputs:

If we run QUICKSORT on a set of inputs that are already sorted, the average running
time will be close to the worst case.

Similarly, if we run QUICKSORT on a set of inputs that give good splits, the average
running time will be close to the best case.
So, under the assumption that all input permutations are equally likely, the average
time of QUICKSORT is Θ(n lg n) (intuitively). Is this assumption realistic?
Not really. In many cases the input is almost sorted: think of rebuilding indexes
in a database etc.
The question is: how can we make QUICKSORT have a good average time
irrespective of the input distribution?
Using randomization.
We can enforce that all n! permutations are equally likely by randomly permuting the
input before the algorithm.
Most computers have a pseudo-random number generator random(1, n) returning a
random number between 1 and n.
Using pseudo-random number generator we can generate a random
permutation (such that all n! permutations equally likely) in O(n)
time:
Choose element in A[1] randomly among elements in A[1..n],
choose element in A[2] randomly among elements in A [2..n], choose
element in A[3] randomly among elements in A [3..n] and so on.
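This O(n) permutation step is the Fisher–Yates shuffle; a C++ sketch (names are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <random>
#include <vector>

// Random permutation in O(n): position i receives a random element chosen
// among positions i..n-1, exactly as described in the text.
void random_permute(std::vector<int>& A, std::mt19937& gen) {
    int n = (int)A.size();
    for (int i = 0; i < n - 1; ++i) {
        std::uniform_int_distribution<int> pick(i, n - 1);
        std::swap(A[i], A[pick(gen)]);   // choose A[i] among A[i..n-1]
    }
}
```

Each of the n! permutations is produced with equal probability, which is exactly the assumption under which the Θ(n lg n) average holds.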
RANDQUICKSORT (A, p, r)
    IF p < r THEN
        q = RANDPARTITION (A, p, r)
        RANDQUICKSORT (A, p, q − 1)
        RANDQUICKSORT (A, q + 1, r)
    END IF
One call of PARTITION takes O(1) time plus time proportional to the number of
iterations of the FOR-loop. In each iteration of the FOR-loop, we compare an element
with the pivot element.

If X is the number of comparisons A[j] ≤ x performed in PARTITION over the entire
execution of RANDQUICKSORT, then the running time is O(n + X), and

E[T(n)] = E[O(n + X)] = O(n + E[X]).

To analyse the expected running time, we need to compute E[X].
To compute X, we use z1, z2, …, zn to denote the elements of A, where zi is the ith
smallest element. We also use Zij to denote {zi, zi+1, …, zj}.

Each pair of elements zi and zj is compared at most once (when either of them is the
pivot). Hence

X = Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) Xij,  where Xij = 1 if zi is compared to zj, and
0 otherwise.

E[X] = E[Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) Xij]
     = Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) E[Xij]
     = Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) Pr[zi compared to zj].
To compute Pr [zi compared to zj] it is useful to consider when two elements are not
compared.
Example: Consider an input consisting of the numbers 1 through 10.
Assume that the first pivot is 7; the first partition then separates the numbers
into the sets {1, 2, 3, 4, 5, 6} and {8, 9, 10}.
In partitioning, 7 is compared to all the numbers. No number from the first set will
ever be compared to a number from the second set.
In general, once a pivot x, with zi < x < zj, is chosen, we know that zi and zj
cannot later be compared.
On the other hand if zi is chosen as pivot before any other element in Zij then it is
compared to each element in Zij. Similar for zj.
In the example, 7 and 9 are compared because 7 is the first item from Z(7,9) to be
chosen as pivot, and 2 and 9 are not compared because the first pivot in Z(2,9) is 7.
Prior to an element of Zij being chosen as pivot, the whole set Zij is together in
the same partition, and any element of Zij is equally likely to be the first element
chosen as pivot. Since |Zij| = j − i + 1, the probability that zi or zj is chosen
first in Zij is 2/(j − i + 1). Hence

Pr[zi compared to zj] = 2 / (j − i + 1).
We now have:

E[X] = Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) Pr[zi compared to zj]
     = Σ_(i=1)^(n−1) Σ_(j=i+1)^(n) 2/(j − i + 1)
     = Σ_(i=1)^(n−1) Σ_(k=1)^(n−i) 2/(k + 1)        (substituting k = j − i)
     < Σ_(i=1)^(n−1) Σ_(k=1)^(n) 2/k
     = Σ_(i=1)^(n−1) O(log n)
     = O(n log n).
statistic, and the median is the ⌈n/2⌉th order statistic (note that if n is even,
there are two medians).
We will give here two algorithms for the solution of the above problem. One is a
practical randomized algorithm with O(n) expected running time; the other, which is
more of theoretical interest only, has O(n) worst-case running time.
Randomized Selection
The key idea is to use the algorithm partition() from quicksort, but to examine only
one subarray; this saving shows up in the O(n) running time. We will use
Randomized_Partition(A, p, r), which randomly partitions the array A around an
element A[q] such that all elements from A[p] to A[q−1] are less than A[q], and all
elements from A[q+1] to A[r] are greater than A[q]:

q = Randomized_Partition(A, p, r),  with  A[p..q−1] ≤ A[q] ≤ A[q+1..r].
We can now give the pseudo code for Randomized Select (A, p, r, i). This procedure
selects the ith order statistic in the Array A [p ..r].
Randomized_Select (A, p, r, i)
    if (p == r) then return A[p];
    q = Randomized_Partition(A, p, r);
    k = q − p + 1;                  // rank of A[q] within A[p..r]
    if (i == k) then return A[q];
    if (i < k) then
        return Randomized_Select(A, p, q − 1, i);
    else
        return Randomized_Select(A, q + 1, r, i − k);
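A runnable sketch of the procedure (0-based indices; rand() stands in here for a proper random source):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Quicksort partition of A[p..r] around a randomly chosen pivot.
int partition_random(std::vector<int>& A, int p, int r) {
    std::swap(A[r], A[p + std::rand() % (r - p + 1)]);  // random pivot to A[r]
    int x = A[r], i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] <= x) std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    return i + 1;
}

// Return the ith smallest element (i >= 1) of A[p..r].
int randomized_select(std::vector<int>& A, int p, int r, int i) {
    if (p == r) return A[p];
    int q = partition_random(A, p, r);
    int k = q - p + 1;               // rank of the pivot within A[p..r]
    if (i == k) return A[q];
    if (i < k)  return randomized_select(A, p, q - 1, i);
    return randomized_select(A, q + 1, r, i - k);
}
```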
Worst case: the partition is always as unbalanced as possible, giving

T(n) = T(n − 1) + O(n) = O(n²)   (arithmetic series).

Best case (indeed, any constant-fraction split), e.g.,

T(n) = T(9n/10) + O(n) = O(n)   (Master Theorem, case 3).
Average case: Let us now analyse the average case running time of Randomized
Select.
For an upper bound, assume that the ith element always falls in the larger side of
the partition:

T(n) ≤ (1/n) Σ_(k=1)^(n) T(max(k − 1, n − k)) + Θ(n)
     ≤ (2/n) Σ_(k=⌊n/2⌋)^(n−1) T(k) + Θ(n)

We show T(n) = O(n) by substitution, assuming inductively that T(k) ≤ ck for k < n:

T(n) ≤ (2/n) Σ_(k=⌊n/2⌋)^(n−1) ck + Θ(n)              substitute ck for T(k)
     = (2c/n) (Σ_(k=1)^(n−1) k − Σ_(k=1)^(⌊n/2⌋−1) k) + Θ(n)
     = (2c/n) ((n − 1)n/2 − (n/2 − 1)(n/2)/2) + Θ(n)   expand the arithmetic series
     = c(n − 1) − (c/2)(n/2 − 1) + Θ(n)                multiply it out
     = cn − (cn/4 + c/2 − Θ(n))                         rearrange the arithmetic
     ≤ cn   (if c is big enough)                        what we set out to prove
The randomized algorithm works well in practice. But there also exists an algorithm, of theoretical significance only, with worst-case O(n) time complexity for finding the ith order statistic. The basic idea of worst-case linear-time selection is to generate a good partitioning element. We will call this element x.
The algorithm in words:
1.  Divide the n elements into groups of 5.
2.  Find the median of each 5-element group.
3.  Use Select () recursively to find the median x of the ⌈n/5⌉ group medians.
4.  Partition the n elements around x; let k = rank(x).
5.  if (i == k) then return x
6.  if (i < k) then use Select () recursively to find the ith smallest element in the first
    partition
7.  else use Select () recursively to find the (i − k)th smallest element in the last partition
At least half of the ⌈n/5⌉ 5-element medians are ≤ x, i.e., at least ⌈n/5⌉/2 = n/10 of them, and each such median has three group elements ≤ it, so there are at least 3n/10 elements which are ≤ x. Now, for large n, 3n/10 ≥ n/4. So at least n/4 elements are ≤ x and, similarly, at least n/4 elements are ≥ x. Thus, after partitioning around x, the recursive call in step 6 or 7 will be made on at most 3n/4 elements. The recurrence is therefore
T(n) ≤ T(n/5) + T(3n/4) + Θ(n)

Solving by substitution, assume T(k) ≤ ck for all k < n:

T(n) ≤ cn/5 + 3cn/4 + Θ(n)       (substitute T(n) = cn)
     = 19cn/20 + Θ(n)            (combine fractions)
     = cn − (cn/20 − Θ(n))       (express in desired form)
     ≤ cn   (if c is big enough) (what we set out to prove)
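The seven steps above can be sketched as follows. This Python sketch (the function name `select` is ours) recurses on new lists rather than partitioning in place, so it shows the recurrence structure rather than an optimized implementation:

```python
def select(a, i):
    # Worst-case linear selection: ith smallest (1-based) element of list a.
    if len(a) <= 5:
        return sorted(a)[i - 1]
    # Steps 1-3: medians of groups of 5, then the median of those medians.
    groups = [sorted(a[j:j + 5]) for j in range(0, len(a), 5)]
    medians = [g[(len(g) - 1) // 2] for g in groups]
    x = select(medians, (len(medians) + 1) // 2)
    # Step 4: partition around x.
    low = [v for v in a if v < x]
    high = [v for v in a if v > x]
    # Steps 5-7: x has ranks len(low)+1 .. len(a)-len(high) (ties with x).
    if i <= len(low):
        return select(low, i)
    if i > len(a) - len(high):
        return select(high, i - (len(a) - len(high)))
    return x
```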
[ r  s ]   [ a  b ]   [ e  g ]
[ t  u ] = [ c  d ] × [ f  h ]

where

r = P5 + P4 − P2 + P6
s = P1 + P2
t = P3 + P4
u = P5 + P1 − P3 − P7

and the seven products are

P1 = a · (g − h)
P2 = (a + b) · h
P3 = (c + d) · e
P4 = d · (f − e)
P5 = (a + d) · (e + h)
P6 = (b − d) · (f + h)
P7 = (a − c) · (e + g)
This method works, as can easily be seen; for example, s = (ag − ah) + (ah + bh) = ag + bh. In this method there are 7 multiplications and 18 additions/subtractions. For (n × n) matrices, it can be worth replacing one multiplication by 18 additions, since multiplication costs are much higher than addition costs.
The recursive algorithm for multiplying n × n matrices is given below:
1.  Partition each of the two matrices into four n/2 × n/2 submatrices.
2.  Compute the seven products P1, …, P7 of n/2 × n/2 submatrices recursively.
3.  Combine the products by the additions and subtractions shown above to obtain the four submatrices of the result.
The running time of the above algorithm is given by the recurrence below:

T(n) = 7T(n/2) + Θ(n²)
     = Θ(n^lg 7)
     = O(n^2.81)
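One level of the recursion, i.e., the 2 × 2 case with scalar entries, can be checked directly against the formulas above (the function name `strassen_2x2` is ours; the second matrix is stored as [[e, g], [f, h]], matching the layout in the text):

```python
def strassen_2x2(A, B):
    # One level of Strassen's method, using the seven products from the text.
    (a, b), (c, d) = A
    (e, g), (f, h) = B
    p1 = a * (g - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (f - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (f + h)
    p7 = (a - c) * (e + g)
    r = p5 + p4 - p2 + p6      # top-left
    s = p1 + p2                # top-right
    t = p3 + p4                # bottom-left
    u = p5 + p1 - p3 - p7      # bottom-right
    return [[r, s], [t, u]]
```

In the full algorithm the scalar operations above become recursive multiplications and elementwise additions of n/2 × n/2 submatrices.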
The current best upper bound for multiplying matrices is approximately O(n^2.376).
From a practical point of view, Strassen's algorithm is often not the method of choice for matrix multiplication, for the following four reasons:
1.  The constant factor hidden in the running time of Strassen's algorithm is larger than the constant factor in the naïve Θ(n³) method.
2.  When the matrices are sparse, methods tailored for sparse matrices are faster.
3.  Strassen's algorithm is not quite as numerically stable as the naïve method.
4.  The submatrices formed at the levels of recursion consume space.
1.9 EXPONENTIATION
Exponentiating by squaring is an algorithm used for the fast computation of large powers of a number x. It is also known as the square-and-multiply algorithm or binary exponentiation. It implicitly uses the binary expansion of the exponent. It is of quite general use, for example, in modular arithmetic.
Squaring Algorithm
Power (x, n) =
    x,                              if n = 1
    Power (x², n/2),                if n is even
    x · Power (x², (n − 1)/2),      if n > 2 is odd
The same idea allows fast computation of large exponents modulo a number. Especially in cryptography, it is useful to compute powers in a ring of integers modulo q. It can also be used to compute integer powers in a group, using the rule Power (x, −n) = (Power (x, n))⁻¹. The method works in every semigroup and is often used to compute powers of matrices.
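An iterative form of the squaring algorithm can be sketched as follows (the function name `power` and the optional `mod` parameter are ours; with `mod` supplied it computes the modular power used in cryptography):

```python
def power(x, n, mod=None):
    # Square-and-multiply: O(log n) multiplications instead of n - 1.
    result = 1
    base = x % mod if mod else x
    while n > 0:
        if n % 2 == 1:           # current binary digit of the exponent is 1
            result = result * base
            if mod:
                result %= mod
        base = base * base       # square for the next binary digit
        if mod:
            base %= mod
        n //= 2
    return result
```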
Example 1.9.1: To compute xⁿ, note that if n is even, then xⁿ = (x^(n/2))², so one squaring reduces the size of the problem (the exponent) by approximately a factor of two. But since the new problem is of the same type, we can apply the same observation again, once more approximately halving the size.
The repeated application of this algorithm is equivalent to decomposing the exponent (by a base conversion to binary) into a sequence of squares and products: for example,

x⁷ = x⁴ · x² · x¹           (binary expansion of 7 is 111)
   = ((x²)²) · x² · x       (square and multiply)
   = ((x² · x))² · x        (a shorter addition chain)

The last form, an addition chain, works better than plain binary exponentiation for high exponents. However, it trades off space for speed, so it may not be good on over-worked systems.
1.10 SUMMARY
The unit discusses various issues in respect of the technique, viz., Divide and Conquer, for designing and analysing algorithms for solving problems. First, the general plan of the Divide and Conquer technique is explained and then an outline of a formal Divide-and-Conquer procedure is defined. The issue of whether at some stage to solve a problem directly or whether to further subdivide it is discussed in terms of the relative efficiencies of the two alternatives.
The technique is illustrated with examples of its applications to solving problems of
(large) integer multiplication, Binary search, Sorting, of finding median of a given
data, of matrix multiplication and computing exponents of a given number. Under
sorting, the well-known techniques viz., Merge-sort and quick-sort are discussed in
detail.
1.11 SOLUTIONS/ANSWERS
Ex.1)
1026732 × 732912
In order to apply Karatsuba's method, first we make the number of digits in the two numbers equal, by putting zeroes on the left of the number having the lesser number of digits. Thus, the two numbers to be multiplied are written as
x = 1026732 and y = 0732912.
As n = 7, therefore ⌊n/2⌋ = 3, and we write
x = 1026 × 10³ + 732 = a × 10³ + b
y = 0732 × 10³ + 912 = c × 10³ + d
where a = 1026, b = 732, c = 0732, d = 912
Then
x × y = (1026 × 0732) × 10^(2×3) + 732 × 912
        + [(1026 + 732) × (732 + 912) − (1026 × 0732) − (732 × 912)] × 10³
      = (1026 × 0732) × 10⁶ + 732 × 912
        + [(1758 × 1644) − (1026 × 0732) − (732 × 912)] × 10³            (A)
Though the above may be simplified in another, simpler way, yet as we want to explain Karatsuba's method, we next compute the products
U = 1026 × 732
V = 732 × 912
P = 1758 × 1644
Let us consider only the product 1026 × 732; the other involved products may be computed similarly and substituted in (A).
Let us write
U = 1026 × 732 = (10 × 10² + 26) × (07 × 10² + 32)
  = (10 × 7) × 10⁴ + 26 × 32 + [(10 + 7) × (26 + 32) − 10 × 7 − 26 × 32] × 10²
  = 70 × 10⁴ + 26 × 32 + (17 × 58 − 70 − 26 × 32) × 10²

At this stage, we do not apply Karatsuba's algorithm any further and compute the products of 2-digit numbers by the conventional method.
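The recursive scheme carried out by hand above can be sketched in Python (the function name `karatsuba` is ours; the recursion bottoms out at single-digit factors instead of 2-digit ones, purely for brevity):

```python
def karatsuba(x, y):
    # Multiply non-negative integers using three recursive half-size products.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)          # x = a * 10^m + b
    c, d = divmod(y, p)          # y = c * 10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a + b)(c + d) - ac - bd = ad + bc, the middle term.
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * m) + mid * p + bd
```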
Ex. 2) The number of elements in the given list is 15. Let us store these in an array
say A[1..15]. Thus, initially low = 1 and high = 15 and, hence,
mid = (1+15)/2 = 8.
In the first iteration, the search algorithm compares the value to be searched with A[8] = 78.
Part (i): The value to be searched is 15. As 15 < A[8] = 78, in the second iteration the new values become
low = 1, high = 8 − 1 = 7,
and hence (new) mid = (1 + 7)/2 = 4.
As 15 < A[4] = 26, the algorithm iterates once more. In the third iteration, the new values become
low = 1, high = 4 − 1 = 3.
Therefore, mid = (1 + 3)/2 = 2.
As 15 > A[2] = 12, the algorithm iterates itself once more. In the fourth iteration, the new values become
low = mid + 1 = 2 + 1 = 3, high = 3.
Therefore, (new) mid = (3 + 3)/2 = 3.
As A[3] = 15 (the value to be searched), the algorithm terminates and returns the index value 3 as output.
Part (ii): The value to be searched is 93. Initially low = 1, high = 15 and mid = (1 + 15)/2 = 8. As 93 > A[8] = 78, in the second iteration the new values become
low = 9, high = 15,
and (new) mid = (9 + 15)/2 = 12.
As 93 < A[12], in the third iteration the new values become
low = 9, high = mid − 1 = 11,
and (new) mid = (9 + 11)/2 = 10, with A[10] = 93, the value to be searched. Hence, the algorithm terminates and returns the index value 10 as output.

Part (iii): Again, initially low = 1, high = 15 and mid = 8. As the value searched is less than A[8] = 78, the new values become high = 8 − 1 = 7 and mid = (1 + 7)/2 = 4, and the search proceeds as before.
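The iterations traced above follow the standard iterative binary search, sketched below in Python. Index 0 is left unused so that the array is A[1..15] as in the solution; only A[2], A[3], A[4], A[8] and A[10] are fixed by the solution, and the remaining values in the test array are invented for illustration:

```python
def binary_search(A, value):
    # Iterative binary search over A[1..n]; returns the index, or -1 if absent.
    low, high = 1, len(A) - 1
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == value:
            return mid
        if value < A[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1
```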
Ex. 3)

Let
[ a  b ]   [  5  6 ]        [ e  g ]   [ −7  6 ]
[ c  d ] = [ −4  3 ]   and  [ f  h ] = [  5  9 ]

Then
P1 = a · (g − h) = 5 × (6 − 9) = −15
P2 = (a + b) · h = (5 + 6) × 9 = 99
P3 = (c + d) · e = (−4 + 3) × (−7) = 7
P4 = d · (f − e) = 3 × (5 − (−7)) = 36
P5 = (a + d) · (e + h) = (5 + 3) × (−7 + 9) = 16
P6 = (b − d) · (f + h) = (6 − 3) × (5 + 9) = 42
P7 = (a − c) · (e + g) = (5 − (−4)) × (−7 + 6) = −9

where
r = P5 + P4 − P2 + P6 = 16 + 36 − 99 + 42 = −5
s = P1 + P2 = −15 + 99 = 84
t = P3 + P4 = 7 + 36 = 43
u = P5 + P1 − P3 − P7
  = 16 + (−15) − 7 − (−9)
  = 16 − 15 − 7 + 9
  = 3
UNIT 2 GRAPH ALGORITHMS

Structure

2.0  Introduction
2.1  Objectives
2.2  Examples
2.3  Traversing Trees
2.4  Depth-First Search
2.5  Breadth-First Search
2.6
2.7
2.8
2.9
2.10
2.0 INTRODUCTION
A number of problems and games like chess, tic-tac-toe, etc. can be formulated and solved with the help of graph notation. The wide variety of problems that can be solved by using graphs ranges from searching for particular information to finding a good or bad move in a game. In this Unit, we discuss a number of problem-solving techniques based on graph notation, including the ones involving searches of graphs, and the application of these techniques in solving game and sorting problems.
2.1 OBJECTIVES
After going through this Unit, you should be able to:
explain and apply various graph search techniques, viz Depth-First Search
(DFS), Breadth-First-Search (BFS), Best-First Search, and Minimax Principle;
2.2 EXAMPLES
To begin with, we discuss the applicability of graphs to a popular game known as
NIM.
2.2.1
NIM/Marienbad Game
The game of nim is very simple to play and has an interesting mathematical structure.
Nim is a game for 2 players, in which the players take turns alternately. Initially the
players are given a position consisting of several piles, each pile having a finite
number of tokens. On each turn, a player chooses one of the piles and then removes at
least one token from that pile. The player who picks up the last token wins.
Assuming your opponent plays optimally, there may be some positions/situations in
which the player having the current move cannot win. Such positions are called
losing positions (for the player whose turn it is to move next). The positions which
are not losing ones are called winning.
Marienbad is a variant of a nim game and it is played with matches. The rules of this
game are similar to the nim and are given below:
(1)
It is a two-player game
(2)
Initially, there are n match sticks in a single pile.
(3)
The winner of the game is the one who takes the last match; whosoever is left with no sticks to take, loses the game.
(4)
On the very first turn, up to n − 1 matches can be taken by the player having the very first move.
(5)
On the subsequent turns, one must remove at least one match and at most twice
the number of matches picked up by the opponent in the last move.
Before going into a detailed discussion through an example, let us explain the possible states which may indicate different stages in the game. At any stage, the following two numbers are significant:
(i)  The total number of match sticks available, after the pickings by the players so far.
(ii) The number of match sticks that the player having the move can pick up.
For example:
(i)
Initially, if n is the number of sticks, then the state is (n, n − 1), because the players must leave at least one stick.
(ii)  While in the state (i, j), if the player having the move picks up k sticks, then the state after this move is (i − k, min (2k, i − k)), which means
      (a)  the number of match sticks available after the move is (i − k);
      (b)  the player next to pick up cannot pick up more than double the number of sticks picked up in the previous move by the opponent, and also, clearly, the player cannot pick up more than the number of sticks available, i.e., (i − k).
(iii) We cannot have the choice of picking up zero match sticks, unless no match stick is available for picking. Therefore, the state (i, 0) implies the state (0, 0).
After discussing some of possible states, we elaborate the game described above
through the following example.
Example 2.2.1:
Let the initial number of matches be 6, and let player A take the chance to move first. What should be A's strategy to win, for his first move? Generally, A will consider all possible moves and choose the best one as follows:
if A takes 5 matches, that leaves just one for B, then B will take it and win the
game;
if A takes 4 matches, that leaves 2 for B, then B will take it and win;
if A takes 3 matches, that leaves 3 for B, then B will take it and win;
if A takes 2 match, that leaves 4 for B, then B will take it and win;
if A takes 1 match, that leaves 5 for B. In the next step, B can take 1 or 2 (recall that B can take at most twice the number that A just took), and B will go to either of the states (4, 2) or (3, 3), both of which are winning positions for A, because from either state A can take all the available sticks, and hence there will be no more sticks for B to pick up. Taking a look at this reasoning process, it is for sure that the best move for A is to take just one match stick.
The above process can be expressed by a directed graph, where each node corresponds to a position (state) and each edge corresponds to a move between two positions, with each node expressed by a pair of numbers < i, j >, 0 ≤ j ≤ i, where
i: the number of matches left;
j: the upper limit on the number of matches which can be removed in the next move, that is, any number of matches between 1 and j can be taken in the next move.
As mentioned earlier, we have
the initial node: < n, n − 1 >;
edges leaving the node < i, j > lead to the nodes < i − k, min (2k, i − k) >, for 0 < k ≤ j.
In the directed graph shown below, rectangular nodes denote losing nodes and
oval nodes denote winning nodes:
Figure 1: The state graph when the game starts with 6 match sticks; its nodes include < 6, 5 >, < 5, 2 >, < 4, 4 >, < 4, 2 >, < 3, 3 >, < 3, 2 >, < 2, 2 > and < 1, 1 >.
A terminal node < 0, 0 >, from which there is no legal move, is a losing position.
A nonterminal node is a winning node (denoted by a circle) if at least one of its successors is a losing node, because the player currently having the move can leave his opponent in a losing position.
A nonterminal node is a losing node (denoted by a square) if all of its successors are winning nodes, because the player currently having the move cannot avoid leaving his opponent in one of these winning positions.
How do we determine the winning nodes and losing nodes in such a directed graph? Intuitively, we can start at the losing node < 0, 0 > and work backwards, according to the definitions of winning node and losing node. A node is a losing node for the current player if every move from it leads to a state from which the opponent can force the current player to lose. On the other hand, a node is a winning node if the current player can make a move that leaves the opponent in a state from which the opponent cannot win. For instance, in any of the nodes < 1, 1 >,
< 2, 2 >, < 3, 3 > and < 4, 4 >, a player can make a move and leave his opponent in the position < 0, 0 >; thus these 4 nodes are winning nodes. From position < 3, 2 >, two moves are possible, but both these moves take the opponent to a winning position, so it is a losing node. The initial position < 6, 5 > has one move which takes the opponent to a losing position, so it is a winning node. Continuing this process in the backward direction, we can mark the types of all nodes in the graph. A recursive program for the purpose can be implemented as follows:
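Such a recursive marking of states can be sketched as follows (a Python sketch; the function name `is_winning` is ours, and memoization is added so that shared states are classified only once):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(i, j):
    # State <i, j>: i matches left, at most j may be removed on the next move.
    # <0, 0> is a losing position for the player to move.
    if i == 0:
        return False
    # Winning iff some legal move leaves the opponent in a losing position.
    return any(not is_winning(i - k, min(2 * k, i - k))
               for k in range(1, j + 1))
```

For the game of Example 2.2.1, `is_winning(6, 5)` is true, and the winning first move is the one that reaches the losing state < 5, 2 >.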
Ex. 1) Draw a directed graph for a game of Marienbad when the number of match sticks, initially, is 5.
Preconditioning
Consider a scenario in which a problem has many similar situations or instances which are required to be solved. In such a situation, it might be useful to spend some time and energy in calculating auxiliary solutions (i.e., attaching some extra information to the problem space) that can be used afterwards to speed up the process of finding the solution of each of these instances. This is known as preconditioning. Although some time has to be spent in calculating/finding the auxiliary solutions, it has been seen that in the final tradeoff, the benefit achieved in terms of speeding up the process of finding the solutions will often be much more than the additional cost incurred in finding the auxiliary/additional information.
In other words, let x be the time taken to solve the problem without preconditioning, y
be the time taken to solve the problem with the help of some auxiliary results (i.e.,
after preconditioning) and let t be the time taken in preconditioning the problem space
i.e., the time taken in calculating the additional/auxiliary information. Then, to solve n typical instances, provided that y < x, preconditioning will be beneficial only when
nx > t + ny
i.e., nx − ny > t,
or n > t/(x − y).
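The break-even condition can be checked numerically. In the sketch below the figures x = 10, y = 2 and t = 100 are invented purely for illustration; they give a break-even point of n > 100/8 = 12.5:

```python
def preconditioning_pays_off(n, x, y, t):
    # Solving n instances directly costs n*x; preconditioning costs t once,
    # after which each instance costs y.  Worthwhile when n*x > t + n*y.
    return n * x > t + n * y
```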
As an example, let us consider the problem of finding an ancestor of a given node in a rooted tree (which may be a binary or a general tree).
In any rooted tree, node u will be an ancestor of node v if node u lies on the path from the root to v. Also, we must note that every node is an ancestor of itself and the root is an ancestor of all nodes in the tree, including itself. Let us suppose we are given a pair of nodes (u, v) and we are to find whether u is an ancestor of v or not. If the tree contains n nodes, then any given instance can take Θ(n) time in the worst case. But if we attach some relevant information to each of the nodes of the tree, then after spending Θ(n) time in preconditioning, we can answer any such query in constant time.
Now to precondition the tree, we first traverse the tree in preorder and calculate the
precedence of each node in this order, similarly, we traverse the tree in postorder and
calculate the precedence of each node. For a node u, let precedepre[u] be its
precedence in preorder and let precedepost[u] be its precedence in postorder.
Let u and v be the two given nodes. Then according to the rules of preorder and
postorder traversal, we can see that :
In preorder traversal, as the root is visited first before the left subtree and the right
subtree, so,
If precedepre[u] <= precedepre[v], then
u is an ancestor of v or
u is to the left of v in the tree.
In postorder traversal, the root is visited last, because first we visit the left subtree, then the right subtree, and last of all the root. So,
If precedepost[u] >= precedepost[v], then
u is an ancestor of v
or
u is to the right of v in the tree.
So, for u to be an ancestor of v, both the following conditions have to be satisfied:
precedepre[u] <= precedepre[v] and precedepost[u] >= precedepost[v].
Thus, we can see that after spending some time in calculating preorder and postorder
precedence of each node in the tree, the ancestor of any node can be found in constant
time.
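The preconditioning and the constant-time query can be sketched as follows (the function names `precondition` and `is_ancestor` are ours; the tree is represented as a mapping from each node to its list of children):

```python
def precondition(tree, root):
    # O(n) preprocessing: number every node in preorder and in postorder.
    pre, post = {}, {}
    counter = [0, 0]
    def walk(u):
        counter[0] += 1
        pre[u] = counter[0]           # preorder precedence
        for v in tree.get(u, []):
            walk(v)
        counter[1] += 1
        post[u] = counter[1]          # postorder precedence
    walk(root)
    return pre, post

def is_ancestor(u, v, pre, post):
    # O(1) per query after preconditioning.
    return pre[u] <= pre[v] and post[u] >= post[v]
```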
Ex. 2)
Program
DFS(G)
// This fragment of the algorithm performs the initialization
// and starts the depth-first search process
1  for all vertices u in V
2  {  status[u] = unknown;
3     parent[u] = NULL;
4     time = 0  }
5  for each vertex u in V
6  {  if status[u] == unknown
7        VISIT(u)  }

VISIT(u)
1  status[u] = discovered;
2  time = time + 1;
3  d[u] = time;
4  for each vertex v adjacent to u
5  {  if status[v] == unknown
6     {  parent[v] = u;
7        VISIT(v);  }  }
8  time = time + 1;
9  t[u] = time;
10 status[u] = visited;
In the procedure DFS, the first for-loop initializes the status of each vertex to unknown and its parent or predecessor vertex to NULL. It also creates a global variable time and initializes it to 0. In the second for-loop of this procedure, for each node in the graph, if that node is still unknown, the VISIT(u) procedure is called. Now we can see that every time the procedure VISIT(u) is called, the vertex u becomes the root of a new tree in the forest of depth-first search.
Whenever the procedure VISIT(u) will be called with parameter u, the vertex u will be
unknown. So in the procedure VISIT(u), first the status of vertex u is changed to
discovered, time is incremented by 1 and it is stored as discovery time of vertex u in
d[u].
When the VISIT procedure is called for the first time, d[u] will be 1. In the for-loop, for the given vertex u, every unknown vertex adjacent to u is visited recursively and the parent[] array is updated. When the for-loop concludes, i.e., when every vertex adjacent to u is discovered, the time is incremented by 1 and stored as the termination time of u, i.e., t[u], and the status of vertex u is changed to visited.
Analysis of Depth-First Search
In the procedure DFS(), each for-loop takes time O(|V|), where |V| is the number of vertices in V. The procedure VISIT is called once for every vertex of the graph. In the procedure VISIT, the for-loop is executed once for each edge emerging from that node and not yet traversed. Considering the adjacency lists of all nodes, the total number of edges traversed is O(|E|), where |E| is the number of edges in E. The running time of DFS is, therefore, O(|V| + |E|).
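The pseudocode above can be sketched directly in Python (the function name `dfs` is ours; the graph is a mapping from each vertex to its adjacency list, and the arrays d[] and t[] hold discovery and termination times):

```python
def dfs(graph):
    # Depth-first search over every vertex; returns discovery times d,
    # termination times t and the parent pointers of the DFS forest.
    status = {u: 'unknown' for u in graph}
    parent = {u: None for u in graph}
    d, t = {}, {}
    time = [0]                       # mutable counter shared by visit()

    def visit(u):
        status[u] = 'discovered'
        time[0] += 1
        d[u] = time[0]
        for v in graph[u]:
            if status[v] == 'unknown':
                parent[v] = u
                visit(v)
        time[0] += 1
        t[u] = time[0]
        status[u] = 'visited'

    for u in graph:
        if status[u] == 'unknown':
            visit(u)
    return d, t, parent
```

As noted below for Example 2.4.1, each vertex is discovered once and terminated once, so the largest termination time is exactly twice the number of vertices.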
Example 2.4.1:
For the graph given in Figure 2.4.1.1, use DFS to visit the various vertices. The vertex D is taken as the starting vertex and, if there is more than one vertex adjacent to a given vertex, then the adjacent vertices are visited in lexicographic order.
In the following,
(i)
the label i/ indicates that the corresponding vertex is the ith discovered vertex.
(ii)
the label i/j indicates that the corresponding vertex is the ith discovered vertex
and jth in the combined sequence of discovered and visited.
Figure 2.4.1.2: D has two neighbours; by convention, A is visited first, i.e., the status of A changes to discovered, with d[A] = 2
Figure 2.4.1.3: A has two unknown neighbors B and C, so status of B changes to discovered, i.e.,
d[B] = 3
Figure 2.4.1.5: All of Es neighbors are discovered so status of vertex E is changed to visited and
t[E] = 5
Figure 2.4.1.7: Similarly, vertices G, C and H are discovered respectively, with d[G] = 7, d[C] = 8 and d[H] = 9
Figure 2.4.1.8: Now, as all the neighbours of H are already discovered, we backtrack to C, storing the termination time of H as t[H] = 10
Figure 2.4.1.9: We find the termination time of remaining nodes in reverse order, backtracking
along the original path ending with D.
The resultant parent pointer tree has its root at D, since this is the first node visited.
Each new node visited becomes the child of the most recently visited node. Also, we can see that while D is the first node to be discovered, it is the last node terminated. This is due to recursion, because each of D's neighbours must be discovered and terminated before D can be terminated. Also, all the edges of the graph which are not used in the traversal are between a node and its ancestor. This property of depth-first search differentiates it from breadth-first search.
Also we can see that the maximum termination time for any vertex is 16, which is
twice the number of vertices in the graph because time is incremented only when a
vertex is discovered or terminated and each vertex is discovered once and terminated
once.
Properties of Depth-first search Algorithm
(1)
Parenthesis Structure
In a graph G, if u and v are two vertices such that u is discovered before v, then one of the following cases occurs:
(a)  v is discovered before u terminates. Then v is a descendant of u, and the interval [d[v], t[v]] is contained entirely within the interval [d[u], t[u]].
(b)  u terminates before v is discovered. In this case t[u] < d[v] ≤ t[v], so the two intervals are disjoint.
Note: We should remember that in depth-first search the third case, of overlapping intervals (d[u] < d[v] < t[u] < t[v]), is not possible, because of recursion.
(2)
(3)
(4)
Ex.3)
Trace how DFS traverses (i.e., discovers and visits) the graph given below when the starting node/vertex is B.
Edge Classification
Another interesting property of depth-first search is that the search can be used to classify the different types of edges of the directed graph G(V, E). This edge classification gives us some more information about the graph.
The different edges are:
(a)  Tree Edge: an edge to a still unknown vertex, i.e., edge (u, v) is a tree edge if it is used to discover v for the first time.
(b)  Back Edge: an edge (u, v) from a vertex u to one of its ancestors v in the depth-first tree, i.e., to a vertex which is already discovered but not yet terminated.
(c)  Forward Edge: a non-tree edge (u, v) from a vertex u to one of its already terminated descendants v.
(d)  Cross Edge: any other edge, i.e., an edge between two vertices neither of which is an ancestor of the other.
Note: In an undirected graph, every edge is either a tree edge or a back edge, i.e., forward edges or cross edges are not possible.
Example 2.4.2:
In the following directed graph, we consider the adjacent nodes in increasing alphabetic order, and let the starting vertex be a, with d[a] = 1.
Figure 2.4.2.2: a has two unknown neighbours b and d; by convention b is visited first, i.e., the status of b changes to discovered, d[b] = 2
Figure 2.4.2.3: b has two unknown neighbors c and d, by convention c is discovered first i.e.,
d[c] = 3
Figure 2.4.2.4: c has only a single neighbour a, which is already discovered, so c is terminated, i.e., t[c] = 4
Figure 2.4.2.5: The algorithm backtracks recursively to b; the next unknown neighbour is d, whose status is changed to discovered, i.e., d[d] = 5
Figure 2.4.2.7: The algorithm backtracks recursively to b, which has no unknown neighbors, so
b(terminated) is visited i.e., t[b] = 7
Figure 2.4.2. 8: The algorithm backtracks to a which has no unknown neighbors so a is visited i.e.,
t[a] = 8.
Figure 2.4.2. 9: The connected component is visited so the algorithm moves to next component
starting from e (because we are moving in increasing alphabetic order) so e is
discovered i.e. , d[e] = 9
Figure 2.4.2. 10: e has two unknown neighbors f and g, by convention we discover f i.e.,
d[f] = 10
Figure 2.4.2. 12: The algorithm backtracks to e, which has g as the next unknown neighbor, g is
discovered i.e., d[g] = 12
Figure 2.4.2.13: The only neighbour of g is e, which is already discovered, so g is terminated, i.e., t[g] = 13
Figure 2.4.2.14: The algorithm backtracks to e, which has no unknown neighbours left, so e is terminated, i.e., t[e] = 14
(1)  Given a directed graph, depth-first search can be used to determine whether it contains a cycle.
(2)
(3)  Tree edges, forward edges and cross edges all go from a vertex of higher termination time to a vertex of lower termination time, whereas back edges go from a vertex of lower termination time to a vertex of higher termination time.
(4)  A graph is acyclic if and only if any depth-first search forest of graph G yields no back edges. This fact can be realized from property (3) above: if there are no back edges, then all edges go from a vertex of higher termination time to a vertex of lower termination time, so there can be no cycles. Hence, checking for cycles in a directed graph amounts to ensuring that there are no back edges.
9        status[v] = discovered
10       parent[v] = u
11       enqueue(Q, v)
12     end for
13     dequeue(Q)
14     status[u] = visited
15     print u is visited
16   end while
The algorithm works as follows. Lines 1-2 initialize each vertex to unknown. Because we have to start searching from vertex s, line 3 gives the status discovered to vertex s. Line 4 inserts the initial vertex s in the queue. The while loop contains the statements from line 5 to the end of the algorithm, and runs as long as there remain discovered vertices in the queue; and we can see that the queue will only ever contain discovered vertices. Line 6 takes the element u at the front of the queue, and in lines 7 to 12 the adjacency list of vertex u is traversed: each unknown vertex v in the adjacency list of u has its status marked as discovered and its parent marked as u, and is then inserted in the queue. In line 13, vertex u is removed from the queue and, in lines 14-15, when there are no more unknown elements in the adjacency list of u, its status is changed to visited and it is also printed as visited.
The algorithm given above can also be improved by storing the distance of each
vertex u from the source vertex s using an array distance[] and also by permanently
recording the predecessor or parent of each discovered vertex in the array parent[]. In
fact, the distance of each reachable vertex from the source vertex as calculated by the
BFS is the shortest distance in terms of the number of edges traversed. So next we
present the modified algorithm for breadth first search.
2.5.2
Modified Algorithm
Program BFS(G, s)
1   for each vertex u in V − {s}
2      status[u] = unknown
3      parent[u] = NULL
4      distance[u] = infinity
5   status[s] = discovered
6   distance[s] = 0
7   parent[s] = NULL
8   enqueue(Q, s)
9   while empty(Q) != false
10     u = front[Q]
11     for each vertex v adjacent to u
12        if status[v] == unknown
13           status[v] = discovered
14           parent[v] = u
15           distance[v] = distance[u] + 1
16           enqueue(Q, v)
17     dequeue(Q)
18     status[u] = visited
19     print u is visited
In the above algorithm, the newly inserted line 3 initializes the parent of each vertex to NULL, line 4 initializes the distance of each vertex from the source vertex to infinity, line 6 initializes the distance of the source vertex s to 0, line 7 initializes the parent of the source vertex s to NULL, line 14 records the parent of v as u, and line 15 calculates the shortest distance of v from the source vertex s as the distance of u plus 1.
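The modified algorithm can be sketched in Python as follows (the function name `bfs` is ours; the graph is a mapping from each vertex to its adjacency list, and a deque plays the role of the queue Q):

```python
from collections import deque

def bfs(graph, s):
    # Breadth-first search from s; returns shortest distances (in edges)
    # and the parent pointers of the BFS tree.
    status = {u: 'unknown' for u in graph}
    parent = {u: None for u in graph}
    distance = {u: float('inf') for u in graph}
    status[s], distance[s] = 'discovered', 0
    q = deque([s])
    while q:
        u = q[0]                        # front of the queue
        for v in graph[u]:
            if status[v] == 'unknown':
                status[v] = 'discovered'
                parent[v] = u
                distance[v] = distance[u] + 1
                q.append(v)
        q.popleft()                     # dequeue u
        status[u] = 'visited'
    return distance, parent
```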
Example 2.5.3:
In the figure given below, we can see the graph given initially, in which only source s
is discovered.
We take unknown (i.e., undiscovered) adjacent vertex of s and insert them in queue,
first a and then b. The values of the data structures are modified as given below:
Next, after completing the visit of a we get the figure and the data structures as given
below:
Although the example we have given below shows the best-first search of a tree, it is sometimes important to search a graph instead of a tree, so we have to take care that duplicate paths are not pursued. To perform this job, the algorithm works by searching a directed graph in which each node represents a point in the problem space. Each node, in addition to describing the problem space and the heuristic value associated with it, will also contain a link or pointer to its best parent and a list of pointers to its successor nodes. Once the goal node is found, the parent links will allow us to trace the path from the source node to the goal node. The list of successors will allow a node to pass an improvement in its value down to its successors, if any of them already exist.
In the algorithm given below, we assume two different lists of nodes:
OPEN list is the list of nodes which have been found but not yet expanded, i.e., the nodes which have been discovered/generated but whose children/successors have not yet been generated. The OPEN list can be implemented in the form of a priority queue in which the nodes are arranged in order of decreasing priority from the front, i.e., the node with the most promising heuristic value (the highest-priority node) will be at the first place in the list.
CLOSED list is the list of nodes which have already been expanded, i.e., whose successors have been generated and examined.
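The OPEN-list scheme can be sketched as follows (a Python sketch; the function name `best_first_search` is ours, a binary heap implements the priority queue, and a lower heuristic value is taken as more promising):

```python
import heapq

def best_first_search(start, successors, h, is_goal):
    # successors(n): iterable of neighbour nodes; h(n): heuristic value
    # (lower means more promising).  parent doubles as the "seen" set,
    # so duplicate paths in a graph are not pursued.
    open_list = [(h(start), start)]        # the OPEN priority queue
    parent = {start: None}
    closed = set()                         # the CLOSED list
    while open_list:
        _, node = heapq.heappop(open_list)
        if is_goal(node):
            path = []
            while node is not None:        # trace parent links back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for succ in successors(node):
            if succ not in parent:         # not generated before
                parent[succ] = node
                heapq.heappush(open_list, (h(succ), succ))
    return None
```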
Figures 1-5: successive stages of the best-first search of the example tree, showing the nodes (with their heuristic values) generated and expanded at each step.
generated in a few nanoseconds. Therefore, for many complex games, we must accept the fact that search to termination is impossible; instead, we must use partial searching techniques.
For searching, we can use breadth-first, depth-first or heuristic methods, except that the termination conditions must now be specified. Several artificial termination conditions can be specified, based on factors such as time limit, storage space and the depth of the deepest node in the search tree.
In a two player game, the first step is to define a static evaluation function efun(),
which attaches a value to each position or state of the game. This value indicates how
good it would be for a player to reach that position. So after the search terminates, we
must extract from the search tree an estimate of the best first move by applying a
static evaluation function efun() to the leaf nodes of the search tree. The evaluation
function measures the worth of the leaf node position. For example, in chess a simple
static evaluation function might attach one point for each pawn, four points for each
rook and eight points for the queen, and so on. But this static evaluation is too simple to be of any real use. Sometimes we might have to sacrifice the queen to prevent the opponent from a winning move and to gain advantage in the future, so the key lies in the amount of lookahead. The more moves we are able to look ahead before evaluating a move, the better will be the choice.
In analysing game trees, we follow the convention that the value of the evaluation function increases as the position becomes more favourable to player MAX: positive values indicate positions that favour MAX, positions favourable to player MIN are represented by negative values of the static evaluation function, and values near zero correspond to game positions not particularly favourable to either MAX or MIN. In a terminal position, the static evaluation function returns either positive infinity, representing a win for player MAX, or negative infinity, representing a win for player MIN, while a value of zero represents a draw.
In the algorithm we give ahead, the search tree is generated starting with the current game position, until the end-game position or the lookahead limit is reached. Increasing the lookahead limit increases the search time but results in a better choice. The final game positions are evaluated from MAX's point of view. Each node that belongs to player MAX receives the maximum value among its children, while each node for player MIN receives the minimum value among its children.
In the algorithm, lookaheadlimit represents the lookahead factor in terms of the number of steps, u and v represent game states or nodes, maxmove() and minmove() are functions describing the steps taken by player MAX or player MIN to choose a move, efun() is the static evaluation function which attaches a positive or negative integer value to a node (i.e., a game state), and value is a simple variable.
Now, to move the number of steps equal to the lookahead limit from a given game state u, MAX should move to the game state v given by the following code:

maxval = −∞
for each game state w that is a successor of u
    val = minmove(w, lookaheadlimit)
    if (val >= maxval)
        maxval = val
        v = w
// move to the state v
The minmove() function is as follows:

minmove(w, lookaheadlimit)
{
    if (lookaheadlimit == 0 or w has no successor)
        return efun(w)
    else
        minval = +∞
        for each successor x of w
            val = maxmove(x, lookaheadlimit − 1)
            if (minval > val)
                minval = val
        return (minval)
}
The maxmove() function is as follows:

maxmove(w, lookaheadlimit)
{
    if (lookaheadlimit == 0 or w has no successor)
        return efun(w)
    else
        maxval = −∞
        for each successor x of w
            val = minmove(x, lookaheadlimit − 1)
            if (maxval < val)
                maxval = val
        return (maxval)
}
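As a hedged illustration of the two routines above (not the unit's literal pseudo-code), the following sketch assumes the game tree is a dictionary mapping each state to its list of successors, and that efun is a lookup table of static evaluation values; both representations are assumptions made for this example.

```python
# Minimax sketch: `tree` maps a state to its successors; `efun` maps a
# frontier state to its static evaluation (assumed representations).

def minmove(tree, efun, w, lookahead):
    # MIN's move: take the minimum over the values of w's successors.
    if lookahead == 0 or not tree.get(w):
        return efun[w]
    return min(maxmove(tree, efun, x, lookahead - 1) for x in tree[w])

def maxmove(tree, efun, w, lookahead):
    # MAX's move: take the maximum over the values of w's successors.
    if lookahead == 0 or not tree.get(w):
        return efun[w]
    return max(minmove(tree, efun, x, lookahead - 1) for x in tree[w])

def best_move(tree, efun, u, lookahead):
    # MAX moves from u to the successor v with the largest minmove value.
    return max(tree[u], key=lambda w: minmove(tree, efun, w, lookahead))
```

For instance, with tree = {'u': ['a', 'b'], 'a': [], 'b': []} and efun = {'a': 3, 'b': 5}, best_move selects 'b'.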
We can see that in the minimax technique, player MIN tries to minimize the advantage he allows to player MAX, while player MAX tries to maximize the advantage he obtains after each move.
Let us suppose the graph given below shows part of the game. The values of the leaf nodes are given using the efun() procedure for a particular game; the values of the nodes above can then be calculated using the minimax principle. Suppose the lookahead limit is 4 and it is MAX's turn.

[Figure: a game tree of depth 4 with leaf values such as 9, 3, 5, 2, −2, −4 and −7; backing the values up level by level, MAX levels take the maximum and MIN levels the minimum of their children.]
spending time searching the children of the B node, and so we can safely ignore all the remaining children of B.

This shows that the search along some paths can sometimes be aborted (i.e., it is not required to explore all paths), because we find out that the search subtree will not lead us to any viable answer.
[Figure: a game tree with a MAX node at the root whose child A has value 5; at the MIN node B, once some child evaluates to 3, B's value cannot exceed 3, so B's remaining children need not be searched.]
This optimization is known as the alpha-beta pruning procedure, and the values below which the search need not be carried out are known as alpha-beta cutoffs.

A general algorithm for the alpha-beta procedure is as follows:

1. Maintain two values: alpha, the best (highest) value found so far for player MAX, and beta, the best (lowest) value found so far for player MIN.

2. At a MAX-player level, before evaluating each child path, compare the returned value of the previous path with the beta value. If the returned value is greater, then abort the search for the current node.

3. At a MIN-player level, before evaluating each child path, compare the returned value of the previous path with the alpha value. If the returned value is lesser, then abort the search for the current node.

The alpha values of MAX nodes (including the start value) can never decrease. The beta values of MIN nodes can never increase.

So we can see that remarkable reductions in the amount of search needed to evaluate a good move are possible by using the alpha-beta pruning procedure.
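A sketch of the procedure (again illustrative, using the same assumed dictionary representation of the game tree; the fail-soft formulation below is one common variant, not necessarily the unit's exact algorithm):

```python
import math

# Alpha-beta sketch: alpha is the best value found so far for MAX,
# beta the best value found so far for MIN.

def alphabeta(tree, efun, w, depth, alpha, beta, maximizing):
    if depth == 0 or not tree.get(w):
        return efun[w]
    if maximizing:
        value = -math.inf
        for x in tree[w]:
            value = max(value, alphabeta(tree, efun, x, depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cutoff: MIN will avoid this path
                break
        return value
    else:
        value = math.inf
        for x in tree[w]:
            value = min(value, alphabeta(tree, efun, x, depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cutoff: MAX will avoid this path
                break
        return value
```

The pruned result equals plain minimax; only the amount of search differs.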
Analysis of the BFS Algorithm

In the algorithm BFS, let us analyse the running time taken by the algorithm on a graph G. We can see that each vertex is inserted into the queue exactly once and also deleted from the queue exactly once. Each insertion into, and deletion from, the queue costs O(1) time; therefore, over all vertices, the queue insertions and deletions cost O(V) time. Because the graph is represented using adjacency lists, and the adjacency list of each vertex is scanned at most once, the total length of all adjacency lists scanned equals the number of edges E in the graph G. So a total of O(E) time is spent in scanning all adjacency lists. The initialization portion costs O(V). So the total running time of BFS is O(V + E).
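The analysis can be checked against a minimal BFS sketch (adjacency lists held in a dictionary, an assumed representation):

```python
from collections import deque

def bfs(adj, source):
    # Each vertex enters and leaves the queue exactly once: O(V).
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:           # each adjacency list is scanned once: O(E)
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

The returned dictionary gives the breadth-first distance of every reachable vertex from the source.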
(a + b) (c + d)
Both of these problems are essentially equivalent. The data of both problems can be represented by a directed acyclic graph (see the figure below). In the first example, each node is a module; in the second example, each node is an operator or an operand. Directed edges occur when one node depends on another, because of prerequisite relationships among courses or the parenthesis order of the expression. The problem in both cases is to find an acceptable ordering of the nodes satisfying the dependencies. This is referred to as a topological ordering. More formally, it is defined below.
RUNNING TIME

Let n be the number of vertices (or nodes, or activities) and m the number of edges (constraints). Each vertex is discovered only once, and for each vertex we loop over all its outgoing edges once. Therefore, the total running time is O(n + m).
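As an illustrative sketch of computing a topological ordering within this bound (an in-degree-counting formulation; the unit's own procedure is not reproduced in this extract):

```python
from collections import deque

def topological_sort(adj):
    # Count in-degrees: each edge is examined once, O(m).
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in adj if indeg[u] == 0)
    order = []
    while queue:                   # each vertex is discovered once, O(n)
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order                   # contains all vertices iff the graph is acyclic
```

Every vertex appears after all its prerequisites, so the list is a valid topological ordering.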
2.8 SUMMARY

This unit discusses searching and sorting techniques for solving those problems each of which can be efficiently represented in the form of a graph. In a graphical representation of a problem, generally, a node represents a state of the problem, and an arrow/arc represents a move between a pair of states.

Graph representation of a problem is introduced through the example of the game of NIM/Marienbad. Then a number of search algorithms, viz., Depth-First, Breadth-First, Best-First, and the Minimax principle, are discussed. Next, a sorting algorithm, viz., Topological sort, is discussed.
2.9 SOLUTIONS/ANSWERS
Ex. 1)
[Game-tree diagram for the solution, showing the states (4, 3), (3, 3), (4, 2), (3, 2), (2, 2), (2, 1), (1, 1) and (0, 0).]
Ex.2)

[Two DFS diagrams on the vertices A, B, C, D, …: the first shows the discovery times 1/ to 11/ being assigned; the second shows the final discovery/finishing times 1/11, 2/5, 3/3, 4/1, 5/2, 6/4, 7/10, 8/8, 9/6, 10/7 and 11/9.]
Ex.3)

[DFS diagrams on the vertices A and B, showing the discovery times 1/, 2/ and 3/ assigned at successive steps.]
1. Discrete Mathematics and Its Applications (Fifth Edition), K.N. Rosen: Tata McGraw-Hill (2003).

2.
Dynamic Programming
Structure                                          Page Nos.

1.0  Introduction                                       5
1.1  Objectives                                         8
1.2  The Problem of Making Change                       8
1.3  The Principle of Optimality                       13
1.4  Chained Matrix Multiplication                     14
1.5  Matrix Multiplication Using Dynamic Programming   15
1.6  Summary                                           17
1.7  Solutions/Answers                                 18
1.8  Further Readings                                  21
1.0 INTRODUCTION
In the earlier units of the course, we have discussed some well-known techniques, including the divide-and-conquer technique, for developing algorithms for algorithmically solvable problems. The divide-and-conquer technique, though quite useful in solving problems from a number of problem domains, may in some cases, as shown below, give quite inefficient algorithms to solve problems.
linear notation C(n, k)), where n and k are given non-negative integers with n ≥ k. One way of defining and calculating the binomial coefficient is by using the following recursive formula:

    C(n, k) = 1                               if k = 0 or k = n
    C(n, k) = C(n − 1, k − 1) + C(n − 1, k)   if 0 < k < n          (1.0.1)
The following recursive algorithm, named Bin(n, k), implements the above formula for computing the binomial coefficient:

Function Bin (n, k)
    If k = n or k = 0 then return 1
    else return Bin (n − 1, k − 1) + Bin (n − 1, k)
For computing Bin(n, k) for some given values of n and k, a number of terms Bin(i, j), 1 ≤ i ≤ n and 1 ≤ j ≤ k, particularly for smaller values of i and j, are repeatedly calculated. For example, to calculate Bin(7, 5), we compute Bin(6, 5) and Bin(6, 4). Now, for computing Bin(6, 5), we compute Bin(5, 4) and Bin(5, 5). But for calculating Bin(6, 4), we have to calculate Bin(5, 4) again. If the above argument is carried further for still smaller values, the number of repetitions of Bin(i, j) increases as the values of i and j decrease.

For given values of n and k, in order to compute Bin(n, k), we need to call Bin(i, j) for 1 ≤ i ≤ n − 1 and 1 ≤ j ≤ k − 1, and as the values of i and j decrease, the number of times Bin(i, j) is required to be called and executed generally increases.
The above example follows the Divide-and-Conquer technique in the sense that the task of calculating C(n, k) is replaced by the two relatively simpler tasks, viz., calculating C(n − 1, k) and C(n − 1, k − 1). But this technique, in this particular case,
for all i = 0, 1, 2, …, n and j = 1, 2, …, k.

[Table: rows indexed 0, 1, …, n and columns indexed 0, 1, …, k; initially only the base-case entries are filled.]
After filling up the entries of the first row, the table takes the following form:

[Table: the same grid, now with the first row and the first column filled with their base-case values.]
From the already calculated values of a given row i, adding successive pairs of consecutive values, we get the values for the (i + 1)th row. After completing the entries for the row with index 4, the table may appear as follows, where the blank entries to the right of the main diagonal are all zeros.

[Table: the grid with rows 0 to 4 filled with the binomial values C(i, j); the remaining rows are still empty.]
We summarize below the process followed above for calculating C(i, j):

First of all, the simplest values C(i, 0) = 1 for i = 1, 2, …, n and C(0, j) = 0 for j ≥ 1 are obtained directly from the given formula. Next, more complex values are calculated from the already available, less complex values. Obviously, the above-mentioned process is a bottom-up one.
Though the purpose of the above discussion was to introduce and explain the Dynamic Programming technique, we may also consider the complexity of calculating C(n, k) using the tabular method.
Space Complexity/Requirement: In view of the following two facts, it can be easily seen that the amount of space required is not for all the n × k entries but only for the k values of a single row, C(i, j) for j = 1, 2, …, k, independent of the value of i:

(i) each entry of row i depends only on entries of row (i − 1); and

(ii) in general, when the value C(i, j) of the ith row is calculated, the value C(i − 1, j) is no longer required, and hence the cell currently occupied by C(i − 1, j) can be used to store the value C(i, j).

Thus, at any time, one row's worth of space is enough to calculate C(n, k). Therefore, the space requirement is θ(k).
Time Complexity: If we notice the process of calculating successive values of the table, we find that any value C(i, j) is obtained by adding 1's. For example, C(4, 2) = 6 is obtained by adding C(3, 1) = 3 and C(3, 2) = 3. But then C(3, 1) is obtained by adding C(2, 0) = 1 and C(2, 1) = 2. Again, C(2, 1) is obtained by adding C(1, 0) = 1 and C(1, 1) = 1. Similarly, C(3, 2), and hence C(4, 2), can be shown to have been obtained by adding, directly or indirectly, a sequence of 1's. Also, the number of additions for calculating C(n, k) cannot be more than all the entries in the rows 1, 2, …, (n − 1), each row containing k elements.

Thus, the number of additions ≤ n · k.
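The one-row, bottom-up computation described above can be sketched as follows (the function name and the in-place update are illustrative choices, not the unit's code):

```python
def binomial(n, k):
    # Keep only one row of the table: Theta(k) space, at most n*k additions.
    row = [1] + [0] * k                # row 0: C(0, 0) = 1, C(0, j) = 0
    for i in range(1, n + 1):
        # Update right to left so that C(i-1, j-1) is still in row[j-1].
        for j in range(min(i, k), 0, -1):
            row[j] = row[j] + row[j - 1]
    return row[k]
```

Each value is obtained from the previous row's pair of entries, exactly as in the table method, but reusing the same k + 1 cells.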
1.1 OBJECTIVES
After going through this unit, you should be able to:
Let d1, d2, …, dk, with di > 0 for i = 1, 2, …, k, be the only coins that are available, such that each coin with denomination di is available in sufficient quantity for the purpose of making payments. Further, let A, a positive integer, be the amount to be paid using the above-mentioned coins. The problem is to use the minimum number of coins for the purpose.
The problem with the above-mentioned algorithm based on the greedy technique is that, in some cases, it may either fail or yield suboptimal solutions. In order to establish the inadequacy of algorithms based on the greedy technique, we consider the following two examples.
Example 1.2.1: Let us assume a hypothetical situation in which we have a supply of rupee-notes of denominations 5, 3 and 2, and we are to collect an amount of Rs. 9. Then, using the greedy technique, first we choose a note of Rupees 5. Next, we choose a 3-Rupee note to make a total amount of Rupees 8. But then, according to the greedy technique, we cannot go ahead in the direction of collecting Rupees 9. The greedy technique fails despite the fact that a solution exists: it is possible to make a payment of Rupees 9 using notes of denominations of Rupees 5, Rupees 3 and Rupees 2, viz., 9 = 5 + 2 + 2.
Example 1.2.2: Next, we consider another example, in which the greedy algorithm yields a solution, but the solution is not optimal, only suboptimal. For this purpose, we consider a hypothetical situation in which currency notes of denominations 1, 4 and 6 are available, and we have to collect an amount of 8. The greedy technique yields the solution 8 = 6 + 1 + 1. But this solution uses three currency notes/coins, whereas another solution using only two currency notes/coins, viz., 8 = 4 + 4, is available.
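Example 1.2.2 can be reproduced with a tiny sketch of the greedy rule (the function name and list representation are my own, not the unit's code):

```python
def greedy_change(amount, denominations):
    notes = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:         # always take the largest note that fits
            notes.append(d)
            amount -= d
    return notes
```

For the amount 8 with denominations 1, 4 and 6, the routine returns three notes, 6 + 1 + 1, although the optimal solution 4 + 4 uses only two.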
Next, we discuss how the Coin Problem is solved using Dynamic Programming
technique.
Each of the denominations di, 1 ≤ i ≤ k, is made a row label, and each of the values j for 1 ≤ j ≤ A is made a column label of the proposed table, as shown below, where A is the amount to be paid:
                Amount j →
                 1    2    3    4   …   A
denomination
1 = d1
d2
.
.
di                       C[i, j]
.
.
dk
In the table given above, 0 < d1 < d2 < … < dk, and C[i, j] denotes the minimum number of coins of denominations d1, d2, …, di (only) that are used to make an amount j, where more than one coin of the same denomination may be used. The value C[i, j] is entered in the table with row label di and column label j.
Next, in respect of entries in the table, we make the following two observations:

(i) C[i, 0] = 0 for i = 1, 2, …, k, because no coin is needed to make the amount zero;

(ii) C[1, j] = j for j = 1, 2, …, A, because, with coins of denomination d1 = 1 only, j coins are needed to make the amount j.

[Table: the grid above, with column 0 and the row for d1 filled accordingly.]
Next, for i ≥ 2 and j ≥ 1, the value C[i, j], the minimum number of coins of denominations up to di (only) that sum up to j, can be obtained recursively through either of the following two ways:

(i)  C[i, j] = 1 + C[i, j − di]   if j ≥ di
     (a coin of denomination di is used);                         (1.2.1)

(ii) C[i, j] = C[i − 1, j]        if i ≥ 2
     (no coin of denomination di is used).                        (1.2.2)

Thus, C[i, j] = min {1 + C[i, j − di], C[i − 1, j]}.
Comment 1.2.3

If j < di in case (1.2.1), then Equation (1.2.1) is impossible. Mathematically, we can say that C[i, j − di] = +∞ if j < di, because this case is then automatically excluded from consideration when calculating C[i, j]. Similarly, we take

C[i − 1, j] = +∞   if   i − 1 < 1.
In order to explain the above method, let us consider the earlier example, for which the greedy algorithm gave only a suboptimal solution.

Example 1.2.4: Using the Dynamic Programming technique, find out the minimum number of coins required to collect Rupees 8 out of coins of denominations 1, 4 and 6.

From the earlier discussion, we already know the following portion of the table to be developed using the Dynamic Programming technique:

[Table: rows d1 = 1, d2 = 4, d3 = 6; column 0 contains zeros, and the row for d1 = 1 contains C[1, j] = j.]
C[2, 1] = C[1, 1] = 1, since 1 = j < d2 = 4.

Similarly, we can show that

C[3, 1] = C[2, 1] = 1.

Next, we consider

C[2, 2] = min {1 + C[2, 2 − 4], C[1, 2]}.

Again, C[2, 2 − 4] = C[2, −2] = +∞.

Therefore, C[2, 2] = C[1, 2] = 2. Similarly, C[3, 2] = 2.

On similar lines, we can show that

C[2, 3] = C[1, 3] = 3
C[3, 3] = C[2, 3] = 3.

The next interesting case is C[2, 4], i.e., to find out the minimum number of coins to make an amount of 4 out of coins of denominations 1, 4 and 6. By definition,

C[2, 4] = min {1 + C[2, 0], C[1, 4]} = min {1 + 0, 4} = 1.
[Table: the completed rows for d1 = 1, d2 = 4 and d3 = 6, up to the amount 8.]
Let us formalize the method explained above of computing C[k, A], in the general case, in the form of the following algorithm:

Function Min-Num-Coins (A, D[1..k])
{gives the minimum number of coins to add up to A, where the number k of denominations and the value A are supplied by the calling procedure/program}
array C[1..k, 0..A]
For i = 1 to k
    Read (d[i])
{reads the various denominations available, with coins of each denomination in sufficient numbers}
{assuming d1 = 1, initialize the table C[ , ] as follows}
For i = 1 to k
    C[i, 0] = 0
For j = 1 to A
    C[1, j] = j
For i = 2 to k
    For j = 1 to A
        if j < d[i] then
            C[i, j] = C[i − 1, j]
        else
            C[i, j] = min {1 + C[i, j − d[i]], C[i − 1, j]}
return C[k, A]
Comments 1.2.5

Comment 1: Though the above algorithm explicitly gives only the number of coins which is minimum to make a pre-assigned amount A, it can also be used to determine the set of coins of various denominations that add up to A: a coin of denomination di is included or not depending upon whether 1 + C[i, j − di] ≤ C[i − 1, j] or not. Applying the above rule recursively for decreasing values of i and j, we know which coins are chosen for making an amount j out of the available coins.
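The table construction, together with the coin-recovery rule of Comment 1, can be sketched as follows (0-based lists and the function name are my own choices, not the unit's code):

```python
INF = float('inf')                     # stands for the +infinity convention above

def min_num_coins(amount, denoms):
    k = len(denoms)
    C = [[INF] * (amount + 1) for _ in range(k)]
    for i in range(k):
        C[i][0] = 0                    # no coin is needed for the amount 0
    for i in range(k):
        for j in range(1, amount + 1):
            if i > 0:
                C[i][j] = C[i - 1][j]  # do not use denomination denoms[i]
            if j >= denoms[i] and 1 + C[i][j - denoms[i]] < C[i][j]:
                C[i][j] = 1 + C[i][j - denoms[i]]  # use one coin of denoms[i]
    # Recover the chosen coins by re-applying the rule of Comment 1.
    coins, i, j = [], k - 1, amount
    while j > 0:
        if j >= denoms[i] and C[i][j] == 1 + C[i][j - denoms[i]]:
            coins.append(denoms[i])
            j -= denoms[i]
        else:
            i -= 1
    return C[k - 1][amount], coins
```

For Example 1.2.4 (amount 8, denominations 1, 4, 6) this yields two coins, 4 + 4, where the greedy technique needed three.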
Computing (A1 A2) A3 requires 14 × 6 × 90 = 7560 plus 14 × 90 × 4 = 5040 scalar multiplications, i.e., 12600 in all, whereas computing A1 (A2 A3) requires 6 × 90 × 4 = 2160 plus 14 × 6 × 4 = 336, i.e., only 2496.

This raises the question of how to parenthesize the pairs of matrices within the expression A1 A2 … An, a product of n matrices which is defined, so as to optimize the computation of the product A1 A2 … An. The problem is known as Chained Matrix Multiplication.
Brute-Force Method: One way of finding the optimal method (i.e., the method which uses the minimum number of scalar (numerical) operations) is to parenthesize the expression A1 A2 … An in all possible ways and calculate the number of scalar multiplications required for each way; then choose the way which requires the minimum number of scalar multiplications.

However, if T(n) denotes the number of ways of putting parentheses for pairing the expression A1 A2 … An, T(n) is an exponentially increasing function. The rate at which the values of T(n) increase may be seen from the following table of values of T(n) for n = 1, 2, …:
n    :   1    2    3    4    5    10      15
T(n) :   1    1    2    5    14   4862    2674440
Let us first define the problem. Let Ai, 1 ≤ i ≤ n, be a di−1 × di matrix. Let the vector d[0..n] store the dimensions of the matrices, so that the dimension of Ai is di−1 × di for i = 1, 2, …, n. By definition, any subsequence Aj … Ak of A1 A2 … An for 1 ≤ j ≤ k ≤ n is a well-defined product of matrices. Let us consider a table m[1..n, 1..n] in which the entries mij, for 1 ≤ i ≤ j ≤ n, represent the optimal (i.e., minimum) number of operations required to compute the product matrix (Ai … Aj).

We fill up the table diagonal-wise, i.e., in one iteration we fill up one diagonal mi,i+s at a time, for some constant s ≥ 0. Initially, we consider the main diagonal mii, for which s = 0; then the diagonal mi,i+s for s = 1, and so on.

First, we fill up the entries mii, i = 1, 2, …, n. Now, mii stands for the minimum number of scalar multiplications required to compute the product of the single matrix Ai; but the number of scalar multiplications required is zero. Hence,

mii = 0    for i = 1, 2, …, n.
Next, mi,i+1 denotes the minimum number of scalar multiplications required to find the product Ai Ai+1. As Ai is a di−1 × di matrix and Ai+1 is a di × di+1 matrix, there is a unique number of scalar multiplications for computing Ai Ai+1, giving

mi,i+1 = di−1 di di+1    for i = 1, 2, …, (n − 1).

The above case is also subsumed by the general case of computing mi,i+s for s ≥ 1, where the product (Ai … Ai+s) is split after some matrix Aj with i ≤ j < i + s. Assuming the optimal numbers of scalar multiplications, viz., mi,j and mj+1,i+s, are already known, we can say that

mi,i+s = min { mi,j + mj+1,i+s + di−1 dj di+s : i ≤ j < i + s }    for i = 1, 2, …, n − s,

where the term di−1 dj di+s represents the number of scalar multiplications required to multiply the resultant matrices (Ai … Aj) and (Aj+1 … Ai+s).

Summing up the discussion, we come to the following definition of mi,i+s:

for s = 0:       mi,i = 0                    for i = 1, 2, …, n
for s = 1:       mi,i+1 = di−1 di di+1       for i = 1, 2, …, (n − 1)
for 1 < s < n:   mi,i+s = min { mi,j + mj+1,i+s + di−1 dj di+s : i ≤ j < i + s }
                                             for i = 1, 2, …, (n − s)
Let us apply the method to the product A1 A2 A3 A4 of four matrices, where

A1 is of order 14 × 6,
A2 is of order 6 × 90,
A3 is of order 90 × 4,
A4 is of order 4 × 35,

so that d0 = 14, d1 = 6, d2 = 90, d3 = 4 and d4 = 35. For s = 1:

m12 = d0 d1 d2 = 14 × 6 × 90 = 7560
m23 = d1 d2 d3 = 6 × 90 × 4 = 2160
m34 = d2 d3 d4 = 90 × 4 × 35 = 12600

For s = 2:

m13 = min { m11 + m23 + d0 d1 d3, m12 + m33 + d0 d2 d3 }
    = min { 0 + 2160 + 336, 7560 + 0 + 5040 } = 2496

m24 = min { m22 + m34 + d1 d2 d4, m23 + m44 + d1 d3 d4 }
    = min { 0 + 12600 + 18900, 2160 + 0 + 840 } = 3000

Finally, for s = 3:

m14 = min { m11 + m24 + d0 d1 d4,     {when j = 1}
            m12 + m34 + d0 d2 d4,     {when j = 2}
            m13 + m44 + d0 d3 d4 }    {when j = 3}
    = min { 0 + 3000 + 2940, 7560 + 12600 + 44100, 2496 + 0 + 1960 }
    = min { 5940, 64260, 4456 } = 4456.
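The diagonal-by-diagonal computation can be sketched in Python as follows (an illustrative sketch, not the unit's pseudo-code; here d is the dimension vector d[0..n], so that Ai is a d[i−1] × d[i] matrix):

```python
def matrix_chain(d):
    n = len(d) - 1                              # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # 1-based m[i][j]; m[i][i] = 0
    for s in range(1, n):                       # fill one diagonal m[i][i+s] at a time
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m[1][n]
```

For the dimension vector [14, 6, 90, 4, 35] used above, it returns 4456, the value of m14 just computed.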
1.6 SUMMARY
(1)
(2)
(3)
(4)
(5)
1.7 SOLUTIONS/ANSWERS

Ex. 1)

First of all, it can be easily verified that the Principle of Optimality is valid, in the sense that, for an optimal solution of the overall problem, each subsolution is also optimal: a non-optimal solution of a subproblem, when replaced by a better solution of the subproblem, would lead to a better-than-optimal solution of the overall problem, a contradiction.
As usual, for solving an optimization problem using the Dynamic Programming technique, we set up a table V[1..n, 0..W].

In order to label the rows, we first of all order the given objects according to increasing relative values R = v/w. Thus, the first object O1 is the one with the minimum relative value R1; the object O2 is the one with the next least relative value R2, and so on. The last object in the sequence is On, with the maximum relative value Rn.
The ith row of the table corresponds to the object Oi having the ith relative value, when the values are arranged in increasing order. The jth column corresponds to weight j for 0 ≤ j ≤ W. The entry Knap[i, j] denotes the maximum value that can be packed in the knapsack when the objects O1, O2, …, Oi only are used and the included objects have total weight at most j.

Next, in order to fill up the entries Knap[i, j], 1 ≤ i ≤ n and 0 ≤ j ≤ W, of the table, we can check, as was done in the coin problem, that
(i)  Knap [i, 0] = 0    for i = 1, 2, …, n

(ii) Knap [1, j] = V    for j = 1, …, W, where V is the value of O1.
Another version allows any fraction xi with 0 ≤ xi ≤ 1. However, in this problem, we assume either xi = 1 or xi = 0.
[Table: rows 1 to n, columns 0 to W; column 0 is filled with zeros.]
Thus, we define

Knap [i, j] = max { Knap [i − 1, j], Knap [i − 1, j − wi] + vi }.

The above equation is valid for i ≥ 2 and j ≥ wi. In order that the above equation may be applicable otherwise also, without violating the intended meaning, we take

(i)  Knap [0, j] = 0 for j ≥ 0, and

(ii) Knap [i − 1, j − wi] = −∞ if j < wi.
We explain the Dynamic Programming based solution suggested above through the following example.

We are given six objects, whose weights are respectively 1, 2, 5, 6, 7 and 10 units, and whose values respectively are 1, 6, 18, 22, 28 and 43. Thus, the relative values in increasing order are respectively 1.00, 3.00, 3.60, 3.67, 4.00 and 4.30. If we can carry a maximum weight of 12, then the table below shows that we can compose a load whose value is 49.
Weights →                          0   1   2   3   4   5   6   7   8   9  10  11  12

w1 = 1,  v1 = 1,  R1 = 1.00        0   1   1   1   1   1   1   1   1   1   1   1   1
w2 = 2,  v2 = 6,  R2 = 3.00        0   1   6   7   7   7   7   7   7   7   7   7   7
w3 = 5,  v3 = 18, R3 = 3.60        0   1   6   7   7  18  19  24  25  25  25  25  25
w4 = 6,  v4 = 22, R4 = 3.67        0   1   6   7   7  18  22  24  28  29  29  40  41
w5 = 7,  v5 = 28, R5 = 4.00        0   1   6   7   7  18  22  28  29  34  35  40  46
w6 = 10, v6 = 43, R6 = 4.30        0   1   6   7   7  18  22  28  29  34  43  44  49

(The row labels list the weights, values and relative values of the objects.)
Algorithm for the solution, explained above, of the Knapsack Problem:

Function Knapsack (W, Weight [1..n], Value [1..n])
for i = 1, …, n
    for j = 1, …, W
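The table construction can be sketched as follows (an illustrative implementation with 0-based lists; not the unit's literal code):

```python
def knapsack(W, weight, value):
    n = len(weight)
    # K[i][j]: best value using objects 1..i within capacity j.
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            K[i][j] = K[i - 1][j]                # object i excluded
            if j >= weight[i - 1]:               # object i included, if it fits
                K[i][j] = max(K[i][j],
                              K[i - 1][j - weight[i - 1]] + value[i - 1])
    return K[n][W]
```

For the six objects above and W = 12, it reproduces the table's final entry, 49.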
2.
3.
4.
5.
6.
7.
Structure                                          Page Nos.

2.0   Introduction                                     22
2.1   Objectives                                       23
2.2   Some Examples                                    23
2.3   Formalization of Greedy Technique                25
      2.3.1  …
2.4   …                                                27
2.5   …                                                31
2.6   …                                                34
2.7   …                                                38
2.8   …                                                41
2.9   …                                                41
2.10  …                                                46
2.0 INTRODUCTION

Algorithms based on the Greedy technique are used for solving optimization problems. An optimization problem is one in which some value (or set of values) of interest is required to be either minimized or maximized w.r.t. some given relation on the values. Such problems include maximizing profits or minimizing costs of, say, production of some goods. Other examples of optimization problems are:

finding the minimum number of currency notes required for an amount, say, of Rs. 289, where an arbitrary number of currency notes of each denomination from Rs. 1 to Rs. 100 are available, and

finding a shortest path covering a number of given cities, where the distances between pairs of cities are given.

As we will study later, the algorithms based on the greedy technique, if they exist, are easy to think of, implement and explain. However, for many interesting optimization problems, no algorithm based on the greedy technique yields an optimal solution. In support of this claim, let us consider the following example:
Example 1.1: Let us suppose that we have to go from city A to city E through either city B or city C or city D, with the costs of reaching between pairs of cities as shown below:

[Figure 2.0.1: A is joined to B, C and D with costs 3000, 4000 and 5000 respectively; B, C and D are joined to E with costs 8000, 5000 and 4000 respectively.]
Then the greedy technique suggests that we take the route from A to B, the cost of which, Rs. 3000, is the minimum among the three costs (viz., Rs. 3000, Rs. 4000 and Rs. 5000) of the available routes.

However, at B there is only one route available to reach E. Thus, the greedy algorithm suggests the route from A to B to E, which costs Rs. 11000. But the route from A to C to E costs only Rs. 9000; also, the route from A to D to E costs Rs. 9000.

Thus, a locally better solution, at some stage, suggested by the greedy technique may yield an overall (or globally) costlier solution.
Greedy Techniques
2.1 OBJECTIVES

After studying this unit, you should be able to:
i) the sum of the denominations of the notes picked up so far, plus D, does not exceed Rs. 289, and

ii) if D1 is any other denomination satisfying condition (i), then D1 ≤ D.

In other words, the picked-up note's denomination D is the largest among all the denominations satisfying condition (i) above.

The above-mentioned step of picking a note of denomination D, satisfying the above two conditions, is repeated till either the amount of Rs. 289/- is formed or we are clear that we cannot make an amount of Rs. 289/- out of the given denominations.
We apply the above-mentioned intuitive solution as follows:

To deliver Rs. 289 with the minimum number of currency notes, the notes of different denominations are chosen and rejected as shown below:

Chosen-Note-Denomination        Total-Value-So-Far
100                             0 + 100 ≤ 289
100                             100 + 100 ≤ 289
100                             200 + 100 > 289   (rejected)
50                              200 + 50 ≤ 289
50                              250 + 50 > 289    (rejected)
20                              250 + 20 ≤ 289
20                              270 + 20 > 289    (rejected)
10                              270 + 10 ≤ 289
10                              280 + 10 > 289    (rejected)
5                               280 + 5 ≤ 289
5                               285 + 5 > 289     (rejected)
2                               285 + 2 < 289
2                               287 + 2 = 289
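The choose/reject process in the table can be mirrored by a small routine (an illustrative sketch; the function name is an assumption):

```python
def pick_notes(amount, denominations):
    chosen, total = [], 0
    for d in sorted(denominations, reverse=True):
        # condition (i): adding d must not overshoot the amount;
        # condition (ii): d is the largest remaining denomination doing so
        while total + d <= amount:
            chosen.append(d)
            total += d
    return chosen, total
```

For Rs. 289 it yields the eight notes 100, 100, 50, 20, 10, 5, 2, 2, as in the table.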
i)
ii)
the sum of the denominations of the picked-up notes becomes 100, which is greater than 90. Therefore, we do not pick up any note of denomination 50 or above.
iii)
iv)
v)
Next, we attempt to pick up a note of the next denomination, viz., 20. But, in that case, the sum of the denominations of the picked-up notes becomes 80 + 20 = 100, which is again greater than 90. Therefore, we do not pick up any note of denomination 20 or above.
vi)
A set or list of given/candidate values from which choices are made, to reach
a solution. For example, in the case of Minimum Number of Notes problem,
the list of candidate values (in rupees) of notes is {1, 2, 5, 10, 20, 50, 100, 500,
1000}. Further, the number of notes of each denomination should be clearly
(iii) Set of Considered and Rejected Values: As the name suggests, this is the set of all those values which are considered but rejected. Let us call this set

RV: Set of Considered and Rejected Values.

A candidate value may belong to both CV and RV. But, once a value is put in RV, this value cannot be put any more in CV. For example, to make an amount of Rs. 289, once we have chosen two notes each of denomination 100, we have

CV = {100, 100}.

At this stage, we have collected Rs. 200 out of the required Rs. 289, and RV = {1000, 500}. So, we can choose a note of any denomination except those in RV, i.e., except 1000 and 500. Thus, at this stage, we can choose a note of denomination 100. However, this choice of 100 again would make the total amount collected so far Rs. 300, which exceeds Rs. 289. Hence we reject the choice of 100 a third time and put 100 in RV, so that now RV = {1000, 500, 100}. From this point onward, we cannot choose even the denomination 100.
Next, we consider some of the functions which need to be defined in an algorithm using the greedy technique to solve an optimization problem.

(iv) A function, say SolF, that checks whether a solution is reached or not. However, the function does not check for the optimality of the obtained solution. In the case of the Minimum Number of Notes problem, the function SolF finds the sum of all values in the multi-set CV and compares it with the desired amount, say Rs. 289. For example, if at one stage CV = {100, 100}, then the sum of the values in CV is 200, which does not equal 289; the function SolF then returns 'Solution not reached'. However, at a later stage, when CV = {100, 100, 50, 20, 10, 5, 2, 2}, then, as the sum of the values in CV equals the required amount, the function SolF returns the message 'Solution reached'.

It may be noted that the function only informs about a possible solution. However, the solution provided through SolF may not be optimal. For instance, in Example 2.2.3, when we reach CV = {60, 10, 10}, then SolF returns 'Solution reached'. However, as discussed earlier, the solution 80 = 60 + 10 + 10 using three notes is not optimal, because another solution using only two notes, viz., 80 = 40 + 40, is cheaper.
(v) Selection Function, say SelF, finds out the most promising candidate value out of the values not yet rejected, i.e., those not in RV. In the case of the Minimum Number of Notes problem, for collecting Rs. 289, at the stage when RV = {1000, 500} and CV = {100, 100}, the function SelF first attempts the denomination 100. But, when it is found through the function SolF that by addition of 100 to the values already in CV the total value becomes 300, which exceeds 289, the value 100 is rejected and put in RV. Next, the function SelF attempts the next lower denomination, 50. The value 50, when added to the sum of values in CV, gives 250, which is less than 289. Hence, the value 50 is returned by the function SelF.
(vi) The Feasibility-Test Function, say FeaF: when a new value, say v, is chosen by the function SelF, the function FeaF checks whether the new set, obtained by adding v to the set CV of already selected values, is a possible part of the final solution. Thus, in the case of the Minimum Number of Notes problem, if the amount to be collected is Rs. 289 and at some stage CV = {100, 100}, then the function SelF returns 50. At this stage, the function FeaF takes control. It adds 50 to the sum of the values in CV, and on finding that the sum, 250, is less than the required value 289, informs the main/calling program that {100, 100, 50} can be a part of some final solution and needs to be explored further.

(vii) The Objective Function, say ObjF, gives the value of the solution. For example, in the case of the problem of collecting Rs. 289, as CV = {100, 100, 50, 20, 10, 5, 2, 2} is such that the sum of the values in CV equals the required value 289, the function ObjF returns the number of notes in CV, i.e., the number 8.
After having introduced a number of sets and functions that may be required by an algorithm based on the greedy technique, we give below the outline of the greedy technique, say Greedy-Structure. For any actual algorithm based on the greedy technique, the various sets and functions discussed above have to be replaced by actual sets and functions.

These functions depend upon the problem under consideration. The Greedy-Structure outlined below takes the set GV of given values as an input parameter and returns CV, the set of chosen values. For developing any algorithm based on the greedy technique, the following function outline will be used.
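Since the Greedy-Structure pseudo-code itself is not reproduced in this extract, the following is only an illustrative sketch of such an outline; solf, self_ and feaf stand for the SolF, SelF and FeaF functions described above, and every name and representation here is an assumption:

```python
def greedy_structure(gv, solf, self_, feaf):
    cv, rv = [], set()             # CV: chosen values; RV: considered and rejected
    while not solf(cv) and len(rv) < len(set(gv)):
        v = self_(gv, rv)          # most promising candidate not in RV
        if v is None:
            break                  # no candidate left
        if feaf(cv, v):
            cv.append(v)           # v can be part of some final solution
        else:
            rv.add(v)              # considered and rejected
    return cv                      # may or may not be an (optimal) solution
```

For the Minimum Number of Notes problem, solf would compare sum(cv) with the target, self_ would return the largest denomination not in rv, and feaf would check that the new total does not overshoot.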
2.3.1
Definitions

A spanning tree of a connected graph, say G = (V, E) with V as the set of vertices and E as the set of edges, is a connected acyclic subgraph (i.e., a tree) of G that contains all the vertices of the graph.

A minimum spanning tree of a weighted connected graph is a spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights of all its edges.

The minimum spanning tree problem is the problem of finding a minimum spanning tree for a given weighted connected graph.

The minimum spanning tree problem has a number of useful applications, in the following type of situation: suppose we are given a set of cities along with the distances between each pair of cities. In view of a shortage of funds, it is desired that, instead of connecting each pair of cities directly, we provide roads costing the least but allowing passage between any pair of cities along the provided roads. However, the road between some pair of cities may not be direct, but may pass through a number of other cities.
Next, we illustrate the concepts of spanning tree and minimum spanning tree through the following example. Let us consider the connected weighted graph G given in Figure 2.4.1.

[Figure 2.4.1: a connected weighted graph G on the vertices a, b, c, d, with edge weights including 1, 2 and 3.]

For the graph of Figure 2.4.1 given above, each of Figure 2.4.2, Figure 2.4.3 and Figure 2.4.4 shows a spanning tree, viz., T1, T2 and T3 respectively. Out of these, T1 is a minimal spanning tree of G, of weight 1 + 2 + 3 = 6.
[Figure 2.4.2: the spanning tree T1, with edges of weights 1, 2 and 3 joining a, b, c and d.]

[Figure 2.4.3: the spanning tree T2.]

[Figure 2.4.4: the spanning tree T3.]
Remark 2.4.1:
The weight may denote (i) length of an edge between pair of vertices or (ii) the cost of
reaching from one town to the other or (iii) the cost of production incurred in reaching
from one stage of production to the immediate next stage of production or (iv) the cost
of construction of a part of a road or of laying telephone lines between a pair of towns.
Remark 2.4.2:
The weights on edges are generally positive. However, in some situations the weight of an edge may be zero or even negative. A negative weight may appear appropriate when the problem under consideration is not about minimizing costs but about maximizing profits, while we still want to use minimum spanning tree algorithms. However, in such cases it is not appropriate to use negative weights, because the more we traverse a negative-weight edge, the lesser the cost becomes, whereas with repeated traversals of edges the cost should increase instead of decreasing. Therefore, if we want to apply the minimum spanning tree technique to profit-maximizing problems, then instead of using negative weights, we replace each profit pi by M/pi, where M is some positive number such that

M > Max {pi : pi is a profit value}.
Remark 2.4.3:
From graph theory, we know that a spanning tree of a connected graph with n vertices must have exactly (n − 1) edges.
As mentioned earlier, whenever we want to develop an algorithm based on greedy
technique, we use the function Greedy-Structure given under 2.3.1. For this purpose,
we need to find appropriate values of the various sets and functions discussed in Section 2.3.
In the case of the problem of finding minimum-spanning tree for a given
connected graph, the appropriate values are as follows:
(i)
GV: the set of given values is E, the set of edges of the given graph.
(ii)
CV: The structure of chosen values is given by those edges from E, which
together will form the required minimum-weight spanning tree.
(iii)
RV: set of rejected values will be given by those edges in E, which at some
stage will form a cycle with earlier selected edges.
(iv)
In the case of the problem of minimum spanning tree, the function SolF
that checks whether a solution is reached or not, is the function that checks
that
(a)
(b)
(c)
(v)
Selection Function: depends upon the particular algorithm used for the
purpose. There are two well-known algorithms, viz., Prim's algorithm and
Kruskal's algorithm, for finding a minimum spanning tree. We will
discuss these algorithms in detail in subsequent sections.
(vi)
FeaF: Feasibility Test Function: In this case, when the selection function
SelF returns an edge depending on the algorithm, the feasibility test function
FeaF will check whether the newly found edge forms a cycle with the earlier
selected edges. If the new edge actually forms a cycle, then generally the newly
found edge is dropped and the search for still another edge starts. However, in
some of the algorithms, it may happen that some earlier chosen edge is
dropped instead.
(vii)
In the case of Minimum Spanning Tree problem, the objective function may
return
(a)
(b)
the set of edges that constitute the required minimum spanning tree and
the weight of the tree selected in (a) above.
VT = (a)
ET = ∅

Figure: 2.5.1
In the first iteration, the edge having the minimum weight among the edges having a as one of its vertices is chosen. In this case, the edge ab with weight 1 is chosen out of the edges ab, ac and ad of weights 1, 5 and 2 respectively. Thus, after the first iteration, we have the given graph with chosen edges in bold and VT and ET as follows:
VT = (a, b)
ET = ((a, b))

Figure: 2.5.2
In the next iteration, out of the edges not chosen earlier, not making a cycle with an earlier chosen edge, and having either a or b as one of its vertices, the edge with minimum weight is chosen. In this case the vertex b does not have any other edge originating out of it. In such cases, if required, the weight of a non-existent edge may be taken as ∞. Thus the choice is restricted to two edges, viz., ad and ac, respectively of weights 2 and 5. Hence, in this iteration the edge ad is chosen. Hence, after the second iteration, we have the given graph with chosen edges and VT and ET as follows:
VT = (a, b, d)
ET = ((a, b), (a, d))
Figure: 2.5.3
In the next iteration, out of the edges, not chosen earlier and not making a cycle with
earlier chosen edges and having either a, b or d as one of its vertices, the edge with
minimum weight is chosen. Thus choice is restricted to edges ac, dc and de with
weights respectively 5, 3, 1.5. The edge de with weight 1.5 is selected. Hence, after
third iteration we have the given graph with chosen edges and VT and ET as
follows:
VT = (a, b, d, e)
ET = ((a, b), (a, d), (d, e))
Figure: 2.5.4
In the next iteration, out of the edges, not chosen earlier and not making a cycle with
earlier chosen edge and having either a, b, d or e as one of its vertices, the edge with
minimum weight is chosen. Thus, choice is restricted to edges dc and ac with weights
respectively 3 and 5. Hence the edge dc with weight 3 is chosen. Thus, after fourth
iteration, we have the given graph with chosen edges and VT and ET as follows:
VT = (a, b, d, e, c)
ET = ((a, b), (a, d), (d, e), (d, c))
Figure: 2.5.5
At this stage, it can be easily seen that each of the vertices, is on some chosen edge
and the chosen edges form a tree.
Given below is a semi-formal definition of Prim's algorithm.
Algorithm Spanning-Prim (G)
// the algorithm constructs a minimum spanning tree
// for which the input is a weighted connected graph G = (V, E)
// the output is the set of edges, denoted by ET, which together constitute a
// minimum spanning tree of the given graph G
// each pair of vertices that are not adjacent in the graph may be given
// the label ∞, indicating infinite distance between the pair of vertices.
// the set of vertices of the required tree is initialized with the vertex v0
VT ← {v0}
ET ← ∅ // initially ET is empty
// let n = number of vertices in V
For i = 1 to n − 1 do
    find a minimum-weight edge e = (v1, u1) among all the edges such that
    v1 is in VT and u1 is in V − VT
    VT ← VT ∪ {u1}
    ET ← ET ∪ {e}
Return ET
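The algorithm above can be turned into a short runnable sketch. The adjacency-dictionary representation and the function name prim_mst are our own illustrative choices, not part of the text; the sample graph is the one reconstructed from the worked example (edges ab = 1, ac = 5, ad = 2, cd = 3, de = 1.5).

```python
def prim_mst(graph, start):
    """Spanning-Prim sketch: grow VT from a start vertex, always adding
    the cheapest edge that leaves the tree built so far.

    graph: dict mapping each vertex to a dict {neighbour: weight}.
    Returns ET as a list of (v, u, weight) edges in the order chosen.
    """
    vt = {start}                 # VT <- {v0}
    et = []                      # ET <- empty set
    for _ in range(len(graph) - 1):
        best = None
        # find a minimum-weight edge (v1, u1) with v1 in VT, u1 in V - VT
        for v in vt:
            for u, w in graph[v].items():
                if u not in vt and (best is None or w < best[0]):
                    best = (w, v, u)
        w, v, u = best
        vt.add(u)                # VT <- VT U {u1}
        et.append((v, u, w))     # ET <- ET U {e}
    return et

# Graph of the worked example above.
g = {'a': {'b': 1, 'c': 5, 'd': 2},
     'b': {'a': 1},
     'c': {'a': 5, 'd': 3},
     'd': {'a': 2, 'c': 3, 'e': 1.5},
     'e': {'d': 1.5}}
print(prim_mst(g, 'a'))
# [('a', 'b', 1), ('a', 'd', 2), ('d', 'e', 1.5), ('d', 'c', 3)]
```

The edges come out in exactly the order of the four iterations traced above.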
Ex. 3) Using Prim's algorithm, find a minimal spanning tree for the graph given below:

[figure: a weighted graph on vertices a, b, …, e, with edge weights including 1, 4 and 8]
(i) First of all, order all the weights of the edges in increasing order. Then repeat the following two steps till a set of edges is selected containing all the vertices of the given graph:

(ii) Choose an edge having the weight which is the minimum of the weights of the edges not selected so far.

(iii) If the new edge forms a cycle with any subset of the earlier selected edges, then drop it; else, add the edge to the set of selected edges.
We illustrate Kruskal's algorithm through the following example.
Example 2.6.1:
Let us consider the following graph, for which the minimal spanning tree is required.
Figure: 2.6.1
Let Eg denote the set of edges of the graph that are chosen upto some stage.
According to the step (i) above, the weights of the edges are arranged in increasing
order as the set
{1, 3, 4.2 ,5, 6}
In the first iteration, the edge (a,b) is chosen, which is of weight 1, the minimum of all the weights of the edges of the graph. As a single edge does not form a cycle, the edge (a,b) is selected, so that

Eg = ((a,b))
After first iteration, the graph with selected edges in bold is as shown below:
Figure: 2.6.2
Second Iteration
Next the edge (c,d) is of weight 3, minimum for the remaining edges. Also edges
(a,b) and (c,d) do not form a cycle, as shown below. Therefore, (c,d) is selected
so that,
Eg = ((a,b), (c,d))
Thus, after second iteration, the graph with selected edges in bold is as shown below:
Figure: 2.6.3
It may be observed that the selected edges do not form a connected subgraph or
subtree of the given graph.
Third Iteration
Next, the edge (a,d) is of weight 4.2, the minimum for the remaining edges. Also, the
edges in Eg along with the edge (a,d) do not form a cycle. Therefore, (a,d) is selected
so that new Eg = ((a,b), (c,d), (a,d)). Thus after third iteration, the graph with selected
edges in bold is as shown below.
Figure: 2.6.4
Fourth Iteration
Next, the edge (a,c) is of weight 5, the minimum for the remaining edges. However, the edge (a,c) forms a cycle with two edges in Eg, viz., (a,d) and (c,d). Hence (a,c) is not selected, and hence not considered as a part of the to-be-found spanning tree.
Figure: 2.6.5
At the end of fourth iteration, the graph with selected edges in bold remains the same
as at the end of the third iteration, as shown below:
Figure: 2.6.6
Fifth Iteration
Next, the edge (e,d), the only remaining edge that can be considered, is considered. As (e,d) does not form a cycle with any of the edges in Eg, the edge (e,d) is put in Eg. The graph at this stage, with selected edges in bold, is as follows:
Figure: 2.6.7
Figure: 2.6.7
At this stage, we find that each of the vertices of the given graph is a vertex of some edge in Eg. Further, we observe that the edges in Eg form a tree and hence form the required spanning tree. Also, from the choice of the edges in Eg, it is clear that the spanning tree is of minimum weight. Next, we consider a semi-formal definition of Kruskal's algorithm.
ALGORITHM Spanning-Kruskal (G)
// The algorithm constructs a minimum spanning tree by choosing successively edges
// of minimum weights out of the remaining edges.
// The input to the algorithm is a connected graph G = (V, E), in which V is the set of
// vertices and E the set of edges, and the weight of each edge is also given.
// The output is the set of edges, denoted by ET, which constitutes a minimum
// spanning tree of G.
// the variable edge-counter is used to count the number of selected edges so far.
// variable t is used to count the number of edges considered so far.
Arrange the edges in E in nondecreasing order of the weights of edges. After the
arrangement, the edges in order are labeled as e1, e2, …, e|E|
ET ← ∅ // initialize the set of tree edges as empty
edge-counter ← 0 // initialize the edge-counter to zero
t ← 0 // initialize the number of processed edges as zero
// let n = number of vertices in V
While edge-counter < n − 1 do
    t ← t + 1 // increment the counter for number of edges considered so far
    If the edge et does not form a cycle with any subset of edges in ET then
    begin
        // if et along with edges earlier in ET does not form a cycle
        // then add et to ET and increase the edge counter
        ET ← ET ∪ {et};
        edge-counter ← edge-counter + 1
    end if
Return ET
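The cycle test in Spanning-Kruskal is usually implemented with a Union-Find (disjoint-set) structure: two endpoints lie on a cycle with already-chosen edges exactly when they are in the same component. A runnable sketch, with our own names and a deliberately minimal union-find (no path compression), using the graph of Example 2.6.1:

```python
def kruskal_mst(vertices, edges):
    """Spanning-Kruskal sketch: take edges in nondecreasing weight order,
    skipping any edge whose endpoints are already connected (a cycle).

    edges: list of (weight, u, v) triples. Returns ET as (u, v, weight).
    """
    parent = {v: v for v in vertices}    # each vertex starts alone

    def find(x):                         # root of x's component
        while parent[x] != x:
            x = parent[x]
        return x

    et = []
    for w, u, v in sorted(edges):        # nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                     # no cycle: merge and keep the edge
            parent[ru] = rv
            et.append((u, v, w))
        if len(et) == len(vertices) - 1: # n - 1 edges: tree complete
            break
    return et

# Graph of Example 2.6.1: weights {ab=1, cd=3, ad=4.2, ac=5, ed=6}.
edges = [(1, 'a', 'b'), (3, 'c', 'd'), (4.2, 'a', 'd'),
         (5, 'a', 'c'), (6, 'e', 'd')]
print(kruskal_mst('abcde', edges))
# [('a', 'b', 1), ('c', 'd', 3), ('a', 'd', 4.2), ('e', 'd', 6)]
```

The rejected edge (a,c) of weight 5 is precisely the one dropped in the fourth iteration of the example.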
(i) O(|E| log |E|) time is required for sorting the edges in increasing order of lengths.

(ii) An efficient Union-Find implementation performs 2|E| find operations and (n − 1) merge operations.
However, we have misused the notation in the sense that we used the notation (a,b) to denote an unordered set of two elements, i.e., a set in which the order of occurrence of a and b does not matter. In mathematics the usual notation for an unordered set is {a,b}. In this section, we use parentheses (i.e., ( and )) to denote ordered sets and braces (i.e., { and }) to denote general (i.e., unordered) sets.
Greedy Techniques
Definition:
A directed graph or digraph G = (V(G), E(G)) where V(G) denotes the set of
vertices of G and E(G) the set of directed edges, also called arcs, of G. An arc from a
to b is denoted as (a, b). Graphically it is denoted as follows:
a
b,
in which the arrow indicates the direction. In the above case, the vertex a is sometimes
called the tail and the vertex b is called the head of the arc or directed edge.
Definition:
A Weighted Directed Graph is a directed graph in which each arc has an assigned
weight. A weighted directed graph may be denoted as G = (V(G), E(G)), where any
element of E(G) may be of the form (a,b,w) where w denotes the weight of the arc
(a,b). The directed Graph G = ((a, b, c, d, e), ((b, a, 3), (b, d, 2) (a, d,7), (c, b, 4),
(c, d, 5), (d, e, 4), (e, c, 6))) is diagrammatically represented as follows:
Figure: 2.7.1
Figure: 2.7.2
Step             Additional node    S = Set of Remaining Nodes    Distances to (b, c, d, e)
Initialization   a                  (b, c, d, e)
1                b                  (c, d, e)                     [3, 3 + 4, 3 + 2, ∞]
2                d                  (c, e)
3                c                  (e)                           [3, 7, 5, 9]
For minimum distance from a, the node b is directly accessed; the node c is accessed
through b; the node d is accessed through b; and the node e is accessed through b and
d.
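The greedy step of Dijkstra's algorithm (always settle the closest remaining node) is often implemented with a priority queue. A sketch follows; the function name is ours, and since the original figure is not available, the sample digraph is reconstructed from the distance table above (a→b = 3, b→c = 4, b→d = 2, d→e = 4).

```python
import heapq

def dijkstra(graph, source):
    """Greedy single-source shortest paths: repeatedly move the closest
    node out of the set of remaining nodes and relax its outgoing arcs.

    graph: dict {node: {neighbour: weight}} for a weighted digraph.
    Returns a dict of minimum distances from source.
    """
    dist = {source: 0}
    done = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                        # u leaves the Set of Remaining Nodes
        for v, w in graph[u].items():      # relax arcs out of u
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Digraph consistent with the distance table above (reconstructed).
g = {'a': {'b': 3}, 'b': {'c': 4, 'd': 2}, 'c': {}, 'd': {'e': 4}, 'e': {}}
print(dijkstra(g, 'a'))
# {'a': 0, 'b': 3, 'c': 7, 'd': 5, 'e': 9}
```

The final distances match the last row of the table: b = 3, c = 7, d = 5, e = 9.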
Ex. 5) Using Dijkstras algorithm, find the minimum distances of all the nodes from
node b which is taken as the source node, for the following graph.
[figure omitted]
2.8 SUMMARY
In this unit, we have discussed the greedy technique, the essence of which is: in the process of solving an optimization problem, initially and at subsequent stages, evaluate the costs/benefits of the various available alternatives for the next step. Choose the alternative which is optimal in the sense that either it is the least costly or it yields the maximum profit. In this context, it may be noted that the overall solution yielded by choosing locally optimal steps may not be optimal. Next, the well-known algorithms, viz., Prim's and Kruskal's, that use the greedy technique to find spanning trees of connected graphs, are discussed. Also, Dijkstra's algorithm for solving the Single-Source Shortest Path problem, again using the greedy technique, is discussed.
2.9 SOLUTIONS/ANSWERS
Ex.1)
Consider the following graph, in which vertices/nodes represent cities of a
country and each edge denotes a road between the cities denoted by the
vertices of the edge. The label on each edge denotes the distance in 1000
kilometers between the relevant cities. The problem is to find an optimal path
from A1 to A4.
[figure: a graph on vertices A1, A2, A3, A4; the edge labels shown include 5, 3 and 2]
Then the greedy technique suggests the route A1, A3, A4 of length 9000 kilometers, whereas the optimal path A1, A2, A4 is of length 8000 kilometers only.
Ex.2)
We will learn some systematic methods of finding a minimal spanning tree of
a graph in later sections. However, by hit and trial, we get the following
minimal spanning tree of the given graph.
Ex.3)
The student should include the explanation on the lines of Example 2.5.1.
However, the steps and stages in the process of solving the problem are as
follows.
Initially

VT = (a)
ET = ∅
In the following figures, the edges in bold denote the chosen edges.
After First Iteration
VT = (a, b)
ET = ((a, b))
After Second Iteration

VT = (a, b, d)
ET = ((a, b), (a, d))

After Third Iteration

VT = (a, b, d, c)
ET = ((a, b), (a, d), (c, d))

After Fourth Iteration

VT = (a, b, c, d, e)
ET = ((a, b), (a, d), (c, d), (c, e))
Ex. 4)
The student should include the explanation on the lines of Example 2.6.1.
However, the steps and stages in the process of solving the problem are as
follows:
The edges in bold denote the selected edges.
After First Iteration
Eg = ((c, e))
Now, on the above four edges all the vertices of the graph lie and these edges form a
tree which is the required minimal spanning tree.
Ex. 5)
A copy of the graph is given below:

[figure omitted]
Step             Additional node    S = Set of Remaining Nodes    Distances to (a, c, d, e)
Initialization   b                  (a, c, d, e)
1                e                  (a, c, d)                     [6, 2, 3, 1]
2                c                  (a, d)                        [6, 2, 3, 1]
3                d                  (a)                           [5, 2, 3, 1]
For minimum distance from b, node a is accessed through d and e; node c is accessed
through e; node d is accessed through e and node e is accessed directly.
Page Nos.
3.0 Introduction 47
3.1 Objectives 47
3.2 Regular Expressions 47
3.3 Regular Languages 53
3.4 Finite Automata 54
    3.4.1 Definition
    3.4.2 Another Method to Describe FA
3.5 Summary 59
3.6 Solutions/Answers 59
3.7 Further Readings 60
3.0 INTRODUCTION
In the earlier two blocks and in Units 1 and 2 of this block, we discussed a number of issues and techniques about designing algorithms. However, there are a number of problems for each of which no algorithmic solution exists. Examples of such problems will be provided in Unit 2 of Block 4. Many of these examples are found in the discipline of the well-known models of computation, viz., finite automata, push-down automata and Turing machines. In this unit, we discuss the topic of finite automata.
3.1 OBJECTIVES
After studying this unit, you should be able to:
3.2.1
For a language, defining rules can be of two types. The rules can either tell us how to
test a string of alphabet letters that we might be presented with, to see if it is a valid
word, i.e., a word in the language or the rules can tell us how to construct all the
words in the language by some clear procedures.
languages. For example, in the English language, when we concatenate "Ram" and "goes" we get "Ram goes". This is, indeed, a word but distinct from "goes Ram".
Now, let us define the reverse of a language L. If c is a word in L, then reverse (c) is the same string of letters spelled backward.

reverse (L) = {reverse (w) : w ∈ L}
Example 6: Reverse (zzz) = zzz
Reverse (173) = 371
Let us define a new language called PALINDROME over the alphabet U = {a,b}:

PALINDROME = {Λ, and all strings w such that reverse (w) = w}
= {Λ, a, b, aa, bb, aaa, aba, bab, bbb, aaaa, abba, …}
Concatenating two words in PALINDROME may or may not give a word in PALINDROME; e.g., if u = abba and v = abbcbba, then uv = abbaabbcbba, which is not a palindrome.
Now, we can generalise the use of the star operator to languages, i.e., to a set of
words, not just sets of alphabet letters.
Definition: If s is a set of words, then by s* we mean the set of all finite strings formed by concatenating words from s, where any word may be used as often as we like.
Example 8: If s = {cc, d}, then

s* = {Λ or any word composed of factors of cc and d}
= {Λ plus all strings of c's and d's in which the c's occur in even clumps}.

The string ccdcccd is not in s* since it has a clump of c's of length 3.
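Membership in s* for a finite set s can be checked mechanically by trying to factor the string into words of s. A short dynamic-programming sketch (the function name is our own choice):

```python
def in_star(string, s):
    """Return True if `string` can be written as a concatenation of
    zero or more words from the finite set s (i.e., string is in s*)."""
    ok = [False] * (len(string) + 1)
    ok[0] = True                          # the null string is always in s*
    for i in range(1, len(string) + 1):
        for word in s:
            # prefix of length i ends with `word`, and the rest is in s*
            if i >= len(word) and ok[i - len(word)] \
               and string[i - len(word):i] == word:
                ok[i] = True
                break
    return ok[len(string)]

s = {'cc', 'd'}
print(in_star('ccdcccd', s))    # False: a clump of three c's
print(in_star('ccdccccd', s))   # True: c's occur in even clumps
```

The first string fails exactly as Example 8 predicts, because no factorisation into cc and d can cover a clump of three c's.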
Positive Closure: If we want to modify the concept of closure to refer to only the
concatenation leading to non-null strings from a set s, we use the notation + instead of
*. This plus operation is called positive closure.
Theorem 1: For any set s of strings prove that s* = (s*)* = s**
Proof: We know that every word in s** is made up of factors from s*.
(ii)
3.2.3
Certain sets of strings, or languages, can be represented in an algebraic fashion; these algebraic expressions of languages are called regular expressions. Regular expressions are written in bold face. The symbols that appear in regular expressions are the letters of the alphabet U, the symbol for the null string Λ, parentheses, the star operator, and the plus sign.
The set of regular expressions is defined by the following rules:

1. Every letter of the alphabet U, as well as the symbols Λ and φ, is a regular expression.
2. If r1 and r2 are regular expressions, then so are (r1), r1 r2, r1 + r2 and r1*.
3. Nothing else is a regular expression.
For example, we can now build expressions from the symbols 0, 1 using the operations of union, concatenation, and Kleene closure:

(i) 01 means a zero followed by a one (concatenation)
(ii) 0+1 means either a zero or a one (union)
(iii) 0* means Λ+0+00+000+… (Kleene closure).
With parentheses, we can build larger expressions. And, we can associate meanings with our expressions. Here's how:

Expression        Set represented
(0+1)*            all strings over {0,1}
0*10*10*          strings containing exactly two ones
(0+1)*11          strings which end with two ones
Example 9: The language L defined by the regular expression ab*a is the set of all strings of a's and b's that begin and end with an a, and that have nothing but b's inside.
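These textbook regular expressions translate almost directly into Python's re module: union '+' becomes '|', and Λ corresponds to matching the empty string. A sketch for the expression ab*a of Example 9 (the anchors \A and \Z force the whole string to match):

```python
import re

# Textbook ab*a: begins and ends with a, with nothing but b's inside.
pattern = re.compile(r'\Aab*a\Z')

for s in ['aa', 'aba', 'abbba', 'aab', 'abab']:
    print(s, bool(pattern.match(s)))
```

The first three strings are in the language; aab and abab are not, since each has a letter outside the a…a frame.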
Ex.4)
Find a regular expression over the alphabet {0,1} to describe the set of all binary numerals without leading zeroes (except 0 itself). So the language is the set
{0, 1, 10, 11, 100, 101, 110, 111, …}.
1. (R+S)+T = R+(S+T)
2. R+R = R
3. R+φ = φ+R = R
4. R+S = S+R
5. Rφ = φR = φ
6. RΛ = ΛR = R
7. (RS)T = R(ST)
8. R(S+T) = RS+RT
9. (S+T)R = SR+TR
10. φ* = Λ* = Λ
11. R*R* = R* = (R*)*
12. RR* = R*R; R* = Λ+RR*
13. (R+S)* = (R*S*)* = (R*+S*)* = (R*S)*R* = R*(SR*)*
14. (RS)*R = R(SR)*
Theorem 2: Prove that R+R = R
Proof : We know the following equalities:
R(S+T) = RS+RT
Proof: The following set of equalities will prove the property:

L(R(S+T)) = L(R)L(S+T)
          = L(R)(L(S) ∪ L(T))
          = (L(R)L(S)) ∪ (L(R)L(T))
          = L(RS+RT)
Similarly, by using the equalities we can prove the rest. The proofs of the rest of the
equalities are left as exercises.
Example 15: Show that R+RS*S = a*bS*, where R = b+aa*b and S is any regular expression.

R + RS*S = R(Λ+S*S)
= RS* (property 12)
= (b+aa*b)S* (definition of R)
= (Λ+aa*)bS* (properties 6 and 8)
= a*bS*. (property 12)
Try an exercise now.
Ex.5) Establish the following equality of regular expressions:
b*(abb*+aabb*+aaabb*)* = (b+ab+aab+aaab)*
(i) The language LM = {xy : x ∈ L and y ∈ M} is the language associated with the regular expression lm.
(ii) The regular expression l+m is associated with the language formed by the union of the sets L and M:
language (l+m) = L ∪ M
(iii) The language associated with the regular expression (l)* is L*, the Kleene closure of the set L as a set of words:
language (l*) = L*.
Now, we shall derive an important relation that, all finite languages are regular.
Theorem 4: If L is a finite language, then L can be defined by a regular expression.
In other words, all finite languages are regular.
Proof: A language is finite if it contains only finitely many words.
To make one regular expression that defines the language L, turn all the words in L
into bold face type and insert plus signs between them. For example, the regular
expression that defines the language L = {baa, abbba, bababa} is baa + abbba +
bababa.
Example 16: If L = {aa, ab, ba, bb}, then the corresponding regular expression is aa + ab + ba + bb.
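The construction in Theorem 4 is purely mechanical, as this small sketch shows (the function name is our own):

```python
def finite_language_to_regex(words):
    """Theorem 4's construction: a regular expression for a finite
    language is just its words joined by plus signs."""
    return ' + '.join(sorted(words))

print(finite_language_to_regex({'aa', 'ab', 'ba', 'bb'}))
# aa + ab + ba + bb
```

Applied to the language of Example 16, it reproduces the expression given there.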
Ex.7)
Find a regular expression for each of the following languages over the
alphabet {a,b}.
(a) strings with even length.
(b) strings containing the substring aba.
In our day-to-day life we often use the word "automatic". Automation is the process where the output is produced directly from the input without the direct involvement of mankind. The input passes through various states in the process. For the processing of a language, we use a very important finite state machine called a finite automaton.
3.4.1
Definition
A system where energy and information are transformed and used for performing
some functions without direct involvement of man is called automaton. Examples are
automatic machine tools, automatic photo printing tools, etc.
A finite automaton is similar to a finite state machine. A finite automaton consists of five parts:

(1) a finite set of states;
(2) an alphabet of input letters;
(3) transitions that tell, for each state and each input letter, which state to go to next;
(4) a start state; and
(5) a set of final states.
We always indicate the start state by writing the word "start" with an arrow pointing to it. Final states are indicated by double circles.
The single arrow out of state 4 labelled with a,b is short hand for two arrows from
state 4, going to the same place, one labelled a and one labelled b. It is easy to check
that this digraph represents a DFA over {a,b} because there is a start state, and each
state emits exactly two arrows, one labelled with a and one labelled with b.
So, we can say that a finite automaton is a collection of three things:
1.
A finite set of states, one of which is designated as the initial state, called the start
state, and some (may be none) of which we designated as final states.
2.
An alphabet U of possible input letters from which are formed strings that are to
be read one letter at a time.
3.
A finite set of transitions that tell for each state and for each letter of the input
alphabet which state to go to next.
For example, suppose the input alphabet has only two letters, a and b. Let us also assume that there are only three states, x, y and z. Let the following be the rules of transition:

1. from state x and input a, go to state y;
2. from state x and input b, go to state z;
3. from state y and input a, go to state x;
4. from state y and input b, go to state z;
5. from state z and any input (a or b), stay at state z.
Let us also designate state x as the starting state and state z as the only final state.
Let us examine what happens to various input strings when presented to this FA. Let
us start with the string aaa. We begin, as always, in state x. The first letter of the
string is an a, and it tells us to go state y (by rule 1). The next input (instruction) is
also an a, and this tells us (by rule 3) to go back to state x. The third input is another
a, and (by Rule 1) again we go to the state y. There are no more input letters in the
55
Design Techniques-II
input string, so our trip has ended. We did not finish in the final state (state z), so we
have an unsuccessful termination of our run.
The string aaa is not in the language of all strings that leave this FA in state z. The set of all strings that do leave us in a final state is called the language defined by the finite automaton. The input string aaa is not in the language defined by this FA. We may say that the string aaa is not accepted by this FA because it does not lead to a final state. We may also say that aaa is rejected by this FA. The set of all strings accepted is the language associated with the FA. So, we say that L is the language accepted by this FA. An FA is also called a language recogniser.
Let us examine a different input string for this same FA. Let the input be abba. As always, we start in state x. Rule 1 tells us that the first input letter, a, takes us to state y. Once we are in state y we read the second input letter, which is a b. Rule 4 now tells us to move to state z. The third input letter is a b, and since we are in state z, Rule 5 tells us to stay there. The fourth input letter is an a, and again Rule 5 says state z.
z. Therefore, after we have followed the instruction of each input letter we end up in
state z. State z is designated as a final state. So, the input string abba has taken us
successfully to the final state. The string abba is therefore a word in the language
associated with this FA. The word abba is accepted by this FA.
It is not difficult for us to predict which strings will be accepted by this FA. If an input string is made up of only the letter a repeated some number of times, then the action of the FA will be to jump back and forth between state x and state y. No such word can ever be accepted.
To get into state z, it is necessary for the string to have the letter b in it. As soon as a b is encountered in the input string, the FA jumps immediately to state z, no matter what state it was in before. Once in state z, it is impossible to leave. When the input string runs out, the FA will still be in state z, leading to acceptance of the string.
So, the FA above will accept all the strings that have the letter b in them and no other
strings. Therefore, the language associated with this FA is the one defined by the
regular expression (a+b)* b(a+b)*.
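The trip through the states described above is easy to mechanize. A sketch of the three-state FA with start state x and final state z; the dictionary encoding of the transition rules is our own choice:

```python
def accepts(delta, start, finals, string):
    """Run a finite automaton: follow one transition per input letter
    and accept iff the trip ends in a final state."""
    state = start
    for letter in string:
        state = delta[(state, letter)]
    return state in finals

# Transition rules of the example FA (x start, z the only final state).
delta = {('x', 'a'): 'y', ('x', 'b'): 'z',
         ('y', 'a'): 'x', ('y', 'b'): 'z',
         ('z', 'a'): 'z', ('z', 'b'): 'z'}

print(accepts(delta, 'x', {'z'}, 'aaa'))   # False: aaa is rejected
print(accepts(delta, 'x', {'z'}, 'abba'))  # True: abba is accepted
```

As argued in the text, any string containing a b is accepted, and strings of a's alone are rejected.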
The list of transition rules can grow very long. It is much simpler to summarise them in a table format. Each row of the table is labelled by one of the states of the FA, and each column of the table by a letter of the input alphabet. The entries inside the table are the new states that the FA moves into on the corresponding transitions. The transition table for the FA we have described is:
Table 1

                 Input
State          a       b
Start x        y       z
      y        x       z
Final z        z       z
The machine we have already defined by the transition list and the transition table can
be depicted by the state graph in Figure 2.
Note: A single state can be both the start state and a final state. In a finite automaton there is exactly one start state, and there may be zero or more final states.
3.4.2 Another Method to Describe FA
The finite automata shown in Figure 3 can also be represented in Tabular form as
below:
Table 2

                    Input
       State      0       1       Accept?
Start  1          1       2       No
Final  2          2       3       Yes
       3          3       3       No
Before continuing, let's examine the computation of a finite automaton. Our first example begins in state 1 and reads the input symbols in turn, changing states as necessary. Thus, a computation can be characterized by a sequence of states. (Recall that Turing machine configurations needed the state plus the tape content. Since a finite automaton never writes, we always know what is on the tape and need only look at the state as a configuration.) Here is the sequence for the input 0001001.
Input Read :      0    0    0    1    0    0    1
States :      1    1    1    1    2    2    2    3
Example 17 (An elevator controller): Let's imagine an elevator that serves two floors. Inputs are calls to a floor, either from inside the elevator or from the floor itself. This makes three distinct inputs possible, namely:
0 - no calls
1 - call to floor one
2 - call to floor two
Design Techniques-II
The elevator itself can be going up, going down, or halted at a floor. If it is on a floor,
it could be waiting for a call or about to go to the other floor. This provides us with
the six states shown in Figure 4 along with the state graph for the elevator controller.
W1 Waiting on first floor
U1 About to go up
UP Going up
DN Going down
W2 Waiting-second floor
D2 About to go down.
                            Input
State              None    call to 1    call to 2
W1 (wait on 1)     W1      W1           UP
U1 (start up)      UP      U1           UP
UP                 W2      D2           W2
DN                 W1      W1           U1
W2 (wait on 2)     W2      DN           W2
D2 (start down)    DN      DN           D2
Accepting and rejecting states are not included in the elevator design because acceptance is not an issue. If we were to design a more sophisticated elevator, it might have states that indicated:

a) power failure,
b) overloading, or
c) breakdown.
Let us make a few small notes about the design. If the elevator is about to move (i.e., in state U1 or D2) and it is called to the floor it is presently on, it will stay. (This may be good. Try it next time you are in an elevator.) And, if it is moving (up or down) and gets called back the other way, it remembers the call by going to the U1 or D2 state upon arrival on the next floor. Of course, the elevator does not do things like open and close doors (these could be states too), since that would have added complexity to the design. Speaking of complexity, imagine having 100 floors.
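The controller can be exercised directly from its transition table. The dictionary below transcribes that table as we read it; the input names none/call1/call2 and the function name are our own abbreviations:

```python
# Elevator controller of Example 17: next state for each (state, input).
next_state = {
    'W1': {'none': 'W1', 'call1': 'W1', 'call2': 'UP'},
    'U1': {'none': 'UP', 'call1': 'U1', 'call2': 'UP'},
    'UP': {'none': 'W2', 'call1': 'D2', 'call2': 'W2'},
    'DN': {'none': 'W1', 'call1': 'W1', 'call2': 'U1'},
    'W2': {'none': 'W2', 'call1': 'DN', 'call2': 'W2'},
    'D2': {'none': 'DN', 'call1': 'DN', 'call2': 'D2'},
}

def run(state, inputs):
    """Feed a sequence of inputs to the controller, one per time step."""
    for inp in inputs:
        state = next_state[state][inp]
    return state

# Waiting on floor 1, called to floor 2, then called back while moving up:
print(run('W1', ['call2', 'call1']))   # D2: the call down is remembered
```

This reproduces the "remembers the call" behaviour noted above: a call to floor 1 while going up lands the elevator in D2, about to go back down.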
That is our levity for this section. Now that we know what a finite automaton is, we
must (as usual) define it precisely.
We also need some additional notation. The next state function is called the transition
function and the accepting states are often called final states. The entire machine is
usually defined by presenting a transition state table or a transition diagram. In this
way, the states, alphabet, transition function, and final states are constructively
defined. The starting state is usually the lowest numbered state. Our first example of
a finite automaton is:
M = ({q1, q2, q3}, {0,1}, f, q1, {q2}),

where the transition function f is defined explicitly by either a state table or a state graph.
3.5 SUMMARY
In this unit we introduced several formulations for regular languages. Regular expressions are algebraic representations of regular languages, and finite automata are machines that recognise regular languages. From regular expressions, we can derive regular languages. We also made some other observations: finite automata can also be used as output devices (Mealy and Moore machines).
3.6 SOLUTIONS/ANSWERS
Ex.1)
(i)
(ii)
(iii)
(iv)
(v)
ababbbaa
baaababb
ab abb ab abb
baa baa
ababbababb baa
Ex.2)
(i) Suppose aa = x. Then
{x, b}* = {Λ, x, b, xx, bb, xb, bx, xxx, bxx, xbx, xxb, bbx, bxb, xbb, bbb, …}
Substituting x = aa,
{aa, b}* = {Λ, aa, b, aaaa, bb, aab, baa, aaaaaa, baaaa, aabaa, …
Ex.3)
(a) a+b+c
(b) ab*+ba*
(c) Λ+a(bb)*
Ex.4)
0+1(0+1)*
Ex.5)
Starting with the left side and using properties of regular expressions, we get
b*(abb* + aabb*+aaabb*)*
= b*((ab+aab+aaab)b*)* (property 9)
= (b + ab + aab + aaab)* (property 13).
Ex.6)
(a) {a,b}
(b) {a, Λ, b, bb, …, b^n, …}
(c) {a, b, ab, bc, abb, bcc, …, ab^n, bc^n, …}
Ex.7)
(a) (aa+ab+ba+bb)*
(b) (a+b)*aba(a+b)*
Page Nos.
4.0 Introduction 61
4.1 Objectives 61
4.2 Formal Language & Grammar 61
4.3 Context Free Grammar (CFG) 68
4.4 Pushdown Automata (PDA) 72
4.5 Summary 74
4.6 Solutions/Answers 74
4.7 Further Readings 75
4.0 INTRODUCTION
We have mentioned earlier that not every problem can be solved algorithmically and that good sources of examples of such problems are provided by formal models of computation, viz., FA, PDA and Turing machines. In the previous unit, we discussed FA. In this unit, we discuss PDA, CFG and related topics.
4.1 OBJECTIVES
After going through this unit, you should be able to:
<sentence> → <noun> <verb>
<noun> → I | Ram | Sam
<verb> → reads | writes
From the above, we can collect all the values into two categories: one with the parameter changing its values further, and another with termination. These collections are called variables and terminals, respectively. In the above discussion the variables are <sentence>, <noun> and <verb>, and the terminals are I, Ram, Sam, reads, writes. As the sentence formation is started with <sentence>, this symbol is a special symbol called the start symbol.
Now formally, a Grammar is G = (V, Σ, P, S) where,
(i) S → aS
(ii) S → bS
(iii) S → cS
(iv) S → Λ
The desired derivation is of the string aacb. Each step in a derivation corresponds to a
branch of a tree, and this tree is called a parse tree, whose root is the start symbol. The
completed derivation and parse trees are shown in Figures 1, 2 and 3:
Figure 1: S ⇒ aS
Figure 2: S ⇒ aS ⇒ aaS
Let us derive the string aacb, its parse tree is shown in Figure 4.
S ⇒ aS ⇒ aaS ⇒ aacS ⇒ aacbS ⇒ aacbΛ = aacb
In a production rule of the form xAy → xvy, x is the left context and y is the right
context. If at each step the derivation is applied to the leftmost variable of the
sentential form, then it is called a leftmost derivation, and if applied to the rightmost
variable, it is called a rightmost derivation.
The language of a Grammar:
A language is generated from a grammar. If G is a grammar with start symbol S and
set of terminals Σ, then the language of G is the set
L(G) = {w ∈ Σ* : S ⇒* w}.
Any derivation involves the application of production rules.
Recursive productions: A production is called recursive if its left side occurs on its
right side. For example, the production S → aS is recursive. A production A → α is
indirectly recursive if A derives a sentential form that contains A. For example,
suppose we have the following grammar:
S → b/aA
A → c/bS
The productions S → aA and A → bS are both indirectly recursive because of the
following derivations:
S ⇒ aA ⇒ abS,
A ⇒ bS ⇒ baA
A grammar is recursive if it contains either a recursive production or an indirectly
recursive production.
A grammar for an infinite language must be recursive.
Example 3: Consider {Λ, a, aa, …, aⁿ, …} = {aⁿ | n ≥ 0}.
Notice that any string in this language is either Λ or of the form ax for some string x in
the language. The following grammar will derive any of these strings:
S → Λ/aS.
Now, we shall derive the string aaa:
S ⇒ aS ⇒ aaS ⇒ aaaS ⇒ aaa.
Example 4: Consider {Λ, ab, aabb, …, aⁿbⁿ, …} = {aⁿbⁿ | n ≥ 0}.
Notice that any string in this language is either Λ or of the form axb for some string x
in the language. The following grammar will derive any of these strings:
S → Λ/aSb.
For example, we will derive the string aaabbb:
S ⇒ aSb ⇒ aaSbb ⇒ aaaSbbb ⇒ aaabbb.
Example 5: Consider the language {Λ, ab, abab, …, (ab)ⁿ, …} = {(ab)ⁿ | n ≥ 0}.
Notice that any string in this language is either Λ or of the form abx for some string x
in the language. The following grammar will derive any of these strings:
S → Λ/abS.
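The grammars in Examples 3 to 5 can be explored mechanically. The sketch below (our own illustration, not from the unit) enumerates, by breadth-first expansion of the leftmost variable, all terminal strings of bounded length derivable from a grammar given as a dictionary; it is shown on the grammar S → Λ/aSb of Example 4, with '' standing for Λ:

```python
from collections import deque

def derivable(rules, start='S', maxlen=6):
    """Enumerate all terminal strings of length <= maxlen derivable from
    `start`, where rules maps each variable (a single uppercase letter) to
    a list of right-hand-side strings ('' stands for the empty string)."""
    results, seen = set(), {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        positions = [i for i, c in enumerate(form) if c in rules]
        if not positions:              # no variables left: a terminal string
            results.add(form)
            continue
        i = positions[0]               # expand the leftmost variable
        for rhs in rules[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            # prune sentential forms that already carry too many terminals
            if sum(c not in rules for c in new) <= maxlen and new not in seen:
                seen.add(new)
                queue.append(new)
    return results

language = derivable({'S': ['', 'aSb']})     # S -> Lambda / aSb  (Example 4)
assert {'', 'ab', 'aabb', 'aaabbb'} <= language
assert 'aab' not in language
```

The same function handles the grammars of Examples 3 and 5 by changing the `rules` dictionary.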
We can simplify the grammar by substituting for A to obtain the following grammar:
S → aaS/bbS/Λ
Example 9: Let Σ = {a, b, c} and let S be the start symbol. Then the language of
palindromes over the alphabet Σ has the grammar:
S → aSa/bSb/cSc/a/b/c/Λ.
For example, the palindrome abcba can be derived as follows:
S ⇒ aSa ⇒ abSba ⇒ abcba
Ambiguity: A grammar is said to be ambiguous if its language contains some string
that has two different parse trees. This is equivalent to saying that some string has two
distinct leftmost derivations or that some string has two distinct rightmost derivations.
Example 10: Suppose we define a set of arithmetic expressions by the grammar:
E → a/b/E − E
A derivation is called a leftmost derivation if at each step the leftmost non-terminal of the sentential form is the one rewritten by some production.
A derivation is called a rightmost derivation if at each step the rightmost non-terminal of the sentential form is the one rewritten by some production.
Ex.1)
Consider the grammar S → S[S]/Λ. For each of the following strings, construct a
leftmost derivation, a rightmost derivation and a parse tree:
(a) [ ]
(b) [[ ]]
(c) [ ][ ]
(d) [[ ] [[ ]]]
Ex.2)
Find a grammar for each language:
(a) {aᵐbⁿ | m, n ∈ N, n > m}
(b) {aᵐbcⁿ | n ∈ N}
A context-free grammar (CFG) is a grammar in which the left-hand side of every
production is a single non-terminal, and the right-hand side is a string of terminals and
non-terminals. The string of terminals and non-terminals can consist of only terminals,
of only non-terminals, of any combination of terminals and non-terminals, or even the
empty string.
The language generated by a CFG is the set of all strings of terminals that can be
produced from the start symbol S using the productions as substitutions. A language
generated by a CFG is called a context-free language.
Example 11: Find a grammar for the language of decimal numerals by observing that
a decimal numeral is either a digit or a digit followed by a decimal numeral.
S → D/DS
D → 0/1/2/3/4/5/6/7/8/9
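The two productions above amount to the rule "a numeral is a digit, or a digit followed by a numeral". The recursive function below (our own sketch, not part of the unit) mirrors them directly:

```python
DIGITS = '0123456789'

def is_numeral(s):
    # S -> D      : a single digit
    # S -> D S    : a digit followed by a numeral
    if s == '':
        return False
    return s[0] in DIGITS and (len(s) == 1 or is_numeral(s[1:]))

assert is_numeral('2024')
assert not is_numeral('')
assert not is_numeral('12a')
```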
The language generated by the grammar of identifiers has all the strings formed by
a, b, c, …, z, 0, 1, …, 9, beginning with a letter.
We shall give a derivation of the string a2b to show that it is an identifier:
S ⇒ LA ⇒ aA ⇒ aDA ⇒ a2A ⇒ a2LA ⇒ a2bA ⇒ a2b
Context-Free Languages and closure properties: The set of regular languages is
closed under all the operations of union, concatenation, Kleene star, intersection and
complement. The set of context-free languages, however, is closed under union,
concatenation and Kleene star only.
Union
Theorem 1: If L1 and L2 are context-free languages, then L1 ∪ L2 is a context-free
language.
Proof: If L1 and L2 are context-free languages, then each of them has a context-free
grammar; call the grammars G1 and G2. Our proof requires that the grammars have
no non-terminals in common. So we shall subscript all of G1's non-terminals with a 1
and subscript all of G2's non-terminals with a 2. Now, we combine the two grammars
into one grammar that will generate the union of the two languages. To do this, we
add one new non-terminal, S, and two new productions:
S → S1
S → S2
S is the starting non-terminal for the new union grammar and can be replaced either
by the starting non-terminal of G1 or by that of G2, thereby generating either a string
from L1 or from L2. Since the non-terminals of the two original grammars are
completely different, once we begin using one of the original grammars, we must
complete the derivation using only the rules from that original grammar. Note that
there is no need for the alphabets of the two languages to be the same.
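The construction in this proof can be written out as a short program. The sketch below (our own illustration; a grammar is a dictionary mapping single-letter variables to lists of right-hand-side strings, with 'S' as the start symbol) renames the variables of the two grammars apart and then adds the two new productions S → S1 and S → S2:

```python
def union_grammar(g1, g2):
    """Combine two CFGs (dicts: variable -> list of right-hand-side
    strings, start symbol 'S') into one grammar for L1 union L2."""
    fresh = iter('ABCDEFGHIJKLMNOPQRTUVWXYZ')   # every letter except S

    def rename(g):
        mapping = {v: next(fresh) for v in g}   # give each variable a new name
        renamed = {mapping[v]: [''.join(mapping.get(c, c) for c in rhs)
                                for rhs in rhss]
                   for v, rhss in g.items()}
        return renamed, mapping['S']

    h1, s1 = rename(g1)
    h2, s2 = rename(g2)
    combined = {**h1, **h2, 'S': [s1, s2]}      # S -> S1, S -> S2
    return combined

g1 = {'S': ['', 'aSb']}          # {a^n b^n | n >= 0}
g2 = {'S': ['', 'abS']}          # {(ab)^n | n >= 0}
g = union_grammar(g1, g2)
assert g == {'A': ['', 'aAb'], 'B': ['', 'abB'], 'S': ['A', 'B']}
```

The same renaming trick, with S → S1S2 instead of the two alternatives, gives the concatenation construction of Theorem 2.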
Concatenation
Theorem 2: If L1 and L2 are context-free languages, then L1L2 is a context-free
language.
Proof : This proof is similar to the last one. We first subscript all of the non-terminals
of G1 with a 1 and all the non-terminals of G2 with a 2. Then, we add a new
nonterminal, S, and one new rule to the combined grammar:
S → S1S2
S is the starting non-terminal for the concatenation grammar and is replaced by the
concatenation of the two original starting non-terminals.
Kleene Star
Theorem 3: If L is a context-free language, then L* is a context-free language.
Proof : Subscript the non-terminals of the grammar for L with a 1. Then add a new
starting nonterminal, S, and the rules
S → S1S
S → Λ
The rule S → S1S is used once for each string of L that we want in the string of L*;
then the rule S → Λ is used to kill off the S.
Intersection
Now, we will show that the set of context-free languages is not closed under
intersection. Think about the two languages L1 = {aⁿbⁿcᵐ | n, m ≥ 0} and
L2 = {aᵐbⁿcⁿ | n, m ≥ 0}. These are both context-free languages, and we can give a
grammar for each one:
G1:
S → AB
A → aAb/Λ
B → cB/Λ
G2:
S → AB
A → aA/Λ
B → bBc/Λ
The strings in L1 contain the same number of a's as b's, while the strings in L2 contain
the same number of b's as c's. Strings that are both in L1 and in L2, i.e., strings
in the intersection, must have the same number of a's as b's and the same number of
b's as c's.
Thus, L1 ∩ L2 = {aⁿbⁿcⁿ | n ≥ 0}. Using the Pumping lemma for context-free
languages, it can be shown that {aⁿbⁿcⁿ | n ≥ 0} is not context-free. Hence, the set of
context-free languages is not closed under intersection.
Each string in {aⁿbⁿ} has the same number of a's as b's; the intersection of these two
languages is the set {aⁿbⁿ}, which is context-free.
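The claim about the intersection can be spot-checked by brute force. The snippet below (our own check, not part of the unit) decides membership in L1 = {aⁿbⁿcᵐ} and L2 = {aᵐbⁿcⁿ} directly and verifies that every short string lying in both has equally many a's, b's and c's:

```python
import itertools
import re

def in_L1(s):                      # a^n b^n c^m
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    return m is not None and len(m.group(1)) == len(m.group(2))

def in_L2(s):                      # a^m b^n c^n
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    return m is not None and len(m.group(2)) == len(m.group(3))

for n in range(7):
    for letters in itertools.product('abc', repeat=n):
        s = ''.join(letters)
        if in_L1(s) and in_L2(s):
            assert s.count('a') == s.count('b') == s.count('c')
```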
(b)
For any two positive integers p and q, the language of all words of the
form aˣ bʸ aᶻ, where x, y, z = 1, 2, 3, … and y = px + qz.
(i)
The pop operation reads the top symbol and removes it from the stack.
(ii)
The push operation writes a designated symbol onto the top of the stack. For
example, push(x) means put x on top of the stack.
Figure 7: Directed graph
It takes five pieces of information to describe a labelled edge. We can also represent
it by the following 5-tuple, which is called a PDA instruction:
(i, L, S, O, j)
An instruction of this form is executed as follows, where w is an input string whose
letters are scanned from left to right.
If the PDA is in state i, and either L is the current letter of w being scanned or L = Λ,
and the symbol on top of the stack is S, then perform the following actions:
(1) execute the stack operation O;
(2) move to state j; and
(3) if L ≠ Λ, scan past the current letter of w.
The second kind of nondeterminism occurs when a state emits two edges labelled with
the same stack symbol, where one input symbol is Λ and the other input symbol is not.
For example, the following two 5-tuples represent non-determinism because the
machine has the option of consuming the input letter b or leaving it alone:
(i, Λ, c, pop, j)
(i, b, c, push(D), k).
Example 14: The language {aⁿbⁿ | n ≥ 0} can be accepted by a PDA. We will keep
track of the number of a's in an input string by pushing the symbol Y onto the stack
for each a. A second state will be used to pop the stack for each b encountered. The
following PDA will do the job, where X is the initial symbol on the stack:
Example 15 (An empty-stack PDA): Let's consider the language {aⁿbⁿ | n ≥ 0}. The
PDA that follows will accept this language by empty stack, where X is the initial
symbol on the stack.
PDA shown in Figure 9 can also be represented by the following three instructions:
(0, a, X, push (X), 0),
(0, , X, pop, 1),
(1, b, X, pop, 1).
This PDA is non-deterministic. Let's see how a computation proceeds. For example,
a computation sequence for the input string aabb can be as follows:
(0, aabb, X) start in state 0 with X on the stack
(0, abb, XX) consume a and push X
(0, bb, XXX) consume a and push X
(1, bb, XX) pop.
(1, b, X) consume b and pop
(1, Λ, Λ) consume b and pop (stack is empty)
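The computation above can be reproduced with a small nondeterministic simulator. The sketch below (our own code, not part of the unit) runs the three instructions of Example 15, accepting by empty stack, and searches all reachable configurations breadth-first:

```python
from collections import deque

def accepts(instructions, w, start_stack='X'):
    """Nondeterministic PDA simulation, acceptance by empty stack.
    An instruction is (state, letter-or-'', top, op, next_state),
    where op is 'pop' or ('push', symbol); '' stands for a lambda-move."""
    start = (0, w, (start_stack,))
    seen, queue = {start}, deque([start])
    while queue:
        state, rest, stack = queue.popleft()
        if rest == '' and stack == ():
            return True                      # input consumed, stack empty
        if stack == ():
            continue                         # empty stack but input remains
        for s, a, top, op, nxt in instructions:
            if s != state or top != stack[-1]:
                continue
            if a == '':
                new_rest = rest              # lambda-move: consume nothing
            elif rest.startswith(a):
                new_rest = rest[1:]
            else:
                continue
            new_stack = stack[:-1] if op == 'pop' else stack + (op[1],)
            cfg = (nxt, new_rest, new_stack)
            if cfg not in seen:
                seen.add(cfg)
                queue.append(cfg)
    return False

pda = [(0, 'a', 'X', ('push', 'X'), 0),      # consume a, push X
       (0, '',  'X', 'pop', 1),              # guess the middle: pop
       (1, 'b', 'X', 'pop', 1)]              # consume b, pop X
assert accepts(pda, 'aabb') and accepts(pda, '')
assert not accepts(pda, 'aab') and not accepts(pda, 'ba')
```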
Now, try some exercises.
Ex.13) Build a PDA that accepts the language of odd-length palindromes.
Ex.14) Build a PDA that accepts the language of even-length palindromes.
4.5 SUMMARY
In this unit we have considered the recognition problem and found out whether we can
solve it for a larger class of languages. The corresponding accepters for the
context-free languages are PDAs. There are some languages which are not context-free.
We can prove that a language is not context-free by using the pumping lemma. Also in
this unit we discussed the equivalence of two approaches to specifying a context-free
language: one approach uses a context-free grammar, and the other uses a Pushdown
Automaton.
4.6 SOLUTIONS/ANSWERS
Ex.1)
(a)
S ⇒ S[S] ⇒ [S] ⇒ [ ]
(b)
Ex.2)
(a)
S → aSb/aAb
A → bA/b
Ex.3)
(a)
S → aSa/bSb/Λ
(b)
S → aSa/bSb/a/b.
Ex.4)
(a) Type 2.
Ex.5)
(a) S → AB
A → aAb⁵/Λ
B → b⁷Ba/Λ
(b) S → AB
A → aAbᵖ/Λ
B → bᵠBa/Λ
Ex.6)
Suppose the language is {wcwᵀ : w ∈ {a, b}*}; then the PDA is:
(0, a, x, push(a), 0), (0, b, x, push(b), 0),
(0, a, a, push(a), 0), (0, b, a, push(b), 0),
(0, a, b, push(a), 0), (0, b, b, push(b), 0),
(0, c, a, nop, 1), (0, c, b, nop, 1),
(0, c, x, nop, 1), (1, a, a, pop, 1),
(1, b, b, pop, 1), (1, Λ, x, nop, 2)
Ex.7)
The language is {wwᵀ : w ∈ {a, b}*}. Proceed similarly to Ex.6.
Page Nos.
1.0 Introduction 5
1.1 Objectives 6
1.2 Prelude to Formal Definition 6
1.3 Turing Machine: Formal Definition and Examples 8
1.4 Instantaneous Description and Transition Diagram 13
1.5  16
1.6 Observations 19
1.7  21
1.8 Summary 31
1.9 Solutions/Answers 31
1.10 Further Readings 38
1.0 INTRODUCTION
In unit 3 and unit 4 of block 4, we discussed two of the major approaches to modeling
of computation, viz., the automata/machine approach and the linguistic/grammatical
approach. Under the grammatical approach, we discussed two models, viz., Regular
Languages and Context-free Languages.
Under the automata approach, we discussed two models, viz., Finite Automata and
Pushdown Automata. Next, we discuss a still more powerful automaton for computation.
The Turing machine (TM) is the next more powerful model of the automata approach,
which recognizes more languages than Pushdown automata models do. Also, the
Phrase-structure model is the corresponding grammatical model that matches Turing
machines in computational power.
Key words: Turing Machine (TM), Deterministic Turing Machine, Non-Deterministic
Turing Machine, Turing Thesis, Computation, Computational Equivalence,
Configuration of TM, Turing-Acceptable Language, Turing-Decidable Language,
Recursively Enumerable Language, Turing-Computable Function.
Notations:
TM : Turing Machine
Γ : Set of tape symbols, includes #, the blank symbol
Σ : Set of input/machine symbols, does not include #
Q : the finite set of states of TM
F : Set of final states
a, b, c : Members of Σ
x : Variable for members of Σ
x̄ or x′ : Any symbol of Γ other than x
# : The blank symbol
α, β, γ : Variables for strings over Σ
L : Move the Head to the Left
R : Move the Head to the Right
q : A state of TM, i.e., q ∈ Q
s or q₀ : The start/initial state
Complexity &
Completeness
Halt or h : The halt state. The same symbol h is used for the purpose of denoting
the halt state for all halt-state versions of TM, and then h is not used for
other purposes.
e or ε : The empty string
C1 ⊢M C2 : Configuration C2 is obtained from configuration C1 in one move
of the machine M
C1 ⊢* C2 : Configuration C2 is obtained from configuration C1 in a finite number
of moves
w1 a̅ w2 or w1 â w2 : The symbol a is the symbol currently being scanned by the Head
1.1 OBJECTIVES
After going through this unit, you should be able to:
• define and explain various terms mentioned under the title key words in the
previous section;
• construct TMs for simple computational tasks;
• realize some simple mathematical functions as TMs; and
• apply modular techniques for the construction of TMs for more complex
functions and computational tasks from TMs already constructed for simple
functions and tasks.
Figure: 1.2.1: A Turing Machine, consisting of a tape, a Read/Write Head and a Finite Control
(i) a tape, with an end on the left but infinite on the right side. The tape is divided
into squares or cells, with each cell capable of holding one of the tape symbols,
including the blank symbol #. At any time, there can be only finitely many cells
of the tape that contain non-blank symbols. The set of tape symbols is
denoted by Γ.
As the very first step in the sequence of operations of a TM, the input, as a
finite sequence of the input symbols, is placed in the left-most cells of the
tape. The set of input symbols, denoted by Σ, does not contain the blank
symbol #. However, during operations of a TM, a cell may contain a tape
symbol which is not necessarily an input symbol.
There are versions of TM, to be discussed later, in which the tape may be
infinite in both left and right sides having neither left end nor right end.
(ii)
a finite control, which can be in any one of the finite number of states.
The states in TM can be divided in three categories viz.,
(a)
the Initial state, the state of the control just at the time when TM starts its
operations. The initial state of a TM is generally denoted by q0 or s.
(b)
the Halt state, which is the state in which TM stops all further operations.
The halt state is generally denoted by h. The halt state is distinct from the
initial state. Thus, a TM HAS AT LEAST TWO STATES.
(c)
Other states
(iii) a tape head (or simply Head), is always stationed at one of the tape cells and
provides communication for interaction between the tape and the finite control.
The Head can read or scan the symbol in the cell under it. The symbol is
communicated to the finite control. The control, taking into consideration the
symbol and its current state, decides the further course of action, including
• whether to change the symbol in the cell being scanned or to retain it;
• whether to move the Head one cell to the left or to the right; and
• whether to change the current state of the control or to retain it.
The course of action is called a move of the Turing Machine. In other words, the
move is a function of the current state of the control and the tape symbol being
scanned.
In case the control decides for change of the symbol in the cell being scanned, then
the change is carried out by the head. This change of symbol in the cell being
scanned is called writing of the cell by the head.
Initially, the head scans the left-most cell of the tape.
Now, we are ready to consider a formal definition of a Turing Machine in the next
section.
(vi) h ∈ Q is the Halt State, in which the machine stops any further activity.
Remark 1.3.1
Again, there are a number of variations in literature of even the above version of TM.
For example, some authors allow at one time only one of the two actions viz.,
(i) writing of the current cell and (ii) movement of the Head to the left or to the right.
However, this restricted version of TM can easily be seen to be computationally
equivalent to the definition of TM given above, because one move of the TM given by
the definition can be replaced by at most two moves of the TM introduced in the
Remark.
In the next unit, we will discuss different versions of TM and issues relating to
equivalences of these versions.
In order to illustrate the ideas involved, let us consider the following simple
examples.
Example 1.3.2
Consider the Turing Machine (Q, Σ, Γ, δ, q₀, h) defined below that erases all the
non-blank symbols on the tape, where the sequence of non-blank symbols does not
contain any blank symbol # in-between:
q: State    Symbol being scanned    δ(q, symbol)
q0          any non-blank symbol    (q0, #, R)
q0          #                       (h, #, N)
h                                   ACCEPT
Next, the TM should mark the b, if it exists, which is immediately on the right of the
previously marked b, i.e., it should mark the left-most b which is yet to be marked.
But, in order to recognize the yet-to-be-marked left-most b, we must change each of
the b's, immediately on marking, to some other symbol, say B. Also, for each b, we
attempt to find the left-most yet-to-be-marked d. In order to identify the left-most
yet-to-be-marked d, we should change each of the d's, immediately on marking it, to
some other symbol, say D.
Thus we require two additional tape symbols B and D, i.e., Γ = {b, d, B, D, #}.
After one iteration of replacing one b by B and one d by D the tape would be of the
form
B b … b D d … d
and the machine passes from a configuration in state q2 (with the Head moving left
over the marked portion) to a configuration in state q0 (with the Head over the
left-most yet-to-be-marked b).
The above string is to be rejected. But if we take δ(q0, D) as q0, then the whole process
of matching b's and d's will again be repeated, and then even the (initial) input of the
form
b …
[tape diagrams with the Head in states q1 and q0]
As the string is to be rejected, therefore, we take δ(q0, #) as undefined.
We have considered all possible cases of input strings over Σ = {b, d} in which, while
scanning from the left, the occurrence of the first # indicates termination of the string
over Σ.
After the above discussion, the design of the TM that accepts strings of the form
bⁿdⁿ and rejects all other strings over {b, d} may be summarized as follows:
The TM is given by (Q, Σ, Γ, δ, q₀, h) where
Q = {q0, q1, q2, q3, h}
Σ = {b, d}
Γ = {b, d, B, D, #}
The next-move partial function δ is given by:

        b           d           B           D           #
q0      (q1, B, R)  *           *           (q3, D, R)  *
q1      (q1, b, R)  (q2, D, L)  *           (q1, D, R)  *
q2      (q2, b, L)  *           (q0, B, R)  (q2, D, L)  *
q3      *           *           *           (q3, D, R)  (h, #, N)
h       *           *           *           *           Accept

* indicates the move is not defined.
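The next-move function can be exercised with a few lines of code. In the sketch below (our own simulation, not part of the unit), δ is entered as a dictionary with the entries read off the table in this section; a string is accepted exactly when the machine reaches the halt state h:

```python
def tm_accepts(delta, w, blank='#', start='q0', halt='h'):
    """Run a one-tape TM; reject if no move is defined or if the Head
    falls off the left end (hanging configuration)."""
    tape, pos, state = list(w) + [blank], 0, start
    for _ in range(10_000):                     # crude guard against looping
        if state == halt:
            return True
        move = delta.get((state, tape[pos]))
        if move is None:
            return False                        # no move defined: reject
        state, symbol, direction = move
        tape[pos] = symbol
        if direction == 'R':
            pos += 1
            if pos == len(tape):
                tape.append(blank)
        elif direction == 'L':
            pos -= 1
            if pos < 0:
                return False                    # hanging configuration
    return False

# delta for the b^n d^n machine of this section
delta = {('q0', 'b'): ('q1', 'B', 'R'), ('q0', 'D'): ('q3', 'D', 'R'),
         ('q1', 'b'): ('q1', 'b', 'R'), ('q1', 'd'): ('q2', 'D', 'L'),
         ('q1', 'D'): ('q1', 'D', 'R'), ('q2', 'b'): ('q2', 'b', 'L'),
         ('q2', 'B'): ('q0', 'B', 'R'), ('q2', 'D'): ('q2', 'D', 'L'),
         ('q3', 'D'): ('q3', 'D', 'R'), ('q3', '#'): ('h', '#', 'N')}

assert tm_accepts(delta, 'bbdd') and tm_accepts(delta, 'bd')
assert not tm_accepts(delta, 'bbd') and not tm_accepts(delta, 'bdd')
```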
Remark 1.3.4
(i)
The cells of the tape of an FA or a PDA are only read/scanned but are never
changed/written into, whereas the cells of the tape of a TM may be written
also.
(ii)
The tape head of an FA or a PDA always moves from left to right. However,
the tape head of a TM can move in both directions.
(iii)
As a consequence of the facts mentioned in (i) and (ii) above, we conclude that in
the case of an FA or a PDA the information in the tape cells already scanned does
not play any role in deciding future moves of the automaton, but in the case of
a TM, the information contents of all the cells, including the ones earlier
scanned, also play a role in deciding future moves. This leads to a slightly
different definition of configuration or Instantaneous Description (ID) in
the case of a TM.
1.4.1 Instantaneous Description
The total configuration or, for short just, configuration of a Turing Machine is the
information in respect of:
(i)
the contents of all the cells of the tape, starting from the left-most cell up to at
least the last cell containing a non-blank symbol and containing all cells up to
the cell being scanned; and
(ii)
the state of the machine and the cell currently being scanned by the Head.
There are various notations used for denoting the total configuration of a Turing
Machine.
Notation 1: We use the notations illustrated below through an example.
Let the TM be in state q5, scanning the symbol g, with the symbols on the tape being
e g u
(Head on g); then we may denote the corresponding configuration as (q5, e, g, u). And,
if the tape is like
a b c g
(Head, in state q6, on g, with no non-blank symbol after g), then the configuration is
(q6, abc, g, e) or (q6, abc ḡ) or alternatively as abcq6g by the following notation.
1.4.2 Transition Diagrams
Consider a TM with states q0, q1, q2 and h, tape symbols 0, 1 and #, and the following
next-move function:

        0           1           #
q0      (q2, 0, R)  (q1, #, R)  (q2, #, R)
q1      (q2, 0, L)  (q1, 1, R)  (h, #, N)
q2      -           -           (h, #, N)
h       -           -           -
Then, the above Turing Machine may be denoted by the Transition Diagram shown
below, where we assume that q0 is the initial state and h is a final state.
[Transition diagram: q0 to q1 on 1/#,R; q1 to q1 on 1/1,R; q0 to q2 on #/#,R and on
0/0,R; q1 to q2 on 0/0,L; q1 to h and q2 to h on #/#,N]
Figure: 1.4.2.1
Ex. 3) Design a TM M that recognizes the language L of all strings over {a, b, c}
with
(i) the number of a's = the number of b's = the number of c's, and
(ii) if (i) is satisfied, the final contents of the tape are the same as the input, i.e.,
the initial contents of the tape are also the final contents of the tape; else the
TM rejects the string.
Ex. 4) Draw the Transition Diagram of the TM that recognizes strings of the form
bⁿdⁿ, n ≥ 1, which was designed in the previous section.
Ex. 5) Design a TM that accepts the language of all palindromes over the alphabet
{a, b}. A palindrome is a string which equals the string obtained by reversing
the order of occurrence of letters in it. Further, find computations for each of
the strings (i) babb, (ii) bb, (iii) bab.
Ex. 6) Construct a TM that copies a given string over {a, b}. Further find a
computation of the TM for the string aab.
For the definition and notation for Move, assume the TM is in the configuration
(q, a1 a2 … ai−1, ai, ai+1 … an).
Case (i): δ(ai, q) = (b, p, L), for motion to the left.
Consider the following three subcases:
Case i(a): if i > 1, then the move is the activity of the TM of going from the
configuration (q, a1 a2 … ai−1, ai, ai+1 … an) to the configuration
(p, a1 … ai−2, ai−1, b ai+1 … an), and is denoted as
(q, a1 a2 … ai−1, ai, ai+1 … an) ⊢M (p, a1 … ai−2, ai−1, b ai+1 … an).
The suffix M, denoting the TM under consideration, may be dropped if the machine
under consideration is known from the context.
Case i(b): if i = 1, the move leads to a hanging configuration, as the TM is already
scanning the left-most symbol and attempts to move to the left, which is not possible.
Hence the move is not defined.
Case i(c): when i = n and b is the blank symbol #, then the move is denoted as
(q, a1 a2 … an−1, an, e) ⊢ (p, a1 a2 … an−2, an−1, e).
Case (ii)
fL(Z) = Y, if Z ∈ L
fL(Z) = N, if Z ∉ L
Remark 1.5.1
A very important fact in respect of Turing acceptability of a string (or a language)
needs our attention. The fact has been discussed in details in a later unit about
undecidability. However, we briefly mention it below.
For a TM M and an input string Z ∈ Σ*, even after a large number of moves we
may not reach the halt state. However, from this we can neither conclude that the
Halt state will be reached in a finite number of moves nor can we conclude that the
Halt state will not be reached in a finite number of moves.
This raises the question of how to decide that an input string w is not accepted by
a TM M.
An input string w is said to be not accepted by a TM M = (Q, Σ, Γ, δ, q0, h) if any
of the following three cases arises:
(i)
The TM M, on the given input w, reaches a configuration in which, for the
current state q and the currently scanned symbol x, δ(x, q) is not defined,
with q not the halt state.
(ii)
The tape Head is scanning the left-most cell containing the symbol x, the
state of M is say q, and δ(x, q) suggests a move to the left of the
current cell. However, there is no cell to the left of the left-most cell.
(iii) The TM on the given input w enters an infinite loop.
1.6 OBSERVATIONS
The concept of TM is one of the most important concepts in the theory of
Computation. In view of its significance, we discuss a number of issues in respect of
TMs through the following remarks.
Remark 1.6.1
Turing Machine is not just another computational model, which may be further
extended by another still more powerful computational model. It is not only the most
powerful computational model known so far but also is conjectured to be the ultimate
computational model.
Turing Thesis: The power of any computational process is captured within the class
of Turing Machines.
It may be noted that the Turing thesis is just a conjecture and not a theorem; hence,
the Turing Thesis cannot be logically deduced from more elementary facts. However,
the conjecture can be shown to be false if a more powerful computational model is
proposed that recognizes all the languages which are recognized by the TM model
and also recognizes at least one more language that is not recognized by any TM.
In view of the unsuccessful efforts made in this direction since 1936, when Turing
suggested his model, at least at present, it seems to be unlikely to have a more
powerful computational model than TM Model.
Remark 1.6.2
The Finite Automata and Push-Down Automata models were used only as accepting
devices for languages in the sense that the automata, when given an input string from
a language, tells whether the string is acceptable or not. The Turing Machines are
designed to play at least the following three different roles:
(i)
As accepting devices for languages, similar to the role played by FAs and
PDAs.
(ii)
As computing devices for functions; and
(iii)
As enumerators of strings of languages.
(i)
The Head can move in both Left and Right directions whereas in PDA/FA the
head moves only to the Right.
(ii)
The TM, while scanning a cell, can both read the cell and also, if required,
change the value of the cell, i.e., can write in the cell. In Finite Automata, the
Head only can read the cell. It can be shown that the Halt State version of TM is
equivalent to the Final State version of Turing Machine.
(iii) In this version, the TM halts only if, in a given state with a given symbol
under the Head, no next move is possible. In that case, the (initial) input on the
tape of the TM is unacceptable.
Definition: Acceptability of w ∈ Σ* in Final State Version
Let M1 = (Q, Σ, Γ, δ, q0, F)
be a TM in final state version. Then w is said to be acceptable if C0 is the initial
configuration with w as input string to M1 and
C0 ⊢* Cn
is such that
Cn = (p, α, a, β)
with p in F, the set of final states, a ∈ Γ, the set of tape symbols, and α, β ∈ Γ*.
(i)
A language recognizer/acceptor
(ii)
A computer of functions
For the purpose of computing functions, we make the following assumptions:
• A string Z over some alphabet, say Σ0, will be written on the tape as #Z#, where #
is the blank symbol.
• Also, initially, the TM will be scanning the right-most # of the string #Z#.
Thus, the initial configuration, (q0, #Z#), represents the starting point for the
computation of the function with Z as input.
The assumption facilitates computation of composition of functions.
Though, most of the time, we require functions of one or more arguments having only
integer values, with values of arguments under the functions again as integers, yet we
consider functions with domain and codomain over arbitrary alphabet sets, say Σ0 and
Σ1 respectively, neither of which contains the blank symbol #.
Next we define what is meant by computation, using a Turing Machine, of a
function
f: Σ0* → Σ1*.
Definition: A function f: Σ0* → Σ1* is said to be Turing-Computable, or simply
computable, if there is a Turing Machine M = (Q, Σ, Γ, δ, q0, h), where Σ contains
Σ0 ∪ Σ1, such that the following holds:
(q0, #Z#) ⊢*M (h, #P#)
whenever Z ∈ Σ0* and P ∈ Σ1* satisfy f(Z) = P.
Remark 1.7.2
It may be noted that, if the string Z contains some symbols from the set
Σ − Σ0, i.e., symbols not belonging to the domain of f, then the TM may hang or may
not halt at all.
Remark 1.7.3
Next, we discuss the case of functions which require k arguments, where k may be
any finite integer, greater than or equal to zero. For example,
the operation PLUS takes two arguments m and n and returns m + n.
The function f with the rule
f (x, y, z) = (2x + y) * z
takes three arguments.
The function C with rule
C ( ) = 17
takes zero arguments.
Let us now discuss how to represent k distinct arguments of a function f on the
tape. Suppose k = 3 and x1x2, y1y2y3 and z1z2 are three strings given as the three
arguments of the function f. If these three arguments are written on the tape as
# x1 x2 y1 y2 y3 z1 z2 #
then the above tape contents may even be interpreted as a single argument, viz.,
x1x2y1y2y3z1z2. Therefore, in order to avoid such an incorrect interpretation,
the arguments are separated by #. Thus, the above three arguments will be written on
the tape as
# x1 x2 # y1 y2 y3 # z1 z2 #
In general, if a function f takes k ≥ 1 arguments, say Z1, Z2, …, Zk, where each of these
arguments is a string over Σ0 (i.e., each Zi belongs to Σ0*), and if f(Z1, Z2, …, Zk) = P
for some P ∈ Σ1*, then we say f is Turing Computable if there is a Turing Machine
M such that
(q0, e, #Z1#Z2# … #Zk#, e) ⊢*M (h, e, #P#, e).
Remark 1.7.4
Instead of functions with countable, but otherwise arbitrary, sets as domains and
ranges, we consider only those functions for each of which the domain and range is
the set of natural numbers. This is not a serious restriction in the sense that any
countable set can, through proper encoding, be considered as a set of natural numbers.
For natural numbers, there are various representations; some of the well-known
representations are Roman numerals (e.g., VI for six), decimal numerals (6 for six)
and binary numerals (110 for six). The decimal number system uses 10 symbols, viz.,
0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The binary number system uses two symbols, denoted by
0 and 1.
In the discussion of Turing Computable Functions, the unary representation
described below is found useful. The unary number system uses one symbol only.
Let the symbol be denoted by I; then the number six is represented as IIIIII. In this
notation, zero is represented by the empty/null string. Any other number, say twenty,
is represented in the unary system by writing the symbol I twenty times. In order to
facilitate the discussion, the number n in unary notation will be denoted by Iⁿ instead
of writing the symbol I n times.
The advantage of the unary representation is that most of the symbols on the tape are
occurrences of the single input symbol I; hence the next state is generally determined
by the current state alone, because the other determinant of the next move, viz., the
tape symbol, is most of the time the unary symbol I.
We recall that for a set X, the notation X* represents the set of all finite strings of
symbols from the set X. Thus, any function f from the set of natural numbers to the set
of natural numbers, in the unary notation, is a function of the form f: {I}* → {I}*.
Definition: The function f: N → N with f(n) = m for each n ∈ N, considered as
f: {I}* → {I}* with {I} the unary number system, will be called a Turing Computable
function if a TM M can be designed such that M, starting in the initial tape
configuration
# I I … I #
with n consecutive I's between the two #'s of the above string, halts in the following
configuration
# I I … I #
containing f(n) = m I's between the two #'s.
The above idea may be further generalized to functions of more than one
integer argument. For example, SUM of two natural numbers n and m takes two
integer arguments and returns the integer (n + m). The initial configuration, with the
tape containing the representations of the two arguments n and m respectively, is of
the form
# I I … I # I I … I #
where the string contains respectively n and m I's between the respective pairs of #'s,
and the Head scans the last #. The function SUM will be Turing computable if we can
design a TM which, when started with the initial tape configuration as given above,
halts in the tape configuration given below:
# I I I I … I #
where the above string contains n + m consecutive I's between the pair of #'s.
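Before working through the design discussed next, it may help to see one concrete machine for SUM in action. The following TM is our own sketch (not the design developed in this example): starting on the right-most #, it walks left to the middle #, overwrites it with I, returns to the right end and erases one I, leaving n + m I's between the #'s:

```python
# delta for a SUM machine (our own construction):
#   q0: step left off the right-most #
#   q1: walk left over the second argument to the middle #
#   q2: overwrite the middle # with I, then walk back to the right end
#   q3: erase one I to compensate for the extra I, then halt
delta = {('q0', '#'): ('q1', '#', 'L'),
         ('q1', 'I'): ('q1', 'I', 'L'),
         ('q1', '#'): ('q2', 'I', 'R'),
         ('q2', 'I'): ('q2', 'I', 'R'),
         ('q2', '#'): ('q3', '#', 'L'),
         ('q3', 'I'): ('h', '#', 'N')}

def run(tape, pos, state='q0', halt='h'):
    tape = list(tape)
    while state != halt:
        state, symbol, move = delta[(state, tape[pos])]
        tape[pos] = symbol
        pos += {'L': -1, 'R': 1, 'N': 0}[move]
    return ''.join(tape)

# n = 3, m = 2: start scanning the right-most # of #III#II#
assert run('#III#II#', 7).count('I') == 5
# n = 0, m = 0: the tape keeps zero I's
assert run('###', 2).count('I') == 0
```

This machine silently assumes a legal initial configuration; the discussion below shows why a careful design must also handle the illegal ones.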
Example 1.7.5
Show that the SUM function is Turing Computable.
The problem in the above-mentioned example may also be stated as: construct a
TM that finds the sum of two natural numbers.
The following design of the required TM is not efficient, yet it explains a number of
issues about which a student should be aware while designing a TM for
computing a function.
Legal and Illegal Configurations for SUM function:
In order to understand the design process of any TM for a (computable) function in
general and that of SUM in particular, let us consider the possible legal as well as
illegal initial configuration types as follows:
Complexity &
Completeness
Note: in the following, I I … I denotes any sequence of Is, possibly empty; *** denotes any sequence of tape symbols, possibly empty and possibly including #; and the state written below a symbol marks the cell being scanned.
Legal initial configuration types (the Head, in state q0, scans the rightmost # shown):

Configuration (i):    *** # # #                      representing n = 0, m = 0
Configuration (ii):   *** # # I I … I #              n = 0, m ≠ 0
Configuration (iii):  *** # I I … I # #              n ≠ 0, m = 0
Configuration (iv):   *** # I I … I # I I … I #      n ≠ 0, m ≠ 0

We also treat initial configurations containing two or more #s to the left of the # being scanned as valid, where each block between #s is a sequence of Is only.
Some illegal initial configurations:

Configuration (v):

*** I ***

where at least one of the *** does not contain # and initially the Head is scanning an I or any symbol other than #. The configuration is invalid as it does not contain the required number of #s.

Configuration (vi), though a special case of the above-mentioned configuration, needs to be mentioned separately:

*** # I I … I ***

where *** does not contain #, so that the configuration represents only one of the two natural numbers.
Also, in the case of legal initial configurations, the final configuration that represents the result m + n should be of the form

# I I … I #
          halt

containing m + n Is between the two #s.
(a)	In the case of each of the illegal initial configurations, the TM should fail to return a result, in that either
(ii)	the TM Head attempts to fall off the left edge (i.e., the TM reaches a hanging configuration); or
(iii)	the TM does not have a move in a non-Halt state.
(b)
For the TM for the SUM function, in the case of legal configurations, the first move of the Head should be to the Left only.
(c)
In this case, initially there are at least two more #s on the left of the # being scanned. Therefore, to keep count of the #s, we must change state after scanning each #. Let q1, q2 and q3 be the states which the required TM enters after scanning the three #s.
(d)
In this case the movement of the Head, after scanning the initial # and also after
scanning one more # on the left, should continue to move to the Left only, so as
to be able to ensure the presence of third # also. Also, in states q1 and q2, the
TM need not change state on scanning I.
Thus we have,

δ(q0, #) = (q1, #, L),
δ(q1, #) = (q2, #, L)

and

δ(q1, I) = (q1, I, L), δ(q2, I) = (q2, I, L).
However, from this point onward, the Head should start moving to the Right.

∴ δ(q2, #) = (q3, #, R).

Thus, at this stage we are in a configuration of the form

# I I … I # I I … I #
  q3

with the Head, in state q3, scanning the cell immediately to the right of the leftmost #.
For further guidance in the matter of the design of the required TM, we
again look back on the legal configurations.
(e)
In the configuration just shown above in q3, if the symbol being scanned is # (as
in case of configuration (i) and configuration (ii)), then the only action required
is to skip over Is, if any, and halt at the next # on the right.
However, if the symbol being scanned in q3 of the above configuration, happens
to be an I (as in case of configuration (iii) and configuration (iv)) then the
actions to be taken, that are to be discussed after a while, have to be different.
But in both cases, movement of the Head has to be to the Right. Therefore, we
need two new states say q4 and q5 such that
δ(q3, #) = (q4, #, R)
(the scanning of the argument on the left is completed).
δ(q3, I) = (q5, I, R)
(the scanning of the argument on the left is initiated).
Taking into consideration the cases of the initial configuration (i) and configuration
(ii) we can further say that
δ(q4, I) = (q4, I, R)
δ(q4, #) = (halt, #, N)
Next, taking into consideration the cases of initial configuration (iii) and configuration
(iv) cases, we decide about next moves including the states etc., in the current state
q5.
We are in the following general configuration
(that subsumes the initial configuration (iii) and configuration (iv) cases)
# I I … I # I I … I #
       q5 (scanning an I of the left block)
Where the blank spaces between #s may be empty or non-empty sequence of Is.
Next landmark symbol is the next # on the right. Therefore, we may skip over the Is
without changing the state i.e.,
δ(q5, I) = (q5, I, R)
But we must change the state when # is encountered in q5, otherwise, the next
sequence of Is will again be skipped over and we will not be able to distinguish
between configuration (iii) and configuration (iv) for further necessary action.
Therefore,

δ(q5, #) = (q6, #, R)

(notice that, though at this stage the scanning of the argument on the left is completed, we cannot enter state q4, as was done earlier, because in this case the sequence of subsequent actions has to be different: the # in the middle has to be deleted, which is not done in state q4).
Thus, at this stage we have the general configuration

# I I … I # I I … I #
            q6 (scanning the cell just right of the middle #)
Next, in q6, if the current symbol is a #, as is the case in configuration (iii), then we must halt after moving to the left, i.e.,

δ(q6, #) = (halt, #, L)
and we reach the final configuration

# I I … I # #
          halt

However, if in q6 the current symbol is an I, as in configuration (iv), i.e., we have

# I I … I # I I … I #
            q6 (scanning an I)
Then the following sequence of actions is required for deleting the middle #:
Action (i): Remove the # in the middle so that we get a continuous sequence of Is to represent the final result. For this purpose, we move to the left and replace the # by I. But then the tape contains one I more than the number of Is required in the final result. Therefore,

Action (ii): We must find the rightmost I, replace it by # and stop, i.e., enter the halt state. In order to accomplish Action (ii), we reach the next # on the right, skipping over all Is; on reaching the desired #, we move left to the I there. Next, we replace that I by # and halt.
Translating the above actions in terms of formal moves, we get

For Action (i):
δ(q6, I) = (q7, I, L)
δ(q7, #) = (q8, I, R)
(at this stage we have replaced the # in the middle of the two sequences of Is by an I)

For Action (ii):
δ(q8, I) = (q8, I, R)
δ(q8, #) = (q9, #, L)
δ(q9, I) = (halt, #, N)
It can be verified that through above-mentioned moves, the designed TM does not
have a next-move at some stage in the case of each of the illegal configurations.
Formally, the SUM TM can be defined as:

SUM = (Q, Σ, Γ, δ, q0, halt)

where Q = {q0, q1, …, q9, halt}
Σ = {I}
Γ = {I, #}

and the next-move (partial) function δ is given by the table:

        I              #
q0      -              (q1, #, L)
q1      (q1, I, L)     (q2, #, L)
q2      (q2, I, L)     (q3, #, R)
q3      (q5, I, R)     (q4, #, R)
q4      (q4, I, R)     (halt, #, N)
q5      (q5, I, R)     (q6, #, R)
q6      (q7, I, L)     (halt, #, L)
q7      -              (q8, I, R)
q8      (q8, I, R)     (q9, #, L)
q9      (halt, #, N)   -
halt    -              -
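The table can be exercised mechanically. The following Python sketch (our own illustration, not part of the original text; the name run_sum is ours) encodes the moves above and runs the machine from a legal initial configuration:

```python
# Next-move (partial) function of the SUM machine, copied from the table:
# (state, scanned symbol) -> (new state, symbol written, head movement)
DELTA = {
    ("q0", "#"): ("q1", "#", -1),
    ("q1", "I"): ("q1", "I", -1), ("q1", "#"): ("q2", "#", -1),
    ("q2", "I"): ("q2", "I", -1), ("q2", "#"): ("q3", "#", +1),
    ("q3", "I"): ("q5", "I", +1), ("q3", "#"): ("q4", "#", +1),
    ("q4", "I"): ("q4", "I", +1), ("q4", "#"): ("halt", "#", 0),
    ("q5", "I"): ("q5", "I", +1), ("q5", "#"): ("q6", "#", +1),
    ("q6", "I"): ("q7", "I", -1), ("q6", "#"): ("halt", "#", -1),
    ("q7", "#"): ("q8", "I", +1),
    ("q8", "I"): ("q8", "I", +1), ("q8", "#"): ("q9", "#", -1),
    ("q9", "I"): ("halt", "#", 0),
}

def run_sum(n, m, max_steps=10_000):
    """Run SUM on # I^n # I^m # with the Head on the rightmost #."""
    tape = list("#" + "I" * n + "#" + "I" * m + "#")
    head, state = len(tape) - 1, "q0"
    for _ in range(max_steps):
        if state == "halt":
            return tape.count("I")        # the result left on the tape
        move = DELTA.get((state, tape[head]))
        if move is None:                  # no next move: illegal configuration
            raise RuntimeError("machine stuck")
        state, tape[head], step = move
        head += step
    raise RuntimeError("step bound exceeded")
```

For example, run_sum(3, 2) halts with five Is on the tape (exercising configuration (iv)), while run_sum(0, 0) exercises configuration (i).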
The above approach extends to the multiplication function: the product n · m is also Turing computable. The design begins, as for SUM, with the moves

δ(q0, #) = (q1, #, L)
δ(q1, #) = (q2, #, L)
δ(q1, I) = (q1, I, L)
δ(q2, #) = (q3, #, R)
δ(q2, I) = (q2, I, L)
Case I

When n = 0, covering configuration (i) and configuration (ii), the general configuration is of the form

# # I I … I #
  q3

As one of the multiplier and multiplicand is zero, the result must be zero, and the tape should finally contain the representation of zero. We should enter a state, say q4, which skips all Is and meets the next # on the right. Once the Head meets the required #, the Head should move to the left replacing all Is by #s and halt on the # it encounters, so that we have the configuration

# # # … # #
  Halt

The moves suggested by the above explanation, covering configuration (i) and configuration (ii), are:

δ(q3, #) = (q4, #, R)
δ(q4, I) = (q4, I, R)
δ(q4, #) = (q5, #, L)
δ(q5, I) = (q5, #, L)
δ(q5, #) = (Halt, #, R)
Case II

Covering configuration (iii), we have at one stage

# I I … I # #
  q3

If we take δ(q3, I) = (q4, #, R), then we get the following desired configuration in a finite number of moves:

# # # … # #
       Halt
Case III

While covering configuration (iv), at one stage we are in the configuration

# I I … I # I I … I #
  q3
  ← n Is →  ← m Is →

and must finally reach a configuration of the form

# I I … I #
  ← n · m Is →
          Halt

The strategy to get the representation of n · m Is consists of the following steps:

(i) Produce, to the right of the block of m Is and separated from it by a #, a further block of (n − 1) · m Is:

# … # I I … I # I I … I #
      ← m Is →  ← (n − 1) · m Is →

Then we replace the # between the two sequences of Is by I and replace the right-most I by # and halt.
The case of illegal initial configurations may be handled on similar lines as was done for the SUM Turing machine.

Remark 1.7.8

The informal details given above for the design of a TM for the multiplication function are acceptable as a complete answer/solution for any problem about the design of a Turing Machine. However, if a more detailed formal design is required, the examiner should explicitly mention the required details.
Details of case (iii) are not being provided for the following reasons:
(i)
(ii)
After some time we will learn how to construct more complex machines out of already constructed machines, starting with the construction of very simple machines. One of the simple machines discussed later is a copying machine, which copies symbols from one part of the tape to other locations on the tape.
Ex. 7)	Design a TM to compute the binary function MONUS (also called PROPER SUBTRACTION) defined as follows:

monus : N × N → N
(Note: 0 also belongs to N)

such that

monus (m, n) = m − n,  if m ≥ n
             = 0,      else.
Ex. 8)	Design a TM to compute the function f: N → {0, 1} such that

f (n) = 0,  if n is even,
      = 1,  if n is odd.
1.8 SUMMARY
In this unit, after giving an informal idea of what a Turing machine is, the concept is formally defined and illustrated through a number of examples. Further, it is explained how a TM can be used to compute mathematical functions. Finally, a technique is explained for designing more and more complex TMs out of already designed TMs, starting with some very simple TMs.
1.9 SOLUTIONS/ANSWERS
Ex. 1)
Figure: 1.9.1
Γ = {a, b, #}.
The next-move function δ is given by the transition diagram above. If the input string is of even length, the TM reaches the halt state h. However, if the input string is of odd length, then the TM does not find any next move in state q1, indicating rejection of the string.
Ex. 2)
Figure: 1.9.2
Figure: 1.9.3
In state q1, we move towards the left, skipping over all symbols to reach the leftmost symbol of the tape, and enter state q5.
In q5, we start searching for b by moving to the right, skipping over all non-blank symbols except b, and if such a b exists, reach state q2.
In state q2, we move towards the left, skipping over all symbols to reach the leftmost symbol of the tape, and enter q6.
In q6, we start searching for c by moving to the right, skipping over all non-blank symbols except c, and if such a c exists, reach state q3.
In state q3, we move towards the left, skipping all symbols to reach the leftmost symbol of the tape, and enter state q0.
If in any one of the states q4, q5 or q6 no next move is possible, then reject the string.
Else repeat the above process till all a's are converted to A's, all b's to B's and all c's to C's.
Step II is concerned with restoring a's from A's, b's from B's and c's from C's, while moving from right to left in state q7, and then, after successfully completing the work, moving to the halt state h.
Ex. 4)
The Transition Diagram of the TM that recognizes strings of the form b^n d^n, n ≥ 1, designed in the previous section, is given by the following Figure.
Figure: 1.9.4
Ex. 5)
Figure: 1.9.5
In state q0, at any stage, if the TM finds the blank symbol then the TM has found a palindrome of even length. Otherwise, it notes the symbol being read and attempts to match it with the last non-blank symbol on the tape. If the symbol is a, the TM replaces it by # and goes to state q1, in which it skips all a's and b's; on #, the TM from q1 goes to q3 to find a matching a in the last non-blank symbol position. If a is found, the TM goes to q5 and replaces the a by #. However, if b is found, then the TM has no move, indicating the string is not a palindrome. However, if in state q3 only #s are found, then it indicates that the previous a was the middle-most symbol of the given string, indicating a palindrome of odd length.
Similar is the case when b is found in state q0, except that the next state is
q2 in this case and roles of as and bs are interchanged in the above
argument.
Figure: 1.9.6
In the solution of the problem, we can deviate slightly from our convention of
placing the input string on the left-most part of the tape. In this case, we
place # in the leftmost cell of the tape followed by the input string.
The initial configuration is of the form

# I I … I # I I … I #
  ← m Is →  ← n Is →
                    q0

with the Head scanning the rightmost #. The first move takes the Head to the left: δ(q0, #) = (q1, #, L). The two possible configurations in q1,

… I …        and        … # …
    q1                       q1

guide us to the moves

δ(q1, I) = (q2, #, L)
(change of state is essential, else other Is will also be converted to #s), and
δ(q1, #) = (halt, #, N)
Observation 3: The moves are guided by the principle that, in each round, we convert one I of the right block and the corresponding I of the left block to #:

δ(q2, I) = (q2, I, L)
δ(q2, #) = (q3, #, L)
δ(q3, I) = (q3, I, L)
δ(q3, #) = (q4, #, R)
(We have reached the right-most # on the left of all the Is, as shown below)

# I I … I # …
  q4 (scanning the first I)

If instead we have a configuration of the form

# # …
  q4 (scanning a #)

then it must have resulted from an initial configuration in which m < n.
Therefore, we must now enter a state say q7 which skips all Is on the right and then
halts
Therefore,

δ(q4, #) = (q7, #, R)
δ(q7, I) = (q7, I, R)
δ(q7, #) = (halt, #, N)

Next, we consider δ(q4, I):

δ(q4, I) = (q5, #, R)
(the state must be changed, otherwise all Is will be changed to #s)
δ(q5, I) = (q5, I, R)
δ(q5, #) = (q6, #, R)
(the middle # is being crossed while moving from left to right)
δ(q6, I) = (q6, I, R)
δ(q6, #) = (q0, #, N)
(the left-most # on the right side is scanned in q6 to reach q0, so that the whole process may be repeated again.)
Summarizing the above moves, the transition table for the δ function is:

        I             #
q0      -             (q1, #, L)
q1      (q2, #, L)    (halt, #, N)
q2      (q2, I, L)    (q3, #, L)
q3      (q3, I, L)    (q4, #, R)
q4      (q5, #, R)    (q7, #, R)
q5      (q5, I, R)    (q6, #, R)
q6      (q6, I, R)    (q0, #, N)
q7      (q7, I, R)    (halt, #, N)
halt    -             -
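Since parts of the printed table are hard to read, the reconstruction above can be sanity-checked by simulation. The Python sketch below (our own illustration; the name run_monus is ours) runs the machine on a few instances:

```python
# Reconstructed next-move function of the MONUS machine:
DELTA_MONUS = {
    ("q0", "#"): ("q1", "#", -1),
    ("q1", "I"): ("q2", "#", -1), ("q1", "#"): ("halt", "#", 0),
    ("q2", "I"): ("q2", "I", -1), ("q2", "#"): ("q3", "#", -1),
    ("q3", "I"): ("q3", "I", -1), ("q3", "#"): ("q4", "#", +1),
    ("q4", "I"): ("q5", "#", +1), ("q4", "#"): ("q7", "#", +1),
    ("q5", "I"): ("q5", "I", +1), ("q5", "#"): ("q6", "#", +1),
    ("q6", "I"): ("q6", "I", +1), ("q6", "#"): ("q0", "#", 0),
    ("q7", "I"): ("q7", "I", +1), ("q7", "#"): ("halt", "#", 0),
}

def run_monus(m, n, max_steps=100_000):
    """Run the machine on # I^m # I^n # with the Head on the rightmost #."""
    tape = list("#" + "I" * m + "#" + "I" * n + "#")
    head, state = len(tape) - 1, "q0"
    for _ in range(max_steps):
        if state == "halt":
            return tape.count("I")    # Is remaining on the tape
        state, tape[head], step = DELTA_MONUS[(state, tape[head])]
        head += step
    raise RuntimeError("step bound exceeded")
```

Each round deletes one I from the right block and one from the left block; for example, run_monus(5, 3) halts with two Is left, representing monus(5, 3) = 2.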
Ex. 8)
The TM deletes the Is of the input one by one, moving left and toggling between two states that record whether the number of Is deleted so far is odd (q1) or even (q2). If the input is exhausted in q2, then n is even and the machine halts with no I on the tape (representing 0); if it is exhausted in q1, then n is odd and the machine writes a single I (representing 1) before halting:

# #                  # I #
  halt        and        halt
(n even)             (n odd)

The moves are:

δ(q0, #) = (q2, #, L)
δ(q2, I) = (q1, #, L)
δ(q1, I) = (q2, #, L)
δ(q2, #) = (halt, #, N)
δ(q1, #) = (q3, #, R)
δ(q3, #) = (halt, I, R)

giving the transition table:

        I             #
q0      (q1, #, L)    (q2, #, L)
q1      (q2, #, L)    (q3, #, R)
q2      (q1, #, L)    (halt, #, N)
q3      -             (halt, I, R)
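Again, the reconstructed moves can be checked by running them; the Python sketch below is ours, for illustration only:

```python
# Reconstructed moves of the even/odd machine: q2 records "even so far",
# q1 records "odd so far", as the Is are erased from right to left.
DELTA_PARITY = {
    ("q0", "#"): ("q2", "#", -1), ("q0", "I"): ("q1", "#", -1),
    ("q1", "I"): ("q2", "#", -1), ("q1", "#"): ("q3", "#", +1),
    ("q2", "I"): ("q1", "#", -1), ("q2", "#"): ("halt", "#", 0),
    ("q3", "#"): ("halt", "I", +1),
}

def run_parity(n, max_steps=10_000):
    """Run the machine on # I^n #, Head on the rightmost #; returns f(n)."""
    tape = list("#" + "I" * n + "#")
    head, state = len(tape) - 1, "q0"
    for _ in range(max_steps):
        if state == "halt":
            return tape.count("I")    # 0 if n is even, 1 if n is odd
        state, tape[head], step = DELTA_PARITY[(state, tape[head])]
        head += step
    raise RuntimeError("step bound exceeded")
```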
1.10 FURTHER READINGS
1.
2.
3.
4.
UNIT 2 ALGORITHMICALLY
UNSOLVABLE PROBLEMS
Structure                                               Page Nos.
2.0   Introduction                                      39
2.1   Objectives                                        39
2.2   Decidable and Undecidable Problems                39
2.3   The Halting Problem                               40
2.4   Reduction to Another Undecidable Problem          44
2.5   Undecidability of Post Correspondence Problem     46
2.6   Undecidable Problems for Context Free Languages   47
2.7   Other Undecidable Problems                        48
2.8   Summary                                           49
2.9   Solutions/Answers                                 49
2.10  Further Readings                                  52
2.0 INTRODUCTION
In this unit, we discuss issues and problems that exhibit the limitations of computing
devices in solving problems. We also prove the undecidability of the halting
problem. It is related to Gödel's Incompleteness Theorem, which states that there is no system of logic strong enough to prove all true sentences of number theory.
In addition, we will discuss a number of other problems which, though they can be formulated properly, are not solvable through any computational means. We will prove that such problems cannot be solved no matter what language is used, what machine is used, and how much computational resource is devoted to attempting to solve the problem.
2.1 OBJECTIVES
After going through this unit, you should be able to:
x
x
x
x
where q0 Z denotes the initial configuration, with the left-most symbol of the string Z being scanned in state q0, and qf g(Z) denotes the final configuration.
A function is said to be uncomputable if no such machine exists. There may be a
Turing machine that can compute f on part of its domain, but we call the function
computable only if there is a Turing machine that computes the function on the whole
of its domain.
For some problems, we are interested in a simpler solution in terms of "yes" or "no". For example, we consider the following problem about context-free grammars: for a context-free grammar G, is the language L(G) ambiguous? For some G the answer will be "yes", for others it will be "no", but clearly we must have one or the other. The problem is to decide whether the statement is true for any G we are given. The domain for this problem is the set of all context-free grammars. We say that a problem is decidable if there exists a Turing machine that gives the correct answer for every statement in the domain of the problem.
Similarly, consider the problem of equivalence of context free grammar i.e., to
determine whether two context free grammars are equivalent. Again, given context
free grammars G1 and G2, the answer may be yes or no. The problem is to
decide whether the statement is true for any two given context free grammars G1 and
G2. The domain for this problem is the set of all context free grammars. We say that
a problem is decidable if there exists a Turing machine that gives the correct answer
for every statement in the domain of the problem.
A class of problems with two outputs "yes" or "no" is said to be decidable (solvable) if there exists some definite algorithm which always terminates (halts) with one of the two outputs "yes" or "no". Otherwise, the class of problems is said to be undecidable (unsolvable).
x
x
What we need is an algorithm that can determine the correct answer for any M and w by performing some analysis on the machine's description and the input. But we will show that no such algorithm exists.
Let us first see the proof devised by Alan Turing (1936) that the halting problem is unsolvable.
Suppose you have a solution to the halting problem in terms of a machine, say, H.
H takes two inputs:
1. a program M, and
2. an input w for the program M.

H outputs "halt" if M halts on w, and "loop" otherwise.
So now H can be revised to take M as both inputs (the program and its input) and H
should be able to determine if M will halt on M as its input.
Let us construct a new, simple algorithm K that takes H's output as its input and does the following:

1. if H outputs "loop", then K halts;
2. if H outputs "halt", then K loops forever.
If H says that K halts, then K itself would loop (that's how we constructed it). If H says that K loops, then K will halt. In either case H gives the wrong answer for K. Thus H cannot work in all cases.
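The case analysis can be dramatized in Python (purely illustrative; halts stands for the hypothetical decider H, and programs are modelled as Python functions). Whatever answer a candidate halts gives about K run on itself, K is built to do the opposite, so every candidate is wrong on at least one input:

```python
def make_K(halts):
    """Turing's diagonal construction from a claimed halting decider."""
    def K(p):
        if halts(p, p):          # the oracle says p halts on itself ...
            while True:          # ... so K deliberately loops forever
                pass
        return "halted"          # otherwise K halts at once
    return K

def wrong_about_K(halts):
    K = make_K(halts)
    claims_halt = halts(K, K)
    # By construction, K(K) halts exactly when the oracle says it loops:
    K_really_halts = not claims_halt
    return claims_halt != K_really_halts   # the oracle is wrong either way

# No candidate decider survives the construction; try two naive ones:
for candidate in (lambda p, w: True, lambda p, w: False):
    assert wrong_about_K(candidate)
```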
We've shown that it is possible to construct an input that causes any solution H to fail. Hence, the halting problem is undecidable.
Now, we formally define what we mean by the halting problem.
Definition 1.1: Let WM be a string that describes a Turing machine M = (Q, Σ, Γ, δ, q0, F), and let w be a string in Σ*. We will assume that WM and w are encoded as strings of 0s and 1s. A solution of the halting problem is a Turing machine H which, for any WM and w, performs the computation

q0 WM w ⊢* x1 qy x2   if M applied to w halts, and
q0 WM w ⊢* y1 qn y2   if M applied to w does not halt.

Here qy and qn are both final states of H.
Theorem 1.1: There does not exist any Turing machine H that behaves as required
by Definition 1.1. The halting problem is therefore undecidable.
Proof: We provide proof by contradiction. Let us assume that there exists an
algorithm, and consequently some Turing machine H, that solves the halting
problem. The input to H will be the string WM w. The requirement is then that, the
Turing machine H will halt with either a yes or no answer. We capture this by asking
that H will halt in one of two corresponding final states, say, qy or qn . We want H to
operate according to the following rules:
q0 WM w ⊢*M x1 qy x2
q0 WM w ⊢*M y1 qn y2

(Block diagram: the input WM w enters H in state q0; H halts in qy if M applied to w halts, and in qn otherwise.)

We can achieve this by adding two more states, say, q1 and q2. Transitions are defined from qy to q1, from q1 to q2 and from q2 to q1, regardless of the tape symbol, in such a way that the tape remains unchanged. This is shown by another block diagram given below.
(Block diagram: the modified machine; the qy exit is routed into the loop between q1 and q2, while the qn exit halts.)
2. If H says "yes", then apply M to w. But M must halt, so it will ultimately tell us whether w is in L or not.
Does a TM meet its specification? That is, does it have any bugs?

Not so surprising: although this result is sweeping in scope, maybe it is not too surprising. If a simple question such as whether a program halts or not is undecidable, why should one expect that any other property of the input/output behaviour of programs is decidable? Rice's theorem makes it clear that failure to decide halting implies failure to decide any other interesting question about the input/output behaviour of programs. Before we consider Rice's theorem, we need to understand the concept of problem reduction, on which its proof is based.
Reducing problem B to problem A means finding a way to convert problem B to problem A, so that a solution to problem A can be used to solve problem B. One may ask: why is this important? A reduction of problem B to problem A shows that problem A is at least as difficult to solve as problem B. Also, we can show the following:
x
x
M1 halts on all inputs also tells us whether M halts on input w, which would be a
solution to the halting problem.
•	The fact that the totality problem is undecidable means that we cannot write a program that can find any infinite loop in any program.
•	The fact that the equivalence problem is undecidable means that the code optimization phase of a compiler may improve a program, but can never guarantee finding the optimally efficient version of the program. There may be potentially improved versions of the program that it cannot even be sure are equivalent.
We now describe a more general way of showing that a problem is undecidable, i.e., Rice's theorem. First we introduce some definitions.
•	A property of a program (TM) can be viewed as the set of programs that have that property.
•	A functional (or non-trivial) property of a program (TM) is one that some programs have and some don't.
x
x
(ii)
nontrivial, in the sense that it is a property of some programs but not all
programs.
Rice's theorem can be used to show that whether the language accepted by a Turing machine is context-free, regular, or even finite are undecidable problems.
Not all properties of programs are functional.
x
x
x
If there exists a sequence (i, j, k, …, m) satisfying the above conditions, then (i, j, k, …, m) is a solution of the PCP.
Let us consider some examples.
Example 3: Consider the following instance of the PCP:

Alphabet Σ = {a, b}
List L = (a, ab)
List M = (aa, b)

We see that (1, 2) is a sequence of integers that solves this PCP instance, since the concatenation of a and ab is equal to the concatenation of aa and b (i.e., w1w2 = v1v2 = aab). Other solutions include (1, 2, 1, 2), (1, 2, 1, 2, 1, 2) and so on.
Example 4: Consider the following instance of the PCP:

Alphabet Σ = {0, 1}
List L = (0, 01000, 01)
List M = (000, 01, 1)

A sequence of integers that solves this problem is (2, 1, 1, 3), since the concatenation of 01000, 0, 0 and 01 is equal to the concatenation of 01, 000, 000 and 1 (i.e., w2w1w1w3 = v2v1v1v3 = 010000001).
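Verifying a proposed solution of a PCP instance, unlike finding one, is entirely mechanical. A small Python checker (our own illustration) confirms the solutions claimed in Examples 3 and 4:

```python
def is_pcp_solution(L, M, seq):
    """True iff the 1-based index sequence seq solves the instance (L, M)."""
    w = "".join(L[i - 1] for i in seq)   # w_i1 w_i2 ... w_ik
    v = "".join(M[i - 1] for i in seq)   # v_i1 v_i2 ... v_ik
    return w == v

# Example 3: the sequence (1, 2) gives aab on both sides.
assert is_pcp_solution(["a", "ab"], ["aa", "b"], [1, 2])
# Example 4: the sequence (2, 1, 1, 3) gives 010000001 on both sides.
assert is_pcp_solution(["0", "01000", "01"], ["000", "01", "1"], [2, 1, 1, 3])
```

The checker says nothing about instances with no solution; deciding whether any such sequence exists at all is precisely the undecidable part.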
where the set of productions P is the union of two subsets: the first set PA consists of

S → SA,
SA → ui SA ai | ui ai,   i = 1, 2, …, n,

and the second set PB consists of

S → SB,
SB → vi SB ai | vi ai,   i = 1, 2, …, n.

Now take

GA = ({S, SA}, {a1, a2, …, am} ∪ Σ, PA, S)

and

GB = ({S, SB}, {a1, a2, …, am} ∪ Σ, PB, S)

Then,

LA = L(GA),  LB = L(GB),  and  L(G) = LA ∪ LB.
Consequently, if G is ambiguous, then the Post correspondence problem with the pair (A, B) has a solution. Conversely, if G is unambiguous, then the Post correspondence problem cannot have a solution.
If there existed an algorithm for solving the ambiguity problem, we could adapt it to
solve the Post correspondence problem. But, since there is no algorithm for the Post
correspondence problem, we conclude that the ambiguity problem is undecidable.
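For a small instance, the link between PCP solutions and common elements of L(GA) and L(GB) can be explored by brute force. In the Python sketch below (our own illustration), the new symbols ai are rendered as the two-character stand-ins "a1", "a2", …:

```python
from itertools import product

def common_element(A, B, max_len=4):
    """Search for a word in L(GA) and L(GB); one exists iff the PCP
    instance (A, B) has a solution of length at most max_len."""
    markers = [f"a{i}" for i in range(1, len(A) + 1)]
    for k in range(1, max_len + 1):
        for seq in product(range(len(A)), repeat=k):
            tail = "".join(markers[i] for i in reversed(seq))
            word_a = "".join(A[i] for i in seq) + tail   # u_i1 ... u_ik a_ik ... a_i1
            word_b = "".join(B[i] for i in seq) + tail   # v_i1 ... v_ik a_ik ... a_i1
            if word_a == word_b:
                return word_a
    return None

# The instance of Example 3: the PCP solution (1, 2) shows up as the
# common word (a)(ab)(a2)(a1) = (aa)(b)(a2)(a1).
assert common_element(["a", "ab"], ["aa", "b"]) == "aaba2a1"
```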
Some other undecidable problems:

•	Does Turing machine M halt for any input? (That is, is L(M) = ∅?)
•	If G is an unrestricted grammar, does L(G) = ∅?
•	Does L(G) = Σ*?
•	Does L1 ∩ L2 = ∅?
•	Does L1 = L2?
•	Does L1 ⊆ L2?
•	Is L empty?
•	Is L finite?
Ex.7)
2.8 SUMMARY
x
x
x
x
2.9 SOLUTIONS/ANSWERS
Ex. 1)
The problem is described as follows: Given any Turing machine M = (Q, Σ, Γ, δ, q0, F) and any q ∈ Q, w ∈ Σ+, determine whether the Turing machine M, when given input w, ever enters state q.
The only way a Turing machine M halts is if it enters a state q for which some transition δ(qi, ai) is undefined. Add a new final state Z to the Turing machine, and add all these missing transitions to lead to state Z. Now use the (assumed) state-entry procedure to test whether state Z is ever entered when M is given input w. This will reveal whether the original machine M halts. We conclude that it must not be possible to build the assumed state-entry procedure.
Ex. 2)
This is another problem which is undecidable. The problem is described as follows: Given a Turing machine M, does M halt when given a blank input tape?
Here, we will reduce the blank tape halting problem to the halting problem.
Given M and w, we first construct from M a new machine Mw that starts
with a blank tape, writes w on it, then positions itself in configuration q0w.
After that, Mw acts exactly like M. Hence, Mw will halt on a blank tape if
and only if M halts on w.
Suppose that the blank tape halting problem were decidable. Given any M
and w, we first construct Mw, then apply the blank tape halting problem
algorithm to it. The conclusion tells us whether M applied to w will halt.
Since this can be done for any M and w, an algorithm for the blank tape
halting problem can be converted into an algorithm for the halting problem.
Since the halting problem is undecidable, the same must be true for the
blank tape halting problem.
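The machine Mw built in this reduction is nothing but a wrapper around M. Modelling machines as Python functions makes the construction transparent (an illustration of ours, not a literal Turing machine):

```python
def make_Mw(M, w):
    """Machine that, started on a blank tape, first writes w and then behaves
    exactly like M on w; so Mw halts on a blank tape iff M halts on w."""
    def Mw():
        tape = list(w)      # step 1: write w on the initially blank tape
        return M(tape)      # step 2: act exactly like M started on w
    return Mw

# A toy, trivially halting "machine"; a blank-tape halting decider applied
# to make_Mw(M, w) would now answer "does M halt on w?".
M = lambda tape: len(tape)
assert make_Mw(M, "abc")() == 3
```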
Ex. 3)
There is no solution to this problem, since for any potential solution, the
concatenation of the strings from list L will contain half as many letters as
the concatenation of the corresponding strings from list M.
Ex. 4)
We cannot have a string beginning with w2 = abb, as the counterpart v2 = bb exists in the other sequence and the first character does not match. Similarly, no string can begin with w3 = bab, as the counterpart v3 = abb exists in the other sequence and the first character does not match. The next choice left with us is to start the string with w1 = ba from L and the counterpart v1 = bab from M. So, we have
So, we have
ba
bab
The next choice from L must begin with b. Thus, we choose either w1 or w3, as their strings start with the symbol b. But the choice of w1 will make the two strings look like:
baba
babbab
While the choice of w3 directs the choice of v3, and the strings will look like:
babab
bababb
Since the string from list M again exceeds the string from list L by a single symbol, a similar argument shows that we should pick w3 from list L and v3 from list M. Thus, there is only one sequence of choices that generates compatible strings, and for this sequence the string from M is always one character longer. Thus, this instance of PCP has no solution.
Ex. 5)
We see that ( 2, 1, 1, 3 ) is a sequence of integers that solves this PCP
instance, since the concatenation of babbb, b, b and ba is equal to the
concatenation of ba, bbb, bbb and a (i.e., w2 w1 w1 w3 = v2 v1 v1 v3 =
babbbbbba).
Ex. 6)
For each string in A and corresponding string in B, the length of string of
A is less than counterpart string of B for the same sequence number.
Hence, the string generated by a sequence of strings from A is shorter than
the string generated by the sequence of corresponding strings of B.
Therefore, the PCP has no solution.
Ex. 7)
Proof: Consider two sequences of strings A = (u1, u2, …, um) and B = (v1, v2, …, vm) over some alphabet Σ. Choose a new set of distinct symbols a1, a2, …, am such that

{a1, a2, …, am} ∩ Σ = ∅,

and consider the two grammars

GA = ({SA}, Σ ∪ {a1, a2, …, am}, PA, SA)
with PA:  SA → ui SA ai | ui ai,   i = 1, 2, …, n,

and

GB = ({SB}, Σ ∪ {a1, a2, …, am}, PB, SB)
with PB:  SB → vi SB ai | vi ai,   i = 1, 2, …, n.

Suppose that L(GA) and L(GB) have a common element. Then the pair (A, B) has a PC-solution. Conversely, if the pair does not have a PC-solution, then L(GA) and L(GB) cannot have a common element. We conclude that L(GA) ∩ L(GB) is nonempty if and only if (A, B) has a PC-solution.
2.
3.
4.
5.
Complexity of Algorithms

Structure                                             Page Nos.
3.0   Introduction                                    53
3.1   Objectives                                      55
3.2   Notations for the Growth Rates of Functions     55
      3.2.1
      3.2.2
      3.2.3
      3.2.4
      3.2.5
      3.2.6
      3.2.7
      3.2.8
3.3                                                   65
3.4                                                   70
3.5                                                   71
3.6                                                   75
3.7                                                   76
3.8                                                   79
3.0 INTRODUCTION
In unit 2 of the block, we discussed a number of problems which cannot be solved by
algorithmic means and also discussed a number of issues about such problems.
In this unit, we will discuss the issue of efficiency of computation of an algorithm in
terms of the amount of time used in its execution. On the basis of analysis of an
algorithm, the amount of time that is estimated to be required in executing an
algorithm, will be referred to as the time complexity of the algorithm. The time
complexity of an algorithm is measured in terms of some (basic) time unit (not second
or nano-second). Generally, time taken in executing one move of a TM, is taken as
(basic) time unit for the purpose. Or, alternatively, time taken in executing some
elementary operation like addition, is taken as one unit. More complex operations like
multiplication etc, are assumed to require an integral number of basic units. As
mentioned earlier, given many algorithms (solutions) for solving a problem, we would
like to choose the most efficient algorithm from amongst the available ones. For
comparing efficiencies of algorithms, that solve a particular problem, time
complexities of algorithms are considered as functions of the sizes of the problems (to
be discussed). The time complexity functions of the algorithms are compared in terms
of their growth rates (to be defined) as growth rates are considered important measures
of comparative efficiencies.
The concept of the size of a problem, though a fundamental one, is difficult to define precisely. Generally, the size of a problem is measured in terms of the size of the input. The concept of the size of an input of a problem may be explained informally through examples. In the case of multiplication of two n × n (square) matrices, the size of the problem may be taken as n², i.e., the number of elements in each matrix to be multiplied. For problems involving polynomials, the degrees of the polynomials may be taken as the measure of the sizes of the problems.
For a problem, a solution with time complexity which can be expressed as a
polynomial of the size of the problem, is considered to have an efficient solution.
However, not many problems that arise in practice, admit any efficient algorithms, as
these problems can be solved, if at all, by only non-polynomial time algorithms. A
problem which does not have any (known) polynomial time algorithm is called an
intractable problem.
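The divide between polynomial and non-polynomial growth can be seen numerically. The following sketch (our own illustration) compares a cubic step count with an exponential one: for small inputs the exponential algorithm may even be cheaper, but beyond a small threshold it dominates every polynomial:

```python
def steps_poly(n):   # e.g. an algorithm taking n**3 basic time units
    return n ** 3

def steps_expo(n):   # e.g. an exhaustive-search algorithm taking 2**n units
    return 2 ** n

# For small sizes the exponential count can be the smaller one ...
assert steps_expo(5) < steps_poly(5)       # 32 < 125
# ... but from n = 10 onward it overtakes the cubic count for good:
assert all(steps_expo(n) > steps_poly(n) for n in range(10, 60))
```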
We may note that the term "solution" in its general form need not be an algorithm. If by tossing a coin we get the correct answer to each instance of a problem, then the process of tossing the coin and getting answers constitutes a solution; but the process is not an algorithm. Similarly, we solve problems based on heuristics, i.e., good guesses which generally, but not necessarily always, lead to solutions. All such solutions are not algorithms, or algorithmic solutions. To be more explicit, by an algorithmic solution A of a problem L (considered as a language) from a problem domain Σ*, we mean that, among other conditions, the following are satisfied: A is a step-by-step method in which for each instance of the problem there is a definite sequence of execution steps (not involving any guess work), and A terminates for each x ∈ Σ*, irrespective of whether x ∈ L or x ∉ L.
Thus starting with two distinct classes of problems, viz., tractable problems and
intractable problems, we introduced two classes of problems called P and NP. Some
interesting relations known about these classes are:
(i)
(ii) P ⊆ NP
(The relation (ii) above simply follows from the fact that every Deterministic TM is a
special case of a Non-Deterministic TM).
However, it is not known whether P = NP or P ≠ NP. This forms the basis for the
subject matter of the rest of the chapter. As a first step, we introduce some notations
to facilitate the discussion of the concept of computational complexity.
3.1 OBJECTIVES
Complexity of
Algorithms
3.2.1
An important consequence of the above discussion is that if the time taken by one
machine in executing a solution of a problem is a polynomial (or exponential)
function in the size of the problem, then time taken by every machine is a polynomial
(or exponential) function respectively, in the size of the problem. Thus, functions
differing from each other by constant factors, when treated as time complexities
should not be treated as different, i.e., should be treated as complexity-wise
equivalent.
3.2.2 Asymptotic Considerations
Computers are generally used to solve problems involving complex solutions. The
complexity of solutions may be due either to the large number of computational steps
involved, to the large size of the input data, or to both. The plausibility of this claim
follows from the fact that, when required, computers are generally used not
to find the product of two 2 × 2 matrices but to find the product of two n × n matrices
for large n, running into hundreds or even thousands.
Similarly, computers, when required, are generally used not to find the roots of quadratic
equations but to find the roots of complex equations, including polynomial equations
of degree more than a hundred or sometimes even a thousand.
The above discussion leads to the conclusion that when considering time complexities
f1(n) and f2(n) of (computer) solutions of a problem of size n, we need to consider and
compare the behaviours of the two functions only for large values of n. If the relative
behaviours of two functions for smaller values conflict with the relative behaviours
for larger values, then we may ignore the conflicting behaviour for smaller values.
For example, if the earlier considered two functions
f1(n) = 1000 n²
and
f2(n) = 5n⁴
represent time complexities of two solutions of a problem of size n, then despite the
fact that
f1(n) ≥ f2(n) for n ≤ 14,
we would still prefer the solution having f1(n) as time complexity, because
f1(n) ≤ f2(n) for all n ≥ 15.
This explains the reason for the presence of the phrase "n ≥ k" in the definitions
of the various measures of complexities discussed below:
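The crossover described above can be checked numerically; a small sketch in plain Python with the two example complexity functions:

```python
# Numeric check of the discussion above: although f1(n) = 1000*n**2 starts
# out larger, f2(n) = 5*n**4 overtakes it permanently once n reaches 15.

def f1(n):
    return 1000 * n**2

def f2(n):
    return 5 * n**4

# f1(n) >= f2(n) holds exactly when n**2 <= 200, i.e., for n <= 14.
last_n_where_f1_wins = max(n for n in range(1, 100) if f1(n) >= f2(n))
print(last_n_where_f1_wins)  # -> 14
```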
3.2.3 Well Known Asymptotic Growth Rate Notations
In the following, we discuss some well-known growth rate notations. These notations
denote relations from functions to functions.
For example, if the functions
f, g: N → N
are given by
f(n) = n² − 5n
and
g(n) = n²
then
O(f(n)) = g(n), or O(n² − 5n) = n².
The five well-known asymptotic growth rate notations are:
(i) O (big-oh)
(ii) Ω (Ω(n²) is pronounced as "big-omega of n²" or sometimes just as "omega of n²")
(iii) Θ (big-theta)
(iv) o (little-oh)
(v) ω (little-omega)
Remark 3.2.3.1
In the discussion of any one of the five notations, generally two functions, say f and g,
are involved. The functions have their domains and codomains as N, the set of natural
numbers, i.e.,
f: N → N
g: N → N
These functions may also be considered as having domain and codomain as R.
Remark 3.2.3.2
The purpose of these asymptotic growth rate notations, and of the functions denoted by
them, is to facilitate recognition of the essential character of a complexity
function through some simpler function delivered by these notations. For example, the
complexity function f(n) = 5004 n³ + 83 n² + 19 n + 408 has essentially the same
behaviour as g(n) = n³ as the problem size n becomes larger and larger, but
g(n) = n³ is much more comprehensible than the function f(n). Let us discuss the
notations, starting with the notation O.
3.2.4 The Notation O
Provides an asymptotic upper bound for a given function. Let f(x) and g(x) be two
functions, each from the set of natural numbers or positive real numbers to positive
real numbers. Then f(x) is said to be O(g(x)) (pronounced as big-oh of g of x) if there
exist two positive integer/real number constants C and k such that
f(x) ≤ C g(x) for all x ≥ k        (A)
Example 3.2.4.1: Let f(x) = 2x³ + 3x² + 1. Show that
(i) f(x) = O(x³)
(ii) f(x) = O(x⁴)
(iii) x³ = O(f(x))
(iv) x⁴ ≠ O(f(x))
(v) f(x) ≠ O(x²)
Solutions
Part (i)
Consider
f(x) = 2x³ + 3x² + 1 ≤ 2x³ + 3x³ + 1·x³ = 6x³ for all x ≥ 1.
Thus, with C = 6 and k = 1, we have found the required constants. Hence f(x) is O(x³).
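A quick empirical check of these witness constants (a sketch only; a finite check illustrates the algebraic proof, it does not replace it):

```python
# Sanity check of the constants from Part (i): with C = 6 and k = 1,
# f(x) = 2x^3 + 3x^2 + 1 stays below C * x^3 over a large range of x.

def f(x):
    return 2 * x**3 + 3 * x**2 + 1

C, k = 6, 1
assert all(f(x) <= C * x**3 for x in range(k, 10_000))
print("f(x) <= 6*x**3 verified for 1 <= x < 10000")
```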
Part (ii)
As above, we can show that
f(x) ≤ 6x⁴ for all x ≥ 1.
However, we may also find C and k by computing some values of f(x) and x⁴:
f(1) = 2 + 3 + 1 = 6;            1⁴ = 1
f(2) = 2·2³ + 3·2² + 1 = 29;     2⁴ = 16
f(3) = 2·3³ + 3·3² + 1 = 82;     3⁴ = 81
For C = 2 and k = 3, we have
f(x) ≤ 2·x⁴ for all x ≥ k.
Hence, f(x) is O(x⁴).
Part (iii)
For C = 1 and k = 1, we get
x³ ≤ C(2x³ + 3x² + 1) for all x ≥ k.
Part (iv)
We prove the result by contradiction. Let there exist positive constants C and k
such that
x⁴ ≤ C(2x³ + 3x² + 1) for all x ≥ k.
Then, for x ≥ max(k, 1),
x⁴ ≤ C(2x³ + 3x³ + x³) = 6Cx³,
implying x ≤ 6C for all x ≥ max(k, 1). But x can be taken arbitrarily large,
a contradiction. Hence x⁴ ≠ O(f(x)).
Remark 3.2.4.2
It can easily be seen that, for given functions f(x) and g(x), if there exists one pair of C
and k with f(x) ≤ C·g(x) for all x ≥ k, then there exist infinitely many pairs (Ci, ki)
which satisfy
f(x) ≤ Ci g(x) for all x ≥ ki,
because for any Ci ≥ C and any ki ≥ k the above inequality is true, if f(x) ≤ C·g(x) for
all x ≥ k.
3.2.5 The Notation Ω
Provides an asymptotic lower bound for a given function. Let f(x) and g(x) be two
functions, each from the set of natural numbers or positive real numbers to positive
real numbers. Then f(x) is said to be Ω(g(x)) (pronounced as big-omega of g of x) if
there exist positive constants C and k such that
f(x) ≥ C g(x) whenever x ≥ k.
Example: Let f(x) = 2x³ + 3x² + 1 and h(x) = 2x³ − 3x² + 2. Show that
(i) f(x) = Ω(x³)
(ii) h(x) = Ω(x³)
(iii) h(x) = Ω(x²)
(iv) x³ = Ω(h(x))
(v) x² ≠ Ω(h(x))
Solutions:
Part (i)
For C = 1, we have
f(x) ≥ C x³ for all x ≥ 1.
Part (ii)
h(x) = 2x³ − 3x² + 2
Let C and k > 0 be such that
2x³ − 3x² + 2 ≥ C x³ for all x ≥ k,
i.e., (2 − C)x³ − 3x² + 2 ≥ 0 for all x ≥ k.
Then C = 1 and k ≥ 3 satisfy the last inequality.
Part (iii)
To show 2x³ − 3x² + 2 = Ω(x²), there must exist positive numbers C and k
such that
2x³ − 3x² + 2 ≥ C x² for all x ≥ k,
i.e., 2x³ − (3 + C)x² + 2 ≥ 0 for all x ≥ k.
It can easily be seen that the smaller the value of C, the better the chances of the above
inequality being true. So, to begin with, let us take C = 1 and try to find a value of k
such that
2x³ − 4x² + 2 ≥ 0.
For x ≥ 2, the above inequality holds.
∴ k = 2 is such that
2x³ − 4x² + 2 ≥ 0 for all x ≥ k.
Part (iv)
Let the equality
x³ = Ω(2x³ − 3x² + 2)
be true. Then there must exist C > 0 and k > 0 such that
x³ ≥ C·2(x³ − (3/2)x² + 1) for all x ≥ k.
For C = 1/2 and k = 1, the above inequality is true.
Part (v)
We prove the result by contradiction.
Let x² = Ω(2x³ − 3x² + 2). Then there exist positive constants C and k such that
x² ≥ C(2x³ − 3x² + 2) for all x ≥ k.
Then (1 + 3C)x² ≥ 2Cx³ + 2C ≥ 2Cx³ for all x ≥ k, so that
x ≤ (1 + 3C)/(2C) for all x ≥ k,
which is impossible, as x can be arbitrarily large. Hence x² ≠ Ω(h(x)).
3.2.6 The Notation Θ
Provides simultaneously both an asymptotic lower bound and an asymptotic upper bound
for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Then f(x) is said to be Θ(g(x)) (pronounced as
big-theta of g of x) if there exist positive constants C1, C2 and k such that
C2 g(x) ≤ f(x) ≤ C1 g(x) for all x ≥ k.
(Note that the last inequalities represent two conditions to be satisfied simultaneously, viz.,
C2 g(x) ≤ f(x) and f(x) ≤ C1 g(x).)
We state the following theorem, without proof, which relates the three notations
O, Ω, Θ.
Theorem: For any two functions f(x) and g(x), f(x) = Θ(g(x)) if and only if
f(x) = O(g(x)) and f(x) = Ω(g(x)).
Examples 3.2.6.1: Let f(x) = 2x³ + 3x² + 1. Show that
(i) f(x) = Θ(x³)
(ii) f(x) ≠ Θ(x²)
(iii) f(x) ≠ Θ(x⁴)
Solutions
Part (i)
For C1 = 3, C2 = 1 and k = 4,
C2 x³ ≤ f(x) ≤ C1 x³ for all x ≥ k.
Part (ii)
We can show by contradiction that no C1 exists.
Let, if possible, for some positive integers k and C1, we have 2x³ + 3x² + 1 ≤ C1·x² for all
x ≥ k.
Then
x³ ≤ C1 x² for all x ≥ k,
i.e.,
x ≤ C1 for all x ≥ k.
But for
x = max{C1 + 1, k},
the last inequality is not true.
Part (iii)
f(x) ≠ Θ(x⁴)
We can show by contradiction that there does not exist C2
such that
C2 x⁴ ≤ (2x³ + 3x² + 1).
If such a C2 exists for some k, then C2 x⁴ ≤ 2x³ + 3x² + 1 ≤ 6x³ for all x ≥ k ≥ 1,
implying
C2 x ≤ 6 for all x ≥ k.
But for x = max{6/C2 + 1, k}, the last inequality does not hold.
3.2.7 The Notation o
The asymptotic upper bound provided by big-oh notation may or may not be
tight, in the sense that if f(x) = 2x³ + 3x² + 1,
then for f(x) = O(x³), though there exist C and k such that
f(x) ≤ C x³ for all x ≥ k,
yet there may also be some values for which the equality
f(x) = C x³
also holds for some x ≥ k.
However, if we consider
f(x) = O(x⁴),
then there cannot exist a positive integer C such that
f(x) = C x⁴ for all x ≥ k.
The case of f(x) = O(x⁴) provides an example for the next notation, of small-oh.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers.
Further, let C > 0 be any number. Then f(x) = o(g(x)) (pronounced as little-oh of
g of x) if there exists a natural number k satisfying
f(x) < C g(x) for all x ≥ k ≥ 1        (B)
Here we may note the following points:
(i) In the case of little-oh, the constant C does not depend on the two functions f(x)
and g(x); rather, we can arbitrarily choose C > 0.
(ii) The inequality (B) is strict, whereas the inequality (A) of big-oh is not
necessarily strict.
Example 3.2.7.1: Let f(x) = 2x³ + 3x² + 1. Show that
(i) f(x) = o(xⁿ) for any n ≥ 4
(ii) f(x) ≠ o(xⁿ) for n ≤ 3
Solution
Let C > 0 be given and to find out k satisfying the requirement of little-oh.
Consider
2x3 + 3x2 + 1 < C xn
= 2+
3 1
< C x n-3
x x3
Case when n = 4
Then above inequality becomes
2+
3 1
<C x
x x3
7
C
if we take k = max ,1
then
2x3 + 3x2 + 1 < C x4
for x t k.
for n t 4
for all x t k
7
c
with k = max ,1
Part (ii)
We prove the result by contradiction. Let, if possible, f(x) = o(xⁿ) for some n ≤ 3.
Then, for every C > 0, there exists a positive constant k such that
2x³ + 3x² + 1 < C xⁿ for all x ≥ k.
Dividing by x³ throughout, we get
2 + 3/x + 1/x³ < C x^(n−3) for n ≤ 3 and x ≥ k.
As C is arbitrary, we take C = 1. Then, since x^(n−3) ≤ 1 for n ≤ 3 and x ≥ 1, the
above inequality reduces to
2 + 3/x + 1/x³ < 1 for n ≤ 3 and x ≥ k ≥ 1.
However, the last inequality is not true, since its left-hand side is greater than 2.
This completes the proof by contradiction.
Generalizing the above example, we get
Example 3.2.7.2: If f(x) is a polynomial of degree m and g(x) is a polynomial of
degree n, then
f(x) = o(g(x)) if and only if n > m.
More generally, we state (without proof) below a result which can be useful in finding
a small-oh upper bound for a given function.
Theorem 3.2.7.3: Let f(x) and g(x) be functions as in the definition of the small-oh
notation. Then f(x) = o(g(x)) if and only if
lim (x → ∞) f(x)/g(x) = 0.
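The limit criterion of Theorem 3.2.7.3 can be illustrated numerically; a small sketch using the running example f(x) = 2x³ + 3x² + 1:

```python
# Illustration of the limit criterion: f(x)/x**4 -> 0, so f(x) = o(x**4),
# while f(x)/x**3 -> 2 (not 0), so f(x) is O(x**3) but not o(x**3).

def f(x):
    return 2 * x**3 + 3 * x**2 + 1

for x in (10, 1_000, 100_000):
    print(round(f(x) / x**4, 6), round(f(x) / x**3, 6))
# the first ratio shrinks toward 0; the second approaches 2
```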
Next, we introduce the last asymptotic notation, namely, small-omega. The relation of
small-omega to big-omega is similar to the relation of small-oh to big-oh.
3.2.8 The Notation ω
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Further, let C > 0 be any number. Then
f(x) = ω(g(x))
if there exists a positive integer k such that
f(x) > C g(x) for all x ≥ k.
Example 3.2.8.1: If f(x) = 2x³ + 3x² + 1,
then
f(x) = ω(x)
and also
f(x) = ω(x²).
Solution:
Let C be any positive constant.
Consider
2x³ + 3x² + 1 > C x.
To find out a k ≥ 1 satisfying the conditions of the bound ω, divide throughout by x:
2x² + 3x + 1/x > C.
Now, for x ≥ k,
2x² + 3x + 1/x ≥ 2x² + 3x ≥ 2k² + 3k ≥ 2C² + 3C > C    (taking k ≥ C + 1).
∴ f(x) = ω(x).
Again, dividing throughout by x²:
2x + 3 + 1/x² > C.
For x ≥ k with k ≥ C + 1, we have 2x + 3 + 1/x² ≥ 2k + 3 > C.
Hence
f(x) = ω(x²).
In general, we have the following two theorems (stated without proof).
Theorem 3.2.8.2: If f(x) is a polynomial of degree m, and g(x) is a polynomial of
degree n, then
f(x) = ω(g(x)) if and only if m > n.
More generally,
Theorem 3.2.8.3: Let f(x) and g(x) be functions as in the definition of little-omega.
Then f(x) = ω(g(x)) if and only if
lim (x → ∞) f(x)/g(x) = ∞,
or, equivalently,
lim (x → ∞) g(x)/f(x) = 0.
Ex.1) Show that n! = O(nⁿ).
Ex.2) Show that n² + 3 log n = O(n²).
Ex.3) Show that 2ⁿ = O(5ⁿ).
Defining a problem requires enumeration of, among other things:
• Possible inputs
• Possible outcomes
• Entities occurring, and operations on these entities, in the (dynamic)
problem domains.
In this sense of the definition of a problem, most problems cannot even be defined,
let alone solved. Think of the following problems:
a)
b)
c)
These are some of the problems, the definition of each of which requires the enumeration of
potentially infinite parameters, and which hence are almost impossible to define.
(II)
(III)
(IV)
(V)
The last, but probably the most interesting, class includes a large number of problems,
for each of which it is not known whether it is in P or not in P.
These problems fall somewhere between class III and class IV given above.
However, for each of the problems in this class, it is known that it is in NP,
i.e., each can be solved by at least one Non-Deterministic Turing Machine,
the time complexity of which is a polynomial function of the size of the
problem.
A problem from the class NP can, equivalently but more intuitively, be
defined as one for which a potential solution, if given, can be verified in
polynomial time to determine whether the potential solution is actually a solution or not.
The problems in this class are called NP-Complete problems (to be formally defined
later). More explicitly, a problem is NP-complete if it is in NP and no
polynomial-time Deterministic TM solution for it is known so far.
The most interesting aspect of NP-complete problems is that, for each of these problems,
so far it has neither been possible to design a deterministic polynomial-time TM
solving the problem, nor has it been possible to show that no deterministic
polynomial-time TM solution can exist.
The idea of NP-completeness was introduced by Stephen Cook* in 1971, and the
satisfiability problem defined below is the first problem that was proved to be
NP-complete, of course, by S. Cook.
Next, we enumerate some of the NP-complete problems, without justifying why
these problems have been placed in the class. Justification for some of these
problems will be provided in later sections.
A good source for the study of NP-complete problems and of related topics is Garey &
Johnson.⁺
Problem 1: Satisfiability problem (or, for short, SAT) states: Given a Boolean
expression, is it satisfiable?
Explanation: A Boolean expression involves
(i) Boolean variables x1, x2, …, xi, …, each of which can assume a value either
TRUE (generally denoted by 1) or FALSE (generally denoted by 0), and
(ii) Boolean operations such as ∧ (AND), ∨ (OR) and ¬ (NOT).
* Cook, S.A.: The Complexity of Theorem-Proving Procedures, Proceedings of the Third Annual ACM
Symposium on the Theory of Computing, New York: Association for Computing Machinery, 1971,
pp. 151-158.
⁺ Garey, M.R. and Johnson, D.S.: Computers and Intractability: A Guide to the Theory of
NP-Completeness, W. H. Freeman, New York, 1979.
For example,
((x1 ∧ x2) ∨ ¬x3) is a (legal) Boolean expression.
Next, we explain other concepts involved in SAT.
Truth Assignment: For each of the variables involved in a given Boolean
expression, associating a value of either 0 or 1, gives a truth assignment, which in turn
gives a truth-value to the Boolean expression.
For example: Let x1 = 0, x2 = 1 and x3 = 1 be one of the eight possible truth assignments to
a Boolean expression involving x1, x2 and x3.
Truth-value of a Boolean expression:
The truth value of ((x1 ∧ x2) ∨ ¬x3) for the truth assignment x1 = 0, x2 = 1 and x3 = 1 is
((0 ∧ 1) ∨ ¬1) = (0 ∨ 0) = 0.
Satisfiable Boolean expression: A Boolean expression is said to be satisfiable if at
least one truth assignment makes the Boolean expression True.
For example:
x1 = 1, x2 = 0 and x3 = 0 is one assignment that makes the Boolean
expression ((x1 ∧ x2) ∨ ¬x3) True. Therefore, ((x1 ∧ x2) ∨ ¬x3) is
satisfiable.
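The verification-versus-search asymmetry behind SAT can be sketched in a few lines of Python; the encoding of the expression as a lambda, and the particular expression ((x1 ∧ x2) ∨ ¬x3), are illustrative assumptions of this sketch:

```python
from itertools import product

# Brute-force satisfiability: try all 2**n truth assignments. Verifying one
# given assignment is fast, but the search space doubles with every variable.

def find_satisfying_assignment(expr, n):
    for assignment in product((0, 1), repeat=n):
        if expr(assignment):
            return assignment          # a certificate, checkable in O(n)
    return None                        # unsatisfiable

# Illustrative encoding of the expression ((x1 AND x2) OR NOT x3):
expr = lambda v: (v[0] and v[1]) or (not v[2])
print(find_satisfying_assignment(expr, 3))  # -> (0, 0, 0), one satisfying assignment
```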
Problem 5:
Given a set of cities C = {C1, C2, …, Cn} with n > 1, and a function d which assigns to
each pair of cities (Ci, Cj) some cost of travelling from Ci to Cj. Further, a positive
integer/real number B is given. The problem is to find a route (covering each city
exactly once) with cost at most B.
Problem 6:
Hamiltonian circuit problem (HCP): given a graph G, does G contain a Hamiltonian
circuit, i.e., a closed path that visits each vertex exactly once?
Further, it may be noted that HCP is a special case of TSP, in which the cost between
pairs of nodes is the same, say 1.
Example:
Then the above graph has one Hamiltonian circuit viz., (1, 2, 4, 3, 1)
Problem 7:
Vertex cover problem: given a graph G and a positive integer k, does G have a
vertex cover of size at most k?
Explanation: A vertex cover for a graph G is a set C of vertices such that each edge of
G has an endpoint in C. For example, for the graph shown above,
{1, 2, 3} is a vertex cover. It can easily be seen that every superset of
a vertex cover of a graph is also a vertex cover of the graph.
Problem 8:
k-colourability problem: given a graph G and a positive integer k, can the vertices
of G be coloured with at most k colours so that no two adjacent vertices get the
same colour?
As the vertices 1, 2 and 3 (of the graph above) are mutually adjacent, we require at
least three colours for that graph.
Problem 9:
Explanation: For a given graph G = (V, E), two vertices v1 and v2 are said to be
adjacent if there is an edge connecting the two vertices in the graph.
A subgraph H = (V1, E1) of a graph G = (V, E) is a graph such that V1 ⊆ V and E1 ⊆ E.
Explanation: Two graphs H1 = (V1, E1) and H2 = (V2, E2) are said to be isomorphic
if we can rename the vertices in V2 in such a manner that, after renaming, the graphs H1
and H2 look identical (not necessarily pictorially, but as ordered pairs of sets).
Explanation: For a given graph G = (V,E ), a subset E1 of the set of edges E of the
graph, is said to be an edge cover of G, if every vertex is an end of at least one of the
edges in E1.
For example, for the graph shown above (with vertices 1, 2, 3, 4),
The two-edge set {(1, 4), (2, 3)} is an edge cover for the graph.
Problem 13:
Exact cover problem: For a given set P = {S1, S2, …, Sm}, where
each Si is a subset of a given set S, is there a subset Q of P such
that for each x in S, there is exactly one Si in Q for which x is in
Si?
(Figure: a polynomial-time reduction from problem P1 to problem P2.)
Though we have already explained the concept of NP-completeness, yet for the sake
of completeness, we give below the formal definition of NP-completeness.
Definition: NP-Complete Problem: A problem P, or equivalently its language L1,
is said to be NP-complete if the following two conditions are satisfied:
(i) L1 is in NP, and
(ii) for any problem L in NP, there is a polynomial-time reduction of L to L1.
In this context, we introduce below another closely related and useful concept.
Definition: NP-Hard Problem: A problem L is said to be NP-hard if, for any
problem L1 in NP, there is a polynomial-time reduction of L1 to L.
In other words, a problem L is NP-hard if only condition (ii) of NP-completeness is
satisfied. But the problem may be so hard that establishing L as an NP-class
problem has so far not been possible.
However, from the above definitions, it is clear that every NP-complete problem L
must be NP-hard and, additionally, must satisfy the condition that L is an NP-class
problem.
In the next section, we discuss the NP-completeness of some of the problems discussed
in the previous section.
Assuming the satisfiability problem to be NP-complete, the rest of the problems that we
establish as NP-complete are established by the reduction method explained above.
A diagrammatic notation of the form P → Q denotes that the problem P is
polynomial-time reducible to the problem Q.
(Figure 3.1: the chain of reductions used in this section)
SAT → 3-CNF-SAT → Clique Problem → Vertex Cover → Hamiltonian Cycle → Travelling Salesman,
with 3-CNF-SAT also reducing to Subset-Sum.
(Figure 3.2: the graph G built from the 3-CNF formula in the example below; each
clause contributes one node per literal, with labels x1, x2, x3, ¬x1, ¬x2, ¬x3.)
For each of the literals, create a graph node, and connect each node to every node in
the other clauses, except those with the same variable but different sign. This graph
can be computed from a Boolean formula in 3-CNF in polynomial time.
Consider an example: if we have
φ = (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3),
then G is the graph shown in Figure 3.2 above.
In the given example, a satisfying assignment of φ is (x1 = 0, x2 = 0, x3 = 1). A
corresponding clique of size k = 3 consists of the vertices corresponding to ¬x2 from
the first clause, x3 from the second clause, and x3 from the third clause.
The problem of finding an n-element clique is equivalent to finding a set of literals
satisfying the formula. Because there are no edges between literals of the same clause,
such a clique must contain exactly one literal from each clause. And because there are no
edges between literals of the same variable but different sign, if a node of literal x is in
the clique, no node of the literal ¬x is.
This proves that finding an n-element clique in a 3n-element graph is NP-complete.
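The construction just described can be sketched in a few lines; the integer encoding of literals (+i for xi, −i for ¬xi) is an illustrative choice of this sketch, not the text's notation:

```python
from itertools import combinations

# Sketch of the 3-CNF -> clique construction: one node per literal occurrence;
# connect nodes in different clauses unless they are contradictory (x and NOT x).

def cnf_to_clique_graph(clauses):
    nodes = [(ci, lit) for ci, clause in enumerate(clauses) for lit in clause]
    edges = [
        (u, v)
        for u, v in combinations(nodes, 2)
        if u[0] != v[0] and u[1] != -v[1]   # different clause, not contradictory
    ]
    return nodes, edges

# phi = (x1 v -x2 v -x3) ^ (-x1 v x2 v x3) ^ (x1 v x2 v x3)
phi = [[1, -2, -3], [-1, 2, 3], [1, 2, 3]]
nodes, edges = cnf_to_clique_graph(phi)
print(len(nodes), len(edges))  # -> 9 21: the graph is built in polynomial time
```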
Example 5: Show that the vertex cover problem is NP-complete.
A vertex cover of an undirected graph G = (V, E) is a subset V′ of the vertices of the
graph which contains at least one of the two endpoints of each edge.
(Figures 3.3 and 3.4: an undirected graph G on the vertices A, B, C, D, E, F, and its
complement.)
The vertex cover problem is the optimization problem of finding a vertex cover of
minimum size in a graph. The problem can also be stated as a decision problem:
VERTEX-COVER = {<G, k> | graph G has a vertex cover of size k}.
A deterministic algorithm to find a vertex cover in a graph is to list all subsets of
vertices of size k and check each one to see whether it forms a vertex cover. This
algorithm is exponential in k.
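The brute-force decision procedure just described can be sketched as follows; the 4-cycle used to exercise it is our own tiny example:

```python
from itertools import combinations

# Brute-force vertex cover decision: enumerate every subset of k vertices and
# check whether it covers all edges. There are C(|V|, k) subsets, so this is
# exponential in k, exactly as stated in the text.

def has_vertex_cover(vertices, edges, k):
    return any(
        all(u in subset or v in subset for (u, v) in edges)
        for subset in (set(c) for c in combinations(vertices, k))
    )

# A 4-cycle 1-2-3-4-1 has a cover of size 2 (e.g. {1, 3}) but none of size 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover([1, 2, 3, 4], cycle, 2))  # -> True
print(has_vertex_cover([1, 2, 3, 4], cycle, 1))  # -> False
```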
Proof: To show that the vertex cover problem is in NP, for a given graph G = (V, E) we
take V′ ⊆ V and verify whether it forms a vertex cover. Verification can be done
by checking, for each edge (u, v) ∈ E, whether u ∈ V′ or v ∈ V′. This verification can
be done in polynomial time.
Now, we show that the clique problem can be transformed to the vertex cover problem in
polynomial time. This transformation is based on the notion of the complement of a
graph G. Given an undirected graph G = (V, E), we define the complement of G as
Ḡ = (V, Ē), where Ē = {(u, v) | (u, v) ∉ E}, i.e., Ḡ is the graph containing exactly
those edges that are not in G. The transformation takes a graph G and k of the clique
problem. It computes the complement Ḡ, which can be done in polynomial time.
To complete the proof, we show that this transformation is indeed a reduction: the
graph G has a clique of size k if and only if the graph Ḡ has a vertex cover of size
|V| − k.
Suppose that G has a clique V′ ⊆ V with |V′| = k. We claim that V − V′ is a vertex
cover in Ḡ. Let (u, v) be any edge in Ē. Then (u, v) ∉ E, which implies that at least
one of u or v does not belong to V′, since every pair of vertices in V′ is connected by
an edge of E. Equivalently, at least one of u or v is in V − V′, which means that edge
(u, v) is covered by V − V′. Since (u, v) was chosen arbitrarily from Ē, every edge of
Ē is covered by a vertex in V − V′. Hence, the set V − V′, which has size |V| − k,
forms a vertex cover for Ḡ.
Conversely, suppose that Ḡ has a vertex cover V′ ⊆ V, where |V′| = |V| − k. Then,
for all u, v ∈ V, if (u, v) ∈ Ē, then u ∈ V′ or v ∈ V′ or both. The contrapositive of
this implication is that, for all u, v ∈ V, if u ∉ V′ and v ∉ V′, then (u, v) ∈ E. In
other words, V − V′ is a clique, and it has size |V| − |V′| = k.
For example, the graph G = (V, E) has a clique {A, B, E}. The complement Ḡ of the
graph G has an independent set given by {C, D, F}.
This proves that finding the vertex cover is NP-complete.
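The equivalence proved above can be spot-checked on a tiny graph; the graph below is our own example, not the one from Figures 3.3 and 3.4:

```python
from itertools import combinations

# Spot-check of the reduction: V' is a clique of size k in G exactly when
# V - V' is a vertex cover of size |V| - k in the complement graph of G.

def complement_edges(vertices, edges):
    present = {frozenset(e) for e in edges}
    return [e for e in combinations(vertices, 2) if frozenset(e) not in present]

def is_vertex_cover(cover, edges):
    return all(u in cover or v in cover for (u, v) in edges)

V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]      # {1, 2, 3} is a clique of size 3
E_bar = complement_edges(V, E)            # the missing edges: (1, 4) and (2, 4)
print(is_vertex_cover({4}, E_bar))        # V - {1, 2, 3} = {4} covers E_bar -> True
```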
Ex.4) Show that the partition problem is in NP.
Ex.5) Show that the graph colouring problem is in NP.
Ex.6) Show that the independent set problem is NP-complete.
Ex.7) Show that the travelling salesman problem is NP-complete.
3.6 SUMMARY
In this unit, a number of concepts are defined.
P denotes the class of all problems, for each of which there is at least one known
polynomial time Deterministic TM solving it.
NP denotes the class of all problems, for each of which there is at least one known
polynomial time Non-Deterministic TM solution. However, this solution may not be
reducible to a polynomial time algorithm, i.e., to a polynomial time DTM.
Next, five Well Known Asymptotic Growth Rate Notations are defined.
The notation O provides asymptotic upper bound for a given function.
Let f(x) and g(x) be two functions each from the set of natural numbers or set of
positive real numbers to positive real numbers.
Then f(x) is said to be O(g(x)) (pronounced as big-oh of g of x) if there exist two
positive integer/real number constants C and k such that
f(x) ≤ C g(x) for all x ≥ k.
The Notation Ω
Provides an asymptotic lower bound for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Then f(x) is said to be Ω(g(x)) (pronounced as
big-omega of g of x) if there exist positive constants C and k such that
f(x) ≥ C g(x) whenever x ≥ k.
The Notation Θ
Provides simultaneously both an asymptotic lower bound and an asymptotic upper bound
for a given function.
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers. Then f(x) is said to be Θ(g(x)) (pronounced as
big-theta of g of x) if there exist positive constants C1, C2 and k such that
C2 g(x) ≤ f(x) ≤ C1 g(x) for all x ≥ k.
The Notation o
Let f(x) and g(x) be two functions, each from the set of natural numbers or positive
real numbers to positive real numbers.
Further, let C > 0 be any number. Then f(x) = o(g(x)) (pronounced as little-oh of g of
x) if there exists a natural number k satisfying
f(x) < C g(x) for all x ≥ k ≥ 1.
The Notation ω
Again, the asymptotic lower bound Ω may or may not be tight. However, the
asymptotic bound ω cannot be tight. The formal definition of ω is as follows:
Let f(x) and g(x) be two functions, each from the set of natural numbers or the set of
positive real numbers to the set of positive real numbers.
Further, let C > 0 be any number. Then
f(x) = ω(g(x))
if there exists a positive integer k such that
f(x) > C g(x) for all x ≥ k.
3.7 SOLUTIONS/ANSWERS
Ex.1)
n!/nⁿ = (n/n)((n−1)/n)((n−2)/n)((n−3)/n)⋯(2/n)(1/n)
 = 1·(1 − (1/n))(1 − (2/n))(1 − (3/n))⋯(2/n)(1/n)
Each factor on the right-hand side is less than or equal to 1 for all values of n.
Hence, the right-hand side expression is always at most one.
Therefore, n!/nⁿ ≤ 1,
or n! ≤ nⁿ.
Therefore, n! = O(nⁿ).
Ex.2)
For large values of n, 3 log n < n².
Therefore, 3 log n / n² < 1, and
(n² + 3 log n)/n² = 1 + 3 log n/n² < 2,
or n² + 3 log n = O(n²).
Ex.3)
We have 2ⁿ/5ⁿ < 1,
or 2ⁿ < 5ⁿ.
Therefore, 2ⁿ = O(5ⁿ).
Ex. 4)
Given a set of integers, we have to divide the set into two disjoint sets such
that their sum values are equal.
A deterministic algorithm to find two such disjoint sets is to list all possible
combinations of two subsets, such that one set contains k elements and the other
contains the remaining (n − k) elements, and then to check whether the sum of the
elements of one set is equal to the sum of the elements of the other set. Here, the
number of possible combinations is C(n, k). This algorithm is exponential in n.
To show that the partition problem is in NP: for a given set S, we take S1 ⊆ S and
S2 ⊆ S with S1 ∩ S2 = ∅, and verify whether the sum of all elements of set S1 is
equal to the sum of all elements of set S2. This verification can be done in
polynomial time.
Hence, the partition problem is in NP.
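The polynomial-time verification step can be sketched as follows; the sample set S is illustrative:

```python
# Certificate check for the partition problem: given a proposed split (S1, S2),
# confirming that it really partitions S and that both halves have equal sums
# takes only about O(n log n) time, while *finding* such a split may take
# exponential time.

def verify_partition(S, S1, S2):
    return sorted(S1 + S2) == sorted(S) and sum(S1) == sum(S2)

S = [3, 1, 1, 2, 2, 1]
print(verify_partition(S, [3, 1, 1], [2, 2, 1]))  # -> True (both sides sum to 5)
print(verify_partition(S, [3, 1], [1, 2, 2, 1]))  # -> False (sums 4 and 6)
```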
Ex. 5)
The graph colouring problem is to determine the minimum number of colours
needed to colour the vertices of a given graph G(V, E) such that no two adjacent
vertices have the same colour. A deterministic algorithm for this requires
exponential time.
If we cast the graph-colouring problem as a decision problem, i.e., can we
colour the graph G with k colours such that no two adjacent vertices have the same
colour?, then a proposed colouring can be verified in polynomial time.
Hence, the graph colouring problem is in NP.
Ex. 6)
An independent set is defined as a subset of the vertices of a graph such that no
two of its vertices are adjacent.
The independent set problem is the optimization problem of finding an
independent set of maximum size in a graph. The problem can also be stated
as a decision problem:
INDEPENDENT-SET = {<G, k> | G has an independent set of size at least k}.
A deterministic algorithm to find an independent set in a graph is to list all
subsets of vertices of size k and check each one to see whether it forms an
independent set. This algorithm is exponential in k.
Proof: To show that the independent set problem is in NP, for a given graph
G = (V, E), we take V′ ⊆ V and verify whether it forms an independent set.
Verification can be done by checking, for every u ∈ V′ and v ∈ V′, that (u, v) ∉ E.
This verification can be done in polynomial time.
Now, we show that the clique problem can be transformed to the independent set
problem in polynomial time. The transformation is similar to the clique-to-vertex-cover
transformation.
(Figures 3.5 and 3.6: the graphs illustrating the clique-to-independent-set
transformation.)
Ex.7)
Proof: To show that the travelling salesman problem is in NP, we show that verification of
the problem can be done in polynomial time. Given a constant M and a
closed circuit path of a weighted graph G = (V, E), does such a path exist in
graph G, and is the total weight of the path less than M? Verification can be
done by checking that each edge (u, v) of the path is in E and that the sum of the
weights of these edges is less than M. This verification can be done in polynomial time.
Now, we show that the Hamiltonian circuit problem can be transformed to the
travelling salesman problem in polynomial time; it can be shown that the Hamiltonian
circuit problem is a special case of the travelling salesman problem. Towards
this goal, given any graph G(V, E), we construct an instance of the |V|-city
travelling salesman problem by letting dij = 1 if (vi, vj) ∈ E, and 2 otherwise. We let
the cost of travel M equal |V|. It is immediate that there is a tour of length
M or less if and only if there exists a Hamiltonian circuit in G.
Hence, the travelling salesman problem is NP-complete.
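The reduction in this solution can be sketched directly; the exhaustive tour search below merely stands in for a TSP solver (and is itself exponential), the point being only the construction d(i, j) ∈ {1, 2} with M = |V|:

```python
from itertools import permutations

# Sketch of the HCP -> TSP reduction: build a TSP instance with distance 1 for
# edges of G and 2 for non-edges; then G has a Hamiltonian circuit iff the
# cheapest tour has cost at most M = |V|.

def hamiltonian_via_tsp(vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    d = lambda u, v: 1 if frozenset((u, v)) in edge_set else 2
    n = len(vertices)
    cheapest = min(
        sum(d(tour[i], tour[(i + 1) % n]) for i in range(n))
        for tour in permutations(vertices)
    )
    return cheapest <= n

# The example graph from the text has the Hamiltonian circuit (1, 2, 4, 3, 1).
print(hamiltonian_via_tsp([1, 2, 3, 4], [(1, 2), (2, 4), (4, 3), (3, 1)]))  # -> True
```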