Analysis
Contents
16 Applications of Hashing
20 Cyclomatic Complexity
23 In-Place Algorithm
Chapter 1
A Time Complexity Question
void fun()
{
int i, j;
for (i=1; i<=n; i++)
for (j=1; j<=log(i); j++)
printf("GeeksforGeeks");
}
Time Complexity of the above function can be written as Θ(log 1) + Θ(log 2) + Θ(log 3)
+ . . . . + Θ(log n) which is Θ (log n!)
Order of growth of ‘log n!’ and ‘n log n’ is same for large values of n, i.e., Θ (log n!) = Θ(n
log n). So time complexity of fun() is Θ(n log n).
The expression Θ(log n!) = Θ(n log n) can be easily derived from following Stirling’s
approximation (or Stirling’s formula).
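For reference, the approximation in asymptotic form is:
n! ≈ √(2πn) (n/e)^n, so log(n!) = n Log n - n Log e + O(Log n), which is Θ(n Log n).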
Sources:
http://en.wikipedia.org/wiki/Stirling%27s_approximation
Source
https://www.geeksforgeeks.org/a-time-complexity-question/
Chapter 2
Advanced master theorem for divide and conquer recurrences
The above algorithm divides the problem into a subproblems, each of size n/b, and solves them recursively. The extra work done for the problem is given by f(n), i.e., the time to create the subproblems and combine their results in the above procedure.
So, according to the master theorem, the runtime of the above algorithm can be expressed as:
T(n) = aT(n/b) + f(n)
Not all recurrence relations can be solved with the use of the master theorem, e.g., if T(n) is not monotone or f(n) is not a polynomial.
This theorem is an advanced version of the master theorem that can be used to determine the running time of divide and conquer algorithms if the recurrence is of the following form:
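A commonly used statement of this extended form (given here in its standard textbook formulation) is:
T(n) = aT(n/b) + Θ(n^k Log^p n), where a >= 1, b > 1, k >= 0 and p is a real number. Then:
1. If a > b^k, then T(n) = Θ(n^(Logb(a))).
2. If a = b^k, then
(a) if p > -1, T(n) = Θ(n^k Log^(p+1) n);
(b) if p = -1, T(n) = Θ(n^k LogLog n);
(c) if p < -1, T(n) = Θ(n^k).
3. If a < b^k, then
(a) if p >= 0, T(n) = Θ(n^k Log^p n);
(b) if p < 0, T(n) = O(n^k).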
Source
https://www.geeksforgeeks.org/advanced-master-theorem-for-divide-and-conquer-recurrences/
Chapter 3
Algorithm Practice Question for Beginners | Set 1
unsigned fun(unsigned n)
{
if (n == 0) return 1;
if (n == 1) return 2;
return fun(n-1) + fun(n-1);
}
Consider the following questions for above code ignoring compiler optimization.
a) What does the above code do?
b) What is the time complexity of above code?
c) Can the time complexity of above function be reduced?
What does fun(n) do?
In the above code, fun(n) is equal to 2*fun(n-1). So the above function returns 2^n. For example, for n = 3 it returns 8, and for n = 4 it returns 16.
What is the time complexity of fun(n)?
Time complexity of the above function is exponential. Let the Time complexity be T(n).
T(n) can be written as the following recurrence, where C is a machine dependent constant:
T(n) = 2T(n-1) + C
The above recurrence has the solution Θ(2^n). We can solve it by the recurrence tree method. The recurrence tree would be a binary tree with height n, and every level would be completely full except possibly the last level.
C
/ \
C C
/ \ / \
C C C C
/ \ / \ / \ / \
. . . . . . . .
. . . . . . . .
Height of Tree is Θ(n)
unsigned fun(unsigned n)
{
if (n == 0) return 1;
if (n == 1) return 2;
return 2*fun(n-1);
}
Time complexity of the above solution is Θ(n). Let the time complexity be T(n). T(n) can be written as the following recurrence. Here C is a machine dependent constant.
T(n) = T(n-1) + C
We can solve it by recurrence tree method. The recurrence tree would be a skewed binary
tree (every internal node has only one child) with height n.
C
/
C
/
C
/
.
.
Height of Tree is Θ(n)
The above function can be further optimized using divide and conquer technique to calculate
powers.
unsigned fun(unsigned n)
{
if (n == 0) return 1;
if (n == 1) return 2;
unsigned x = fun(n/2);
return (n%2)? 2*x*x: x*x;
}
Time complexity of the above solution is Θ(Logn). Let the Time complexity be T(n). T(n)
can be approximately written as following recurrence. Here C is a machine dependent
constant.
T(n) = T(n/2) + C
We can solve it by recurrence tree method. The recurrence tree would be a skewed binary
tree (every internal node has only one child) with height Log(n).
C
/
C
/
C
/
.
.
Height of Tree is Θ(Logn)
We can also directly compute fun(n) using the bitwise left shift operator '<<':
unsigned fun(unsigned n)
{
return 1 << n;
}
Source
https://www.geeksforgeeks.org/algorithm-practice-question-beginners-set-1/
This article is contributed by Kartik. Please write comments if you find anything incorrect,
or you want to share more information about the topic discussed above.
Chapter 4
Amortized analysis for increment in counter
int main()
{
char str[] = "10010111";
Output:
10011000
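The increment operation being analyzed can be sketched as follows (a minimal version; the function name and the handling of overflow are assumptions for illustration, not the article's exact listing):

#include <stdio.h>
#include <string.h>

// Add 1 to a binary number stored as a string of '0'/'1' characters
void increment(char *str)
{
    int i = strlen(str) - 1;
    // Flip trailing 1s to 0s, moving from the rightmost bit leftwards
    while (i >= 0 && str[i] == '1')
        str[i--] = '0';
    // Flip the first 0 found to 1 (overflow is ignored for simplicity)
    if (i >= 0)
        str[i] = '1';
}

int main()
{
    char str[] = "10010111";
    increment(str);
    printf("%s\n", str); // prints 10011000
    return 0;
}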
At first glance, the running cost of the program looks proportional to the number of bits, but in reality it is not proportional to the number of bits. Let's see how.
Let's assume that the increment operation is performed k times. In every increment, the rightmost bit is flipped, so the number of flips of the LSB is k. The second rightmost bit is flipped only once every 2 increments, the 3rd rightmost once every 4 increments, the 4th rightmost once every 8 increments, and so on. So the number of flips is k/2 for the 2nd rightmost bit, k/4 for the 3rd rightmost bit, k/8 for the 4th rightmost bit, and so on ...
The total cost will be the total number of flips, that is,
C(k) = k + k/2 + k/4 + k/8 + k/16 + ......, which is a geometric series, and
C(k) < k + k/2 + k/4 + k/8 + k/16 + k/32 + ...... up to infinity
So, C(k) < k/(1 - 1/2)
and so, C(k) < 2k
So, C(k)/k < 2
Hence, we find that the average cost of incrementing the counter once is constant and does not depend on the number of bits. We conclude that an increment of the counter is a constant-cost operation in the amortized sense.
References :
1. http://www.cs.cornell.edu/courses/cs3110/2013sp/supplemental/recitations/rec21.
html
2. http://faculty.cs.tamu.edu/klappi/csce411-s17/csce411-amortized3.pdf
Source
https://www.geeksforgeeks.org/amortized-analysis-increment-counter/
Chapter 5
An interesting time complexity question
int fun(int n)
{
for (int i = 1; i <= n; i++)
{
for (int j = 1; j < n; j += i)
{
// Some O(1) task
}
}
}
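For a fixed i, the inner loop runs roughly n/i times, so the total work is the harmonic sum:
n/1 + n/2 + n/3 + ... + n/n = n (1 + 1/2 + 1/3 + ... + 1/n) = Θ(n Log n)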
As a side note, the infinite harmonic series is counterintuitive because it diverges: the value of 1 + 1/2 + 1/3 + 1/4 + ... is ∞. This is unlike a geometric series with ratio less than 1, which converges.
Reference:
http://en.wikipedia.org/wiki/Harmonic_series_%28mathematics%29#Rate_of_divergence
http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/chap03.htm
This article is contributed by Rahul. Please write comments if you find anything incorrect,
or you want to share more information about the topic discussed above
Source
https://www.geeksforgeeks.org/interesting-time-complexity-question/
Chapter 6
Analysis of Algorithm | Set 4 (Solving Recurrences)
We need to prove that T(n) <= cnLogn. We can assume that it is true
for values smaller than n.
T(n) = 2T(n/2) + n
    <= 2 (c(n/2)Log(n/2)) + n
     = cnLog(n/2) + n
     = cnLogn - cnLog2 + n
     = cnLogn - cn + n
    <= cnLogn
2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate
the time taken by every level of tree. Finally, we sum the work done at all levels. To
draw the recurrence tree, we start from the given recurrence and keep drawing till we find
a pattern among levels. The pattern is typically an arithmetic or geometric series.
For example, consider the recurrence T(n) = T(n/4) + T(n/2) + cn^2. The first level of the recurrence tree is
           cn^2
          /    \
     T(n/4)    T(n/2)
Expanding one more level gives
              cn^2
            /      \
     c(n^2)/16    c(n^2)/4
      /    \        /    \
 T(n/16) T(n/8)  T(n/8)  T(n/4)
Breaking down further gives us the following
              cn^2
            /      \
     c(n^2)/16    c(n^2)/4
      /    \        /    \
c(n^2)/256 c(n^2)/64 c(n^2)/64 c(n^2)/16
   / \      / \       / \      / \
Summing level by level, the work is cn^2 (1 + 5/16 + (5/16)^2 + ...), a geometric series with ratio 5/16 < 1, so the total is O(n^2) and T(n) = O(n^2).
3) Master Method:
Master Method is a direct way to get the solution. The master method works only for
following type of recurrences or for recurrences that can be transformed to following type.
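For reference, the standard form and cases (stated here in the simplified version with f(n) = Θ(n^c) that the examples below use) are:
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1. If f(n) = Θ(n^c), then
1. If c < Logb(a), then T(n) = Θ(n^(Logb(a)))   [work at leaves dominates]
2. If c = Logb(a), then T(n) = Θ(n^c Log n)   [work is the same at every level]
3. If c > Logb(a), then T(n) = Θ(f(n)) = Θ(n^c)   [work at root dominates]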
In recurrence tree method, we calculate total work done. If the work done at leaves is
polynomially more, then leaves are the dominant part, and our result becomes the work
done at leaves (Case 1). If work done at leaves and root is asymptotically same, then our
result becomes height multiplied by work done at any level (Case 2). If work done at root
is asymptotically more, then our result becomes work done at root (Case 3).
Examples of some standard algorithms whose time complexity can be evaluated
using Master Method
Merge Sort: T(n) = 2T(n/2) + Θ(n). It falls in case 2 as c is 1 and Logb(a) is also 1. So the
solution is Θ(n Logn).
Binary Search: T(n) = T(n/2) + Θ(1). It also falls in case 2 as c is 0 and Logb(a) is also 0.
So the solution is Θ(Logn).
Notes:
1) It is not necessary that a recurrence of the form T(n) = aT(n/b) + f(n) can be solved
using Master Theorem. The given three cases have some gaps between them. For example,
the recurrence T(n) = 2T(n/2) + n/Logn cannot be solved using master method.
2) Case 2 can be extended for f(n) = Θ(n^c Log^k n):
If f(n) = Θ(n^c Log^k n) for some constant k >= 0 and c = Logb(a), then T(n) = Θ(n^c Log^(k+1) n)
Practice Problems and Solutions on Master Theorem.
Next – Analysis of Algorithm | Set 5 (Amortized Analysis Introduction)
References:
http://en.wikipedia.org/wiki/Master_theorem
MIT Video Lecture on Asymptotic Notation | Recurrences | Substitution, Master Method
Introduction to Algorithms 3rd Edition by Clifford Stein, Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest
Source
https://www.geeksforgeeks.org/analysis-algorithm-set-4-master-method-solving-recurrences/
Chapter 7
Analysis of Algorithm | Set 5 (Amortized Analysis Introduction)
The solution to this trade-off problem is to use Dynamic Table (or Arrays). The idea is to
increase size of table whenever it becomes full. Following are the steps to follow when table
becomes full.
1) Allocate memory for a larger table, typically of twice the size of the old table.
2) Copy the contents of old table to new table.
3) Free the old table.
If the table has space available, we simply insert new item in available space.
What is the time complexity of n insertions using the above scheme?
If we use simple analysis, the worst case cost of an insertion is O(n). Therefore, the worst case
cost of n inserts is n * O(n), which is O(n^2). This analysis gives an upper bound, but not a
tight upper bound, for n insertions, as all insertions don't take Θ(n) time.
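A quick aggregate count makes the tighter bound visible (a sketch, assuming the table starts at size 1 and doubles whenever it is full): the i-th insertion costs 1, plus an extra copying cost of i-1 whenever i-1 is a power of 2. So, for n insertions,
Total cost <= n + (1 + 2 + 4 + ... + n) <= n + 2n = 3n
and the average (amortized) cost per insertion is at most 3n/n = 3, i.e., O(1).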
So using Amortized Analysis, we could prove that the Dynamic Table scheme has O(1) amortized insertion time, which is a great result used in hashing. Also, the concept of the dynamic table is used in vectors in C++ and ArrayList in Java.
Following are few important notes.
1) Amortized cost of a sequence of operations can be seen as expenses of a salaried person.
The average monthly expense of the person is less than or equal to the salary, but the person
can spend more money in a particular month by buying a car or something. In other months,
he or she saves money for the expensive month.
2) The above Amortized Analysis done for Dynamic Array example is called Aggregate
Method. There are two more powerful ways to do Amortized analysis called Accounting
Method and Potential Method. We will be discussing the other two methods in separate
posts.
3) The amortized analysis doesn’t involve probability. There is also another different notion
of average case running time where algorithms use randomization to make them faster and
expected running time is faster than the worst case running time. These algorithms are
analyzed using Randomized Analysis. Examples of these algorithms are Randomized Quick
Sort, Quick Select and Hashing. We will soon be covering Randomized analysis in a different
post.
Sources:
Berkeley Lecture 35: Amortized Analysis
MIT Lecture 13: Amortized Algorithms, Table Doubling, Potential Method
http://www.cs.cornell.edu/courses/cs3110/2011sp/lectures/lec20-amortized/amortized.
htm
Source
https://www.geeksforgeeks.org/analysis-algorithm-set-5-amortized-analysis-introduction/
Chapter 8
Analysis of Algorithms | Big-O analysis
f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such
that f(n) <= c*g(n) for all n >= n0
The general step wise procedure for Big-O runtime analysis is as follows:
• Constant Multiplication:
If f(n) = c*g(n), then O(f(n)) = O(g(n)); where c is a nonzero constant.
• Polynomial Function:
If f(n) = a0 + a1*n + a2*n^2 + ... + am*n^m, then O(f(n)) = O(n^m).
• Summation Function:
If f(n) = f1(n) + f2(n) + ... + fm(n) and fi(n) <= fi+1(n) for all i = 1, 2, ..., m,
then O(f(n)) = O(max(f1(n), f2(n), ..., fm(n))).
• Logarithmic Function:
If f(n) = loga(n) and g(n) = logb(n), then O(f(n)) = O(g(n));
all log functions grow in the same manner in terms of Big-O.
Basically, this asymptotic notation is used to measure and compare the worst-case scenarios
of algorithms theoretically. For any algorithm, the Big-O analysis should be straightforward
as long as we correctly identify the operations that are dependent on n, the input size.
Runtime Analysis of Algorithms
In general, we measure and compare the worst-case theoretical running time complexities of algorithms for performance analysis.
The fastest possible running time for any algorithm is O(1), commonly referred to as Constant Running Time. In this case, the algorithm always takes the same amount of time to execute, regardless of the input size. This is the ideal runtime for an algorithm, but it's rarely achievable.
In actual cases, the performance (runtime) of an algorithm depends on n, that is, the size of the input or the number of operations required for each input item.
The algorithms can be classified as follows from the best-to-worst performance (Running
Time Complexity):
If n = 10,            If n = 20,
log(10) = 3.32;       log(20) = 4.32;
10 = 10;              20 = 20;
10 log(10) = 33.2;    20 log(20) = 86.4;
10^2 = 100;           20^2 = 400;
2^10 = 1024;          2^20 = 1048576;
10! = 3628800;        20! = 2.432902e+18;
• Firstly, the implementation of the program is responsible for memory usage. For example, we can assume that a recursive implementation generally reserves more memory than the corresponding iterative implementation of a particular problem.
• The other factor is n, the input size, or the amount of storage required for each item. For example, a simple algorithm with a large input size can consume more memory than a complex algorithm with a small input size.
Algorithmic Examples of Memory Footprint Analysis: The algorithms, with examples, classified from best-to-worst performance (Space Complexity) based on the worst-case scenarios are mentioned below:
Source
https://www.geeksforgeeks.org/analysis-algorithms-big-o-analysis/
Chapter 9
Analysis of Algorithms | Set 1 (Asymptotic Analysis)
Given two algorithms for a task, how do we find out which one is better?
One naive way of doing this is – implement both the algorithms and run the two programs
on your computer for different inputs and see which one takes less time. There are many
problems with this approach for analysis of algorithms.
1) It might be possible that for some inputs, the first algorithm performs better than the second, and for some inputs the second performs better.
2) It might also be possible that for some inputs, the first algorithm performs better on one machine while the second works better on another machine for some other inputs.
Asymptotic Analysis is the big idea that handles above issues in analyzing algorithms. In
Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size
(we don't measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.
For example, let us consider the search problem (searching a given item) in a sorted array.
One way to search is Linear Search (order of growth is linear) and other way is Binary
Search (order of growth is logarithmic). To understand how Asymptotic Analysis solves the
above mentioned problems in analyzing algorithms, let us say we run the Linear Search on
a fast computer and Binary Search on a slow computer. For small values of input array
size n, the fast computer may take less time. But, after certain value of input array size,
the Binary Search will definitely start taking less time compared to the Linear Search even
though the Binary Search is being run on a slow machine. The reason is that the order of growth of Binary Search with respect to input size is logarithmic, while the order of growth of Linear Search is linear. So the machine dependent constants can always be ignored after certain values of input size.
Does Asymptotic Analysis always work?
Asymptotic Analysis is not perfect, but that’s the best way available for analyzing algorithms.
For example, say there are two sorting algorithms that take 1000nLogn and 2nLogn time
respectively on a machine. Both of these algorithms are asymptotically the same (their order of growth is nLogn). So, with Asymptotic Analysis, we can't judge which one is better as we ignore constants in Asymptotic Analysis.
Also, in Asymptotic analysis, we always talk about input sizes larger than a constant value.
It might be possible that those large inputs are never given to your software and an algorithm
which is asymptotically slower, always performs better for your particular situation. So, you
may end up choosing an algorithm that is Asymptotically slower but faster for your software.
Next – Analysis of Algorithms | Set 2 (Worst, Average and Best Cases)
References:
MIT’s Video lecture 1 on Introduction to Algorithms.
Improved By : Danail Kozhuharov
Source
https://www.geeksforgeeks.org/analysis-of-algorithms-set-1-asymptotic-analysis/
Chapter 10
Analysis of Algorithms | Set 2 (Worst, Average and Best Cases)
#include <stdio.h>

// Linearly search x in arr[]; if x is present, return its index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}

Summing the cost Θ(i) over all the possible positions of x (including the case where x is not present) and dividing by the number of cases gives the average case:
= Θ(n)
For some algorithms, all the cases are asymptotically the same, i.e., there are no worst and best cases. For example, Merge Sort does Θ(nLogn) operations in all cases. Most of the other sorting algorithms have worst and best cases. For example, in the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot elements always divide the array into two halves. For insertion sort, the worst case occurs when the array is reverse sorted and the best case occurs when the array is sorted in the same order as the output.
Next – Analysis of Algorithms | Set 3 (Asymptotic Notations)
References:
MIT’s Video lecture 1 on Introduction to Algorithms.
Source
https://www.geeksforgeeks.org/analysis-of-algorithms-set-2-asymptotic-analysis/
Chapter 11
Analysis of Algorithms | Set 3 (Asymptotic Notations)
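The definition being paraphrased below is the standard one (as in CLRS):
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }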
The above definition means, if f(n) is theta of g(n), then the value f(n) is always between
c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires
that f(n) must be non-negative for values of n greater than n0.
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort can be written as Ω(n), but that is not very useful information about insertion sort, as we are generally interested in the worst case and sometimes in the average case.
Exercise:
Which of the following statements is/are valid?
1. Time Complexity of QuickSort is Θ(n^2)
2. Time Complexity of QuickSort is O(n^2)
3. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n))
and f(n) = Ω(g(n)).
4. Time complexity of all computer algorithms can be written as Ω(1)
Important Links :
• There are two more notations called little o and little omega. Little o provides strict
upper bound (equality condition is removed from Big O) and little omega provides
strict lower bound (equality condition removed from big omega)
• Analysis of Algorithms | Set 4 (Analysis of Loops)
• Recent Articles on analysis of algorithm.
References:
Lec 1 | MIT (Introduction to Algorithms)
Introduction to Algorithms 3rd Edition by Clifford Stein, Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest
This article is contributed by Abhay Rathi. Please write comments if you find anything
incorrect, or you want to share more information about the topic discussed above.
Source
https://www.geeksforgeeks.org/analysis-of-algorithms-set-3asymptotic-notations/
Chapter 12
Analysis of Algorithms | Set 4 (Analysis of Loops)
// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions
}
2) O(n): Time Complexity of a loop is considered as O(n) if the loop variable is incremented / decremented by a constant amount. For example, loops of the following kind have O(n) time complexity.
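One such loop, as a sketch in the same style as the O(1) example above:

// Here c is a positive constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}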
3) O(n^c): Time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, nested loops like the following sample have O(n^2) time complexity.
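A sketch of such a pair of nested loops, each running up to n times:

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // some O(1) expressions
    }
}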
For example, Selection Sort and Insertion Sort have O(n^2) time complexity.
4) O(Logn): Time Complexity of a loop is considered as O(Logn) if the loop variable is divided / multiplied by a constant amount.
For example, Binary Search (refer to the iterative implementation) has O(Logn) time complexity.
Let us see mathematically how it is O(Logn). The series that we get in such a loop is 1, c, c^2, c^3, ..., c^k. If we put k equal to Logc(n), we get c^(Logc(n)), which is n, so the loop runs about Logc(n) times.
5) O(LogLogn): Time Complexity of a loop is considered as O(LogLogn) if the loop variable is reduced / increased exponentially by a constant amount.
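A sketch of such a loop, where the variable is squared every iteration (so it takes about LogLogn steps to exceed n):

// The loop variable grows as 2, 4, 16, 256, ...
for (long long i = 2; i <= n; i = i * i) {
    // some O(1) expressions
}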
When there are consecutive loops, we calculate time complexity as sum of time complexities
of individual loops.
How to calculate time complexity when there are many if, else statements inside
loops?
As discussed here, worst case time complexity is the most useful among best, average and
worst. Therefore we need to consider worst case. We evaluate the situation when values in
if-else conditions cause maximum number of statements to be executed.
For example consider the linear search function where we consider the case when element is
present at the end or not present at all.
When the code is too complex to consider all if-else cases, we can get an upper bound by
ignoring if else and other complex control statements.
How to calculate time complexity of recursive functions?
Time complexity of a recursive function can be written as a mathematical recurrence relation.
To calculate time complexity, we must know how to solve recurrences. We will soon be
discussing recurrence solving techniques as a separate post.
Quiz on Analysis of Algorithms
Next – Analysis of Algorithm | Set 4 (Solving Recurrences)
Source
https://www.geeksforgeeks.org/analysis-of-algorithms-set-4-analysis-of-loops/
Chapter 13
Analysis of Algorithms | Set 5 (Practice Problems)
T(n) = 3T(n-1), if n > 0
     = 1, otherwise
Solution: Expanding the recurrence, T(n) = 3T(n-1) = 3^2 T(n-2) = ... = 3^n T(0) = 3^n, so T(n) = Θ(3^n).
T(n) = 2T(n-1) - 1, if n > 0
     = 1, otherwise
Solution: Expanding, T(n) = 2T(n-1) - 1 = 2^2 T(n-2) - 2 - 1 = ... = 2^n T(0) - (2^(n-1) + ... + 2 + 1) = 2^n - (2^n - 1) = 1, so T(n) = O(1).
function(int n)
{
if (n==1)
return;
for (int i=1; i<=n; i++)
{
for (int j=1; j<=n; j++)
{
printf("*");
break;
}
}
}
function(int n)
{
if (n==1)
return;
for (int i=1; i<=n; i++)
{
// Inner loop executes only one
// time due to break statement.
for (int j=1; j<=n; j++)
{
printf("*");
break;
}
}
}
Time Complexity of the above function is O(n). Even though the inner loop is bounded by n, due to the break statement it executes only once.
void function(int n)
{
int count = 0;
for (int i=n/2; i<=n; i++)
for (int j=1; j<=n; j = 2 * j)
for (int k=1; k<=n; k = k * 2)
count++;
}
Time Complexity of the above function is O(n Log^2 n): the outer loop runs n/2 times, while each of the two inner loops runs about Logn times.
void function(int n)
{
int count = 0;
for (int i=n/2; i<=n; i++)
for (int j=1; j+n/2<=n; j++)
for (int k=1; k<=n; k = k * 2)
count++;
}
Time Complexity of the above function is O(n^2 Logn): the outer and middle loops each run about n/2 times, and the innermost loop runs about Logn times.
void function(int n)
{
int i = 1, s =1;
while (s <= n)
{
i++;
s += i;
printf("*");
}
}
Solution: We can define the terms 's' according to the relation s_i = s_(i-1) + i. The value of 'i' increases by one in each iteration. The value contained in 's' at the i-th iteration is the sum of the first 'i' positive integers. If k is the total number of iterations taken by the program, then the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
Time Complexity of the above function is O(√n).
void function(int n)
{
int count = 0;
for (int i=0; i<n; i++)
for (int j=i; j< i*i; j++)
if (j%i == 0)
{
for (int k=0; k<j; k++)
printf("*");
}
}
Source
https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
Chapter 14
Analysis of algorithms | little o and little omega notations
In mathematical relation,
f(n) = o(g(n)) means
lim f(n)/g(n) = 0
n→∞
Examples:
Is 7n + 8 ∈ o(n^2)?
In order for that to be true, for any c, we have to be able to find an n0 that makes
f(n) < c * g(n) asymptotically true.
Let's take some example values:
If c = 100, the inequality is clearly true. If c = 1/100, we'll have to use
a little more imagination, but we'll be able to find an n0. (Try n0 = 1000.) From
these examples, the conjecture appears to be correct.
Then check the limit:
lim f(n)/g(n) = lim (7n + 8)/(n^2) = lim 7/(2n) = 0 (by L'Hopital's rule)
n→∞ n→∞ n→∞
hence 7n + 8 ∈ o(n^2)
Little ω asymptotic notation
Definition : Let f(n) and g(n) be functions that map positive integers to positive real
numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there
exists an integer constant n0 >= 1 such that f(n) > c * g(n) >= 0 for every integer n >= n0.
f(n) has a higher growth rate than g(n), so the main difference between Big Omega (Ω) and
little omega (ω) lies in their definitions. In the case of Big Omega, f(n) = Ω(g(n)) and the
bound 0 <= c*g(n) <= f(n) holds for some constant c > 0, but in the case of little omega it is true for all constants c > 0.
In mathematical relation,
if f(n) ∈ ω(g(n)) then,
lim f(n)/g(n) = ∞
n→∞
Example:
Prove that 4n + 6 ∈ ω(1);
The little omega (ω) running time can be proven by applying the limit formula given below.
If lim f(n)/g(n) = ∞ then f(n) is ω(g(n))
n→∞
Here we have the functions f(n) = 4n+6 and g(n) = 1:
lim (4n+6)/(1) = ∞
n→∞
Also, for any c we can find an n0 satisfying the inequality 0 <= c*g(n) < f(n), i.e., 0 <= c*1 < 4n+6.
Hence proved.
References :
Introduction to Algorithms
44
Chapter 14. Analysis of algorithms | little o and little omega notations
Source
https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/
Chapter 15
Analysis of different sorting techniques
• Heap sort –
Best, average and worst case time complexity: nlogn which is independent of distribu-
tion of data.
• Quick sort –
It is a divide and conquer approach with recurrence relation:
Worst case: when the array is sorted or reverse sorted, the partition algorithm divides the array into two subarrays with 0 and n-1 elements. Therefore, T(n) = T(n-1) + cn, which solves to O(n^2).
Best case and Average case: On average, the partition algorithm divides the array into two subarrays of equal size. Therefore,
T(n) = 2T(n/2) + cn
Solving this we get, T(n) = O(nlogn)
• Radix sort –
Best, average and worst case time complexity: nk where k is the maximum number of
digits in elements of array.
• Count sort –
Best, average and worst case time complexity: n+k where k is the size of count array.
• Bucket sort –
Best and average time complexity: n+k where k is the number of buckets.
Worst case time complexity: n^2 if all elements belong to same bucket.
In-place/Outplace technique –
A sorting technique is inplace if it does not use any extra memory to sort the array.
Among the comparison based techniques discussed, only merge sort is outplace technique
as it requires extra array to merge the sorted subarrays.
Among the non-comparison based techniques discussed, all are outplace techniques. Count-
ing sort uses counting array and bucket sort uses hash table for sorting the array.
Stable/Unstable technique –
A sorting technique is stable if it does not change the order of element with same value.
Out of comparison based techniques, bubble sort, insertion sort and merge sort are stable
techniques. Selection sort is unstable as it may change the order of elements with same
value. For example, consider the array 4, 4, 1, 3.
In first iteration, minimum element found is 1 and it is swapped with 4 at 0th position.
Therefore, order of 4 with respect to 4 at 1st position will change. Similarly, quick sort and
heap sort are also unstable.
Out of non-comparison based techniques, Counting sort and Bucket sort are stable sorting
techniques whereas radix sort stability depends on underlying algorithm used for sorting.
Analysis of sorting techniques :
Que – 1. Which sorting algorithm will take least time when all elements of input array are
identical? Consider typical implementations of sorting algorithms.
(A) Insertion Sort
(B) Heap Sort
(C) Merge Sort
(D) Selection Sort
Solution: As discussed, insertion sort will have complexity of n when input array is already
sorted.
Que – 2. Consider the Quicksort algorithm. Suppose there is a procedure for finding a
pivot element which splits the list into two sub-lists each of which contains at least one-fifth
of the elements. Let T(n) be the number of comparisons required to sort n elements. Then,
(GATE-CS-2012)
(A) T(n) <= 2T(n/5) + n
(B) T(n) <= T(n/5) + T(4n/5) + n
(C) T(n) <= 2T(4n/5) + n
(D) T(n) <= 2T(n/2) + n
Solution: The complexity of quick sort can be written as:
As given in question, one list contains 1/5th of total elements. Therefore, another list will
have 4/5 of total elements. Putting values, we get:
T(n) = T(n/5) + T(4n/5) + cn, which matches option (B).
Source
https://www.geeksforgeeks.org/analysis-of-different-sorting-techniques/
Chapter 16
Applications of Hashing
• Message Digest
• Password Verification
• Data Structures(Programming Languages)
• Compiler Operation
• Rabin-Karp Algorithm
• Linking File name and path together
Message Digest:
This is an application of cryptographic Hash Functions. Cryptographic hash functions are
the functions which produce an output from which reaching the input is close to impossible.
This property of hash functions is called irreversibility.
Let's take an example:
Suppose you have to store your files on any of the cloud services available. You have to be sure that the files that you store are not tampered with by any third party. You do it by computing the "hash" of that file using a cryptographic hash algorithm. One of the common cryptographic hash algorithms is SHA-256. The hash thus computed has a maximum size of 32 bytes, so computing the hashes of a large number of files will not be a problem. You save these hashes on your local machine.
Now, when you download the files, you compute the hash again. Then you match it with
the previous hash computed. Therefore, you know whether your files were tampered or not.
If anybody tampers with the file, the hash value of the file will definitely change. Tampering with the file without changing the hash is nearly impossible.
Password Verification
Cryptographic hash functions are very commonly used in password verification. Let’s un-
derstand this using an Example:
When you use any online website which requires a user login, you enter your E-mail and
password to authenticate that the account you are trying to use belongs to you. When the
password is entered, a hash of the password is computed which is then sent to the server
for verification of the password. The passwords stored on the server are actually computed
hash values of the original passwords. This is done to ensure that when the password is sent
from client to server, no sniffing is there.
Data Structures(Programming Languages):
Various programming languages have hash table based Data Structures. The basic idea is
to create a key-value pair where key is supposed to be a unique value, whereas value can be
same for different keys. This implementation is seen in unordered_set & unordered_map
in C++, HashSet & HashMap in java, dict in python etc.
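A tiny C++ illustration of the key-value idea (the keys and values here are just assumed example data):

#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    // Hash-table based map: average O(1) insertion and lookup
    std::unordered_map<std::string, int> age;
    age["alice"] = 30; // key "alice" maps to value 30
    age["bob"] = 25;

    std::cout << age["alice"] << std::endl; // prints 30
    return 0;
}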
Compiler Operation:
The keywords of a programming language are processed differently than other identifiers.
To differentiate between the keywords of a programming language(if, else, for, return etc.)
and other identifiers and to successfully compile the program, the compiler stores all these
keywords in a set which is implemented using a hash table.
Rabin-Karp Algorithm:
One of the most famous applications of hashing is the Rabin-Karp algorithm. This is
basically a string-searching algorithm which uses hashing to find any one set of patterns in
a string. A practical application of this algorithm is detecting plagiarism. To know more
about Rabin-Karp algo go through Searching for Patterns | Set 3 (Rabin-Karp Algorithm).
Linking File name and path together:
When moving through files on our local system, we observe two very crucial components of
a file i.e. file_name and file_path. In order to store the correspondence between file_name
and file_path the system uses a map(file_name, file_path)which is implemented using a
hash table.
Related articles:
Hashing vs BST
Hashing vs Trie
Source
https://www.geeksforgeeks.org/applications-of-hashing/
Chapter 17
Asymptotic Analysis and comparison of sorting algorithms
Implementation:
int* generate_random_array(int n)
{
srand(time(NULL));
int *a = malloc(sizeof(int) * n), i;
for(i = 0; i < n; ++i)
a[i] = rand() % MAX_ELEMENT_IN_ARRAY;
return a;
}
if(size <= k)
{
//printf("Performed insertion sort- start = %d and end = %d\n", start, end);
return insertion_sort_asc(a,start,end);
}
int mid = start + (end - start) / 2;
insertion_and_merge_sort_combine(a,start,mid,k);
insertion_and_merge_sort_combine(a,mid + 1,end,k);
merge(a,start,end,mid);
}
}
int *a = generate_random_array(n);
int *b = copy_array(a,n);
start = clock();
insertion_sort_asc(b,0,n-1);
end = clock();
insertion_sort_time += ((double) (end - start)) / CLOCKS_PER_SEC;
free(b);
int *c = copy_array(a,n);
start = clock();
merge_sort(c,n);
end = clock();
merge_sort_time += ((double) (end - start)) / CLOCKS_PER_SEC;
free(c);
int *d = copy_array(a,n);
start = clock();
insertion_and_merge_sort_combine(d,0,n-1,40);
end = clock();
merge_sort_and_insertion_sort_mix_time+=((double) (end - start))/CLOCKS_PER_SEC;
free(d);
start = clock();
qsort(a,n,sizeof(int),cmpfunc);
end = clock();
qsort_time += ((double) (end - start)) / CLOCKS_PER_SEC;
free(a);
}
insertion_sort_time /= number_of_times;
merge_sort_time /= number_of_times;
merge_sort_and_insertion_sort_mix_time /= number_of_times;
qsort_time /= number_of_times;
printf("\nTime taken to sort:\n"
"%-35s %f\n"
"%-35s %f\n"
"%-35s %f\n"
"%-35s %f\n\n",
"(i)Insertion sort: ",
insertion_sort_time,
"(ii)Merge sort: ",
merge_sort_time,
"(iii)Insertion-mergesort-hybrid: ",
merge_sort_and_insertion_sort_mix_time,
"(iv)Qsort library function: ",
qsort_time);
}
The Analysis:
• Input: The user has to supply the number of times he/she wants to test the algorithm
corresponding to number of test cases. For each test case the user must enter two space
separated integers denoting the input size ’n’ and the ‘num_of_times’ denoting the
number of times he/she wants to run the analysis and take average. (Clarification: If
‘num_of_times’ is 10 then each of the algorithm specified above runs 10 times and
the average is taken. This is done because the input array is generated randomly
corresponding to the input size which you specify. The input array could be all sorted, or it could correspond to the worst case, i.e., descending order. In order to average out the running times of such input arrays, the algorithm is run 'num_of_times' times and the average is taken.)
The clock() routine and the CLOCKS_PER_SEC macro from time.h are used to measure the time taken.
Compilation: I have written the above code in Linux environment (Ubuntu 16.04 LTS).
Copy the code snippet above. Compile it using gcc, key in the inputs as specified and
admire the power of sorting algorithms!
• Results: As you can see for small input sizes, insertion sort beats merge sort by 2
* 10^-6 sec. But this difference in time is not so significant. On the other hand, the
hybrid algorithm and qsort() library function, both perform as good as insertion sort.
The input size is now increased by approximately 100 times to n = 1000 from n = 30.
The difference is now tangible. Merge sort runs 10 times faster than insertion sort.
There is again a tie between the performance of the hybrid algorithm and the qsort()
routine. This suggests that the qsort() is implemented in a way which is more or less
similar to our hybrid algorithm i.e., switching between different algorithms to make
the best out of them.
Finally, the input size is increased to 10^5 (1 lakh!), which is most probably the ideal size used in practical scenarios. Compared to the previous input n = 1000, where merge sort beat insertion sort by running 10 times faster, here the difference is even more significant: merge sort beats insertion sort by 100 times!
The hybrid algorithm which we have written does in fact outperform the traditional merge sort by running 0.01 sec faster. And lastly, qsort(), the library function, proves to us that implementation also plays a crucial role, by running 3 milliseconds faster!
Note: Do not run the above program with n >= 10^6 since it will take a lot of computing power. Thank you and happy coding!
Source
https://www.geeksforgeeks.org/asymptotic-analysis-comparison-sorting-algorithms/
Chapter 18
Auxiliary Space with Recursive Functions
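The function being discussed is a simple recursive sum; in sketch form (assuming the usual definition):

// Recursively computes 0 + 1 + ... + n. Each call waits for sum(n-1),
// so up to n nested frames sit on the call stack at the deepest point.
int sum(int n)
{
    if (n <= 0)
        return 0;
    return n + sum(n - 1);
}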
In the above example function, each call adds a new level to the stack.
Sum(5)
->sum(4)
->sum(3)
->sum(2)
->sum(1)
->sum(0)
Each of these calls is added to the call stack and takes up actual memory. So code like this
would take O(n) time and O(n) auxiliary space.
However, just because you have n calls total doesn't mean it takes O(n) space. Consider the below function, which adds adjacent elements between 0 and n:
Example:
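A sketch of what such a function can look like (the names pairSum and pairSumSequence are assumed for illustration):

// Returns the sum of two adjacent values
int pairSum(int a, int b)
{
    return a + b;
}

// Adds the adjacent pairs 0+1, 1+2, ..., (n-1)+n. Each call to pairSum
// returns before the next one is made, so the call stack never holds
// more than one pairSum frame at a time.
int pairSumSequence(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += pairSum(i, i + 1);
    return sum;
}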
In this example there will be roughly O(n) calls to pairSum. However, those calls do not
exist simultaneously on the call stack, so we need only O(1) space.
Source
https://www.geeksforgeeks.org/auxiliary-space-recursive-functions/
Chapter 19
Complexity of different operations in Binary tree, Binary Search Tree and AVL tree
In this article, we will discuss complexity of different operations in binary trees including
BST and AVL trees. Before understanding this article, you should have basic idea about:
Binary Tree, Binary Search Tree and AVL Tree.
The main operations in binary tree are: search, insert and delete. We will see the worst
case time complexity of these operations in binary trees.
Binary Tree –
In a binary tree, a node can have maximum two children. Consider the left skewed binary
tree shown in Figure 1.
• Deletion: For deletion of element 2, we have to traverse all elements to find 2 (assum-
ing we do breadth first traversal). Therefore, deletion in binary tree has worst case
complexity of O(n).
Binary Search Tree –
• Searching: For searching element 1, we have to traverse all elements (in order 3, 2, 1). Therefore, searching in a binary search tree has worst case complexity of O(n). In general, the time complexity is O(h), where h is the height of the BST.
• Insertion: For inserting element 0, it must be inserted as left child of 1. Therefore,
we need to traverse all elements (in order 3, 2, 1) to insert 0 which has worst case
complexity of O(n). In general, time complexity is O(h).
• Deletion: For deletion of element 1, we have to traverse all elements to find 1 (in order 3, 2, 1). Therefore, deletion in a binary search tree has worst case complexity of O(n). In general, the time complexity is O(h).
Source
https://www.geeksforgeeks.org/complexity-different-operations-binary-tree-binary-search-tree-avl-tree/
Chapter 20
Cyclomatic Complexity
M = E – N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
Steps that should be followed in calculating cyclomatic complexity and test cases design
are:
A = 10
IF B > C THEN
A = B
ELSE
A = C
ENDIF
Print A
Print B
Print C
The cyclomatic complexity for the above code is calculated from its control flow graph. The graph has seven nodes and seven edges, hence the cyclomatic complexity is 7 - 7 + 2 = 2.
Use of Cyclomatic Complexity:
• Determining the independent path executions has proven to be very helpful for developers and testers.
• It can make sure that every path has been tested at least once.
• It thus helps to focus more on the uncovered paths.
• Code coverage can be improved.
• The risk associated with the program can be evaluated.
• Using these metrics early in development helps in reducing the risks.
Reference: https://en.wikipedia.org/wiki/Cyclomatic_complexity
Source
https://www.geeksforgeeks.org/cyclomatic-complexity/
Chapter 21
Difference between Deterministic and Non-deterministic Algorithms
Example :
1. j = choice(a, n)
2. if (A[j] == x) then
   {
       write(j);
       success();
   }
3. write(0); failure();
Source
https://www.geeksforgeeks.org/difference-between-deterministic-and-non-deterministic-algorithms/
Chapter 22
Different types of recurrence relations and their solutions
T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + √n
These types of recurrence relations can be easily solved using the master method.
For recurrence relation T(n) = 2T(n/2) + cn, the values of a = 2, b = 2 and k =1. Here
logb(a) = log2(2) = 1 = k. Therefore, the complexity will be Θ(nlog2(n)).
Similarly for recurrence relation T(n) = 2T(n/2) + √n, the values of a = 2, b = 2 and k
=1/2. Here logb(a) = log2(2) = 1 > k. Therefore, the complexity will be Θ(n).
Type 2: Linear recurrence relations –
Following are some of the examples of recurrence relations based on linear recurrence rela-
tion.
These types of recurrence relations can be easily solved using the substitution method.
For example,
T(n) = T(n-1) + n
= T(n-2) + (n-1) + n
= T(n-k) + (n-k+1) + ... + (n-1) + n
Substituting k = n, we get
T(n) = T(0) + 1 + 2 + ... + n = n(n+1)/2 + T(0) = O(n^2)
Another example: T(n) = T(√n) + 1. Putting n = 2^m (i.e., m = log2(n)) gives T(2^m) = T(2^(m/2)) + 1. Writing S(m) = T(2^m), this becomes S(m) = S(m/2) + 1, whose solution is
S(m) = Θ(logm)
As n = 2^m or m = log2(n),
T(n) = T(2^m) = S(m) = Θ(logm) = Θ(loglogn)
T(n) = 2T(n-1) + c
= 2(2T(n-2) + c) + c = 2^2*T(n-2) + (c + 2c)
= 2^k*T(n-k) + (c + 2c + 4c + ... + 2^(k-1)c)
Substituting k = (n-1), we get
T(n) = 2^(n-1)*T(1) + (c + 2c + ... + 2^(n-2)c) = O(2^n)
Source
https://www.geeksforgeeks.org/different-types-recurrence-relations-solutions/
Chapter 23
In-Place Algorithm
An in-place algorithm is an algorithm that does not need an extra space and
produces an output in the same memory that contains the data by transforming
the input ‘in-place’. However, a small constant extra space used for variables is
allowed.
In-place means that the algorithm does not use extra space for manipulating the
input but may require a small though nonconstant extra space for its operation.
Usually, this space is O(log n), though sometimes anything in o(n) (Smaller than
linear) is allowed [Source : Wikipedia]
C++
Java
class GFG
{
/* Function to reverse arr[]
from start to end*/
public static void revereseArray(int []arr,
int n)
{
// Create a copy array
// and store reversed
// elements
int []rev = new int[n];
for (int i = 0; i < n; i++)
rev[n - i - 1] = arr[i];

// Copy the reversed contents back into arr[]
for (int i = 0; i < n; i++)
arr[i] = rev[i];
}
/* Utility function to
print an array */
public static void printArray(int []arr,
int size)
{
for (int i = 0; i < size; i++)
System.out.print(arr[i] + " ");
System.out.println("");
}
// Driver code
public static void main(String[] args)
{
int arr[] = {1, 2, 3, 4, 5, 6};
int n = arr.length;
printArray(arr, n);
revereseArray(arr, n);
System.out.println("Reversed array is");
printArray(arr, n);
}
}
Python3
# Driver code
if __name__ == "__main__":
arr = [1, 2, 3, 4, 5, 6]
n = len(arr)
print(*arr)
revereseArray(arr, n);
print("Reversed array is")
print(*arr)
Output:
1 2 3 4 5 6
Reversed array is
6 5 4 3 2 1
C++
revereseArray(arr, n);
cout << "Reversed array is" << endl;
printArray(arr, n);
return 0;
}
Java
class GFG
{
public static int __(int x, int y) {return x;}

/* Function to reverse arr[] from start to end,
   swapping elements in place (O(1) extra space) */
public static void revereseArray(int []arr, int n)
{
    for (int i = 0; i < n / 2; i++)
        // Swap arr[i] and arr[n-i-1] using the helper above
        arr[i] = __(arr[n - i - 1], arr[n - i - 1] = arr[i]);
}
/* Utility function to
print an array */
public static void printArray(int []arr,
int size)
{
for (int i = 0; i < size; i++)
System.out.print(Integer.toString(arr[i]) + " ");
System.out.println("");
}
// Driver code
public static void main(String[] args)
{
int []arr = new int[]{1, 2, 3, 4, 5, 6};
int n = arr.length;
printArray(arr, n);
revereseArray(arr, n);
System.out.println("Reversed array is");
printArray(arr, n);
}
}
Python3
# Driver code
if __name__ == "__main__":
arr = [1, 2, 3, 4, 5, 6]
n = len(arr)
print(*arr)
revereseArray(arr, n)
print("Reversed array is")
print(*arr)
Output:
1 2 3 4 5 6
Reversed array is
6 5 4 3 2 1
This needs O(1) extra space for exchanging elements and is an example of in-place algorithm.
Which Sorting Algorithms are In-Place and which are not?
In Place : Bubble sort, Selection Sort, Insertion Sort, Heapsort.
Not In-Place : Merge Sort. Note that merge sort requires O(n) extra space.
What about QuickSort? Why is it called In-Place?
QuickSort uses extra space for recursive function calls. It is called in-place according to broad
definition as extra space required is not used to manipulate input, but only for recursive
calls.
Improved By : Harshit Saini
Source
https://www.geeksforgeeks.org/in-place-algorithm/
Chapter 24
Iterated Logarithm log*(n)
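The C++ driver below uses two helpers, _log() and recursiveLogStar(); a sketch of them (matching the calls made by the driver) is:

#include <bits/stdc++.h>
using namespace std;

// log of x in the given base, truncated to an integer
int _log(double x, double base)
{
    return (int)(log(x) / log(base));
}

// log*(n): how many times log must be applied before the value is <= 1
int recursiveLogStar(double n, double b)
{
    if (n > 1.0)
        return 1 + recursiveLogStar(_log(n, b), b);
    return 0;
}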
// Driver code
int main()
{
int n = 100, base = 5;
cout << "Log*(" << n << ") = "
<< recursiveLogStar(n, base) << "\n";
return 0;
}
Java
class GFG
{
static int _log(double x,
double base)
{
return (int)(Math.log(x) /
Math.log(base));
}

// log*(n): count how many times log must be applied
// before the value drops to 1 or below
static int recursiveLogStar(double n, double b)
{
    if (n > 1.0)
        return 1 + recursiveLogStar(_log(n, b), b);
    return 0;
}
// Driver code
public static void main (String[] args)
{
int n = 100, base = 5;
System.out.println("Log*(" + n + ") = " +
recursiveLogStar(n, base));
}
}
PHP
<?php
// Recursive PhP program to find
// value of Iterated Logarithm
// Driver code
$n = 100; $base = 5;
echo "Log*(" , $n , ")"," = ",
recursiveLogStar($n, $base), "\n";
Output :
Log*(100) = 2
Iterative Implementation :
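A sketch of an iterative version, using the same _log() helper as above:

// Iteratively apply log until the value drops to 1 or below,
// counting the number of applications
int iterativeLogStar(double n, double base)
{
    int count = 0;
    while (n > 1.0)
    {
        n = _log(n, base);
        count++;
    }
    return count;
}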
Improved By : jit_t
Source
https://www.geeksforgeeks.org/iterated-logarithm-logn/
Chapter 25
Knowing the complexity in competitive programming
For Case 1
A naive solution using two for-loops works, as it gives us a complexity of O(N^2), which even in the worst case will perform 10^6 operations, well under 10^8. Of course O(N) and O(N logN) are also acceptable in this case.
For Case 2
We have to think of a better solution than O(N^2), as in the worst case it will perform 10^10
operations, as N is 10^5. So the complexity acceptable for this case is either O(N logN), which is approximately 10^6 (10^5 * ~10) operations, well under 10^8, or O(N).
For Case 3
Even O(N logN) gives us TLE, as it performs ~10^9 operations, which is over 10^8. So the only acceptable solution is O(N), which in the worst case will perform 10^8 operations.
The code for the given problem can be found on : https://www.geeksforgeeks.org/
write-a-c-program-that-given-a-set-a-of-n-numbers-and-another-number-x-determines-whether-or-not-there-exist-t
Source
https://www.geeksforgeeks.org/knowing-the-complexity-in-competitive-programming/
Chapter 26
Loop Invariant Condition with Examples of Sorting Algorithms
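The discussion below refers to a simple maximum-finding loop over an array A of n elements; a sketch (array contents are left unspecified here):

// Returns the maximum of A[0..n-1]. After processing index i,
// max holds the maximum of the first i+1 elements.
int findMax(const int A[], int n)
{
    int max = A[0];
    for (int i = 1; i < n; i++)
    {
        if (A[i] > max)
            max = A[i];
        // Loop invariant: max == maximum of A[0..i]
    }
    return max;
}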
In the above example after 3rd iteration of the loop max value is 7, which holds true for first
3 elements of array A. Here, the loop invariant condition is that max is always maximum
among the first i elements of array A.
Loop Invariant condition of various algorithms:
Prerequisite: insertion sort, selection sort, quick sort, bubblesort,
Selection Sort:
In selection sort algorithm we find the minimum element from the unsorted part and put it
at the beginning.
min_idx = 0
for (i = 0 to n-1)
{
min_idx = i;
for (j = i+1 to n-1)
if (arr[j] < arr[min_idx])
min_idx = j;
swap(&arr[min_idx], &arr[i]);
}
In the above pseudo code there are two loop invariant conditions:
1. In the outer loop, the array is sorted for the first i elements.
2. In the inner loop, min_idx always holds the index of the minimum value in A[i..j].
Insertion Sort:
In insertion sort, loop invariant condition is that the subarray A[0 to i-1] is always sorted.
for (i = 1 to n-1)
{
key = arr[i];
j = i-1;
while (j >= 0 and arr[j] > key)
{
arr[j+1] = arr[j];
j = j-1;
}
arr[j+1] = key;
}
Quicksort:
In quicksort algorithm, after every partition call array is divided into 3 regions:
1. Pivot element is placed at its correct position.
2. Elements less than pivot element lie on the left side of pivot element.
3. Elements greater than pivot element lie on the right side of pivot element.
partition (arr, low, high)
{
    pivot = arr[high];
    i = (low - 1)
    for (j = low; j <= high - 1; j++)
    {
        if (arr[j] <= pivot)
        {
            i++;
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[high]
    return (i + 1)
}
Bubble Sort:
In the bubble sort algorithm, after each iteration of the loop the largest element of the unsorted part is placed at its final, rightmost position. Therefore, the loop invariant condition is that at the end of the i-th iteration the rightmost i elements are sorted and in place.
for (i = 0 to n-1)
    for (j = 0 to n-i-2)
        if (arr[j] > arr[j+1])
            swap(&arr[j], &arr[j+1]);
Source
https://www.geeksforgeeks.org/loop-invariant-condition-examples-sorting-algorithms/
Chapter 27
Master Theorem For Subtract and Conquer Recurrences
The recurrence considered is T(n) = aT(n-b) + f(n) for n > 1, and T(n) = c for n <= 1, for some constants c, a > 0, b > 0, k >= 0 and a function f(n). If f(n) is O(n^k), then
1. If a < 1 then T(n) = O(n^k)
2. If a = 1 then T(n) = O(n^(k+1))
3. If a > 1 then T(n) = O(n^k a^(n/b))
Proof of the above theorem (by substitution method):
From the above function, we have:
T(n) = aT(n-b) + f(n)
T(n-b) = aT(n-2b) + f(n-b)
T(n-2b) = aT(n-3b) + f(n-2b)
Now,
T(n-b) = a^2 T(n-3b) + a f(n-2b) + f(n-b)
T(n) = a^3 T(n-3b) + a^2 f(n-2b) + a f(n-b) + f(n)
Continuing the expansion,
T(n) = Σ (i = 0 to n/b) a^i f(n - ib) + constant, where f(n - ib) is O((n - ib)^k)
T(n) = O(n^k Σ (i = 0 to n/b) a^i)
Where,
If a < 1 then Σ (i = 0 to n/b) a^i = O(1), and T(n) = O(n^k)
If a = 1 then Σ (i = 0 to n/b) a^i = O(n), and T(n) = O(n^(k+1))
If a > 1 then Σ (i = 0 to n/b) a^i = O(a^(n/b)), and T(n) = O(n^k a^(n/b))
#include<stdio.h>
int fib(int n)
{
if (n <= 1)
return n;
return fib(n-1) + fib(n-2);
}
int main ()
{
int n = 9;
printf("%d", fib(n));
getchar();
return 0;
}
Output
34
More Examples:
• Example-1:
T(n) = 3T(n-1), n>0
= c, n<=0
Sol: a=3, b=1, f(n)=c so k=0;
Since a>1, T(n) = O(n^k a^(n/b)) = O(3^n)
• Example-2:
T(n) = T(n-1) + n(n-1), if n>=2
= 1, if n=1
Sol: a=1, b=1, f(n)=n(n-1) so k=2;
Since a=1, T(n) = O(n^(k+1))
T(n) = O(n^(2+1))
T(n) = O(n^3)
• Example-3:
T(n) = 2T(n-1) – 1, if n>0
= 1, if n<=0
Sol: This recurrence can’t be solved using above method
since function is not of form T(n) = aT(n-b) + f(n)
Source
https://www.geeksforgeeks.org/master-theorem-subtract-conquer-recurrences/
Chapter 28
Measure execution time with high precision in C/C++
time() : time() function returns the time since the Epoch(jan 1 1970) in
seconds.
Header File : “time.h”
Prototype / Syntax : time_t time(time_t *tloc);
Return Value : On success, the value of time in seconds since the Epoch
is returned, on error -1 is returned.
Below is a program to demonstrate how to measure execution time using the time() function.
#include <bits/stdc++.h>
using namespace std;
{
}
}
int main()
{
/* Time function returns the time since the
Epoch(jan 1 1970). Returned time is in seconds. */
time_t start, end;
fun();
Output:
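For reference, a compact self-contained version of this measurement (a sketch: fun() here is just dummy work, not the article's exact function) is:

#include <cstdio>
#include <ctime>

void fun()
{
    // Dummy work whose execution time we measure
    for (volatile long i = 0; i < 100000000L; i++)
        ;
}

int main()
{
    time_t start, end;

    time(&start); // wall-clock time, in seconds, before the call
    fun();
    time(&end);   // wall-clock time after the call

    printf("fun() took %.0f second(s)\n", difftime(end, start));
    return 0;
}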
clock() : clock() returns the number of clock ticks elapsed since the
program was launched.
Header File : “time.h”
Prototype / Syntax : clock_t clock(void);
Return Value : On success, the value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. On error, (clock_t)-1 is returned.
Below is a program to demonstrate how to measure execution time using the clock() function.
#include <bits/stdc++.h>
using namespace std;
int main()
{
/* clock_t clock(void) returns the number of clock ticks
elapsed since the program was launched.To get the number
of seconds used by the CPU, you will need to divide by
CLOCKS_PER_SEC.where CLOCKS_PER_SEC is 1000000 on typical
32 bit system. */
clock_t start, end;
fun();
Output:
#include <bits/stdc++.h>
#include <sys/time.h>
using namespace std;
int main()
{
/* The function gettimeofday() can get the time as
well as timezone.
int gettimeofday(struct timeval *tv, struct timezone *tz);
The tv argument is a struct timeval and gives the
number of seconds and micro seconds since the Epoch.
struct timeval {
time_t tv_sec; // seconds
suseconds_t tv_usec; // microseconds
}; */
struct timeval start, end;
// start timer.
gettimeofday(&start, NULL);
fun();
// stop timer.
gettimeofday(&end, NULL);
Output:
struct timespec {
time_t tv_sec; //seconds
long tv_nsec; //nanoseconds
};
#include <bits/stdc++.h>
#include <sys/time.h>
using namespace std;
int main()
{
/* int clock_gettime( clockid_t clock_id, struct
timespec *tp ); The clock_gettime() function gets
the current time of the clock specified by clock_id,
and puts it into the buffer pointed to by tp.tp
parameter points to a structure containing
atleast the following members:
struct timespec {
time_t tv_sec; // seconds
long tv_nsec; // nanoseconds
};
clock id = CLOCK_REALTIME, CLOCK_PROCESS_CPUTIME_ID,
CLOCK_MONOTONIC ...etc
CLOCK_REALTIME : clock that measures real (i.e., wall-clock) time.
CLOCK_PROCESS_CPUTIME_ID : High-resolution per-process timer
from the CPU.
CLOCK_MONOTONIC : High resolution timer that is unaffected
by system date changes (e.g. NTP daemons). */
struct timespec start, end;
// start timer.
// clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
// clock_gettime(CLOCK_REALTIME, &start);
clock_gettime(CLOCK_MONOTONIC, &start);
fun();
// stop timer.
// clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
// clock_gettime(CLOCK_REALTIME, &end);
clock_gettime(CLOCK_MONOTONIC, &end);
Output:
chrono : The chrono library is used to deal with date and time. This library was designed to deal with the fact that timers and clocks might be different on different systems, and thus to improve over time in terms of precision. chrono is the name of a header, but also of a sub-namespace: all the elements in this header are defined not directly under the std namespace (like most of the standard library) but under the std::chrono namespace.
#include <bits/stdc++.h>
#include <chrono>
using namespace std;
int main()
{
auto start = chrono::high_resolution_clock::now();
fun();
time_taken *= 1e-9;
Output:
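A compact self-contained sketch of the chrono approach (again with fun() as dummy work):

#include <chrono>
#include <iostream>

void fun()
{
    // Dummy work whose execution time we measure
    for (volatile long i = 0; i < 100000000L; i++)
        ;
}

int main()
{
    auto start = std::chrono::high_resolution_clock::now();
    fun();
    auto end = std::chrono::high_resolution_clock::now();

    // Difference in nanoseconds, converted to seconds
    double time_taken =
        std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count() * 1e-9;

    std::cout << "Time taken by fun() is " << time_taken << " sec" << std::endl;
    return 0;
}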
Source
https://www.geeksforgeeks.org/measure-execution-time-with-high-precision-in-c-c/
Chapter 29
NP-Completeness | Set 1 (Introduction)
NP-complete problems are the hardest problems in the NP set. A decision problem L is NP-complete if:
1) L is in NP (any given solution for L can be verified quickly, i.e. in polynomial time, even though no efficient way of finding a solution is known).
2) Every problem in NP is reducible to L in polynomial time (reduction is defined below).
A problem is NP-Hard if it satisfies property 2 above; it does not need to satisfy property 1. Therefore, the NP-Complete set is a subset of the NP-Hard set.
Learning reduction in general is very important. For example, if we have library functions
to solve a certain problem and we can reduce a new problem to one of the solved problems,
we save a lot of time. Consider a problem where we have to find the minimum-product path
in a given directed graph, where the product of a path is the multiplication of the weights
of the edges along the path. If we already have code for Dijkstra's algorithm to find the
shortest (minimum-sum) path, we can take the log of all weights and use Dijkstra's
algorithm to find the minimum-product path rather than writing fresh code for the new
problem, as sketched below.
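A minimal sketch of this reduction (not from the original article; the graph representation, the helper names, and the tiny test graph are illustrative assumptions, and all edge weights are assumed positive):

#include <bits/stdc++.h>
using namespace std;

// Standard Dijkstra on an adjacency list of (neighbour, weight) pairs;
// returns the minimum total weight from src to every vertex.
vector<double> dijkstra(const vector<vector<pair<int,double>>>& g, int src)
{
    vector<double> dist(g.size(), numeric_limits<double>::infinity());
    priority_queue<pair<double,int>, vector<pair<double,int>>, greater<>> pq;
    dist[src] = 0.0;
    pq.push({0.0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;
        for (auto [v, w] : g[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

// Minimum product of edge weights over all paths from src to dst.
double minProductPath(vector<vector<pair<int,double>>> g, int src, int dst)
{
    for (auto& adj : g)
        for (auto& e : adj)
            e.second = log(e.second);      // log turns products into sums
    double logCost = dijkstra(g, src)[dst];
    return exp(logCost);                   // undo the log on the final answer
}

int main()
{
    // 0 -> 1 (weight 2), 1 -> 2 (weight 3), 0 -> 2 (weight 10)
    vector<vector<pair<int,double>>> g(3);
    g[0] = {{1, 2.0}, {2, 10.0}};
    g[1] = {{2, 3.0}};
    cout << minProductPath(g, 0, 2) << endl;   // prints 6 (2 * 3)
    return 0;
}

The only change needed is the log transform on the weights; everything else is the ordinary shortest-path routine, which is exactly the point of the reduction.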
How to prove that a given problem is NP-complete?
From the definition of NP-complete, it appears impossible to prove that a problem L is
NP-Complete: by definition, it requires us to show that every problem in NP is
polynomial-time reducible to L. Fortunately, there is an alternate way to prove it. The
idea is to take a known NP-Complete problem and reduce it to L. If such a polynomial-time
reduction is possible, then L is NP-Complete by transitivity of reduction (if an
NP-Complete problem is reducible to L in polynomial time, then every problem in NP is
reducible to L in polynomial time).
What was the first problem proved NP-Complete?
There must be some first problem proved NP-Complete directly from the definition of
NP-Completeness. SAT (the Boolean satisfiability problem) is that first NP-Complete
problem, proved by Cook (see the CLRS book for the proof).
It is always useful to know about NP-Completeness, even for engineers. Suppose you are
asked to write an efficient algorithm to solve an extremely important problem for your
company. After a lot of thinking, you can only come up with an exponential-time approach,
which is impractical. If you don't know about NP-Completeness, you can only say that you
could not come up with an efficient algorithm. If you know about NP-Completeness and can
prove that the problem is NP-complete, you can proudly say that a polynomial-time solution
is unlikely to exist. If a polynomial-time solution were possible, that solution would
settle a major open problem of computer science that many scientists have been attacking
for years.
We will soon be discussing more NP-Complete problems and their proofs of NP-Completeness.
References:
MIT Video Lecture on Computational Complexity
Introduction to Algorithms 3rd Edition by Clifford Stein, Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest
http://www.ics.uci.edu/~eppstein/161/960312.html
Source
https://www.geeksforgeeks.org/np-completeness-set-1/
Chapter 30
Online Algorithm
Because an online algorithm does not know the whole input in advance, it might make
decisions that later turn out not to be optimal. (Insertion sort, which handles its input
one element at a time, is an online algorithm that nevertheless produces the optimum
result.) Still, for many problems online algorithms cannot match the performance of
offline algorithms.
Examples of online algorithms are:
1. Insertion sort
2. Perceptron
3. Reservoir sampling
4. Greedy algorithm
5. Adversary model
6. Metrical task systems
7. Odds algorithm
Online Problems: There are many problems that offer more than one online algorithm as a
solution:
1. Canadian Traveller Problem
2. Linear Search Problem
3. K-server problem
4. Job shop scheduling problem
5. List update problem
6. Bandit problem
7. Secretary problem
Reference :
https://en.wikipedia.org/wiki/Online_algorithm
Source
https://www.geeksforgeeks.org/online-algorithm/
Chapter 31
Performance of loops (A
caching question)
// Function 1
int fun1(int arr[R][C])
{
int sum = 0;
for (int i=0; i<R; i++)
for (int j=0; j<C; j++)
sum += arr[i][j];
    return sum;
}
// Function 2
int fun2(int arr[R][C])
{
int sum = 0;
for (int j=0; j<C; j++)
for (int i=0; i<R; i++)
sum += arr[i][j];
    return sum;
}
In C/C++, elements are stored in row-major order. So the first implementation has better
spatial locality (nearby memory locations are referenced in successive iterations).
Therefore, the first implementation should generally be preferred when iterating over
multidimensional arrays.
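One way to see the effect is to time both traversal orders on a large array. The snippet below is an illustrative sketch, not from the original article; the 1000 x 1000 size and the clock()-based timing are arbitrary choices:

#include <bits/stdc++.h>
using namespace std;

const int R = 1000, C = 1000;
static int arr[R][C];

// Row-major traversal (cache friendly)
long long sumRowMajor() {
    long long sum = 0;
    for (int i = 0; i < R; i++)
        for (int j = 0; j < C; j++)
            sum += arr[i][j];
    return sum;
}

// Column-major traversal (poor spatial locality)
long long sumColMajor() {
    long long sum = 0;
    for (int j = 0; j < C; j++)
        for (int i = 0; i < R; i++)
            sum += arr[i][j];
    return sum;
}

int main() {
    clock_t t = clock();
    volatile long long s1 = sumRowMajor();
    double t1 = double(clock() - t) / CLOCKS_PER_SEC;

    t = clock();
    volatile long long s2 = sumColMajor();
    double t2 = double(clock() - t) / CLOCKS_PER_SEC;

    // On most machines t1 is noticeably smaller than t2.
    cout << "row-major: " << t1 << " s, column-major: " << t2 << " s\n";
    return 0;
}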
Source
https://www.geeksforgeeks.org/performance-of-loops-a-caching-question/
Chapter 32
Practice Questions on Time Complexity Analysis
1. What is the time complexity of following code:
int a = 0, b = 0;
for (i = 0; i < N; i++) {
a = a + rand();
}
for (j = 0; j < M; j++) {
b = b + rand();
}
Options:
Output:
Explanation: The first loop is O(N) and the second loop is O(M). Since we don’t know
which is bigger, we say this is O(N + M). This can also be written as O(max(N, M)).
Since no additional space is being utilized, the space complexity is constant, i.e. O(1).
2. What is the time complexity of following code:
int a = 0;
for (i = 0; i < N; i++) {
for (j = N; j > i; j--) {
a = a + i + j;
}
}
Options:
1. O(N)
2. O(N*log(N))
3. O(N * Sqrt(N))
4. O(N*N)
Output:
4. O(N*N)
Explanation:
The statement inside the inner loop runs a total of
N + (N - 1) + (N - 2) + ... + 1 + 0
= N * (N + 1) / 2
= 1/2 * N^2 + 1/2 * N
times, which is O(N^2).
3. What is the time complexity of following code:
int i, j, k = 0;
for (i = n / 2; i <= n; i++) {
for (j = 2; j <= n; j = j * 2) {
k = k + n / 2;
}
}
Options:
1. O(n)
2. O(nLogn)
3. O(n^2)
4. O(n^2Logn)
Output:
2. O(nLogn)
Explanation: If you notice, j keeps doubling till it is less than or equal to n. The
number of times we can double a number starting from 2 before it exceeds n is about log(n).
Let's take a couple of examples:
for n = 16, j = 2, 4, 8, 16
for n = 32, j = 2, 4, 8, 16, 32
So, j runs for O(log n) steps.
The outer loop on i runs for n/2 steps.
So, total steps = O(n/2 * log(n)) = O(n*log n).
4. What does it mean when we say that an algorithm X is asymptotically more
efficient than Y?
Options:
Output:
X will always be a better choice for large inputs (being asymptotically more efficient
only guarantees better performance for sufficiently large input sizes).
5. What is the time complexity of following code:
int a = 0, i = N;
while (i > 0) {
a += i;
i /= 2;
}
Options:
1. O(N)
2. O(Sqrt(N))
3. O(N / 2)
4. O(log N)
Output:
4. O(log N)
Explanation: i is halved on every iteration, so it takes about log2(N) iterations for i
to reach 0. Each iteration does O(1) work, hence the time complexity is O(log N).
Source
https://www.geeksforgeeks.org/practice-questions-time-complexity-analysis/
Chapter 33
Practice Set for Recurrence Relations
Source
https://www.geeksforgeeks.org/practice-set-recurrence-relations/
Chapter 34
Pseudo-polynomial Algorithms
Source
https://www.geeksforgeeks.org/pseudo-polynomial-in-algorithms/
Chapter 35
Python Code for time Complexity plot of Heap Sort
# (The beginning of this listing is truncated in this excerpt;
# the index helpers for the heap are reconstructed here.)
def Left(i):
    return 2 * i + 1

def Right(i):
    return 2 * i + 2

def heapSize(A):
    # index of the last element currently in the heap
    return len(A) - 1

def MaxHeapify(A, i):
    l = Left(i)
    r = Right(i)
    # print("left", l, "Right", r, "Size", heapSize(A))
    if l <= heapSize(A) and A[l] > A[i]:
        largest = l
    else:
        largest = i
    if r <= heapSize(A) and A[r] > A[largest]:
        largest = r
    if largest != i:
        # print("Largest", largest)
        A[i], A[largest] = A[largest], A[i]
        # print("List", A)
        MaxHeapify(A, largest)
# The timing harness below assumes the elided part of the program defined
# BuildMaxHeap/HeapSort (which use MaxHeapify above) and imported
# time, random and matplotlib.pyplot as plt.
elements = list()
times = list()
for i in range(1, 10):
    # list of 1000*i random integers
    a = [random.randint(0, 1000 * i) for _ in range(1000 * i)]
    start = time.process_time()
    HeapSort(a)                     # sort and measure the elapsed CPU time
    end = time.process_time()
    elements.append(len(a))
    times.append(end - start)
plt.xlabel('List Length')
plt.ylabel('Time Complexity')
plt.plot(elements, times, label ='Heap Sort')
plt.grid()
plt.legend()
plt.show()
# This code is contributed by Ashok Kajal
Output :
Source
https://www.geeksforgeeks.org/python-code-for-time-complexity-plot-of-heap-sort/
Chapter 36
Regularity condition in the master theorem
T(n) ------(1)
/ \
T(n/2) T(n/2) ------(2)
/ \ / \
Case 2 covers the case where the children nodes and the parent node do an equal amount
of work.
For example, the equation T(n) = 2T(n/2) + n falls under the category of case 2, and we
can clearly see from its tree below that at each level the children nodes perform as much
work as the parent node.
T(n) ------(n)
/ \
T(n/2) T(n/2) ------(n)
/ \ / \
Case 3 covers the scenario where the parent node does more work than the children nodes.
T(n) = T(n/2) + n is an example of case 3, where the parent performs more work than the
child.
T(n) ------(n)
|
T(n/2) ------(n/2)
|
In case 1 and case 2 the case conditions themselves make sure that the work done by the
children is either more than or equal to the work done by the parent, but that is not so
in case 3.
In case 3 we therefore apply a regularity condition to make sure that the parent does at
least as much work as the children.
The regularity condition for case 3 is
a f(n/b) <= c f(n), for some constant c < 1 and all sufficiently large n.
This says that f(n) (the amount of work done in the root) must be at least a constant
factor larger than a f(n/b) (the total work done one level below); this makes the work
shrink geometrically level by level, so the root's cost dominates the sum of the work
done in the lower levels.
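As a quick worked check (an added illustration, not from the original article): for T(n) = 2T(n/2) + n^2 we have a = 2, b = 2 and f(n) = n^2, which falls under case 3 since n^2 grows faster than n^(log_2 2) = n. The regularity condition holds because
a f(n/b) = 2 (n/2)^2 = n^2 / 2 <= c n^2 for the constant c = 1/2 < 1,
so case 3 applies and T(n) = Θ(n^2).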
The equation T(n) = T(n/2) + n(sin(n - π/2) + 2) is an example where the regularity
condition makes a huge difference. The equation does not satisfy case 1 or case 2, and
under case 3 there are arbitrarily large values of n for which the regularity condition
fails (the sine factor keeps oscillating). Hence this equation is beyond the scope of the
master theorem.
Source
https://www.geeksforgeeks.org/regularity-condition-master-theorem/
Chapter 37
Sorting without comparison of elements
We are not allowed to use comparison based sorting algorithms like QuickSort, MergeSort,
etc.
Since the elements are small, we use the array elements themselves as indexes into a
count array, in which we store the count of each element. Once we have the count array,
we traverse it and print every present element its count times.
#include <bits/stdc++.h>
using namespace std;

// Sorts arr[0..n-1], assuming all elements lie in the range [min, max]
void sortArr(int arr[], int n, int min, int max)
{
    // Count of elements in given range
    int m = max - min + 1;

    // Store count of every element
    vector<int> count(m, 0);
    for (int i = 0; i < n; i++)
        count[arr[i] - min]++;

    // Print every present element its count times
    for (int i = 0; i < m; i++)
        while (count[i]--)
            cout << (i + min) << " ";
}

int main()
{
    int arr[] = {10, 10, 1, 4, 4, 100, 0};
    int min = 0, max = 100;
    int n = sizeof(arr)/sizeof(arr[0]);
    sortArr(arr, n, min, max);
    return 0;
}
Output:
0 1 4 4 10 10 100
Source
https://www.geeksforgeeks.org/sorting-without-comparison-of-elements/
Chapter 38
Tail Recursion
Why do we care?
Tail-recursive functions are considered better than non-tail-recursive functions, because
tail recursion can be optimized by the compiler. The idea used by compilers to optimize
tail-recursive functions is simple: since the recursive call is the last statement, there
is nothing left to do in the current function, so saving the current function's stack
frame is of no use (see this for more details).
Can a non-tail recursive function be written as tail-recursive to optimize it?
Consider the following function to calculate the factorial of n. It is a
non-tail-recursive function. Although it looks tail recursive at first glance, a closer
look shows that the value returned by fact(n-1) is used in fact(n), so the call to
fact(n-1) is not the last thing done by fact(n).
C++
#include <iostream>
using namespace std;

// A NON-tail-recursive function. The function is not tail
// recursive because the value returned by fact(n-1) is used in
// fact(n) and the call to fact(n-1) is not the last thing done by fact(n)
unsigned int fact(unsigned int n)
{
    if (n == 0) return 1;
    return n*fact(n-1);
}

// Driver program
int main()
{
    cout << fact(5);
    return 0;
}
Java
class GFG {
// A NON-tail-recursive function.
// The function is not tail
// recursive because the value
// returned by fact(n-1) is used
// in fact(n) and call to fact(n-1)
// is not the last thing done by
// fact(n)
static int fact(int n)
{
if (n == 0) return 1;
return n*fact(n-1);
}
// Driver program
public static void main(String[] args)
{
System.out.println(fact(5));
}
}
Python 3
# A NON-tail-recursive function.
# The function is not tail
# recursive because the value
# returned by fact(n-1) is used
# in fact(n) and call to fact(n-1)
# is not the last thing done by
# fact(n)
def fact(n):
if (n == 0):
return 1
return n * fact(n-1)
C#
using System;

class GFG {

    // A NON-tail-recursive function.
    // The function is not tail
    // recursive because the value
    // returned by fact(n-1) is used
    // in fact(n) and call to fact(n-1)
    // is not the last thing done by
    // fact(n)
    static int fact(int n)
    {
        if (n == 0)
            return 1;
        return n * fact(n-1);
    }

    // Driver code
    static public void Main ()
    {
        Console.WriteLine(fact(5));
    }
}
PHP
<?php
// A NON-tail-recursive function.
// The function is not tail
// recursive because the value
// returned by fact(n-1) is used in
// fact(n) and call to fact(n-1) is
// not the last thing done by fact(n)
function fact($n)
{
    if ($n == 0) return 1;
    return $n * fact($n - 1);
}

// Driver Code
echo fact(5);
?>
Output :
120
The above function can be written as a tail-recursive function. The idea is to use one
more argument and accumulate the factorial value in the second argument. When n reaches
0, return the accumulated value.
C++
#include <iostream>
using namespace std;

// A tail recursive function to calculate factorial
unsigned int factTR(unsigned int n, unsigned int a)
{
    if (n == 0) return a;
    return factTR(n - 1, n * a);
}

// A wrapper over factTR
unsigned int fact(unsigned int n)
{
    return factTR(n, 1);
}

// Driver program
int main()
{
    cout << fact(5);
    return 0;
}
Java
class GFG {

    // A tail recursive function
    // to calculate factorial
    static int factTR(int n, int a)
    {
        if (n == 0) return a;
        return factTR(n - 1, n * a);
    }

    // A wrapper over factTR
    static int fact(int n)
    {
        return factTR(n, 1);
    }

    // Driver code
    static public void main (String[] args)
    {
        System.out.println(fact(5));
    }
}
Python 3
# to calculate factorial
def factTR(n, a):
if (n == 0):
return a
return factTR(n - 1, n * a)
C#
using System;

class GFG {

    // A tail recursive function
    // to calculate factorial
    static int factTR(int n, int a)
    {
        if (n == 0) return a;
        return factTR(n - 1, n * a);
    }

    // A wrapper over factTR
    static int fact(int n)
    {
        return factTR(n, 1);
    }

    // Driver code
    static public void Main ()
    {
        Console.WriteLine(fact(5));
    }
}
PHP
<?php
// A tail recursive function
// to calculate factorial
function factTR($n, $a)
{
    if ($n == 0) return $a;
    return factTR($n - 1, $n * $a);
}

// Driver Code
echo factTR(5, 1);
?>
Output :
120
Source
https://www.geeksforgeeks.org/tail-recursion/
Chapter 39
Time Complexity Analysis | Tower Of Hanoi (Recursion)
TOH(n, x, y, z)
{
    if (n >= 1)
    {
        // put (n-1) disks on z by using y
        TOH(n-1, x, z, y)

        // move the nth disk from x to y (constant work)

        // put the (n-1) disks from z on y by using x
        TOH(n-1, z, y, x)
    }
}
Analysis of Recursion
From the pseudocode, a problem of size n makes two recursive calls of size n-1 plus a
constant amount of work, so

T(n) = 2T(n-1) + 1, with T(1) = 1          -----------equation-1

Solving it by back substitution:

T(n-1) = 2T(n-2) + 1                       -----------equation-2
T(n-2) = 2T(n-3) + 1                       -----------equation-3

Put the value of T(n-2) in equation-2 with the help of equation-3:

T(n-1) = 2(2T(n-3) + 1) + 1 = 4T(n-3) + 3  -----------equation-4

Put the value of T(n-1) in equation-1 with the help of equation-4:

T(n) = 2(4T(n-3) + 3) + 1 = 8T(n-3) + 7

After generalization:

T(n) = 2^k T(n-k) + (2^k - 1)

Putting k = n-1, so that T(n-k) = T(1) = 1, gives

T(n) = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1

so the time complexity of Tower of Hanoi is O(2^n).
Source
https://www.geeksforgeeks.org/time-complexity-analysis-tower-hanoi-recursion/
Chapter 40
Time Complexity of Loop with Powers
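The sums below come from nested loops in which the inner loop runs a power of the outer loop variable. A sketch of such a loop (illustrative only, since the original listing is not reproduced in this excerpt):

#include <math.h>

// For a fixed power k, the inner O(1) body executes
// 1^k + 2^k + ... + n^k times in total.
void fun(int n, int k)
{
    for (int i = 1; i <= n; i++)
    {
        int p = pow(i, k);
        for (int j = 1; j <= p; j++)
        {
            // Some O(1) work
        }
    }
}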
k=1
Sum = 1 + 2 + 3 + ... + n
    = n(n+1)/2
    = n^2/2 + n/2

k=2
Sum = 1^2 + 2^2 + 3^2 + ... + n^2
    = n(n+1)(2n+1)/6
    = n^3/3 + n^2/2 + n/6

k=3
Sum = 1^3 + 2^3 + 3^3 + ... + n^3
    = n^2(n+1)^2/4
    = n^4/4 + n^3/2 + n^2/4

In general, for a fixed power k the sum is Θ(n^(k+1)), which is the time complexity of
the loop.
Source
https://www.geeksforgeeks.org/time-complexity-of-loop-with-powers/
Chapter 41
Time Complexity of a Loop when Loop variable “Expands or Shrinks” exponentially
For such cases, the time complexity of the loop is O(log(log(n))). The following cases
analyse different aspects of the problem.
Case 1 :
In this case, i takes the values 2, 2^k, (2^k)^k = 2^(k^2), (2^(k^2))^k = 2^(k^3), …,
2^(k^(log_k(log(n)))). The last term must be less than or equal to n, and we have
2^(k^(log_k(log(n)))) = 2^(log(n)) = n, which completely agrees with the value of our
last term. So there are in total log_k(log(n)) iterations, and each iteration takes a
constant amount of time to run, therefore the total time complexity is O(log(log(n))).
Case 2 :
In this case, i takes the values n, n^(1/k), (n^(1/k))^(1/k) = n^(1/k^2), n^(1/k^3), …,
n^(1/k^(log_k(log(n)))), so there are in total log_k(log(n)) iterations and each iteration
takes time O(1), so the total time complexity is O(log(log(n))).
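Concretely, the two cases correspond to loops of roughly the following shape (a sketch only; the update via pow() is an assumption, since the original listings are not reproduced in this excerpt):

#include <math.h>

// Case 1: i "expands" exponentially, i -> i^k on every iteration
void fun_expand(double n, double k)
{
    for (double i = 2; i <= n; i = pow(i, k))
    {
        // Some O(1) work
    }
}

// Case 2: i "shrinks" exponentially, i -> i^(1/k) on every iteration
void fun_shrink(double n, double k)
{
    for (double i = n; i > 2; i = pow(i, 1.0 / k))
    {
        // Some O(1) work
    }
}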
Refer below article for analysis of different types of loops.
https://www.geeksforgeeks.org/analysis-of-algorithms-set-4-analysis-of-loops/
Source
https://www.geeksforgeeks.org/time-complexity-loop-loop-variable-expands-shrinks-exponentially/
Chapter 42
Time Complexity of building a heap
BUILD-HEAP(A)
heapsize := size(A);
for i := floor(heapsize/2) downto 1
do HEAPIFY(A, i);
end for
END
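For reference, a concrete version of the pseudocode might look like the sketch below (not from the original article; it uses 0-based indexing, unlike the 1-based pseudocode above):

#include <bits/stdc++.h>
using namespace std;

// Sift the element at index i down until the subtree rooted at i
// satisfies the max-heap property (HEAPIFY).
void heapify(vector<int>& a, int n, int i)
{
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        swap(a[i], a[largest]);
        heapify(a, n, largest);
    }
}

// Build a max-heap by heapifying all internal nodes bottom-up;
// this is the loop whose total cost the chapter shows to be O(n).
void buildHeap(vector<int>& a)
{
    int n = a.size();
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);
}

int main()
{
    vector<int> a = {1, 3, 5, 4, 6, 13, 10, 9, 8, 15, 17};
    buildHeap(a);
    for (int x : a) cout << x << " ";   // prints a valid max-heap ordering
    return 0;
}

The loop visits only the internal nodes (indices n/2 - 1 down to 0), and the analysis below shows that the total cost of all these heapify calls is O(n).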
A quick look over the above algorithm suggests that the running time is O(n log n), since
each call to HEAPIFY costs O(log n) and BUILD-HEAP makes O(n) such calls. This upper
bound, though correct, is not asymptotically tight.

For finding the tight time complexity of building a heap, we must know the number of
nodes having height h. For this we use the fact that a heap of size n has at most
ceil(n / 2^(h+1)) nodes with height h.

Now to derive the time complexity, we express the total cost of BUILD-HEAP as

T(n) = sum from h = 0 to floor(log n) of [ ceil(n / 2^(h+1)) * O(h) ]
     = O( n * sum from h = 0 to log n of h / 2^h )
     = O( n * sum from h = 0 to infinity of h / 2^h )            (1)

Step 2 uses the properties of the Big-Oh notation to ignore the ceiling function and the
constant 2 in 2^(h+1). The remaining sum is evaluated using the geometric series

sum from h = 0 to infinity of x^h = 1 / (1 - x), for |x| < 1     (2)

On differentiating both sides and multiplying by x, we get

sum from h = 0 to infinity of h * x^h = x / (1 - x)^2            (3)

Putting x = 1/2 in (3), the sum evaluates to (1/2) / (1/2)^2 = 2. Putting the result
obtained in (3) back in our derivation (1), we get

T(n) = O(2n) = O(n)                                              (4)

Hence proved that the time complexity for building a binary heap is O(n).
Reference :
http://www.cs.sfu.ca/CourseCentral/307/petra/2009/SLN_2.pdf
Source
https://www.geeksforgeeks.org/time-complexity-of-building-a-heap/
Chapter 43
Time Complexity where loop variable is incremented by 1, 2, 3, 4 ..
void fun(int n)
{
int j = 1, i = 0;
while (i < n)
{
// Some O(1) task
i = i + j;
j++;
}
}
The loop variable 'i' is incremented by 1, 2, 3, 4, … until i becomes greater than or
equal to n.
After x iterations, the value of i is 1 + 2 + ... + x = x(x+1)/2. The loop keeps running
as long as i < n, so it stops at the smallest x for which x(x+1)/2 >= n, which gives
x = Θ(√n). Therefore the time complexity can be written as Θ(√n).
This article is contributed by Piyush Gupta.
Source
https://www.geeksforgeeks.org/time-complexity-where-loop-variable-is-incremented-by-1-2-3-4/
Chapter 44
Time complexity of recursive Fibonacci program
Before proceeding with this article, make sure you are familiar with the recursive
approach discussed in Program for Fibonacci numbers.
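For reference, the plain recursive implementation being analysed has this standard shape (a sketch, not copied verbatim from the linked article):

// Naive recursive Fibonacci: each call makes two smaller calls,
// which is what makes the running time grow exponentially.
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}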
Analysis of the recursive Fibonacci program:
The recursive program computes fib(n) by recursively computing fib(n-1) and fib(n-2), so
its running time satisfies the recurrence
T(n) = T(n-1) + T(n-2) + O(1).
On solving the above recursive equation we get the upper bound of Fibonacci as O(2^n),
but this is not the tight upper bound. The fact that Fibonacci can be mathematically
represented as a linear recursive function can be used to find the tight upper bound.
Now Fibonacci is defined as
F(n) = F(n-1) + F(n-2)
The characteristic equation for this function will be
x^2 = x + 1
x^2 - x - 1 = 0
Solving this quadratic gives the two roots
x1 = (1 + √5)/2  and  x2 = (1 - √5)/2
Now we know that the solution of a linear recursive function is given as
F(n) = a*x1^n + b*x2^n
and, by the same reasoning,
T(n) = a*x1^n + b*x2^n
Clearly T(n) and F(n) are asymptotically the same, as both functions are described by the
same recurrence.
Hence it can be said that
T(n) = O(F(n)) = O(x1^n + x2^n)
or we can write below (using the property of Big O notation that we can drop lower order
terms; |x2| < 1, so x2^n tends to 0)
T(n) = O(x1^n) = O( ((1 + √5)/2)^n ) ≈ O(1.6180^n)
This is the tight upper bound of Fibonacci.
Fun Fact:
1.6180 is also called the golden ratio. You can read more about golden ratio here: Golden
Ratio in Maths
Source
https://www.geeksforgeeks.org/time-complexity-recursive-fibonacci-program/
Chapter 45
Time taken by Loop unrolling vs Normal loop
int main() {
// n is 8 lakhs
int n = 800000;
    // sum is declared here (its declaration is elided in this excerpt);
    // long long avoids overflow for n = 800000
    long long sum = 0;

    // Normal loop
    for (int i = 1; i <= n; i++)
        sum += i;
Java
class GFG
{
// n is 8 lakhs
int n = 800000;
// Normal loop
for (int i = 1; i <= n; i++)
sum += i;
// Unrolling technique
// (assuming that n is
// a multiple of 8).
sum = 0;
for (int i = 1; i <= n; i += 8)
{
sum += i ;
sum += (i + 1);
sum += (i + 2);
sum += (i + 3);
sum += (i + 4);
sum += (i + 5);
sum += (i + 6);
sum += (i + 7);
}
Python3
if __name__ == "__main__":
# n is 8 lakhs
n = 800000;
# Normal loop
for i in range(1, n + 1):
sum += i
# to mark start
# time of unrolling
t = clock()
# Unrolling technique
# (assuming that n is
# a multiple of 8).
sum = 0
for i in range(1, n + 1, 8):
sum += i
sum += (i + 1)
sum += (i + 2)
sum += (i + 3)
sum += (i + 4)
sum += (i + 5)
sum += (i + 6)
sum += (i + 7)
# to mark the
# end of loop
t = clock() - t
Output:
Please refer loop unrolling for comparison of normal loops and loop unrolling.
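The listings above are excerpts. A self-contained sketch of the comparison in C++ (not from the original article; it times both versions with clock() and assumes n is a multiple of 8) could look like this:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    // n is 8 lakhs (assumed to be a multiple of 8)
    int n = 800000;

    // Normal loop
    clock_t t = clock();
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    double normalTime = double(clock() - t) / CLOCKS_PER_SEC;

    // Unrolling technique: eight additions per iteration, so the
    // loop-control overhead is paid one eighth as often.
    t = clock();
    sum = 0;
    for (int i = 1; i <= n; i += 8)
    {
        sum += i;
        sum += (i + 1);
        sum += (i + 2);
        sum += (i + 3);
        sum += (i + 4);
        sum += (i + 5);
        sum += (i + 6);
        sum += (i + 7);
    }
    double unrolledTime = double(clock() - t) / CLOCKS_PER_SEC;

    cout << "sum = " << sum << "\n";
    cout << "normal loop   : " << normalTime << " s\n";
    cout << "unrolled loop : " << unrolledTime << " s\n";
    return 0;
}

With optimisations disabled, the unrolled loop usually finishes sooner because the increment and comparison are executed one eighth as often; an optimising compiler may blur or remove the difference.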
Improved By : Harshit Saini
Source
https://www.geeksforgeeks.org/time-taken-loop-unrolling-vs-normal-loop/
Chapter 46
Understanding Time
Complexity with Simple
Examples
You will get surprising results: for N = 10 you may get 0.5 ms, and for N = 10,000 you
may get 0.2 ms. You will also get different timings on different machines. So we can say
that the actual time required to execute code is machine dependent (whether you are using
a Pentium 1 or a Pentium 5) and also depends on the network load if your machine is on a
LAN/WAN. You will not even get the same timings on the same machine for the same code;
one reason for this is the current network load.
Now the question arises: if time complexity is not the actual time required to execute
the code, then what is it?
The answer is: instead of measuring the actual time required to execute each statement in
the code, we consider how many times each statement executes.
For example:
#include <stdio.h>
int main()
{
printf("Hello World");
}
Output:
Hello World
In the above code, "Hello World" is printed only once on the screen. So the time
complexity is constant: O(1), i.e. a constant amount of time is required to execute the
code every time, no matter which operating system or machine configuration you are using.
Now consider another code:
#include <stdio.h>
void main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
printf("Hello Word !!!");
}
}
Output:
In the above code, "Hello World !!!" will be printed N times. So the time complexity of
the above code is O(N).
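The same counting idea extends to nested loops. In the small sketch below (an added illustration, not part of the original excerpt), the print statement executes N * N times, so the time complexity is O(N^2):

#include <stdio.h>

int main()
{
    int i, j, n = 8;
    // The print statement executes n * n times
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= n; j++) {
            printf("Hello World !!!\n");
        }
    }
    return 0;
}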
Source : Reddit
The co-author of this article is Varsha Lokare.
Source
https://www.geeksforgeeks.org/understanding-time-complexity-simple-examples/
Chapter 47
Source
https://www.geeksforgeeks.org/g-fact-86/