Design and Analysis of Algorithms CH 1


Design and Analysis of Algorithms

By Arunima Sengupta
What is an Algorithm?
1. The word Algorithm means "a set of finite rules or instructions
to be followed in calculations or other problem-solving
operations"
Or
"a procedure for solving a mathematical problem in a finite
number of steps that frequently involves recursive operations".
2. An algorithm is like a step-by-step recipe or set of instructions
for solving a particular problem. Just as a recipe guides you
through making a dish, an algorithm guides a computer through
solving a task.
Real-life Example: Making a Peanut Butter Sandwich
Let's say you want to create an algorithm for making a peanut butter sandwich.
Here's a simple one:
• Get Ingredients:
– Bread
– Peanut Butter
– Jelly
• Prepare the Bread:
– Take two slices of bread.
• Spread Peanut Butter:
– Use a knife to spread peanut butter on one slice.
• Add Jelly:
– Spread jelly on the other slice.
• Combine Slices:
– Press the peanut butter slice and jelly slice together.
• Cut and Serve:
– Optionally, cut the sandwich into halves or quarters.
Each of these steps is like an instruction in the algorithm. The order is crucial, and
if you follow them correctly, you'll end up with a peanut butter sandwich.
Key Characteristics of Algorithms:
• Input: Algorithms take inputs (ingredients in
our example).
• Output: They produce an output or result (a
peanut butter sandwich).
• Clear Steps: Each step is well-defined and
unambiguous.
• Termination: The algorithm stops at some
point (you finish making the sandwich).
• Correctness: If you follow the steps correctly,
you get the desired result.
Characteristics Of Algorithm
More Complex Example: Sorting Cards
Let's say you have a deck of cards in random order, and you want to arrange them in
ascending order. Here's a simple algorithm:
• Start with the First Card:
– Pick the first card.
• Compare with the Next Card:
– Compare it with the next card.
• Swap if Necessary:
– If the first card is greater than the next, swap them.
• Move to the Next Card:
– Move to the next card and repeat steps 2-3.
• Repeat Until Done:
– Keep repeating until you've compared all pairs of cards.
By following these steps, you can organize the cards in ascending order, and this is
essentially a simple sorting algorithm.

In summary, algorithms are like sets of instructions we follow to solve problems, and
we use them in our daily lives, often without realizing it.
Refer
• https://www.geeksforgeeks.org/introduction-to-algorithms/
What is Performance Analysis? Space Complexity and Time Complexity
• 1. Time Complexity:
Time complexity measures how the runtime of
an algorithm grows with the size of the input. In
simpler terms, it tells us how long it takes for an
algorithm to complete as the input gets larger.
• Real-life Example: Making a Sandwich
• Imagine you have a set of instructions for making a
sandwich, and you want to know how the time
complexity relates to the number of steps you
have to follow. If each step takes a fixed amount of
time, and the total time grows linearly with the
number of steps, then the time complexity is said
to be O(n), where "n" is the number of steps.
• For instance, if it takes 1 minute to make a
sandwich with 5 steps, it might take roughly 2
minutes for 10 steps. The time complexity in this
case is linear.
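To see the same idea in code (a toy example of our own, not from the slides), a loop that does a constant amount of work per step takes time proportional to n, i.e. O(n):

```cpp
#include <cassert>

// Each iteration is one "step" of constant work, so the total number of
// operations grows linearly with n: time complexity O(n).
long long countSteps(long long n) {
    long long steps = 0;
    for (long long i = 0; i < n; i++)
        steps++;                    // one unit of work per step
    return steps;
}
```

Doubling n doubles the number of steps, just as 10 sandwich steps take roughly twice as long as 5.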
• 2. Space Complexity:
• Space complexity measures how much
additional memory an algorithm needs to
solve a problem, based on the size of the
input. It tells us how the memory usage grows
as the input size increases.
• Real-life Example: Grocery Shopping List
• Think of creating a grocery shopping list as an algorithm with
space complexity. If you write down each item you need on the
list, the space complexity is O(n), where "n" is the number of
items on your list. The more items you need, the more space
(paper) you require.
• For example, if you need 5 items and you write them on a
small piece of paper, you might need a larger piece of paper or
a second sheet if your list grows to 10 items. The space
complexity increases with the number of items on your
shopping list.
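In code terms (our own toy sketch), an algorithm that stores one entry per input item uses O(n) extra memory, just like the growing paper list:

```cpp
#include <cassert>
#include <string>
#include <vector>

// The "shopping list" grows with the number of items: one stored string per
// item, so the extra memory used is O(n) in the input size.
std::vector<std::string> makeShoppingList(const std::vector<std::string>& items) {
    std::vector<std::string> list;
    list.reserve(items.size());      // space proportional to n
    for (const auto& item : items)
        list.push_back(item);
    return list;
}
```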
• In summary, time complexity is about how the runtime of an
algorithm increases with the size of the input, and space
complexity is about how much additional memory is required
as the input size grows. These concepts help us understand
and compare the efficiency of different algorithms.
• Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of
two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition
of A and B
C <- A + B
return C

• The addition of two scalar numbers requires one addition operation. The time complexity of this algorithm is therefore constant: T(n) = O(1).
Suppose the problem is to find whether a pair (X, Y) exists in an array A of N
elements whose sum is Z. The simplest idea is to consider every pair and check
whether it satisfies the given condition. This takes O(N^2) time and O(N) space.
The pseudo-code is as follows:

int a[n];
for (int i = 0; i < n; i++)
    cin >> a[i];

for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (i != j && a[i] + a[j] == z)
            return true;
return false;
Refer
• https://www.geeksforgeeks.org/time-complexity-and-space-complexity/
Input Length | Worst Accepted Time Complexity | Usual Type of Solutions
10-12        | O(N!)                          | Recursion and backtracking
15-18        | O(2^N * N)                     | Recursion, backtracking, and bit manipulation
18-22        | O(2^N * N)                     | Recursion, backtracking, and bit manipulation
30-40        | O(2^(N/2) * N)                 | Meet in the middle, Divide and Conquer
100          | O(N^4)                         | Dynamic programming, Constructive
400          | O(N^3)                         | Dynamic programming, Constructive
2K           | O(N^2 * log N)                 | Dynamic programming, Binary Search, Sorting, Divide and Conquer
10K          | O(N^2)                         | Dynamic programming, Graph, Trees, Constructive
1M           | O(N * log N)                   | Sorting, Binary Search, Divide and Conquer
100M         | O(N), O(log N), O(1)           | Constructive, Mathematical, Greedy Algorithms
Practice
• https://www.geeksforgeeks.org/practice-questions-time-complexity-analysis/
Asymptotic Notations
1. Big O notation (O):
• It is defined as an upper bound, and an upper bound on an algorithm
is the most amount of time required (the worst-case performance).
Big O notation is used to describe the asymptotic upper bound.
• Mathematically, if f(n) describes the running time of an
algorithm, f(n) is O(g(n)) if there exist positive
constants C and n0 such that
• 0 <= f(n) <= Cg(n) for all n >= n0
• g(n) is used to give an upper bound on the function.
If a function is O(n), it is automatically O(n^2) as well.
Graphic example for Big O :
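To make the definition concrete, here is a small numeric self-check on an assumed example (not from the slides): f(n) = 2n + 3 is O(n), witnessed by the constants C = 5 and n0 = 1, since 2n + 3 <= 5n whenever n >= 1.

```cpp
#include <cassert>

// Assumed example: f(n) = 2n + 3 as a running time, g(n) = n as the bound.
long long f(long long n) { return 2 * n + 3; }
long long g(long long n) { return n; }

// Verify the Big O definition 0 <= f(n) <= C*g(n) for all n0 <= n <= limit,
// with the witnesses C = 5 and n0 = 1.
bool bigOHoldsUpTo(long long limit) {
    const long long C = 5, n0 = 1;
    for (long long n = n0; n <= limit; n++)
        if (!(0 <= f(n) && f(n) <= C * g(n)))
            return false;
    return true;
}
```

A loop can only check finitely many n, of course; the algebraic argument (2n + 3 <= 5n iff n >= 1) covers the rest.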
2. Big Omega notation (Ω):
• It is defined as a lower bound, and a lower bound on an
algorithm is the least amount of time required (the most
efficient way possible, in other words the best case).
Just as O notation provides an asymptotic upper
bound, Ω notation provides an asymptotic lower bound.
• Let f(n) be the running time of an algorithm;
f(n) is said to be Ω(g(n)) if there exist positive
constants C and n0 such that
• 0 <= Cg(n) <= f(n) for all n >= n0
• g(n) is used to give a lower bound on the function.
If a function is Ω(n^2), it is automatically Ω(n) as well.
Graphical example for Big Omega (Ω):
3. Big Theta notation (Θ):
• It is defined as the tightest bound, and the tightest bound is the best
of all the worst-case times that the algorithm can take.
• Let f(n) be the running time of an algorithm.
f(n) is said to be Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
Mathematically,
– 0 <= f(n) <= C1g(n) for n >= n0
– 0 <= C2g(n) <= f(n) for n >= n0
Merging both equations, we get:
– 0 <= C2g(n) <= f(n) <= C1g(n) for n >= n0
• The equation simply means there exist positive constants C1
and C2 such that f(n) is sandwiched between C2g(n) and
C1g(n).
Graphic example of Big Theta (Θ):
Difference Between Big O, Big Omega and Big Theta:
1. Big O is like (<=): the rate of growth of an algorithm is less than
or equal to a specific value. Big Omega (Ω) is like (>=): the rate of
growth is greater than or equal to a specified value. Big Theta (Θ) is
like (==): the rate of growth is equal to a specified value.
2. The upper bound of an algorithm is represented by Big O notation;
only the upper side of the function is bounded, and the asymptotic upper
bound is given by Big O notation. The algorithm's lower bound is
represented by Omega notation; the asymptotic lower bound is given by
Omega notation. The bounding of a function from above and below is
represented by Theta notation; the exact asymptotic behavior is
described by Theta notation.
3. Big O – Upper Bound. Big Omega (Ω) – Lower Bound. Big Theta (Θ) –
Tight Bound.
4. Big O is defined as the upper bound: the most amount of time an
algorithm can require (the worst-case performance). Big Omega is defined
as the lower bound: the least amount of time required (the most
efficient way possible, in other words the best case). Big Theta is
defined as the tightest bound: the best of all the worst-case times that
the algorithm can take.
5. Mathematically, Big O is 0 <= f(n) <= Cg(n) for all n >= n0. Big
Omega is 0 <= Cg(n) <= f(n) for all n >= n0. Big Theta is 0 <= C2g(n)
<= f(n) <= C1g(n) for all n >= n0.
Definition and Implementation of Divide and Conquer: General Method, Applications
Divide And Conquer
This technique can be divided into the following three
parts:
• Divide: This involves dividing the problem into
smaller sub-problems.
• Conquer: Solve sub-problems by calling recursively
until solved.
• Combine: Combine the sub-problems to get the
final solution of the whole problem.
The following are some standard algorithms that follow the Divide
and Conquer approach.
• Quicksort is a sorting algorithm. The algorithm picks a pivot
element and rearranges the array elements so that all
elements smaller than the picked pivot element move to the
left side of the pivot, and all greater elements move to the
right side. Finally, the algorithm recursively sorts the subarrays
on the left and right of the pivot element.
• Merge Sort is also a sorting algorithm. The algorithm divides
the array into two halves, recursively sorts them, and finally
merges the two sorted halves.
• Strassen’s Algorithm is an efficient algorithm to multiply two
matrices. A simple method to multiply two matrices needs 3
nested loops and is O(n^3). Strassen’s algorithm multiplies
two matrices in O(n^2.8074) time.
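As an illustration of the three phases, here is a minimal Merge Sort sketch in C++ (the names and out-of-place structure are our own; a production version would merge in place within one buffer):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Divide: split the range in half. Conquer: sort each half recursively.
// Combine: merge the two sorted halves into one sorted vector.
std::vector<int> mergeSort(const std::vector<int>& a) {
    if (a.size() <= 1) return a;                        // base case: sorted
    size_t mid = a.size() / 2;
    std::vector<int> left  = mergeSort({a.begin(), a.begin() + mid});
    std::vector<int> right = mergeSort({a.begin() + mid, a.end()});
    std::vector<int> merged;
    std::merge(left.begin(), left.end(),
               right.begin(), right.end(),
               std::back_inserter(merged));             // combine step
    return merged;
}
```

Each level of recursion does O(n) merging work over O(log n) levels, giving the familiar O(n log n) running time.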
Binary Search
• Binary Search is a searching algorithm for
sorted arrays that works by repeatedly dividing
the search interval in half. The idea of binary
search is to use the information that the array
is sorted to reduce the time complexity to O(log N).
Conditions for when to apply Binary Search in a Data Structure:
• To apply the Binary Search algorithm:
– The data structure must be sorted.
– Access to any element of the data structure takes constant time.
Binary Search Algorithm:

In this algorithm,
• Divide the search space into two halves by
finding the middle index “mid”.
• Compare the middle element of the search space
with the key.
• If the key is found at the middle element, the process
is terminated.
• If the key is not found at the middle element, choose
which half will be used as the next search space.
– If the key is smaller than the middle element, then the
left side is used for next search.
– If the key is larger than the middle element, then the
right side is used for next search.
• This process is continued until the key is found or
the total search space is exhausted.
How does Binary Search work?
• To understand the working of binary search,
consider the following illustration:
– Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38,
56, 72, 91}, and the target = 23.
– First Step: Calculate mid and compare the mid
element with the key. If the key is less than the mid
element, move the search space to the left; if it is
greater than the mid, move the search space to the right.
– Key (i.e., 23) is greater than current mid element
(i.e., 16). The search space moves to the right.
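The steps above can be sketched as an iterative C++ function (our own sketch; the slides give no code). On the illustration's array {2, 5, 8, 12, 16, 23, 38, 56, 72, 91} with target 23, the first mid element is 16, the search space moves right, and the key is found at index 5:

```cpp
#include <cassert>
#include <vector>

// Iterative binary search on a sorted vector: returns the index of key,
// or -1 if the key is absent.
int binarySearch(const std::vector<int>& arr, int key) {
    int low = 0, high = (int)arr.size() - 1;
    while (low <= high) {                   // search space not yet exhausted
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (arr[mid] == key)
            return mid;                     // key found at the middle element
        else if (key < arr[mid])
            high = mid - 1;                 // continue in the left half
        else
            low = mid + 1;                  // continue in the right half
    }
    return -1;                              // key not present
}
```

Each iteration halves the search space, so at most O(log N) comparisons are needed.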
