Design and Analysis of Algorithms (DAA)
Unit 1
Introduction: Characteristics of algorithm. Analysis of algorithm: Asymptotic analysis of complexity bounds -
best, average and worst-case behavior; Performance measurements of Algorithm, Time and space trade-offs,
Analysis of recursive algorithms through recurrence relations: Master’s Theorem and Recursion Tree method.
Introduction
What is a Computer Algorithm?
A finite set of steps to accomplish a task, described precisely enough that a computer can run it.
For example,
• Task: to make a cup of tea.
Algorithm:
Step 1: Add water and milk to the kettle.
Step 2: Boil it, add tea leaves.
Step 3: Add sugar, and then serve it in a cup.
Described precisely:
• It is quite difficult for a machine to know how much water, milk, etc. should be added in the tea-making algorithm above.
• These algorithms run on computers or computational devices.
• For example, GPS in our smartphones, and the operation of e-commerce websites and apps.
• GPS uses a shortest-path algorithm. Online shopping uses cryptography, e.g., the RSA public-key algorithm.
Characteristics of an Algorithm
A Good Algorithm has the following characteristics:
• Input: zero or more quantities are externally supplied.
• Output: at least one quantity is produced.
• Definiteness: each instruction is clear and unambiguous.
• Finiteness: the algorithm terminates after a finite number of steps.
• Effectiveness: every instruction is basic enough to be carried out exactly.
• Moreover, the analysis of an algorithm can help us understand it better, and can suggest
informed improvements.
• Algorithms tend to become shorter, simpler, and more elegant during the analysis process.
Analysis of Algorithms
Computational Complexity:
• The branch of theoretical computer science where the goal is to classify algorithms
according to their efficiency and computational problems according to their inherent
difficulty is known as computational complexity.
• Paradoxically, such classifications are typically not useful for predicting performance or
for comparing algorithms in practical applications because they focus on order-of-growth
worst-case performance.
• In this subject, we will mainly focus on the analysis that can be used to predict
performance and compare algorithms.
Analysis of Algorithms
A complete analysis of the running time of an algorithm involves the following steps:
• Implement the algorithm completely.
• Determine the time required for each basic operation.
• Identify the unknown quantities that describe how often the basic operations are executed.
• Develop a realistic model for the input to the program.
• Analyze the unknown quantities, assuming the modeled input.
• Calculate the total running time.
Average-Case Analysis:
• Elementary probability theory gives a number of different ways to compute the average
value of a quantity.
• While they are quite closely related, it will be convenient for us to explicitly identify two
different approaches to compute the mean.
Asymptotic Analysis of Complexity Bounds
• The efficiency or running time of an algorithm is stated as a function relating the input
length to the number of steps (time complexity) or storage locations (space complexity).
• Algorithm analysis is an important part of a broader computational complexity theory,
which provides theoretical estimates for the resources needed by any algorithm which
solves a given computational problem.
• These estimates provide an insight into reasonable directions of search for efficient
algorithms.
• In theoretical analysis of algorithms it is common to estimate their complexity in the
asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input.
• Big-O (O), Big-Omega (Ω) and Big-Theta (Θ) notations are used to this end.
Asymptotic Analysis of Complexity Bounds
Rules of Thumb:
• Simple programs can be analyzed by counting the nested loops of the program. A single
loop over n items yields f(n) = n. A loop within a loop yields f(n) = n². A loop within a
loop within a loop yields f(n) = n³.
• Given a series of for loops that are sequential, the slowest of them determines the
asymptotic behavior of the program. Two nested loops followed by a single loop is
asymptotically the same as the nested loops alone, because the nested loops dominate the
simple loop.
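As a quick illustration of these rules, here is a minimal Python sketch (an illustrative addition; the function names are not from the slides):

def single_loop(n):
    # one loop over n items: f(n) = n
    count = 0
    for i in range(n):
        count += 1
    return count

def nested_loop(n):
    # a loop within a loop: f(n) = n^2
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

print(single_loop(100))   # 100
print(nested_loop(100))   # 10000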
Best, Average and Worst-case Behavior
Insertion Sort (our running example):
• Insertion sort arranges the items of the input array in ascending (or descending) order.
• It maintains a sorted part and an unsorted part within the array.
• It takes items from the unsorted part and inserts them into the sorted part at the appropriate
position.
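The following is a minimal Python sketch of insertion sort as described above (the code is illustrative, not taken from the slides):

def insertion_sort(arr):
    # items to the left of index i form the sorted part of the array
    for i in range(1, len(arr)):
        key = arr[i]                  # next item from the unsorted part
        j = i - 1
        # scan the sorted part from right to left
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]       # shift bigger items one step right
            j -= 1
        arr[j + 1] = key              # insert at the appropriate position
    return arr

print(insertion_sort([7, 1, 53, 11, 4]))   # [1, 4, 7, 11, 53]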
Best, Average and Worst-case Behavior
Best Case Analysis:
• For example, suppose the items [1, 4, 7, 11, 53] are already sorted and we now want to place
33 in its appropriate position.
• The item to be inserted is compared with the sorted items from right to left, one by one, until
we find an item that is smaller than the item we are trying to insert.
• We compare 33 with 53; since 53 is bigger, we move one position to the left and compare 33
with 11.
• Since 11 is smaller than 33, we place 33 just after 11 and move 53 one step to the right.
• Here we did 2 comparisons. If the item were 55 instead of 33, we would have performed
only one comparison.
• That means, if the array is already sorted, then only one comparison is necessary to place
each item in its appropriate position, and one scan of the array would sort it.
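To make the best case concrete, here is a small Python sketch (an illustrative addition) that counts the comparisons insertion sort performs:

def count_comparisons(arr):
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key with arr[j]
            if arr[j] <= key:
                break                 # found the insertion point
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return comparisons

print(count_comparisons([1, 4, 7, 11, 53]))   # 4: one comparison per item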
Best, Average and Worst-case Behavior
Worst Case Analysis:
• In real life, most of the time we do the worst-case analysis of an algorithm. The worst-case
running time is the longest running time for any input of size n.
• In linear search, the worst case happens when the item we are searching for is in the last
position of the array, or is not in the array at all.
• In both cases, we need to go through all n items in the array. The worst-case runtime is,
therefore, O(n).
• Worst-case performance is more important than best-case performance for linear search,
for the following reasons.
Best, Average and Worst-case Behavior
Worst Case Analysis:
• The item we are searching for is rarely in the first position. If the array has 1000 items, from
1 to 1000, and we search for a randomly chosen item, there is only a 0.1 percent (1 in 1000)
chance that it will be in the first position.
• Most of the time the item is not in the array (or database in general), in which case the
search takes the worst-case running time.
• Similarly, in insertion sort, the worst-case scenario occurs when the items are reverse
sorted. The number of comparisons in the worst case will be in the order of n², and hence
the running time is O(n²), as the sketch below illustrates.
• Knowing the worst-case performance of an algorithm provides a guarantee that the
algorithm will never take longer than that.
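Continuing the count_comparisons sketch from the best-case slide, the reverse-sorted worst case can be checked directly:

# assumes count_comparisons from the earlier sketch is in scope
print(count_comparisons([53, 11, 7, 4, 1]))   # 10 = 5*4/2 comparisons
# for n items in reverse order this grows as n(n-1)/2, i.e. O(n^2)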
Best, Average and Worst-case Behavior
Average Case Analysis:
• Sometimes we do the average case analysis on algorithms. Most of the time the average case is
roughly as bad as the worst case.
• In the case of insertion sort, when we try to insert a new item into its appropriate position, we
compare the new item with half of the sorted items on average.
• The complexity is still in the order of n², which is the same order as the worst-case running time.
• It is usually harder to analyze the average behavior of an algorithm than to analyze its behavior in
the worst case. This is because it may not be apparent what constitutes an “average” input for a
particular problem.
• A useful analysis of the average behavior of an algorithm, therefore, requires prior knowledge of
the distribution of the input instances, which is usually an unrealistic requirement.
• Therefore, we often assume that all inputs of a given size are equally likely and do a probabilistic
analysis for the average case.
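Under that equal-likelihood assumption, the average case can also be estimated empirically. A simulation sketch in Python (an illustrative addition; it assumes the count_comparisons helper from the earlier sketch is in scope):

import random

def average_comparisons(n, trials=1000):
    # estimate the mean comparison count of insertion sort over
    # uniformly random permutations of n distinct items
    total = 0
    for _ in range(trials):
        arr = list(range(n))
        random.shuffle(arr)
        total += count_comparisons(arr)
    return total / trials

print(average_comparisons(100))   # roughly n^2 / 4, i.e. about 2500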
Best, Average and Worst-case Behavior
Formal Notations:
• Asymptotic notations are used to mathematically frame or bound the runtime performance
of an algorithm.
• Big-O notation (O):
Big-O notation is the formal way to express an upper bound on an algorithm's running
time.
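Formally, following the standard textbook definition (stated here for completeness):
f(n) = O(g(n)) if there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀.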
Best, Average and Worst-case Behavior
Omega Notation (Ω):
• The Omega Notation (Ω(n)) is the formal way to express the lower bound of an
algorithm’s running time.
Ω(f(n)) = { g(n) : there exist positive constants c and n₀ such that 0 ≤ c·f(n) ≤ g(n) for all n ≥ n₀ }
Best, Average and Worst-case Behavior
Theta Notation (Θ):
• The Theta notation (Θ) is the formal way to express both the upper and the lower bound of
an algorithm's running time, i.e., a tight bound.
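For completeness, the standard textbook definition:
f(n) = Θ(g(n)) if there exist positive constants c₁, c₂ and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.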
1. Asymptotic analysis tells us the behavior only for sufficiently large values of n. For
smaller values of n, the run time may not follow the asymptotic curve. To determine the
point beyond which the asymptotic curve is followed, we need to examine the times for
several values of n.
2. Even in the region where the asymptotic behavior is exhibited, the time growth may not
lie exactly on the predicted curve (straight line) because of the effects of low-order terms
that are discarded in the asymptotic analysis.
Time and Space Trade-offs
• There is no definite law governing time and space complexity trade-offs. There is, however,
a tendency for all sorts of algorithmic problems to have multiple solutions, with some
requiring less time at the expense of more space, and others requiring less space at the
expense of more time.
• When trying to optimize algorithms, it is very often the case that using more space, for
example in the form of pre-calculations, leads to better time-wise performance. Studying
time and space complexity can be helpful in observing this tendency, but it can also be
misleading.
• The most common case of optimization for speed is the use of lookup tables, sacrificing
some amount of memory to avoid recalculation. Another example is data compression:
consider the numerous image and audio file formats, each with its own benefits and drawbacks.
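A small Python sketch of the lookup-table idea (memoized Fibonacci is a standard illustration; the example itself is an addition, not from the slides):

from functools import lru_cache

@lru_cache(maxsize=None)          # the cache acts as the lookup table
def fib(n):
    # each value is computed once and then looked up: O(n) extra space
    # buys an exponential-to-linear improvement in running time
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025, computed without recalculation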
Time and Space Trade-offs
How do we analyze an algorithm's running time without running the algorithm?
• We need to count the number of steps the algorithm performs. The problem is that there are
many different types of steps, and each may require a different amount of time. For
example, a division may take longer to compute than an addition does. One way to analyze
an algorithm is to count the number of different steps separately. But listing all the types of
steps separately will be, in most cases, too cumbersome.
• Furthermore, the implementation of the different steps depends on the specific computer or
the programming language used in the implementation. We are trying to avoid that
dependency. Instead of counting all steps, we focus on the one type of step that seems to us
to be the major step. For example, if we are analyzing a sorting algorithm, then we choose
comparisons as the major step.
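For instance, here is a minimal Python sketch (an illustrative addition) that treats comparisons as the major step of linear search:

def linear_search_count(arr, target):
    comparisons = 0
    for item in arr:
        comparisons += 1          # count only the major step: comparisons
        if item == target:
            return True, comparisons
    return False, comparisons

print(linear_search_count([5, 3, 8, 1], 8))   # (True, 3)
print(linear_search_count([5, 3, 8, 1], 9))   # (False, 4): the worst case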
Time and Space Trade-offs
• Space complexity is used to compare different algorithms for the same problem, in which
case the input/output requirements are fixed.
• Since we cannot do without the input or the output, we count only the additional working
storage that may be saved.
• We also do not count the storage required for the program itself, since it is independent of
the size of the input.
• Like time complexity, space complexity refers to worst case, and it is usually denoted as an
asymptotic expression in the size of the input.
Analysis of Recursive Algorithms
What is Recursion?
• In computer science, when a function (or method or subroutine) calls itself, we call it
recursion.
• Most programming languages support recursion, and it is one of the fundamental concepts
you need to master while learning data structures and algorithms.
• Recursion is the key to divide and conquer paradigm where we divide the bigger problem
into smaller pieces, solve the smaller pieces individually and combine the results.
• Recursion is heavily used in graphs, trees and almost all data structures that have a
parent-child relationship.
• For example, consider the factorial of a number n:
n! = n×(n−1)×(n−2)×…×1
• If we observe the above equation carefully, we will notice that everything after the first n is
the factorial of n−1. So the equation can be written as:
n! = n×(n−1)!
Analysis of Recursive Algorithms
Why do We Use Recursion in Programming?
• The equation on the previous slide says that to find the value of n!, we first need to find the
value of (n−1)!.
• Once we have the value of (n−1)!, we can simply multiply it by n.
• Now, how do we calculate the value of (n−1)!?
We repeat the same process:
(n−1)! = (n−1)×(n−2)!
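Putting this together, a minimal Python sketch of the recursive factorial (an illustrative addition):

def factorial(n):
    # base case: 0! = 1! = 1
    if n <= 1:
        return 1
    # recursive case: n! = n * (n-1)!
    return n * factorial(n - 1)

print(factorial(5))   # 120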