
Design and Analysis of Algorithm Lecture # 1

Why this course?
The goal is to design fast, efficient, and effective solutions to problems. Some
problems are easy to solve and some are hard; considerable cleverness is required to
design a fast and effective solution. Designing a new system often requires new technology,
and new technology in turn builds on improvements to existing algorithms. The study of
algorithms aims to design efficient solutions, not only to reduce cost and running time but also
to improve scalability, reliability, and availability.
The main concerns of the course are:
i) Correctness of the solution
ii) Decomposition of an application into small, clear units that can be maintained
easily
iii) Improving the performance of the application

INTRODUCTION TO ANALYSIS OF ALGORITHM


A layperson perceives that a computer can do anything and everything. It is often hard to
see that it is not really the computer but the person behind the computer who does the whole
thing.
For example, users simply enter their queries and get the information they desire. A
common user rarely realizes that a man-made procedure called search has done the
entire task, and that the only support provided by the computer is execution speed and
organized storage of information.
The word ‘algorithm’ derives from the name of Abu Ja’far Muhammad ibn Musa al-Khwarizmi,
a ninth-century mathematician. In his work, al-jabr means “restoring”, referring to the process
of moving a subtracted quantity to the other side of an equation; al-muqabala means “comparing”
and refers to subtracting equal quantities from both sides of an equation.

Definition of Algorithm
- An algorithm is a set of rules for carrying out a calculation, either by hand or on a
machine.
- An algorithm is a sequence of computational steps that transform the input into the
output.
- An algorithm is a sequence of operations performed on data that are organized in
data structures.
- An algorithm is a sequence of unambiguous instructions for solving a problem, i.e.,
for obtaining a required output for any legitimate input in a finite amount of time.
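A classic illustration of these definitions, fittingly due to a tradition al-Khwarizmi belongs to, is Euclid's algorithm for the greatest common divisor: a finite sequence of unambiguous steps that transforms any legitimate input (two positive integers) into an output. This sketch is an illustration added here, not part of the original notes:

```python
def gcd(a, b):
    """Euclid's algorithm: compute the greatest common divisor
    of two positive integers in a finite number of steps."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # → 6
```

Each step is unambiguous, the input is transformed into the output, and the loop is guaranteed to terminate because the second argument strictly decreases.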

Asymptotic notation
Asymptotic notation is a mathematical tool used to describe the efficiency of an algorithm or
the rate at which a function grows as the input size increases. It is a way to simplify and
express the behavior of a function as it approaches infinity.

There are three common types of asymptotic notation used in algorithm analysis: big O
notation, big Omega notation, and big Theta notation.

Big O notation is used to describe the upper bound of the growth rate of a function. It
represents the maximum amount of time or space required by an algorithm, as a function of
the input size, in the worst-case scenario. For example, if an algorithm has a time complexity
of O(n^2), this means that the time the algorithm takes to run grows no faster than
proportionally to n^2, where n is the size of the input.
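To make the O(n^2) growth concrete, here is a small illustrative sketch (not from the original notes) that counts the comparisons performed by a doubly nested loop over every pair of items:

```python
def count_pair_comparisons(items):
    """Compare every pair of items, counting the comparisons performed.
    The nested loops execute n*(n-1)/2 comparisons for n items,
    so the count grows quadratically with len(items): an O(n^2) pattern."""
    count = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            count += 1  # one comparison of items[i] and items[j]
    return count

print(count_pair_comparisons(list(range(10))))  # → 45, i.e. 10*9/2
```

Doubling the input from 10 to 20 items raises the count from 45 to 190, roughly a fourfold increase, which is the signature of quadratic growth.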

Big Omega notation is used to describe the lower bound of the growth rate of a function. It
represents the minimum amount of time or space required by an algorithm.

Big-O notation describes the upper bound or worst-case scenario of an algorithm's time
complexity. It tells us how quickly the runtime of an algorithm grows as the size of the input
increases. For example, if an algorithm has a time complexity of O(n), it means that the
maximum time it takes to run will grow linearly with the size of the input.

Big-Omega notation describes the lower bound or best-case scenario of an algorithm's time
complexity. It tells us the least amount of time an algorithm can take as the size of the
input increases. For example, if an algorithm has a time complexity of Omega(n), it means
that the minimum time it takes to run grows at least linearly with the size of the input.

Big-Theta notation provides a tight bound on the algorithm's time complexity by describing
both the upper and lower bounds. It gives us a range of time complexity that an algorithm
will fall within. For example, if an algorithm has a time complexity of Theta(n), it means that
its runtime will grow linearly with the size of the input, and both the best-case and worst-case
scenarios will fall within that same linear growth rate.
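As a small added illustration of a tight Theta bound, consider finding the largest value in a list: the loop must examine every element no matter how the values are arranged, so the best case and worst case coincide. This sketch is not from the original notes:

```python
def find_max(values):
    """Return the largest value in a non-empty list. The loop always
    examines every element, so the number of comparisons is exactly
    len(values) - 1 in every case: best case and worst case coincide,
    and the running time is Theta(n)."""
    best = values[0]
    for v in values[1:]:
        if v > best:  # one comparison per remaining element
            best = v
    return best

print(find_max([3, 9, 4, 1]))  # → 9
```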

In summary, asymptotic notation provides us with a way to analyze the efficiency of an
algorithm without getting bogged down in specific details of implementation or input size. It
helps us compare the performance of different algorithms and make informed decisions about
which one to use in a given situation.

Asymptotic notation is a way of measuring how quickly an algorithm's runtime grows as the
size of the input increases. It provides us with a general idea of how efficient an algorithm is
without getting into the specifics.

Big-O notation tells us the maximum time an algorithm takes to run, Big-Omega tells us the
minimum time, and Big-Theta gives us a range in which the algorithm's time complexity
falls.

Overall, asymptotic notation helps us compare different algorithms and choose the most
efficient one for a given problem, without worrying too much about the details.

1. Let's say you have an algorithm that sorts a list of numbers using the bubble sort
algorithm. The worst-case time complexity of this algorithm is O(n^2), which means
that the maximum time it takes to sort the list grows quadratically with the size of
the list. In other words, if you have a list of 1000 numbers, the worst-case number of
operations is on the order of 1 million (1000^2).
2. Consider another algorithm that searches for a given value in an unsorted list by
scanning it from the front. The best-case scenario for this algorithm is that the value
is at the beginning of the list, so the algorithm only needs to look at the first element
to find it. In this case, the best-case time complexity of the algorithm is Omega(1),
which means that the minimum time it takes to find the value is constant, regardless of
the size of the list.
3. Let's say you have a third algorithm that searches for an element in a sorted list using
the binary search algorithm. The time complexity of this algorithm is Theta(log n),
which means that the time it takes to search the list grows logarithmically with the
size of the list. In other words, if you have a list of 1000 elements, the algorithm will
take around 10 operations to find the element, since log base 2 of 1000 is
approximately 10.
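The sorting and searching examples above can be sketched in code. The following is an illustrative implementation added to these notes (the step counter in the search is an addition for demonstration, not part of the standard algorithm):

```python
def bubble_sort(a):
    """Bubble sort: in the worst case the nested loops perform about
    n^2/2 comparisons, hence the O(n^2) bound from example 1."""
    a = list(a)  # work on a copy
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap out-of-order pair
    return a

def binary_search(a, target):
    """Binary search on a sorted list: the search range is halved on
    every step, giving the Theta(log n) bound from example 3.
    Returns (index, steps taken), or (-1, steps) if absent."""
    lo, hi = 0, len(a) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        elif a[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1, steps

data = bubble_sort([5, 2, 9, 1, 7])
print(data)                    # → [1, 2, 5, 7, 9]
print(binary_search(data, 7))  # → (3, 2): found at index 3 in 2 halving steps
```

On a sorted list of 1000 elements, the step counter never exceeds 10, matching the log2(1000) ≈ 10 estimate in example 3.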

These examples show how asymptotic notation can help us understand the behavior of
different algorithms as the input size grows larger. It allows us to compare algorithms and
choose the most efficient one for a given problem.

Quadratically

Quadratically means that something is growing with the square of the input size. In other
words, if the input size doubles, the quantity (such as the running time) will quadruple.

For example, if an algorithm has a time complexity of O(n^2), it means that the time it takes
to complete the algorithm will grow with the square of the input size. If the input size doubles
from 1000 to 2000, the time it takes to complete the algorithm will increase by a factor of 4
(since 2000^2 is 4 times larger than 1000^2).

So, quadratically refers to a growth rate that is proportional to the square of the input size.
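The doubling argument above can be checked directly. This small sketch (added for illustration) counts the iterations of a doubly nested loop and confirms the factor of 4:

```python
def quadratic_ops(n):
    """Count the iterations of a doubly nested loop over n items:
    exactly n * n, so the count grows quadratically with n."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

# Doubling the input size from 100 to 200 quadruples the work.
print(quadratic_ops(200) / quadratic_ops(100))  # → 4.0
```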

Logarithmically

Logarithmically means that something is growing with the logarithm of the input size. In
other words, the growth rate is proportional to the logarithm of the input size.

For example, if an algorithm has a time complexity of O(log n), it means that the time it takes
to complete the algorithm will grow logarithmically with the input size. If the input size
doubles from 1000 to 2000, the time it takes to complete the algorithm will only increase by a
small constant amount (since log(2000) is only slightly larger than log(1000)).

So, logarithmically refers to a growth rate that is proportional to the logarithm of the input
size.

In mathematics, the logarithm is a mathematical function that measures the number of times a
certain number (known as the base) must be multiplied by itself to obtain another number.

For example, the logarithm base 2 of 8 is 3, because 2 multiplied by itself three times gives us
8 (i.e., 2^3 = 8). Similarly, the logarithm base 10 of 1000 is 3, because 10 multiplied by itself
three times gives us 1000 (i.e., 10^3 = 1000).
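The worked logarithms above can be verified with Python's standard `math` module (an illustrative check, not part of the original notes):

```python
import math

# log base 2 of 8 is 3, because 2**3 == 8
print(math.log2(8))      # → 3.0
# log base 10 of 1000 is 3, because 10**3 == 1000
print(math.log10(1000))  # → 3.0
# log base 2 of 1000 is roughly 10, which is why binary search on a
# 1000-element list needs about 10 halving steps
print(math.log2(1000))   # ≈ 9.97
```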
