Understanding Time Complexity With Simple Examples


Last Updated : 22 Apr, 2024



A lot of students get confused while understanding the concept of time complexity, but in this article, we will explain it with a very simple example.
Q. Imagine a classroom of 100 students in which you gave your pen to
one person. You have to find that pen without knowing to whom you
gave it.
Here are some ways to find the pen and what the O order is.
 O(n²): You go and ask the first person in the class if he has the pen. Then, you also ask this person about each of the other 99 people in the classroom (whether each of them has that pen), and so on for every student. This is what we call O(n²).
 O(n): Going and asking each student individually is O(n).
 O(log n): Now I divide the class into two groups, then ask: “Is it on the left side, or the right side of the classroom?” Then I take that group, divide it into two, and ask again, and so on. Repeat the process till you are left with one student who has your pen. This is what we mean by O(log n).
I might need to do:
 The O(n²) search if only one student knows which student the pen is hidden with.
 The O(n) if one student had the pen and only they knew it.
 The O(log n) search if all the students knew, but would only tell me if I
guessed the right side.
The O above is called Big-O, which is an asymptotic notation. There are other asymptotic notations, like Theta and Omega.
NOTE: We are interested in the rate of growth of the running time with respect to the input size.
Is the Time Complexity of an Algorithm/Code the same as the Running/Execution Time of Code?


The Time Complexity of an algorithm/code is not equal to the actual time required to execute a particular code; rather, it counts the number of times a statement executes. We can demonstrate this by using the time command.
For example: Write code in C/C++ or any other language to find the
maximum between N numbers, where N varies from 10, 100, 1000, and
10000. For a Linux-based operating system (Fedora or Ubuntu), use the below commands:
To compile the program: gcc program.c -o program
To execute the program: time ./program
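For reference, here is a minimal sketch of such a program (an assumed implementation, since the task leaves the input open: the array is filled with arbitrary generated values); change N between runs, and compile with g++ instead of gcc since it is C++:

// Find the maximum of N numbers; vary N (10, 100, 1000, 10000)
// between runs and compare the timings reported by `time ./program`.
#include <iostream>
using namespace std;

int main()
{
    const int N = 10000; // change to 10, 100, 1000, 10000
    static int arr[N];
    for (int i = 0; i < N; i++)
        arr[i] = (i * 7919) % 10007; // arbitrary deterministic values

    int mx = arr[0];
    for (int i = 1; i < N; i++)
        if (arr[i] > mx)
            mx = arr[i];

    cout << "Maximum = " << mx << endl;
    return 0;
}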
You will get surprising results, i.e.:
 For N = 10: you may get 0.5 ms,
 For N = 10,000: you may get 0.2 ms.
 Also, you will get different timings on different machines. You may not even get the same timings on the same machine for the same code; the reason behind that is the current network load.
So, we can say that the actual time required to execute code is machine-
dependent (whether you are using Pentium 1 or Pentium 5) and also it
considers network load if your machine is in LAN/WAN.

What is meant by the Time Complexity of an Algorithm?
Now, the question arises if time complexity is not the actual time required to
execute the code, then what is it?
The answer is:
Instead of measuring actual time required in executing each statement in
the code, Time Complexity considers how many times each statement
executes.
Example 1: Consider the below simple code to print Hello World
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}

// This code is contributed by vikash36905.

Output
Hello World

Time Complexity: In the above code, “Hello World” is printed only once on the screen.
So, the time complexity is constant: O(1), i.e. a constant amount of time is required to execute the code every time, no matter which operating system or machine configuration you are using.
Auxiliary Space: O(1)
Example 2:
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i++) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

// This code is contributed by vikash36905.

Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!

Time Complexity: In the above code, “Hello World !!!” is printed n times on the screen, where n can vary.
So, the time complexity is linear: O(n), i.e. the time required to execute the code grows linearly with n.
Auxiliary Space: O(1)
Example 3:
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 1; i <= n; i = i * 2) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

// This code is contributed by Suruchi Kumari

Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!

Time Complexity: O(log2(n)). Since i doubles on every iteration, the loop body runs about log2(n) times (here i takes the values 1, 2, 4, 8, i.e. 4 iterations for n = 8, matching the 4 lines of output).


Auxiliary Space: O(1)
Example 4:
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 2; i <= n; i = pow(i, 2)) {
        cout << "Hello World !!!\n";
    }
    return 0;
}

// This code is contributed by Suruchi Kumari

Output
Hello World !!!
Hello World !!!

Time Complexity: O(log(log n)). Since i is squared on every iteration (i takes the values 2, 4, 16, ...), the loop body runs about log(log n) times (here 2 iterations for n = 8, matching the 2 lines of output).


Auxiliary Space: O(1)

How To Find The Time Complexity Of An Algorithm?
Now let us see some other examples and the process to find the time
complexity of an algorithm:
Example: Let us consider a model machine that has the following
specifications:
 Single processor
 32 bit
 Sequential execution
 1 unit time for arithmetic and logical operations
 1 unit time for assignment and return statements
Q1. Find the Sum of 2 numbers on the above machine:
For any machine, the pseudocode to add two numbers will be something like
this:
// Pseudocode: Sum(a, b) { return a + b }
#include <iostream>
using namespace std;

int sum(int a, int b)
{
    return a + b;
}

int main()
{
    int a = 5, b = 6;
    cout << sum(a, b) << endl;
    return 0;
}

// This code is contributed by akashish__

Output
11

Time Complexity:
 The above code takes 2 units of time (constant):
o one for the arithmetic operation and
o one for the return statement (as per the above conventions).
 Therefore, the total cost to perform the sum operation (Tsum) = 1 + 1 = 2
 Time Complexity = O(2) = O(1), since 2 is a constant
Auxiliary Space: O(1)
Q2. Find the sum of all elements of a list/array
The pseudocode to do so can be given as:
#include <iostream>
using namespace std;

// A -> array and
// n -> number of elements in array
int list_Sum(int A[], int n)
{
    int sum = 0;
    for (int i = 0; i <= n - 1; i++) {
        sum = sum + A[i];
    }
    return sum;
}

int main()
{
    int A[] = { 5, 6, 1, 2 };
    int n = sizeof(A) / sizeof(A[0]);
    cout << list_Sum(A, n);
    return 0;
}

// This code is contributed by akashish__

Output
14

To understand the time complexity of the above code, let’s see how much
time each statement will take:
int list_Sum(int A[], int n)
{
    int sum = 0;                // cost = 1, executed 1 time
    for (int i = 0; i < n; i++) // cost = 2, executed n+1 times (+1 for the final false condition)
        sum = sum + A[i];       // cost = 2, executed n times
    return sum;                 // cost = 1, executed 1 time
}
Therefore, the total cost to perform the sum operation is
Tsum = 1 + 2*(n+1) + 2*n + 1 = 4n + 4 = C1*n + C2 = O(n)
Therefore, the time complexity of the above code is O(n).
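For instance, for the 4-element array in the driver code above (n = 4), this cost model predicts Tsum = 4*4 + 4 = 20 units of time; doubling n to 8 gives 36 units, i.e. the cost grows roughly in proportion to n, which is exactly what O(n) expresses.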
Q3. Find the sum of all elements of a matrix
For this one, the complexity is a polynomial equation (a quadratic equation for a square matrix):
 Matrix of size n*n => Tsum = a*n² + b*n + c
 Since Tsum is of the order of n², Time Complexity = O(n²)
#include <iostream>
using namespace std;

int main()
{
    int n = 3;
    int m = 3;
    int arr[][3]
        = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
    int sum = 0;

    // Iterating over all 1-D arrays in the 2-D array
    for (int i = 0; i < n; i++) {

        // Summing all elements in the ith 1-D array
        for (int j = 0; j < m; j++) {

            // Adding the jth element of the ith row
            sum += arr[i][j];
        }
    }
    cout << sum << endl;
    return 0;
}
// contributed by akashish__

Output
43

Time Complexity: O(n*m)


The program iterates through all the elements in the 2D array using two
nested loops. The outer loop iterates n times and the inner loop iterates m
times for each iteration of the outer loop. Therefore, the time complexity of
the program is O(n*m).
Auxiliary Space: O(n*m)
The program stores the 2D array and a few integer variables. The space required for the 2D array is n*m integers, and the program also uses a single integer variable to store the sum of the elements. Therefore, the space complexity of the program is O(n*m + 1), which simplifies to O(n*m).
In conclusion, the time complexity of the program is O(n*m), and the space complexity is also O(n*m).

What is Algorithm | Introduction to Algorithms
Last Updated : 04 Jun, 2024



Definition of Algorithm
The word Algorithm means “a set of finite rules or instructions to be followed in calculations or other problem-solving operations”
Or
“a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations”.
Therefore, an Algorithm refers to a sequence of finite steps to solve a particular problem.
Use of the Algorithms:
Algorithms play a crucial role in various fields and have many applications.
Some of the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer
programming and are used to solve problems ranging from simple
sorting and searching to complex tasks such as artificial intelligence and
machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems,
such as finding the optimal solution to a system of linear equations or
finding the shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make
decisions in fields such as transportation, logistics, and resource
allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial
intelligence and machine learning, and are used to develop intelligent
systems that can perform tasks such as image recognition, natural
language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract
insights from large amounts of data in fields such as marketing, finance,
and healthcare.
These are just a few examples of the many applications of algorithms. The
use of algorithms is continually expanding as new technologies and fields
emerge, making it a vital component of modern society.
Algorithms can be simple or complex depending on what you want to achieve.
It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads the instructions and steps and executes them one by one, in the given sequence. The result thus obtained is that the new dish is cooked perfectly. Every time you use your phone, computer, laptop, or calculator, you are using algorithms. Similarly, algorithms help to do a task in programming to get the expected output.
Algorithms are designed to be language-independent, i.e. they are just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.

What is the need for algorithms?


1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster,
and easier to perform.
3. Algorithms also enable computers to perform tasks that would be
difficult or impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze
data, make predictions, and provide solutions to problems.

What are the Characteristics of an Algorithm?


Just as one would not follow arbitrary written instructions to cook a recipe, but only the standard one, not all written instructions for programming are an algorithm. For some instructions to be an algorithm, they must have the following characteristics:
 Clear and Unambiguous: The algorithm should be unambiguous. Each
of its steps should be clear in all aspects and must lead to only one
meaning.
 Well-Defined Inputs: If an algorithm takes inputs, those inputs should be well-defined. An algorithm may or may not take input.
 Well-Defined Outputs: The algorithm must clearly define what output
will be yielded and it should be well-defined as well. It should produce at
least 1 output.
 Finite-ness: The algorithm must be finite, i.e. it should terminate after a
finite time.
 Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not rely on any future technology.
 Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be
implemented in any language, and yet the output will be the same, as
expected.
 Input: An algorithm has zero or more inputs. Every instruction that contains a fundamental operator must accept zero or more inputs.
 Output: An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce at least one output.
 Definiteness: All instructions in an algorithm must be unambiguous,
precise, and easy to interpret. By referring to any of the instructions in an
algorithm one can clearly understand what is to be done. Every
fundamental operator in instruction must be defined without any
ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in
all test cases. Every instruction which contains a fundamental operator
must be terminated within a finite amount of time. Infinite loops or
recursive functions without base conditions do not possess finiteness.
 Effectiveness: An algorithm must be developed by using very basic,
simple, and feasible operations so that one can trace it out by using just
paper and pencil.
Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic means giving the same output for the same
input case.
 Every step in the algorithm must be effective i.e. every step should do
some work.

Types of Algorithms:
There are several types of algorithms available. Some important algorithms
are:
1. Brute Force Algorithm:

It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.
2. Recursive Algorithm:

A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts and the same function is called again and again.
3. Backtracking Algorithm:

The backtracking algorithm builds the solution by searching among all possible solutions. Using this algorithm, we keep building the solution according to the given criteria. Whenever a candidate solution fails, we trace back to the failure point, build the next candidate, and continue this process until we find a solution or all possible candidates have been examined.
4. Searching Algorithm:

Searching algorithms are the ones that are used for searching elements or
groups of elements from a particular data structure. They can be of different
types based on their approach or the data structure in which the element
should be found.
5. Sorting Algorithm:

Sorting is arranging a group of data in a particular manner according to the


requirement. The algorithms which help in performing this function are
called sorting algorithms. Generally sorting algorithms are used to sort
groups of data in an increasing or decreasing manner.
6. Hashing Algorithm:

Hashing algorithms work similarly to searching algorithms, but they use an index with a key ID: in hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm:

This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges the solutions to get the final solution (a binary search sketch illustrating this pattern appears after this list of types). It consists of the following three steps:
 Divide
 Solve
 Combine
8. Greedy Algorithm:

In this type of algorithm, the solution is built part by part. The solution for
the next part is built based on the immediate benefit of the next part. The
one solution that gives the most benefit will be chosen as the solution for the
next part.
9. Dynamic Programming Algorithm:

This algorithm uses the concept of using the already found solution to avoid
repetitive calculation of the same part of the problem. It divides the problem
into smaller overlapping subproblems and solves them.
10. Randomized Algorithm:

A randomized algorithm uses random numbers while making decisions during execution; the randomness helps in achieving a good expected outcome.
To learn more about the types of algorithms refer to the article about “Types
of Algorithms“.
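To make two of these types concrete, below is a minimal sketch of recursive binary search, a searching algorithm that follows the divide-and-conquer pattern (the function name and driver values are illustrative, not taken from this article); it assumes the array is already sorted:

// Recursive binary search: a searching algorithm built on divide and conquer.
// Returns the index of x in the sorted array arr[], or -1 if absent.
#include <iostream>
using namespace std;

int binarySearch(int arr[], int low, int high, int x)
{
    if (low > high)
        return -1; // x is not present
    int mid = low + (high - low) / 2;
    if (arr[mid] == x)
        return mid; // found at index mid
    if (arr[mid] > x) // Divide: discard the right half
        return binarySearch(arr, low, mid - 1, x);
    return binarySearch(arr, mid + 1, high, x); // or discard the left half
}

int main()
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << binarySearch(arr, 0, n - 1, 10); // prints 3
    return 0;
}

Each recursive call halves the search range, so the running time is O(log n).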

Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given
problem.
 In an Algorithm the problem is broken down into smaller pieces or steps
hence, it is easier for the programmer to convert it into an actual
program.

Disadvantages of Algorithms:
 Writing an algorithm takes a long time so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.

How to Design an Algorithm?


To write an algorithm, the following things are needed as a pre-requisite:
1. The problem that is to be solved by this algorithm i.e. clear problem
definition.
2. The constraints of the problem must be considered while solving the
problem.
3. The input to be taken to solve the problem.
4. The output to be expected when the problem is solved.
5. The solution to this problem, within the given constraints.
Then the algorithm is written with the help of the above parameters such
that it solves the problem.
Example: Consider the example to add three numbers and print the sum.
Step 1: Fulfilling the pre-requisites
As discussed above, to write an algorithm, its pre-requisites must be fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers
and print their sum.
2. The constraints of the problem that must be considered while solving
the problem: The numbers must contain only digits and no other
characters.
3. The input to be taken to solve the problem: The three numbers to be
added.
4. The output to be expected when the problem is solved: The sum of
the three numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution
consists of adding the 3 numbers. It can be done with the help of the ‘+’
operator, or bit-wise, or any other method.

Step 2: Designing the algorithm

Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2,
and num3 respectively.
4. Declare an integer variable sum to store the resultant sum of the 3
numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END

Step 3: Testing the algorithm by implementing it.
To test the algorithm, let’s implement it in C++.


Program:
// C++ program to add three numbers
// with the help of the above designed
// algorithm
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Variables to take the input of
    // the 3 numbers
    int num1, num2, num3;

    // Variable to store the resultant sum
    int sum;

    // Take the 3 numbers as input
    cout << "Enter the 1st number: ";
    cin >> num1;
    cout << " " << num1 << endl;

    cout << "Enter the 2nd number: ";
    cin >> num2;
    cout << " " << num2 << endl;

    cout << "Enter the 3rd number: ";
    cin >> num3;
    cout << " " << num3;

    // Calculate the sum using the + operator
    // and store it in the variable sum
    sum = num1 + num2 + num3;

    // Print the sum
    cout << "\nSum of the 3 numbers is: "
         << sum;
    return 0;
}

// This code is contributed by shivanisinghss2110

Output
Enter the 1st number: 0

Enter the 2nd number: 0

Enter the 3rd number: -1577141152

Sum of the 3 numbers is: -1577141152

Here is the step-by-step algorithm of the code:


1. Declare three variables num1, num2, and num3 to store the three
numbers to be added.
2. Declare a variable sum to store the sum of the three numbers.
3. Use the cout statement to prompt the user to enter the first number.
4. Use the cin statement to read the first number and store it in num1.
5. Use the cout statement to prompt the user to enter the second number.
6. Use the cin statement to read the second number and store it in num2.
7. Use the cout statement to prompt the user to enter the third number.
8. Use the cin statement to read and store the third number in num3.
9. Calculate the sum of the three numbers using the + operator and store it
in the sum variable.
10. Use the cout statement to print the sum of the three numbers.
11. The main function returns 0, which indicates the successful execution of
the program.
Time complexity: O(1)
Auxiliary Space: O(1)
One problem, many solutions: There may be more than one solution to a problem; that is, while implementing the algorithm, there can be more than one method to implement it. For example, in the above problem of adding 3 numbers, the sum can be calculated in many ways:
 + operator
 Bit-wise operators (see the sketch below)
 etc.

How to analyze an Algorithm?


For a standard algorithm to be good, it must be efficient. Hence the
efficiency of an algorithm must be checked and maintained. It can be in two
stages:
1. Priori Analysis:
“Priori” means “before”. Hence Priori analysis means checking the algorithm before its implementation. In this, the algorithm is checked when it is written in the form of theoretical steps. The efficiency of the algorithm is measured by assuming that all other factors, for example, processor speed, are constant and have no effect on the implementation. This is usually done by the algorithm designer. This analysis is independent of the type of hardware and the language of the compiler. It gives approximate answers for the complexity of the program.
2. Posterior Analysis:
“Posterior” means “after”. Hence Posterior analysis means checking the algorithm after its implementation. In this, the algorithm is checked by implementing it in any programming language and executing it. This analysis helps to get the actual and real analysis report about correctness (for every possible input(s), whether it shows/returns the correct output or not), space required, time consumed, etc. That is, it is dependent on the language of the compiler and the type of hardware used.

What is Algorithm complexity and how to find it?


An algorithm is defined as complex based on the amount of Space and Time
it consumes. Hence the Complexity of an algorithm refers to the measure of
the time that it will need to execute and get the expected output, and the
Space it will need to store all the data (input, temporary data, and output).
Hence these two factors define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:
 Time Factor: Time is measured by counting the number of key
operations such as comparisons in the sorting algorithm.
 Space Factor: Space is measured by counting the maximum memory
space required by the algorithm to run/execute.
Therefore the complexity of an algorithm can be divided into two types:
1. Space Complexity: The space complexity of an algorithm refers to the
amount of memory required by the algorithm to store the variables and get
the result. This can be for inputs, temporary operations, or outputs.
How to calculate Space Complexity?
The space complexity of an algorithm is calculated by determining the
following 2 components:

 Fixed Part: This refers to the space that is required by the algorithm. For
example, input variables, output variables, program size, etc.
 Variable Part: This refers to the space that can be different based on the
implementation of the algorithm. For example, temporary variables,
dynamic memory allocation, recursion stack space, etc.
Therefore, the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
Example: Consider the below algorithm for Linear Search
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in
x
Step 3: Start from the leftmost element of arr[] and one by one compare x
with each element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are 2 variables, arr[] and x, where arr[] is the variable part (n elements) and x is the fixed part. Hence S(P) = 1 + n. So, the space complexity depends on n (the number of elements). Further, the space depends on the data types of the given variables and constants, and it will be multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the
amount of time required by the algorithm to execute and get the result. This
can be for normal operations, conditional if-else statements, loop statements,
etc.
How to Calculate, Time Complexity?
The time complexity of an algorithm is also calculated by determining the
following 2 components:
 Constant time part: Any instruction that is executed just once comes in
this part. For example, input, output, if-else, switch, arithmetic
operations, etc.
 Variable Time Part: Any instruction that is executed more than once,
say n times, comes in this part. For example, loops, recursion, etc.
Therefore, the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
Example: In the algorithm of Linear Search above, the time complexity is
calculated as follows:
Step 1: Constant time
Step 2: Variable time (taking n inputs)
Step 3: Variable time (till the length of the array (n) or the index of the found element)
Step 4: Constant time
Step 5: Constant time
Step 6: Constant time
Hence, T(P) = 1 + n + n(1 + 1) + 1 = 2 + 3n, which can be said as T(n).

Definition, Types, Complexity and Examples of Algorithm
Last Updated : 16 Oct, 2023



An algorithm is a well-defined sequential computational technique that accepts a value or a collection of values as input and produces the output(s) needed to solve a problem.
Or we can say that an algorithm is said to be accurate if and only if it stops with the proper output for each input instance.
Need for Algorithms:
Algorithms are used to solve problems or automate tasks in a systematic and
efficient manner. They are a set of instructions or rules that guide the
computer or software in performing a particular task or solving a problem.
There are several reasons why we use algorithms:
 Efficiency: Algorithms can perform tasks quickly and accurately,
making them an essential tool for tasks that require a lot of calculations
or data processing.
 Consistency: Algorithms are repeatable and produce consistent results
every time they are executed. This is important when dealing with large
amounts of data or complex processes.
 Scalability: Algorithms can be scaled up to handle large datasets or
complex problems, which makes them useful for applications that
require processing large volumes of data.
 Automation: Algorithms can automate repetitive tasks, reducing the
need for human intervention and freeing up time for other tasks.
 Standardization: Algorithms can be standardized and shared among
different teams or organizations, making it easier for people to
collaborate and share knowledge.
Overall, algorithms are an essential tool for solving problems in a variety of
fields, including computer science, engineering, data analysis, finance, and
many others.
Example:
Consider a box where no one can see what’s happening inside; we call it a black box.
We give input to the box and it gives us the output we need, but the procedure that converts the input into the desired output is an ALGORITHM.
An algorithm is independent of the language used. It tells the programmer
the logic used to solve the problem. So, it is a logical step-by-step procedure
that acts as a blueprint to programmers.

Real-life examples that define the use of algorithms:


 Consider a clock. We know the clock is ticking, but how does the manufacturer set those nuts and bolts so that it keeps moving: every 60 seconds the minute hand should move, and every 60 minutes the hour hand should move? So to solve this problem, there must be an algorithm behind it.
 Seen someone cooking your favorite food for you? Is the recipe
necessary for it? Yes, it is necessary as a recipe is a sequential procedure
that turns a raw potato into a chilli potato. This is what an algorithm is:
following a procedure to get the desired output. Is the sequence
necessary to be followed? Yes, the sequence is the most important thing
that has to be followed to get what we want.

Types of Algorithms:
 Sorting algorithms: Bubble Sort, insertion sort, and many more. These
algorithms are used to sort the data in a particular format.
 Searching algorithms: Linear search, binary search, etc. These
algorithms are used in finding a value or record that the user demands.
 Graph Algorithms: It is used to find solutions to problems like finding
the shortest path between cities, and real-life problems like traveling
salesman problems.
Sorting algorithms are algorithms that take a collection of elements and
rearrange them in a specified order (e.g. ascending or descending). There are
many different sorting algorithms, each with its own strengths and weaknesses.
Some of the most commonly used sorting algorithms include:
Bubble sort: A simple sorting algorithm that repeatedly steps through the
list, compares adjacent elements and swaps them if they are in the wrong
order.
Insertion sort: A simple sorting algorithm that builds up the final sorted
array one item at a time, by comparing each new item to the items that have
already been sorted and inserting it in the correct position.
Selection sort: A simple sorting algorithm that repeatedly selects the
minimum element from the unsorted part of the array and moves it to the
end of the sorted part.
Merge sort: A divide-and-conquer sorting algorithm that works by dividing
the unsorted list into n sub-lists, sorting each sub-list, and then merging
them back into a single sorted list.
Quick sort: A divide-and-conquer sorting algorithm that works by selecting
a “pivot” element from the array and partitioning the other elements into
two sub-arrays, according to whether they are less than or greater than the
pivot. The sub-arrays are then sorted recursively.
Each of these algorithms has different time and space complexities, making
some more suitable for certain use cases than others.
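For illustration, here is a minimal sketch of bubble sort, the first algorithm described above (the driver array is just an example):

// Bubble sort: repeatedly step through the list, compare adjacent
// elements and swap them if they are in the wrong order. O(n^2) time.
#include <iostream>
using namespace std;

void bubbleSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]); // move the larger element right
}

int main()
{
    int arr[] = { 5, 1, 4, 2, 8 };
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)
        cout << arr[i] << " "; // prints 1 2 4 5 8
    return 0;
}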
Searching algorithms are algorithms that search for a particular element or
value in a data structure (such as an array or a linked list). Some of the most
commonly used searching algorithms include:
Linear search: A simple searching algorithm that iterates through every
element of a list until it finds a match.
Binary search: A searching algorithm that works by dividing a sorted list in
half repeatedly, until the desired element is found or it can be determined
that the element is not present.
Jump search: A searching algorithm that works by jumping ahead by fixed
steps in the list, until a suitable candidate is found, and then performing a
linear search in the surrounding elements.
Interpolation search: A searching algorithm that works by using
information about the range of values in the list to estimate the position of
the desired element and then verifying that it is indeed present.
Hash table search: A searching algorithm that uses a hash function to map
elements to indices in an array, and then performs constant-time lookups in
the array to find the desired element.
Each of these algorithms has different time and space complexities, making
some more suitable for certain use cases than others. The choice of which
algorithm to use depends on the specific requirements of the problem, such
as the size of the data structure, the distribution of values, and the desired
time complexity.
Graph algorithms are a set of algorithms that are used to process, analyze and
understand graph data structures. Graphs are mathematical structures used to
model relationships between objects, where the objects are represented as
vertices (or nodes) and the relationships between them are represented as edges.
Graph algorithms are used in a variety of applications such as network analysis,
social network analysis, recommendation systems, and in many other areas
where understanding the relationships between objects is important. Some of the
common graph algorithms include:
 Shortest Path algorithms (e.g. Dijkstra’s, Bellman-Ford, A*)
 Minimum Spanning Tree algorithms (e.g. Kruskal, Prim)
 Maximum Flow algorithms (e.g. Ford-Fulkerson, Edmonds-Karp)
 Network Flow algorithms (e.g. Bipartite Matching)
 Connectivity algorithms (e.g. Depth-first Search, Breadth-first Search; see the sketch below)
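As one concrete instance, here is a minimal sketch of breadth-first search, listed above under connectivity algorithms (the graph is an arbitrary example encoded as an adjacency list):

// Breadth-first search: visit vertices level by level using a queue.
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

void bfs(const vector<vector<int>>& adj, int start)
{
    vector<bool> visited(adj.size(), false);
    queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        cout << v << " ";
        for (int next : adj[v]) { // enqueue unvisited neighbours
            if (!visited[next]) {
                visited[next] = true;
                q.push(next);
            }
        }
    }
}

int main()
{
    // Undirected graph with edges 0-1, 0-2, 1-3, 2-3
    vector<vector<int>> adj = { { 1, 2 }, { 0, 3 }, { 0, 3 }, { 1, 2 } };
    bfs(adj, 0); // prints 0 1 2 3
    return 0;
}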

Why do we use algorithms?


Consider two kids, Aman and Rohan, solving the Rubik’s Cube. Aman
knows how to solve it in a definite number of steps. On the other hand,
Rohan knows that he will do it but is not aware of the procedure. Aman
solves the cube within 2 minutes whereas Rohan is still stuck and by the end
of the day, he somehow managed to solve it (might have cheated as the
procedure is necessary).
So the time required to solve with a procedure/algorithm is much more
effective than that without any procedure. Hence the need for an algorithm
is a must.
In terms of designing a solution to an IT problem, computers are fast but not
infinitely fast. The memory may be inexpensive but not free. So, computing
time is therefore a bounded resource and so is the space in memory. So we
should use these resources wisely and algorithms that are efficient in terms
of time and space will help you do so.

Creating an Algorithm:
Since the algorithm is language-independent, we write the steps to
demonstrate the logic behind the solution to be used for solving a problem.
But before writing an algorithm, keep the following points in mind:
 The algorithm should be clear and unambiguous.
 There should be 0 or more well-defined inputs in an algorithm.
 An algorithm must produce one or more well-defined outputs that are equivalent to the desired output. After a specific number of steps, the algorithm must come to a halt.
 Algorithms must stop or end after a finite number of steps.
 In an algorithm, step-by-step instructions should be supplied, and they
should be independent of any computer code.
Example: algorithm to multiply 2 numbers and print the result:
Step 1: Start
Step 2: Get the knowledge of input. Here we need 3 variables; a and b will
be the user input and c will hold the result.
Step 3: Declare a, b, c variables.
Step 4: Take input for a and b variable from the user.
Step 5: Know the problem and find the solution using operators, data
structures and logic
We need to multiply a and b variables so we use * operator and assign the
result to c.
That is c <- a * b
Step 6: Check how to give the output; here we need to print it. So write: print c
Step 7: End
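A direct implementation of these steps might look like the following sketch (the variable names mirror the steps above):

// Multiply two numbers and print the result, following the steps above.
#include <iostream>
using namespace std;

int main()
{
    int a, b, c;       // Step 3: declare the variables
    cin >> a >> b;     // Step 4: take input for a and b
    c = a * b;         // Step 5: c <- a * b
    cout << c << endl; // Step 6: print c
    return 0;
}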
Example 1: Write an algorithm to find the maximum of all the elements
present in the array.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare a variable max with the value of the first element of the
array.
Step 3: Compare max with other elements using loop.
Step 4: If max < array element value, change max to new max.
Step 5: If no element is left, return or print max otherwise goto step 3.
Step 6: End of Solution
Example 2: Write an algorithm to find the average of 3 subjects.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare and Read 3 Subject, let’s say S1, S2, S3
Step 3: Calculate the sum of all the 3 Subject values and store result in Sum
variable (Sum = S1+S2+S3)
Step 4: Divide Sum by 3 and assign it to Average variable. (Average =
Sum/3)
Step 5: Print the value of Average of 3 Subjects
Step 6: End of Solution
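Example 1 can be implemented along the same lines as the maximum-finding program shown earlier; for Example 2, a minimal sketch (the variable names mirror the steps above) is:

// Average of 3 subject marks, following the steps above.
#include <iostream>
using namespace std;

int main()
{
    double S1, S2, S3; // Step 2: read the 3 subject marks
    cin >> S1 >> S2 >> S3;
    double Sum = S1 + S2 + S3; // Step 3: Sum = S1 + S2 + S3
    double Average = Sum / 3;  // Step 4: Average = Sum / 3
    cout << Average << endl;   // Step 5: print the average
    return 0;
}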

Know about Algorithm Complexity:


Complexity in algorithms refers to the amount of resources (such as time or
memory) required to solve a problem or perform a task. The most common
measure of complexity is time complexity, which refers to the amount of
time an algorithm takes to produce a result as a function of the size of the
input. Memory complexity refers to the amount of memory used by an
algorithm. Algorithm designers strive to develop algorithms with the lowest
possible time and memory complexities, since this makes them more
efficient and scalable.
The complexity of an algorithm is a function describing the efficiency of the
algorithm in terms of the amount of data the algorithm must process.
Usually there are natural units for the domain and range of this function.
An algorithm is analyzed using Time Complexity and Space Complexity.
Writing an efficient algorithm helps to consume the minimum amount of time for processing the logic. An algorithm A is judged on the basis of two parameters for an input of size n:
 Time Complexity: Time taken by the algorithm to solve the problem. It is
measured by calculating the iteration of loops, number of comparisons
etc.
 Time complexity is a function describing the amount of time an
algorithm takes in terms of the amount of input to the algorithm.
 “Time” can mean the number of memory accesses performed, the
number of comparisons between integers, the number of times some
inner loop is executed, or some other natural unit related to the amount
of real time the algorithm will take.
 Space Complexity: Space taken by the algorithm to solve the problem. It includes space used by necessary input variables and any extra space (excluding the space taken by inputs) that is used by the algorithm. For example, if we use a hash table (a kind of data structure), we need an array to store values, so this is extra space occupied and hence counts towards the space complexity of the algorithm. This extra space is known as Auxiliary Space.
 Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm.
 Space complexity is a function describing the amount of
memory(space)an algorithm takes in terms of the amount of input to the
algorithm.
 Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as much of an issue as time.
The time complexity of the operations:
 The choice of data structure should be based on the time complexity of
the operations that will be performed.
 Time complexity is defined in terms of how long it takes to run a given algorithm, based on the length of the input.
 The time complexity of an algorithm is the amount of time it takes for each statement to complete. It is highly dependent on the size of the processed data.
 For example, if you need to perform searches frequently, you should use
a binary search tree.
The space complexity of the operations:
 The choice of data structure should be based on the space complexity of
the operations that will be performed.
 The amount of memory used by a program to execute it is represented by
its space complexity.
 Because a program requires memory to store input data and temporary values while running, the space complexity is made up of auxiliary space and input space.
 For example, if you need to store a lot of data, you should use an array.

Cases in complexity analysis:
There are two commonly studied cases of complexity in algorithms:
1. Best case complexity: The best-case scenario for an algorithm is the
scenario in which the algorithm performs the minimum amount of work
(e.g. takes the shortest amount of time, uses the least amount of memory,
etc.).
2. Worst case complexity: The worst-case scenario for an algorithm is the
scenario in which the algorithm performs the maximum amount of work
(e.g. takes the longest amount of time, uses the most amount of memory,
etc.).
In analyzing the complexity of an algorithm, it is often more informative to
study the worst-case scenario, as this gives a guaranteed upper bound on the
performance of the algorithm. Best-case scenario analysis is sometimes
performed, but is generally less important as it provides a lower bound that
is often trivial to achieve.

Advantages of Algorithms
 Easy to understand: Since it is a stepwise representation of a solution to
a given problem, it is easy to understand.
 Language Independent: It is not dependent on any programming
language, so it can easily be understood by anyone.
 Debug / Error Finding: Every step is independent / in a flow so it will
be easy to spot and fix the error.
 Sub-Problems: It is written in a flow so now the programmer can divide
the tasks which makes them easier to code.

Disadvantages of Algorithms
 Creating efficient algorithms is time-consuming and requires good
logical skills.
 It is difficult to show branching and looping in algorithms.

Asymptotic Notation and Analysis (Based on input size) in Complexity Analysis of Algorithms
Last Updated : 04 May, 2023



Asymptotic Analysis is the big idea that handles the issues (discussed below) that arise when analyzing algorithms by direct measurement. In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don’t measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.
Asymptotic notation is a way to describe the running time or space
complexity of an algorithm based on the input size. It is commonly used in
complexity analysis to describe how an algorithm performs as the size of the
input grows. The three most commonly used notations are Big O, Omega,
and Theta.
1. Big O notation (O): This notation provides an upper bound on the growth
rate of an algorithm’s running time or space usage. It represents the
worst-case scenario, i.e., the maximum amount of time or space an
algorithm may need to solve a problem. For example, if an algorithm’s
running time is O(n), then it means that the running time of the algorithm
increases linearly with the input size n or less.
2. Omega notation (Ω): This notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It represents the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is Ω(n), then it means that the running time of the algorithm increases linearly with the input size n or more.
3. Theta notation (Θ): This notation provides both an upper and lower bound on the growth rate of an algorithm’s running time or space usage. It represents the average-case scenario, i.e., the amount of time or space an algorithm typically needs to solve a problem. For example, if an algorithm’s running time is Θ(n), then it means that the running time of the algorithm increases linearly with the input size n.
In general, the choice of asymptotic notation depends on the problem and
the specific algorithm used to solve it. It is important to note that asymptotic
notation does not provide an exact running time or space usage for an
algorithm, but rather a description of how the algorithm scales with respect
to input size. It is a useful tool for comparing the efficiency of different
algorithms and for predicting how they will perform on large input sizes.

Why performance analysis?


There are many important things that should be taken care of, like user-
friendliness, modularity, security, maintainability, etc. Why worry about
performance? The answer to this is simple, we can have all the above things
only if we have performance. So performance is like currency through
which we can buy all the above things. Another reason for studying
performance is – speed is fun! To summarize, performance == scale.
Imagine a text editor that can load 1000 pages, but can spell check 1 page
per minute OR an image editor that takes 1 hour to rotate your image 90
degrees left OR … you get it. If a software feature cannot cope with the scale of tasks users need to perform, it is as good as dead.

How to study efficiency of algorithms?


The way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while recording the time spent during each execution. A simple mechanism in Java is based on use of the currentTimeMillis() method of the System class for collecting such running times. That method reports the number of milliseconds that have passed since a benchmark time known as the epoch (January 1, 1970 UTC). The key is to record the time immediately before executing the algorithm and then immediately after it:

long start = System.currentTimeMillis( ); // record the starting time
/* (run the algorithm) */
long end = System.currentTimeMillis( ); // record the ending time
long elapsed = end - start; // total time elapsed

Measuring elapsed time provides a reasonable reflection of an algorithm’s efficiency.
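Since the rest of this article uses C++, here is an equivalent measurement with std::chrono (a sketch; runAlgorithm() is a placeholder for whatever code is being measured):

// Measuring elapsed time in C++ with <chrono>.
#include <chrono>
#include <iostream>
using namespace std;

void runAlgorithm() { /* placeholder for the algorithm being measured */ }

int main()
{
    auto start = chrono::steady_clock::now(); // record the starting time
    runAlgorithm();
    auto end = chrono::steady_clock::now(); // record the ending time
    auto elapsed
        = chrono::duration_cast<chrono::milliseconds>(end - start);
    cout << elapsed.count() << " ms" << endl;
    return 0;
}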
Given two algorithms for a task, how do we find out which one is better?


One naive way of doing this is – to implement both the algorithms and run
the two programs on your computer for different inputs and see which one
takes less time. There are many problems with this approach for the analysis
of algorithms.
 It might be possible that for some inputs, the first algorithm performs
better than the second. And for some inputs second performs better.
 It might also be possible that for some inputs, the first algorithm
performs better on one machine, and the second works better on another
machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in
analyzing algorithms. In Asymptotic Analysis, we evaluate the performance
of an algorithm in terms of input size (we don’t measure the actual
running time). We calculate, how the time (or space) taken by an algorithm
increases with the input size.
For example, let us consider the search problem (searching a given item) in
a sorted array.
The solution to above search problem includes:
 Linear Search (order of growth is linear)
 Binary Search (order of growth is logarithmic).
To understand how Asymptotic Analysis solves the problems mentioned
above in analyzing algorithms,
 let us say:
o we run the Linear Search on a fast computer A and
o Binary Search on a slow computer B and
o pick the constant values for the two computers so that it tells us
exactly how long it takes for the given machine to perform the
search in seconds.
 Let’s say the constant for A is 0.2 and the constant for B is 1000 which
means that A is 5000 times more powerful than B.
 For small values of input array size n, the fast computer may take less
time.
 But, after a certain value of input array size, the Binary Search will
definitely start taking less time compared to the Linear Search even
though the Binary Search is being run on a slow machine.
Input Size | Running time on A | Running time on B
10         | 2 sec             | ~ 1 h
100        | 20 sec            | ~ 1.8 h
10^6       | ~ 55.5 h          | ~ 5.5 h
10^9       | ~ 6.3 years       | ~ 8.3 h

 The reason is the order of growth of Binary Search with respect to input
size is logarithmic while the order of growth of Linear Search is linear.
 So the machine-dependent constants can always be ignored after a
certain value of input size.
Running times for this example:
 Linear Search running time in seconds on A: 0.2 * n
 Binary Search running time in seconds on B: 1000*log(n)
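Plugging the table’s input sizes into these formulas confirms its entries (a quick check, using log base 2):
For n = 10^6: A takes 0.2 * 10^6 = 200,000 s ≈ 55.5 h; B takes 1000 * log2(10^6) ≈ 1000 * 19.93 ≈ 19,931 s ≈ 5.5 h.
For n = 10^9: A takes 0.2 * 10^9 s ≈ 6.3 years; B takes 1000 * log2(10^9) ≈ 1000 * 29.9 ≈ 29,897 s ≈ 8.3 h.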

Challenges of Experimental Analysis:


Experimental running times of two algorithms are difficult to directly
compare unless the experiments are performed in the same hardware and
software environments. Experiments can be done only on a limited set of
test inputs; hence, they leave out the running times of inputs not included in
the experiment (and these inputs may be important).
To overcome the challenges in the Experimental analysis Asymptotic
Analysis is used.

Does Asymptotic Analysis always work?


Asymptotic Analysis is not perfect, but that’s the best way available for
analyzing algorithms. For example, say there are two sorting algorithms that
take 1000nLogn and 2nLogn time respectively on a machine. Both of these
algorithms are asymptotically the same (order of growth is nLogn). So, with Asymptotic Analysis, we can’t judge which one is better, as we ignore constants in Asymptotic Analysis.
Also, in Asymptotic analysis, we always talk about input sizes larger than a
constant value. It might be possible that those large inputs are never given to
your software and an asymptotically slower algorithm always performs
better for your particular situation. So, you may end up choosing an
algorithm that is Asymptotically slower but faster for your software.

Advantages or Disadvantages:

Advantages:

1. Asymptotic analysis provides a high-level understanding of how an


algorithm performs with respect to input size.
2. It is a useful tool for comparing the efficiency of different algorithms and
selecting the best one for a specific problem.
3. It helps in predicting how an algorithm will perform on larger input
sizes, which is essential for real-world applications.
4. Asymptotic analysis is relatively easy to perform and requires only basic
mathematical skills.
Disadvantages:
1. Asymptotic analysis does not provide an accurate running time or space
usage of an algorithm.
2. It assumes that the input size is the only factor that affects an algorithm’s
performance, which is not always the case in practice.
3. Asymptotic analysis can sometimes be misleading, as two algorithms
with the same asymptotic complexity may have different actual running
times or space usage.
4. It is not always straightforward to determine the best asymptotic
complexity for an algorithm, as there may be trade-offs between time and
space complexity.
Worst, Average and Best Case Analysis of
Algorithms
Last Updated : 14 Mar, 2024



In the previous post, we discussed how Asymptotic analysis overcomes the problems of the naive way of analyzing algorithms. But let’s take an overview of asymptotic notation and learn about what the Worst, Average, and Best cases of an algorithm are:

Popular Notations in Complexity Analysis of Algorithms
1. Big-O Notation

We define an algorithm’s worst-case time complexity by using the Big-O notation, which describes the set of functions that grow slower than or at the same rate as the given expression. Furthermore, it tells us the maximum amount of time an algorithm requires, considering all input values.
2. Omega Notation

It defines the best case of an algorithm’s time complexity. The Omega notation describes the set of functions that grow faster than or at the same rate as the given expression. Furthermore, it tells us the minimum amount of time an algorithm requires, considering all input values.
3. Theta Notation

It defines the average case of an algorithm’s time complexity. The Theta notation is used when the set of functions lies in both O(expression) and Omega(expression). This is how we define the average-case time complexity of an algorithm.

Measurement of Complexity of an Algorithm


Based on the above three notations of Time Complexity there are
three cases to analyze an algorithm:
1. Worst Case Analysis (Mostly used)

In the worst-case analysis, we calculate the upper bound on the


running time of an algorithm. We must know the case that causes a
maximum number of operations to be executed. For Linear Search,
the worst case happens when the element to be searched (x) is not
present in the array. When x is not present, the search() function
compares it with all the elements of arr[] one by one. Therefore, the
worst-case time complexity of the linear search would be O(n).
2. Best Case Analysis (Very Rarely used)

In the best-case analysis, we calculate the lower bound on the


running time of an algorithm. We must know the case that causes a
minimum number of operations to be executed. In the linear search
problem, the best case occurs when x is present at the first location.
The number of operations in the best case is constant (not dependent on n). So the time complexity in the best case would be Ω(1).
3. Average Case Analysis (Rarely used)

In average case analysis, we take all possible inputs and calculate


the computing time for all of the inputs. Sum all the calculated
values and divide the sum by the total number of inputs. We must
know (or predict) the distribution of cases. For the linear search
problem, let us assume that all cases are uniformly
distributed (including the case of x not being present in the array).
So we sum all the cases and divide the sum by (n+1). Following is
the value of average-case time complexity.
Average Case Time = [θ(1) + θ(2) + ... + θ(n+1)] / (n+1) = θ((n+1)(n+2)/2) / (n+1) = θ(n)

Which Complexity analysis is generally used?


Below is the ranked mention of complexity analysis notation based
on popularity:
1. Worst Case Analysis:
Most of the time, we do worst-case analysis to analyze algorithms. In the worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is good information.
2. Average Case Analysis

The average case analysis is not easy to do in most practical cases


and it is rarely done. In the average case analysis, we must know
(or predict) the mathematical distribution of all possible inputs.
3. Best Case Analysis

The Best Case analysis is bogus. Guaranteeing a lower bound on an algorithm doesn’t provide any information, as in the worst case, an algorithm may take years to run.
Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average) are asymptotically the same, i.e., there are no separate worst and best cases.
 Example: Merge Sort does Θ(n log(n)) operations in all cases.
B) Whereas most of the other sorting algorithms have distinct worst and best cases.
 Example 1: In the typical implementation of Quick Sort (where
pivot is chosen as a corner element), the worst occurs when the
input array is already sorted and the best occurs when the pivot
elements always divide the array into two halves.
 Example 2: For insertion sort, the worst case occurs when the
array is reverse sorted and the best case occurs when the array
is sorted in the same order as output.

Examples with their complexity analysis:


1. Linear search algorithm:

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Linearly search x in arr[].
// If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

// Driver's Code
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    cout << x << " is present at index "
         << search(arr, n, x);

    return 0;
}

Output
30 is present at index 2

Time Complexity Analysis: (In Big-O notation)


 Best Case: O(1), This will take place if the element to be
searched is on the first index of the given list. So, the number of
comparisons, in this case, is 1.
 Average Case: O(n). On average, the element to be searched will be found somewhere around the middle of the list, which still takes a number of comparisons proportional to n.
 Worst Case: O(n), This will take place if:
o The element to be searched is on the last index
o The element to be searched is not present on the list
2. In this example, we will take an array of length (n) and deal with the following cases:
 If (n) is even then our output will be 0
 If (n) is odd then our output will be the sum of the elements of the
array.
Below is the implementation of the given problem:
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

int getSum(int arr[], int n)
{
    if (n % 2 == 0) // (n) is even
    {
        return 0;
    }
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr[i];
    }
    return sum; // (n) is odd
}

// Driver's Code
int main()
{
    // Declaring two arrays, one of odd length and the
    // other of even length
    int arr[4] = { 1, 2, 3, 4 };
    int a[5] = { 1, 2, 3, 4, 5 };

    // Function call
    cout << getSum(arr, 4)
         << endl; // prints 0 because (n) is even
    cout << getSum(a, 5)
         << endl; // prints the sum because (n) is odd
}
// This code is contributed by Suruchi Kumari

Output
0
15

Time Complexity Analysis:

 Best Case: The order of growth is constant, because in the best case n is even and the function returns immediately.
 Average Case: Assuming even and odd lengths are equally likely, the order of growth is linear (see the calculation below).
 Worst Case: The order of growth is linear, because in the worst case n is odd and the whole array must be summed.
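One way to see the average case, under the equal-likelihood assumption above:
Average time ≈ (1/2) * Θ(1) + (1/2) * Θ(n) = Θ(n), i.e., the order of growth is linear.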
For more details, please refer: Design and Analysis of Algorithms.
Worst, Average, and Best Case Analysis of Algorithms is a
technique used to analyze the performance of algorithms under
different conditions. Here are some advantages, disadvantages,
important points, and reference books related to this analysis
technique:
Advantages:

1. This technique allows developers to understand the performance of algorithms under different scenarios, which can help in making informed decisions about which algorithm to use for a specific task.
2. Worst case analysis provides a guarantee on the upper bound of
the running time of an algorithm, which can help in designing
reliable and efficient algorithms.
3. Average case analysis provides a more realistic estimate of the
running time of an algorithm, which can be useful in real-world
scenarios.

Disadvantages:

1. This technique can be time-consuming and requires a good understanding of the algorithm being analyzed.
2. Worst case analysis does not provide any information about the
typical running time of an algorithm, which can be a
disadvantage in real-world scenarios.
3. Average case analysis requires knowledge of the probability
distribution of input data, which may not always be available.

Important points:

1. The worst case analysis of an algorithm provides an upper bound on the running time of the algorithm for any input size.
2. The average case analysis of an algorithm provides an estimate
of the running time of the algorithm for a random input.
3. The best case analysis of an algorithm provides a lower bound
on the running time of the algorithm for any input size.
4. The big O notation is commonly used to express the worst case
running time of an algorithm.
5. Different algorithms may have different best, average, and worst
case running times.
Complete Guide On Complexity Analysis
– Data Structure and Algorithms Tutorial
Last Updated : 29 Apr, 2024

Complexity analysis is defined as a technique to characterise the time taken by an algorithm with respect to input size (independent of the machine, language and compiler). It is used to compare how execution time varies across different algorithms.
What is the need for Complexity Analysis?

 Complexity Analysis determines the amount of time and space resources required to execute an algorithm.
 It is used for comparing different algorithms on different input sizes.
 Complexity helps to determine the difficulty of a problem.
 Difficulty is often measured by how much time and space (memory) it takes to solve a particular problem.

Things to learn about Complexity Analysis

 What is Complexity Analysis?


 What is the need for Complexity Analysis?
 Asymptotic Notations
 How to measure complexity?
o 1. Time Complexity
o 2. Space Complexity
o 3. Auxiliary Space
 How does Complexity affect any algorithm?
o How to optimize the time and space complexity of an
Algorithm?
 Different types of Complexity exist in the program:
o 1. Constant Complexity
o 2. Logarithmic Complexity
o 3. Linear Complexity
o 4. Quadratic Complexity
o 5. Factorial Complexity
o 6. Exponential Complexity
 Worst Case time complexity of different data structures for
different operations
 Complexity Analysis Of Popular Algorithms
 Practice some questions on Complexity Analysis
 Practice with a quiz
 Conclusion

Asymptotic Notations in Complexity Analysis:


1. Big O Notation

Big-O notation represents the upper bound of the running time of an algorithm; therefore, it gives the worst-case complexity of an algorithm. Using big-O notation, we bound the growth of the running time from above by a constant multiple of a simpler function for all sufficiently large inputs. It is a model for quantifying algorithm performance.
Graphical Representation

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n)
≤ cg(n) for all n ≥ n0 }
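As a quick check of the definition (a made-up example, not tied to any particular algorithm): take f(n) = 3n + 2 and g(n) = n. Choosing c = 4 and n0 = 2 gives 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) = O(n).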
2. Omega Notation

Omega notation represents the lower bound of the running time of an algorithm; thus, it provides the best-case complexity of an algorithm. It captures the minimum amount of time the algorithm can possibly take: no input of size n can make it finish faster than this bound.

Graphical Representation

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n)
≤ f(n) for all n ≥ n0 }
Note: Ω (g) is a set
3. Theta Notation

Theta notation bounds a function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm. A Θ bound means the running time is sandwiched between two constant multiples of the same function for all sufficiently large inputs.

Graphical Representation

Mathematical Representation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤
c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
4. Little ο asymptotic notation

Big-O is used as a tight upper bound on the growth of an algorithm's running time (described by the function f(n)), although, as written, it can also be a loose upper bound. "Little-o" (ο()) notation is used to describe an upper bound that cannot be tight.
Graphical Representation

Mathematical Representation:
f(n) = o(g(n)) means lim_{n→∞} f(n)/g(n) = 0
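For example, f(n) = n is o(n^2), since lim_{n→∞} n/n^2 = lim_{n→∞} 1/n = 0; but n is not o(n), because that limit equals 1 rather than 0.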
5. Little ω asymptotic notation

Let f(n) and g(n) be functions that map positive integers to positive real
numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real
constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c *
g(n) ≥ 0 for every integer n ≥ n0.
Mathematical Representation:
if f(n) ∈ ω(g(n)) then lim_{n→∞} f(n)/g(n) = ∞
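For example, n^2 ∈ ω(n), since n^2/n = n → ∞ as n grows; by contrast, n ∉ ω(n), since n/n = 1 does not tend to infinity.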
Note: For most algorithms we use Big-O notation, as worst-case analysis is the most common form of complexity analysis.

How to measure complexity?


The complexity of an algorithm can be measured in three ways:
1. Time Complexity

The time complexity of an algorithm is defined as the amount of time taken by an algorithm to run as a function of the length of the input. Note that the time to run is a function of the length of the input, not the actual execution time of the machine on which the algorithm is running.
How is Time complexity computed?
To estimate the time complexity, we need to consider the cost of each
fundamental instruction and the number of times the instruction is executed.
 If we have statements with basic operations like comparisons, return statements, assignments, and reading a variable, we can assume they each take constant time, O(1).
Statement 1: int a = 5; // reading/assigning a variable
Statement 2: if (a == 5) return true; // return statement
Statement 3: int x = 4 > 5 ? 1 : 0; // comparison
Statement 4: bool flag = true; // assignment
This is the result of calculating the overall time complexity.
total time = time(statement1) + time(statement2) + ... time
(statementN)
Assuming that n is the size of the input, let’s use T(n) to represent
the overall time and t to represent the amount of time that a statement or
collection of statements takes to execute.
T(n) = t(statement1) + t(statement2) + ... + t(statementN);
Overall, T(n)= O(1), which means constant complexity.
 For any loop, we find out the runtime of the block inside them and
multiply it by the number of times the program will repeat the loop.
for (int i = 0; i < n; i++) {
    cout << "GeeksForGeeks" << endl;
}
For the above example, the loop executes n times and prints "GeeksForGeeks" n times, so the time taken to run this program is:
T(n) = n * (t(cout statement))
     = n * O(1)
     = O(n), linear complexity.
 For 2D arrays, we would have nested loop concepts, which means a loop
inside a loop.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        cout << "GeeksForGeeks" << endl;
    }
}
For the above example, the cout statement executes n*m times, printing "GeeksForGeeks" n*m times, so the time taken to run this program is:
T(n) = n * m * (t(cout statement))
     = n * m * O(1)
     = O(n*m), quadratic complexity when m is proportional to n.
2. Space Complexity :

The amount of memory required by the algorithm to solve a given


problem is called the space complexity of the algorithm. Problem-solving
using a computer requires memory to hold temporary data or final result
while the program is in execution.
How is space complexity computed?
The space Complexity of an algorithm is the total space taken by the
algorithm with respect to the input size. Space complexity includes both
Auxiliary space and space used by input.
Space complexity is a parallel concept to time complexity. If we need to create an array of size n, it will require O(n) space. If we create a two-dimensional array of size n*n, it will require O(n^2) space.
In recursive calls, stack space also counts.
Example:
int add(int n)
{
    if (n <= 0) {
        return 0;
    }
    return n + add(n - 1);
}
Here each call adds a level to the stack:
1. add(4)
2.    -> add(3)
3.       -> add(2)
4.          -> add(1)
5.             -> add(0)
Each of these calls is added to the call stack and takes up actual memory.
So it takes O(n) space.
However, just because you have n calls total doesn’t mean it takes O(n)
space.
Look at the below function:
int pairSum(int x, int y)
{
    return x + y;
}

int addSequence(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}
There will be roughly O(n) calls to pairSum. However, those calls do not exist simultaneously on the call stack, so you only need O(1) space.
3. Auxiliary Space:

The temporary space needed by an algorithm, over and above the input, is referred to as auxiliary space: temporary arrays, pointers, etc.
Auxiliary space is the preferable measure when comparing things like sorting algorithms. For example, an in-place sorting algorithm takes O(n) total space, since there is an input array to sort, but its auxiliary space is O(1).
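A small sketch of the distinction (illustrative code, not from any particular library): reversing an array in place needs only O(1) auxiliary space, while building a reversed copy needs O(n) auxiliary space, even though both use O(n) total space once the input is counted.
#include <utility>
#include <vector>
using namespace std;

// O(1) auxiliary space: only two index variables besides the input.
void reverseInPlace(vector<int>& a)
{
    int l = 0, r = (int)a.size() - 1;
    while (l < r)
        swap(a[l++], a[r--]);
}

// O(n) auxiliary space: allocates a second array of the same size.
vector<int> reversedCopy(const vector<int>& a)
{
    return vector<int>(a.rbegin(), a.rend());
}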

How does Complexity affect any algorithm?


The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input, while the space complexity quantifies the amount of space or memory taken by an algorithm to run as a function of the length of the input.
How to optimize the time and space complexity of an

Algorithm?
Optimization means modifying the brute-force approach to a problem. It is
done to derive the best possible solution to solve the problem so that it will
take less time and space complexity. We can optimize a program by either
limiting the search space at each step or occupying less search space from
the start.
We can optimize a solution using both time and space optimization. To optimize a program, we can:
1. reduce the time taken to run the program and increase the space occupied (see the sketch below);
2. reduce the memory usage of the program and increase its total run time; or
3. reduce both time and space complexity by deploying relevant algorithms.
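As an illustration of point 1 (a common textbook optimization; the helper name checkPairFast is hypothetical, chosen for this sketch): the brute-force check for a pair with a given sum runs in O(n^2) time and O(1) extra space; remembering seen elements in a hash set cuts the time to O(n) at the cost of O(n) extra space.
#include <unordered_set>
using namespace std;

// O(n) time, O(n) auxiliary space: for each element a[i],
// check whether its complement x - a[i] was already seen.
bool checkPairFast(const int a[], int n, int x)
{
    unordered_set<int> seen;
    for (int i = 0; i < n; i++) {
        if (seen.count(x - a[i]))
            return true; // a[j] + a[i] == x for some j < i
        seen.insert(a[i]);
    }
    return false;
}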

Different types of complexity that exist in a program:


1. Constant Complexity

If a function or method of the program takes the same small amount of execution time regardless of the input size, it is considered to have constant complexity.
Example: The below program takes a constant amount of time.
C
// C program for the above approach
#include <stdio.h>

// Function to check if a
// number is even or odd
void checkEvenOdd(int N)
{
    // Find remainder
    int r = N % 2;

    // Condition for even
    if (r == 0) {
        printf("Even");
    }

    // Otherwise
    else {
        printf("Odd");
    }
}

// Driver Code
int main()
{
    // Given number N
    int N = 101;

    // Function Call
    checkEvenOdd(N);

    return 0;
}

Output
Odd
Constant Complexity Graph

2. Logarithmic Complexity:

It imposes a complexity of O(log(N)): the algorithm executes on the order of log(N) steps. When performing operations on N elements, the logarithm base is usually taken as 2, since each step typically halves the remaining work.
Example: The below program takes logarithmic complexity.
C++
// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns
// the location of x in the given array arr[l..r] if
// present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;

        // If the element is present at the middle
        // itself
        if (arr[mid] == x)
            return mid;

        // If element is smaller than mid, then
        // it can only be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present
        // in right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // We reach here when element is not
    // present in array
    return -1;
}

int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}

Output
Element is present at index 3
Logarithmic Complexity Graph

3. Linear Complexity:

It imposes a complexity of O(N): the number of steps grows in direct proportion to the total number of elements N on which the operation is performed.
Example: The below program takes linear complexity.
C++
// C++ code to linearly search x in arr[]. If x
// is present then return its location, otherwise
// return -1

#include <iostream>
using namespace std;

int search(int arr[], int N, int x)
{
    int i;
    for (i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

// Driver's code
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    int result = search(arr, N, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}

Output
Element is present at index 3
Linear Complexity Graph

4. Quadratic Complexity:

It imposes a complexity of O(n^2). For an input of size N, it performs on the order of N^2 operations on the N elements to solve the given problem.
Example: The below program takes quadratic complexity.
C++
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to check if a pair with sum x exists
bool chkPair(int A[], int size, int x)
{
    for (int i = 0; i < (size - 1); i++) {
        for (int j = (i + 1); j < size; j++) {
            if (A[i] + A[j] == x) {
                return true;
            }
        }
    }
    return false;
}

// Driver code
int main()
{
    int A[] = { 0, -1, 2, -3, 1 };
    int x = -2;
    int size = sizeof(A) / sizeof(A[0]);

    if (chkPair(A, size, x)) {
        cout << "Yes" << endl;
    }
    else {
        cout << "No " << x << endl;
    }

    return 0;
}

// This code is contributed by Samim Hossain Mondal.

Output
Yes
Quadratic Complexity Graph

5. Factorial Complexity:

It imposes a complexity of O(n!). For an input of size N, it executes on the order of N! steps on the N elements to solve the given problem.
Example: The below program takes factorial complexity.
C++
// C++ program to print all
// permutations with duplicates allowed
#include <bits/stdc++.h>
using namespace std;

// Function to print permutations of string
// This function takes three parameters:
// 1. String
// 2. Starting index of the string
// 3. Ending index of the string.
void permute(string& a, int l, int r)
{
    // Base case
    if (l == r)
        cout << a << endl;
    else {
        // Permutations made
        for (int i = l; i <= r; i++) {

            // Swapping done
            swap(a[l], a[i]);

            // Recursion called
            permute(a, l + 1, r);

            // backtrack
            swap(a[l], a[i]);
        }
    }
}

// Driver Code
int main()
{
    string str = "ABC";
    int n = str.size();

    // Function call
    permute(str, 0, n - 1);
    return 0;
}

// This code is contributed by rathbhupendra

Output
ABC
ACB
BAC
BCA
CBA
CAB

Factorial Complexity Graph

6. Exponential Complexity:

It imposes a complexity of the form O(2^N) or worse (e.g., O(N!)). For N elements, the number of operations executed grows exponentially with the input size.
Example: The below program takes exponential complexity.
C++
// A recursive solution for subset sum problem
#include <iostream>
using namespace std;

// Returns true if there is a subset
// of set[] with sum equal to given sum
bool isSubsetSum(int set[], int n, int sum)
{
    // Base Cases
    if (sum == 0)
        return true;
    if (n == 0)
        return false;

    // If last element is greater than sum,
    // then ignore it
    if (set[n - 1] > sum)
        return isSubsetSum(set, n - 1, sum);

    /* else, check if sum can be obtained by any
       of the following:
       (a) including the last element
       (b) excluding the last element */
    return isSubsetSum(set, n - 1, sum)
           || isSubsetSum(set, n - 1, sum - set[n - 1]);
}

// Driver code
int main()
{
    int set[] = { 3, 34, 4, 12, 5, 2 };
    int sum = 9;
    int n = sizeof(set) / sizeof(set[0]);
    if (isSubsetSum(set, n, sum) == true)
        cout << "Found a subset with given sum";
    else
        cout << "No subset with given sum";
    return 0;
}

// This code is contributed by shivanisinghss2110

Output
Found a subset with given sum

Exponential Complexity Graph

Worst Case time complexity of different data structures for different operations

Data structure        Access     Search     Insertion   Deletion
Array                 O(1)       O(N)       O(N)        O(N)
Stack                 O(N)       O(N)       O(1)        O(1)
Queue                 O(N)       O(N)       O(1)        O(1)
Singly Linked List    O(N)       O(N)       O(N)        O(N)
Doubly Linked List    O(N)       O(N)       O(1)        O(1)
Hash Table            O(N)       O(N)       O(N)        O(N)
Binary Search Tree    O(N)       O(N)       O(N)        O(N)
AVL Tree              O(log N)   O(log N)   O(log N)    O(log N)
Binary Tree           O(N)       O(N)       O(N)        O(N)
Red Black Tree        O(log N)   O(log N)   O(log N)    O(log N)

Complexity Analysis Of Popular Algorithms:

Algorithm                        Complexity
1. Linear Search                 O(N)
2. Binary Search                 O(log N)
3. Bubble Sort                   O(N^2)
4. Insertion Sort                O(N^2)
5. Selection Sort                O(N^2)
6. Quick Sort                    O(N^2) worst case
7. Merge Sort                    O(N log N)
8. Counting Sort                 O(N)
9. Radix Sort                    O((N + b) * log_b(k))
10. Sieve of Eratosthenes        O(N * log(log N))
11. KMP Algorithm                O(N)
12. Z Algorithm                  O(M + N)
13. Rabin-Karp Algorithm         O(N * M)
14. Johnson's Algorithm          O(V^2 log V + VE)
15. Prim's Algorithm             O(V^2)
16. Kruskal's Algorithm          O(E log V)
17. 0/1 Knapsack                 O(N * W)
18. Floyd Warshall Algorithm     O(V^3)
19. Breadth First Search         O(V + E)
20. Depth First Search           O(V + E)

Practice some questions on Complexity Analysis:


 Practice Questions on Time Complexity Analysis
 Miscellaneous Problems of Time Complexity
 Sample Practice Problems on Complexity Analysis of Algorithm

Practice with a quiz:


 Top MCQs on Complexity Analysis of Algorithms with Answers
 Top MCQs on NP-Complete Complexity with Answers
 Top MCQs on Complexity Analysis using Recurrence Relations with
Answers

Conclusion:
Complexity analysis is a very important technique for analyzing any problem. Interviewers often check your ideas and coding skills by asking you to write code under restrictions on its time or space complexity. By solving more and more problems, anyone can improve their logical thinking day by day. Even in coding contests, only sufficiently optimized solutions are accepted; a naive approach can result in TLE (Time Limit Exceeded).

"The DSA course helped me a lot in clearing the interview rounds. It was
really very helpful in setting a strong foundation for my problem-solving
skills. Really a great investment, the passion Sandeep sir has towards
DSA/teaching is what made the huge difference." - Gaurav | Placed at
Amazon

Before you move on to the world of development, master the


fundamentals of DSA on which every advanced algorithm is built upon.
Choose your preferred language and start learning today:
How to Analyse Loops for Complexity
Analysis of Algorithms
Last Updated : 08 Mar, 2024



We have discussed Asymptotic Analysis, Worst, Average and Best


Cases and Asymptotic Notations in previous posts. In this post, an analysis
of iterative programs with simple examples is discussed.
The analysis of loops for the complexity analysis of algorithms involves
finding the number of operations performed by a loop as a function of the
input size. This is usually done by determining the number of iterations of
the loop and the number of operations performed in each iteration.
Here are the general steps to analyze loops for complexity analysis:
1. Determine the number of iterations of the loop. This is usually done by analyzing the loop control variables and the loop termination condition.
2. Determine the number of operations performed in each iteration of the loop. This can include both arithmetic operations and data access operations, such as array accesses or memory accesses.
3. Express the total number of operations performed by the loop as a function of the input size. This may involve using mathematical expressions or finding a closed-form expression for the number of operations performed by the loop.
4. Determine the order of growth of the expression for the number of operations performed by the loop. This can be done by using techniques such as big O notation or by finding the dominant term and ignoring lower-order terms.

Constant Time Complexity O(1):


The time complexity of a function (or set of statements) is considered O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time function, i.e., it is a set of non-recursive, non-loop statements.
In computer science, O(1) refers to constant time complexity, which means
that the running time of an algorithm remains constant and does not depend
on the size of the input. This means that the execution time of an O(1)
algorithm will always take the same amount of time regardless of the input
size. An example of an O(1) algorithm is accessing an element in an array
using an index.
Example:
 swap() function has O(1) time complexity.
 A loop or recursion that runs a constant number of times is also
considered O(1). For example, the following loop is O(1).
C++

// Here c is a positive constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

// This code is contributed by Kshitij

Linear Time Complexity O(n):


The Time Complexity of a loop is considered O(n) if the loop variables are incremented/decremented by a constant amount. For example, the following loops have O(n) time complexity. Linear time complexity, denoted as O(n), is a measure of the growth of the running time of an algorithm proportional to the size of the input. In an O(n) algorithm, the running time increases linearly with the size of the input. For example, searching for an element in an unsorted array, or iterating through an array and performing a constant amount of work for each element, would be O(n) operations. In simple words, for an input of size n, the algorithm takes n steps to complete the operation.
C++

// Here c is a positive integer constant

for (int i = 1; i <= n; i = i + c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i = i - c) {
    // some O(1) expressions
}

// This code is contributed by Kshitij

Quadratic Time Complexity O(n^c):

The time complexity is defined as an algorithm whose performance is directly proportional to the squared size of the input data; in nested loops, it equals the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity. Quadratic time complexity, denoted O(n^2), refers to an algorithm whose running time increases in proportion to the square of the size of the input: for an input of size n, the algorithm takes on the order of n * n steps. A typical O(n^2) algorithm is a nested loop that iterates over the entire input for each element, performing a constant amount of work per iteration, for n * n iterations in total.
C++

// Here c is any positive constant

for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i - 1; j > 0; j -= c) {
        // some O(1) expressions
    }
}

// This code is contributed by Kshitij

 Example: Selection Sort and Insertion Sort have O(n^2) time complexity.

Logarithmic Time Complexity O(Log n):


The time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount on each iteration. Likewise, a recursive function whose argument shrinks by a constant factor on each call has O(log n) time complexity.
C++
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}

// This code is contributed by Kshitij

C++

// Recursive function
void recurse(int n)
{
    if (n <= 0)
        return;
    else {
        // some O(1) expressions
        recurse(n / c);
    }
}

// Here c is a positive integer constant greater than 1
// This code is contributed by Kshitij

 Example: Binary Search (refer to the iterative implementation) has O(log n) time complexity.

Logarithmic Time Complexity O(Log Log n):


The time complexity of a loop is considered O(log log n) if the loop variable is increased/decreased exponentially, i.e., raised to (or rooted by) a constant power on each iteration.
C++

// Here c is a constant greater than 1

for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}

// Here fun() is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}

// This code is contributed by Kshitij

See this for mathematical details.
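In brief, for the first loop above (a sketch of the argument): starting from i = 2, after k iterations i = 2^(c^k). The loop stops once 2^(c^k) > n, i.e., once c^k > log2(n), which happens when k > log_c(log2(n)). The iteration count is therefore O(log log n).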

How to combine the time complexities of consecutive loops?

When there are consecutive loops, we calculate the time complexity as the sum of the time complexities of the individual loops. When loops are nested, we multiply instead: the number of iterations of the outer loop times the work performed in each iteration. In both situations, you need to consider the number of iterations performed by each loop and the amount of work performed in each iteration.
For example, consider the following code:
for i in range(n):
for j in range(m):
# some constant time operation
Here, the outer loop performs n iterations, and the inner loop performs m
iterations for each iteration of the outer loop. So, the total number of
iterations performed by the inner loop is n * m, and the total time
complexity is O(n * m).
In another example, consider the following code:
for i in range(n):
for j in range(i):
# some constant time operation
Here, the outer loop performs n iterations, and the inner loop performs i iterations for each iteration of the outer loop, where i is the current iteration count of the outer loop. The total number of iterations performed by the inner loop is the sum of i from i = 1 to n, which equals n * (n + 1) / 2. Hence, the total time complexity is O(n^2).
C++

// Here c is any positive constant

for (int i = 1; i <= m; i += c) {
    // some O(1) expressions
}

for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

// Time complexity of above code is O(m) + O(n) which is O(m + n)
// If m == n, the time complexity becomes O(2n) which is O(n).

// This code is contributed by Kshitij

How to calculate time complexity when there are

many if, else statements inside loops?


As discussed earlier, the worst-case time complexity is the most useful among the best, average, and worst cases, so we consider the worst case: we evaluate the situation where the values in the if-else conditions cause the maximum number of statements to be executed.
For example, consider the linear search function where we consider the case
when an element is present at the end or not present at all.
When the code is too complex to consider all if-else cases, we can get an
upper bound by ignoring if-else and other complex control statements.
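A small illustration (hypothetical fragment, with n, arr, count and total assumed declared): when a loop contains a branch, the worst-case bound charges every iteration at the cost of the more expensive branch.
// The 'if' branch costs O(1); the 'else' branch costs O(n).
// In the worst case every iteration takes the costlier branch,
// so the loop is bounded by n * O(n) = O(n^2).
for (int i = 0; i < n; i++) {
    if (arr[i] % 2 == 0)
        count++;                 // O(1) work
    else
        for (int j = 0; j < n; j++)
            total += arr[j];     // O(n) work
}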

How to calculate the time complexity of recursive

functions?
The time complexity of a recursive function can be written as a
mathematical recurrence relation. To calculate time complexity, we must
know how to solve recurrences. We will soon be discussing recurrence-
solving techniques as a separate post.
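As a preview, here is a standard example (binary search, analyzed informally): each call does a constant amount of work c and recurses on half the input, so the recurrence is T(n) = T(n/2) + c. Expanding it gives T(n) = T(n/4) + 2c = T(n/8) + 3c = ... = T(1) + c * log2(n), which is O(log n).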
Algorithms Cheat Sheet:

Algorithm         Best Case    Average Case    Worst Case
Selection Sort    O(n^2)       O(n^2)          O(n^2)
Bubble Sort       O(n)         O(n^2)          O(n^2)
Insertion Sort    O(n)         O(n^2)          O(n^2)
Tree Sort         O(n log n)   O(n log n)      O(n^2)
Radix Sort        O(dn)        O(dn)           O(dn)
Merge Sort        O(n log n)   O(n log n)      O(n log n)
Heap Sort         O(n log n)   O(n log n)      O(n log n)
Quick Sort        O(n log n)   O(n log n)      O(n^2)
Bucket Sort       O(n+k)       O(n+k)          O(n^2)
Counting Sort     O(n+k)       O(n+k)          O(n+k)

Quiz on Analysis of Algorithms


For more details, please refer: Design and Analysis of Algorithms.
Please write comments if you find anything incorrect, or you want to share
more information about the topic discussed above.

"The DSA course helped me a lot in clearing the interview rounds. It was
really very helpful in setting a strong foundation for my problem-solving
skills. Really a great investment, the passion Sandeep sir has towards
DSA/teaching is what made the huge difference." - Gaurav | Placed at
Amazon
Before you move on to the world of development, master the
fundamentals of DSA on which every advanced algorithm is built upon.
Choose your preferred language and start learning today:
