Understanding Time Complexity With Simple Examples
Last Updated : 22 Apr, 2024
Example 1:
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}
Output
Hello World
Time Complexity: In the above code, “Hello World” is printed only once on
the screen.
So, the time complexity is constant: O(1), i.e. a constant amount of time is
required to execute the code every time, no matter which operating system
or machine configuration you are using.
Auxiliary Space: O(1)
Example 2:
#include <iostream>
using namespace std;
int main()
{
int i, n = 8;
for (i = 1; i <= n; i++) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Time Complexity: In the above code, “Hello World !!!” is printed n times on
the screen, where the value of n can change.
So, the time complexity is linear: O(n), i.e. the time required to execute
the code grows linearly with n.
Auxiliary Space: O(1)
Example 3:
#include <iostream>
using namespace std;
int main()
{
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Time Complexity: here the loop variable doubles on every iteration
(i = 1, 2, 4, 8), so the body runs about log2(n) times and the complexity
is O(log n).
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    int i, n = 8;
    for (i = 2; i <= n; i = pow(i, 2)) {
        cout << "Hello World !!!\n";
    }
    return 0;
}
Output
Hello World !!!
Hello World !!!
Time Complexity: here the loop variable is squared on every iteration
(i = 2, 4, 16, ...), so the body runs about log(log(n)) times and the
complexity is O(log log n).
Now let us see some other examples and the process to find the time
complexity of an algorithm:
Example: Let us consider a model machine that has the following
specifications:
Single processor
32 bit
Sequential execution
1 unit time for arithmetic and logical operations
1 unit time for assignment and return statements
Q1. Find the Sum of 2 numbers on the above machine:
For any machine, the pseudocode to add two numbers will be something like
this:
// Pseudocode : Sum(a, b) { return a + b }
#include <iostream>
using namespace std;

int sum(int a, int b)
{
    return a + b;
}

int main() {
    int a = 5, b = 6;
    cout << sum(a, b) << endl;
    return 0;
}
Output
11
Time Complexity:
The above code will take 2 units of time (constant):
one for the arithmetic operation, and
one for the return statement (as per the above conventions).
Therefore, the total cost to perform the sum operation Tsum = 1 + 1 = 2
Time Complexity = O(2) = O(1), since 2 is a constant
Auxiliary Space: O(1)
Q2. Find the sum of all elements of a list/array
The pseudocode to do so can be given as:
#include <iostream>
using namespace std;

int list_Sum(int A[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

int main()
{
    int A[] = { 5, 6, 1, 2 };
    int n = sizeof(A) / sizeof(A[0]);
    cout << list_Sum(A, n);
    return 0;
}
Output
14
To understand the time complexity of the above code, let’s see how much
time each statement will take:
int list_Sum(int A[], int n)
{
    int sum = 0;                // cost = 1, executed 1 time
    for (int i = 0; i < n; i++) // cost = 2, executed n+1 times (+1 for the final false check)
        sum = sum + A[i];       // cost = 2, executed n times
    return sum;                 // cost = 1, executed 1 time
}
Therefore, the total cost to perform the sum operation is
Tsum = 1 + 2 * (n + 1) + 2 * n + 1 = 4n + 4 = C1 * n + C2 = O(n)
Therefore, the time complexity of the above code is O(n).
Q3. Find the sum of all elements of a matrix
For this one, the complexity is a polynomial (a quadratic for a square
matrix):
Matrix of size n*n => Tsum = a*n² + b*n + c
Since Tsum is of the order of n², Time Complexity = O(n²)
#include <iostream>
using namespace std;

int main()
{
    int n = 3;
    int m = 3;
    int arr[][3]
        = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
    int sum = 0;

    // Sum every element of the matrix with two nested loops
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            sum += arr[i][j];
        }
    }
    cout << sum << endl;
    return 0;
}
Output
43
Definition of Algorithm
The word Algorithm means "a set of finite rules or instructions to be
followed in calculations or other problem-solving operations", or
"a procedure for solving a mathematical problem in a finite number of
steps that frequently involves recursive operations".
Therefore Algorithm refers to a sequence of finite steps to solve a particular
problem.
Use of the Algorithms:
Algorithms play a crucial role in various fields and have many applications.
Some of the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer
programming and are used to solve problems ranging from simple
sorting and searching to complex tasks such as artificial intelligence and
machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems,
such as finding the optimal solution to a system of linear equations or
finding the shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make
decisions in fields such as transportation, logistics, and resource
allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial
intelligence and machine learning, and are used to develop intelligent
systems that can perform tasks such as image recognition, natural
language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract
insights from large amounts of data in fields such as marketing, finance,
and healthcare.
These are just a few examples of the many applications of algorithms. The
use of algorithms is continually expanding as new technologies and fields
emerge, making it a vital component of modern society.
Algorithms can be simple or complex depending on what you want to
achieve.
This can be understood by taking the example of cooking a new recipe. To
cook a new recipe, one reads the instructions and steps and executes them
one by one, in the given sequence, and the result is a perfectly cooked
dish. Every time you use your phone, computer, laptop, or calculator, you
are using algorithms. Similarly, algorithms help to do a task in
programming to get the expected output.
Algorithms are designed to be language-independent, i.e. they are just
plain instructions that can be implemented in any language, and yet the
output will be the same, as expected.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms
are:
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or
groups of elements from a particular data structure. They can be of different
types based on their approach or the data structure in which the element
should be found.
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. The solution for
the next part is built based on the immediate benefit of the next part. The
one solution that gives the most benefit will be chosen as the solution for the
next part.
9. Dynamic Programming Algorithm:
This algorithm uses the concept of using the already found solution to avoid
repetitive calculation of the same part of the problem. It divides the problem
into smaller overlapping subproblems and solves them.
10. Randomized Algorithm:
In this type of algorithm, random numbers are used in the logic to decide
the next step, e.g. picking a random pivot in randomized Quick Sort.
Advantages of Algorithms:
It is easy to understand.
An algorithm is a step-wise representation of a solution to a given
problem.
In an Algorithm the problem is broken down into smaller pieces or steps
hence, it is easier for the programmer to convert it into an actual
program.
Disadvantages of Algorithms:
Writing an algorithm takes a long time so it is time-consuming.
Understanding complex logic through algorithms can be very difficult.
Branching and looping statements are difficult to show in algorithms.
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2,
and num3 respectively.
4. Declare an integer variable sum to store the resultant sum of the 3
numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
#include <iostream>
using namespace std;

int main()
{
    // Variables to hold the 3 input numbers
    int num1, num2, num3;

    // Variable to store the resultant sum
    int sum;

    // Take the 3 numbers as input
    cout << "Enter the 1st number: ";
    cin >> num1;
    cout << "Enter the 2nd number: ";
    cin >> num2;
    cout << "Enter the 3rd number: ";
    cin >> num3;

    // Add the 3 numbers and print the sum
    sum = num1 + num2 + num3;
    cout << "Sum of the 3 numbers is: " << sum;
    return 0;
}
Output
Enter the 1st number: 0
1. Space Complexity: The space complexity of an algorithm is calculated
by determining the following 2 components:
Fixed Part: This refers to the space that is required by the algorithm. For
example, input variables, output variables, program size, etc.
Variable Part: This refers to the space that can be different based on the
implementation of the algorithm. For example, temporary variables,
dynamic memory allocation, recursion stack space, etc.
Therefore, the space complexity S(P) of any algorithm P is S(P) = C +
SP(I), where C is the fixed part and SP(I) is the variable part of the
algorithm, which depends on instance characteristic I.
Example: Consider the below algorithm for Linear Search
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in
x
Step 3: Start from the leftmost element of arr[] and one by one compare x
with each element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are 2 variables, arr[] and x, where arr[] is the variable part
of n elements and x is the fixed part. Hence S(P) = 1 + n. So, the space
complexity depends on n (the number of elements). The space also depends
on the data types of the given variables and constants, and will be
multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the
amount of time required by the algorithm to execute and get the result. This
can be for normal operations, conditional if-else statements, loop statements,
etc.
How to Calculate, Time Complexity?
The time complexity of an algorithm is also calculated by determining the
following 2 components:
Constant time part: Any instruction that is executed just once comes in
this part. For example, input, output, if-else, switch, arithmetic
operations, etc.
Variable Time Part: Any instruction that is executed more than once,
say n times, comes in this part. For example, loops, recursion, etc.
Therefore, the time complexity T(P) of any algorithm P
is T(P) = C + TP(I), where C is the constant time part and TP(I) is the
variable part of the algorithm, which depends on the instance
characteristic I.
Example: In the algorithm of Linear Search above, the time complexity is
calculated as follows:
Step 1: Constant time
Step 2: Variable time (taking n inputs)
Step 3: Variable time (till the length of the array (n) or the index of the
found element)
Step 4: Constant time
Step 5: Constant time
Step 6: Constant time
Hence, T(P) = 1 + n + n(1 + 1) + 1 = 2 + 3n, which can be said as T(n).
Consider a box where no one can see what's happening inside; we call it a
black box. We give input to the box and it gives us the output we need,
but the procedure behind the conversion of the input to the desired output
is an ALGORITHM.
An algorithm is independent of the language used. It tells the programmer
the logic used to solve the problem. So, it is a logical step-by-step procedure
that acts as a blueprint to programmers.
Types of Algorithms:
Sorting algorithms: Bubble Sort, insertion sort, and many more. These
algorithms are used to sort the data in a particular format.
Searching algorithms: Linear search, binary search, etc. These
algorithms are used in finding a value or record that the user demands.
Graph Algorithms: It is used to find solutions to problems like finding
the shortest path between cities, and real-life problems like traveling
salesman problems.
Sorting algorithms are algorithms that take a collection of elements and
rearrange them in a specified order (e.g. ascending or descending). There are
many different sorting algorithms, each with its own strengths and weaknesses.
Some of the most commonly used sorting algorithms include:
Bubble sort: A simple sorting algorithm that repeatedly steps through the
list, compares adjacent elements and swaps them if they are in the wrong
order.
Insertion sort: A simple sorting algorithm that builds up the final sorted
array one item at a time, by comparing each new item to the items that have
already been sorted and inserting it in the correct position.
Selection sort: A simple sorting algorithm that repeatedly selects the
minimum element from the unsorted part of the array and moves it to the
end of the sorted part.
Merge sort: A divide-and-conquer sorting algorithm that works by dividing
the unsorted list into n sub-lists, sorting each sub-list, and then merging
them back into a single sorted list.
Quick sort: A divide-and-conquer sorting algorithm that works by selecting
a “pivot” element from the array and partitioning the other elements into
two sub-arrays, according to whether they are less than or greater than the
pivot. The sub-arrays are then sorted recursively.
Each of these algorithms has different time and space complexities, making
some more suitable for certain use cases than others.
Searching algorithms are algorithms that search for a particular element or
value in a data structure (such as an array or a linked list). Some of the most
commonly used searching algorithms include:
Linear search: A simple searching algorithm that iterates through every
element of a list until it finds a match.
Binary search: A searching algorithm that works by dividing a sorted list in
half repeatedly, until the desired element is found or it can be determined
that the element is not present.
Jump search: A searching algorithm that works by jumping ahead by fixed
steps in the list, until a suitable candidate is found, and then performing a
linear search in the surrounding elements.
Interpolation search: A searching algorithm that works by using
information about the range of values in the list to estimate the position of
the desired element and then verifying that it is indeed present.
Hash table search: A searching algorithm that uses a hash function to map
elements to indices in an array, and then performs constant-time lookups in
the array to find the desired element.
Each of these algorithms has different time and space complexities, making
some more suitable for certain use cases than others. The choice of which
algorithm to use depends on the specific requirements of the problem, such
as the size of the data structure, the distribution of values, and the desired
time complexity.
Graph algorithms are a set of algorithms that are used to process, analyze and
understand graph data structures. Graphs are mathematical structures used to
model relationships between objects, where the objects are represented as
vertices (or nodes) and the relationships between them are represented as edges.
Graph algorithms are used in a variety of applications such as network analysis,
social network analysis, recommendation systems, and in many other areas
where understanding the relationships between objects is important. Some of the
common graph algorithms include:
Shortest Path algorithms (e.g. Dijkstra’s, Bellman-Ford, A*)
Minimum Spanning Tree algorithms (e.g. Kruskal, Prim)
Maximum Flow algorithms (e.g. Ford-Fulkerson, Edmonds-Karp)
Network Flow algorithms (e.g. Bipartite Matching)
Connectivity algorithms (e.g. Depth-first Search, Breadth-first Search)
Creating an Algorithm:
Since the algorithm is language-independent, we write the steps to
demonstrate the logic behind the solution to be used for solving a problem.
But before writing an algorithm, keep the following points in mind:
The algorithm should be clear and unambiguous.
There should be 0 or more well-defined inputs in an algorithm.
An algorithm must produce one or more well-defined outputs that are
equivalent to the desired output.
Algorithms must stop or end after a finite number of steps.
In an algorithm, step-by-step instructions should be supplied, and they
should be independent of any computer code.
Example: algorithm to multiply 2 numbers and print the result:
Step 1: Start
Step 2: Get the knowledge of input. Here we need 3 variables; a and b will
be the user input and c will hold the result.
Step 3: Declare a, b, c variables.
Step 4: Take input for a and b variable from the user.
Step 5: Know the problem and find the solution using operators, data
structures and logic
We need to multiply a and b variables so we use * operator and assign the
result to c.
That is c <- a * b
Step 6: Check how to give output, Here we need to print the output. So
write print c
Step 7: End
Example 1: Write an algorithm to find the maximum of all the elements
present in the array.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare a variable max with the value of the first element of the
array.
Step 3: Compare max with other elements using loop.
Step 4: If max < array element value, change max to new max.
Step 5: If no element is left, return or print max otherwise goto step 3.
Step 6: End of Solution
Example 2: Write an algorithm to find the average of 3 subjects.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare and Read 3 Subject, let’s say S1, S2, S3
Step 3: Calculate the sum of all the 3 Subject values and store result in Sum
variable (Sum = S1+S2+S3)
Step 4: Divide Sum by 3 and assign it to Average variable. (Average =
Sum/3)
Step 5: Print the value of Average of 3 Subjects
Step 6: End of Solution
Cases in complexities:
There are two commonly studied cases of complexity in algorithms:
1. Best case complexity: The best-case scenario for an algorithm is the
scenario in which the algorithm performs the minimum amount of work
(e.g. takes the shortest amount of time, uses the least amount of memory,
etc.).
2. Worst case complexity: The worst-case scenario for an algorithm is the
scenario in which the algorithm performs the maximum amount of work
(e.g. takes the longest amount of time, uses the most memory, etc.).
In analyzing the complexity of an algorithm, it is often more informative to
study the worst-case scenario, as this gives a guaranteed upper bound on the
performance of the algorithm. Best-case scenario analysis is sometimes
performed, but is generally less important as it provides a lower bound that
is often trivial to achieve.
Advantages of Algorithms
Easy to understand: Since it is a stepwise representation of a solution to
a given problem, it is easy to understand.
Language Independent: It is not dependent on any programming
language, so it can easily be understood by anyone.
Debug / Error Finding: Every step is independent / in a flow so it will
be easy to spot and fix the error.
Sub-Problems: It is written in a flow so now the programmer can divide
the tasks which makes them easier to code.
Disadvantages of Algorithms
Creating efficient algorithms is time-consuming and requires good
logical skills.
Nasty to show branching and looping in algorithms.
Asymptotic Analysis is defined as the big idea that handles the above issues
in analyzing algorithms. In Asymptotic Analysis, we evaluate the
performance of an algorithm in terms of input size (we don’t measure the
actual running time). We calculate, how the time (or space) taken by an
algorithm increases with the input size.
Asymptotic notation is a way to describe the running time or space
complexity of an algorithm based on the input size. It is commonly used in
complexity analysis to describe how an algorithm performs as the size of the
input grows. The three most commonly used notations are Big O, Omega,
and Theta.
1. Big O notation (O): This notation provides an upper bound on the growth
rate of an algorithm’s running time or space usage. It represents the
worst-case scenario, i.e., the maximum amount of time or space an
algorithm may need to solve a problem. For example, if an algorithm’s
running time is O(n), then it means that the running time of the algorithm
increases linearly with the input size n or less.
2. Omega notation (Ω): This notation provides a lower bound on the growth
rate of an algorithm’s running time or space usage. It represents the best-
case scenario, i.e., the minimum amount of time or space an algorithm
may need to solve a problem. For example, if an algorithm’s running
time is Ω(n), then it means that the running time of the algorithm
increases linearly with the input size n or more.
3. Theta notation (Θ): This notation provides both an upper and lower bound
on the growth rate of an algorithm’s running time or space usage. It
represents a tight bound on the amount of time or space an algorithm
needs to solve a problem. For example, if an algorithm’s running time is
Θ(n), then it means that the running time of the algorithm increases
linearly with the input size n.
In general, the choice of asymptotic notation depends on the problem and
the specific algorithm used to solve it. It is important to note that asymptotic
notation does not provide an exact running time or space usage for an
algorithm, but rather a description of how the algorithm scales with respect
to input size. It is a useful tool for comparing the efficiency of different
algorithms and for predicting how they will perform on large input sizes.
Running times for this example:
Linear Search running time in seconds on machine A: 0.2 * n
Binary Search running time in seconds on machine B: 1000 * log(n)
For n = 10, machine A takes 2 sec while machine B takes ~1 h.
The reason is that the order of growth of Binary Search with respect to
input size is logarithmic, while the order of growth of Linear Search is
linear. So the machine-dependent constants can always be ignored after a
certain value of input size.
1. Big-O Notation
A) For some algorithms, all the cases (worst, best, average) are
asymptotically the same, i.e., there are no separate worst and best cases.
Example: Merge Sort does Θ(n log n) operations in all cases.
B) Whereas most of the other sorting algorithms have distinct worst and
best cases.
Example 1: In the typical implementation of Quick Sort (where the
pivot is chosen as a corner element), the worst case occurs when the
input array is already sorted and the best case occurs when the pivot
elements always divide the array into two halves.
Example 2: For Insertion Sort, the worst case occurs when the
array is reverse sorted and the best case occurs when the array
is sorted in the same order as the output.
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Linearly search x in arr[];
// return its index if present, otherwise -1
int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

// Driver's Code
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    cout << x << " is present at index "
         << search(arr, n, x);
    return 0;
}
Output
30 is present at index 2
// Function that returns the sum of the elements
// only when their count n is odd, and 0 otherwise
int getSum(int arr[], int n)
{
    if (n % 2 == 0)
        return 0;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

// Driver's Code
int main()
{
    // Declaring two arrays, one of odd length
    // and the other of even length
    int arr[4] = { 1, 2, 3, 4 };
    int a[5] = { 1, 2, 3, 4, 5 };

    // Function call
    cout << getSum(arr, 4)
         << endl; // prints 0 because n is even
    cout << getSum(a, 5)
         << endl; // prints the sum because n is odd
}
// This code is contributed by Suruchi Kumari
Output
0
15
Graphical Representation
Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm. The execution time serves as both a lower and upper bound on
the algorithm’s time complexity. It exists as both, the most, and least
boundaries for a given input value.
Graphical Representation
Mathematical Representation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤
c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
4. Little o asymptotic notation
Little o gives a strict upper bound: f(n) = o(g(n)) means that for every
real constant c > 0, there exists an integer n0 ≥ 1 such that
0 ≤ f(n) < c * g(n) for every integer n ≥ n0.
Mathematical Representation:
f(n) = o(g(n)) means lim (n→∞) f(n)/g(n) = 0
5. Little ω asymptotic notation
Let f(n) and g(n) be functions that map positive integers to positive real
numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real
constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c *
g(n) ≥ 0 for every integer n ≥ n0.
Mathematical Representation:
if f(n) ∈ ω(g(n)) then,
lim (n→∞) f(n)/g(n) = ∞
Note: For most algorithms we use Big-O notation, as it describes the
worst-case complexity.
Optimization means modifying the brute-force approach to a problem. It is
done to derive the best possible solution to solve the problem so that it will
take less time and space complexity. We can optimize a program by either
limiting the search space at each step or occupying less search space from
the start.
We can optimize a solution using both time and space optimization. To
optimize a program,
1. We can reduce the time taken to run the program and increase the space
occupied;
2. we can reduce the memory usage of the program and increase its total
run time, or
3. we can reduce both time and space complexity by deploying relevant
algorithms
#include <stdio.h>

// Function to check if a
// number is even or odd
void checkEvenOdd(int N)
{
    // Find remainder
    int r = N % 2;

    // If the remainder is 0,
    // the number is even
    if (r == 0) {
        printf("Even");
    }

    // Otherwise
    else {
        printf("Odd");
    }
}
// Driver Code
int main()
{
// Given number N
int N = 101;
// Function Call
checkEvenOdd(N);
return 0;
}
Output
Odd
Constant Complexity Graph
2. Logarithmic Complexity:
#include <iostream>
using namespace std;

// Iterative binary search: returns the index of x
// in sorted arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r) {
        int m = l + (r - l) / 2;
        if (arr[m] == x)
            return m;
        if (arr[m] < x)
            l = m + 1;
        else
            r = m - 1;
    }
    return -1;
}

int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}
Output
Element is present at index 3
Logarithmic Complexity Graph
3. Linear Complexity:
#include <iostream>
using namespace std;

// Linear search: returns the index of x in arr[], or -1 if absent
int search(int arr[], int N, int x)
{
    for (int i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

// Driver's code
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    int result = search(arr, N, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}
Output
Element is present at index 3
Linear Complexity Graph
4. Quadratic Complexity:
#include <iostream>
using namespace std;

// Check whether any pair of elements in A[] adds up to x
// by trying all pairs: O(n^2)
bool hasPairWithSum(int A[], int size, int x)
{
    for (int i = 0; i < size; i++)
        for (int j = i + 1; j < size; j++)
            if (A[i] + A[j] == x)
                return true;
    return false;
}

// Driver code
int main()
{
    int A[] = { 0, -1, 2, -3, 1 };
    int x = -2;
    int size = sizeof(A) / sizeof(A[0]);
    cout << (hasPairWithSum(A, size, x) ? "Yes" : "No");
    return 0;
}
Output
Yes
Quadratic Complexity Graph
5. Factorial Complexity:
It imposes a complexity of O(n!). For N input data size, it executes the order
of N! steps on N elements to solve a given problem.
Example: The below program takes factorial complexity.
C++JavaPython3JavaScript
// C++ program to print all
// permutations with duplicates allowed
#include <bits/stdc++.h>
using namespace std;

// Function to print permutations of string a[l..r]
void permute(string& a, int l, int r)
{
    // Base case
    if (l == r)
        cout << a << endl;
    else {
        for (int i = l; i <= r; i++) {

            // Swapping done
            swap(a[l], a[i]);

            // Recursion called
            permute(a, l + 1, r);

            // backtrack
            swap(a[l], a[i]);
        }
    }
}
// Driver Code
int main()
{
string str = "ABC";
int n = str.size();
// Function call
permute(str, 0, n - 1);
return 0;
}
Output
ABC
ACB
BAC
BCA
CBA
CAB
6. Exponential Complexity:
#include <bits/stdc++.h>
using namespace std;

// Returns true if there is a subset of set[0..n-1]
// with sum equal to the given sum
bool isSubsetSum(int set[], int n, int sum)
{
    // Base Cases
    if (sum == 0)
        return true;
    if (n == 0)
        return false;

    // Either exclude the last element, or include it (if it fits)
    return isSubsetSum(set, n - 1, sum)
           || (set[n - 1] <= sum
               && isSubsetSum(set, n - 1, sum - set[n - 1]));
}
// Driver code
int main()
{
int set[] = { 3, 34, 4, 12, 5, 2 };
int sum = 9;
int n = sizeof(set) / sizeof(set[0]);
if (isSubsetSum(set, n, sum) == true)
cout <<"Found a subset with given sum";
else
cout <<"No subset with given sum";
return 0;
}
// This code is contributed by shivanisinghss2110
Output
Found a subset with given sum
Data Structure | Access | Search | Insertion | Deletion (worst case)
Singly Linked List | O(N) | O(N) | O(N) | O(N)
Doubly Linked List | O(N) | O(N) | O(1) | O(1)
Binary Search Tree | O(N) | O(N) | O(N) | O(N)
Red Black Tree | O(log N) | O(log N) | O(log N) | O(log N)
Conclusion:
Complexity analysis is a very important technique for analyzing any
problem. Interviewers often check your ideas and coding skills by asking
you to write code under restrictions on its time or space complexity. By
solving more and more problems, anyone can improve their logical thinking
day by day. Even in coding contests, only optimized solutions are accepted;
a naive approach can give TLE (Time Limit Exceeded).
"The DSA course helped me a lot in clearing the interview rounds. It was
really very helpful in setting a strong foundation for my problem-solving
skills. Really a great investment, the passion Sandeep sir has towards
DSA/teaching is what made the huge difference." - Gaurav | Placed at
Amazon
Example: Selection Sort and Insertion Sort have O(n²) time complexity.
// Recursive function
void recurse(int n)
{
    if (n <= 0)
        return;
    else {
        // some O(1) work, then recurse on a smaller input
        recurse(n / c); // c is a constant greater than 1
    }
}
How to combine the time complexities of consecutive loops?
When there are consecutive loops, we calculate time complexity as the sum
of the time complexities of the individual loops.
When loops are nested, you instead consider the number of iterations
performed by each loop and the amount of work performed in each
iteration: the total time complexity is the product of the iteration counts,
multiplied by the cost of the work done in the innermost body.
For example, consider the following code:
for i in range(n):
for j in range(m):
# some constant time operation
Here, the outer loop performs n iterations, and the inner loop performs m
iterations for each iteration of the outer loop. So, the total number of
iterations performed by the inner loop is n * m, and the total time
complexity is O(n * m).
In another example, consider the following code:
for i in range(n):
for j in range(i):
# some constant time operation
Here, the outer loop performs n iterations, and the inner loop performs i
iterations for each iteration of the outer loop, where i is the current
iteration count of the outer loop. The total number of iterations performed
by the inner loop is the sum of i over all outer iterations, roughly
sum(i) from i = 1 to n, which is equal to n * (n + 1) / 2. Hence, the total
time complexity is O(n²).
for (int i = 0; i < m; i++) {
    // some O(1) work
}
for (int j = 0; j < n; j++) {
    // some O(1) work
}
// Time complexity of above code is O(m) + O(n) which is O(m + n)
How to calculate the time complexity of recursive functions?
The time complexity of a recursive function can be written as a
mathematical recurrence relation. To calculate time complexity, we must
know how to solve recurrences. We will soon be discussing recurrence-
solving techniques as a separate post.
Algorithms Cheat Sheet:
"The DSA course helped me a lot in clearing the interview rounds. It was
really very helpful in setting a strong foundation for my problem-solving
skills. Really a great investment, the passion Sandeep sir has towards
DSA/teaching is what made the huge difference." - Gaurav | Placed at
Amazon