
DATA STRUCTURES AND ALGORITHMS

USING C

UNIT 1

Swati Jain, Assistant Professor, VSIT, VIPS - TC


ALGORITHMS

• An algorithm is a set of commands that must be followed for a computer to perform
calculations or other problem-solving operations.
• According to its formal definition, an algorithm is a finite set of instructions
carried out in a specific order to perform a particular task.
• It is not the entire program or code; it is the simple logic of a problem represented as
an informal description in the form of a flowchart or pseudocode.
• Algorithms are generally developed independently of underlying languages, which means
that an algorithm can be implemented in more than one programming language.
• Unambiguity, finiteness, effectiveness, and language independence are some of the
characteristics of an algorithm. The scalability and performance of an algorithm are
the primary factors that contribute to its importance.

ALGORITHMS

• Problem: A problem can be defined as a real-world problem or a real-world instance of a
problem for which you need to develop a program or set of instructions.
• Algorithm: An algorithm is defined as a step-by-step process designed for a problem.
• Input: After designing an algorithm, the algorithm is given the necessary and desired
inputs.
• Processing unit: The input is passed to the processing unit, which produces the
desired output.
• Output: The outcome or result of the program is referred to as the output.

CHARACTERISTICS OF AN ALGORITHM

• Unambiguity: A perfect algorithm is defined as unambiguous, which means that its
instructions should be clear and straightforward.
• Finiteness: An algorithm must be finite. Finiteness in this context means that the
algorithm should have a limited number of instructions, i.e., the instructions should
be countable.
• Effectiveness: Because each instruction in an algorithm affects the overall process,
each one should be adequate to the task.
• Language independence: An algorithm must be language-independent, which means that
its instructions can be implemented in any language and produce the same results.
• Definiteness: The steps of the algorithm must be precisely defined or unambiguously
specified.
• Generality: An algorithm must be generic enough to solve all problems of a
particular class.
• Input-Output: The algorithm must have certain initial and precise inputs, and outputs
that may be generated both at its intermediate and final steps.
ALGORITHM DESIGN TECHNIQUES IN DATA STRUCTURES
• Selecting a proper design technique for algorithms is a complex but important task.
Following are some of the main algorithm design techniques:
• Recursive algorithm
• Brute-force or exhaustive search
• Divide and Conquer
• Greedy Algorithms
• Dynamic Programming
• Branch and Bound Algorithm
• Randomized Algorithm
• Backtracking
• A given problem can be solved in various different approaches and some approaches
deliver much more efficient results than others.
• Algorithm analysis is a technique used to measure the effectiveness and performance of
the algorithms. It helps to determine the quality of an algorithm based on several
parameters such as user-friendliness, maintainability, security, space usage and usage
of other resources.
RECURSIVE ALGORITHM

• This is one of the most interesting algorithms as it calls itself with a smaller
value as input, which it obtains after solving for the current input. In simpler
words, it is an algorithm that calls itself repeatedly until the problem is solved.
• Problems such as the Tower of Hanoi or DFS of a graph can be easily solved by using
these algorithms.
• For example, here is pseudocode that finds a factorial using a recursive algorithm:
Fact(y)
If y is 0
return 1
return (y*Fact(y-1)) /* this is where the recursion happens*/

BRUTE-FORCE SEARCH

• It is a simple approach of addressing a problem that relies on huge processing power
and testing of all possibilities to find a solution.
• Suppose you forgot the combination of a 4-digit padlock and, to avoid purchasing a
new one, you have to open the lock using the brute-force search method. You will have to
try all possible 4-digit combinations from 0 to 9 to unlock it. That combination
could be anything between 0000 and 9999, hence there are 10,000 combinations. So we
can say that in the worst case, you have to try 10,000 times to find your actual
combination.
• A classic example in computer science is the traveling salesman problem (TSP).
Suppose a salesman needs to visit 10 cities across the country. How does one
determine the order in which those cities should be visited such that the total
distance traveled is minimized?
• The brute force solution is simply to calculate the total distance for every possible
route and then select the shortest one. This is not particularly efficient because it
is possible to eliminate many possible routes through clever algorithms.
• The time complexity of brute-force search is O(mn), which is sometimes written as
O(n*m). So, if we were to search for a string of "n" characters in a string of "m"
characters, the search would take on the order of n*m comparisons in the worst case.
DIVIDE AND CONQUER

• The divide and conquer algorithm works on a top-down approach and is preferred for
large problems. As the name says, divide and conquer, it follows these steps:
• Step 1: Divide the problem into several subproblems.
• Step 2: Conquer or solve each sub-problem.
• Step 3: Combine the sub-problem solutions to get the required result.
• Divide and Conquer solves each subproblem recursively, so each subproblem is a
smaller instance of the original problem.

GREEDY ALGORITHM

• A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The
algorithm makes the optimal choice at each step as it attempts to find the overall optimal way
to solve the entire problem. Greedy algorithms are quite successful in some problems, such
as Huffman encoding which is used to compress data, or Dijkstra's algorithm, which is used to
find the shortest path through a graph.
• However, in many problems, a greedy strategy does not produce an optimal solution.
• For example, consider a greedy algorithm that seeks to find the path with the largest
sum through a layered graph of numbers. It does this by selecting the largest available
number at each step. The greedy algorithm fails to find the largest sum, however,
because it makes decisions based only on the information it has at any one step,
without regard to the overall problem.
• With the goal of reaching the largest sum, at each step the greedy algorithm will choose what
appears to be the optimal immediate choice, so it will choose 12 instead of 3 at the second
step and will not reach the best solution, which contains 99.
• Greedy algorithms take all of the data in a particular problem, and then set a
rule for which elements to add to the solution at each step of the algorithm.
In this example, the set of data is all of the numbers in the graph, and the
rule was to select the largest number available at each level of the graph.
The solution that the algorithm builds is the sum of all of those choices.
DYNAMIC PROGRAMMING

• Dynamic Programming (DP) is an algorithmic technique for solving optimization problems
by breaking them into simpler sub-problems and storing each sub-solution so that the
corresponding sub-problem is solved only once.
• Dynamic Programming is a good methodology for optimization problems that seek the
maximal or minimal solution with restrictions, as it searches through all possible sub-
problems and never recomputes the solution to any sub-problem.
• It is an algorithmic strategy for breaking down an optimization problem into smaller
sub-problems and leveraging the fact that the best solution for the overall problem is
defined by the best solutions for its sub-problems.
• For example, consider the Fibonacci series, in which each number is the sum of the two
preceding numbers. Suppose the first two numbers of the series are 0, 1. If we are
asked to find the nth number of the series, we can do that as follows:
Fib(n) = Fib(n-1) + Fib(n-2), for n > 1
• Here, to solve the overall problem, i.e., Fib(n), we have to break it down into two
smaller sub-problems, i.e., Fib(n-1) and Fib(n-2). Hence, we can use Dynamic
Programming to solve the above-mentioned problem, which is elaborated in more detail
in the following figure (next slide):
DYNAMIC PROGRAMMING

Fibonacci Series using Dynamic Programming

BRANCH AND BOUND ALGORITHM

• For combinatory, discrete, and general mathematical optimization problems, branch and
bound algorithms are applied to determine the optimal solution.
• A branch and bound algorithm searches the set of all possible solutions before
recommending the best one.
• This algorithm enumerates possible candidate solutions in a stepwise manner by
exploring all possible sets of solutions.
• How? - First of all we build a rooted decision tree where the root node represents the
entire search space. Each child node is a part of the solution set and is a partial
solution.
• Based on the optimal solution, we set an upper and a lower bound for a given problem
before constructing the rooted decision tree, and we need to make a decision about
which node to include in the solution set at each level. It is very important to find
the upper and lower bounds: to find the upper bound, any local optimization method can
be used, or it can be found by picking any point in the search space and using convex
relaxation, whereas duality can be used for finding the lower bound.
RANDOMIZED ALGORITHM

• A randomized algorithm is an algorithm that uses random numbers to determine
what to do next at any point in its logic. In a standard algorithm, randomization is
usually used to reduce either the running time (time complexity) or the memory used
(space complexity).
• The algorithm works by creating a random number, 'r', from a set of numbers and
making decisions based on its value.
• Such an algorithm can assist in making a decision in a situation of doubt, much like
flipping a coin or drawing a card from a deck.
• When utilizing a randomized method, keep the following two considerations in mind:
• It takes a source of random numbers, along with the input, and makes random choices
during execution.
• The behavior of the algorithm can vary even on fixed inputs.

BACKTRACKING ALGORITHMS

• Backtracking means that if the current solution isn't working, you should go back and
attempt another option.
• It is a method for resolving issues recursively by attempting to construct a solution
incrementally, one piece at a time, discarding any solutions that do not satisfy the
problem's constraints at any point in time.
• This approach is used to resolve problems having multiple solutions. For example (next
slide), suppose we want to find all the possible ways of arranging 2 boys and 1 girl
on 3 benches, with the constraint that the girl should not be on the middle bench.
There are 3! = 6 possible arrangements to consider.
We will try all possible ways recursively to get the required solution.

BACKTRACKING ALGORITHMS

Backtracking example: the possible solutions for the above-mentioned problem.
EFFICIENCY OF ALGORITHMS

• The performance of an algorithm can be measured on the scales of time and space. The
performance of a program is the amount of computer memory and time needed to run the
program.
• We use two approaches to determine the performance of a program. One is analytical
and the other is experimental. In performance analysis we use analytical methods,
while in performance measurement we conduct experiments.
• Time Complexity: The time complexity of an algorithm or a program is a function of
the running time of the algorithm or a program. In other words, it is the amount of
computer time it needs to run to completion.
• Space Complexity: The space complexity of an algorithm or program is a function of
the space needed by the algorithm or program to run to completion.
• The time complexity of an algorithm can be computed either by an empirical or a
theoretical approach. The empirical or posteriori testing approach calls for
implementing the complete algorithms and executing them on a computer for various
instances of the problem. The times taken by the execution of the programs for various
instances of the problem are noted and compared, and the algorithm whose implementation
yields the least time is considered the best among the candidate algorithms.
ANALYSING ALGORITHMS

• The theoretical or apriori approach calls for mathematically determining the
resources, such as time and space, needed by the algorithm as a function of a parameter
related to the instances of the problem considered.
• Apriori analysis computes the efficiency of the program as a function of the total
frequency count of the statements comprising the program.
• Suppose M is an algorithm, and suppose n is the size of the input data. Clearly the
complexity f(n) of M increases as n increases. It is usually compared with the rate of
increase of some standard functions. The most common computing times are O(1),
O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n).
• Example – Consider three program segments and calculate their corresponding frequency counts.

ANALYSING ALGORITHMS

• The total frequency counts of the program segments A, B and C, given by 1, (3n+1) and
(3n^2+3n+1) respectively, are expressed as O(1), O(n) and O(n^2).
• These are referred to as the time complexities of the program segments since they are
indicative of the running times of the program segments.
• In a similar manner, the space complexity of a program can also be expressed in terms of
mathematical notation; it is nothing but the amount of memory the program requires for
its execution.

TIME COMPLEXITY - ASYMPTOTIC NOTATIONS

• By definition, time complexity is the amount of time taken by an algorithm to run, as
a function of the length of the input.
• Here, the length of input indicates the number of operations to be performed by the
algorithm.
• It does not examine the total execution time of an algorithm. Rather, it gives
information about the variation (increase or decrease) in execution time as the number
of operations in an algorithm increases or decreases.
• Yes, as the definition says, the amount of time taken is a function of the length of
input only.
• To elaborate, time complexity measures the time taken to execute each statement of
code in an algorithm.
• If a statement is set to execute repeatedly N times, then the total time is N
multiplied by the time required to run that statement each time.
TIME COMPLEXITY - ASYMPTOTIC NOTATIONS

• By now, you could have concluded that an algorithm using statements that get
executed only once will always require the same amount of time,
• and when a statement is inside a loop, the time required increases depending on
the number of times the loop is set to run.
• And, when an algorithm has a combination of single executed statements and loop
statements, or nested loop statements, the time increases proportionately, based
on the number of times each statement gets executed.
• Since an algorithm's performance may vary with different types of input data, we
usually use the worst-case time complexity of an algorithm,
because that is the maximum time taken for any input of a given size.

ASYMPTOTIC NOTATIONS – BIG OH(O)

ASYMPTOTIC NOTATIONS – OMEGA

ASYMPTOTIC NOTATIONS – THETA

ASYMPTOTIC NOTATIONS – SMALL O
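The slides for these notations are image-only; as a summary of the standard definitions behind the four headings above (f and g are functions of the input size n, and the constants c and n0 are ours):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \quad \forall n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ge 0 \quad \forall n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
f(n) = o(g(n))      \iff \forall\, c > 0\ \exists\, n_0 : 0 \le f(n) < c\,g(n) \quad \forall n \ge n_0
```

Informally: Big Oh is an upper bound, Omega a lower bound, Theta a tight bound, and small o a strict (non-tight) upper bound.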
TIME – SPACE TRADEOFF

• In computer science, a space-time or time-memory tradeoff is a way of solving a problem or
calculation in less time by using more storage space (or memory), or of solving a problem in
very little space by spending a long time. Most computers have a large amount of space, but not
infinite space. Also, most people are willing to wait a little while for a big calculation, but
not forever. So if your problem is taking a long time but not much memory, a space-time
tradeoff would let you use more memory and solve the problem more quickly. Or, if it could be
solved very quickly but requires more memory than you have, you can try to spend more time
solving the problem in the limited memory.
• A space-time tradeoff can be used with the problem of data storage. If data is stored
uncompressed, it takes more space but less time than if the data were stored compressed (since
compressing the data decreases the amount of space it takes, but it takes time to run
the compression algorithm).
• Smaller code or Loop Unrolling: Smaller code occupies less space in memory, but it incurs
extra computation time for jumping back to the beginning of the loop at the end of each
iteration. Loop unrolling can optimize execution speed at the cost of increased binary
size: the unrolled code occupies more space in memory but requires less computation time.

DATA STRUCTURE

• Data structures are a specific way of organizing data in a specialized format on a
computer so that the information can be organized, processed, stored, and retrieved
quickly and effectively.
• They are a means of handling information, rendering the data for easy use.
• Every application, piece of software, or program's foundation consists of two
components: algorithms and data.
• Data is information, and algorithms are the rules and instructions that turn the data
into something useful to programming.
• Put another way, remember these two simple equations:
• Related data + Permissible operations on the data = Data Structures
• Data structures + Algorithms = Programs
• Types of DS:
  • Abstract
  • Primitive (Simple)
  • Compound
DATA STRUCTURE

Abstract data types

Abstract data types are like user-defined data types, which define the operations on the
values using functions, without specifying what is inside the function or how the
operations are performed.

For example:

Stack ADT: Here, the stack consists of elements of the same type arranged in sequential
order. The following are the operations that can be performed on the stack:

•initialize(): This method initializes the stack to be empty.


•push(): It is a method used for inserting an element into the stack.
•pop(): It is a method used for removing the element from the stack.
•isEmpty(): This method is used to check whether the stack is empty or not.
•isfull(): It checks whether the stack is full or not.

DATA STRUCTURE

• Simple DS can be constructed with the help of primitive DS. A primitive DS is used
to represent the standard data types of any programming language. Variables, arrays,
pointers, structures, unions, etc. are examples of primitive DS.
• Compound DS can be constructed with the help of one or more primitive DS and has a
specific functionality. It can be designed by the user.
• It is further classified into two types:
• Linear DS - Linear data structures can be constructed as a continuous arrangement of
data elements in the memory. They can be constructed by using the array data type. In a
linear DS the relationship of adjacency is maintained between the data elements.
Arrays, Linked Lists, Stacks, Queues are linear DS.
• Non-Linear DS - Non-linear data structures can be constructed as a collection of randomly
distributed data items joined together by using a special pointer (tag). In a
non-linear data structure the relationship of adjacency is not maintained between the
data items. Trees and Graphs are non-linear DS.
WHY ARE DATA STRUCTURES USEFUL?

• Data structures offer many advantages to IT-related processes, especially as applications get
more complex and the amount of existing data keeps growing. Here are some reasons why data
structures are essential.
• They facilitate greater processing speeds. Large amounts of data require faster processing, and
data structures help organize the data into forms that are easier to work with and process.
• They make it easier to search for data. Data structures organize information into
workable forms that make the required searches easier to conduct.
• They are reusable. Once you implement a given data structure, it can be used anywhere. There is
no need to make a new structure. This function saves time and resources.
• They make it easy to handle multiple requests. You rarely find a single user accessing a
database. Instead, it’s common to have hundreds, if not thousands, of users searching and
interacting with a database. Data structures arrange information so that users don’t have to
search every item — they can instantly search just the required data.

WHAT IS DATA STRUCTURE AND ITS APPLICATIONS?

• Data Storage - Data structures facilitate efficient data persistence, like specifying attribute
collections and corresponding structures used in database management systems to store records.
• Data Exchange - Organized information, defined by data structures, can be shared between
applications like TCP/IP packets.
• Resource and Service Management - Data structures such as linked lists can enable core
operating system resources and services to perform functions like file directory management,
memory allocation, and process scheduling queues.
• Scalability - Big data applications rely on data structures to manage and allocate data storage
across many distributed storage locations. This function guarantees scalability and high
performance.

ARRAYS

• Arrays are defined as the collection of similar types of data items stored at contiguous memory
locations. It is one of the simplest data structures where each data element can be randomly
accessed by using its index number.
• In C programming, they are the derived data types that can store the primitive type of data
such as int, char, double, float, etc. For example, if we want to store the marks of a student
in 6 subjects, then we don't need to define a different variable for the marks in different
subjects. Instead, we can define an array that can store the marks in each subject at the
contiguous memory locations.
• Some of the properties of an array are listed as follows -
• Each element in an array is of the same data type and carries the same size.
• Elements in the array are stored at contiguous memory locations from which the first element is
stored at the smallest memory location.
• Elements of the array can be randomly accessed since we can calculate the address of each
element of the array with the given base address and the size of the data element.

ARRAYS

• Representation of an array
• We can represent an array in various ways in different programming languages. As an
illustration, consider the declaration of an array of length 10 in C:
int arr[10];
• As per the above illustration, there are the following important points -
• Index starts with 0.
• The array's length is 10, which means we can store 10 elements.
• Each element in the array can be accessed via its index.

MEMORY ALLOCATION OF AN ARRAY

• As stated above, all the data elements of an array are stored at contiguous locations in the
main memory. The name of the array represents the base address or the address of the first
element in the main memory. Each element of the array is represented by proper indexing.
• We can define the indexing of an array in the below ways -
• 0 (zero-based indexing): The first element of the array will be arr[0].
• 1 (one-based indexing): The first element of the array will be arr[1].
• n (n - based indexing): The first element of the array can reside at any random index number.

• In the above image, we have shown the memory allocation of an array arr of size 5. The
array follows a 0-based indexing approach. The base address of the array is 100 bytes;
it is the address of arr[0]. Here, the size of the data type used is 4 bytes; therefore,
each element will take 4 bytes in the memory.

HOW TO ACCESS AN ELEMENT FROM THE ARRAY?

• We require the information given below to access any random element from the array -
• the base address of the array, the size of an element in bytes, and the type of
indexing the array follows.
• The formula to calculate the address of an array element -
• Byte address of element A[i] = base address + size * (i - first index)
• Here, size represents the memory taken by the primitive data type. As an instance, int takes 2
bytes and float takes 4 bytes of memory space in C programming (exact sizes are compiler-dependent).
• Suppose an array, A[-10 ..... +2 ] having Base address (BA) = 999 and size of an element = 2
bytes, find the location of A[-1].
• L(A[-1]) = 999 + 2 x [(-1) - (-10)]
• = 999 + 18
• = 1017

INSERT AN ELEMENT INTO THE ARRAY

// C Program to insert an element at a specific position in an array
#include <stdio.h>
int main()
{
    int arr[100] = { 0 };
    int i, x, pos, n = 10;
    // initial array of size 10
    for (i = 0; i < 10; i++)
        arr[i] = i + 1;
    // print the original array
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    // element to be inserted
    x = 50;
    // position at which element is to be inserted
    pos = 5;
    // increase the size by 1
    n++;
    // shift elements forward
    for (i = n - 1; i >= pos; i--)
        arr[i] = arr[i - 1];
    // insert x at pos
    arr[pos - 1] = x;
    // print the updated array
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}
DELETE AN ELEMENT FROM THE ARRAY

// C program to delete an element in an array
#include <stdio.h>
int main()
{
    int array[100], position, c, n;
    printf("Enter the number of elements of the array : ");
    scanf("%d", &n);
    printf("\nInput the array elements : ");
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("\nEnter the position : ");
    scanf("%d", &position);
    if (position >= n + 1)
        printf("\nDeletion not possible.\n");
    else
    {
        for (c = position - 1; c < n - 1; c++)
            array[c] = array[c + 1];
        printf("\nArray after deletion : ");
        for (c = 0; c < n - 1; c++)
            printf("%d\n", array[c]);
    }
    return 0;
}
SEARCH AN ELEMENT FROM THE ARRAY

// C program to search an element in an array
#include <stdio.h>
int main()
{
    int array[100], ele, c, n;
    printf("Enter the number of elements of the array : ");
    scanf("%d", &n);
    printf("\nInput the array elements : ");
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("\nEnter element : ");
    scanf("%d", &ele);
    for (c = 0; c < n; c++)
    {
        if (array[c] == ele)
        {
            printf("\nElement found\n");
        }
    }
    return 0;
}
REVERSE THE ARRAY

#include <stdio.h>
int main()
{
    int a[100], i, j, Size, Temp;
    printf("\nPlease Enter the size : ");
    scanf("%d", &Size);
    // Inserting elements into it
    for (i = 0; i < Size; i++)
        scanf("%d", &a[i]);
    j = Size - 1;  // Assigning j to last element
    i = 0;         // Assigning i to first element
    while (i < j)
    {
        Temp = a[i];
        a[i] = a[j];
        a[j] = Temp;
        i++;
        j--;
    }
    printf("\nResult is: ");
    for (i = 0; i < Size; i++)
        printf("%d \t", a[i]);
    return 0;
}
MERGE THE ARRAY

// C program to merge two arrays (the non-standard conio.h/getch()
// of the original listing are dropped for portability)
#include <stdio.h>
int main()
{
    int arr1[50], arr2[50], size1, size2, i, k, merge[100];
    printf("Enter Array 1 Size: ");
    scanf("%d", &size1);
    printf("Enter Array 1 Elements: ");
    for (i = 0; i < size1; i++)
    {
        scanf("%d", &arr1[i]);
        merge[i] = arr1[i];
    }
    k = i;
    printf("\nEnter Array 2 Size: ");
    scanf("%d", &size2);
    printf("Enter Array 2 Elements: ");
    for (i = 0; i < size2; i++)
    {
        scanf("%d", &arr2[i]);
        merge[k] = arr2[i];
        k++;
    }
    printf("\nThe new array after merging is:\n");
    for (i = 0; i < k; i++)
        printf("%d ", merge[i]);
    return 0;
}
2D ARRAYS

• A 2D array can be defined as an array of arrays. The 2D array is organized as a matrix,
which can be represented as a collection of rows and columns.
• 2D arrays are often created to implement a relational-database-like data structure. They
provide ease of holding a bulk of data at once, which can be passed to any number of
functions wherever required.
• The syntax of declaring a two dimensional array is very similar to that of a one
dimensional array, given as follows.
• int arr[max_rows][max_columns];
• int arr[max_rows][max_columns];
• The image shows the two dimensional array, the
elements are organized in the form of rows and
columns.
• First element of the first row is represented by a[0][0]
where the number shown in the first index is the
number of that row while the number shown in the
second index is the number of the column.

HOW DO WE ACCESS DATA IN A 2D ARRAY

• The elements of 2D arrays can be randomly accessed. Similar to one dimensional arrays, we
can access the individual cells in a 2D array by using the indices of the cells. There are
two indices attached to a particular cell: one is its row number while the other is its
column number.
• We can store the value held in any particular cell of a 2D array in some variable x by
using the following syntax.
• int x = a[i][j]; //where i and j are the row and column number of the cell respectively.
• We can assign each cell of a 2D array to 0 by using the following code:
• for ( int i=0; i<n ;i++)
• {
• for (int j=0; j<n; j++)
• {
• a[i][j] = 0;
• }
• }
• The syntax to declare and initialize the 2D array is given as follows.
• int arr[2][2] = {0,1,2,3};
2D ARRAY

MEMORY LAYOUT OF 2D ARRAY

CALCULATING ADDRESS OF ELEMENTS IN 2D ARRAY
STORING USER'S DATA INTO A 2D ARRAY AND PRINTING IT.

#include <stdio.h>

int main()
{
    int arr[3][3], i, j;

    for (i = 0; i < 3; i++)
    {
        for (j = 0; j < 3; j++)
        {
            printf("Enter a[%d][%d]: ", i, j);
            scanf("%d", &arr[i][j]);
        }
    }

    printf("\n printing the elements ....\n");
    for (i = 0; i < 3; i++)
    {
        printf("\n");
        for (j = 0; j < 3; j++)
            printf("%d\t", arr[i][j]);
    }
    return 0;
}
MULTIDIMENSIONAL ARRAYS

• A multidimensional array is an array with more than two dimensions. In a matrix, the two
dimensions are represented by rows and columns.
• Multidimensional arrays are an extension of 2-D matrices and use additional subscripts for
indexing. A 3-D array, for example, uses three subscripts. The first two are just like a matrix,
but the third dimension represents pages or sheets of elements.
• Initializing Three-Dimensional Array:
• Initialization in a Three-Dimensional array is the same as that of Two-dimensional arrays.
The difference is as the number of dimensions increases so the number of nested braces will
also increase.
• Three dimensional array: int three_d[3][3][3];
• N dimensional array: int nd_array[3][3][3][3]….[3]

ACCESSING ELEMENTS IN THREE-DIMENSIONAL ARRAYS:

#include <stdio.h>

int main()
{
    // initializing the 3-dimensional array
    int x[2][3][2] =
    { { {0,1}, {2,3}, {4,5} },
      { {6,7}, {8,9}, {10,11} } };

    // output each element's value
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 2; ++k)
                printf("Element at x[%d][%d][%d] = %d\n", i, j, k, x[i][j][k]);
    return 0;
}
SPARSE MATRIX

SPARSE MATRIX – ARRAY REPRESENTATION
// C program for Sparse Matrix Representation
// using Triplets
#include<stdio.h>

int main()
{
    int S[10][10], T[3][100], m, n, k = 0;

    // size of matrix
    printf("Enter number of rows in the matrix : ");
    scanf("%d", &m);
    printf("Enter number of columns in the matrix : ");
    scanf("%d", &n);

    // read elements of matrix
    printf("Enter elements in the matrix : ");
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            scanf("%d", &S[i][j]);

    // print original matrix
    printf("The matrix is \n");
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++)
            printf(" %d ", S[i][j]);
        printf("\n");
    }

    // store each non-zero element as a (row, column, value) triplet
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            if (S[i][j] != 0) {
                T[0][k] = i;
                T[1][k] = j;
                T[2][k] = S[i][j];
                k++;
            }

    // print the triplet representation
    printf("The triplet representation is \n");
    for (int p = 0; p < k; p++)
        printf("%d %d %d\n", T[0][p], T[1][p], T[2][p]);

    return 0;
}
SPARSE MATRIX – LINKED LIST REPRESENTATION

// C program for Sparse Matrix Representation
// using Linked Lists
#include<stdio.h>
#include<stdlib.h>

// Node to represent sparse matrix
struct Node
{
    int value;
    int row_position;
    int column_position;
    struct Node *next;
};

// Function to append a new node for a non-zero element
void create_new_node(struct Node** start, int non_zero_element,
                     int row_index, int column_index)
{
    struct Node *temp, *r;
    temp = *start;
    if (temp == NULL)
    {
        // Create new node dynamically
        temp = (struct Node *) malloc(sizeof(struct Node));
        temp->value = non_zero_element;
        temp->row_position = row_index;
        temp->column_position = column_index;
        temp->next = NULL;
        *start = temp;
    }
    else
    {
        while (temp->next != NULL)
            temp = temp->next;

        // Create new node dynamically
        r = (struct Node *) malloc(sizeof(struct Node));
        r->value = non_zero_element;
        r->row_position = row_index;
        r->column_position = column_index;
        r->next = NULL;
        temp->next = r;
    }
}
SPARSE MATRIX – LIST OF LIST REPRESENTATION

LINEAR SEARCH

#include <stdio.h>

int main()
{
    int array[100], search, c, n;

    printf("Enter number of elements in array\n");
    scanf("%d", &n);

    printf("Enter %d integer(s)\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter a number to search\n");
    scanf("%d", &search);

    for (c = 0; c < n; c++)
    {
        if (array[c] == search) /* If required element is found */
        {
            printf("%d is present at location %d.\n", search, c+1);
            break;
        }
    }
    if (c == n)
        printf("%d is not present in the array.\n", search);

    return 0;
}
LINEAR SEARCH C PROGRAM FOR MULTIPLE OCCURRENCES

#include <stdio.h>

int main()
{
    int array[100], search, c, n, count = 0;

    printf("Enter number of elements in array\n");
    scanf("%d", &n);

    printf("Enter %d numbers\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter a number to search\n");
    scanf("%d", &search);

    for (c = 0; c < n; c++) {
        if (array[c] == search) {
            printf("%d is present at location %d.\n", search, c+1);
            count++;
        }
    }
    if (count == 0)
        printf("%d is not present in the array.\n", search);
    else
        printf("%d is present %d time(s) in the array.\n", search, count);

    return 0;
}
BINARY SEARCH

• Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search
algorithm works on the principle of divide and conquer.
• For this algorithm to work properly, the data collection should be in the sorted form.
• Steps –
• Step 1: Read the element to be searched from the user.
• Step 2: Find the middle element in the sorted list.
• Step 3: Compare, the search element with the middle element in the sorted list.
• Step 4: If both are matched, then display "Given element found!" and terminate the search
process.
• Step 5: If both are not matched, then check whether the search element is smaller or larger
than middle element.
• Step 6: If the search element is smaller than middle element, then repeat steps 2, 3, 4 and
5 for the left sub-list of the middle element.
• Step 7: If the search element is larger than middle element, then repeat steps 2, 3, 4 and
5 for the right sub-list of the middle element.
• Step 8: Repeat the same process until we find the search element in the list or until the sub-list contains only one element.
• Step 9: If that element also doesn't match with the search element, then display "Element not found in the list!" and terminate the process.
BINARY SEARCH PROGRAM

Binary Search Complexity

Time Complexities
• Best case complexity: O(1)
• Average case complexity: O(log n)
• Worst case complexity: O(log n)

Space Complexity
• The space complexity of the binary search is O(1).
BINARY SEARCH

#include <stdio.h>

int binarySearch(int array[], int x, int low, int high) {
    // Repeat until the pointers low and high meet each other
    while (low <= high) {
        int mid = low + (high - low) / 2;

        if (array[mid] == x)
            return mid;

        if (array[mid] < x)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1; // element not found
}
SORTING

SELECTION SORT
• Selection sort is a simple and efficient sorting algorithm that works by repeatedly selecting the
smallest (or largest) element from the unsorted portion of the list and moving it to the sorted
portion of the list.
• The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion of the list and swaps it with the first element of the unsorted part. This process is repeated for the remaining unsorted portion until the entire list is sorted.
• How does Selection Sort Algorithm work?
• Let's consider the following array as an example:
• arr[] = {64, 25, 12, 22, 11}

First Pass:
• For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. The first position is where 64 is stored presently; after traversing the whole array it is clear that 11 is the lowest value.
• Thus, swap 64 with 11. After one iteration 11, which happens to be the least value in the array, appears in the first position of the sorted list.
SELECTION SORT
Second Pass:
• For the second position, where 25 is present, again traverse the rest of the array in a sequential manner.
• After traversing, we find that 12 is the second lowest value in the array and it should appear at the second place in the array; thus swap these values.

Third Pass:
• Now, for the third place, where 25 is present, again traverse the rest of the array and find the third least value present in the array.
• While traversing, 22 comes out to be the third least value and it should appear at the third place in the array; thus swap 22 with the element present at the third position.
SELECTION SORT
Fourth Pass:
• Similarly, for the fourth position, traverse the rest of the array and find the fourth least element in the array.
• As 25 is the 4th lowest value, it will be placed at the fourth position.

Fifth Pass:
• At last, the largest value present in the array automatically gets placed at the last position in the array.
• The resulting array is the sorted array.
SELECTION SORT
// C program for implementation of selection sort
#include <stdio.h>

void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void selectionSort(int arr[], int n)
{
    int i, j, min_idx;

    // One by one move boundary of unsorted subarray
    for (i = 0; i < n-1; i++)
    {
        // Find the minimum element in unsorted array
        min_idx = i;
        for (j = i+1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first
        // element of the unsorted subarray
        swap(&arr[min_idx], &arr[i]);
    }
}
BUBBLE SORT
• Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent
elements if they are in the wrong order. This algorithm is not suitable for large data sets as
its average and worst-case time complexity is quite high.

• Bubble Sort Algorithm:
• Traverse from the left and compare adjacent elements; the higher one is placed on the right side.
• In this way, the largest element is moved to the rightmost end first.
• This process is then continued to find the second largest and place it, and so on, until the data is sorted.
BUBBLE SORT
• How does Bubble Sort Work?
• Let us understand the working of bubble sort with the help of the following illustration:
• Input: arr[] = {6, 3, 0, 5}

• First Pass:

• The largest element is placed in its correct position, i.e., the end of the array.
BUBBLE SORT
• Second Pass:
• Place the second largest element at correct position

• Third Pass:
• Place the remaining two elements at their correct positions.
• Total no. of passes: n-1
• Total no. of comparisons: n*(n-1)/2

BUBBLE SORT
// Optimized implementation of Bubble sort
#include <stdbool.h>
#include <stdio.h>

void swap(int* xp, int* yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

// An optimized version of Bubble Sort
void bubbleSort(int arr[], int n)
{
    int i, j;
    bool swapped;
    for (i = 0; i < n - 1; i++) {
        swapped = false;
        for (j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(&arr[j], &arr[j + 1]);
                swapped = true;
            }
        }

        // If no two elements were swapped by the inner loop,
        // the array is already sorted
        if (swapped == false)
            break;
    }
}
INSERTION SORT
• Insertion sort is a simple sorting algorithm that works similar to the way you sort
playing cards in your hands.
• The array is virtually split into a sorted and an unsorted part.
• Values from the unsorted part are picked and placed at the correct position in the
sorted part.
• Insertion Sort Algorithm:
• To sort an array of size N in ascending order iterate over the array and compare the
current element (key) to its predecessor, if the key element is smaller than its
predecessor, compare it to the elements before. Move the greater elements one position
up to make space for the swapped element.

INSERTION SORT
// C program for insertion sort
#include <stdio.h>

/* Function to sort an array using insertion sort */
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
MERGE SORT
• Merge sort is defined as a sorting algorithm that works by dividing an array into smaller
subarrays, sorting each subarray, and then merging the sorted subarrays back together to form the
final sorted array.
• In simple terms, we can say that the process of merge sort is to divide the array into two halves,
sort each half, and then merge the sorted halves back together. This process is repeated until
the entire array is sorted.
• How does Merge Sort work?
• Merge sort is a recursive algorithm that continuously splits the
array in half until it cannot be further divided i.e., the array has only
one element left (an array with one element is always sorted).
• Then the sorted subarrays are merged into one sorted array.

MERGE SORT
Illustration:
Let's consider an array arr[] = {38, 27, 43, 10}
• Initially divide the array into two equal halves:
• These subarrays are further divided into two halves. Now they become arrays of unit length that can no longer be divided, and arrays of unit length are always sorted.
MERGE SORT PSEUDOCODE
procedure mergesort( var a as array )
   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c
end procedure
MERGE SORT
// C program for Merge Sort
#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].
// First subarray is arr[l..m], second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temp arrays
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        }
        else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[], if any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[], if any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}
HASHING

• There are several searching techniques like linear search, binary search, search trees etc.
• In these techniques, the time taken to search any particular element depends on the total number of elements. For example, Linear Search takes O(n) time and Binary Search takes O(log n) time to perform the search over a list consisting of n elements.
• The main drawback of these techniques is:
• As the number of elements increases, the time taken to perform the search also increases.
• This becomes problematic when the total number of elements becomes too large.
HASHING

• In data structures, Hashing is a well-known technique to search for any particular element among several elements.
• It minimizes the number of comparisons while performing the search.
• Advantage:
• Unlike other searching techniques, Hashing is extremely efficient.
• The time taken by it to perform the search does not depend upon the total number of elements.
• It completes the search with constant time complexity O(1).
NEED FOR HASH DATA STRUCTURE

• Every day, the data on the internet is increasing multifold, and it is always a struggle to store this data efficiently. In day-to-day programming, this amount of data might not be that big, but still, it needs to be stored, accessed, and processed easily and efficiently. A very common data structure that is used for such a purpose is the Array data structure.
• Now the question arises: if the Array was already there, what was the need for a new data structure? The answer is in the word "efficiency". Though storing in an Array takes O(1) time, searching in it takes at least O(log n) time (and O(n) time if the array is unsorted). This time appears to be small, but for a large data set it can cause a lot of problems, and this, in turn, makes the Array data structure inefficient.
• So we are looking for a data structure that can store the data and search in it in constant time, i.e. in O(1) time. This is how the Hashing data structure came into play. With the introduction of the Hash data structure, it is now possible to easily store data in constant time and retrieve it in constant time as well.
COMPONENTS OF HASHING

• There are majorly three components of hashing:
• Key: A key can be any string or integer which is fed as input to the hash function, the technique that determines an index or location for storage of an item in a data structure.
• Hash Function: The hash function receives the input key and returns the index of an element in an array called a hash table. The index is known as the hash index.
• Hash Table: A hash table is a data structure that maps keys to values using a special function called a hash function. Hash stores the data in an associative manner in an array where each data value has its own unique index.
WHAT IS A HASH FUNCTION?

• The hash function creates a mapping between key and value, this is done
through the use of mathematical formulas known as hash functions.
• The result of the hash function is referred to as a hash value or hash.
The hash value is a representation of the original string of characters
but usually smaller than the original.
• For example: Consider an array as a Map where the key is the index and
the value is the value at that index. So for an array A if we have
index i which will be treated as the key then we can find the value by
simply looking at the value at A[i].

TYPES OF HASH FUNCTIONS:

• Types:
• Division Method
• Mid Square Method
• Folding Method
• Multiplication Method
• A good hash function should have the following properties:
• Efficiently computable.
• Should uniformly distribute the keys (each table position should be equally likely for each key).
• Should minimize collisions.
• Should have a low load factor (number of items in the table divided by the size of the table).
• The complexity of calculating the hash value using the hash function should be low.
HASH FUNCTION – DIVISION METHOD
HASH FUNCTION – MID SQUARE METHOD
HASH FUNCTION – DIGIT FOLDING METHOD
HASH FUNCTION – MULTIPLICATION METHOD
COLLISIONS

COLLISION RESOLUTION – SEPARATE CHAINING
Example: Let us consider a simple hash function as “key mod 7” and a sequence of keys as 50, 700, 76, 85, 92, 73,
101

COLLISION RESOLUTION – OPEN ADDRESSING

COLLISION RESOLUTION – DIFFERENT WAYS OF OPEN ADDRESSING
OPEN ADDRESSING- LINEAR PROBING
Let us consider a simple hash function as "key mod 7" and a sequence of keys 50, 700, 76, 85, 92, 73, 101, which means hash(key) = key % S, where S = size of the table = 7, indexed from 0 to 6.

We can define the hash function as per our choice when we create a hash table, although internally it is fixed with a pre-defined formula.
OPEN ADDRESSING- LINEAR PROBING
Example: Let us consider a simple hash function as “key mod 5” and a sequence of keys that are
to be inserted are 50, 70, 76, 93.

Step 1 Step 2 Step 3 Step 4 Step 5

OPEN ADDRESSING- QUADRATIC PROBING
Example: Let us consider table size = 7, hash function Hash(x) = x % 7, and collision resolution strategy f(i) = i². Insert 22, 30, and 50.

Step 1 Step 2 Step 3
OPEN ADDRESSING- DOUBLE HASHING
Example: Insert the keys 27, 43, 692, 72 into the Hash Table of size 7, where the first hash function is h1(k) = k mod 7 and the second hash function is h2(k) = 1 + (k mod 5).

Step 1 Step 2 Step 3 Step 4