USING C
UNIT 1
RECURSION
• This is one of the most interesting algorithms, as it calls itself with smaller inputs obtained
while solving the current inputs. In simpler words, it is an algorithm that calls itself
repeatedly until the problem is solved.
• Problems such as the Tower of Hanoi or DFS of a graph can be easily solved using such
algorithms.
• For example, here is C code that finds a factorial using recursion:
int fact(int y)
{
    if (y == 0)
        return 1;
    return y * fact(y - 1); /* this is where the recursion happens */
}
GREEDY ALGORITHM
• A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The
algorithm makes the optimal choice at each step as it attempts to find the overall optimal way
to solve the entire problem. Greedy algorithms are quite successful in some problems, such
as Huffman encoding which is used to compress data, or Dijkstra's algorithm, which is used to
find the shortest path through a graph.
• However, in many problems, a greedy strategy does not produce an optimal solution.
• For example, consider a layered graph of numbers in which the greedy algorithm seeks the path with the
largest sum. It does this by selecting the largest available number at each step. The greedy
algorithm fails to find the largest sum, however, because it makes decisions based only on the
information it has at any one step, without regard to the overall problem.
• With a goal of reaching the largest sum, at each step, the greedy algorithm will choose what
appears to be the optimal immediate choice, so it will choose 12 instead of 3 at the second
step and will not reach the best solution, which contains 99.
• Greedy algorithms take all of the data in a particular problem, and then set a
rule for which elements to add to the solution at each step of the algorithm.
In this example, the set of data is all of the numbers in the graph, and the
rule was to select the largest number available at each level of the graph.
The solution that the algorithm builds is the sum of all of those choices.
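• The sketch below mirrors that example in C, comparing the greedy choice against an exhaustive check on a tiny three-level tree. Only the values 3, 12 and 99 come from the example above; the root 7 and the leaves 4, 5 and 6 are made up for illustration.
#include <stdio.h>

/* Hypothetical 3-level tree:        7
 *                                 /   \
 *                                3     12
 *                               / \   /  \
 *                             99   4 5    6
 * Greedy picks 12 (the larger child) and can no longer reach 99. */
int main(void)
{
    int level1 = 7;
    int level2[2]    = {3, 12};
    int level3[2][2] = {{99, 4}, {5, 6}};

    /* Greedy: always take the larger available number */
    int pick = (level2[0] > level2[1]) ? 0 : 1;
    int greedy_sum = level1 + level2[pick]
                   + ((level3[pick][0] > level3[pick][1]) ? level3[pick][0] : level3[pick][1]);

    /* Exhaustive check of all four root-to-leaf paths */
    int best_sum = 0;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) {
            int s = level1 + level2[i] + level3[i][j];
            if (s > best_sum)
                best_sum = s;
        }

    printf("Greedy sum  : %d\n", greedy_sum); /* 7 + 12 + 6  = 25  */
    printf("Optimal sum : %d\n", best_sum);   /* 7 + 3  + 99 = 109 */
    return 0;
}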
BRANCH AND BOUND
• For combinatorial, discrete, and general mathematical optimization problems, branch and
bound algorithms are applied to determine the optimal solution.
• A branch and bound algorithm searches the set of all possible solutions before
recommending the best one.
• This algorithm enumerates the candidate solutions in a stepwise manner by systematically
exploring the set of all possible solutions.
• How? - First of all we build a rooted decision tree where the root node represents the
entire search space. Each child node is a part of the solution set and is a partial
solution.
• Based on the optimal solution, we set an upper and a lower bound for the given problem before
constructing the rooted decision tree, and we need to make a decision about which node to
include in the solution set at each level. It is very important to find the upper and lower
bounds. To find the upper bound, any local optimization method can be used; it can also be
found by picking any point in the search space and applying convex relaxation. Duality, on the
other hand, can be used for finding the lower bound.
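• A minimal branch and bound sketch on a small 0/1 knapsack instance; the item values, weights and capacity are made-up illustration data, and the upper bound used here (total value of all remaining items) is deliberately simpler than the convex-relaxation bound mentioned above.
#include <stdio.h>

#define N 4
static const int value[N]  = {10, 40, 30, 50};
static const int weight[N] = { 5,  4,  6,  3};
static const int CAPACITY  = 10;

static int best = 0;                 /* lower bound: best complete solution so far */
static int remaining_value[N + 1];   /* total value of items i..N-1 */

static void branch(int i, int cur_value, int cur_weight)
{
    if (cur_weight > CAPACITY)       /* infeasible branch: discard it */
        return;
    if (cur_value > best)            /* feasible and better than the bound so far */
        best = cur_value;
    if (i == N)
        return;
    /* Prune: even taking every remaining item cannot beat the best so far */
    if (cur_value + remaining_value[i] <= best)
        return;
    branch(i + 1, cur_value + value[i], cur_weight + weight[i]); /* include item i */
    branch(i + 1, cur_value, cur_weight);                        /* exclude item i */
}

int main(void)
{
    remaining_value[N] = 0;
    for (int i = N - 1; i >= 0; i--)
        remaining_value[i] = remaining_value[i + 1] + value[i];
    branch(0, 0, 0);
    printf("Best value within capacity %d: %d\n", CAPACITY, best); /* 90 (items 2 and 4) */
    return 0;
}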
BACKTRACKING
• Backtracking means that if the current solution isn’t working, you should go back and
attempt another option.
• It is a method for resolving issues recursively by attempting to construct a solution
incrementally, one piece at a time, discarding any solutions that do not satisfy the
problem’s constraints at any point in time.
• This approach is used to resolve problems having multiple solutions. For example, suppose we
want to find all the possible ways of arranging
2 boys and 1 girl on 3 benches with the constraint that the girl should not be
on the middle bench.
There will be 3! = 6 possibilities for this problem.
We will try all possible ways recursively to get the required solution, as the sketch below illustrates.
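• A minimal backtracking sketch for the bench example above; the labels B1, B2 and G are assumed names for the two boys and the girl.
#include <stdio.h>
#include <string.h>

/* Place B1, B2 and G on 3 benches so that G is never on the middle bench
 * (index 1). Of the 3! = 6 arrangements, 4 satisfy the constraint. */
static const char *people[3] = {"B1", "B2", "G"};
static const char *bench[3];
static int used[3];

static void place(int pos)
{
    if (pos == 3) {                                   /* all benches filled */
        printf("%s  %s  %s\n", bench[0], bench[1], bench[2]);
        return;
    }
    for (int p = 0; p < 3; p++) {
        if (used[p])
            continue;
        /* Constraint check: the girl may not sit on the middle bench */
        if (pos == 1 && strcmp(people[p], "G") == 0)
            continue;                                 /* discard this partial solution */
        used[p] = 1;
        bench[pos] = people[p];
        place(pos + 1);                               /* extend the partial solution */
        used[p] = 0;                                  /* backtrack */
    }
}

int main(void)
{
    place(0);
    return 0;
}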
PERFORMANCE ANALYSIS
• The performance of an algorithm can be measured on the scales of time and space. The
performance of a program is the amount of computer memory and time needed to run the
program.
• We use two approaches to determine the performance of a program. One is analytical
and the other is experimental. In performance analysis we use analytical methods,
while in performance measurement we conduct experiments.
• Time Complexity: The time complexity of an algorithm or a program is a function of
the running time of the algorithm or a program. In other words, it is the amount of
computer time it needs to run to completion.
• Space Complexity: The space complexity of an algorithm or program is a function of
the space needed by the algorithm or program to run to completion.
• The time complexity of an algorithm can be computed either by an empirical or
theoretical approach. The empirical or posteriori testing approach calls for
implementing the complete algorithms and executing them on a computer for various
instances of the problem. The time taken by the execution of the programs for various
instances of the problem are noted and compared. The algorithm whose implementation
yields the least time is considered the best among the candidate algorithms.
ANALYSING ALGORITHMS
• The total frequency counts of the program segments A, B and C, given by 1, (3n + 1) and
(3n^2 + 3n + 1) respectively, are expressed as O(1), O(n) and O(n^2).
• These are referred to as the time complexities of the program segments since they are
indicative of the running times of the program segments.
• In a similar manner, the space complexity of a program can also be expressed in terms of
mathematical notation, which is nothing but the amount of memory it requires for
its execution.
• By now, you could have concluded that when an algorithm uses statements that get
executed only once, it will always require the same amount of time,
• and when a statement is inside a loop, the time required increases depending on
the number of times the loop is set to run.
• And when an algorithm combines statements executed once with loop statements, or has
nested loop statements, the time increases proportionately, based on the number of
times each statement gets executed.
• Since an algorithm's performance may vary with different types of input data, we usually
use the worst-case time complexity of an algorithm, because that is the maximum time
taken for any input of a given size.
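• As an illustration of the frequency counts quoted above, the sketch below shows what program segments with O(1), O(n) and O(n^2) behaviour typically look like; these are illustrative segments, not the exact ones from the slides.
#include <stdio.h>

void segment_A(int x)              /* executed once: O(1) */
{
    x = x + 1;
}

void segment_B(int n)              /* single loop: counts on the order of 3n + 1, O(n) */
{
    int x = 0;
    for (int i = 0; i < n; i++)
        x = x + 1;
}

void segment_C(int n)              /* nested loops: counts on the order of 3n^2 + 3n + 1, O(n^2) */
{
    int x = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            x = x + 1;
}

int main(void)
{
    segment_A(0);
    segment_B(10);
    segment_C(10);
    printf("segments executed\n");
    return 0;
}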
DATA STRUCTURE
• Types of DS:
Ø Abstract
Ø Primitive (Simple)
Ø Compound
For example:
Stack ADT: Here, the stack consists of elements of the same type arranged in sequential
order. The operations that can be performed on the stack include push, pop, peek (top) and isEmpty.
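• A minimal array-based sketch of the Stack ADT described above, supporting push, pop, peek and isEmpty:
#include <stdio.h>

#define MAX 100

struct Stack {
    int items[MAX];
    int top;            /* index of the topmost element, -1 when empty */
};

void init(struct Stack *s)          { s->top = -1; }
int  isEmpty(const struct Stack *s) { return s->top == -1; }
int  isFull(const struct Stack *s)  { return s->top == MAX - 1; }

void push(struct Stack *s, int x)
{
    if (!isFull(s))
        s->items[++s->top] = x;
}

int pop(struct Stack *s)
{
    return isEmpty(s) ? -1 : s->items[s->top--];   /* -1 signals "empty" here */
}

int peek(const struct Stack *s)
{
    return isEmpty(s) ? -1 : s->items[s->top];
}

int main(void)
{
    struct Stack s;
    init(&s);
    push(&s, 10);
    push(&s, 20);
    printf("top = %d\n", peek(&s));     /* 20 */
    printf("popped = %d\n", pop(&s));   /* 20 */
    return 0;
}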
• Simple DS can be constructed with the help of a primitive DS. A primitive DS is used
to represent the standard data types of any programming languages. Variables, arrays,
pointers, structures, unions, etc. are examples of primitive DS.
• Compound DS can be constructed with the help of any one primitive DS and it has a
specific functionality. It can be designed by the user.
• It is further classified into two types:
• Linear DS - Linear data structures can be constructed as a continuous arrangement of
data elements in the memory. It can be constructed by using array data type. In the
linear DS the relationship of adjacency is maintained between the data elements.
Arrays, Linked List, Stacks, Queues are Linear DS.
• Non-Linear DS – Non-linear data structures can be constructed as a collection of randomly
distributed data items joined together by using special pointers (tags). In a
non-linear data structure the relationship of adjacency is not maintained between the
data items. Trees and Graphs are non-linear DS.
DATA STRUCTURE
• Data structures offer many advantages to IT-related processes, especially as applications get
more complex and the amount of existing data keeps growing. Here are some reasons why data
structures are essential.
• They facilitate greater processing speeds. Large amounts of data require faster processing, and
data structures help organize the data into forms that are easier to work with and process.
• They make it easier to search for data. Data structures organize information into workable
forms that make the required searches easier to conduct.
• They are reusable. Once you implement a given data structure, it can be used anywhere. There is
no need to make a new structure. This function saves time and resources.
• They make it easy to handle multiple requests. You rarely find a single user accessing a
database. Instead, it’s common to have hundreds, if not thousands, of users searching and
interacting with a database. Data structures arrange information so that users don’t have to
search every item — they can instantly search just the required data.
• Data Storage - Data structures facilitate efficient data persistence, like specifying attribute
collections and corresponding structures used in database management systems to store records.
• Data Exchange - Organized information, defined by data structures, can be shared between
applications like TCP/IP packets.
• Resource and Service Management - Data structures such as linked lists can enable core
operating systems resources and services to perform functions like file directory management,
memory allocation, and process scheduling queues.
• Scalability - Big data applications rely on data structures to manage and allocate data storage
across many distributed storage locations. This function guarantees scalability and high
performance.
• Arrays are defined as the collection of similar types of data items stored at contiguous memory
locations. It is one of the simplest data structures where each data element can be randomly
accessed by using its index number.
• In C programming, they are the derived data types that can store the primitive type of data
such as int, char, double, float, etc. For example, if we want to store the marks of a student
in 6 subjects, then we don't need to define a different variable for the marks in different
subjects. Instead, we can define an array that can store the marks in each subject at the
contiguous memory locations.
• Some of the properties of an array are listed as follows -
• Each element in an array is of the same data type and carries the same size.
• Elements in the array are stored at contiguous memory locations, with the first element
stored at the smallest memory address.
• Elements of the array can be randomly accessed since we can calculate the address of each
element of the array with the given base address and the size of the data element.
• Representation of an array
• We can represent an array in various ways in different programming languages. As an
illustration, let's see the declaration of an array in C language -
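• A minimal declaration matching the points below (the element type int and length 10 are assumed from them):
int arr[10];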
• As per the above illustration, there are some of the following important points -
• Index starts with 0.
• The array's length is 10, which means we can store 10 elements.
• Each element in the array can be accessed via its index.
• As stated above, all the data elements of an array are stored at contiguous locations in the
main memory. The name of the array represents the base address or the address of the first
element in the main memory. Each element of the array is represented by proper indexing.
• We can define the indexing of an array in the below ways -
• 0 (zero-based indexing): The first element of the array will be arr[0].
• 1 (one-based indexing): The first element of the array will be arr[1].
• n (n - based indexing): The first element of the array can reside at any random index number.
• We require the information given below to access any random element of the array -
• Base address of the array, size of an element in bytes, and the type of indexing the array follows.
• The formula to calculate the address to access an array element -
• Byte address of element A[i] = base address + size * ( i - first index)
• Here, size represents the memory taken by the primitive data type. For instance, int takes 2
bytes on older 16-bit compilers (typically 4 bytes today) and float takes 4 bytes of memory
space in C programming.
• Suppose an array, A[-10 ..... +2 ] having Base address (BA) = 999 and size of an element = 2
bytes, find the location of A[-1].
• L(A[-1]) = 999 + 2 x [(-1) - (-10)]
• = 999 + 18
• = 1017
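• A small C sketch of the same formula, checked against C's own pointer arithmetic; the 2-byte element size is modelled with short (2 bytes on most common compilers) and the lower bound -10 is simulated with an offset, since C arrays are 0-based.
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    short A[13];                                  /* stands in for A[-10 .. +2] */
    uintptr_t base = (uintptr_t)&A[0];
    int i = -1, first_index = -10;

    /* Byte address of A[i] = base address + size * (i - first index) */
    uintptr_t computed = base + sizeof(short) * (i - first_index);
    printf("computed = %lu, actual = %lu\n",
           (unsigned long)computed,
           (unsigned long)(uintptr_t)&A[i - first_index]);
    return 0;
}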
DELETE AN ELEMENT FROM THE ARRAY
#include <stdio.h>
int main()
{
    int array[100], position, c, n;
    printf("Enter the number of elements of the array : ");
    scanf("%d", &n);
    printf("\nInput the array elements : ");
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("\nEnter the position of the element to delete : ");
    scanf("%d", &position);
    if (position >= n + 1)
        printf("\nDeletion not possible.\n");
    else
    {
        /* Shift every element after the deleted one left by one place */
        for (c = position - 1; c < n - 1; c++)
            array[c] = array[c + 1];
        printf("\nArray after deletion : ");
        for (c = 0; c < n - 1; c++)
            printf("%d\n", array[c]);
    }
    return 0;
}
SEARCH AN ELEMENT FROM THE ARRAY
#include <stdio.h>
int main()
{
    int array[100], ele, c, n;
    printf("Enter the number of elements of the array : ");
    scanf("%d", &n);
    printf("\nInput the array elements : ");
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("\nEnter the element to search : ");
    scanf("%d", &ele);
    for (c = 0; c < n; c++)
    {
        if (array[c] == ele)
            printf("\nElement found\n");
    }
    return 0;
}
REVERSE THE ARRAY
#include <stdio.h>
int main()
{
    int a[100], i, j, Size, Temp;
    printf("\nPlease Enter the size : ");
    scanf("%d", &Size);
    // Inserting elements in to it
    for (i = 0; i < Size; i++)
        scanf("%d", &a[i]);
    i = 0;        // Assigning i to first element
    j = Size - 1; // Assigning j to Last element
    while (i < j)
    {
        Temp = a[i];
        a[i] = a[j];
        a[j] = Temp;
        i++;
        j--;
    }
    printf("\nResult is: ");
    for (i = 0; i < Size; i++)
        printf("%d \t", a[i]);
    return 0;
}
MERGING TWO ARRAYS
#include <stdio.h>
#include <conio.h>
int main()
{
    int arr1[50], arr2[50], size1, size2, i, k, merge[100];
    printf("Enter Array 1 Size: ");
    scanf("%d", &size1);
    printf("Enter Array 1 Elements: ");
    for (i = 0; i < size1; i++)
    {
        scanf("%d", &arr1[i]);
        merge[i] = arr1[i];
    }
    k = i;
    printf("\nEnter Array 2 Size: ");
    scanf("%d", &size2);
    printf("Enter Array 2 Elements: ");
    for (i = 0; i < size2; i++)
    {
        scanf("%d", &arr2[i]);
        merge[k] = arr2[i];
        k++;
    }
    printf("\nThe new array after merging is:\n");
    for (i = 0; i < k; i++)
        printf("%d ", merge[i]);
    getch();
    return 0;
}
2D ARRAYS
• 2D array can be defined as an array of arrays. The 2D array is organized as matrices which can
be represented as the collection of rows and columns.
• 2D arrays can be used to implement a relational-database-like (tabular) data structure. They
provide ease of holding a bulk of data at once, which can be passed to any number of functions
wherever required.
• The syntax of declaring two dimensional array is very much similar to that of a one dimensional
array, given as follows.
• int arr[max_rows][max_columns];
• In a two dimensional array, the elements are organized in the form of rows and
columns.
• The first element of the first row is represented by a[0][0],
where the number shown in the first index is the row number
while the number shown in the second index is the column number.
• The elements of 2D arrays can be randomly accessed. Similar to one dimensional
arrays, we can access the individual cells in a 2D array by using the indices of the cells. There
are two indices attached to a particular cell: one is its row number, while the other is its column
number.
• We can store the value held in any particular cell of a 2D array in some variable x by
using the following syntax.
• int x = a[i][j]; //where i and j is the row and column number of the cell respectively.
• We can assign each cell of a 2D array to 0 by using the following code:
• for ( int i=0; i<n ;i++)
• {
• for (int j=0; j<n; j++)
• {
• a[i][j] = 0;
• }
• }
• The syntax to declare and initialize the 2D array is given as follows.
• int arr[2][2] = {0,1,2,3};
2D ARRAY
#include <stdio.h>
int main()
{
    int arr[3][3], i, j;
    for (i = 0; i < 3; i++)
    {
        for (j = 0; j < 3; j++)
        {
            printf("Enter a[%d][%d]: ", i, j);
            scanf("%d", &arr[i][j]);
        }
    }
    printf("\n printing the elements ....\n");
    for (i = 0; i < 3; i++)
    {
        printf("\n");
        for (j = 0; j < 3; j++)
            printf("%d\t", arr[i][j]);
    }
    return 0;
}
MULTIDIMENSIONAL ARRAYS
• A multidimensional array is an array with more than two dimensions. In a matrix, the two
dimensions are represented by rows and columns.
• Multidimensional arrays are an extension of 2-D matrices and use additional subscripts for
indexing. A 3-D array, for example, uses three subscripts. The first two are just like a matrix,
but the third dimension represents pages or sheets of elements.
• Initializing Three-Dimensional Array:
• Initialization in a Three-Dimensional array is the same as that of Two-dimensional arrays.
The difference is as the number of dimensions increases so the number of nested braces will
also increase.
• Three dimensional array: int three_d[3][3][3];
• N dimensional array: int nd_array[3][3][3][3]….[3]
#include <stdio.h>
int main()
{
    // initializing the 3-dimensional array
    int x[2][3][2] =
    { { {0,1}, {2,3}, {4,5} },
      { {6,7}, {8,9}, {10,11} } };
    // output each element's value
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 2; ++k)
                printf("Element at x[%d][%d][%d] = %d\n", i, j, k, x[i][j][k]);
    return 0;
}
SPARSE MATRIX – LIST OF LIST REPRESENTATION
// size of matrix
printf("Enter number of rows in the matrix : ");
scanf("%d", &m);
printf("Enter number of columns in the matrix : ");
scanf("%d", &n);
// Appending a non-zero element at the end of the linked list
}
else
{
    while (temp->next != NULL)
        temp = temp->next;
    // Create new node dynamically
    r = (struct Node *) malloc(sizeof(struct Node));
    r->value = non_zero_element;
    r->row_position = row_index;
    r->column_position = column_index;
    r->next = NULL;
    temp->next = r;
}
LINEAR SEARCH
#include <stdio.h>
int main()
{
    int array[100], search, c, n;
    printf("Enter number of elements in array\n");
    scanf("%d", &n);
    printf("Enter %d integer(s)\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("Enter a number to search\n");
    scanf("%d", &search);
    for (c = 0; c < n; c++)
    {
        if (array[c] == search) /* If required element is found */
        {
            printf("%d is present at location %d.\n", search, c + 1);
            break;
        }
    }
    return 0;
}
LINEAR SEARCH C PROGRAM FOR MULTIPLE OCCURRENCES
#include <stdio.h>
int main()
{
    int array[100], search, c, n, count = 0;
• Binary search is a fast search algorithm with run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer.
• For this algorithm to work properly, the data collection should be in the sorted form.
• Steps –
• Step 1: Read the element to be searched from the user.
• Step 2: Find the middle element in the sorted list.
• Step 3: Compare, the search element with the middle element in the sorted list.
• Step 4: If both are matched, then display "Given element found!" and terminate the search
process.
• Step 5: If both are not matched, then check whether the search element is smaller or larger
than middle element.
• Step 6: If the search element is smaller than middle element, then repeat steps 2, 3, 4 and
5 for the left sub-list of the middle element.
• Step 7: If the search element is larger than middle element, then repeat steps 2, 3, 4 and
5 for the right sub-list of the middle element.
• Step 8: Repeat the same process until we find the search element in the list or until the
sub-list contains only one element.
• Step 9: If that element also doesn't match with the search element, then display "Element
not found in the list!" and terminate the search process.
BINARY SEARCH
#include <stdio.h>
int binarySearch(int array[], int x, int low, int high)
{
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (array[mid] == x)
            return mid;
        if (array[mid] < x)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1; /* element not found */
}
SORTING
• First Pass:
• For the first position in the sorted array, the whole array is traversed from index 0 to 4
sequentially. The first position is where 64 is stored presently; after traversing the whole
array it is clear that 11 is the lowest value.
• Thus, swap 64 with 11. After one iteration 11, which happens to be the least value in the array,
appears in the first position of the sorted list.
SELECTION SORT
• Second Pass:
• For the second position, where 25 is present, again traverse the rest of the array in a sequential
manner.
• After traversing, we find that 12 is the second lowest value in the array and it should appear at
the second place in the array; thus swap these values.
• Third Pass:
• Now, for the third place, where 25 is present, again traverse the rest of the array and find the third
least value present in the array.
• While traversing, 22 comes out to be the third least value and it should appear at the third place in
the array; thus swap 22 with the element present at the third position.
SELECTION SORT
• Fourth Pass:
• Similarly, for the fourth position, traverse the rest of the array and find the fourth least element
in the array.
• As 25 is the 4th lowest value, it will be placed at the fourth position.
• Fifth Pass:
• At last, the largest value present in the array automatically gets placed at the last position in
the array.
• The resulting array is the sorted array, as the code sketch below illustrates.
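• A short C sketch of selection sort; the array {64, 25, 12, 22, 11} is the one implied by the passes described above.
#include <stdio.h>

int main(void)
{
    int arr[] = {64, 25, 12, 22, 11};
    int n = 5, i, j, min_idx, temp;

    for (i = 0; i < n - 1; i++) {
        /* Find the smallest element in the unsorted part arr[i..n-1] */
        min_idx = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        /* Swap it into position i */
        temp = arr[i];
        arr[i] = arr[min_idx];
        arr[min_idx] = temp;
    }

    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);   /* 11 12 22 25 64 */
    printf("\n");
    return 0;
}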
BUBBLE SORT
• First Pass:
• The largest element is placed in its correct position, i.e., the end of the array.
• Second Pass:
• Place the second largest element at its correct position.
• Third Pass:
• Place the remaining two elements at their correct positions.
• Total no. of passes: n-1
• Total no. of comparisons: n*(n-1)/2
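• A short C sketch of bubble sort matching the passes above; the sample data {5, 1, 4, 2, 8} is made up for illustration.
#include <stdio.h>

int main(void)
{
    int arr[] = {5, 1, 4, 2, 8};
    int n = 5, i, j, temp;

    for (i = 0; i < n - 1; i++) {            /* n-1 passes in total */
        for (j = 0; j < n - i - 1; j++) {    /* compare adjacent pairs */
            if (arr[j] > arr[j + 1]) {
                temp = arr[j];               /* swap so the larger value bubbles right */
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }

    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);               /* 1 2 4 5 8 */
    printf("\n");
    return 0;
}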
MERGE SORT
l1 = mergesort( l1 )
l2 = mergesort( l2 )
var c as array
while ( a and b have elements )
    if ( a[0] > b[0] )
        add b[0] to the end of c
        remove b[0] from b
    else
        add a[0] to the end of c
        remove a[0] from a
MERGE SORT
// C program for Merge Sort
#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].
// First subarray is arr[l..m], second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temp arrays
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        }
        else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy any remaining elements of L[] and R[]
    while (i < n1)
        arr[k++] = L[i++];
    while (j < n2)
        arr[k++] = R[j++];
}
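• Only the merge() step is shown on the slide. A minimal recursive driver that splits at the midpoint, as in the pseudocode above, might look like this sketch:
void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;      /* middle point */
        mergeSort(arr, l, m);         /* sort left half  arr[l..m]   */
        mergeSort(arr, m + 1, r);     /* sort right half arr[m+1..r] */
        merge(arr, l, m, r);          /* merge the two sorted halves */
    }
}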
HASHING
• Every day, the data on the internet is increasing multifold and it is always a struggle to
store this data efficiently. In day-to-day programming, this amount of data might not be that
big, but still, it needs to be stored, accessed, and processed easily and efficiently. A very
common data structure that is used for such a purpose is the Array data structure.
• Now the question arises: if the array was already there, what was the need for a new data structure?
The answer to this is in the word "efficiency". Though storing in an array takes O(1) time,
searching in it takes at least O(log n) time even when the array is kept sorted (and O(n) when it
is not). This time appears to be small, but for a large data set it can cause a lot of problems
and this, in turn, makes the array data structure inefficient.
• So now we are looking for a data structure that can store the data and search in it in constant
time, i.e. in O(1) time. This is how Hashing data structure came into play. With the
introduction of the Hash data structure, it is now possible to easily store data in constant
time and retrieve them in constant time as well.
• Hashing creates a mapping between key and value; this is done
through the use of mathematical formulas known as hash functions.
• The result of the hash function is referred to as a hash value or hash.
The hash value is a representation of the original string of characters
but usually smaller than the original.
• For example: Consider an array as a Map where the key is the index and
the value is the value at that index. So for an array A if we have
index i which will be treated as the key then we can find the value by
simply looking at the value at A[i].
• Types:
• Division Method.
• Mid Square Method.
• Folding Method.
• Multiplication Method
• A good hash function should have the following properties:
• Efficiently computable.
• Should uniformly distribute the keys (each table position is equally likely for each key).
• Should minimize collisions.
• Should have a low load factor (the number of items in the table divided by the size of the table).
• Complexity of calculating the hash value using the hash function: for a simple function such as
the division method this is O(1).
• For example, the division method computes hash(key) = key % S, where here S = size of the table = 7,
indexed from 0 to 6.
• We can define the hash function as per our choice if we want to create a hash table, although it
is fixed internally with a pre-defined formula.
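• A minimal sketch of the division method described above, assuming a table of size S = 7; collision handling is omitted, so later keys simply overwrite earlier ones, and the sample keys are made up for illustration.
#include <stdio.h>

#define S 7

int hash(int key)
{
    return key % S;                       /* division method */
}

int main(void)
{
    int keys[] = {50, 700, 76, 85, 92};   /* sample keys */
    int table[S];
    for (int i = 0; i < S; i++)
        table[i] = -1;                    /* -1 marks an empty slot */

    for (int i = 0; i < 5; i++)
        table[hash(keys[i])] = keys[i];   /* later keys overwrite on collision */

    for (int i = 0; i < S; i++)
        printf("index %d : %d\n", i, table[i]);
    return 0;
}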