Data Structure


DATA STRUCTURES AND APPLICATION

CAP4103

School of Engineering
Department of Computer Science and Engineering

Submitted By
Student Name Rakesh Dey sarkar
Enrolment Number 230160307023
Programme Masters in Computer Application
Department Computer Science and Engineering
Session/Semester 2023-2025/Second Semester
Submitted To
Faculty Name Ms. Sapna Sharma

Q1. What is Data Structure? Explain various types of Data Structure.

Ans: A data structure is a way of organizing and storing data to perform operations
efficiently. It defines a set of rules or conventions for organizing and managing data,
which allows for easier access, modification, and processing. Data structures are
essential components in computer science and are used in various applications to
solve different types of problems.

There are two main types of data structures:

1. Primitive Data Structures:
 These are the basic or fundamental data structures.
 Examples include integers, floating-point numbers, characters, and
booleans.
2. Composite Data Structures:
 These structures are built by combining primitive data types and
organizing them in a specific way.
 Examples include arrays, linked lists, stacks, queues, trees, and graphs.

Common operations performed on data structures include:

 Insertion: Adding new data.
 Deletion: Removing existing data.
 Traversal: Visiting and processing all elements.
 Searching: Finding the location of a specific element.
 Sorting: Arranging elements in a specific order.
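As a quick sketch, the five operations above can all be seen on a Python list (the variable names here are illustrative, not a standard API):

```python
data = [42, 7, 19]

data.insert(1, 99)         # Insertion: add 99 at index 1 -> [42, 99, 7, 19]
data.remove(7)             # Deletion: remove the value 7 -> [42, 99, 19]

for item in data:          # Traversal: visit and process every element
    print(item)

position = data.index(19)  # Searching: locate 19 -> index 2
data.sort()                # Sorting: ascending order -> [19, 42, 99]
```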

The choice of a data structure depends on the requirements of the specific
application and the operations that need to be performed efficiently. Each data
structure has its own advantages and disadvantages, and selecting the right one is
crucial for optimizing the performance of algorithms and overall system efficiency.
Data structures are fundamental components in computer science that enable
efficient organization, storage, and manipulation of data. There are various types of
data structures, each with its own characteristics, use cases, and advantages. Here, I'll
explain some of the most common types:

1. Arrays:
 An array is a collection of elements, each identified by an index or a
key.
 Elements in an array are stored in contiguous memory locations.
 Accessing elements in an array is fast, O(1) time complexity.
 Insertion and deletion operations can be slow, especially in the middle,
as shifting may be required.
2. Linked Lists:
 A linked list is a linear data structure where elements are stored in
nodes, and each node points to the next node in the sequence.
 Dynamic in nature, which means it can grow or shrink during program
execution.
 Insertion and deletion are generally faster compared to arrays,
especially in the middle.
 Random access to elements is slower, O(n) time complexity.
3. Stacks:
 A stack is a last-in, first-out (LIFO) data structure.
 Elements can only be added or removed from the top of the stack.
 Common operations include push (to add an element) and pop (to
remove the top element).
 Used for managing function calls, undo mechanisms, and parsing
expressions.
4. Queues:
 A queue is a first-in, first-out (FIFO) data structure.
 Elements are added at the rear (enqueue) and removed from the front
(dequeue).
 Common operations include enqueue and dequeue.
 Used in scenarios like job scheduling, breadth-first search algorithms.
5. Trees:
 A hierarchical data structure with a root node and subtrees of child
nodes.
 Binary Trees have at most two children per node.
 Binary Search Trees (BST) maintain an ordering property that allows for
efficient search, insertion, and deletion.
 Common tree-based algorithms include binary search, AVL trees, and
heaps.
6. Graphs:
 A collection of nodes (vertices) and edges connecting these nodes.
 Directed graphs have edges with a direction, while undirected graphs
do not.
 Used to represent relationships between entities and solve problems
like shortest path algorithms and network flow.
7. Hash Tables:
 Utilizes a hash function to map keys to indices in an array.
 Provides fast insertion, deletion, and lookup operations on average,
with O(1) complexity.
 Collision resolution methods, like chaining or open addressing, handle
situations where multiple keys map to the same index.

8. Heaps:
 A specialized tree-based data structure that satisfies the heap property.
 Max Heap: Parent nodes are greater than or equal to their child nodes.
 Min Heap: Parent nodes are less than or equal to their child nodes.
 Used for priority queues and heap sort.
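To make the stack and queue disciplines above concrete, Python's standard `collections.deque` can act as either one, depending on which end elements are removed from (a minimal sketch, not a full implementation of either structure):

```python
from collections import deque

# Stack discipline (LIFO): push and pop at the same end.
stack = deque()
stack.append(1)
stack.append(2)
stack.append(3)
top = stack.pop()        # 3 -- the last element pushed comes off first

# Queue discipline (FIFO): enqueue at the rear, dequeue from the front.
queue = deque()
queue.append('a')
queue.append('b')
queue.append('c')
front = queue.popleft()  # 'a' -- the first element enqueued comes off first
```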

These data structures serve as building blocks for designing algorithms and solving
various computational problems. The choice of the appropriate data structure
depends on the specific requirements and characteristics of the problem at hand.

Q.2 What do you mean by Array? Describe storage structure of
array. Also explain various types of Arrays in detail.

Ans: An array is a fundamental data structure in computer science that stores a
collection of elements, each identified by an index or a key. The elements in an array
are stored in contiguous memory locations, meaning they are placed next to each
other in the computer's memory. This arrangement allows for efficient access to
individual elements by their index.

Key characteristics of arrays include:

1. Indexing: Elements in an array are accessed using an index or a key. The
index is typically an integer that starts from zero for the first element, one for
the second, and so on.
2. Contiguous Memory: Array elements are stored in adjacent memory
locations. This feature enables quick and direct access to any element based
on its index.
3. Fixed Size: In most programming languages, the size of an array is fixed at
the time of declaration. Once defined, the size usually cannot be changed
during runtime. Some languages, however, support dynamic arrays or
resizable arrays.

Here's a simple example in Python:

# Declaring an array
my_array = [1, 2, 3, 4, 5]

# Accessing elements
first_element = my_array[0]   # Value: 1
second_element = my_array[1]  # Value: 2

# Modifying elements
my_array[2] = 10              # Array becomes [1, 2, 10, 4, 5]



Arrays are widely used in programming for tasks like storing lists of items, iterating
through collections, and implementing algorithms that require constant-time access
to elements. Despite their efficiency for random access, arrays may have limitations,
such as fixed size and potential inefficiencies in insertions and deletions, particularly
in the middle of the array. In such cases, other data structures like linked lists may be
more suitable.

The storage structure of arrays is characterized by the arrangement of elements in
contiguous memory locations. This arrangement provides several advantages, such
as efficient random access and simplicity in memory addressing. Here are the key
aspects of the storage structure in arrays:

1. Contiguous Memory Allocation:
 Array elements are stored one after the other in memory, in contiguous
or consecutive locations. This means that the memory addresses of
array elements are sequential.
 The contiguous storage allows for direct and efficient access to any
element using its index. The memory address of the first element is
used as a reference point.
2. Indexing:
 Elements in an array are accessed using indices. The index indicates the
position of an element within the array.
 The formula for accessing the memory location of an element is often
based on the starting address of the array and the size of each element.
For example, if the array starts at address A and each element occupies
S bytes, the address of the element at index i is given by A + i * S.
3. Fixed-size Allocation:
 The size of an array is typically fixed at the time of declaration. Once
defined, the array size remains constant throughout its lifetime.
 Fixed-size allocation simplifies memory management and allows for
direct computation of memory addresses during compilation.
4. Memory Efficiency:
 Contiguous storage makes efficient use of memory, as there is no need
for additional space between elements. This is in contrast to some data
structures, like linked lists, which may require extra memory for
pointers.
 The absence of extra memory overhead results in better cache locality,
which can improve performance by reducing cache misses during data
access.

Here's a simple illustration of the storage structure of an array in memory:

| Element 0 | Element 1 | Element 2 | Element 3 | ... | Element n-1 |
|-----------|-----------|-----------|-----------|-----|-------------|
^ Starting address of the array

In this diagram, each box represents the memory location of an array element, and
the arrow indicates the starting address of the array. Accessing an element involves a
direct calculation based on the index and the size of each element. This simplicity
and efficiency in memory addressing make arrays a fundamental and widely used
data structure in programming.
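The addressing formula A + i * S described above can be sketched directly (the function name and sample values are illustrative only):

```python
def element_address(base, index, elem_size):
    # Address of the element at `index`, per the formula A + i * S:
    # base address plus index times element size in bytes.
    return base + index * elem_size

# An array starting at address 1000 with 4-byte elements:
addr = element_address(1000, 3, 4)  # 1000 + 3 * 4 = 1012
```

Because this is pure arithmetic on the index, access time is the same for every element, which is why array indexing is O(1).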

In computer science, arrays come in various types, each serving specific purposes and
addressing different requirements. Here are some of the common types of arrays:

1. One-dimensional Array:
 A simple, linear array where elements are stored in a single line or row.
 Accessing elements involves using a single index.
 Examples include arrays of integers, characters, or floating-point
numbers.
# One-dimensional array in Python

my_array = [1, 2, 3, 4, 5]

2. Multi-dimensional Array:
 Arrays with more than one dimension. Common types include 2D arrays
(matrices) and 3D arrays.
 Elements are accessed using multiple indices corresponding to the array's
dimensions.
 Useful for representing tables, grids, and matrices.
# Two-dimensional array (matrix) in Python

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]
3. Dynamic Array:
 An array that can dynamically resize during runtime.
 Provides a higher-level abstraction than traditional fixed-size arrays.
 Commonly used in languages like Python with lists or Java with ArrayList.

# Dynamic array in Python using a list

dynamic_array = [1, 2, 3]

dynamic_array.append(4) # Resize happens automatically

4. Jagged Array:
 An array of arrays where each sub-array can have a different size.
 Useful when the number of elements varies in different dimensions.
 Offers flexibility but may result in uneven memory allocation.
// Jagged array in C#

int[][] jaggedArray = new int[3][];

jaggedArray[0] = new int[] { 1, 2, 3 };

jaggedArray[1] = new int[] { 4, 5 };

jaggedArray[2] = new int[] { 6, 7, 8, 9 };

5. Sparse Array:
 An array in which most of the elements have the same default value, typically
zero or null.
 Efficiently stores and represents arrays with a small number of non-default
elements.
 Often used in applications dealing with large datasets with sparse
characteristics.
# Sparse array in Python using a dictionary

sparse_array = {0: 1, 2: 3, 5: 6}

6. String Array:
 An array where each element is a string.
 Useful for storing and manipulating a collection of strings.
// String array in Java

String[] stringArray = {"apple", "banana", "orange"};

7. Circular Array:
 An array where the last element is followed by the first element, creating a
circular structure.
 Useful in scenarios where cyclic operations are required, like rotating
elements.
# Circular array in Python

circular_array = [3, 4, 5, 1, 2] # Represents rotation
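One way to read such a rotated array back in logical order is modular indexing, a minimal sketch assuming we know where the logical sequence starts (the `start` value and function name are illustrative):

```python
buffer = [3, 4, 5, 1, 2]   # physical storage, as in the example above
start = 3                  # logical element 0 lives at physical index 3

def logical_get(buf, start, i):
    # Wrap around the end of the array with modular arithmetic.
    return buf[(start + i) % len(buf)]

sequence = [logical_get(buffer, start, i) for i in range(len(buffer))]
# sequence == [1, 2, 3, 4, 5]
```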



These types of arrays cater to different programming scenarios, providing flexibility,
efficiency, and abstraction based on the specific requirements of the application or
algorithm. Choosing the appropriate type of array depends on the problem at hand
and the desired characteristics of the data structure.

Q3. What is Stack? Why is it known as LIFO? Write algorithms
for the PUSH, POP, PEEP, and CHANGE operations on a stack.
Ans: A stack is a fundamental data structure in computer science that follows the Last
In, First Out (LIFO) principle. In a stack, the last element added is the first one to be
removed. This structure resembles a stack of plates, where you can only take the top
plate off or add a new plate to the top.

Key characteristics of a stack:

1. Operations:
 Push: Adds an element to the top of the stack.
 Pop: Removes the element from the top of the stack.
 Peek (or Top): Retrieves the element from the top without removing it.
 isEmpty: Checks if the stack is empty.
2. Usage:
 Stacks are commonly used in various algorithms and applications,
including:
 Managing function calls and recursion (call stack).
 Undo mechanisms in applications.
 Parsing expressions and syntax checking.
 Depth-first search algorithms.
 Memory management, such as maintaining a pool of available
memory blocks.
3. Implementation:
 Stacks can be implemented using arrays or linked lists. The choice of
implementation depends on the specific requirements and constraints
of the application.

Here's a simple example of a stack implemented using a list in Python:

class Stack:
    def __init__(self):
        self.items = []

    def is_empty(self):
        return len(self.items) == 0

    def push(self, item):
        self.items.append(item)

    def pop(self):
        if not self.is_empty():
            return self.items.pop()

    def peek(self):
        if not self.is_empty():
            return self.items[-1]

    def size(self):
        return len(self.items)

Usage of the stack:

my_stack = Stack()
my_stack.push(1)
my_stack.push(2)
my_stack.push(3)

print(my_stack.peek())  # Output: 3
print(my_stack.pop())   # Output: 3
print(my_stack.pop())   # Output: 2
print(my_stack.size())  # Output: 1

In this example, the stack is used to push and pop elements. The last element pushed
onto the stack (3) is the first one to be popped off. Stacks are versatile and play a
crucial role in many aspects of computer science and programming.

The term "LIFO" stands for "Last In, First Out," and it describes the fundamental
behavior of a stack data structure. In a stack, the last element that is added (or
"pushed") onto the stack is the first one to be removed (or "popped") from the stack.

The LIFO principle is analogous to a real-world scenario, such as a stack of plates in a
cafeteria. When you add a new plate to the stack, you place it on the top. When you
need to take a plate, you naturally take the one from the top, which is the last one
added. This order ensures that the newest item is always the first one to be used or
removed.

The LIFO property simplifies the operations on a stack and has practical applications
in various computing scenarios. For example:

1. Function Call Management:
 In programming, the call stack follows the LIFO principle. When a
function is called, its execution is added to the top of the call stack.
When the function completes, it is removed from the stack. This allows
for the return to the previous function in the reverse order of the calls.
2. Undo Mechanisms:
 Many applications use stacks to implement undo mechanisms. Each
action that can be undone is added to the stack, and when the user
requests an undo, the most recent action is reversed.
3. Expression Evaluation:
 In parsing expressions, the LIFO property is often used. For example,
when evaluating an expression in reverse Polish notation (postfix
notation), operands are pushed onto the stack, and operations are
performed on the top elements.

The LIFO principle simplifies the management of items in a stack, making it a natural
and efficient choice for various applications in computer science and programming.
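The postfix-evaluation use of LIFO described above can be sketched in a few lines (the function name and token list are illustrative; only the four basic operators are handled):

```python
def eval_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok in ('+', '-', '*', '/'):
            right = stack.pop()  # operands come off in reverse order
            left = stack.pop()
            if tok == '+':
                stack.append(left + right)
            elif tok == '-':
                stack.append(left - right)
            elif tok == '*':
                stack.append(left * right)
            else:
                stack.append(left / right)
        else:
            stack.append(int(tok))  # operand: push onto the stack
    return stack.pop()

result = eval_postfix(['2', '3', '4', '+', '*'])  # 2 * (3 + 4) = 14
```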

Below are simple algorithms for the basic operations on a stack: PUSH, POP, PEEK,
and CHANGE.

Stack Implementation

class Stack:
    def __init__(self):
        self.items = []

    def is_empty(self):
        return len(self.items) == 0

    def push(self, item):
        self.items.append(item)

    def pop(self):
        if not self.is_empty():
            return self.items.pop()

    def peek(self):
        if not self.is_empty():
            return self.items[-1]

    def size(self):
        return len(self.items)

    def change(self, index, new_value):
        if 0 <= index < self.size():
            self.items[index] = new_value

PUSH Algorithm

The PUSH operation adds an element to the top of the stack.

Algorithm PUSH(stack, element):
    stack.push(element)

POP Algorithm

The POP operation removes the element from the top of the stack.

Algorithm POP(stack):
    return stack.pop()

PEEK Algorithm

The PEEK (also called PEEP) operation retrieves the element from the top of the stack
without removing it.

Algorithm PEEK(stack):
    return stack.peek()

CHANGE Algorithm

The CHANGE operation modifies the value of an element at a specific index in the
stack.

Algorithm CHANGE(stack, index, new_value):
    if 0 <= index < stack.size():
        stack.change(index, new_value)

These algorithms assume the stack is implemented using the Stack class mentioned
earlier. Note that the CHANGE operation checks whether the index is within the
valid range before attempting to change the value. Adjustments may be needed
based on the specific language or context of your implementation.

Q4. What is Circular Linked List? State the advantages and
disadvantages of Circular Linked List over Doubly Linked List
and Singly Linked List. Also write advantages of Linked List
over an Array.
Ans: A Circular Linked List is a variation of a linked list in which the last node of the
list points back to the first node instead of having a null reference. In other words,
the next pointer of the last node in the list is connected to the first node, forming a
closed loop or circle.

Key characteristics of a Circular Linked List:

1. Circular Structure:
 The last node in the list points to the first node, creating a circular or
cyclic structure.
 In a regular linked list, the last node typically points to null, indicating
the end of the list.
2. Traversal:
 Traversal in a circular linked list can start from any node, and you can
continue traversing the entire list by following the next pointers until
you reach the starting node again.
 This circular structure eliminates the need to check for null pointers
during traversal.
3. Insertion and Deletion:
 Insertion and deletion operations are generally more straightforward in
a circular linked list compared to a singly linked list. Adding or
removing a node involves adjusting the next pointers of neighboring
nodes.
4. Applications:
 Circular linked lists are used in applications where a continuous cycle of
operations is required, or when you need to efficiently rotate elements
in a circular fashion.
 Examples include scheduling algorithms, where a set of tasks need to
be executed in a circular manner, or in gaming applications for
managing player turns.
5. Advantages:
 When a pointer to the last node is maintained, circular linked lists offer
constant-time insertion and deletion at both the beginning and end of
the list, whereas singly linked lists require traversal to the end for such
operations.
6. Disadvantages:
 Traversing a circular linked list requires careful handling to avoid an
infinite loop, as there is no null pointer to indicate the end of the list.

Here's a simple representation of a circular linked list:

+-----+     +-----+     +-----+
|  1  | --> |  2  | --> |  3  | --+
+-----+     +-----+     +-----+   |
   ^                              |
   +------------------------------+

In this example, the next pointer of the last node (with value 3) points back to the
first node (with value 1), forming a circular structure.
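A minimal circular singly linked list can be sketched as follows (the class and function names are illustrative, not from any standard library); note how the traversal stops when it returns to the head, rather than on a null pointer:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def make_circular(values):
    # Build a list of nodes and point the last node back to the first.
    head = Node(values[0])
    tail = head
    for v in values[1:]:
        tail.next = Node(v)
        tail = tail.next
    tail.next = head  # close the loop
    return head

def traverse_once(head):
    # Collect each value exactly once; returning to `head` marks the end,
    # replacing the usual `node is None` check of a singly linked list.
    out, node = [], head
    while True:
        out.append(node.value)
        node = node.next
        if node is head:
            break
    return out

values = traverse_once(make_circular([1, 2, 3]))  # [1, 2, 3]
```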

Circular linked lists, singly linked lists, and doubly linked lists are different types of
linked list structures, each with its own set of advantages and disadvantages. Let's
compare Circular Linked Lists with Singly Linked Lists and Doubly Linked Lists:

Advantages of Circular Linked Lists Over Singly Linked Lists:

1. Efficient Operations at Both Ends:
 With a pointer kept to the last node, circular linked lists allow
constant-time insertion and deletion at both the beginning and end of
the list. In contrast, singly linked lists require traversal to reach the end.
2. Simpler Insertion and Deletion:
 Insertion and deletion operations are generally more straightforward in
circular linked lists compared to singly linked lists, as there is no need
to update the next pointer of the previous node when inserting at the
beginning or end.

Disadvantages of Circular Linked Lists Over Singly Linked Lists:

1. Complex Traversal:
 Traversing a circular linked list requires careful handling to avoid an
infinite loop, as there is no null pointer to indicate the end of the list.
Extra caution is needed during traversal.
2. Harder Termination Logic:
 Per-node memory is the same as in a singly linked list (one next pointer
per node), but loop conditions and boundary cases are easier to get
wrong, since reaching the head again, rather than null, marks the end of
the list.

Advantages of Circular Linked Lists Over Doubly Linked Lists:

1. Simpler Implementation:
 The implementation of a circular linked list is generally simpler than
that of a doubly linked list. Doubly linked lists require maintaining both
next and previous pointers, making them more complex.
2. Constant Time Insertion and Deletion at Both Ends:
 Similar to the advantage over singly linked lists, circular linked lists
allow constant-time insertion and deletion at both ends, while doubly
linked lists require more adjustments due to the presence of previous
pointers.

Disadvantages of Circular Linked Lists Over Doubly Linked Lists:

1. Lack of Bidirectional Traversal:
 Circular (singly) linked lists only support forward traversal. Doubly linked
lists, on the other hand, allow bidirectional traversal, enabling operations
such as reverse traversal and easier removal of a node when given a
reference to it.
2. Costly Predecessor Access:
 Deleting or inserting before a given node requires walking around the
circle to find its predecessor, which a doubly linked list can do in
constant time via its previous pointer.

In summary, the choice between circular linked lists, singly linked lists, and doubly
linked lists depends on the specific requirements of the application. Circular linked
lists are advantageous when constant-time operations at both ends are crucial, and
simplicity in implementation is preferred. However, careful handling is required
during traversal to avoid infinite loops. Doubly linked lists provide bidirectional
traversal but come with increased complexity and potentially higher memory
overhead. Singly linked lists are a more straightforward alternative but may not offer
the same efficiency in certain operations.
Linked lists and arrays are both data structures used for organizing and storing
collections of elements, but they have different characteristics. Here are some
advantages of linked lists over arrays:

1. Dynamic Size:
 Linked lists can easily grow or shrink in size during program execution.
Memory can be dynamically allocated or deallocated, allowing for
efficient memory usage based on the actual number of elements in the
list. In contrast, arrays typically have a fixed size when declared.
2. Constant-Time Insertion and Deletion:
 Inserting or deleting elements in a linked list, particularly in the middle,
can be done in constant time, provided the reference to the node is
available. In arrays, these operations may require shifting elements,
resulting in a time complexity of O(n), where n is the number of
elements.

3. Efficient Memory Utilization:
 Linked lists don't require contiguous memory locations, which means
that memory can be scattered across different regions. This allows for
efficient utilization of memory, especially when dealing with memory
fragmentation. In contrast, arrays need contiguous memory, and
resizing may involve copying elements to a new location.
4. Ease of Implementation for Insertions:
 Inserting an element in the middle of a linked list involves updating
pointers, making it a simpler operation compared to arrays, where
elements may need to be shifted to accommodate the new element.
5. No Pre-allocation of Memory:
 Linked lists don't require pre-allocation of memory for a specific size,
unlike arrays. This flexibility is particularly useful when the number of
elements in the data structure is unknown or may change frequently.
6. No Wasted Memory:
 In a linked list, memory is allocated only as needed, avoiding the
problem of wasted memory that can occur with arrays, where a larger
size may be allocated than necessary.
7. Ease of Merging and Splitting:
 Linked lists can be easily merged or split by adjusting pointers, making
them more flexible for certain operations compared to arrays.
8. Support for Various Data Types:
 Each node in a linked list can potentially hold a different data type,
allowing for more flexibility in the types of elements that can be stored.
In contrast, arrays typically require elements of the same data type.

While linked lists have these advantages, it's important to note that they also come
with trade-offs, such as increased memory overhead due to the storage of pointers
and the lack of constant-time random access. The choice between a linked list and an
array depends on the specific requirements of the application and the nature of the
operations performed on the data.
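The constant-time insertion advantage can be sketched with a tiny singly linked list (the `ListNode` class and `insert_after` helper are illustrative names, not a standard API):

```python
class ListNode:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_after(node, value):
    # O(1): splice a new node in after `node`; no elements are shifted,
    # unlike insertion into the middle of an array.
    node.next = ListNode(value, node.next)

head = ListNode(1, ListNode(2, ListNode(4)))
insert_after(head.next, 3)  # insert 3 after the node holding 2

out = []
n = head
while n:
    out.append(n.value)
    n = n.next
# out == [1, 2, 3, 4]
```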

Q5. Write algorithms for Inorder, Preorder, and Postorder
traversal operations on a binary tree. Construct a binary tree
for the following:
Preorder: A B D H E C F G I J
Inorder: D H B E A F C I G J

Ans: Tree traversal algorithms are used to visit and process all the nodes of a binary
tree. Here are the algorithms for Inorder, Preorder, and Postorder Traversal of a
binary tree:

Binary Tree Node Definition

First, let's define a simple binary tree node:

class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

Inorder Traversal Algorithm

In Inorder traversal, we visit the left subtree, then the current node, and finally the
right subtree.

Algorithm InorderTraversal(node):
    if node is not null:
        InorderTraversal(node.left)
        Process(node)
        InorderTraversal(node.right)

Preorder Traversal Algorithm

In Preorder traversal, we visit the current node first, followed by the left subtree, and
then the right subtree.

Algorithm PreorderTraversal(node):
    if node is not null:
        Process(node)
        PreorderTraversal(node.left)
        PreorderTraversal(node.right)

Postorder Traversal Algorithm

In Postorder traversal, we visit the left subtree, the right subtree, and finally the
current node.

Algorithm PostorderTraversal(node):
    if node is not null:
        PostorderTraversal(node.left)
        PostorderTraversal(node.right)
        Process(node)

Example Usage

Here's an example of creating a binary tree and using the above traversal algorithms:

def Process(node):
    print(node.key, end=' ')

# Create a sample binary tree
root = TreeNode(1)
root.left = TreeNode(2)
root.right = TreeNode(3)
root.left.left = TreeNode(4)
root.left.right = TreeNode(5)
root.right.left = TreeNode(6)
root.right.right = TreeNode(7)

# Inorder Traversal
print("Inorder Traversal:")
InorderTraversal(root)
print()

# Preorder Traversal
print("Preorder Traversal:")
PreorderTraversal(root)
print()

# Postorder Traversal
print("Postorder Traversal:")
PostorderTraversal(root)

This will output:

Inorder Traversal:
4 2 5 1 6 3 7

Preorder Traversal:
1 2 4 5 3 6 7

Postorder Traversal:
4 5 2 6 7 3 1

These traversal algorithms are fundamental for exploring and processing binary trees
in various applications, such as expression tree evaluation, tree-based searches, and
more.

To construct a binary tree from its preorder and inorder traversals, you can follow
these steps recursively. The idea is to pick the root node from the preorder traversal,
then find its position in the inorder traversal to determine the left and right subtrees.
Repeat this process for each subtree until the entire tree is constructed.

Let's construct the binary tree for the given preorder and inorder traversals:

Preorder: A B D H E C F G I J
Inorder: D H B E A F C I G J

1. Pick the Root:
 The first element in the preorder traversal is the root: A.
2. Find Root in Inorder:
 In the inorder traversal, find the position of A. The elements to the left
of A (D H B E) are the left subtree, and the elements to the right (F C I G
J) are the right subtree.
3. Repeat for Left and Right Subtrees:
 For the left subtree, the next element in the preorder traversal is B. In
the left subtree's inorder traversal (D H B E), B is the root.
 For the right subtree, the next element in the preorder traversal is C. In
the right subtree's inorder traversal (F C I G J), C is the root.
4. Repeat for Each Subtree:
 Continue this process recursively for each subtree until all nodes are
placed.

Let's represent this in code:

class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def build_tree(preorder, inorder):
    if not preorder or not inorder:
        return None
    root_val = preorder[0]
    root = TreeNode(root_val)
    root_index = inorder.index(root_val)
    root.left = build_tree(preorder[1:1 + root_index], inorder[:root_index])
    root.right = build_tree(preorder[1 + root_index:], inorder[root_index + 1:])
    return root

def print_inorder(node):
    if node:
        print_inorder(node.left)
        print(node.key, end=' ')
        print_inorder(node.right)

# Given traversals
preorder_traversal = ['A', 'B', 'D', 'H', 'E', 'C', 'F', 'G', 'I', 'J']
inorder_traversal = ['D', 'H', 'B', 'E', 'A', 'F', 'C', 'I', 'G', 'J']

# Build the tree
root_node = build_tree(preorder_traversal, inorder_traversal)

# Print the inorder traversal of the constructed tree to verify
print("Inorder Traversal of Constructed Tree:")
print_inorder(root_node)

This will output:

Inorder Traversal of Constructed Tree:

D H B E A F C I G J

Q6. Discuss the following with reference to trees:
(i) Height of the tree (ii) Complete Binary Tree (iii)
Expression tree (iv) Sibling (v) Full Binary Tree

Ans: (i) Height of the Tree:

The height of a tree is the length of the longest path from the root to a leaf node. It
represents the depth or level of the tree. The height is usually measured as the
number of edges on the longest path, so a tree with a single node has a height of 0.

 Balanced Trees: Trees with nearly equal heights for left and right subtrees are
considered balanced. Examples include AVL trees and Red-Black trees.
 Unbalanced Trees: Trees where one subtree is significantly deeper than the
other are unbalanced. Unbalanced trees can lead to poor performance in
terms of search and retrieval.

(ii) Complete Binary Tree:

A complete binary tree is a binary tree in which every level is completely filled, except
possibly the last level, which is filled from left to right. In other words, all nodes are as
far left as possible.

 Properties:
 The last level is filled from left to right.
 If a node has a right child, it must also have a left child.
 The height of a complete binary tree is logarithmic with respect to the
number of nodes.

Complete binary trees are efficient for array-based representations and heap data
structures.
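In the array representation, the parent-child relationships of a complete binary tree reduce to index arithmetic, as sketched below (the function names are illustrative; this is the standard heap layout):

```python
def parent(i):
    return (i - 1) // 2

def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

# For a complete tree stored as [A, B, C, D, E, F]:
#   index 0 = A (root), 1 = B, 2 = C, 3 = D, 4 = E, 5 = F
p = parent(4)      # E's parent is at index 1 (B)
l = left_child(1)  # B's left child is at index 3 (D)
r = right_child(1) # B's right child is at index 4 (E)
```

Because no gaps appear before the last, partially filled level, no index is wasted, which is why this layout suits heaps so well.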

(iii) Expression Tree:

An expression tree is a binary tree used to represent expressions in a natural way that
reflects the structure of the expression. Each node in the tree represents an operator
or operand, and the leaves are the operands.

 Construction: The tree is constructed based on the order of operations and
the associativity of operators.
 Evaluation: Expression trees can be evaluated by recursively evaluating the
subtrees and applying the operators at each node.
 Example: For the infix expression "2 * (3 + 4)," the corresponding expression
tree would be:
      *
     / \
    2   +
       / \
      3   4
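The recursive evaluation described above can be sketched for this tree (the `ExprNode` class and `OPS` table are illustrative names; only + and * are handled here):

```python
class ExprNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(node):
    # A leaf holds an operand; an internal node holds an operator
    # applied to the values of its two subtrees.
    if node.left is None and node.right is None:
        return node.value
    return OPS[node.value](evaluate(node.left), evaluate(node.right))

# The tree for 2 * (3 + 4):
tree = ExprNode('*', ExprNode(2), ExprNode('+', ExprNode(3), ExprNode(4)))
result = evaluate(tree)  # 14
```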

(iv) Sibling:

In the context of trees, siblings are nodes that share the same parent. If two nodes
are siblings, they are at the same level in the tree and have the same parent.

 Example: In the tree below, nodes B and C are siblings, as are nodes D and E.
      A
     / \
    B   C
   / \
  D   E

 Sibling Relationship: Nodes with different parents are not siblings, even
if they appear at the same level of the tree.

(v) Full Binary Tree:

A full binary tree (sometimes called a proper binary tree or 2-tree) is a binary tree
in which every node has either 0 or 2 children; no node has exactly one child. Its
levels need not all be completely filled (a full tree in which they are is called a
perfect binary tree).

 Properties:
 The number of leaf nodes in a full binary tree is equal to the number of
internal nodes plus one.
 The minimum possible height of a full binary tree with n nodes is
log₂(n + 1) - 1, achieved when the tree is perfect.
 Example: The following is an example of a full binary tree:
        A
       / \
      B   C
     / \
    D   E

Full binary trees are often encountered in computer science, especially in the analysis
of algorithms and data structures. They have efficient array-based representations
and are used in applications like Huffman coding and binary heaps.

Q 7. What is an abstract data type? What aspects are not a
concern of an ADT? What is the advantage of an ADT?

Ans:
An abstract data type (ADT) is a high-level description of a set of operations that can
be performed on a data structure, along with the properties of these operations. It
defines a logical model for the data and the operations that can be performed on
that data, without specifying the internal details of how the data is represented or
how the operations are implemented.

In other words, an abstract data type encapsulates the behavior of a data structure,
focusing on what operations can be performed on it and what properties these
operations should have, rather than how those operations are implemented. This
separation of concerns allows for flexibility in choosing different implementations
while maintaining a consistent interface.

Key characteristics of abstract data types:

1. Encapsulation: ADTs encapsulate the data and operations into a single unit,
hiding the internal details from the user. Users interact with the data structure
through a well-defined interface.
2. Abstraction: ADTs provide a level of abstraction by specifying what
operations can be performed and their expected behavior without detailing
how these operations are carried out. This abstraction allows users to work
with the data structure without needing to understand its internal workings.
3. Flexibility: ADTs allow for multiple implementations. As long as the specified
operations and their properties are satisfied, different underlying data
structures can be used to achieve the same abstract behavior.
4. Data Hiding: ADTs hide the internal representation of data from the users.
Users only need to be aware of the interface and not the implementation
details.
5. Reusability: ADTs promote code reusability. Once an ADT is defined, it can be
reused in various applications without modification, as long as the interface
remains consistent.

Examples of abstract data types include:

 Stack: An abstract data type that supports operations like push, pop, and
peek.
 Queue: An abstract data type that supports operations like enqueue and
dequeue.

 List: An abstract data type supporting operations for managing a linear
collection of elements.
 Set: An abstract data type representing a collection of distinct elements with
operations like add, remove, and check membership.

Languages like Java and Python often provide standard libraries that include
implementations of various abstract data types, allowing developers to use these
data structures without worrying about their internal details.
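As a concrete illustration, a stack ADT can be sketched in C so that callers see only
the operations, never the representation. All names here (IntStack, push, pop, peek)
are assumptions for this sketch; an array backs the stack, but a linked-list version
could replace it without changing the interface:

```c
#include <stdbool.h>

#define STACK_CAP 100

// The representation is an implementation detail hidden behind the
// operations below; it could be swapped for a linked list unchanged.
typedef struct {
    int items[STACK_CAP];
    int top;                 // number of elements currently stored
} IntStack;

void stackInit(IntStack *s)          { s->top = 0; }
bool stackIsEmpty(const IntStack *s) { return s->top == 0; }

bool push(IntStack *s, int v) {      // returns false if the stack is full
    if (s->top == STACK_CAP) return false;
    s->items[s->top++] = v;
    return true;
}

int peek(const IntStack *s) { return s->items[s->top - 1]; }
int pop(IntStack *s)        { return s->items[--s->top]; }
```

A caller that pushes 1 and then 2 sees peek return 2, then pop return 2 and 1 in
LIFO order, regardless of how the stack stores its elements internally.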

In an Abstract Data Type (ADT), the focus is on specifying the logical behavior of the
data and the operations that can be performed on it, while intentionally ignoring
certain details related to the internal representation and implementation. Here are
aspects that are typically not concerned in an ADT:

1. Implementation Details:
 ADTs do not specify how the data and operations are implemented
internally. The actual algorithm, data structures, or programming details
are hidden from users.
 For example, the implementation of a stack could use an array, a linked
list, or some other structure, but the ADT only defines the push, pop,
and peek operations without specifying how these are carried out.
2. Data Representation:
 The ADT does not dictate how the data is stored or represented in
memory. It abstracts away the details of the internal representation.
 For instance, a set ADT may describe operations like adding, removing,
and checking membership of elements, but it doesn't specify whether a
hash table, a balanced tree, or another structure is used to implement
the set.
3. Algorithmic Complexity:
 ADTs do not prescribe the time or space complexity of the operations.
The efficiency of the operations is not part of the ADT definition.
 For instance, a queue ADT may define enqueue and dequeue
operations, but it doesn't specify whether these operations should have
constant, logarithmic, or linear time complexity.
4. Concurrency and Synchronization:
 ADTs do not address issues related to concurrent access and
synchronization. How the data structure behaves in a multi-threaded
environment is not specified.
 For example, a list ADT might define operations for adding and
removing elements, but it doesn't detail how these operations should
behave when accessed concurrently by multiple threads.
5. Error Handling:
  ADTs typically do not define specific error-handling mechanisms or
exceptions. The behavior in exceptional cases may not be explicitly
addressed in the ADT.
 For instance, a stack ADT may not specify how it behaves if a pop
operation is attempted on an empty stack.
6. Memory Management:
 ADTs do not specify how memory is allocated or deallocated. The
management of memory resources is typically left to the
implementation.
 For example, a dynamic array ADT may allow resizing, but the ADT
doesn't detail how memory is allocated or freed during resizing.

By abstracting away these details, ADTs provide a high-level interface for users,
allowing them to work with the data structure based on its logical behavior without
needing to be concerned about the underlying implementation complexities. This
abstraction promotes flexibility, reusability, and modularity in software design.

The use of Abstract Data Types (ADTs) offers several advantages in software
development. Here are some key benefits:

1. Abstraction and Modularity:


 ADTs provide a high level of abstraction by defining a set of operations
and properties without revealing the internal details of the
implementation. This abstraction allows developers to focus on the
logical behavior of the data structure, promoting modularity in
software design.
2. Encapsulation:
 ADTs encapsulate the data and operations into a single unit. The
internal details are hidden from the users, allowing for changes to the
implementation without affecting the users of the ADT. This
encapsulation enhances information hiding and reduces complexity.
3. Reusability:
 Once an ADT is defined, it can be reused in various applications without
modification. Developers can utilize the same abstract interface in
different contexts, promoting code reuse and minimizing the need for
redundant code.
4. Flexibility and Multiple Implementations:
 ADTs allow for multiple implementations as long as they satisfy the
specified operations and properties. This flexibility enables developers
to choose the most suitable underlying data structure or algorithm for
a particular application.
5. Simplified Maintenance:
  Changes to the internal representation or implementation of an ADT do
not impact the external users, making maintenance and updates more
manageable. Modifications can be made to improve efficiency or fix
bugs without affecting the ADT's interface.
6. Ease of Understanding:
 ADTs provide a clear and concise way to describe data structures and
their behavior. Developers can work with abstract concepts without
getting bogged down in low-level details, making it easier to
understand and reason about the code.
7. Collaboration:
 ADTs facilitate collaboration among developers working on different
parts of a system. By defining clear interfaces, teams can work
independently on various components, provided they adhere to the
specified ADTs, fostering collaboration and parallel development.
8. Standardization:
 ADTs provide a standardized way to interact with data structures. This
standardization helps in creating a common understanding among
developers, making it easier to communicate and share code within a
team or across projects.
9. Ease of Testing:
 The abstract nature of ADTs simplifies testing, as developers can focus
on verifying the correctness of the specified operations rather than
testing intricate details of the implementation. This separation of
concerns improves the reliability of testing.
10. Promotion of Good Software Engineering Practices:
 ADTs encourage the separation of concerns, modular design, and
adherence to interfaces. These practices contribute to the creation of
more maintainable, scalable, and understandable software systems.

In summary, the advantages of ADTs lie in their ability to abstract away
implementation details, promote modularity and code reuse, facilitate collaboration,
and provide a clear and standardized interface for working with data structures.
These benefits contribute to the development of robust, flexible, and maintainable
software systems.

Q 8. How is memory allocated in C? Explain in detail the types
of memory allocation and memory deallocation, with
suitable examples.

Ans: In C, memory can be allocated in two primary ways: stack allocation and heap
allocation.

1. Stack Allocation:

 Description:
 Stack memory is a region of memory where local variables and function
call information are stored.
 Memory is automatically allocated and deallocated as functions are
called and return.
 It follows a Last In, First Out (LIFO) order, meaning the most recently
allocated memory is the first to be deallocated.
 Example:
#include <stdio.h>

int main() {
    int a = 5; // Variable 'a' is allocated on the stack
    printf("Stack-allocated variable: %d\n", a);
    // Memory is automatically deallocated when the function returns
    return 0;
}

2. Heap Allocation:

 Description:
 Heap memory is a region of memory used for dynamic memory
allocation.
 Memory must be managed explicitly, and functions like malloc(),
calloc(), and realloc() are used for allocation.
 Memory allocated on the heap persists until explicitly deallocated using
free().
 Heap memory provides more flexibility in terms of memory size and
lifetime compared to stack memory.
 Example:

#include <stdio.h>
#include <stdlib.h>

int main() {
    // Allocate memory for an integer on the heap
    int *p = (int *)malloc(sizeof(int));
    if (p != NULL) {
        *p = 10;
        printf("Heap-allocated variable: %d\n", *p);
        free(p); // Deallocate the memory when done
    } else {
        printf("Memory allocation failed\n");
    }
    return 0;
}


Memory Allocation Functions:

1. malloc():
  Allocates a specified number of bytes of uninitialized memory.

int *p = (int *)malloc(5 * sizeof(int));

2. calloc():
  Allocates memory for an array of a specified number of elements,
initializing every byte to zero.

int *p = (int *)calloc(5, sizeof(int));

3. realloc():
  Changes the size of a previously allocated block, preserving its contents
up to the smaller of the old and new sizes.

int *p = (int *)malloc(5 * sizeof(int));
p = (int *)realloc(p, 10 * sizeof(int));

Memory Deallocation Function:

 free():
 Deallocates the memory previously allocated by malloc(), calloc(), or
realloc().
free(p);
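One caution with the realloc() pattern shown earlier: assigning the result directly
back to p (p = realloc(p, ...)) loses the original pointer if realloc() fails, since
it returns NULL while leaving the old block allocated, causing a leak. A minimal
sketch of the safer temporary-pointer pattern (growIntArray is an illustrative name):

```c
#include <stdlib.h>

// Resize an int array to newCount elements. On success returns the
// (possibly moved) block; on failure returns NULL and the caller's
// original pointer remains valid and must still be freed.
int *growIntArray(int *p, size_t newCount) {
    int *tmp = realloc(p, newCount * sizeof *tmp);
    if (tmp == NULL) {
        return NULL;   // p is untouched; no memory was lost
    }
    return tmp;
}
```

A caller would write `int *bigger = growIntArray(p, 10); if (bigger) p = bigger;`
so that p is only overwritten once the resize is known to have succeeded.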

Differences and Considerations:

 Scope:
 Stack memory is limited to the scope of the function or block in which
it is allocated.

 Heap memory has a broader scope and can persist beyond the function
or block.
 Automatic vs. Manual Management:
 Stack memory is managed automatically by the compiler.
 Heap memory requires manual management, and it's the responsibility
of the programmer to allocate and deallocate memory appropriately.
 Size and Flexibility:
 Stack memory is typically limited, and the size needs to be known at
compile time.
 Heap memory provides more flexibility, and the size can be determined
at runtime.
 Lifetime:
 Stack memory is deallocated automatically when the function or block
exits.
 Heap memory persists until explicitly deallocated, making it suitable for
long-term storage.

Choosing between stack and heap allocation depends on factors such as the desired
scope, lifetime, and flexibility of memory usage in a given program. Careful
management of heap-allocated memory is crucial to prevent memory leaks and
other issues.

Q9. Distinguish between:

a. Prefix and postfix expressions
b. Iteration and recursion
c. Arrays and linked lists

Ans: a. Prefix and Postfix Expression:

1. Prefix Expression:
  The operators are written before their operands.
  It is typically evaluated by scanning from right to left.
  Examples: +AB (Infix: A + B), *-AB/CD (Infix: (A - B) * (C / D)).
2. Postfix Expression:
  The operators are written after their operands.
  It is evaluated by scanning from left to right using a stack.
  Examples: AB+ (Infix: A + B), AB-CD/* (Infix: (A - B) * (C / D)).

Example:

 Infix Expression: A + B * C
 Prefix Expression: +A*BC
 Postfix Expression: ABC*+
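The left-to-right, stack-based evaluation of a postfix expression can be sketched in
C for single-digit operands (evalPostfix is an illustrative name, and the fixed-size
stack assumes short expressions):

```c
#include <ctype.h>

// Evaluate a postfix expression of single-digit operands, e.g. "23*4+".
int evalPostfix(const char *expr) {
    int stack[64];
    int top = 0;
    for (const char *c = expr; *c; c++) {
        if (isdigit((unsigned char)*c)) {
            stack[top++] = *c - '0';          // operand: push its value
        } else {
            int b = stack[--top];             // second operand popped first
            int a = stack[--top];
            switch (*c) {
                case '+': stack[top++] = a + b; break;
                case '-': stack[top++] = a - b; break;
                case '*': stack[top++] = a * b; break;
                case '/': stack[top++] = a / b; break;
            }
        }
    }
    return stack[0];                          // final result
}
```

For example, "23*4+" (infix 2 * 3 + 4) evaluates to 10: 2 and 3 are pushed, * leaves
6 on the stack, 4 is pushed, and + combines them.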

b. Iteration and Recursion:

1. Iteration:
 Description:
 A repetitive execution of a set of statements using loops or
other control flow constructs.
 It is based on the concept of repeating a block of code until a
specified condition is met.
 Advantages:
 Often more memory-efficient.
 Generally faster due to lower overhead.
2. Recursion:
 Description:
 A function calls itself directly or indirectly to solve a smaller
instance of the same problem.
 It is based on the concept of breaking down a problem into
smaller subproblems.

 Advantages:
 Can lead to more concise and readable code.
 Suitable for solving problems with inherent recursive structures.

Example:

 Iteration: Computing the factorial of a number using a loop.
 Recursion: Computing the factorial of a number using a recursive function.
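Both approaches can be sketched in C; the function names are illustrative:

```c
// Iterative factorial: repeat a multiplication inside a loop.
long factorialIter(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

// Recursive factorial: reduce the problem to a smaller instance of itself.
long factorialRec(int n) {
    if (n <= 1) return 1;              // base case stops the recursion
    return n * factorialRec(n - 1);
}
```

Both compute 5! = 120; the iterative version uses constant stack space, while the
recursive one uses a call-stack frame per level, illustrating the trade-offs listed
above.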

c. Array and Linked List:

1. Array:
  Description:
 A data structure that stores elements of the same type in
contiguous memory locations.
 The size of the array is fixed during declaration.
 Access to elements is done using indices.
 Advantages:
 Constant time access to elements using indices.
 Memory-efficient for a fixed-size collection.
2. Linked List:
 Description:
 A data structure that consists of nodes, where each node
contains data and a reference (link) to the next node in the
sequence.
 The size of the linked list can change dynamically during
program execution.
 Access to elements is done sequentially, starting from the head
or tail.
 Advantages:
 Dynamic size adjustment (no need to declare size in advance).
 Efficient insertion and deletion of elements.

Example:

 Array: int arr[5] = {1, 2, 3, 4, 5};
 Linked List: struct Node { int data; struct Node* next; };
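The linked-list advantages above, dynamic size and cheap insertion, can be sketched
in C using the same struct Node declaration; pushFront and listSum are illustrative
helpers, not from the original text:

```c
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

// Insert at the head in O(1): no shifting of elements, unlike an array.
struct Node *pushFront(struct Node *head, int value) {
    struct Node *n = malloc(sizeof *n);
    n->data = value;
    n->next = head;
    return n;
}

// Sequential traversal from the head: the only way to reach later elements.
int listSum(const struct Node *head) {
    int sum = 0;
    for (; head != NULL; head = head->next)
        sum += head->data;
    return sum;
}
```

Pushing 1, 2, 3 in turn builds the list 3 → 2 → 1 without declaring any size in
advance, in contrast to int arr[5], whose length is fixed at declaration.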

In summary, the distinctions between prefix and postfix expressions, iteration and
recursion, and arrays and linked lists lie in their representations, usage patterns, and
characteristics, each serving different needs in programming and problem-solving.

Q10. What are the two types of complexities? Explain them.
Explain the concepts of Big O, Big Omega, and Theta
notation.

Ans: In computer science and algorithm analysis, there are two main types of
complexities: time complexity and space complexity.

1. Time Complexity:

Definition:

 Time complexity is a measure of the amount of time an algorithm takes to
complete as a function of the input size.

Key Points:

 It describes how the runtime of an algorithm grows as the size of the input
increases.
 Time complexity is expressed using big O notation (e.g., O(n), O(log n),
O(n^2)), representing an upper bound on the growth rate of the algorithm's
runtime.

Example:

def linearSearch(arr, target):
    for element in arr:
        if element == target:
            return True
    return False

# Time complexity: O(n) - Linear time complexity

2. Space Complexity:

Definition:

 Space complexity is a measure of the amount of memory an algorithm uses as
a function of the input size.

Key Points:

 It describes how the memory requirements of an algorithm grow with the
input size.
 Space complexity is also expressed using big O notation, representing an
upper bound on the growth rate of the algorithm's memory usage.

Example:

def sumOfElements(arr):
    total = 0
    for element in arr:
        total += element
    return total

# Space complexity: O(1) - Constant space complexity

Asymptotic Notations:

 Big O (O): An upper bound on an algorithm's growth rate. Saying an algorithm
is O(n²) means its running time grows no faster than proportionally to n² for
large inputs; it is commonly used to describe the worst case.
 Big Omega (Ω): A lower bound on the growth rate. Saying an algorithm is Ω(n)
means its running time grows at least proportionally to n for large inputs; it
is often associated with the best case.
 Theta (Θ): A tight bound. An algorithm is Θ(n log n) when its running time is
both O(n log n) and Ω(n log n), i.e., it grows at exactly that rate up to
constant factors.

Note:

 While time complexity focuses on the efficiency of an algorithm in terms of
execution time, space complexity focuses on the efficiency in terms of
memory usage.
 Both time and space complexity are crucial for analyzing and comparing
algorithms, helping developers choose the most suitable algorithm for a given
problem based on the available resources and constraints.
