DAA Assignment-1


1. Explain how many algorithms you can write for finding
prime numbers. Compare which is the simplest and which is
the most efficient by comparing their time complexities.

A.
There are several algorithms that can be used to find prime
numbers. Here are some of the most common ones:

Trial Division:
This is the simplest algorithm for finding prime numbers. We
divide the number n by every integer between 2 and n-1. If none
of them divides n evenly, then n is prime. The time complexity
of this algorithm is O(n); checking divisors only up to √n
reduces it to O(√n).
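
A minimal Python sketch of trial division (the function name and
the √n cutoff are illustrative, not part of the answer above):

def is_prime(n):
    # Trial division: test divisors from 2 up to floor(sqrt(n)).
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor, so n is composite
        d += 1
    return True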

Sieve of Eratosthenes:
This is a more efficient algorithm for finding all primes up to
n. We create a list of the numbers from 2 to n, then mark the
multiples of each prime (2, 3, 5, 7, and so on) as composite.
The numbers left unmarked are prime. The time complexity of this
algorithm is O(n log log n).
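
A short sketch of the sieve in Python (the boolean-list
representation is an assumption of this sketch):

def sieve_of_eratosthenes(n):
    # is_prime[i] will be True iff i is prime; 0 and 1 are not.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # Mark every multiple of p, starting at p*p, as composite.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i in range(2, n + 1) if is_prime[i]]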

Sieve of Sundaram:
This is another sieve-based algorithm for finding prime numbers.
We create a list of the numbers from 1 to n/2 and mark every
number of the form i + j + 2ij, where i <= j and
i + j + 2ij <= n/2, as excluded. For each remaining number m,
the number 2m + 1 is prime (the prime 2 is added separately).
The time complexity of this algorithm is O(n log n).
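
A Python sketch under the same indexing convention (the helper
name and list representation are illustrative):

def sieve_of_sundaram(n):
    # Returns all primes up to n.
    if n < 2:
        return []
    k = (n - 1) // 2
    marked = [False] * (k + 1)
    for i in range(1, k + 1):
        j = i
        while i + j + 2 * i * j <= k:
            marked[i + j + 2 * i * j] = True
            j += 1
    # Each unmarked m in 1..k corresponds to the prime 2m + 1.
    return [2] + [2 * m + 1 for m in range(1, k + 1) if not marked[m]]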

Miller-Rabin Primality Test:
This is a probabilistic algorithm for testing whether a number
is prime. We randomly choose a base a between 2 and n-1 and
check whether a is a witness to the compositeness of n. If n is
composite, at least 3/4 of the possible bases are witnesses, so
by testing enough random bases we can decide whether n is prime
or composite with arbitrarily high probability. The time
complexity of this algorithm is O(k log^3 n), where k is the
number of iterations.
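
A compact Python sketch of the test (variable names and the
default round count are illustrative):

import random

def miller_rabin(n, k=20):
    # Returns False if n is definitely composite,
    # True if n is probably prime.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True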

AKS Primality Test:
This is a deterministic primality test discovered in 2002 by
Agrawal, Kayal, and Saxena. It was the first algorithm proven to
decide primality in time polynomial in the number of digits of
n. The time complexity of the original version is O(log^12 n),
and later variants improve the exponent.

Among these algorithms, Trial Division is the simplest but also
the least efficient. The Sieve of Eratosthenes and the Sieve of
Sundaram are more efficient than Trial Division for generating
all primes up to n, but they require more memory. The
Miller-Rabin test is probabilistic and can determine whether a
single large number is prime with high probability very quickly.
The AKS test is deterministic and runs in polynomial time, but
its exponent and constants are so large that in practice the
probabilistic Miller-Rabin test is usually preferred.

2. Explain Insertion Sort Algorithm with suitable example.
(Definition, Algorithm, Example)

A.
Insertion Sort is a simple sorting algorithm that works by
building the final sorted array one item at a time. It is a
comparison-based algorithm in which each element is compared
with the elements that come before it and is inserted into its
correct position in the final sorted array.

Example (sorting 9 6 2 5 4; each line shows the array after one
element is inserted into the sorted prefix on the left):

9 6 2 5 4

6 9 2 5 4

2 6 9 5 4

2 5 6 9 4

2 4 5 6 9

Algorithm:

insertionSort(array)
    mark first element as sorted
    for each unsorted element X
        'extract' the element X
        for j <- lastSortedIndex down to 0
            if element at j > X
                move element at j to the right by 1
            else
                break loop and insert X after position j
end insertionSort
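
A runnable Python version of the same idea (the function name is
illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        x = arr[i]          # extract the next unsorted element
        j = i - 1
        # Shift sorted elements greater than x one slot right.
        while j >= 0 and arr[j] > x:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = x      # insert x into its correct position
    return arr

print(insertion_sort([9, 6, 2, 5, 4]))   # [2, 4, 5, 6, 9]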
3. Explain the properties of an algorithm with an example.

A.
An algorithm is a step-by-step procedure for solving a
problem. It is important that an algorithm has certain
properties that make it effective and efficient in solving the
problem. Here are some important properties of an algorithm:

Input: An algorithm must have input, which is the data or
information that the algorithm processes to produce the output.
The input can come in different forms, such as a set of numbers,
a string of characters, or a database.

Output: An algorithm must have output, which is the result
produced by the algorithm after processing the input. The output
can also come in different forms, such as a sorted list, a
calculated value, or a report.

Definiteness: An algorithm must be clear and unambiguous in its
instructions. Each step of the algorithm must be well-defined
and understandable to the person or machine executing it.

Finiteness: An algorithm must have a finite number of steps. It
must eventually terminate and produce a result within a
reasonable amount of time. If the algorithm runs forever or
takes too long to produce a result, it is not an effective
algorithm.

Effectiveness: Every step of an algorithm must be basic enough
to be carried out exactly, and the algorithm should not contain
unnecessary statements.

Here's an example of an algorithm that exhibits these
properties:

Algorithm: Find the maximum number in an array of integers

Input: An array of integers
Output: The maximum number in the array

1. Set max to the first element in the array
2. Loop through the array starting at the second element
3. If the current element is greater than max, set max to the
current element
4. When the loop is finished, return max as the output
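
A brief Python sketch of these four steps (the function name is
illustrative):

def find_max(arr):
    max_val = arr[0]         # step 1: start with the first element
    for x in arr[1:]:        # step 2: loop from the second element
        if x > max_val:      # step 3: keep the larger value
            max_val = x
    return max_val           # step 4: return the maximum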
This algorithm has input, output, definiteness, finiteness, and
effectiveness. It takes an array of integers as input and
returns the maximum number in the array as output. The
instructions are clear and unambiguous, and there is a finite
number of steps. The algorithm will always produce the correct
output for any valid input, and it is general enough to work
with arrays of any size and with any valid integers.

4. Give the algorithm for matrix multiplication and find the
time complexity of the algorithm using step count method.

A.
The algorithm for matrix multiplication is as follows:

Algorithm: Matrix Multiplication

Input: Two matrices A and B of dimensions m x n and n x p,
respectively
Output: A matrix C of dimensions m x p, which is the product
of matrices A and B

1. Create an empty matrix C of dimensions m x p
2. For i from 1 to m:
3.     For j from 1 to p:
4.         Set C[i][j] to 0
5.         For k from 1 to n:
6.             Set C[i][j] to C[i][j] + A[i][k] * B[k][j]
7. Return C as the output
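
The same steps in Python, using plain nested lists (the function
name is illustrative):

def matrix_multiply(A, B):
    m, n, p = len(A), len(B), len(B[0])
    # Step 1: create an m x p result matrix filled with zeros.
    C = [[0] * p for _ in range(m)]
    for i in range(m):            # step 2
        for j in range(p):        # step 3
            for k in range(n):    # steps 5-6: accumulate the dot product
                C[i][j] += A[i][k] * B[k][j]
    return C                      # step 7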
The time complexity of the algorithm can be calculated using
the step count method. In this method, we count the number of
basic operations performed by the algorithm and use that count
to estimate the time complexity. Here's how we can apply the
step count method to this algorithm:

Step 1: Creating the empty matrix C counts as 1 step.
Step 2: The outer loop control executes n+1 times.
Step 3: The inner loop control executes n(n+1) times in total.
Step 4: Setting C[i][j] to 0 executes n^2 times.
Step 5: The innermost loop control executes n^2(n+1) times in
total.
Step 6: The statement C[i][j] = C[i][j] + A[i][k] * B[k][j]
executes n^3 times (each execution involves 1 multiplication,
1 addition, and 1 assignment, counted here as one step).
Step 7: Returning the output counts as 1 step.

(For simplicity we assume square matrices, i.e. m = n = p.)

Therefore, the total number of steps performed by the
algorithm is:

1 + (n+1) + n(n+1) + n^2 + n^2(n+1) + n^3 + 1

Simplifying this expression, we get:

1 + n + 1 + n^2 + n + n^2 + n^3 + n^2 + n^3 + 1
= 2n^3 + 3n^2 + 2n + 3

Now take the term with the highest exponent, 2n^3, ignore the
lower-order terms, and remove the constant factor, leaving n^3.
Hence, the time complexity of the matrix multiplication
algorithm is O(n^3) for square matrices (O(m*n*p) in general).
This means that the time taken by the algorithm grows with the
product of the dimensions of the matrices.

5. Calculate the time complexity of Linear Search Algorithm
using step count method.

A.
The time complexity of an algorithm measures how the running
time of the algorithm grows as the size of the input
increases. Linear search is a simple algorithm for finding a
target value in an unsorted list, and it has a worst-case time
complexity of O(n), where n is the size of the list.

To calculate the time complexity of linear search using the
step count method, we need to count the number of basic
operations performed by the algorithm for a given input size.
In the case of linear search, the basic operation is the
comparison of the target value with each element of the list
until the target value is found or the end of the list is
reached.

Let's consider an unsorted list of size n and a target value
that is not in the list. The worst-case scenario is that the
target value must be compared to every element of the list, and
the algorithm must iterate through the entire list of n
elements.

Here's how the step count method can be used to calculate the
time complexity of linear search:

Initialization: Set a counter variable c to 0.

For loop: For each element in the list, compare it to the
target value and increment the counter c by 1. If the target
value is found, exit the loop.

Return: If the target value is found, return the index of the
element. If the end of the list is reached without finding the
target value, return -1.
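
These steps in Python, with the comparison counter kept purely
for illustration (the counter and the tuple return value are
additions of this sketch, not part of the algorithm itself):

def linear_search(lst, target):
    c = 0                    # comparison counter, for analysis only
    for i, value in enumerate(lst):
        c += 1
        if value == target:
            return i, c      # found: index and comparisons used
    return -1, c             # not found after n comparisons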

The number of basic operations performed by the algorithm can
be expressed as a function of n: in the worst case, where the
target value is not in the list, the algorithm makes n
comparisons (n + 1 steps if we also count the final loop test).
Therefore, the time complexity of linear search using the step
count method is O(n), since the number of basic operations
performed by the algorithm grows linearly with the size of the
input list.

6. Find the time complexity for the following function using
step count method.

Function sum2(a: ElemList; n: integer): real;
var k: integer;
    s: real;
Begin
    s := 0;
    For k := 1 to n do
        s := s + a[k];
    sum2 := s;
End;

A.

The time complexity of the given function sum2 using the step
count method can be calculated as follows:

Initialization: Set s to 0. This operation takes constant time,
which we denote as O(1).

For loop: The loop runs for n iterations, adding the current
element of the array to s. Each iteration takes constant time,
i.e., O(1).

Return: Return the value of s. This operation takes constant
time, which we denote as O(1).

Therefore, the total time complexity of the function sum2 can
be expressed as the sum of the time taken by each step, which
is:

T(n) = O(1) + n * O(1) + O(1)
T(n) = O(n)

Hence, the time complexity of the function sum2 using the step
count method is O(n), meaning that the running time of the
algorithm increases linearly with the size of the input n.

7. Differentiate between best, average and worst case
efficiency with an example.

A.
In algorithm analysis, we often consider different cases to
evaluate the efficiency of an algorithm. These cases include
the best-case scenario, the average-case scenario, and the
worst-case scenario.

Best-case efficiency: The best-case efficiency of an algorithm
is the minimum possible time or space required to solve a
problem. It occurs when the input is already in the desired
state or when the algorithm performs exceptionally well on a
specific input. In other words, it is the fastest or most
efficient performance of an algorithm. The best-case
efficiency of an algorithm is rarely a good measure of its
overall efficiency because it is typically an unrealistic or
rare scenario.
Example: Consider linear search for a target value in an array.
The best-case scenario is when the target is at the first
position in the array, and the algorithm needs to check only
one element. In this case, the best-case efficiency of the
algorithm is O(1), which means that the algorithm runs in
constant time.

Worst-case efficiency: The worst-case efficiency of an
algorithm is the maximum possible time or space required to
solve a problem. It occurs when the input is the most
challenging or when the algorithm performs exceptionally poorly
on a specific input. In other words, it is the slowest or least
efficient performance of an algorithm. The worst-case
efficiency of an algorithm is an essential measure of its
overall efficiency because it is the scenario that we want to
avoid.
Example: For linear search, the worst-case scenario is when the
target is at the last position in the array or not present at
all, and the algorithm needs to check all n elements. In this
case, the worst-case efficiency of the algorithm is O(n), which
means that the algorithm runs in linear time.

Average-case efficiency: The average-case efficiency of an
algorithm is the expected time or space required to solve a
problem when the input is chosen randomly from all possible
inputs of a given size. It is a more realistic measure of the
efficiency of an algorithm because it takes into account the
probability distribution of inputs. The average-case efficiency
of an algorithm can be challenging to calculate because it
requires knowing the distribution of inputs.
Example: For linear search, the average-case efficiency depends
on where the target tends to occur. If the target is equally
likely to be at any position in the array, the algorithm
performs about n/2 comparisons on average, which is still O(n),
i.e. linear time on average. If the target is more likely to
occur near the beginning of the array, the average case is
better; if near the end, it is worse.

8. Explain any algorithm and find the time complexity using
operation count.
A.
One algorithm that is widely used in computer science is the
binary search algorithm, which is used to search for a target
value in a sorted array. The algorithm works by repeatedly
dividing the array in half until the target value is found or
it is determined that the value is not in the array.

Here is an example implementation of the binary search
algorithm in Python:

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # midpoint of the current range
        if arr[mid] == target:
            return mid            # found the target
        elif arr[mid] < target:
            low = mid + 1         # discard the lower half
        else:
            high = mid - 1        # discard the upper half
    return -1                     # target is not in the array
In this implementation, arr is the sorted array being
searched, and target is the value being searched for. The
algorithm works by maintaining two pointers, low and high,
which define the range of the array being searched. The
algorithm repeatedly computes the midpoint of the range using
the formula (low + high) // 2, and compares the value at the
midpoint to the target value. If the midpoint value is equal
to the target value, the function returns the index of the
midpoint. If the midpoint value is less than the target value,
the algorithm updates the low pointer to be one index higher
than the midpoint, effectively excluding the lower half of the
range from the search. Similarly, if the midpoint value is
greater than the target value, the algorithm updates the high
pointer to be one index lower than the midpoint, effectively
excluding the upper half of the range from the search.

The time complexity of the binary search algorithm can be
analyzed using the concept of "big O" notation, which provides
an upper bound on the number of operations performed by the
algorithm as a function of the input size. In the case of
binary search, the input size is the length of the array being
searched, denoted as n. The algorithm works by repeatedly
dividing the array in half until the target value is found or
the range is empty. The number of times the array can be
divided in half before the range is empty is equal to the
logarithm base 2 of the length of the array, denoted as log n.
Therefore, the time complexity of the binary search algorithm
is O(log n).

To compute the exact number of operations performed by the
algorithm, we can use the concept of "operation count". This
involves counting the number of times each operation in the
algorithm is performed, and expressing the total as a function
of the input size. In the case of the binary search algorithm,
the main operations are the comparisons between the midpoint
value and the target value, and the updates to the low and high
pointers. The number of comparisons performed by the algorithm
is at most log n, since the range of the search is divided in
half at each step. The number of pointer updates is also at
most log n, since each iteration updates at most one of the two
pointers. Therefore, the total number of operations performed
by the algorithm is O(log n).

9. Give Euclid's Algorithm for computing gcd(m, n) and
calculate the time complexity.

A.
Euclid's algorithm is an efficient method for computing the
greatest common divisor (GCD) of two positive integers, m and
n. The algorithm works by repeatedly taking the remainder of m
divided by n, and setting m to be n and n to be the remainder,
until the remainder is zero. The GCD is then the value of m.

Here is an example implementation of Euclid's algorithm in
Python:

def gcd(m, n):
    while n != 0:
        r = m % n   # remainder of m divided by n
        m = n
        n = r
    return m
In this implementation, the variables m and n represent the
two positive integers whose GCD is to be computed. The
algorithm works by repeatedly computing the remainder r of m
divided by n, and setting m to be n and n to be r. The loop
terminates when n is equal to zero, at which point the GCD is
the value of m.

The time complexity of Euclid's algorithm can be analyzed using
the concept of "big O" notation, which provides an upper bound
on the number of operations performed by the algorithm as a
function of the input size. In the case of Euclid's algorithm,
the input size is the maximum of the two input integers,
denoted as max(m,n). It can be shown that the larger argument
is at least halved every two iterations, since m mod n < m/2
whenever n <= m. Therefore, the number of iterations is at most
proportional to log(max(m,n)), and the time complexity of
Euclid's algorithm is O(log(max(m,n))).

To compute the exact number of operations performed by the
algorithm, we can use the concept of "operation count". This
involves counting the number of times each operation in the
algorithm is performed, and expressing the total as a function
of the input size. In the case of Euclid's algorithm, the main
operations are the remainder computations and the updates to
the variables m and n. Each iteration performs one remainder
computation and two variable updates, and the number of
iterations is at most proportional to log(max(m,n)), since the
arguments at least halve every two steps. Therefore, the total
number of operations performed by the algorithm is
O(log(max(m,n))).

10. Write an algorithm to find the number of binary digits in
the binary representation of a positive decimal integer.
Calculate the time complexity.

A.
Here's an algorithm to find the number of binary digits in the
binary representation of a positive decimal integer:

1. Initialize a counter variable count to 0.
2. While the decimal integer is greater than 0, do the
   following:
   a. Divide the decimal integer by 2 and take the integer
      quotient.
   b. Increment the count variable by 1.
   c. Set the decimal integer to be the integer quotient from
      step 2a.
3. Return the value of count.
Here's an implementation of this algorithm in Python:

def binary_digits(decimal_int):
    count = 0
    while decimal_int > 0:
        decimal_int //= 2   # drop the least significant bit
        count += 1
    return count
The time complexity of this algorithm is O(log n), where n is
the decimal integer input to the function. Each iteration of
the loop divides the integer by 2, which removes one binary
digit. Therefore, the number of iterations equals the number of
bits in the binary representation of n, which is
floor(log2 n) + 1, so the running time is logarithmic in the
size of the input.

11. Give a non-recursive algorithm to find out the largest
element in a list of n numbers.

A.
Here is a non-recursive algorithm to find the largest element
in a list of n numbers:

Initialize a variable largest to the first element of the list.
Loop through the rest of the list, comparing each element to
largest.
If an element is greater than largest, update largest to be
that element.
After the loop, largest will contain the largest element in
the list.
Here is the Python code for this algorithm:

def find_largest(nums):
    largest = nums[0]
    for i in range(1, len(nums)):
        if nums[i] > largest:
            largest = nums[i]
    return largest
In this implementation, the function find_largest takes a list
nums of n numbers as input. The variable largest is
initialized to be the first element of the list, and the loop
starts from the second element of the list (index 1) and
compares each subsequent element to largest. If an element is
greater than largest, largest is updated to be that element.
After the loop, largest contains the largest element in the
list, which is returned by the function.

This algorithm has a time complexity of O(n), since it requires
looping through the entire list once to find the largest
element.
12. Write a recursive algorithm for computing the nth
Fibonacci number.
A.
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)
The algorithm works by checking if n is less than or equal to
1. If it is, the function simply returns n. Otherwise, it
recursively calls itself with n-1 and n-2 as arguments and
returns the sum of those two results.

To use the algorithm, you simply need to call the fibonacci
function with the desired value of n, like so:

>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(20)
6765
Note that the recursive algorithm can be slow for large values
of n, since it has to compute the same subproblems multiple
times. A more efficient approach is to use dynamic programming
or memoization to store previously computed values and avoid
redundant computations.
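
For example, a memoized variant (a sketch; functools.lru_cache
is one standard way to cache results in Python, and the
function name is illustrative):

from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    # Identical recursion, but each value of n is computed only once.
    if n <= 1:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)

This reduces the running time from exponential to O(n).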

13. Explain Selection Sort Algorithm with suitable example.

A.
Selection sort is a simple, in-place sorting algorithm that
works by repeatedly selecting the smallest (or largest) element
from the unsorted portion of the list and moving it to the
sorted portion. In each pass, the algorithm selects the
smallest (or largest) element from the unsorted portion and
swaps it with the first element of that portion. This process
is repeated on the remaining unsorted portion until the entire
list is sorted. One variation is "bidirectional selection
sort", which alternates between selecting the smallest and the
largest element in each pass and can be faster in some cases.

The algorithm maintains two subarrays in a given array:

The subarray that is already sorted.
The remaining subarray that is unsorted.
Algorithm:

SELECTION SORT(arr, n)

Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2:     CALL SMALLEST(arr, i, n, pos)
Step 3:     SWAP arr[i] with arr[pos]
        [END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)

Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n-1
            if (SMALL > arr[j])
                SET SMALL = arr[j]
                SET pos = j
            [END OF if]
        [END OF LOOP]
Step 4: RETURN pos
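
A compact Python version of the same procedure (the function
name is illustrative):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in arr[i:].
        pos = i
        for j in range(i + 1, n):
            if arr[j] < arr[pos]:
                pos = j
        # Swap it into position i, growing the sorted prefix.
        arr[i], arr[pos] = arr[pos], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]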

Example:

Let's consider the following array as an example:
arr[] = {64, 25, 12, 22, 11}

First Pass:

For the first position in the sorted array, the whole array is
traversed from index 0 to 4 sequentially. The first position
currently holds 64; after traversing the whole array, it is
clear that 11 is the lowest value.
64 25 12 22 11
Thus, swap 64 with 11. After one pass, 11, the least value in
the array, appears in the first position of the sorted list.
11 25 12 22 64
Second Pass:

For the second position, where 25 is present, again traverse
the rest of the array in a sequential manner.
11 25 12 22 64
After traversing, we find that 12 is the second lowest value in
the array and should appear at the second place, so we swap
these values.
11 12 25 22 64
Third Pass:

Now, for the third place, where 25 is present, again traverse
the rest of the array and find the third least value present
in it.
11 12 25 22 64
While traversing, 22 turns out to be the third least value, and
it should appear at the third place in the array; thus, swap 22
with the element present at the third position.
11 12 22 25 64
Fourth Pass:

Similarly, for the fourth position, traverse the rest of the
array and find the fourth least element.
As 25 is the fourth lowest value, it stays at the fourth
position.
11 12 22 25 64
Fifth Pass:

At last, the largest value present in the array is
automatically placed at the last position.
The resulting array is the sorted array.
11 12 22 25 64

14. Explain Bubble Sort Algorithm with suitable example.

A.
Bubble Sort is the simplest sorting algorithm; it works by
repeatedly swapping adjacent elements if they are in the wrong
order. This algorithm is not suitable for large data sets, as
its average and worst-case time complexity is O(n^2).
Input: arr[] = {5, 1, 4, 2, 8}

First Pass:

Bubble sort starts with the very first two elements, comparing
them to check which one is greater.
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), here the algorithm compares the
first two elements and swaps them since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), now, since these elements are
already in order (8 > 5), the algorithm does not swap them.
Second Pass:

Now, during the second iteration it should look like this:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Third Pass:

Now, the array is already sorted, but our algorithm does not
know if it is completed.
The algorithm needs one whole pass without any swap to know it
is sorted.
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
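
A short Python version with the early-exit check described
above (the swapped flag implements the "one whole pass without
any swap" test; the function name is illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break   # a full pass with no swaps: already sorted
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]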

15. Explain Space Complexity. Analyze the space complexity of
the algorithm for adding two scalar variables.
A.
Space Complexity refers to the amount of memory or space
required by an algorithm to solve a problem. It includes the
memory required to store the input data, output data, and any
additional variables used by the algorithm during its
execution.

The algorithm for adding two scalar variables involves creating
a new variable to store the sum of the two variables. Since
scalar variables only hold a single value, the space complexity
of this algorithm is constant, i.e., O(1). This means that the
amount of memory required by the algorithm is fixed, regardless
of the size of the input.

In general, the space complexity of an algorithm can be
analyzed by looking at the memory required to store the input,
output, and any intermediate variables. If the amount of memory
required grows with the size of the input, the algorithm has a
space complexity of O(n), where n is the size of the input. On
the other hand, if the amount of memory required remains
constant, the algorithm has a space complexity of O(1).

In the case of adding two scalar variables, since the algorithm
only uses a constant amount of memory, its space complexity is
O(1).
