DAA Assignment-1
A.
There are several algorithms that can be used to find prime
numbers. Here are some of the most common ones:
Trial Division:
This is the simplest algorithm for finding prime numbers. To
test whether n is prime, we divide n by every integer between
2 and n-1. If none of them divides n evenly, then n is prime.
The time complexity of this algorithm is O(n); checking
divisors only up to the square root of n improves this to
O(sqrt(n)).
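Trial division can be sketched in Python as follows (checking every candidate divisor from 2 up to n-1, matching the O(n) bound above):

```python
def is_prime(n):
    """Check primality by trial division over 2..n-1."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:   # d divides n evenly, so n is composite
            return False
    return True
```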
Sieve of Eratosthenes:
This is a more efficient algorithm for finding prime numbers.
In this algorithm, we create a list of all the numbers between
2 and n. We then mark all the multiples of 2, 3, 5, 7, etc. as
composite numbers. The remaining numbers are prime. The time
complexity of this algorithm is O(n log log n).
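The sieve described above can be sketched in Python like this (marking multiples starting from p*p, since smaller multiples were already marked by smaller primes):

```python
def sieve_of_eratosthenes(n):
    """Return all primes <= n by marking multiples as composite."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # mark every multiple of p as composite
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```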
Sieve of Sundaram:
This is another efficient algorithm for finding prime numbers.
In this algorithm, we create a list of numbers from 1 to n/2.
We then generate all the numbers of the form i+j+2ij, where
i<=j and i+j+2ij <= n/2. We then mark all these numbers as
composite numbers. The remaining numbers are prime. The time
complexity of this algorithm is O(n log n).
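The Sundaram construction above can be sketched in Python as follows; every unmarked i in 1..n/2 yields the odd prime 2i+1, and 2 is added separately:

```python
def sieve_of_sundaram(n):
    """Return all primes <= n using the Sundaram construction."""
    if n < 2:
        return []
    k = (n - 1) // 2          # work with the numbers 1..k
    marked = [False] * (k + 1)
    for i in range(1, k + 1):
        j = i                  # enforce i <= j
        while i + j + 2 * i * j <= k:
            marked[i + j + 2 * i * j] = True
            j += 1
    # each unmarked i corresponds to the odd prime 2*i + 1
    return [2] + [2 * i + 1 for i in range(1, k + 1) if not marked[i]]
```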
A.
Insertion Sort is a simple sorting algorithm that works by
building the final sorted array one item at a time. It is a
comparison-based algorithm in which each element is compared
with the elements that come before it and is inserted into its
correct position in the final sorted array.
Example:
9 6 2 5 4
6 9 2 5 4
6 2 9 5 4
2 6 9 5 4
2 6 5 9 4
2 5 6 9 4
2 5 6 4 9
2 5 4 6 9
2 4 5 6 9
Algorithm:
insertionSort(array)
  mark first element as sorted
  for each unsorted element X
    'extract' the element X
    for j <- lastSortedIndex down to 0
      if element at j > X
        move element at j to the right by 1
      else
        break loop and insert X after position j
end insertionSort
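The pseudocode above can be sketched as runnable Python:

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted prefix."""
    for i in range(1, len(arr)):
        x = arr[i]                 # 'extract' the element X
        j = i - 1
        while j >= 0 and arr[j] > x:
            arr[j + 1] = arr[j]    # move sorted element to the right by 1
            j -= 1
        arr[j + 1] = x             # insert X here
    return arr
```

For example, `insertion_sort([9, 6, 2, 5, 4])` produces the sorted list `[2, 4, 5, 6, 9]` traced in the example above.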
3. Explain the properties of an algorithm with an example.
A.
An algorithm is a step-by-step procedure for solving a
problem. It is important that an algorithm has certain
properties that make it effective and efficient in solving the
problem. Here are the important properties of an algorithm:
Input: it takes zero or more values as input.
Output: it produces at least one result.
Definiteness: every step is precisely and unambiguously
defined.
Finiteness: it terminates after a finite number of steps.
Effectiveness: every step is basic enough to be carried out
exactly in finite time.
For example, Euclid's GCD algorithm takes two positive
integers (input), returns their greatest common divisor
(output), specifies each division step exactly (definiteness),
terminates because the remainder strictly decreases
(finiteness), and uses only elementary arithmetic
(effectiveness).
A.
The step count of the classic triple-loop matrix
multiplication algorithm is as follows:
(n+1) + n(n+1) + n^2 + n^2(n+1) + n^3 + 1
Expanding this expression, we get:
n + 1 + n^2 + n + n^2 + n^3 + n^2 + n^3 + 1 = 2n^3 + 3n^2 + 2n + 2
Now take the term with the highest exponent and ignore the rest:
2n^3. Now remove the constant multiplier:
n^3
Hence, the time complexity of the matrix multiplication
algorithm is O(n^3). This means that for n x n matrices, the
running time grows as the cube of n.
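The triple-loop algorithm whose steps are counted above can be sketched in Python, with the counted costs noted per loop:

```python
def matrix_multiply(A, B):
    """Multiply two n x n matrices with the classic triple loop, O(n^3)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # outer loop: n + 1 header tests
        for j in range(n):      # middle loop: n(n + 1) header tests
            for k in range(n):  # inner loop: n^2(n + 1) header tests
                C[i][j] += A[i][k] * B[k][j]   # executed n^3 times
    return C
```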
A.
The time complexity of an algorithm measures how the running
time of the algorithm grows as the size of the input
increases. Linear search is a simple algorithm for finding a
target value in an unsorted list, and it has a worst-case time
complexity of O(n), where n is the size of the list.
Here's how the step count method can be used to calculate the
time complexity of linear search:
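The step counting can be illustrated with a sketch of linear search in Python; in the worst case (the target is last or absent) the loop header runs n + 1 times and the comparison n times, giving O(n):

```python
def linear_search(arr, target):
    """Scan the list left to right; return the index of target, or -1."""
    for i in range(len(arr)):   # loop header: up to n + 1 tests
        if arr[i] == target:    # comparison: up to n times
            return i
    return -1                   # target not found after a full scan
```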
A.
The time complexity of the given function sum2 using the step
count method can be calculated as follows:
For loop: The loop runs for n iterations, adding the current
element of the array to s. This takes n iterations, and each
iteration takes constant time, i.e., O(1).
Hence, the time complexity of the function sum2 using the step
count method is O(n), meaning that the running time of the
algorithm increases linearly with the size of the input n.
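The function sum2 itself is not reproduced in this copy; a hypothetical reconstruction consistent with the analysis above would be:

```python
def sum2(a, n):
    """Sum the first n elements of a (hypothetical reconstruction of sum2)."""
    s = 0                    # 1 step
    for i in range(n):       # loop header: n + 1 tests
        s = s + a[i]         # n additions, each O(1)
    return s                 # 1 step
```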
A.
In algorithm analysis, we often consider different cases to
evaluate the efficiency of an algorithm. These cases include
the best-case scenario, the average-case scenario, and the
worst-case scenario. For example, in linear search the best
case is O(1) (the target is the first element), the average
case is O(n) (about n/2 comparisons on a successful search),
and the worst case is O(n) (the target is last or absent).
A.
Euclid's algorithm is an efficient method for computing the
greatest common divisor (GCD) of two positive integers, m and
n. The algorithm works by repeatedly taking the remainder of m
divided by n, and setting m to be n and n to be the remainder,
until the remainder is zero. The GCD is then the value of m.
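The procedure described above can be sketched in Python:

```python
def gcd(m, n):
    """Euclid's algorithm: replace (m, n) with (n, m mod n) until n is 0."""
    while n != 0:
        m, n = n, m % n
    return m   # when the remainder reaches zero, m holds the GCD
```

For example, gcd(48, 18) steps through (48, 18) -> (18, 12) -> (12, 6) -> (6, 0) and returns 6.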
A.
Here's an algorithm to find the number of binary digits in the
binary representation of a positive decimal integer:
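One such algorithm repeatedly halves n, counting one binary digit per halving; a sketch in Python:

```python
def count_binary_digits(n):
    """Count the digits in the binary representation of a positive integer."""
    count = 0
    while n > 0:
        n //= 2       # drop the lowest binary digit
        count += 1
    return count
```

For example, 255 is 11111111 in binary, so count_binary_digits(255) is 8. The loop runs floor(log2 n) + 1 times, so the algorithm is O(log n).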
A.
Here is a non-recursive algorithm to find the largest element
in a list of n numbers:
def find_largest(nums):
    largest = nums[0]
    for i in range(1, len(nums)):
        if nums[i] > largest:
            largest = nums[i]
    return largest
In this implementation, the function find_largest takes a list
nums of n numbers as input. The variable largest is
initialized to be the first element of the list, and the loop
starts from the second element of the list (index 1) and
compares each subsequent element to largest. If an element is
greater than largest, largest is updated to be that element.
After the loop, largest contains the largest element in the
list, which is returned by the function.
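The interactive session below calls a recursive Fibonacci function whose definition is not reproduced in this copy; a standard recursive definition consistent with those outputs is:

```python
def fibonacci(n):
    """Return the nth Fibonacci number (F(0) = 0, F(1) = 1), recursively."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```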
>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(20)
6765
Note that the recursive algorithm can be slow for large values
of n, since it has to compute the same subproblems multiple
times. A more efficient approach is to use dynamic programming
or memoization to store previously computed values and avoid
redundant computations.
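As a sketch, the memoization mentioned above can be added with functools.lru_cache, which caches each subproblem so it is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    """Memoized Fibonacci: each value of n is computed at most once."""
    if n < 2:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)
```

With memoization the running time drops from exponential to O(n), so even fibonacci_memo(50) returns instantly.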
A.
Selection sort is a simple, in-place comparison sorting
algorithm. It works by repeatedly selecting the smallest (or
largest) element from the unsorted portion of the list and
swapping it with the first element of the unsorted portion.
This process is repeated for the remaining unsorted portion
until the entire list is sorted. It runs in O(n^2) time in
all cases.
One variation is "bidirectional selection sort", which finds
both the smallest and largest elements in each pass and places
one at each end of the list, roughly halving the number of
passes.
Example:
First pass:
For the first position in the sorted portion, the whole array
is traversed from index 0 to 4. The first position currently
holds 64; after traversing the whole array it is clear that 11
is the lowest value.
64 25 12 22 11
Thus, swap 64 with 11. After one pass, 11, the least value in
the array, appears in the first position of the sorted list.
11 25 12 22 64
Second Pass:
The rest of the array, from index 1 onward, is traversed; 12
is the smallest remaining value, so it is swapped with 25.
11 12 25 22 64
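The passes above can be sketched in Python as follows:

```python
def selection_sort(arr):
    """Repeatedly swap the minimum of the unsorted suffix into place."""
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        # find the smallest element in the unsorted portion arr[i:]
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # swap it with the first element of the unsorted portion
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
```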
A.
Bubble Sort is the simplest sorting algorithm; it works by
repeatedly swapping adjacent elements that are in the wrong
order. It is not suitable for large data sets, since its
average and worst-case time complexity are both O(n^2).
Input: arr[] = {5, 1, 4, 2, 8}
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), swap since 5 > 1
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), no swap since 5 < 8
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Third Pass:
Now, the array is already sorted, but our algorithm does not
know if it is completed.
The algorithm needs one whole pass without any swap to know it
is sorted.
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
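The algorithm described above, including the early exit after a pass with no swaps, can be sketched as:

```python
def bubble_sort(arr):
    """Swap adjacent out-of-order pairs; stop after a pass with no swaps."""
    n = len(arr)
    while True:
        swapped = False
        for i in range(n - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:   # one whole pass without a swap: the list is sorted
            break
    return arr
```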