Question 1:
Describe an algorithm for finding both the largest and the smallest integers in a finite sequence of integers in an array, and count how many comparison operations are involved.
Solution:-
C code to find the largest and smallest numbers in an array, counting the comparisons involved:

#include <stdio.h>

int main(void)
{
    int a[50], size, i, big, small, count = 0;

    printf("\nEnter the size of the array: ");
    scanf("%d", &size);
    printf("\nEnter %d elements into the array: ", size);
    for (i = 0; i < size; i++)
        scanf("%d", &a[i]);

    big = a[0];
    for (i = 1; i < size; i++) {
        count++;              /* one comparison per iteration */
        if (big < a[i])
            big = a[i];
    }
    printf("Largest element: %d\n", big);

    small = a[0];
    for (i = 1; i < size; i++) {
        count++;              /* one comparison per iteration */
        if (small > a[i])
            small = a[i];
    }
    printf("Smallest element: %d\n", small);
    printf("Number of comparisons: %d\n", count);
    return 0;
}
Algorithm:
Step 1: Start
Step 2: Declare variables int a[50], size, i, big, small, count = 0
Step 3: Read the size of the array
Step 4: Read the elements of the array
Step 5: Loop over the array to find the largest number: if big < a[i], set big = a[i]; count every comparison (count++)
Step 6: Display the largest element
Step 7: Loop over the array to find the smallest number: if small > a[i], set small = a[i]; count every comparison (count++)
Step 8: Display the smallest element
Step 9: Stop
For an array of n elements each loop makes n - 1 comparisons, so the algorithm performs 2(n - 1) comparisons in total.
Question 2: Write Prim’s algorithm and apply it to find a minimum cost spanning tree of the following graph. Show all the steps.
Solution:-
How does Prim’s Algorithm work? The idea behind Prim’s algorithm is simple: a spanning tree means all vertices must be connected, so the two disjoint subsets of vertices must be connected to make a spanning tree, and they must be connected by the minimum-weight edge to make it a minimum spanning tree (MST).
Pick the vertex with the minimum key value: here vertex B is picked, since it has key value 3 in mstSet. We repeat this step until mstSet includes all vertices of the given graph.
Question 3:
(i) Apply Dijkstra’s algorithm to find the shortest path from source vertex A to each of the other vertices of the following directed graph. Show all the steps.
(ii) Differentiate between Bellman-Ford and Dijkstra’s algorithm to find a shortest path in a graph.
Solution:- (i) Apply Dijkstra’s algorithm to find the shortest path from source vertex A to each of the other vertices of the following directed graph. Show all the steps.
Initial step:
sDist[A] = 0; the value of the source itself
sDist[B] = ∞, sDist[C] = ∞, sDist[D] = ∞, sDist[E] = ∞; the nodes not processed yet
Step 1: Adj[A] = {B, C}; compute the values of the vertices adjacent to A:
sDist[B] = 10;
sDist[C] = 7;
Step 2: Computation from vertex C. Adj[C] = {B, D, E};
sDist[B] > sDist[C] + EdgeCost[C,B]
10 > 7 + 2 (True)
Therefore, sDist[B] = 9;
sDist[D] = 11;
sDist[E] = 10;
Step 3: Adj[E] = ∅; there are no outgoing edges from E.
With no more vertices left to process, the algorithm terminates. Hence the path obtained by the algorithm is:
Figure: the path obtained using Dijkstra’s Algorithm
(ii) Differentiate between Bellman-Ford and Dijkstra’s algorithm to find a shortest path in a graph.
Bellman-Ford algorithm is a single-source shortest path algorithm, which allows for negative edge weight and can detect negative cycles in a graph.
Dijkstra algorithm is also another single-source shortest path algorithm. However, the weight of all the edges must be non-negative.
In Dijkstra’s algorithm, each vertex is assumed to have complete information about the network; there is no notion of a vertex caring only about itself and its neighbours. In the Bellman-Ford algorithm, on the other hand, a node holds only information related to itself: which neighbouring nodes it can reach, and through which neighbour each distance estimate arrived. This is why Bellman-Ford suits distributed settings. Dijkstra’s algorithm is faster than the Bellman-Ford algorithm, but the latter can solve problems Dijkstra’s cannot, such as graphs with negative edge weights.
For the above graph, as far as the total cost is concerned, there will be no difference, since the edges have non-negative weights. However, Dijkstra’s algorithm is usually used, since the typical implementation with a binary heap has O((|E| + |V|) log |V|) time complexity, while the Bellman-Ford algorithm has O(|V||E|) complexity.
Question 4:
Write a pseudo code for Quicksort and partition algorithms. Illustrate the operation of partition procedure on the following sequence.
A = <25,5,30,40,15,65,20,35>
Solution:-
(a) Pseudo code for Quicksort:
Quicksort(A, p, r)   // Sort A[p .. r] into ascending order.
1. if (p < r)
   1.1 q = Partition(A, p, r)
   1.2 Quicksort(A, p, q-1)
   1.3 Quicksort(A, q+1, r)
(b) Pseudo code for Partition:
Partition(A, p, r)   // It is assumed that p <= r.
1. x = A[r]; i = p-1        // x is the pivot value.
2. for j = p to r-1 do
      if (A[j] <= x) { i = i+1; exchange A[i] with A[j] }
3. Exchange A[i+1] with A[r].
4. return i+1.
Example on the sequence A = <25, 5, 30, 40, 15, 65, 20, 35>, with pivot x = A[r] = 35:
j = 0, 1, 2: 25, 5, 30 are all <= 35; i advances to 2 and each element is exchanged with itself.
j = 3: 40 > 35, no change.
j = 4: 15 <= 35; i = 3, exchange A[3] with A[4] → <25, 5, 30, 15, 40, 65, 20, 35>.
j = 5: 65 > 35, no change.
j = 6: 20 <= 35; i = 4, exchange A[4] with A[6] → <25, 5, 30, 15, 20, 65, 40, 35>.
Finally exchange A[i+1] = A[5] with A[7] → <25, 5, 30, 15, 20, 35, 40, 65>; Partition returns 5.
/* A typical recursive implementation of quick sort */
#include <stdio.h>

// A utility function to swap two elements
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

/* This function takes the last element as pivot, places the pivot element
   at its correct position in the sorted array, and places all smaller
   elements to the left of the pivot and all greater elements to its right */
int partition(int arr[], int l, int h)
{
    int x = arr[h];     // pivot
    int i = (l - 1);    // index of the last element known to be <= pivot
    for (int j = l; j <= h - 1; j++)
    {
        // If the current element is smaller than or equal to the pivot
        if (arr[j] <= x)
        {
            i++;                       // grow the "small" region
            swap(&arr[i], &arr[j]);    // move the element into it
        }
    }
    swap(&arr[i + 1], &arr[h]);        // put the pivot in its final place
    return (i + 1);
}

/* arr[] --> array to be sorted, l --> starting index, h --> ending index */
void quickSort(int arr[], int l, int h)
{
    if (l < h)
    {
        int p = partition(arr, l, h);  /* partitioning index */
        quickSort(arr, l, p - 1);
        quickSort(arr, p + 1, h);
    }
}

/* Function to print an array */
void printArray(int arr[], int size)
{
    for (int i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

// Driver program to test the above functions
int main(void)
{
    int arr[] = {25, 5, 30, 40, 15, 65, 20, 35};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}
Question 5: Illustrate representation of the following graph:
(a) Through adjacency matrix and adjacency list
(b) (i) Suppose most of the entries in the adjacency matrix are zeroes, i.e. the graph is sparse. How much time is needed to find the m edges of such a graph?
It will take much less time, say O(e + n), where e is the number of edges in the graph and e ≪ n²/2. This can be achieved if the graph is represented through an adjacency list, where only the edges are stored.
(ii) What is the storage required for an adjacency list of any graph.
Adjacency list: Put another way, an adjacency list stores, for each vertex, only those columns of the adjacency matrix that contain 1’s. Observe that an adjacency list consumes less memory than an adjacency matrix when the graph is sparse; a graph with few edges is called a sparse graph. If the graph is dense, the situation is reversed; a dense graph is a graph with relatively few missing edges. For an undirected graph with n vertices and e edges, the adjacency list requires n head nodes and 2e list nodes (i.e. each edge is represented twice).
The storage required (in terms of bits) for an adjacency list of a graph:
(i) Storing the n head nodes requires n log₂ n bits (log₂ n bits each).
(ii) Storing the 2e list nodes requires 2e (log₂ n + log₂ e) bits.
Therefore the total storage requirement in terms of bits for an adjacency list is n log₂ n + 2e (log₂ n + log₂ e).
Question 6: Explain the following items:
(i) Asymptotic bounds
(ii) Greedy techniques
(iii) DFS
(iv) Recursion tree method
Solution:-
(i) Asymptotic bounds: An algorithm provides an approach to solving a given problem; its key components are input, processing and output. Generally, all algorithms work well for small input sizes irrespective of their complexity, so we need to analyse an algorithm for large values of the input size. One problem may also have many algorithmic solutions; to select the best algorithm for an instance of a task we compare algorithms by how long each takes to generate the desired output. We determine the behaviour of the running time of an algorithm as a function of the input size for large values of n, and this behaviour is expressed using asymptotic notations.
To understand concepts of the asymptotic notations first we need to get idea of lower bound, upper bound and how to represent time complexity expression for various algorithms. This is like expressing cost component of an algorithm.
The basic five asymptotic notations are:
1. Theta notation (Θ)
2. Big-Oh notation (O)
3. Big-Omega notation (Ω)
4. Little-oh notation (o)
5. Little-omega notation (ω)
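As a worked illustration (a standard example, not from the original text), consider f(n) = 3n² + 2n. It can be bounded above and below by constant multiples of n²:

```latex
3n^2 \;\le\; 3n^2 + 2n \;\le\; 5n^2 \qquad \text{for all } n \ge 1,
```

so f(n) = O(n²) (upper bound) and f(n) = Ω(n²) (lower bound), and hence f(n) = Θ(n²).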
(ii) Greedy techniques: Greedy technique is a general algorithm design strategy, built on the following elements:
• Configurations: the different choices or values to find
• Objective function: a score over configurations, to be either maximized or minimized
The method:
• Applicable to optimization problems ONLY
• Constructs a solution through a sequence of steps
• Each step expands a partially constructed solution so far, until a complete solution to the problem is reached.
On each step, the choice made must be
• Feasible: it has to satisfy the problem’s constraints
• Locally optimal: it has to be the best local choice among all feasible choices available on that step
• Irrevocable: Once made, it cannot be changed on subsequent steps of the algorithm
NOTE:
• Greedy method works best when applied to problems with the greedy-choice property
• A globally-optimal solution can always be found by a series of local improvements from a starting configuration.
(iii) DFS (Depth-First Search): We are aware of the tree traversal mechanism. Given a tree, we can traverse it using preorder, inorder and postorder. Similarly, given an undirected graph, we can traverse it or visit its nodes using breadth-first search and depth-first search.
Searching in breadth-first search or depth-first search means exploring a given graph. Through searching a graph one can find out whether the graph is connected or not. There are many more applications of graph searching algorithms.
The logic behind this algorithm is to go as far as possible from the given starting node searching for the target. In case, we get a node that has no adjacent/successor node, we get back (recursively) and continue with the last vertex that is still not visited.
Broadly it is divided into 3 steps:
- Take a vertex that is not visited yet and mark it visited
- Go to its first adjacent non-visited (successor) vertex and mark it visited
- If all the adjacent vertices (successors) of the considered vertex are already visited or it doesn’t have any more adjacent vertex (successor) – go back to its parent vertex
(iv) Recursion tree method: A recursion tree is useful for visualizing what happens when a recurrence is iterated. It diagrams the tree of recursive calls and the amount of work done at each call.
For instance, consider the recurrence
T(n) = 2T(n/2) + n².
The recursion tree for this recurrence has the following form:
In this case, it is straightforward to sum across each row of the tree to obtain the total work done at each level. This is a geometric series, so in the limit the sum is O(n²). The depth of the tree does not really matter here; the amount of work at each level decreases so quickly that the total is only a constant factor more than the root.
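Writing the level sums out explicitly (level i has 2^i subproblems, each of size n/2^i) makes the geometric series visible:

```latex
\text{level } i:\quad 2^i\left(\frac{n}{2^i}\right)^{2} = \frac{n^2}{2^i},
\qquad
T(n) \;=\; \sum_{i=0}^{\log_2 n}\frac{n^2}{2^i} \;\le\; 2n^2 \;=\; O(n^2).
```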
Recursion trees can be useful for gaining intuition about the closed form of a recurrence, but they are not a proof (in fact it is easy to get the wrong answer with a recursion tree, as with any method that involves "..." kinds of reasoning). As we saw last time, a good way of establishing a closed form for a recurrence is to make an educated guess and then prove by induction that the guess is indeed a solution. Recursion trees can be a good method of guessing.
Let’s consider another example,
T(n) = T(n/3) + T(2n/3) + n.
Note that the tree here is not balanced: the longest path is the rightmost one, and its length is log_{3/2} n. Hence our guess for the closed form of this recurrence is O(n log n).
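Making that guess explicit: every complete level of the tree contributes exactly n (for instance the second level contributes n/3 + 2n/3 = n), and there are at most log_{3/2} n levels, so:

```latex
T(n) \;\le\; n \cdot \log_{3/2} n + O(n) \;=\; O(n \log n).
```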
Question 7: Briefly describe the running time of important algorithms under different classes. (10 Marks)
Solution:-
Common running-time classes, with representative algorithms:
• O(1) (constant): accessing an array element
• O(log n) (logarithmic): binary search
• O(n) (linear): linear search, finding the max/min of an array
• O(n log n): merge sort, heapsort
• O(n²) (quadratic): bubble sort, insertion sort, selection sort
• O(2ⁿ) (exponential): brute-force solution of the travelling salesman problem
Question 8: What are the different approaches to solving recurrence relations? Solve the following recurrence relation by the master method.
T(n) = 2T(n/2) + n
Solution:-
Many algorithms are recursive in nature. When we analyze them, we get a recurrence relation for time complexity. We get running time on an input of size n as a function of n and the running time on inputs of smaller sizes. For example in Merge Sort, to sort a given array, we divide it in two halves and recursively repeat the process for the two halves. Finally we merge the results. Time complexity of Merge Sort can be written as T(n) = 2T(n/2) + cn. There are many other algorithms like Binary Search, Tower of Hanoi, etc.
There are mainly three ways for solving recurrences.
1) Substitution Method: We make a guess for the solution and then use mathematical induction to prove whether the guess is correct.
2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time taken by every level of the tree. Finally, we sum the work done at all levels. To draw the recurrence tree, we start from the given recurrence and keep expanding until we find a pattern among the levels. The pattern is typically an arithmetic or geometric series.
3) Master Method: The Master Method is a direct way to get the solution. It works only for recurrences of the form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1, or for recurrences that can be transformed into this form.
The master method is a cookbook method for solving recurrences. Although it cannot solve all recurrences, it is nevertheless very handy for dealing with many recurrences seen in practice.
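Applying the master theorem directly to the given recurrence T(n) = 2T(n/2) + n, with a = 2, b = 2 and f(n) = n:

```latex
n^{\log_b a} = n^{\log_2 2} = n^1 = \Theta(f(n))
\quad\Longrightarrow\quad
T(n) = \Theta\!\left(n^{\log_b a}\log n\right) = \Theta(n \log n).
```

This is case 2 of the master theorem, and it agrees with the closed form obtained below by iterating the recurrence.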
T(n) = 2T(n/2) + n
T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n
     = 4(2T(n/8) + n/4) + 2n = 8T(n/8) + 3n = ...
It looks like T(n) satisfies the recurrence T(n) = 2^k T(n/2^k) + kn for any positive integer k. Let us verify this by induction.
T(n) = 2T(n/2) + n = 2^1 T(n/2^1) + 1·n           [k = 1, the given recurrence]
T(n) = 2^(k-1) T(n/2^(k-1)) + (k-1)n              [inductive hypothesis]
     = 2^(k-1) (2T(n/2^k) + n/2^(k-1)) + (k-1)n   [substitution]
     = 2^k T(n/2^k) + kn                          [algebra]
Our guess was right! The recurrence becomes trivial when n/2^k = 1, or equivalently, when k = log₂ n:
T(n) = nT(1) + n log₂ n = n log₂ n + n.
Finally, we put back the Θ’s we stripped off; our final closed-form solution is T(n) = Θ(n log n).