
Analysis of Algorithms Chapter - 06 Greedy Graph Algorithms




Presentation Transcript


  1. Analysis of Algorithms Chapter - 06 Greedy Graph Algorithms

  2. This Chapter Contains the following Topics: • Introduction • Graph Categorization • Graph Terminology • Graph Representation • Searching Graphs • Depth-First Search • Breadth-First Search • Greedy Methods • Fractional Knapsack Problem • A Task-Scheduling Problem • Minimum Cost Spanning Trees • Spanning trees • Kruskal’s Algorithm • Prim’s Algorithm • Shortest Path Problem • Dijkstra’s Algorithm

  3. Introduction

  4. What is a Graph? A Graph is a data structure which consists of a set of vertices, and a set of edges that connect (some of) them. That is, G = (V, E), where V is the set of vertices and E is the set of edges. Example: V = {1, 2, 3, 4, 5}, E = {(1,2), (1,3), (1,4), (2,3), (3,5), (4,5)}. [Figure: a five-vertex graph illustrating a Vertex (Node) and an Edge]
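The definition above maps directly onto Python sets. A minimal sketch, using the vertex and edge values from the example (the helper `adjacent` is ours, not the slides'):

```python
# G = (V, E) from the slide. Edges are stored as frozensets because the
# graph is undirected, so (1,2) and (2,1) denote the same edge.
V = {1, 2, 3, 4, 5}
E = {frozenset(e) for e in [(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)]}

def adjacent(i, j):
    """Return True if vertices i and j share an edge."""
    return frozenset((i, j)) in E
```

For instance, `adjacent(1, 2)` is True while `adjacent(2, 5)` is False, matching the picture.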

  5. Applications • Computer Networks (vertices: computers) • Electrical Circuits (vertices: resistors, inductors, …) • Road Maps (vertices: cities)

  6. Graph Categorization A Directed Graph or Digraph is a graph where each edge has a direction. The edges in a digraph are called Arcs or Directed Edges. Example: G = (V, E), where V = {1, 2, 3, 4, 5, 6} and E = {(1,4), (2,1), (2,3), (3,2), (4,3), (4,5), (4,6), (5,3), (6,1), (6,5)}. (1, 4) = 1→4, where 1 is the tail and 4 is the head. [Figure: a six-vertex digraph]

  7. Graph Categorization (Contd.) An Undirected Graph is a graph where the edges have no directions. The edges in an undirected graph are called Undirected Edges. Example: G = (V, E), where V = {1, 2, 3, 4, 5} and E = {(1,2), (1,3), (1,4), (2,3), (3,5), (4,5)}. [Figure: a five-vertex undirected graph]

  8. Graph Categorization (Contd.) • A Weighted Graph is a graph where all the edges are assigned weights. • If the same pair of vertices has more than one edge, that graph is called a Multigraph. [Figure: a weighted five-vertex graph with edge weights 10, 20, 40, 50, 60, 70]

  9. Graph Terminology Adjacent vertices: If (i,j) is an edge of the graph, then the nodes i and j are adjacent, and the edge (i,j) is Incident to vertices i and j. In the example, vertices 2 and 5 are not adjacent. Loop or self edge: An edge (i,i) is called a self edge or a loop; in graphs, loops are not permitted. In the example, (1,1) and (4,4) are self edges. [Figures: two five-vertex graphs illustrating adjacency and self edges]

  10. Graph Terminology (Contd.) Path: A sequence of edges in the graph. There can be more than one path between two vertices. Vertex A is reachable from vertex B if there is a path from B to A. In the example, the paths from B to D are B, A, D and B, C, D. Simple Path: A path where all the vertices are distinct. 1,4,5,3 is a simple path, but 1,4,5,4 is not a simple path. [Figures: a seven-vertex graph on A–G and a five-vertex graph]

  11. Graph Terminology (Contd.) Length: The sum of the lengths of the edges on the path; in an unweighted graph each edge has length 1, so the length of the path 1,4,5,3 is 3. Circuit: A path whose first and last vertices are the same. The path 3,2,1,4,5,3 is a circuit. Cycle: A circuit where all the vertices are distinct except for the first (and the last) vertex. 1,4,5,3,1 is a cycle, but 1,4,5,4,1 is not a cycle. Hamiltonian Cycle: A cycle that contains all the vertices of the graph. 1,4,5,3,2,1 is a Hamiltonian Cycle. [Figure: a five-vertex graph]

  12. Graph Terminology (Contd.) Degree of a Vertex: In an undirected graph, the number of edges incident to the vertex. In-degree: The number of edges entering a vertex in a digraph. Out-degree: The number of edges leaving a vertex in a digraph. In the example, the in-degree of vertex 1 is 3 and its out-degree is 1. A Subgraph of a graph G = (V,E) is a graph H = (U,F) such that U ⊆ V and F ⊆ E. [Figures: a digraph G = (V,E) and a subgraph H = (U,F)]

  13. Graph Terminology (Contd.) A graph is said to be Connected if there is at least one path from every vertex to every other vertex in the graph. Tree: A connected undirected graph that contains no cycles. Forest: A graph that does not contain a cycle. [Figures: examples of a tree, a forest, a connected graph, and an unconnected graph]

  14. Graph Terminology (Contd.) A Spanning Tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G. [Figure: a graph and one of its spanning trees]

  15. Representation of Graphs Adjacency Matrix (A) The Adjacency Matrix A = (aij) of a graph G = (V,E) with n nodes is an n×n matrix. Each element of A is either 0 or 1, depending on the adjacency of the nodes: aij = 1 if (i,j) Є E, and 0 otherwise. Example: Find the adjacency matrices of the following graphs. [Figures: two small graphs]
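As a runnable sketch (not part of the slides' pseudocode), the 0/1 matrix can be built in Python from the edge list of slide 4; the helper name `adjacency_matrix` is ours:

```python
def adjacency_matrix(n, edges):
    """Build the n x n 0/1 adjacency matrix of an undirected graph
    whose vertices are numbered 1..n."""
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i - 1][j - 1] = 1   # aij = 1 if (i,j) is an edge
        a[j - 1][i - 1] = 1   # undirected: the matrix is symmetric
    return a

# Adjacency matrix of the five-vertex graph from slide 4.
A = adjacency_matrix(5, [(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)])
```

Note that for an undirected graph the matrix is symmetric, so half of it is redundant storage.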

  16. Representation of Graphs (Contd.) Adjacency Matrix of a Weighted Graph The weight of the edge is shown in the matrix when the vertices are adjacent. A nil value (0 or ∞, depending on the problem) is used when they are not adjacent; for example, ∞ when finding the minimum distance between nodes. [Figure: a weighted five-vertex graph]

  17. Representation of Graphs (Contd.) Adjacency List An Adjacency List is an array of lists, each list showing the vertices a given vertex is adjacent to. Adjacency List of a Weighted Graph: the weight is included in the list along with the adjacent vertex. [Figures: a graph and a weighted graph with their adjacency lists]
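A sketch of the weighted variant in Python; the edge weights in the demo are made up for illustration (the slide's diagram is not recoverable):

```python
from collections import defaultdict

def adjacency_list(edges):
    """Adjacency list of a weighted undirected graph: each vertex
    maps to a list of (neighbour, weight) pairs."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))   # undirected: record the edge both ways
    return adj

# Hypothetical weights, just to show the shape of the structure.
adj = adjacency_list([(1, 2, 9), (1, 3, 4), (3, 5, 2)])
```

Compared with the adjacency matrix, this uses O(V+E) space instead of O(V²), which pays off for sparse graphs.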

  18. Searching Graphs

  19. Depth-First Search • Why do we need to search graphs? • To find paths • To look for connectivity • Depth-First Search (DFS) • Start from an arbitrary node • Visit (Explore) an unvisited adjacent vertex • If the node visited is a dead end, go back to the previous node (Backtrack) • Stop when no unvisited nodes are found and no backtracking can be done • Implemented using a Stack • Explore if possible, Backtrack otherwise…

  20. DFS Algorithm

Algorithm DFS(G)
{
  for each vertex u Є V[G] do
  {
    Color[u] := white;
    Parent[u] := nil;
  }
  for each vertex u Є V[G] do
    if (Color[u] = white) then
      DFS_Visit(u);
}

Algorithm DFS_Visit(u)
{
  Color[u] := gray;
  for each vertex v Є Adj[u] do
    if (Color[v] = white) then
    {
      Parent[v] := u;
      DFS_Visit(v);
    }
  Color[u] := black;
}

white - Unvisited, gray - Discovered, black - Finished
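A direct Python transcription of the pseudocode might look like this; recursion plays the role of the explicit stack mentioned on the previous slide:

```python
WHITE, GRAY, BLACK = 0, 1, 2

def dfs(adj):
    """DFS over a graph given as {vertex: [neighbours]}.
    Returns the Parent of each vertex, as in the pseudocode."""
    color = {u: WHITE for u in adj}
    parent = {u: None for u in adj}

    def visit(u):
        color[u] = GRAY              # discovered
        for v in adj[u]:
            if color[v] == WHITE:
                parent[v] = u
                visit(v)
        color[u] = BLACK             # finished

    for u in adj:                    # restart from every unvisited vertex
        if color[u] == WHITE:
            visit(u)
    return parent

parents = dfs({1: [2, 3], 2: [1, 4], 3: [1], 4: [2]})
```

Each vertex is visited once and each adjacency list scanned once, matching the Θ(V+E) analysis on the next slide.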

  21. Example and Analysis [Figure: an eight-vertex graph on A–H; a DFS from A visits A, F, C, B, G, E] • The two for loops of DFS take Θ(V) time, excluding the time to execute the calls to DFS_Visit(). • The procedure DFS_Visit() is called exactly once for each vertex of the graph, since DFS_Visit() is invoked only on white vertices and the first thing it does is paint the vertex gray. • During an execution of DFS_Visit(v), the second for loop is executed |Adj[v]| times. • Since Σ|Adj[v]| = Θ(E), the total cost of executing the second for loop is Θ(E). • So, the total running time of DFS is Θ(V+E).

  22. Breadth-First Search (BFS) • Start from an arbitrary node • Visit all the adjacent nodes (distance = 1) • Then visit the nodes adjacent to the visited nodes (distance = 2, 3, etc.) • Stop when all reachable nodes have been visited • Implemented using a Queue • Explore all nodes at distance d before moving on to distance d+1…

  23. BFS Algorithm

Algorithm BFS(G, s)
{
  for each vertex u Є V[G] – {s} do
  {
    Color[u] := white;
    Distance[u] := ∞;
    Parent[u] := nil;
  }
  Color[s] := gray;
  Distance[s] := 0;
  Parent[s] := nil;
  Q := Ø;
  Enqueue(Q, s);
  while (Q ≠ Ø) do
  {
    u := Dequeue(Q);
    for each v Є Adj[u] do
      if (Color[v] = white) then
      {
        Color[v] := gray;
        Distance[v] := Distance[u] + 1;
        Parent[v] := u;
        Enqueue(Q, v);
      }
    Color[u] := black;
  }
}

white - Unvisited, gray - Discovered, black - Finished, s - Source Vertex, Q - FIFO Queue
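The same algorithm as a runnable Python sketch; `None` stands in for the pseudocode's ∞, and a vertex being non-`None` plays the role of the gray/black colouring:

```python
from collections import deque

def bfs(adj, s):
    """BFS from source s over a graph given as {vertex: [neighbours]}.
    Returns the Distance (edge count) and Parent of each vertex."""
    dist = {u: None for u in adj}    # None = not yet discovered (white)
    parent = {u: None for u in adj}
    dist[s] = 0
    q = deque([s])                   # the FIFO queue Q
    while q:
        u = q.popleft()              # Dequeue
        for v in adj[u]:
            if dist[v] is None:      # white: first time we see v
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)          # Enqueue
    return dist, parent

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
dist, parent = bfs(graph, 'A')
```

Because each vertex is enqueued at most once and each adjacency list scanned once, this runs in O(V+E), as the analysis slide below states.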

  24. Example and Analysis [Figure: an eight-vertex graph on A–H; a BFS from A visits A, C, E, B, F, G] • After initialization, no vertex is ever whitened. • Thus each vertex is enqueued at most once, and hence dequeued at most once. • The operations of enqueueing and dequeueing take O(1) time, so the total time devoted to queue operations is O(V). • Because the adjacency list of each vertex is scanned only when the vertex is dequeued, each adjacency list is scanned at most once. • Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent scanning adjacency lists is O(E). • So, the total running time of BFS is O(V+E).

  25. Greedy Methods

  26. The Greedy Method • A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. • At each decision point in the algorithm, the choice that seems best at the moment is chosen. • That is, it makes the Locally Optimal Choice hoping that it will lead to a Globally Optimal Solution. • It does not always produce an optimal solution. • Still, it is powerful and works for many problems. • Example: Find the shortest path from A to E using the Greedy Method. [Figure: a weighted five-vertex graph on A–E]

  27. Solutions • Greedy solution: Order of visit is A → C → D → B → E; path cost is 2+2+1+5 = 10. • Optimal solution (the best we can achieve): A → D → E; path cost is 3+3 = 6. [Figures: the greedy path and the optimal path in the example graph]

  28. The Fractional Knapsack Problem • Given: A set S of n items, with each item i having bi - a positive benefit, and wi - a positive weight • Goal: Choose items with maximum total benefit but with weight at most W. • If we are allowed to take fractional amounts, then this is the fractional knapsack problem. • In this case, we let xi denote the amount we take of item i • Objective: maximize Σi bi (xi / wi) • Constraint: Σi xi ≤ W, with 0 ≤ xi ≤ wi • Example: a 10 ml "knapsack" and the following items:

Item:            1      2      3      4      5
Weight:       4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:       $12    $32    $40    $30    $50
Value ($/ml):    3      4     20      5     50

• Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2

  29. The Algorithm • Greedy choice: Keep taking the item with the highest value (benefit to weight ratio) • Input: set S of items with benefit bi and weight wi; max. weight W • Output: amount xi of each item i to maximize benefit with weight at most W

Algorithm FractionalKnapsack(S, W)
{
  for each item i Є S do
  {
    xi := 0;
    vi := bi / wi;     // value
  }
  w := 0;              // total weight
  while (w < W and S ≠ Ø) do
  {
    remove item i with highest value vi;
    xi := min{wi, W - w};
    w := w + xi;
  }
}
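A runnable Python sketch of the same greedy rule, checked against the slide 28 example (items given as (benefit, weight) pairs; the function name is ours):

```python
def fractional_knapsack(items, W):
    """items: list of (benefit, weight); W: knapsack capacity.
    Greedily take the item with highest benefit/weight first.
    Returns the total benefit and the amount taken of each item."""
    # Indices of items sorted by value (benefit per unit weight), highest first.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    x = [0.0] * len(items)           # amount taken of each item
    w = total = 0.0
    for i in order:
        if w >= W:                   # knapsack is full
            break
        bi, wi = items[i]
        x[i] = min(wi, W - w)        # all of it, or just the remaining room
        w += x[i]
        total += x[i] * bi / wi
    return total, x

# Slide 28 example: benefits $12, $32, $40, $30, $50; weights 4, 8, 2, 6, 1 ml.
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
total, amounts = fractional_knapsack(items, 10)
```

On this input the greedy choice yields 1 ml of item 5, 2 ml of item 3, 6 ml of item 4 and 1 ml of item 2, for a total benefit of $124, matching the slide's solution.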

  30. A Task-Scheduling Problem • Given: a set T of n tasks, each having a start time si and a finish time fi (where si < fi) • Goal: Perform all the tasks using a minimum number of "machines." • For example: [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start) can be scheduled on three machines. • Greedy choice: consider tasks by their start time and use as few machines as possible with this order. [Figure: a timeline of the seven tasks on Machines 1–3 over times 1–9]

  31. The Algorithm • Input: Set T of tasks with start time si and finish time fi. • Output: non-conflicting schedule with minimum number of machines

Algorithm TaskSchedule(T)
{
  m := 0;   // no. of machines
  while (T is not empty) do
  {
    remove task i with smallest si;
    if (there's a free machine j for i) then
      schedule i on machine j;
    else
    {
      m := m + 1;
      schedule i on machine m;
    }
  }
}
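The pseudocode above can be sketched in Python as follows; it tracks when each machine becomes free and is checked against the slide 30 example:

```python
def task_schedule(tasks):
    """tasks: list of (start, finish). Greedy by start time: put each task
    on any machine that is free at its start, else open a new machine.
    Returns the machine assigned to each task (in start order) and the
    number of machines used."""
    free_at = []            # finish time of the last task on each machine
    assignment = []
    for s, f in sorted(tasks):              # tasks by increasing start time
        for j, t in enumerate(free_at):
            if t <= s:                      # machine j is free at time s
                free_at[j] = f
                assignment.append(j + 1)
                break
        else:                               # no free machine: open a new one
            free_at.append(f)
            assignment.append(len(free_at))
    return assignment, len(free_at)

tasks = [(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]
assignment, machines = task_schedule(tasks)
```

On the slide's seven tasks this uses exactly three machines, as the timeline shows.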

  32. Minimum Cost Spanning Trees

  33. Spanning Tree • A Tree is a connected undirected graph that contains no cycles. • A Spanning Tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G. • Properties: • The spanning tree of an n-vertex undirected graph has exactly n-1 edges • It connects all the vertices in the graph • A spanning tree has no cycles [Figures: an undirected graph on A–E and some of its spanning trees]

  34. Minimum Cost Spanning Tree (MCST) • The tree among all the spanning trees with the lowest cost. • Applications: • Computer Networks - to find how to connect a set of computers using the minimum amount of wire • Shipping/Airplane Lines - to find the fastest way between locations [Figure: a weighted undirected graph and its MCST]

  35. Constructing a MCST • We shall examine two algorithms for solving the MCST problem: Kruskal's algorithm and Prim's algorithm. • Each can easily be made to run in time O(E lg V) using ordinary binary heaps. • By using Fibonacci heaps, Prim's algorithm can be sped up to run in time O(E + V lg V), which is an improvement if |V| is much less than |E|. • Both algorithms use a greedy approach to the problem. • This greedy strategy is captured by the following generic algorithm, which grows the minimum spanning tree one edge at a time.

Algorithm MCST(G, w)
{
  T := Ø;
  while (T does not form a MCST) do
  {
    find an edge (u,v) that is safe for T;
    T := T U {(u, v)};
  }
  return T;
}

  36. Kruskal's Algorithm • Mark each vertex as being in a set • Initially each vertex is in a set of its own • Sort the edges in increasing order of weight • Take the edges in the sorted order (smallest one first) • If it is Safe to add the edge, add it to the tree, without worrying about the overall structure • It is Safe to connect two vertices from different sets; no cycles will be formed. • Our implementation of Kruskal's algorithm uses a disjoint-set data structure to maintain several sets of elements. • Each set contains the vertices in a tree of the current forest. • The operation Find-Set(u) returns a representative element from the set that contains u. • Thus, we can determine whether two vertices u and v belong to the same tree by testing whether Find-Set(u) = Find-Set(v). • The combining of trees is accomplished by the Union() procedure.

  37. Kruskal's Algorithm (Contd.) • To implement a disjoint-set forest with the union-by-rank heuristic, we must keep track of ranks. • With each node x, we maintain the integer value rank[x], which is an upper bound on the height of x. • When a singleton set is created by Make-Set(), the initial rank of the single node in the corresponding tree is 0. • Each Find-Set() operation leaves all ranks unchanged. • When applying Union() to two trees, there are two cases, depending on whether the roots have equal rank. • If the ranks are unequal, we make the root of higher rank the parent of the root of lower rank, and the ranks remain unchanged. • If the ranks are equal, we arbitrarily choose one of the roots as the parent and increment its rank. • Let us put this method into pseudocode.

  38. Kruskal's Algorithm (Contd.)

Algorithm Make-Set(x)
{
  Π[x] := x;
  rank[x] := 0;
}

Algorithm Find-Set(x)
{
  if (x ≠ Π[x]) then
    Π[x] := Find-Set(Π[x]);
  return Π[x];
}

Algorithm Union(x, y)
{
  Link(Find-Set(x), Find-Set(y));
}

Algorithm Link(x, y)
{
  if (rank[x] > rank[y]) then
    Π[y] := x;
  else
  {
    Π[x] := y;
    if (rank[x] = rank[y]) then
      rank[y] := rank[y] + 1;
  }
}
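A Python sketch of the same four operations, with path compression folded into Find-Set (the class name is ours):

```python
class DisjointSet:
    """Disjoint-set forest with union by rank, following the
    Make-Set / Find-Set / Union / Link pseudocode above."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x        # x is its own representative
        self.rank[x] = 0

    def find_set(self, x):
        if self.parent[x] != x:
            # Path compression: point x straight at the root.
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):        # Link on the two roots
        x, y = self.find_set(x), self.find_set(y)
        if x == y:
            return
        if self.rank[x] > self.rank[y]:
            self.parent[y] = x
        else:
            self.parent[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1
```

Together, union by rank and path compression give the near-constant amortized α(V) cost per operation quoted in the analysis slide.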

  39. Kruskal's Algorithm (Contd.)

Algorithm MCST-Kruskal(G, w)
{
  T := Ø;
  for (each vertex v Є V[G]) do
    Make-Set(v);                  // Make separate sets for vertices
  sort the edges by increasing weight w;
  for (each edge (u,v) Є E, in sorted order) do
    if (Find-Set(u) ≠ Find-Set(v)) then
    {                             // if no cycles are formed
      T := T U {(u,v)};           // Add edge to Tree
      Union(u,v);                 // Combine Sets
    }
  return T;
}

• The MCST resulting from this algorithm is optimal.
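A compact runnable sketch of MCST-Kruskal; it inlines a simple find/union on a parent dictionary rather than the full ranked forest. The edge weights below are assumed for illustration, chosen so the sorted order reproduces the (f,d), (b,e), (c,d), (a,b), (b,c), (e,d), (a,f) order used in the following illustration slides:

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) for an undirected graph.
    Returns the MCST edges in the order Kruskal's algorithm adds them."""
    parent = {v: v for v in vertices}     # each vertex in a set of its own

    def find(x):                          # representative of x's set
        while parent[x] != x:
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):         # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # different sets: no cycle formed
            tree.append((u, v))
            parent[ru] = rv               # combine the two sets
    return tree

# Assumed weights matching the illustration's sorted edge order.
edges = [(4, 'a', 'b'), (4, 'b', 'c'), (2, 'c', 'd'), (1, 'f', 'd'),
         (2, 'b', 'e'), (6, 'e', 'd'), (8, 'a', 'f')]
tree = kruskal('abcdef', edges)
```

The result reproduces the walkthrough below: the first five edges are added and (e,d), (a,f) are rejected because their endpoints are already in the same set.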

  40. Illustration [Figure: a weighted six-vertex graph on a, b, c, d, e, f] Initially T = Ø, Sets: {a}, {b}, {c}, {d}, {e}, {f}. E (sorted in ascending order of weight): (f,d), (b,e), (c,d), (a,b), (b,c), (e,d), (a,f). Step 1: Take (f, d); Set(f) ≠ Set(d) => Add (f, d) to T, Combine Set(f) & Set(d); T = {(f, d)}. Sets: {a}, {b}, {c}, {e}, {f, d}.

  41. Illustration (Contd.) Step 2: Take (b, e); Set(b) ≠ Set(e) => Add (b, e) to T, Combine Set(b) & Set(e); T = {(f, d), (b, e)}. Sets: {a}, {b, e}, {c}, {f, d}.

  42. Illustration (Contd.) Step 3: Take (c, d); Set(c) ≠ Set(d) => Add (c, d) to T, Combine Set(c) & Set(d); T = {(f, d), (b, e), (c, d)}. Sets: {a}, {b, e}, {f, d, c}.

  43. Illustration (Contd.) Step 4: Take (a, b); Set(a) ≠ Set(b) => Add (a, b) to T, Combine Set(a) & Set(b); T = {(f, d), (b, e), (c, d), (a, b)}. Sets: {b, e, a}, {f, d, c}.

  44. Illustration (Contd.) Step 5: Take (b, c); Set(b) ≠ Set(c) => Add (b, c) to T, Combine Set(b) & Set(c); T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

  45. Illustration (Contd.) Step 6: Take (e, d); Set(e) = Set(d) => Ignore. T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

  46. Illustration (Contd.) Step 7: Take (a, f); Set(a) = Set(f) => Ignore. T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

  47. Analysis of Kruskal's Algorithm • The running time of Kruskal's algorithm for a graph G = (V, E): • Initializing the set T takes O(1) time. • The cost of the |V| Make-Set() operations is O(V). • The time to sort the edges is O(E lg E). • The for loop performs O(E) Find-Set() and Union() operations on the disjoint-set forest. • Along with the |V| Make-Set() operations, these take a total of O((V+E) α(V)) time, where α is a very slowly growing function. • Since G is assumed to be connected, we have |E| ≥ |V| - 1, so the disjoint-set operations take O(E α(V)) time. • Since α(|V|) = O(lg V) = O(lg E), the total running time of Kruskal's algorithm is O(E lg E). • Observing that |E| ≤ |V|², we have lg |E| = O(lg V). • So, we can restate the running time of Kruskal's algorithm as O(E lg V).

  48. Prim's Algorithm • Pick any vertex v • Choose the shortest edge from v to any other vertex w • Add the edge (v,w) to the MCST • At every step, continue to add the shortest edge from a vertex in the MCST to a vertex outside, without worrying about the overall structure • Stop when all the vertices are in the MCST [Figure: a weighted six-vertex graph and the MCST grown by Prim's algorithm]

  49. Prim's Algorithm (Contd.) • Q - Priority Queue, r - Starting vertex • Key[v] - Key of vertex v, π[v] - Parent of vertex v • Adj[v] - Adjacency list of v

Algorithm MST-Prim(G, w, r)
{
  Q := V[G];                  // Initially Q holds all vertices
  for (each u Є Q) do
  {
    Key[u] := ∞;              // Initialize all Keys to ∞
    π[u] := Nil;
  }
  Key[r] := 0;
  while (Q ≠ Ø) do
  {
    u := Extract_min(Q);      // Get the min key node
    for (each v Є Adj[u]) do
      if (v Є Q and w(u,v) < Key[v]) then
      {
        π[v] := u;
        Key[v] := w(u,v);
      }
  }
}

• The MCST resulting from this algorithm is optimal.
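A Python sketch of MST-Prim using the standard library's binary heap. Since `heapq` has no decrease-key, the usual workaround is lazy deletion: push a fresh entry when a key improves and skip stale entries when popped. The demo graph's weights are assumed, matching the Kruskal illustration:

```python
import heapq

def prim(adj, r):
    """adj: {vertex: [(neighbour, weight)]}; r: starting vertex.
    Returns the parent of each tree vertex and the MCST's total weight."""
    in_tree = set()
    parent = {}
    heap = [(0, r, None)]              # (Key, vertex, parent)
    total = 0
    while heap:
        key, u, p = heapq.heappop(heap)   # Extract_min
        if u in in_tree:
            continue                      # stale entry: skip it
        in_tree.add(u)
        parent[u] = p
        total += key
        for v, w in adj[u]:
            if v not in in_tree:          # lazy decrease-key: just push
                heapq.heappush(heap, (w, v, u))
    return parent, total

# Assumed weights, consistent with the Kruskal illustration above.
edges = [('a', 'b', 4), ('b', 'c', 4), ('c', 'd', 2), ('f', 'd', 1),
         ('b', 'e', 2), ('e', 'd', 6), ('a', 'f', 8)]
adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))
parent, total = prim(adj, 'a')
```

Each edge is pushed at most twice, so this runs in O(E lg V), the binary-heap bound quoted on slide 35.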

  50. Illustration [Figure: the weighted six-vertex graph] Initially, all keys are ∞ and Key[a] = 0. Step 1: Extract_Min(Q) => a. b Є Q and Key[b] > w(a, b) => π[b] = a, Key[b] = 4. f Є Q and Key[f] > w(a, f) => π[f] = a, Key[f] = 8.
