
Algorithms analysis and design






Presentation Transcript


  1. Algorithms analysis and design By Lecturer: Aisha Dawood

  2. Interval Trees • The problem: maintain a set of intervals • E.g., time intervals for a scheduling program [figure: intervals [7,10], [5,11], [17,19], [4,8], [15,18], [21,23] on a number line; for i = [7,10], i→low = 7 and i→high = 10]

  3. Review: Interval Trees • The problem: maintain a set of intervals • E.g., time intervals for a scheduling program • Query: find an interval in the set that overlaps a given query interval • [14,16] → [15,18] • [16,19] → [15,18] or [17,19] • [12,14] → NULL [figure: the same intervals [7,10], [5,11], [17,19], [4,8], [15,18], [21,23]; i = [7,10] with i→low = 7, i→high = 10]
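The overlap test behind these query answers can be sketched directly; this is a minimal Python sketch assuming closed intervals, as in the slide's examples:

```python
def overlap(i, j):
    """Closed intervals i = (low, high) and j = (low, high) overlap
    iff each one starts no later than the other ends."""
    return i[0] <= j[1] and j[0] <= i[1]

print(overlap((14, 16), (15, 18)))  # True
print(overlap((16, 19), (17, 19)))  # True
print(overlap((12, 14), (15, 18)))  # False
```

This matches the slide's three query examples: [14,16] and [16,19] find overlapping intervals, [12,14] does not.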

  4. Review: Interval Trees • Following the methodology: • Pick underlying data structure • Red-black trees will store intervals, keyed on i→low • Decide what additional information to store • Store the maximum endpoint in the subtree rooted at each node • Figure out how to maintain the information • Update max as we traverse down during insert • Recalculate max after delete with a traversal up the tree • Develop the desired new operations

  5. Review: Interval Trees [figure: tree with each node labeled int / max — root [17,19] max 23; left child [5,11] max 18, with children [4,8] max 8 and [15,18] max 18; [15,18] has left child [7,10] max 10; right child of the root is [21,23] max 23] Note that: a node's max = max(its own high endpoint, left child's max, right child's max)

  6. Review: Searching Interval Trees IntervalSearch(T, i) { x = T->root; while (x != NULL && !overlap(i, x->interval)) if (x->left != NULL && x->left->max >= i->low) x = x->left; else x = x->right; return x; } • What will be the running time?
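A runnable sketch of IntervalSearch on the tree from the earlier slide. A plain BST node is used here instead of a red-black tree, which does not change the search logic, only the balance guarantee:

```python
class Node:
    def __init__(self, low, high, left=None, right=None):
        self.low, self.high = low, high
        self.left, self.right = left, right
        # max endpoint anywhere in this subtree
        self.max = max(high,
                       left.max if left else float("-inf"),
                       right.max if right else float("-inf"))

def interval_search(root, low, high):
    x = root
    # descend while x exists and does not overlap the query [low, high]
    while x is not None and not (x.low <= high and low <= x.high):
        if x.left is not None and x.left.max >= low:
            x = x.left
        else:
            x = x.right
    return x

# The tree from the slides, keyed on the low endpoint
tree = Node(17, 19,
            left=Node(5, 11,
                      left=Node(4, 8),
                      right=Node(15, 18, left=Node(7, 10))),
            right=Node(21, 23))

hit = interval_search(tree, 14, 16)
print((hit.low, hit.high))            # (15, 18)
print(interval_search(tree, 12, 14))  # None
```

The query [14,16] walks left from [17,19] to [5,11], then right to [15,18], matching the slide's example answers.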

  7. Review: Correctness of IntervalSearch() • Key idea: need to check only 1 of a node's 2 children • Case 1: search goes right • Show that there is an overlap in the right subtree, or no overlap at all • Case 2: search goes left • Show that there is an overlap in the left subtree, or no overlap at all

  8. Correctness of IntervalSearch() • Case 1: if the search goes right, there is an overlap in the right subtree or no overlap in either subtree • If there is an overlap in the right subtree, we're done • Otherwise: • x→left = NULL, or x→left→max < i→low (Why?) • Thus, no overlap in the left subtree! while (x != NULL && !overlap(i, x->interval)) if (x->left != NULL && x->left->max >= i->low) x = x->left; else x = x->right; return x;

  9. Review: Correctness of IntervalSearch() • Case 2: if the search goes left, there is an overlap in the left subtree or no overlap in either subtree • If there is an overlap in the left subtree, we're done • Otherwise: • i→low ≤ x→left→max, by the branch condition • x→left→max = y→high for some y in the left subtree • Since i and y don't overlap and i→low ≤ y→high, we must have i→high < y→low • Since the tree is sorted by low endpoints, i→high < any low in the right subtree • Thus, no overlap in the right subtree while (x != NULL && !overlap(i, x->interval)) if (x->left != NULL && x->left->max >= i->low) x = x->left; else x = x->right; return x;

  10. Graphs • A graph G = (V, E) • V = set of vertices • E = set of edges = subset of V × V • Thus |E| = O(|V|²)

  11. Graph Variations • Variations: • A connected graph has a path from every vertex to every other • In an undirected graph: • Edge (u,v) = edge (v,u) • No self-loops • In a directed graph: • Edge (u,v) goes from vertex u to vertex v, notated u→v

  12. Graph Variations • More variations: • A weighted graph associates weights with either the edges or the vertices • E.g., a road map: edges might be weighted with distance • A multigraph allows multiple edges between the same vertices • E.g., the call graph in a program (a function can get called from multiple points in another function)

  13. Graphs • We will typically express running times in terms of |E| and |V| (often dropping the | |'s) • If |E| ≈ |V|² the graph is dense • If |E| ≈ |V| the graph is sparse • If you know you are dealing with dense or sparse graphs, different data structures may make sense

  14. Representing Graphs • Assume V = {1, 2, …, n} • An adjacency matrix represents the graph as an n × n matrix A: • A[i, j] = 1 if edge (i, j) ∈ E (or the weight of the edge), = 0 if edge (i, j) ∉ E
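As a sketch, here is that definition applied to a small directed graph. The edge set (1,2), (1,3), (2,3), (4,3) is taken from the adjacency-list slide later in the deck, not from the figure on this slide, so treat it as an assumed example:

```python
n = 4
edges = [(1, 2), (1, 3), (2, 3), (4, 3)]  # edge set assumed from the adjacency-list slide

# A[i][j] = 1 if edge (i+1, j+1) is in E, else 0 (0-based rows/columns)
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u - 1][v - 1] = 1

for row in A:
    print(row)
# [0, 1, 1, 0]
# [0, 0, 1, 0]
# [0, 0, 0, 0]
# [0, 0, 1, 0]
```

For a weighted graph the same loop would store the edge weight instead of 1.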

  15. Graphs: Adjacency Matrix • Example: [figure: a directed graph on vertices 1–4 with edges labeled a–d]

  16. Graphs: Adjacency Matrix • Example: [figure: the same graph with its adjacency matrix filled in]

  17. Graphs: Adjacency Matrix • How much storage does the adjacency matrix require? • A: O(V²) • What is the minimum amount of storage needed by an adjacency matrix representation of an undirected graph with 4 vertices? • A: 6 bits (the 4·3/2 entries above the diagonal) • Undirected graph ⇒ matrix is symmetric • No self-loops ⇒ don't need the diagonal

  18. Graphs: Adjacency List • Adjacency list: for each vertex v ∈ V, store a list of vertices adjacent to v • Example: • Adj[1] = {2,3} • Adj[2] = {3} • Adj[3] = {} • Adj[4] = {3} • Variation: can also keep a list of edges coming into each vertex [figure: the directed graph on vertices 1–4]
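The example above can be built directly from the edge set; a minimal Python sketch:

```python
from collections import defaultdict

edges = [(1, 2), (1, 3), (2, 3), (4, 3)]  # the slide's Adj lists as an edge set

adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)   # directed: record outgoing edges only

print(adj[1])  # [2, 3]
print(adj[2])  # [3]
print(adj[3])  # []
print(adj[4])  # [3]
```

Using `defaultdict(list)` makes Adj[3] come back as an empty list rather than raising, matching the slide's "Adj[3] = {}".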

  19. Graphs: Adjacency List • How much storage is required? • The degree of a vertex v = # of incident edges • Directed graphs have in-degree and out-degree • For directed graphs, # of items in the adjacency lists is Σ out-degree(v) = |E| • For undirected graphs, # of items in the adjacency lists is Σ degree(v) = 2|E| (handshaking lemma)
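The handshaking lemma can be checked directly on a small undirected graph (the edge set here is a hypothetical example, chosen only for illustration):

```python
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]  # hypothetical undirected graph

# each undirected edge contributes 1 to the degree of both endpoints
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

print(sum(degree.values()))  # 8, which is 2 * |E|
```

This is exactly why undirected adjacency lists hold 2|E| items in total: each edge appears in both endpoints' lists.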

  20. Graph Searching • Given: a graph G = (V, E), directed or undirected • Goal: methodically explore every vertex and every edge • Ultimately: build a tree on the graph • Pick a vertex as the root • Choose certain edges to produce a tree • Note: might also build a forest if graph is not connected 64

  21. Breadth-First Search • “Explore” a graph, turning it into a tree • One vertex at a time • Expand frontier of explored vertices across the breadth of the frontier • Builds a tree over the graph • Pick a source vertex to be the root • Find (“discover”) its children, then their children, etc. 64

  22. Breadth-First Search • Again we will associate vertex "colors" to guide the algorithm • White vertices have not been discovered • All vertices start out white • Gray vertices are discovered but not fully explored • They may be adjacent to white vertices • Black vertices are discovered and fully explored • They are adjacent only to black and gray vertices • Explore vertices by scanning the adjacency lists of gray vertices

  23. Breadth-First Search BFS(G, s) { initialize vertices; Q = {s}; // Q is a FIFO queue, initialized to s while (Q not empty) { u = RemoveTop(Q); for each v ∈ u->adj { if (v->color == WHITE) { v->color = GREY; v->d = u->d + 1; v->p = u; Enqueue(Q, v); } } u->color = BLACK; } } v->d is the distance from s; v->p is the parent in the BFS tree
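A runnable sketch of this BFS on the r..y example traced in the next slides. The edge set is assumed from the standard textbook figure the trace appears to follow, so treat it as a reconstruction:

```python
from collections import deque

# Undirected edges assumed from the example figure (CLRS-style)
edges = [("r", "s"), ("r", "v"), ("s", "w"), ("w", "t"), ("w", "x"),
         ("t", "u"), ("t", "x"), ("x", "u"), ("x", "y"), ("u", "y")]

adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def bfs(source):
    dist = {source: 0}            # v->d values; "white" means not in dist
    q = deque([source])
    while q:
        u = q.popleft()           # RemoveTop(Q)
        for v in adj[u]:
            if v not in dist:     # v is white: discover it
                dist[v] = dist[u] + 1
                q.append(v)       # Enqueue(Q, v)
    return dist

d = bfs("s")
print(d["u"], d["y"], d["v"])  # 3 3 2
```

The resulting distances (r=1, t=2, u=3, v=2, w=1, x=2, y=3) match the numbers that appear in the example slides.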

  24. Breadth-First Search: Example [figure: graph on vertices r, s, t, u (top row) and v, w, x, y (bottom row); all distances ∞]

  25. Breadth-First Search: Example [s = 0; all others ∞] Q: s

  26. Breadth-First Search: Example [r = 1, s = 0, w = 1] Q: w r

  27. Breadth-First Search: Example [r = 1, s = 0, t = 2, w = 1, x = 2] Q: r t x

  28. Breadth-First Search: Example [r = 1, s = 0, t = 2, v = 2, w = 1, x = 2] Q: t x v

  29. Breadth-First Search: Example [r = 1, s = 0, t = 2, u = 3, v = 2, w = 1, x = 2] Q: x v u

  30. Breadth-First Search: Example [r = 1, s = 0, t = 2, u = 3, v = 2, w = 1, x = 2, y = 3] Q: v u y

  31. Breadth-First Search: Example [all distances assigned] Q: u y

  32. Breadth-First Search: Example [all distances assigned] Q: y

  33. Breadth-First Search: Example [all distances assigned] Q: Ø

  34. BFS: The Code Again BFS(G, s) { initialize vertices; Q = {s}; while (Q not empty) { u = RemoveTop(Q); for each v ∈ u->adj { if (v->color == WHITE) { v->color = GREY; v->d = u->d + 1; v->p = u; Enqueue(Q, v); } } u->color = BLACK; } } What will be the running time? • Touch every vertex: O(V) • u = every vertex, but only once: O(V) • So v = every vertex that appears in some other vertex's adjacency list: O(E) • Total running time: O(V+E)

  35. Breadth-First Search: Properties • BFS calculates the shortest-path distance from the source node • Shortest-path distance δ(s,v) = minimum number of edges from s to v • BFS builds a breadth-first tree, in which paths to the root represent shortest paths in G • Thus we can use BFS to calculate the shortest path from one vertex to another in O(V+E) time

  36. Depth-First Search • Depth-first search is another strategy for exploring a graph • Explore “deeper” in the graph whenever possible • Edges are explored out of the most recently discovered vertex v that still has unexplored edges • When all of v’s edges have been explored, backtrack to the vertex from which v was discovered 64

  37. Depth-First Search • Vertices initially colored white • Then colored gray when discovered • Then black when finished 64

  38. Depth-First Search: The Code DFS(G) { for each vertex u ∈ G->V { u->color = WHITE; } time = 0; for each vertex u ∈ G->V { if (u->color == WHITE) DFS_Visit(u); } } DFS_Visit(u) { u->color = GREY; time = time+1; u->d = time; for each v ∈ u->Adj[] { if (v->color == WHITE) DFS_Visit(v); } u->color = BLACK; time = time+1; u->f = time; } u->d: discovery time, u->f: finishing time
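A runnable sketch of this time-stamped DFS, applied to a small hypothetical graph (the graph is chosen for illustration, not from the deck's example figure):

```python
def dfs(vertices, adj):
    color = {u: "WHITE" for u in vertices}
    d, f = {}, {}          # discovery and finishing times
    time = 0

    def visit(u):
        nonlocal time
        color[u] = "GREY"
        time += 1
        d[u] = time        # u->d = time
        for v in adj.get(u, []):
            if color[v] == "WHITE":
                visit(v)
        color[u] = "BLACK"
        time += 1
        f[u] = time        # u->f = time
    
    # the outer loop restarts DFS from every still-white vertex
    for u in vertices:
        if color[u] == "WHITE":
            visit(u)
    return d, f

# hypothetical graph: path 1 -> 2 -> 3, plus an isolated vertex 4
d, f = dfs([1, 2, 3, 4], {1: [2], 2: [3]})
print(d)  # {1: 1, 2: 2, 3: 3, 4: 7}
print(f)  # {3: 4, 2: 5, 1: 6, 4: 8}
```

Note the nesting of the intervals [d, f]: vertex 3's interval (3,4) sits inside vertex 2's (2,5), which sits inside vertex 1's (1,6), while the isolated vertex 4 gets a disjoint interval (7,8) from the outer loop.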

  39. Depth-First Search: The Code (same code as the previous slide) Running time: O(n²) at first glance, because we call DFS_Visit on each vertex, and the loop over Adj[] can run as many as |V| times

  40. Depth-First Search: The Code (same code as the previous slides) BUT there is actually a tighter bound. How many times will DFS_Visit() actually be called?

  41. Depth-First Search: The Code (same code as the previous slides) So, the running time of DFS = O(V+E)

  42. Depth-First Search Analysis • This running-time argument is an example of aggregate (amortized) analysis • "Charge" the exploration of each edge to the edge itself: • Each iteration of the loop in DFS_Visit can be attributed to an edge in the graph • Runs once per edge in a directed graph, twice per edge in an undirected graph • Thus the loop runs in O(E) total time, and the algorithm in O(V+E)

  43. DFS Example [figure: the example graph; source vertex marked]

  44. DFS Example [nodes labeled d | f; source vertex discovered: d = 1]

  45. DFS Example [next vertex discovered: d = 2]

  46. DFS Example [next vertex discovered: d = 3]

  47. DFS Example [the vertex with d = 3 finishes: f = 4]

  48. DFS Example [another vertex discovered: d = 5]

  49. DFS Example [that vertex finishes: f = 6]

  50. DFS Example [backtracking: the d = 2 vertex finishes with f = 7, then the source vertex with f = 8]
