This lecture discusses the interval partitioning algorithm: scheduling lectures so that no two overlap in the same room while minimizing the number of classrooms used. Key concepts include the depth of a set of overlapping intervals, a greedy algorithm that assigns each lecture a compatible classroom, and proofs of optimality. A key observation is that the number of classrooms required is at least the maximum depth of the overlapping intervals.
Analysis of Algorithms, CS 477/677 • Instructor: Monica Nicolescu • Lecture 18
Interval Partitioning • Lecture j starts at sj and finishes at fj • Goal: find the minimum number of classrooms needed to schedule all lectures so that no two occur at the same time in the same room • Ex: [timeline figure: lectures a–j between 9:00 and 4:30] this schedule uses 4 classrooms to schedule 10 lectures CS 477/677 - Lecture 18
Interval Partitioning • Lecture j starts at sj and finishes at fj • Goal: find the minimum number of classrooms needed to schedule all lectures so that no two occur at the same time in the same room • Ex: [timeline figure: the same 10 lectures rearranged] this schedule uses only 3 classrooms
Interval Partitioning: Lower Bound on Optimal Solution • The depth of a set of open intervals is the maximum number of intervals that contain any single point in time • Key observation: the number of classrooms needed ≥ depth • Ex: the depth of the schedule below is 3 (lectures a, b, c all contain 9:30), so the 3-classroom schedule is optimal • Does there always exist a schedule using a number of classrooms equal to the depth of the intervals? [timeline figure: the 3-classroom schedule]
Greedy Strategy • Consider lectures in increasing order of start time: assign each lecture to any compatible classroom • Use a label set {1, 2, 3, …, d}, where d is the depth of the set of intervals • Overlapping intervals are given different labels • Assign a label that has not been assigned to any previous interval that overlaps the current one
Greedy Algorithm • Sort the intervals by start time, so that s1 ≤ s2 ≤ … ≤ sn (let I1, I2, …, In denote the intervals in this order) • for j = 1 to n • Exclude from the set {1, 2, …, d} the labels of preceding intervals Ii that overlap Ij • if some label in {1, 2, …, d} was not excluded, assign that label to Ij • else • leave Ij unlabeled
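The labeling algorithm above can be sketched in Python. The min-heap of free classrooms keyed on finish time is an implementation choice not in the slides; it replaces the explicit exclusion step while assigning the same kind of labels, in O(n log n) time.

```python
import heapq

def partition_intervals(intervals):
    """Assign each (start, finish) interval a classroom label.

    Greedy: process intervals in increasing order of start time;
    reuse a classroom whose last lecture has already finished,
    otherwise open a new classroom (the depth has increased).
    """
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    free = []                    # heap of (finish_of_last_lecture, label)
    labels = [None] * len(intervals)
    rooms = 0
    for i in order:
        s, f = intervals[i]
        if free and free[0][0] <= s:   # some classroom is compatible
            _, room = heapq.heappop(free)
        else:                          # no compatible room: open a new one
            rooms += 1
            room = rooms
        labels[i] = room
        heapq.heappush(free, (f, room))
    return labels, rooms
```

For example, the intervals (0,3), (1,4), (2,5), (4,7) have depth 3 (the first three all contain time 2.5), and the algorithm uses exactly 3 rooms, reusing room 1 for the last interval.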
Example [timeline figures: the ten lectures a–j between 9:00 and 4:30, and the same lectures assigned to classroom labels 1–3]
Claim • Every interval will be assigned a label • For interval Ij, suppose there are t intervals earlier in the sorted order that overlap it • Each of these starts no later than Ij and overlaps it, so together with Ij we have t + 1 intervals passing over a common point on the timeline • Therefore t + 1 ≤ d, so t ≤ d – 1 • At least one of the d labels is not excluded by this set of t intervals, and we can assign that label to Ij
Claim • No two overlapping intervals are assigned the same label • Consider overlapping intervals I and I′, where I precedes I′ in the sorted order • When I′ is considered, the label of I is excluded from consideration • Thus, the algorithm will assign I′ a label different from I's
Greedy Choice Property • The greedy algorithm schedules every interval on a resource, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed. • Proof: follows from the previous claims • This is a structural proof: • Discover a simple "structural" bound asserting that every possible solution must have a certain value • Then show that your algorithm always achieves this bound
Scheduling to Minimize Lateness • A single resource processes one job at a time • Job j requires tj units of processing time and is due at time dj • If j starts at time sj, it finishes at time fj = sj + tj • Lateness: ℓj = max { 0, fj – dj } • Goal: schedule all jobs to minimize the maximum lateness L = max j ℓj • Example:
  j:  1  2  3  4  5  6
  tj: 3  2  1  4  3  2
  dj: 6  8  9  9  14  15
[timeline figure: an example schedule of these jobs with max lateness = 6]
Greedy Algorithms • Greedy strategy: consider jobs in some order • [Shortest processing time first] Consider jobs in ascending order of processing time tj — counterexample:
  tj:  1   10
  dj: 100  10
(shortest-first runs job 1 first, making job 2 finish at time 11 and be late; the reverse order makes neither job late) • [Smallest slack] Consider jobs in ascending order of slack dj – tj — counterexample:
  tj:  1  10
  dj:  2  10
(smallest slack runs job 2 first, giving job 1 lateness 9; running job 1 first gives maximum lateness 1)
Greedy Algorithm • Greedy choice: earliest deadline first
  Sort the n jobs by deadline so that d1 ≤ d2 ≤ … ≤ dn
  t ← 0
  for j = 1 to n
      Assign job j to the interval [t, t + tj]
      sj ← t, fj ← t + tj
      t ← t + tj
  output the intervals [sj, fj]
• For the example (d1 = 6, d2 = 8, d3 = 9, d4 = 9, d5 = 14, d6 = 15), the resulting schedule has max lateness = 1
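The earliest-deadline-first pseudocode translates directly to Python; the (t, d) pair input format is an assumption for this sketch:

```python
def edf_schedule(jobs):
    """jobs: list of (processing_time, deadline) pairs.

    Earliest-deadline-first: sort by deadline, then run the jobs
    back to back with no idle time. Returns the list of scheduled
    intervals [s, f] and the maximum lateness L = max_j max(0, f_j - d_j).
    """
    intervals = []
    t = 0                       # current time
    max_lateness = 0
    for proc, due in sorted(jobs, key=lambda jd: jd[1]):
        s, f = t, t + proc
        intervals.append((s, f))
        max_lateness = max(max_lateness, f - due)
        t = f                   # no idle time
    return intervals, max_lateness
```

On the slide's example jobs (tj, dj) = (3,6), (2,8), (1,9), (4,9), (3,14), (2,15), this schedule achieves max lateness 1 (only the job due at 9 that finishes at 10 is late).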
Minimizing Lateness: No Idle Time • Observation: the greedy schedule has no idle time • Observation: there exists an optimal schedule with no idle time [timeline figures: a schedule of jobs with d = 4, 6, 12 containing a gap, and the same jobs compacted to remove the gap]
Minimizing Lateness: Inversions • An inversion in schedule S is a pair of jobs i and j such that di < dj but j is scheduled before i • Observation: the greedy schedule has no inversions
Greedy Choice Property • Optimal solution: di < dj but j scheduled before i (an inversion) • Greedy solution: i scheduled before j • After swapping the pair, job i finishes sooner, and job j finishes where i used to, so Lateness(j)GREEDY = fi – dj ≤ fi – di = Lateness(i)OPT (since di < dj) • Hence removing an inversion causes no increase in the maximum lateness
Greedy Analysis Strategies • Exchange argument: gradually transform any solution into the one found by the greedy algorithm without hurting its quality • Structural: discover a simple "structural" bound asserting that every possible solution must have a certain value, then show that your algorithm always achieves this bound • Greedy algorithm stays ahead: show that after each step of the greedy algorithm, its solution is at least as good as any other algorithm's
Coin Changing • Given currency denominations 1, 5, 10, 25, 100, devise a method to pay any amount to a customer using the fewest coins • Ex: 34¢ = 25 + 5 + 1 + 1 + 1 + 1 (6 coins) • Ex: $2.89 = 2 × 100 + 3 × 25 + 10 + 4 × 1 (10 coins)
Greedy Algorithm • Greedy strategy: at each iteration, add the coin of the largest value that does not take us past the amount to be paid
  Sort the coin denominations by value: c1 < c2 < … < cn
  S ← ∅   (S = set of coins selected)
  while (x > 0) {
      let k be the largest integer such that ck ≤ x
      if (k = 0)
          return "no solution found"
      x ← x – ck
      S ← S ∪ {k}
  }
  return S
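A minimal Python sketch of this largest-coin-first rule (returning the coins themselves rather than their indices, which is a presentation choice):

```python
def greedy_change(x, denominations=(1, 5, 10, 25, 100)):
    """Pay x cents using the greedy rule: repeatedly take the
    largest coin that does not exceed the remaining amount.
    Returns the list of coins used, or None if no coin fits."""
    coins = []
    for c in sorted(denominations, reverse=True):
        while x >= c:
            x -= c
            coins.append(c)
    return coins if x == 0 else None
```

With a 1-cent coin in the denomination set the loop always terminates with x = 0; for 34¢ it produces 25, 5, 1, 1, 1, 1 (6 coins), and for 289¢ it uses 10 coins.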
Greedy Choice Property • The algorithm is optimal for U.S. coinage: 1, 5, 10, 25, 100 — Change = Dl * 100 + Q * 25 + D * 10 + N * 5 + P • Consider the optimal way to make change when ck ≤ x < ck+1: greedy takes coin k • We claim that any optimal solution must also take coin k • If not, it needs enough coins of types c1, …, ck-1 to add up to x • The problem then reduces to coin-changing x – ck cents, which, by induction, the greedy algorithm solves optimally
Greedy Choice Property • The algorithm is optimal for U.S. coinage: 1, 5, 10, 25, 100 — Change = Dl * 100 + Q * 25 + D * 10 + N * 5 + P • Optimal solution: Dl Q D N P • Greedy solution: Dl′ Q′ D′ N′ P′ • Case: value < 5 • Both optimal and greedy use the same number of coins (all pennies) • Case: 5 (N) ≤ value < 10 (D) • Greedy uses one N and then pennies • If OPT does not use an N, it must use pennies for the entire amount ⇒ we could replace 5 P with 1 N for a better solution
Greedy Choice Property — Change = Dl * 100 + Q * 25 + D * 10 + N * 5 + P • Optimal solution: Dl Q D N P • Greedy solution: Dl′ Q′ D′ N′ P′ • Case: 10 (D) ≤ value < 25 (Q) • Greedy uses dimes (D's) • If OPT does not use D's, it needs either 2 coins (2 N), 6 coins (1 N and 5 P), or 10 coins (10 P) to cover each 10 cents • We could replace those with 1 D for a better solution
Greedy Choice Property — Change = Dl * 100 + Q * 25 + D * 10 + N * 5 + P • Optimal solution: Dl Q D N P • Greedy solution: Dl′ Q′ D′ N′ P′ • Case: 25 (Q) ≤ value < 100 (Dl) • Greedy picks at least one quarter (Q); suppose OPT does not • If OPT has no dimes: take enough of its nickels and pennies to make 25 cents and replace them with one quarter (Q) • If OPT has 2 or fewer dimes: it still uses at least 3 coins to cover any 25 cents, so we can replace 25 cents' worth of coins with 1 Q • If OPT has 3 or more dimes (e.g., 40 cents paid with 4 D): take the first 3 D and replace them with 1 Q and 1 N
Coin-Changing: US Postal Denominations • Observation: the greedy algorithm is sub-optimal for US postal denominations: $.01, .02, .03, .04, .05, .10, .20, .32, .40, .44, .50, .64, .65, .75, .79, .80, .85, .98, $1, $1.05, $2, $4.95, $5, $5.15, $18.30, $18.95 • Counterexample: 160¢ • Greedy: 105, 50, 5 (3 stamps) • Optimal: 80, 80 (2 stamps)
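The counterexample can be checked mechanically by comparing the greedy count against the true optimum from the classic coin-change dynamic program (the DP is a standard technique, not from these slides; only a subset of the denominations, in cents, is needed for the 160¢ case):

```python
def greedy_count(x, denoms):
    """Largest-denomination-first greedy; number of coins, or None."""
    n = 0
    for c in sorted(denoms, reverse=True):
        n += x // c
        x %= c
    return n if x == 0 else None

def optimal_count(x, denoms):
    """Fewest coins summing to x, by dynamic programming over amounts."""
    INF = float("inf")
    best = [0] + [INF] * x
    for v in range(1, x + 1):
        for c in denoms:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1
    return best[x]
```

With denominations including 5, 50, 80, and 105 cents, greedy pays 160¢ as 105 + 50 + 5 (3 coins) while the DP finds 80 + 80 (2 coins), confirming sub-optimality.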
Selecting Breakpoints • Road trip from Princeton to Palo Alto along a fixed route • Refueling stations at certain points along the way • Fuel capacity = C • Goal: make as few refueling stops as possible • Greedy strategy: go as far as you can before refueling [route figure: stations 1–7 between Princeton and Palo Alto, each gap at most C]
Greedy Algorithm
  Sort the breakpoints so that 0 = b0 < b1 < b2 < … < bn = L
  S ← {0}   (S = breakpoints selected; x = current location)
  x ← 0
  while (x < bn)
      let p be the largest integer such that bp ≤ x + C
      if (bp = x)
          return "no solution"
      x ← bp
      S ← S ∪ {p}
  return S
• Implementation: O(n log n) — sort once, then use binary search to select each breakpoint p
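The pseudocode above, with the binary search made explicit via `bisect`, can be sketched as:

```python
import bisect

def select_stops(b, C):
    """b: sorted breakpoint positions with b[0] = 0 and b[-1] = L.
    Greedy: from each stop, drive as far as capacity C allows.
    Returns the indices of the chosen breakpoints, or None if some
    gap exceeds C (no progress can be made)."""
    stops = [0]
    x = 0
    L = b[-1]
    while x < L:
        # largest p with b[p] <= x + C, found by binary search
        p = bisect.bisect_right(b, x + C) - 1
        if b[p] == x:          # cannot reach the next station
            return None
        x = b[p]
        stops.append(p)
    return stops
```

For breakpoints at 0, 2, 4, 6, 10 with C = 4, the greedy drives 0 → 4 → 6 → 10, skipping the station at 2; if a gap is wider than C (e.g., stations at 0, 1, 10 with C = 4), it correctly reports no solution.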
Greedy Choice Property • Let 0 = g0 < g1 < … < gp = L denote the set of breakpoints chosen by the greedy algorithm • Let 0 = f0 < f1 < … < fq = L denote the set of breakpoints in an optimal solution with f0 = g0, f1 = g1, …, fr = gr for the largest possible r • Note: gr+1 > fr+1 by the greedy choice of the algorithm — why doesn't the optimal solution drive a little further? • Conclusion: the greedy solution has the same number of breakpoints as the optimal one
Problem – Buying Licenses • Your company needs to buy licenses for n pieces of software • Licenses can be bought only one per month • Each license currently sells for $100 but becomes more expensive each month • The price of license j increases by a factor rj > 1 each month • License j will cost 100 · rj^t if bought t months from now • ri ≠ rj for licenses i ≠ j • In which order should the company buy the licenses to minimize the total amount of money spent?
Solution • Greedy choice: buy licenses in decreasing order of rate rj, i.e., r1 > r2 > r3 > … • Proof of the greedy choice property (exchange argument): • Optimal solution: … ri rj … with ri < rj (an inversion) • Greedy solution: … rj ri … • Cost contributed by the pair in the optimal solution: CO = 100 · ri^t + 100 · rj^(t+1) • Cost contributed by the pair in the greedy solution: CG = 100 · rj^t + 100 · ri^(t+1) • CG – CO = 100 · (rj^t + ri^(t+1) – ri^t – rj^(t+1)) < 0, because ri^(t+1) – ri^t < rj^(t+1) – rj^t, i.e., ri^t (ri – 1) < rj^t (rj – 1), which holds since 1 < ri < rj • OK! Swapping out the inversion only lowers the cost
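The claim can be sanity-checked in Python by computing the cost of every purchase order for a small instance (the brute-force check and the sample rates are illustrative assumptions, not from the slides):

```python
from itertools import permutations

def license_cost(rates):
    """Total cost of buying one license per month in the given order:
    a license with rate r bought t months from now costs 100 * r**t."""
    return sum(100 * r**t for t, r in enumerate(rates))

def best_order(rates):
    """Brute force over all purchase orders (sanity check only)."""
    return min(permutations(rates), key=license_cost)
```

For rates 1.1, 1.5, 1.3, the cheapest order found by brute force is indeed the decreasing one (1.5, 1.3, 1.1), matching the greedy choice.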
Graphs • Applications that involve not only a set of items, but also the connections between them • Maps • Schedules • Computer networks • Hypertext • Circuits
Graphs – Background • A graph is a set of nodes (vertices) with edges (links) between them • Notation: • G = (V, E) – graph • V = set of vertices, |V| = n • E = set of edges, |E| = m [figures: a directed graph, an undirected graph, and an acyclic graph on vertices 1–4]
Other Types of Graphs • A graph is connected if there is a path between every two vertices • A bipartite graph is an undirected graph G = (V, E) in which V = V1 ∪ V2 and there are edges only between vertices in V1 and vertices in V2 [figures: a connected graph, a disconnected graph, and a bipartite graph]
Graph Representation • Adjacency list representation of G = (V, E) • An array of |V| lists, one for each vertex in V • Each list Adj[u] contains all the vertices v such that there is an edge between u and v • Adj[u] contains the vertices adjacent to u (in arbitrary order) • Can be used for both directed and undirected graphs [figure: an undirected graph on vertices 1–5 and its adjacency lists]
Properties of Adjacency List Representation • Sum of the lengths of all the adjacency lists: • Directed graph: |E| — edge (u, v) appears only once, in u's list • Undirected graph: 2|E| — u and v appear in each other's adjacency lists, so edge (u, v) appears twice
Properties of Adjacency List Representation • Memory required: Θ(|V| + |E|) • Preferred when the graph is sparse: |E| ≪ |V|² • Disadvantage: no quick way to determine whether there is an edge between nodes u and v • Time to list all vertices adjacent to u: Θ(degree(u)) • Time to determine whether (u, v) ∈ E: O(degree(u))
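A minimal adjacency list builder in Python (using a dict of lists rather than a plain array, which is an implementation choice):

```python
from collections import defaultdict

def adjacency_list(edges, directed=False):
    """Build Adj[u] = list of neighbors of u.

    An undirected edge (u, v) is stored in both Adj[u] and Adj[v],
    so the total list length is 2|E|; a directed edge is stored once,
    giving total length |E|."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)
    return adj
```

Listing the neighbors of u takes time proportional to degree(u), while testing whether (u, v) is an edge requires scanning Adj[u], matching the costs above.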
Graph Representation • Adjacency matrix representation of G = (V, E) • Assume the vertices are numbered 1, 2, …, |V| • The representation consists of a |V| × |V| matrix A with aij = 1 if (i, j) ∈ E, and 0 otherwise • For undirected graphs the matrix A is symmetric: aij = aji, i.e., A = A^T [figure: an undirected graph on vertices 1–5 and its 5 × 5 adjacency matrix]
Properties of Adjacency Matrix Representation • Memory required: Θ(|V|²), independent of the number of edges in G • Preferred when • the graph is dense: |E| is close to |V|² • we need to quickly determine whether there is an edge between two vertices • Time to list all vertices adjacent to u: Θ(|V|) • Time to determine whether (u, v) ∈ E: Θ(1)
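The matrix representation is equally short to sketch (0-based vertex numbering here, versus 1-based on the slide):

```python
def adjacency_matrix(n, edges, directed=False):
    """n x n 0/1 matrix for vertices 0..n-1.

    Edge queries cost Theta(1): just read a[u][v].
    Memory is Theta(n^2) regardless of how many edges exist."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = 1
        if not directed:
            a[v][u] = 1        # undirected: keep the matrix symmetric
    return a
```

For an undirected graph the result is symmetric (A equals its transpose), as the slide notes.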
Weighted Graphs • Weighted graphs = graphs in which each edge has an associated weight w(u, v), given by a weight function w: E → R • Storing the weights of a graph: • Adjacency list: store w(u, v) along with vertex v in u's adjacency list • Adjacency matrix: store w(u, v) at location (u, v) in the matrix
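The adjacency-list variant with weights can be sketched with nested dicts (a common Python idiom; the slide's "store w(u, v) along with v" leaves the container unspecified):

```python
def weighted_adjacency(edges):
    """Store w(u, v) alongside each neighbor: Adj[u] is a dict
    mapping neighbor v to the weight w(u, v). Undirected, so each
    weight is recorded in both directions."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    return adj
```

A lookup like `adj[u][v]` then returns the edge weight directly, at the cost of a hash lookup rather than the adjacency matrix's array access.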
Searching in a Graph • Graph searching = systematically follow the edges of the graph so as to visit its vertices • Two basic graph-searching algorithms: • Breadth-first search • Depth-first search • The difference between them is the order in which they explore the unvisited edges of the graph • Graph algorithms are typically elaborations of the basic graph-searching algorithms
Breadth-First Search (BFS) • Input: • A graph G = (V, E) (directed or undirected) • A source vertex s ∈ V • Goal: explore the edges of G to "discover" every vertex reachable from s, taking the ones closest to s first • Output: • d[v] = distance (smallest number of edges) from s to v, for all v ∈ V • A "breadth-first tree" rooted at s that contains all reachable vertices
Breadth-First Search (cont.) • Keeping track of progress: • Color each vertex white, gray, or black • Initially, all vertices are white • A vertex becomes gray when it is discovered • After all its adjacent vertices have been discovered, the vertex becomes black • Use a FIFO queue Q to maintain the set of gray vertices [figure: the example graph colored at three stages of the search from the source]
Breadth-First Tree • BFS constructs a breadth-first tree • Initially it contains only the root (the source vertex s) • When vertex v is discovered while scanning the adjacency list of a vertex u, vertex v and edge (u, v) are added to the tree • u is the predecessor (parent) of v in the breadth-first tree • A vertex is discovered only once, so it has only one parent [figure: the example graph with its breadth-first tree edges highlighted]
BFS Additional Data Structures • G = (V, E) is represented using adjacency lists • color[u] – the color of vertex u, for all u ∈ V • π[u] – the predecessor of u • If u = s (the root) or u has not yet been discovered, π[u] = NIL • d[u] – the distance from the source s to vertex u • Use a FIFO queue Q to maintain the set of gray vertices [figure: the example graph annotated with d and π values for each vertex]
BFS(V, E, s)
  for each u ∈ V – {s}
      do color[u] ← WHITE
         d[u] ← ∞
         π[u] ← NIL
  color[s] ← GRAY
  d[s] ← 0
  π[s] ← NIL
  Q ← ∅
  Q ← ENQUEUE(Q, s)                    Q: s
[figure: vertices r, s, t, u, v, w, x, y after initialization; d[s] = 0, all other distances ∞]
BFS(V, E, s) (cont.)
  while Q ≠ ∅
      do u ← DEQUEUE(Q)
         for each v ∈ Adj[u]
             do if color[v] = WHITE
                    then color[v] ← GRAY
                         d[v] ← d[u] + 1
                         π[v] ← u
                         ENQUEUE(Q, v)
         color[u] ← BLACK
[figure: the first iterations on the example graph; the queue evolves Q: s → w → w, r]
Example [figure: nine snapshots of BFS on the example graph with vertices r, s, t, u, v, w, x, y; the final distances from s are d[s] = 0, d[r] = d[w] = 1, d[t] = d[v] = d[x] = 2, d[u] = d[y] = 3] • Queue contents over time: Q: s → w, r → r, t, x → t, x, v → x, v, u → v, u, y → u, y → y → ∅
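The BFS procedure above translates directly to Python; here `d[v] is None` plays the role of the white color, so separate color bookkeeping is unnecessary (an implementation simplification, not from the slides):

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s over adjacency lists adj.

    Returns (d, pi): the distance in edges from s to each vertex,
    and the breadth-first tree parent of each vertex.
    Unreachable vertices keep d = None (i.e., stay 'white')."""
    d = {u: None for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    q = deque([s])                  # FIFO queue of gray vertices
    while q:
        u = q.popleft()
        for v in adj[u]:
            if d[v] is None:        # white: first time discovered
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
        # u turns black here: all its neighbors have been discovered
    return d, pi
```

On the example graph with vertices r, s, t, u, v, w, x, y, running `bfs(adj, 's')` reproduces the distances shown in the slides: 0 for s, 1 for r and w, 2 for t, v, x, and 3 for u and y.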
Readings • Chapter 16