
Online Algorithms to Minimize Resource Reallocation and Network Communication



Presentation Transcript


  1. Online Algorithms to Minimize Resource Reallocation and Network Communication. Sashka Davis (UCSD), Jeff Edmonds (York University, Canada), Russell Impagliazzo (UCSD). APPROX and RANDOM 2006.

  2. Resource Allocation Problems [KKD02, PL95, IRSD99, Edm00] • Given: a multi-processor machine with T identical processors. • Problem: assign processors to parallel jobs whose requirements are evolving and malleable. • Goal: schedule the jobs, satisfy each job's processor requirement, and minimize preemption.

  3. The Weak Department Chair Problem. [Figure: users hold allocations of various sizes; one announces "I want 12!" and resources must be reshuffled to satisfy the demand.]

  4. RAP: Resource Allocation Problem. RAP Instance: • T identical processors. • n users. Input: (i, r_{t,i}) — at time t, user i requests r_{t,i} processors. Output: (l_{t,i}) — the algorithm must allocate l_{t,i} processors to user i, with l_{t,i} ≥ r_{t,i}. Constraints: ∑_i r_{t,i} ≤ T and ∑_i l_{t,i} ≤ T, for all t. Objective: minimize changes to the global state. Cost = |{(l_{t,i}, l_{t+1,i}) : l_{t,i} ≠ l_{t+1,i}}|. The algorithm is not notified when users' current demands fall below their current allocations.
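To make the cost measure concrete, here is a minimal Python sketch (with hypothetical names, not code from the paper) of how the reallocation cost of a schedule could be counted according to the definition above.

```python
def reallocation_cost(allocations):
    """Count reallocations: pairs of consecutive time steps where a
    user's allocation changes, i.e. l_{t,i} != l_{t+1,i}.

    `allocations` is a list of dicts, one per time step, mapping
    user id -> number of processors allocated at that step.
    """
    cost = 0
    for prev, curr in zip(allocations, allocations[1:]):
        for user in set(prev) | set(curr):
            if prev.get(user, 0) != curr.get(user, 0):
                cost += 1
    return cost

# Example: 3 users on T = 10 processors.
schedule = [
    {1: 4, 2: 3, 3: 3},
    {1: 4, 2: 5, 3: 1},   # users 2 and 3 changed -> cost 2
    {1: 4, 2: 5, 3: 1},   # nothing changed
]
assert reallocation_cost(schedule) == 2
```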

  5. The Strong Department Chair Problem. [Figure: one user announces "I want 30; if not, I take the penalty!", the chair replies "You can't have 30!", and the unsatisfied request incurs the penalty.]

  6. RAPP: Resource Allocation Problem with Penalties. RAPP Instance: • T identical processors. • n users. Input: (i, r_{t,i}, p_{t,i}) — at time t, user i requests r_{t,i} processors and states a penalty p_{t,i}. Output: (l_{t,i}) — allocate l_{t,i} processors to user i with l_{t,i} ≥ r_{t,i}, or do nothing. Constraints: ∑_i r_{t,i} ≤ T and ∑_i l_{t,i} ≤ T, for all t. Objective: minimize changes to the global state, i.e., reallocations. Cost = |{(l_{t,i}, l_{t+1,i}) : l_{t,i} ≠ l_{t+1,i}}| + ∑ p_{t,i}, where the sum runs over the requests the scheduler fails to satisfy. The algorithm is not notified when a user's current demand falls below its current allocation.
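The only difference from RAP's cost is the penalty term. A tiny illustrative sketch (hypothetical names, not from the paper) of one request's contribution to the RAPP cost:

```python
def rapp_step_cost(changes, satisfied, penalty):
    """Cost of one RAPP request: reallocations made at this step, plus the
    stated penalty if the scheduler chose not to satisfy the request.
    The satisfy-or-pay decision itself is the algorithmic question."""
    return changes + (0 if satisfied else penalty)

# Satisfying a request by changing 2 allocations costs 2;
# refusing a request with penalty 5 while touching nothing costs 5.
assert rapp_step_cost(2, True, 5) == 2
assert rapp_step_cost(0, False, 5) == 5
```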

  7. The Humble Chair Problem. [Figure: a user only complains "I want MORE!" without stating an exact demand, and the chair must decide how much extra to give.]

  8. RRAP: Restricted Resource Allocation Problem. RRAP Instance: • T identical processors. • n users. Input: (i) — at time t, user i complains. Output: (l_{t,i}), such that l_{t,i} ≥ l_{t-1,i}. Constraints: ∑_i l_{t,i} ≤ T, for all t. Objective: minimize changes to the global state, i.e., reallocations. Cost = |{(l_{t,i}, l_{t+1,i}) : l_{t,i} ≠ l_{t+1,i}}|. The algorithm never learns the precise demands, only an upper bound for each.

  9. Network Communication Problem [OLW01, CKA02, CYV06] • A central cache and a network of low-power sensors. • Sensors read values. • If the cache must know the values read exactly, then #sensor reads = #network transmissions. • Sensors are low-power devices, so we want to minimize network communication. • Solution: settle for approximation.

  10. TMAV: Transmission Minimizing Approximate Value Problem. [Figure: n sensors read values v_1, …, v_n; the central cache stores an interval [L_i, H_i] for each sensor i.] Constraints: T ≥ ∑_i (H_i - L_i) (the precision budget) and v_i ∈ [L_i, H_i], for all t and i. Objective: minimize network communication. Cost: the number of transmissions between the sensors and the cache.
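A minimal sketch of the kind of protocol TMAV models, assuming (as suggested by the figure) that a sensor only transmits when its fresh reading leaves the interval the cache currently holds for it. Names are hypothetical; how the cache reassigns intervals within the budget T is the actual algorithmic question.

```python
def sensor_update(value, low, high):
    """One sensor's step in a TMAV-style protocol: transmit to the cache only
    when the freshly read value leaves the interval [low, high] the cache
    currently holds for this sensor. Returns (transmitted, message)."""
    if low <= value <= high:
        return False, None   # cache's interval still valid: stay silent
    return True, value       # interval violated: report the value, pay 1 transmission

# A sensor with cached interval [10, 20] stays silent for readings inside it
# and transmits when a reading falls outside.
assert sensor_update(14, 10, 20) == (False, None)
assert sensor_update(23, 10, 20) == (True, 23)
```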

  11. Two Online Problems. [Diagram: RAP, RAPP, and RRAP minimize resource reallocation; TMAV minimizes network communication.] Common structure: a central controller maintains state and must satisfy the demands of many users. Objective: minimize changes to the state. A shared property: the online algorithms do NOT know the precise requirements of the users.

  12. Bi-criteria Online Algorithms • The adversary uses T resources/precision. • The algorithm: • uses sT resources/precision; • does not know the precise requirements of the users. Goal: find randomized, competitive online algorithms for RAP, RRAP, RAPP, and TMAV using the smallest possible s. When s = 1, the competitive ratio is infinite.

  13. Results: Upper Bounds • An O(log_s n)-competitive algorithm for RRAP, where s is a constant, s ≥ 3. • We modified the solution for RRAP to obtain algorithms with similar O(log_s n) competitive ratios for RAP, RAPP, and TMAV.

  14. Results: Lower Bounds • For s = 1, no competitive algorithm for RAP and TMAV exists. • We defined the notion of a competitive-ratio-preserving online reduction with respect to an adaptive online adversary, written "≤_AD_ON". • RAP ≤_AD_ON TMAV • RAP ≤_AD_ON RAPP

  15. Results: Lower Bounds Using Reductions. (h,k)-paging ≤_AD_ON RAP • No online algorithm using (1+ε) resources can achieve a competitive ratio better than Ω(1/ε) against an adaptive online adversary using resources of size 1. • No online algorithm using (1+ε) resources can achieve a competitive ratio better than Ω(log(1/ε)) against an oblivious adversary using resources of size 1.

  16. The Remainder of the Talk • Steal From the Rich – a randomized O(log_s n)-competitive algorithm for RRAP. • For s = 1, no competitive algorithm for RAP and TMAV exists.

  17. RRAP: Restricted Resource Allocation Problem. RRAP Instance: • T identical processors. • n users. Input: (i) — at time t, user i complains. Output: (l_{t,i}), such that l_{t,i} ≥ l_{t-1,i}. Constraints: ∑_i l_{t,i} ≤ T, for all t. Cost: the number of pairs (l_{t,i}, l_{t+1,i}) such that l_{t,i} ≠ l_{t+1,i}. The algorithm never learns the precise demands, only an upper bound for each.

  18. Steal From the Rich (SFR) Algorithm. [Figure: users 1, 2, …, n each start with sT/n resources.] Let s be a constant, and let r = Θ(√s) and μ be constants that depend on s but not on the instance. Initially, partition the sT resources evenly among the n users.

  19. Steal From the Rich Algorithm (one step). At time t+1, user j complains. SFR picks a user k from [n] - {j} with probability l_{t,k}/(sT - l_{t,j}) and steals δ from it: l_{t+1,k} ← l_{t,k} - δ; l_{t+1,j} ← l_{t,j} + δ. [Figure: SFR's allocations versus OPT's; every SFR allocation stays at least μT/n.]

  20. How Much to Steal from the Rich? SFR maintains the following invariants: • Every user keeps at least μT/n: l_{t+1,k} ≥ μT/n, hence δ ≤ l_{t,k} - μT/n. • l_{t+1,k} does not shrink by a factor of more than 1/r: l_{t+1,k} ≥ l_{t,k}/r, hence δ ≤ l_{t,k}(r-1)/r. • l_{t+1,j} does not grow by a factor of more than r: l_{t+1,j} ≤ r·l_{t,j}, hence δ ≤ l_{t,j}(r-1). Therefore δ = min{l_{t,k} - μT/n, l_{t,k}(r-1)/r, l_{t,j}(r-1)}.
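Putting slides 18–20 together, here is a minimal Python sketch of SFR's initialization and one update, under the stated assumptions (s, r = Θ(√s), and μ fixed in advance). It illustrates the sampling rule and the choice of δ; it is not the paper's implementation.

```python
import random

def sfr_init(n, s, T):
    """Slide 18: split the sT resources evenly among the n users."""
    return [s * T / n for _ in range(n)]

def sfr_step(alloc, j, s, T, r, mu):
    """Slides 19-20: user j complains; steal delta from a random 'rich' user k.

    k != j is chosen with probability alloc[k] / (sT - alloc[j]), and delta is
    capped so that k keeps at least mu*T/n, k shrinks by at most a factor r,
    and j grows by at most a factor r.
    """
    n = len(alloc)
    others = [k for k in range(n) if k != j]
    weights = [alloc[k] for k in others]          # Pr[k] proportional to alloc[k]
    k = random.choices(others, weights=weights)[0]

    delta = min(alloc[k] - mu * T / n,            # invariant: alloc[k] >= mu*T/n
                alloc[k] * (r - 1) / r,           # invariant: alloc[k] >= old alloc[k]/r
                alloc[j] * (r - 1))               # invariant: alloc[j] <= r * old alloc[j]
    alloc[k] -= delta
    alloc[j] += delta
    return alloc

# Example: 4 users, T = 8, s = 4, r = 2, mu = 1; user 0 complains.
alloc = sfr_init(4, 4, 8)
sfr_step(alloc, 0, s=4, T=8, r=2, mu=1)
```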

  21. SFR Analysis. We want to show that for any request sequence σ, E(SFR_s(σ)) ≤ O(log_s n)·OPT(σ) + d. Define a potential Φ: R^n × R^n → R^+ and the amortized cost a_t = SFR_t + (Φ_t - Φ_{t-1}). Then E(SFR_s(σ)) = E(∑_t SFR_t) = E(∑_t a_t) - Φ_end + Φ_0. We want to prove that: • Φ_t ≤ O(n log_s n), for all t; • E(a_t) ≤ O(log_s n)·OPT_t. Then Φ_0 ≤ O(n log_s n), and we can take d = O(n log_s n).
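The equality on slide 21 is the usual telescoping of the potential; written out (a routine step, using Φ ≥ 0 and the two bullet points above):

```latex
\begin{aligned}
\sum_t \mathrm{SFR}_t
  &= \sum_t a_t \;-\; \Phi_{\mathrm{end}} \;+\; \Phi_0
   \;\le\; \sum_t a_t \;+\; \Phi_0
   \qquad (\Phi_{\mathrm{end}} \ge 0),\\
\mathbb{E}\Bigl(\sum_t \mathrm{SFR}_t\Bigr)
  &\le \sum_t O(\log_s n)\,\mathrm{OPT}_t \;+\; O(n \log_s n)
   \;=\; O(\log_s n)\,\mathrm{OPT}(\sigma) \;+\; d.
\end{aligned}
```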

  22. SFR Potential Function • ΔΦ is small when SFR and OPT have proportional allocations. • When SFR incurs cost and OPT does not, ΔΦ is negative and compensates for SFR's actual cost.

  23. Amortized Update Cost. Goal: E(a_t) = E(SFR_t + ΔΦ_t) ≤ O(log_s n)·OPT_t. Case 1: OPT_t ≠ 0, SFR_t = 0. E(a_t) = E(0 + #changed intervals · O(log_s n)) ≤ O(log_s n)·OPT_t. Case 2: OPT_t = 0, SFR_t = 2. E(a_t) = E(2 + ΔΦ_t), so it suffices that E(ΔΦ_t) ≤ -2. In Case 2, one of the following happens: • l_{t,j} grows by a factor of r, and then ΔΦ_t ≤ -14; • l_{t,k} shrinks by a factor of 1/r, and then ΔΦ_t ≤ -14; • neither (δ = l_{t,k} - μT/n), and then ΔΦ_t ≥ 0 (an unfortunate but rare event). Concluding: E(SFR_s(σ)) ≤ O(log_s n)·OPT(σ) + d.

  24. The Additional Resource Is Vital. Theorem: there is no online algorithm using T resources that is f(n)-competitive against an adversary using T resources, for any function f. Consider RAP with 2 users and T = 1.

  25. r[0,1] S4,1<r S2,1<r S4,1<r S≥r S3,2=1-S S1,1< r If s=1 then competitive ratio is ∞ 1 0 user1 user2 • Adversary cost is 2. • Probability of incurring cost during t’th request is 1/8t. • The expected cost of the algorithm diverges as t goes to infinity. APPROX and RANDOM 2006

  26. Relating the Hardness of the Problems. [Diagram: SFR-based algorithms solve RRAP, RAP, RAPP, and TMAV; the reductions RAP ≤_AD_ON TMAV and RAP ≤_AD_ON RAPP relate the problems' hardness.]

  27. Conclusions • We obtained O(log_s n)-competitive algorithms for four different problems. • We justified the need for sT resources. • We defined a notion of online reduction with respect to an adaptive online adversary. • We related the hardness of the problems using online reductions. • We reduced (h,k)-Paging to RAP and transferred the standard paging lower bounds to the four problems.

  28. New Issues • We studied memoryless online algorithms that do not know the current demands exactly. • We used online reductions to leverage existing lower bounds and to relate the hardness of online problems.

  29. Open Problems • Close the gap between the upper and lower bounds. • Can competitive-ratio-preserving reductions with respect to an adaptive online adversary deliver lower bounds for other problems? • Do other problems have similar memoryless online solutions, where the algorithm does not know the demands exactly, but only an upper-bound approximation of them?
