
Quincy: Fair Scheduling for Distributed Computing Clusters

Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar and Andrew Goldberg. Microsoft Research, Silicon Valley, Mountain View, CA, USA. {misard, vijayanp, jcurrey, uwieder, kunal, goldberg}@microsoft.com


Presentation Transcript


  1. Quincy: Fair Scheduling for Distributed Computing Clusters. Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar and Andrew Goldberg. Microsoft Research, Silicon Valley, Mountain View, CA, USA. {misard, vijayanp, jcurrey, uwieder, kunal, goldberg}@microsoft.com. Presented by 이남수 (fantalns@gmail.com) and 류한걸 (873131236@qq.com)

  2. Outline • Approach to the problem • Cluster architecture • Queue-based scheduling • Flow-based scheduling • Conclusion

  3. Approach to the problem • N computers, J jobs • Each job should be allocated at least N/J computers (e.g., with N = 100 computers and J = 4 jobs, each job should get at least 25 machines) • In practice, however, the jobs cannot share the other resources perfectly (e.g., the network)

  4. Cluster architecture • Key words • Job • Root task • Manages the job • Runs on one of the cluster computers • Monitors whether the job's worker tasks have completed or are ready to execute • Worker task • Performs the actual work of the job

  5. Cluster architecture (cont.) • w_j^n : the nth worker task of job j • r_j : the root task of job j • CS : core switch • RS : rack switch (each rack contains a switch)
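Scheduling costs in Quincy depend on where a task's data sits relative to this switch hierarchy, which forms a two-level tree: a core switch over rack switches over computers. Below is a minimal Python sketch of such a topology with a rack-locality test; the switch and computer names are illustrative assumptions, not taken from the paper.

# Core switch (CS) at the root, rack switches (RS) below it, computers in racks.
cluster = {
    "CS": {
        "RS1": ["C1", "C2", "C3"],
        "RS2": ["C4", "C5", "C6"],
    }
}

def same_rack(a, b, topology):
    """True if computers a and b hang off the same rack switch."""
    return any(a in machines and b in machines
               for machines in topology["CS"].values())

print(same_rack("C1", "C3", cluster))  # True: both behind RS1
print(same_rack("C1", "C4", cluster))  # False: traffic must cross the core switch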

  6. Cluster architecture (cont.) Ref. http://www.sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

  7. Queue-based scheduling [Figure: per-computer queues C1–C6, per-rack queues R1 and R2, and the cluster-wide queue X, each holding queued worker tasks w_j^n and root tasks r_j]

  8. Queue-based scheduling (cont.) • Scheduling algorithms • Baseline algorithm without fairness: when computer m becomes free, the first ready task on its computer queue C_m, if any, is dispatched to m; if C_m has no ready task, the first ready task on its rack queue R_l is dispatched to m; if neither C_m nor R_l contains a ready task, the first ready task, if any, on the cluster-wide queue X is dispatched to m. This algorithm is denoted "G", for "greedy" (see the sketch below) • Simple greedy fairness: implements the same fairness scheme as the Hadoop Fair Scheduler; denoted "GF" • Fairness with preemption: denoted "GFP"
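For concreteness, here is a minimal Python sketch of the greedy dispatch order just described: a freed computer takes work from its own queue first, then its rack's queue, then the cluster-wide queue. The queue names C, R, and X follow the slides; the function name, data structures, and sample tasks are illustrative assumptions, not the paper's implementation.

from collections import deque

def dispatch_greedy(m, rack_of, C, R, X):
    """Return the next task for freed computer m, preferring locality:
    m's own queue C[m], then its rack queue R[rack_of[m]], then X."""
    for queue in (C[m], R[rack_of[m]], X):
        if queue:
            return queue.popleft()  # first ready task at the best locality level
    return None                     # no ready task anywhere

# Example: C1's local queue is empty, so it pulls the rack-local task
# in preference to the cluster-wide one.
C = {"C1": deque(), "C2": deque(["w_1_3"])}
R = {"R1": deque(["w_1_1"])}
X = deque(["w_2_1"])
print(dispatch_greedy("C1", {"C1": "R1", "C2": "R1"}, C, R, X))  # -> w_1_1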

  9. Flow-based scheduling Ref. http://www.sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html

  10. Flow-based scheduling (cont.)

  11. Flow-based scheduling (cont.)

  12. Flow-based scheduling (cont.)

  13. Flow-based scheduling (cont.)

  14. Flow-based scheduling (cont.)

  15. Flow-based scheduling (cont.)

  16. Flow-based scheduling (cont.)

  17. Flow-based scheduling (cont.)

  18. Flow-based scheduling (cont.)

  19. Flow-based scheduling (cont.) • Min-cost flow • v : a node; e : an edge • Each edge e is annotated with a non-negative integer capacity y_e and a cost p_e • Each node v is annotated with an integer "supply" ε_v, where Σ_v ε_v = 0 • A feasible flow assigns a non-negative integer flow f_e ≤ y_e to each edge such that, for every node v, ε_v + Σ_{e ∈ I_v} f_e = Σ_{e ∈ O_v} f_e, where I_v is the set of incoming edges to v and O_v is the set of outgoing edges from v • A min-cost flow minimizes the total cost Σ_e p_e f_e (see the sketch below) • Ref. the IG Systems solver at www.igsystems.com
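To make the formulation concrete, the sketch below solves a tiny placement instance as a min-cost flow with networkx rather than the IG Systems solver the slide references. Each worker task supplies one unit of flow, each computer can absorb at most one task, and edge weights stand in for data-transfer cost; the node names, capacities, and weights are illustrative assumptions, not the paper's exact graph construction.

import networkx as nx

G = nx.DiGraph()

# Two worker tasks, each of which must be placed somewhere: one unit of supply
# each (in networkx, a negative demand is a supply).
for t in ["w_1_1", "w_1_2"]:
    G.add_node(t, demand=-1)

# The sink absorbs one unit per scheduled task.
G.add_node("sink", demand=2)

# Computers forward flow to the sink; capacity 1 = at most one task per machine.
for c in ["C1", "C2", "C3"]:
    G.add_edge(c, "sink", capacity=1, weight=0)

# Task -> computer edges; weight models the cost of moving the task's data.
G.add_edge("w_1_1", "C1", capacity=1, weight=0)  # data already local on C1
G.add_edge("w_1_1", "C2", capacity=1, weight=5)
G.add_edge("w_1_2", "C2", capacity=1, weight=1)
G.add_edge("w_1_2", "C3", capacity=1, weight=4)

# The min-cost flow routes each task's unit through its cheapest feasible computer.
flow = nx.min_cost_flow(G)
for t in ["w_1_1", "w_1_2"]:
    placed = [c for c, f in flow[t].items() if f > 0]
    print(t, "->", placed[0])  # w_1_1 -> C1, w_1_2 -> C2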

  20. Flow-based scheduling (cont.) • Controlling the fairness policy • Fair sharing with preemption: denoted "QFP" • Fair sharing without preemption: denoted "QF" • Unfair sharing with preemption: denoted "QP" • Unfair sharing without preemption: denoted "Q" (a sketch of the four policies as two switches follows)
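The four policies are the cross product of two independent switches, fairness and preemption. Below is a minimal sketch under that reading, reusing the N/J share goal from slide 3 for the fair-share cap; the flag names and the cap function are illustrative assumptions, not the paper's encoding (in the paper these policies are realized inside the flow graph itself).

# Fairness and preemption as two flags; "Q", "QF", "QP", "QFP" name the four combos.
POLICIES = {
    "Q":   dict(fair=False, preempt=False),
    "QP":  dict(fair=False, preempt=True),
    "QF":  dict(fair=True,  preempt=False),
    "QFP": dict(fair=True,  preempt=True),
}

def allocation_cap(policy, n_computers, n_jobs):
    """Max concurrent tasks per job: an even N/J share when fair, else unbounded."""
    if POLICIES[policy]["fair"]:
        return max(1, n_computers // n_jobs)
    return n_computers  # unfair: one job may occupy the whole cluster

print(allocation_cap("QF", 240, 4))  # -> 60
print(allocation_cap("Q", 240, 4))   # -> 240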

  21. Conclusion

  22. Conclusion (cont.) [Figures: experimental results comparing fair sharing without preemption and fair sharing with preemption]

  23. Q & A
