Virtual Machine Hosting for Networked Clusters: Building the Foundations for “Autonomic” Orchestration


Presentation Transcript


  1. Virtual Machine Hosting for Networked Clusters: Building the Foundations for “Autonomic” Orchestration Based on the paper by Laura Grit, David Irwin, Aydan Yumerefendi, and Jeff Chase, Department of Computer Science, Duke University

  2. What is a “federated cluster”? And why do we need one?

  3. Problem
  • Cloud1: API1, Policy1, Etc.1
  • Cloud2: API2, Policy2, Etc.2
  • Cloud3: API3, Policy3, Etc.3

  4. How to…
  • write an application that uses resources of multiple clouds?
  • communicate between applications that run on different clouds?
  • migrate between clouds?

  5. User-end answer
  • Write an application that implements multiple APIs.
  • Write multiple applications and redeploy them when migration is needed.

  6. Cloud-provider answer: FEDERATION

  7. So what is a “federation”? A federation is multiple computing and/or network providers agreeing upon standards of operation in a collective fashion.

  8. In other words
  • Cloud1: API1, Policy1, Etc.1
  • Cloud2: API2, Policy2, Etc.2
  • Cloud3: API3, Policy3, Etc.3
  • Federated cloud: Federated API, Federated Policy, Federated etc.

  9. Shirako: A Quick Tour

  10. Shirako: A Quick Tour
  • The Shirako toolkit offers a common, extensible resource leasing core.
  • Leases are dynamic and renewable.
  • The leasing abstraction applies to any kind of computing resource that is partitionable as a measured quantity.
  • It is fitted with resource drivers to instantiate and control Xen VMs.
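
To make the leasing abstraction concrete, here is a minimal sketch of a dynamic, renewable lease on a measured resource quantity. The class and method names are illustrative assumptions, not Shirako's actual API.

```python
# A minimal sketch of the leasing abstraction, assuming illustrative names.
import time

class Lease:
    """A dynamic, renewable claim on a measured quantity of one resource type."""

    def __init__(self, resource_type: str, units: int, term_seconds: float):
        self.resource_type = resource_type   # e.g. "xen-vm" or "cpu-share"
        self.units = units                   # the measured quantity leased
        self.expires = time.time() + term_seconds

    def is_active(self) -> bool:
        return time.time() < self.expires

    def renew(self, extra_seconds: float) -> None:
        # Leases are renewable: extend the term instead of re-acquiring.
        self.expires = max(self.expires, time.time()) + extra_seconds

# Lease four VM slivers for an hour, then renew for another hour.
lease = Lease("xen-vm", units=4, term_seconds=3600)
lease.renew(3600)
print(lease.is_active())   # -> True
```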

  11. Shirako: A Quick Tour
  • Shirako is a toolkit for implementing actors, which are servers interacting via a messaging protocol such as SOAP or XML-RPC.

  12. Actors:
  • consumers: request resources according to the projected needs of a hosted service, environment, user, or group.
  • providers: a cluster site, data center, or other resource domain.
  • brokers: resource providers cooperate with brokers to arbitrate requests, e.g., to allocate resources to their highest and best use.

  13. The actors are referred to as:
  • consumers: service managers
  • providers: site authorities
  • brokers: brokers
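
A minimal sketch of how the three roles might interact in a single request/grant round. Real Shirako actors are separate servers exchanging SOAP or XML-RPC messages; the in-process calls and method names below are simplifying assumptions.

```python
# Illustrative in-process stand-ins for the three Shirako actor roles.

class Broker:
    """Arbitrates requests against inventory delegated by provider sites."""
    def __init__(self):
        self.inventory = {}                        # site name -> units available

    def delegate(self, site_name: str, units: int) -> None:
        self.inventory[site_name] = self.inventory.get(site_name, 0) + units

    def provision(self, units: int) -> str:
        # Grant from any site with enough delegated capacity (policy is pluggable).
        for site, available in self.inventory.items():
            if available >= units:
                self.inventory[site] -= units
                return site
        raise RuntimeError("no site can satisfy the request")

class SiteAuthority:
    """A provider: retains control of VM placement on its own cluster."""
    def __init__(self, name: str, units: int):
        self.name, self.units = name, units

class ServiceManager:
    """A consumer: requests resources for a hosted service's projected needs."""
    def __init__(self, broker: Broker):
        self.broker = broker

    def request(self, units: int) -> str:
        return self.broker.provision(units)

site = SiteAuthority("duke-cluster", units=8)
broker = Broker()
broker.delegate(site.name, site.units)     # the site delegates provisioning power
print(ServiceManager(broker).request(4))   # -> duke-cluster
```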

  14. While this paper focuses on resource management, “orchestration” for virtual computing also includes the configuration and control of hosted application services. Service managers include application-specific plug-in code to interact with a guest service or application, monitor its functioning, and respond to events by requesting changes to sliver sizing or to the number of virtual machines or their location.
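
As a rough illustration of such plug-in code, the hypothetical handler below reacts to a monitoring sample by resizing a sliver or changing the VM count. The thresholds and callback names are invented for the example.

```python
# A hypothetical service-manager event handler; thresholds are assumptions.

def on_monitoring_event(avg_cpu_util: float, vm_count: int,
                        resize_sliver, add_vm, remove_vm) -> None:
    """React to one monitoring sample with one orchestration action."""
    if avg_cpu_util > 0.85:
        # Try growing the slivers in place; fall back to adding a VM.
        if not resize_sliver(factor=1.5):
            add_vm()
    elif avg_cpu_util < 0.20 and vm_count > 1:
        remove_vm()

# Example wiring with stub actions:
on_monitoring_event(0.90, vm_count=2,
                    resize_sliver=lambda factor: False,   # site refused the resize
                    add_vm=lambda: print("requesting one more VM"),
                    remove_vm=lambda: print("releasing a VM"))
```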

  15. “Autonomic” Orchestration

  16. Key challenge: developing practical policies for on-demand, adaptive, and reliable allocation of networked computing resources from a common pool.

  17. It is hard because:
  • it is heuristic: it involves projection under uncertainty.
  • it is dynamic: it must adapt to changing workloads.
  • it is organic and emergent: it must balance the needs and interests of multiple independent stakeholders.

  18. Overview of Policy Challenges Shirako defines interfaces for pluggable policy modules for each actor. A key principle is that provider sites control the placement of VMs on their own clusters, but delegate limited provisioning power to brokers. This decoupling offers a general and flexible basis for managing a shared pool with many resource contributors.

  19. Overview of Policy Challenges Example: brokers can coordinate site selection and sliver sizing for virtual machines, while sites control the placement of the hosted VMs on their local servers based on local considerations. These policies may function within a continuous optimizing feedback loop.
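
The decoupling can be pictured as two pluggable interfaces, sketched below: a broker-side provisioning policy over logical inventory and a site-side placement policy over physical hosts. The interface and class names are assumptions, not Shirako's module API.

```python
# A sketch of the provisioning/placement split with assumed interface names.
from abc import ABC, abstractmethod

class ProvisioningPolicy(ABC):
    """Broker-side: choose a site (and sliver size) for each request."""
    @abstractmethod
    def provision(self, units, inventory): ...

class PlacementPolicy(ABC):
    """Site-side: choose a local server for each provisioned VM."""
    @abstractmethod
    def place(self, units, hosts): ...

class FirstFit(ProvisioningPolicy):
    def provision(self, units, inventory):
        return next((s for s, free in inventory.items() if free >= units), None)

class BestFit(PlacementPolicy):
    def place(self, units, hosts):
        fits = {h: free for h, free in hosts.items() if free >= units}
        return min(fits, key=fits.get) if fits else None

# One pass of the loop: the broker picks a site, then that site picks a host.
site = FirstFit().provision(2, {"siteA": 1, "siteB": 6})
host = BestFit().place(2, {"h1": 3, "h2": 2})
print(site, host)   # -> siteB h2
```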

  20. Overview of Policy Challenges: Provisioning How much resource is allocated to each request, and when? The underlying optimization problem is NP-hard. Heuristics and approximations exist, but the choices may be sensitive to the workload.
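
Since exact provisioning embeds bin-packing-style optimization, brokers fall back on approximations. Below is one generic greedy heuristic (largest request first); it is a sketch for illustration, not the policy evaluated in the paper.

```python
# A generic greedy provisioning heuristic over a shared pool (illustrative).

def greedy_provision(requests, capacity):
    """Grant requests largest-first until the pool is exhausted.

    requests: list of (name, units) pairs; capacity: total units in the pool.
    Returns the granted (name, units) pairs.
    """
    granted = []
    for name, units in sorted(requests, key=lambda r: r[1], reverse=True):
        if units <= capacity:
            granted.append((name, units))
            capacity -= units
    return granted

print(greedy_provision([("web", 4), ("batch", 7), ("db", 2)], capacity=10))
# -> [('batch', 7), ('db', 2)]; "web" must wait until capacity frees up
```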

  21. Overview of Policy Challenges: Placement Site-level placement policies determine the mapping of provisioned VMs onto a site's server network. A cluster placement policy might consider several factors:
  • failures and maintenance
  • overbooking
  • thermal and energy management
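
A toy placement policy weighing these factors might look like the following; the host attributes, scoring rule, and numbers are invented for illustration.

```python
# An illustrative site-level placement policy scoring candidate hosts.

def place_vm(vm_units, hosts):
    """hosts: name -> dict(free=units, temp_c=float, maintenance=bool).
    Skip hosts due for maintenance; prefer the coolest host that still fits.
    """
    candidates = [(name, info) for name, info in hosts.items()
                  if info["free"] >= vm_units and not info["maintenance"]]
    if not candidates:
        return None   # the site would have to migrate, overbook, or reject
    return min(candidates, key=lambda c: c[1]["temp_c"])[0]

hosts = {
    "h1": {"free": 2, "temp_c": 41.0, "maintenance": False},
    "h2": {"free": 6, "temp_c": 35.5, "maintenance": False},
    "h3": {"free": 8, "temp_c": 30.0, "maintenance": True},
}
print(place_vm(4, hosts))   # -> h2 (h1 is too full, h3 is down for maintenance)
```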

  22. Provisioning vs. Placement Must work together, but… In practice, the decoupling of provisioning and placement creates a tension between these policies. The decoupling simplifies provisioning and enables brokered federation, but if the site and broker policies work at cross-purposes, it may increase the need for migration.

  23. Brokering and Migration in Shirako

  24. Question • What happens when the broker and the cluster site have competing policies?

  25. Example The cluster site places VMs on its machines following a greedy best-fit policy. The broker has two hosts delegated to it from the site, which it places into its logical inventory, and it follows a room-for-growth policy.

  26. Example continued The conflict between these policies occurs when a guest requests that its VM be resized multiple times: due to its room-for-growth policy, the broker still has space on its inventory host, but the site must eventually migrate the VM to a different physical host to accommodate the growth.
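
The toy simulation below reconstructs this conflict with made-up numbers: the broker's room-for-growth view approves each resize, while the site's best-fit packing runs out of headroom and must migrate.

```python
# A toy reconstruction of the broker/site policy conflict (numbers invented).

HOST_CAPACITY = 8

# Broker inventory: the guest's 4-unit VM sits alone on a logical host.
# Site placement: best-fit packed the same VM next to another 4-unit VM.
other_vm_units = 4

for step in (1, 2):
    resize_to = 4 + 2 * step                         # the guest grows its sliver
    broker_grants = resize_to <= HOST_CAPACITY       # logical host has room
    fits_in_place = other_vm_units + resize_to <= HOST_CAPACITY
    print(f"resize to {resize_to}: broker grants={broker_grants}, "
          f"site fits in place={fits_in_place}"
          + ("" if fits_in_place else "  -> site must migrate the VM"))
```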

  27. Migration Experiment
  • Emulation of two competing applications in a hosting-site deployment of 100 physical machines.
  • Two guests request resources: one based on a one-month trace from a production application service at a major e-commerce website, and the other based on a job trace from a production compute cluster at Duke University.

  28. Migration Experiment
  • Guests request VMs at the granularity of half the size of a physical machine.
  • The cluster site delegates full control to the broker over subdividing its resources into VMs.
  • The broker provisions VMs for guest applications and, in doing so, creates a logical placement of resources within its logical inventory.

  29. Migration Experiment Cluster site placement policies:
  • follow-broker: follows the broker's provisioning decisions when performing its placements.
  • energy-aware: tightly packs VMs onto machines.
  Broker provisioning policies:
  • room-for-growth: loosely allocates VMs to machines.
  • migrating variant: allows broker migration.

  30. Migration Experiment results

  31. Conclusion

  32. The ability to control the resources assigned to a single VM creates a need for an intelligent infrastructure that can adapt to changing demands and conditions by automatically “turning the knobs” of performance, isolation, migration, and sliver resizing.

  33. The Shirako architecture factors resource management into provisioning and placement: provider sites retain control over VM placement, but delegate limited provisioning power to brokers. A cluster site is unlikely to contribute substantial resources to an infrastructure that does not allow it to control placement objectives.

  34. Conclusion Migration is a useful mechanism to resolve conflicts between the separate provisioning and placement policies.
