
Virtual Machine Appliances for Ad-hoc, Opportunistic Grids


Presentation Transcript


  1. Virtual Machine Appliances for Ad-hoc, Opportunistic Grids
  Renato Figueiredo, ACIS Laboratory, University of Florida

  2. Overview
  • Goal: plug-and-play, easy-to-install software for opportunistic computing
  • Use cases:
    • Desktop campus grids
    • Ad-hoc lab clusters
    • Pooling across multiple domains
  • Technologies:
    • Virtual machines (VMs: Xen, VMware)
    • Virtual networks (à la VPN)
    • Batch schedulers (Condor)

  3. Virtual machines
  • WOWs (Wide-area Overlays of virtual Workstations):
    • Wide-area; virtual machines, deployed from a VM image
    • Self-organizing overlay: IP tunnels, P2P routing
  • NOWs, COWs (Networks/Clusters of Workstations):
    • Local-area; physical machines, deployed from an installation image
    • Self-organizing switching over a switched network (e.g. Ethernet spanning tree)

  4. SURAgrid context
  • This can be a vehicle for dissemination of Grid middleware to institutions
    • E.g. facilitate the deployment of desktop campus grids
  • Can pool resources together across multiple sites through Condor flocking
  • Complementary to the existing SURAgrid setup of more traditional cluster infrastructures

  5. 1) System Virtual Machines
  • Virtualization of instruction sets (ISAs)
    • Language-independent, binary-compatible (unlike the JVM)
    • VMware, Microsoft, Xen, Parallels, … Intel VT, AMD Pacifica
  • ISA + OS + libraries + software = execution environment
  • Time-share multiple OSes
  • Near-native performance for CPU-intensive workloads

  6. Networking VMs
  • System VM isolates user from host
    • Great! Now how do I access it?
  • Users: want full TCP/IP connectivity
    • Facilitates programming and deployment
    • But cross-domain communication is subject to NAT, firewall policies
  • Providers: want to isolate traffic
    • Users with admin privileges inside a VM still pose security problems: viruses, DoS

  7. 2) Virtual networking
  • Isolation: dealt with similarly to VMs
    • Multiple, isolated virtual networks time-share the physical network
  • Key technique: tunneling
    • Similar to VPNs
    • Our approach: peer-to-peer network tunneling
  • Virtual network should be self-configured
    • Avoid administrative overhead of VPNs
    • Including cross-domain NAT traversal
  • Virtual network should be isolated
    • Virtual private address space decoupled from Internet address space
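The core tunneling idea on this slide – carrying packets addressed in a private virtual address space inside ordinary UDP datagrams on the physical network – can be sketched in a few lines. This is a minimal illustration, not the actual IPOP/WOW implementation: the virtual address, port number, and 4-byte header format below are invented for the example, and a real overlay adds P2P routing and NAT traversal on top.

```python
# Minimal sketch of IP-over-UDP tunneling (illustrative only; the
# virtual address 10.128.0.2, port 15151, and header layout are
# hypothetical, not taken from IPOP).
import socket

TUNNEL_PORT = 15151  # hypothetical UDP port carrying tunnelled traffic


def encapsulate(virtual_dst: str, payload: bytes) -> bytes:
    """Prefix the payload with its 4-byte virtual destination address."""
    return socket.inet_aton(virtual_dst) + payload


def decapsulate(datagram: bytes) -> tuple:
    """Split a tunnelled datagram back into (virtual address, payload)."""
    return socket.inet_ntoa(datagram[:4]), datagram[4:]


# One "overlay node" listening on the physical network (loopback here).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", TUNNEL_PORT))

# Another node tunnels a virtual-network packet to it over UDP.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encapsulate("10.128.0.2", b"hello from the WOW"),
          ("127.0.0.1", TUNNEL_PORT))

vaddr, payload = decapsulate(rx.recvfrom(2048)[0])
print(vaddr, payload.decode())  # 10.128.0.2 hello from the WOW
rx.close()
tx.close()
```

Because the virtual address travels inside the datagram, it is fully decoupled from the Internet addresses of the endpoints – which is exactly the isolation property the slide calls for.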

  8. Example – physical machines
  • Wide-area Overlay of virtual Workstations (WOW): 34 compute nodes, 118-node PlanetLab P2P routers
  • Hosts include:
    • 2.4GHz Xeon, Linux 2.4.20, VMware GSX
    • 1.3GHz P-III, Linux 2.4.21, VMware Player
    • 1.7GHz P4, Windows XP SP2, VMware Player

  9. Example: virtual view
  • Looks like a cluster: heterogeneous hardware, but homogeneous software
  • PBS scheduler and NFS server on the head node; 32 worker nodes
  • Workload: 4000 jobs, submitted at 1 job/second

  10. 3) Condor
  • High-throughput computing scheduler
    • University of Wisconsin – circa 1988
    • Fault-tolerant, scalable, flexible
    • Pools from a few machines to 1000s exist today
  • Standard and vanilla universes
    • Standard – Condor-linked: checkpointing & migration
    • Vanilla – unmodified applications
  • Great for long-running, parameter-sweep sequential jobs
    • Easy to submit a large number of jobs from a single command
  • Flocking
    • Enables jobs submitted to a local pool that is 100% utilized to “flock” to remote pools
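The "large # of jobs from a single command" point can be made concrete with a vanilla-universe submit description file. The file name and program below are hypothetical, not from the talk; the keywords (`universe`, `executable`, `queue`, the `$(Process)` macro) are standard Condor submit-file syntax.

```
# sweep.submit -- hypothetical parameter-sweep job (program name is
# illustrative). Submit with: condor_submit sweep.submit
universe   = vanilla
executable = simulate
arguments  = $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = sweep.log
queue 100
```

A single `condor_submit sweep.submit` then queues 100 instances, each receiving its index (0–99) via `$(Process)`.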

  11. Use case: putting things together
  • A VM “appliance” for Condor-based opportunistic computing
  • We created and tailored the appliance
    • Takes expertise and time – e.g. a Condor appliance that self-configures ad-hoc pools with flocking
    • Can be complemented with well-packaged examples and documentation so users get started quickly
  • Users download and boot the VM
    • This takes zero configuration
    • The VM acquires a virtual IP address and becomes routable on the overlay, becoming a resource
    • E.g. allowing Condor jobs to run and to be submitted
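The flocking behaviour that the appliance self-configures comes down to a pair of Condor configuration settings along these lines (the hostnames are hypothetical placeholders; in the appliance these values are set up automatically):

```
# condor_config on the submitting pool: where to flock when local
# resources are exhausted (hostname is illustrative)
FLOCK_TO = central-manager.remote.example.edu

# condor_config on the remote pool: which pools may flock in
FLOCK_FROM = central-manager.local.example.edu
```

Hiding exactly this kind of cross-site configuration is what makes the appliance "zero configuration" for the end user.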

  12. Demonstration
  • You can try out this software on any x86-based Windows or Linux PC
    • http://www.acis.ufl.edu/~ipop/grid_appliance
  • Follow the README file:
    • You should be able to install VMware and boot up the appliance in 15-30 minutes
    • And run a demo Condor job on an ad-hoc pool with machines at UF right after boot-up

  13. Demo screenshot
  [Screenshot: Debian VM running under VMware on a Windows XP host, with Condor pool access]

  14. For further information
  • http://wow.acis.ufl.edu – code, papers
  • http://www.cs.wisc.edu/condor – plentiful info on Condor
  • http://www.vmware.com – free VMs for x86 Linux/Windows (Player, Server)
