
Selecting the Correct Hypervisor


Presentation Transcript


  1. Selecting the Correct Hypervisor Boston Virtualization Deep Dive Day 2011 Tim Mackey XenServer Evangelist

  2. What to Expect Today … • Balanced representation of each hypervisor • Where the sweet spots are for each vendor • No discussion of performance • No discussion of ROI and TCO • What you should be thinking about with cloud

  3. The Land Before Time … • Virtualization meant mainframe/mini • x86 was “real mode” • Until 1986 and the 80386DX changed the world • Now “protected mode” and rings of execution (typically ring 0 and ring 3) • Real mode OS vs. protected mode OS • x86 always boots to real mode (even today) • The kernel takes over after power-on and enables the protection model • Early kernels performed poorly in protected mode • Focus was on application virtualization, not OS virtualization

  4. VMware Creates Mainstream x86 Virtualization • Early 2001: ESX released as the first type-1 hypervisor for x86 • ESX uses an emulation model known as “binary translation” to trap privileged protected-mode operations and execute them cleanly in the VMkernel • Heavily tuned over years of experience • Leverages 80386 protection rings and exception handlers • Can result in FASTER code execution

  5. Enter Hardware Assist • 2005-2006: Intel and AMD introduce hardware assist (Intel VT-x, AMD-V) • Idea was to take non-trappable privileged CPU opcodes and isolate them • Introduced “user mode” and “kernel mode” • Introduced “Ring -1” • Binary translation could still be faster • 2008-2009: Intel and AMD introduce memory assist (EPT/NPT) • CPU opcode assist only addressed part of the problem • Memory paging seen as key to future performance • Hardware + Moore’s Law > Software + Tuning
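To see whether a given host actually exposes this hardware assist, here is a minimal sketch (assuming a Linux host, where the CPU feature flags are published in /proc/cpuinfo; the flag names below are the standard kernel ones, not part of the original deck):

```python
# Minimal sketch: inspect /proc/cpuinfo on a Linux host for hardware-assist flags.
# Intel VT-x reports "vmx" and EPT memory assist reports "ept";
# AMD-V reports "svm" and NPT memory assist reports "npt" (where the kernel exposes them).
def virtualization_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {
                    "vt-x": "vmx" in flags,
                    "amd-v": "svm" in flags,
                    "ept": "ept" in flags,
                    "npt": "npt" in flags,
                }
    return {}

if __name__ == "__main__":
    print(virtualization_flags())
```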

  6. What About IO? • Shared IO bottlenecks • VM density magnifies the problem • Throughput demands impact peer VMs • Enter SR-IOV in 2010 • Hardware is virtualized in hardware • A Virtual Function (VF) is presented to the guest
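As a quick illustration of the "hardware virtualized in hardware" model, the sketch below (a Linux-only assumption; device names will vary) reads the sriov_totalvfs and sriov_numvfs attributes that sysfs exposes for SR-IOV-capable NICs, showing how many Virtual Functions a card offers and how many are currently enabled:

```python
# Minimal sketch (Linux-only assumption): enumerate network interfaces whose
# PCI device advertises SR-IOV by reading the sriov_totalvfs / sriov_numvfs
# attributes exposed in sysfs.
import glob
import os

def sriov_capable_nics():
    results = {}
    for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
        nic = path.split("/")[4]  # interface name, e.g. "eth0"
        with open(path) as f:
            total = int(f.read().strip())
        with open(os.path.join(os.path.dirname(path), "sriov_numvfs")) as f:
            enabled = int(f.read().strip())
        results[nic] = {"total_vfs": total, "enabled_vfs": enabled}
    return results

if __name__ == "__main__":
    print(sriov_capable_nics())  # empty dict if no SR-IOV capable NIC is present
```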

  7. The Core Architectures

  8. vSphere Hypervisor • ESX • VMkernel provides the hypervisor • Service console is for management • IO is managed through emulated devices • ESX is EOL; long live ESXi • Service console is gone • Management via API/CLI • VMkernel now includes management, agents and support consoles • Security vastly improved over ESX
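With the service console gone, day-to-day work moves to the vSphere API. A minimal sketch, assuming the pyVmomi Python bindings and placeholder host name and credentials (none of which come from the original deck), that connects to an ESXi or vCenter endpoint and lists registered VMs:

```python
# Minimal sketch, assuming pyVmomi is installed and the host/credentials
# below are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Build a container view over every VirtualMachine in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```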

  9. XenServer • Based on Open Source Xen • Requires hardware assist • Management through Linux control domain (dom0) • IO managed using split drivers
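Management through dom0 is exposed programmatically by the xapi toolstack. A minimal sketch, assuming the XenAPI Python bindings from the XenServer SDK and placeholder host name and credentials, that logs in and lists guests while skipping templates and the control domain itself:

```python
# Minimal sketch, assuming the XenAPI Python bindings; host and credentials
# are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    for ref, rec in session.xenapi.VM.get_all_records().items():
        # Skip VM templates and dom0 (the control domain) itself.
        if not rec["is_a_template"] and not rec["is_control_domain"]:
            print(rec["name_label"], rec["power_state"])
finally:
    session.xenapi.session.logout()
```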

  10. Hyper-V • Requires hardware assist • Management through Windows 2008 “Parent partition” • VMs run as child partitions • Linux enabled using “Xenified” kernels • IO is managed through parent partition and enlightened drivers
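Child partitions are normally managed from the parent partition. A minimal sketch, assuming a parent partition where the Hyper-V PowerShell module and its Get-VM cmdlet are available (the module ships in-box with later Windows Server releases), that shells out to PowerShell from Python to list child partitions and their state:

```python
# Minimal sketch: run from the Hyper-V parent partition, assuming the
# Hyper-V PowerShell module is installed so Get-VM is available.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-VM | Select-Object Name, State | Format-Table -AutoSize"],
    capture_output=True, text=True, check=True)
print(result.stdout)
```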

  11. KVM • Requires hardware assist • KVM modules are part of the Linux kernel • Converts the Linux kernel into a type-1 hypervisor • Each VM is a process • Runs in “guest mode” • IO managed via Linux and VirtIO
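Because each KVM guest is an ordinary Linux process, the usual management path is libvirt. A minimal sketch, assuming the libvirt Python bindings and a local qemu:///system connection, that lists guests; each guest printed here also shows up as a qemu process in ps, which is the "every VM is a process" point above:

```python
# Minimal sketch, assuming libvirt-python and a local KVM host reachable
# at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
finally:
    conn.close()
```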

  12. Commercial Free Contenders for Your Budget

  13. VMware vSphere Hypervisor (ESXi)

  14. Microsoft Hyper-V Server R2 SP1

  15. Red Hat Enterprise Virtualization (KVM)

  16. Oracle VM

  17. Citrix XenServer

  18. Hypervisor is now a commodity!!

  19. Maximizing Your Budget • A single-hypervisor model is flawed • Wasted dollars, wasted performance • Spend your resources where you need to • OS compatibility • VM density • IO performance • Application support models • Application availability

  20. Deconstructing Key Functionality

  21. Memory Over Commit • Objective: Increase VM density and efficiently use host RAM • Risks: Performance and Security • Options: Ballooning, Page sharing, Compression, Swap
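As a concrete illustration of the ballooning option, here is a minimal sketch (assuming the libvirt Python bindings, a guest with a working balloon driver, and a placeholder guest name; other hypervisors expose the same idea through their own tooling) that reads the balloon statistics and lowers the guest's memory target:

```python
# Minimal sketch of ballooning via libvirt; "example-guest" is a placeholder
# name and the guest must have a balloon driver for the target to take effect.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-guest")

print(dom.memoryStats())                 # current balloon/actual figures, in KiB
target_kib = 1024 * 1024                 # e.g. balloon the guest down to 1 GiB
dom.setMemoryFlags(target_kib, libvirt.VIR_DOMAIN_MEM_LIVE)
conn.close()
```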

  22. Load Balancing • Objective: Ensure optimal performance of guests and hosts • Risks: Performance and Security • Options: Input metrics, reporting, variable usage models

  23. Virtual Networking • Objective: Support data center and cloud networking • Risks: Data leakage and performance • Requirement: Make server virtualization compatible with networking

  24. The Sweet Spots

  25. VMware vSphere 4.1 Key play: Legacy server virtualization • Large operating system support • Large ecosystem => experienced talent readily available Bonus opportunities • Feature-rich data center requirements • Cloud consolidation through Cisco Nexus 1000V Weaknesses • Complex licensing model • Reliance on SQL Server management database

  26. Microsoft Hyper-V R2 SP1 Key play: Desktop virtualization • VM density is key • Memory over commit + deep understanding of Windows 7 => success Bonus opportunities • Microsoft Server software • Ease of management for System Center customers Weaknesses • Complex desktop virtualization licensing model • Complex setup at scale • “Patch Tuesday” reputation

  27. Red Hat KVM Key play: Linux virtualization • RHEL data centers Weaknesses • Limited enterprise-level feature set • Niche deployments and early adopter syndrome • Support-only model may limit feature set

  28. Oracle VM Key play: Hosted Oracle Applications • Oracle only supports its products on OVM Bonus opportunities • Server virtualization • Applications requiring application-level high availability • Data centers requiring secure VM motion Weaknesses • Limited penetration outside of the Oracle application suite • Support-only model may limit future development

  29. Citrix XenServer 5.6 FP1 Key play: Cloud platforms • Largest public cloud deployments Bonus opportunities • Citrix infrastructure • Linux data centers • General purpose virtualization • Windows XP/Vista desktop virtualization Weaknesses • Application support statements • HCL gaps

  30. Beyond the Data Center and into the Cloud

  31. Hybrid Cloud • Public Cloud: off premise, low utility cost, self-service, fully elastic • Traditional Datacenter: on premise, high fixed cost, full control, known security • Hybrid Cloud: on/off premise, low utility cost, self-service, fully elastic, trusted security, corporate control

  32. Transparency is a Key Requirement • Bridging the traditional datacenter and the public cloud into a hybrid cloud raises issues: • Disparate networks • Disjoint user experience • Unpredictable SLAs • Different locations

  33. Enabling Transparency Enables Hybrid Cloud (diagram: OpenCloudBridge linking the traditional datacenter to a cloud provider via BRIDGE and ACCESS connections) • Network transparency for disparate networks • Latency transparency to preserve the same user experience • Services transparency to make SLAs predictable • Location transparency to allow anywhere access

  34. OpenCloud Bridge Use-Case (network diagram: NetScaler VPX appliances bridge a premise datacenter to a cloud; labels include hypervisors with vSwitches, private and public switches, storage, a DB server and LDAP on network 10.2.1.0 / subnet 255.255.254.0, and a VM at IP 192.168.1.100 / subnet 255.255.254.0 with requirements for DB, Web and LDAP)

  35. It’s Your Budget … Spend it Wisely

  36. Shameless XenServer Plug • Social Media • Twitter: @XenServerArmy • Facebook: http://www.facebook.com/CitrixXenServer • LinkedIn: http://www.linkedin.com/groups?mostPopular=&gid=3231138 • Major Events • XenServer Master Class – March 23rd next edition • Citrix Synergy – San Francisco May 25-27 2011 (http://citrixsynergy.com)
