Selecting the Correct Hypervisor
Boston Virtualization Deep Dive Day 2011
Tim Mackey, XenServer Evangelist
What to Expect Today …
• A balanced representation of each hypervisor
• Where the sweet spots are for each vendor
• No discussion of performance
• No discussion of ROI and TCO
• What you should be thinking about with cloud
The Land Before Time …
• Virtualization meant mainframe/mini
• x86 was “real mode”
• Until 1986, when the 80386DX changed the world
• Now “protected mode” and rings of execution (typically ring 0 and ring 3)
• Real mode OS vs. protected mode
• x86 always boots to real mode (even today)
• The kernel takes over after power-on and enables the protection model
• Early kernels performed poorly in protected mode
• Focus was on application virtualization, not OS virtualization
VMware Creates Mainstream x86 Virtualization
• Early 2001: ESX released as the first type-1 hypervisor for x86
• ESX uses an emulation model known as “binary translation” to trap protected-mode operations and execute them cleanly in the VMkernel
• Heavily tuned over years of experience
• Leverages 80386 protection rings and exception handlers
• Can result in FASTER code execution
Enter Hardware Assist
• 2005–2006: Intel and AMD introduce hardware assist (VT-x and AMD-V)
• Idea was to take non-trappable privileged CPU op codes and isolate them
• Introduced “root” and “non-root” CPU operating modes
• Introduced “Ring -1”
• Binary translation could still be faster
• 2008–2009: Intel and AMD introduce memory assist (EPT and NPT)
• CPU op codes only addressed part of the problem
• Memory paging seen as key to future performance
• Hardware + Moore’s Law > Software + Tuning
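On Linux, both generations of assist show up as CPU feature flags. A minimal sketch for spotting them; the flag names (vmx, svm, ept, npt) are the real /proc/cpuinfo flags, while the helper function itself is illustrative:

```python
def hw_virt_support(cpuinfo_flags):
    """Classify hardware-assist support from a /proc/cpuinfo flags line.

    'vmx' = Intel VT-x, 'svm' = AMD-V (the 2005-2006 CPU assist);
    'ept' / 'npt' mark the 2008-2009 memory assist (nested paging).
    """
    flags = set(cpuinfo_flags.split())
    if "vmx" in flags:
        return "Intel VT-x" + (" + EPT" if "ept" in flags else "")
    if "svm" in flags:
        return "AMD-V" + (" + NPT" if "npt" in flags else "")
    return None  # binary translation or paravirtualization only

# On a real host the flags come from /proc/cpuinfo, e.g.:
#   line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
print(hw_virt_support("fpu vme de pse msr vmx ept"))  # Intel VT-x + EPT
```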
What About IO?
• Shared IO bottlenecks
• VM density magnifies the problem
• Throughput demands impact peer VMs
• Enter SR-IOV in 2010
• Hardware is virtualized in hardware
• A Virtual Function (VF) is presented to the guest
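The sysfs knobs for SR-IOV (`sriov_totalvfs`, `sriov_numvfs`) are the standard Linux interface for creating Virtual Functions; the wrapper below is a sketch, and `sysfs_root` is parameterized only so it can be exercised off a real host:

```python
import os

def enable_sriov_vfs(pf_netdev, num_vfs, sysfs_root="/sys/class/net"):
    """Create num_vfs Virtual Functions on an SR-IOV capable NIC.

    Each VF appears as its own PCI device that can be passed straight
    through to a guest, bypassing the hypervisor's software datapath.
    """
    dev = os.path.join(sysfs_root, pf_netdev, "device")
    with open(os.path.join(dev, "sriov_totalvfs")) as f:
        total = int(f.read())
    if num_vfs > total:
        raise ValueError("NIC supports at most %d VFs" % total)
    # Writing the count to sriov_numvfs instantiates the VFs.
    with open(os.path.join(dev, "sriov_numvfs"), "w") as f:
        f.write(str(num_vfs))
    return num_vfs
```

On a host whose NIC supports SR-IOV, `enable_sriov_vfs("eth0", 4)` would create four VFs (the device name is an assumption; substitute your own).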
vSphere Hypervisor
• ESX
  • VMkernel provides the hypervisor
  • Service console is for management
  • IO is managed through emulated devices
• ESX is EOL; long live ESXi
  • Service console is gone
  • Management via API/CLI
  • VMkernel now includes management, agents, and support consoles
  • Security vastly improved over ESX
XenServer
• Based on open source Xen
• Requires hardware assist
• Management through a Linux control domain (dom0)
• IO managed using split drivers
Hyper-V
• Requires hardware assist
• Management through a Windows 2008 “parent partition”
• VMs run as child partitions
• Linux enabled using “Xenified” kernels
• IO is managed through the parent partition and enlightened drivers
KVM
• Requires hardware assist
• KVM modules are part of the Linux kernel
• Converts Linux into a type-1 hypervisor
• Each VM is a process
• Defined as “guest mode”
• IO managed via Linux and VirtIO
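Because a KVM guest is just a process, launching one is just exec'ing QEMU. The flags below are real QEMU options; the helper and its defaults are illustrative:

```python
def kvm_cmdline(disk_image, mem_mb=1024, vcpus=2):
    """Build a QEMU/KVM command line; the resulting VM is an ordinary
    Linux process visible in `ps`.  '-enable-kvm' uses the kernel's KVM
    modules via /dev/kvm; the virtio options use the paravirtual VirtIO
    drivers mentioned above."""
    return [
        "qemu-system-x86_64",
        "-enable-kvm",                    # hardware-assisted, not TCG emulation
        "-m", str(mem_mb),                # guest RAM in MB
        "-smp", str(vcpus),               # virtual CPUs
        "-drive", "file=%s,if=virtio" % disk_image,  # VirtIO block device
        "-netdev", "user,id=n0",
        "-device", "virtio-net-pci,netdev=n0",       # VirtIO NIC
    ]

# subprocess.Popen(kvm_cmdline("guest.qcow2")) would start the guest.
```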
Maximizing Your Budget
• A single-hypervisor model is flawed
• Wasted dollars, wasted performance
• Spend your resources where you need to:
  • OS compatibility
  • VM density
  • IO performance
  • Application support models
  • Application availability
Memory Over Commit
• Objective: Increase VM density and use host RAM efficiently
• Risks: Performance and security
• Options: Ballooning, page sharing, compression, swap
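As a toy illustration of the density/performance trade-off (a sketch of the ballooning idea, not any vendor's actual algorithm), over-commit can be modeled as scaling each VM between a configured minimum and maximum when the host runs short:

```python
def balloon_targets(host_ram, vms):
    """Compute per-VM memory targets (MB) on an over-committed host.

    vms: dict of name -> (min_mb, max_mb).  If the sum of maxima fits in
    host_ram, every VM gets its maximum; otherwise the surplus above each
    VM's minimum is shrunk proportionally (balloon drivers in the guests
    would then reclaim the difference).
    """
    total_max = sum(mx for _, mx in vms.values())
    if total_max <= host_ram:
        return {name: mx for name, (_, mx) in vms.items()}
    total_min = sum(mn for mn, _ in vms.values())
    if total_min > host_ram:
        raise ValueError("host RAM below the sum of VM minima")
    # Fraction of each VM's flexible range (max - min) we can still grant.
    scale = (host_ram - total_min) / float(total_max - total_min)
    return {name: int(mn + (mx - mn) * scale)
            for name, (mn, mx) in vms.items()}
```

For example, three VMs configured 512–2048 MB on a 4096 MB host each land at 1365 MB instead of their 2048 MB maximum.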
Load Balancing
• Objective: Ensure optimal performance of guests and hosts
• Risks: Performance and security
• Options: Input metrics, reporting, variable usage models
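One simple input-metric model (again a sketch, not a description of any product's scheduler): place a new VM on whichever host would end up least utilized, skipping hosts it would over-commit:

```python
def place_vm(vm_demand, hosts):
    """Pick a host for a new VM by projected utilization.

    hosts: dict of name -> (current_load, capacity) in consistent units
    (e.g. MHz of CPU or MB of RAM).  Returns the host whose utilization
    after placement is lowest, or None if the VM fits nowhere.
    """
    best, best_util = None, None
    for name, (load, cap) in hosts.items():
        if load + vm_demand > cap:
            continue  # would over-commit this host
        util = (load + vm_demand) / float(cap)
        if best_util is None or util < best_util:
            best, best_util = name, util
    return best
```

A real balancer would weigh several metrics (CPU, RAM, IO) and rebalance continuously; this shows only the core selection step.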
Virtual Networking
• Objective: Support data center and cloud networking
• Risks: Data leakage and performance
• Requirement: Make server virtualization compatible with networking
VMware vSphere 4.1
Key play: Legacy server virtualization
• Broad operating system support
• Large ecosystem => experienced talent readily available
Bonus opportunities
• Feature-rich data center requirements
• Cloud consolidation through Cisco Nexus 1000V
Weaknesses
• Complex licensing model
• Reliance on SQL Server management database
Microsoft Hyper-V R2 SP1
Key play: Desktop virtualization
• VM density is key
• Memory over-commit + deep understanding of Windows 7 => success
Bonus opportunities
• Microsoft server software
• Ease of management for System Center customers
Weaknesses
• Complex desktop virtualization licensing model
• Complex setup at scale
• “Patch Tuesday” reputation
Red Hat KVM
Key play: Linux virtualization
• RHEL data centers
Weaknesses
• Limited enterprise-level feature set
• Niche deployments and early-adopter syndrome
• Support-only model may limit feature set
Oracle VM
Key play: Hosted Oracle applications
• Oracle only supports its products on OVM
Bonus opportunities
• Server virtualization
• Applications requiring application-level high availability
• Data centers requiring secure VM motion
Weaknesses
• Limited penetration outside of the Oracle application suite
• Support-only model may limit future development
Citrix XenServer 5.6 FP1
Key play: Cloud platforms
• Largest public cloud deployments
Bonus opportunities
• Citrix infrastructure
• Linux data centers
• General-purpose virtualization
• Windows XP/Vista desktop virtualization
Weaknesses
• Application support statements
• HCL gaps
Hybrid Cloud
• Traditional Datacenter: on premise • high fixed cost • full control • known security
• Public Cloud: off premise • low utility cost • self-service • fully elastic
• Hybrid Cloud: on/off premise • low utility cost • self-service • fully elastic • trusted security • corporate control
Transparency is a Key Requirement
• Traditional Datacenter: on premise • high fixed cost • full control • known security
• Public Cloud: off premise • low utility cost • self-service • fully elastic
• Hybrid Cloud: on/off premise • low utility cost • self-service • fully elastic • trusted security • corporate control
• Issues: disparate networks • disjoint user experience • unpredictable SLAs • different locations
Enabling Transparency Enables Hybrid Cloud
Cloud Provider <-> Traditional Datacenter (BRIDGE / ACCESS)
OpenCloud Bridge provides:
• Network transparency for disparate networks
• Latency transparency to preserve the same user experience
• Services transparency to make SLAs predictable
• Location transparency to allow anywhere access
OpenCloud Bridge Use-Case
[Diagram: a premise datacenter and a cloud, each running a hypervisor with private and public vSwitches, storage, and switches, bridged by NetScaler VPX instances. A cloud VM (IP 192.168.1.100, subnet 255.255.254.0) requires DB, Web, and LDAP services; the DB server and LDAP sit on the premise network 10.2.1.0, subnet 255.255.254.0.]
Shameless XenServer Plug
• Social media
  • Twitter: @XenServerArmy
  • Facebook: http://www.facebook.com/CitrixXenServer
  • LinkedIn: http://www.linkedin.com/groups?mostPopular=&gid=3231138
• Major events
  • XenServer Master Class: next edition March 23rd
  • Citrix Synergy: San Francisco, May 25–27, 2011 (http://citrixsynergy.com)