
Operating System Support for Virtual Machines


Presentation Transcript


  1. Operating System Support for Virtual Machines. Samuel King, George Dunlap, Peter Chen (University of Michigan). Presented by Ashish Gupta

  2. Two classifications for VMs. 1: Higher-level interface. The interface a VMM exports sits on a spectrum relative to real hardware: VMware with guest tools, VAX VMM security kernel; VM/370, VMware; UMLinux, SimOS, Xen; Denali; µ-kernels; JVM

  3. Two classifications for VMs (convenience vs. performance). 2: Underlying platform. Type I (runs directly on hardware): VM/370, VMware ESX, Disco, Denali, Xen. Type II (runs on a host OS): VMware Workstation, VirtualPC, SimOS, UMLinux

  4. UMLinux • Higher-level interface is slightly different from the hardware • The guest OS needs to be modified: • simple device drivers added • emulation of certain instructions (iret and in/out) • kernel re-linked to a different address • about 17,000 lines of changes • ptrace-based virtualization (sketched below): • intercepts guest system calls • tracks guest kernel/user transitions
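The slide does not show how the ptrace-based interception works. Below is a minimal sketch of the mechanism, assuming Linux on x86-64 (where orig_rax holds the syscall number); it is illustrative, not UMLinux's actual code. The VMM process traces the guest machine process and regains control at every system call, which it could then redirect into the guest kernel.

```c
/* Sketch: a tracer ("VMM") intercepting a child's ("guest machine
 * process") system calls via ptrace. Linux/x86-64 assumed; error
 * handling elided. */
#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(void)
{
    pid_t guest = fork();
    if (guest == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the VMM trace us */
        execl("/bin/echo", "echo", "guest", NULL);
        _exit(1);
    }

    int status;
    waitpid(guest, &status, 0);                    /* initial exec stop */
    while (!WIFEXITED(status)) {
        ptrace(PTRACE_SYSCALL, guest, NULL, NULL); /* run to next syscall */
        waitpid(guest, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, guest, NULL, &regs);
            /* Reported once at entry and once at exit of each call;
             * here UMLinux would redirect the call to the guest kernel. */
            fprintf(stderr, "guest syscall %lld\n", (long long)regs.orig_rax);
        }
    }
    return 0;
}
```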

  5. Advantage of a Type II VM: each piece of virtual hardware maps onto an existing host abstraction (a minimal mapping is sketched below): • virtual CPU → guest machine process • virtual I/O devices → host files and devices • virtual interrupts → host signals • virtual MMU → mmap/munmap
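As a concrete illustration of the last row of that mapping, here is a minimal sketch (sizes and names invented) of a host process backing a guest's "physical" memory with mmap, so that guest MMU operations become host mmap/munmap/mprotect calls:

```c
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_PHYS_SIZE (64 * 1024 * 1024)   /* invented: 64 MB guest "RAM" */

int main(void)
{
    /* The guest's physical memory is just a region of the host process. */
    void *guest_ram = mmap(NULL, GUEST_PHYS_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* Virtual-MMU operations become host calls: protecting a guest page
     * is an mprotect, unmapping it is an munmap. */
    mprotect(guest_ram, 4096, PROT_NONE);
    printf("guest RAM at %p\n", guest_ram);
    munmap(guest_ram, GUEST_PHYS_SIZE);
    return 0;
}
```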

  6. The problem

  7. Compiling the Linux kernel, with +510 lines added to the host OS [benchmark figure]

  8. Compiling the Linux kernel, with +510 lines added to the host OS [benchmark figure, continued]

  9. Optimization One: System Calls

  10. Lots of context switches between the VMM and the guest machine process

  11. Use the VMM as a kernel module (a minimal module skeleton is sketched below). This requires modifications to the host OS as well…
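For reference, this is roughly what the in-kernel half looks like as a loadable Linux module skeleton; the names are hypothetical and everything VMM-specific is omitted. Moving the VMM into the kernel eliminates the user-level VMM ↔ guest context switches, but any extra hooks it needs count as host OS modifications.

```c
/* Minimal skeleton (hypothetical, not the paper's code) of a loadable
 * Linux module hosting the VMM. */
#include <linux/module.h>
#include <linux/init.h>

static int __init vmm_init(void)
{
    pr_info("vmm: in-kernel VMM loaded\n");
    return 0;
}

static void __exit vmm_exit(void)
{
    pr_info("vmm: unloaded\n");
}

module_init(vmm_init);
module_exit(vmm_exit);
MODULE_LICENSE("GPL");
```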

  12. ?

  13. Optimization Two: Memory Protection

  14. Frequent switching between the guest kernel and guest applications

  15. Guest Kernel to Guest User

  16. Guest User to Guest Kernel: done through mmap, munmap, and mprotect. Very expensive… (see the sketch below)
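A minimal sketch of why this path is expensive, with an invented region size: every guest kernel ↔ guest user transition pays at least one host system call to change page protections.

```c
/* Sketch: toggling guest-kernel visibility costs one mprotect system
 * call per transition. Region size is made up for illustration. */
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_KERNEL_SIZE (8 * 1024 * 1024)

static void *guest_kernel;             /* host backing for the guest kernel */

static void enter_guest_kernel(void)   /* expose the guest kernel */
{
    mprotect(guest_kernel, GUEST_KERNEL_SIZE, PROT_READ | PROT_WRITE);
}

static void enter_guest_user(void)     /* hide it from guest user code */
{
    mprotect(guest_kernel, GUEST_KERNEL_SIZE, PROT_NONE);
}

int main(void)
{
    guest_kernel = mmap(NULL, GUEST_KERNEL_SIZE, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_kernel == MAP_FAILED) return 1;

    /* Every guest system call or interrupt makes this round trip, so the
     * pair of mprotects below runs constantly in real workloads. */
    for (int i = 0; i < 1000; i++) {
        enter_guest_kernel();
        enter_guest_user();
    }
    puts("done");
    return 0;
}
```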

  17. Host Linux Memory Management • x86 paging provides built-in protection for memory pages • Linux uses page tables for translation and protection • Segments are used only to switch between privilege levels • The supervisor bit disallows ring 3 from accessing certain pages • The idea: the segment-bounds features are relatively unused

  18. Solution: change the segment bounds for each mode (a sketch follows)
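One way to realize this on Linux is the modify_ldt system call, which lets a process install segment descriptors with chosen limits. The sketch below assumes 32-bit x86 (segment limits are not enforced for 64-bit code) and invents the descriptor slot; it illustrates the mechanism, not the paper's implementation.

```c
/* Sketch: shrink the segment limit while guest user code runs so it
 * cannot reach the guest kernel's memory; restore the full limit on
 * entry to the guest kernel. 32-bit x86 assumed. */
#include <asm/ldt.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

static int set_guest_segment_limit(unsigned int limit_pages)
{
    struct user_desc d;
    memset(&d, 0, sizeof d);
    d.entry_number   = 0;            /* invented LDT slot for guest DS/ES */
    d.base_addr      = 0;
    d.limit          = limit_pages;  /* the segment bound, in pages */
    d.limit_in_pages = 1;
    d.seg_32bit      = 1;
    d.useable        = 1;
    /* modify_ldt(1, ...) installs the descriptor: one cheap call instead
     * of a string of mmap/munmap/mprotect calls per transition. */
    return syscall(SYS_modify_ldt, 1, &d, sizeof d);
}
```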

  19. Optimization Three: Context Switching

  20. The problem with context switching: • have to remap each user process's virtual memory to the "virtual" physical memory • generates a large number of mmaps → costly. The solution: • allow one process to maintain multiple address spaces • each address space gets its own set of page tables • a new system call, switchguest, is invoked whenever the guest context-switches (see the sketch below)
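A sketch of how the new call might be used from the modified guest; the syscall number and signature are invented for illustration, since the paper only specifies that the host gains a call for switching among the address spaces owned by one process.

```c
/* Hypothetical wrapper for the proposed switchguest host system call. */
#include <unistd.h>
#include <sys/syscall.h>

#define SYS_switchguest 451   /* made-up syscall number */

/* Each guest process gets its own host address space (its own set of
 * host page tables); a guest context switch becomes one cheap syscall
 * instead of a long series of mmap/munmap calls. */
long switchguest(int address_space_id)
{
    return syscall(SYS_switchguest, address_space_id);
}
```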

  21. Multiple page table sets [figure: guest processes a and b each have their own set of host page tables; on a guest context switch, the guest OS issues the switchguest syscall and the host operating system repoints the page table pointer]

  22. Conclusion • A Type II VMM CAN be as fast as a Type I VMM, by modifying the host OS • Is the title of the paper justified?

  23. Virtualizing I/O Devices on VMware Workstation's Hosted VMM. Jeremy Sugerman, Ganesh Venkitachalam and Beng-Hong Lim, VMware, Inc.

  24. Introduction • VM definition from IBM: a "virtual machine" is a fully protected and isolated copy of the underlying physical machine's hardware • The case for the hosted architecture: it relies on the host OS for device support • Primary advantages: • copes with the diversity of PC hardware • compatible with pre-existing PC software • near-native performance for CPU-intensive workloads

  25. The major tradeoff: I/O performance degradation • I/O emulation is done in the host world • switching between the host world and the VMM world is expensive

  26. How I/O works [figure: an I/O request from the guest traps into the VMM (CPU virtualization); the VMM switches worlds to the VMApp, whose application portion calls the privileged portion (the VM driver) in the host OS to perform the hardware I/O; on the h/w interrupt, the interrupt is reasserted to the guest (I/O virtualization)]

  27. I/O Virtualization • The VMM intercepts all I/O operations, usually privileged IN and OUT instructions • Emulated either in the VMM or in the VMApp (dispatch sketched below) • Host OS drivers understand the semantics of port I/O; the VMM doesn't • Physical hardware I/O must therefore be handled in the host OS • Lots of overhead from world switching • Which devices are affected? • The CPU gets saturated before the I/O devices do…
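A sketch of that dispatch decision, with invented names and port numbers: writes that only update virtual-device state stay in the VMM, while anything that needs real hardware pays a world switch to the VMApp.

```c
/* Sketch of the VMM's OUT dispatch; names and ranges are made up. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool vmm_can_emulate(uint16_t port)
{
    /* e.g. virtual-NIC register ports whose writes only touch state */
    return port >= 0x1000 && port < 0x1020;
}

static void vmm_emulate_out(uint16_t port, uint32_t v)
{
    printf("VMM: emulate OUT 0x%x <- 0x%x (no world switch)\n", port, v);
}

static void world_switch_to_vmapp(uint16_t port, uint32_t v)
{
    printf("VMApp: host performs OUT 0x%x <- 0x%x (world switch)\n", port, v);
}

static void handle_guest_out(uint16_t port, uint32_t value)
{
    if (vmm_can_emulate(port))
        vmm_emulate_out(port, value);        /* cheap path */
    else
        world_switch_to_vmapp(port, value);  /* expensive path */
}

int main(void)
{
    handle_guest_out(0x1010, 1);   /* stays in the VMM */
    handle_guest_out(0x2000, 1);   /* needs the host */
    return 0;
}
```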

  28. The goal of this paper [figure: on native hardware, I/O saturates before the CPU; in the VM, the CPU saturates first, and the goal is to restore the native balance]

  29. The Network Card • The virtual NIC appears as a full-fledged PCI Ethernet controller with its own MAC address • Connectivity is implemented by a VMNet driver loaded into the host OS • The virtual NIC is a combination of code in the VMM and the VMApp • Virtual I/O ports and virtual IRQs (sketched below)
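The guest-visible state of such a NIC reduces to a few fields; this struct is a hypothetical sketch of the front end, not VMware's actual data structure.

```c
/* Hypothetical sketch of a virtual NIC's guest-facing state: to the
 * guest it is an ordinary PCI Ethernet controller, backed by VMM/VMApp
 * code and the host's VMNet driver rather than real hardware. */
#include <stdint.h>

struct virtual_nic {
    uint8_t  mac[6];     /* guest-visible MAC address */
    uint16_t io_base;    /* virtual I/O port range (e.g. Lance registers) */
    uint16_t io_len;
    uint8_t  irq;        /* virtual IRQ line raised via the VMM */
    int      vmnet_fd;   /* handle to the host's VMNet driver */
};
```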

  30. Sending a packet [figure: the transmit path through the VMM and the host]

  31. Receiving a packet [figure: the receive path through the host and the VMM]

  32. Experimental setup. Nettest: throughput tests

  33. Time profiling. The extra work: • switching worlds for every I/O instruction: the most expensive • an I/O interrupt for every packet sent and received: the VMM, host, and guest interrupt handlers all run! • packet transmission goes through two device drivers • a packet copy on transmit

  34. Optimization One • Primary aim: reduce world switches • Idea: only a third of the I/O instructions trigger a packet transmission • Emulate the rest in the VMM • The Lance NIC's address I/O has memory semantics • I/O becomes a MOV! • Strips away several layers of virtualization (sketched below)
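A sketch of the "I/O becomes MOV" rewrite with an invented register layout: because writing the Lance address port (RAP) just selects a register, the VMM can emulate the trapped OUT as a plain store into the virtual device's state.

```c
/* Sketch, not VMware's code: emulate Lance register OUTs in the VMM. */
#include <stdint.h>

struct lance_state { uint16_t rap; uint16_t csr[4]; };

enum { LANCE_RAP_PORT = 0x12 };   /* made-up port offset */

void emulate_lance_out(struct lance_state *s, uint16_t port, uint16_t value)
{
    if (port == LANCE_RAP_PORT)
        s->rap = value & 3;        /* register select: a plain MOV */
    else
        s->csr[s->rap] = value;    /* most CSR writes: also just a MOV */
}
```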

  35. Optimization Two • Very high interrupt rate for data transfers • When does a world switch occur? • when a packet is to be transmitted • when a real interrupt occurs, e.g. a timer interrupt • The idea: piggyback the packet interrupts on the real interrupts • queue the packets in a ring buffer • transmit all buffered packets on the next switch (sketched below) • Works well for I/O-intensive workloads
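A sketch of the piggybacking idea under invented names: transmits are queued in a ring and drained the next time a world switch happens anyway.

```c
/* Sketch: buffer transmits, flush on the next forced world switch. */
#include <stddef.h>

#define RING_SLOTS 64

struct tx_ring {
    void    *pkt[RING_SLOTS];
    size_t   len[RING_SLOTS];
    unsigned head, tail;
};

/* Guest transmit path: queue the packet, do not switch worlds. */
int tx_enqueue(struct tx_ring *r, void *pkt, size_t len)
{
    unsigned next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail)
        return -1;                 /* ring full: force a switch now */
    r->pkt[r->head] = pkt;
    r->len[r->head] = len;
    r->head = next;
    return 0;
}

/* Called when a real interrupt (e.g. the timer) forces a world switch
 * anyway: drain everything that accumulated since the last switch. */
void tx_flush(struct tx_ring *r, void (*send)(void *, size_t))
{
    while (r->tail != r->head) {
        send(r->pkt[r->tail], r->len[r->tail]);
        r->tail = (r->tail + 1) % RING_SLOTS;
    }
}
```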

  36. Packet transmits piggybacked on real interrupts [figure]

  37. Optimization Three • Reduce host system calls for packet sends and receives • Idea: instead of select(), use a shared bit vector to indicate packet availability (sketched below) • Eliminates the costly select()?
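A sketch of the shared-bit-vector idea with hypothetical names: the VMNet driver sets a bit in a page mapped into the VMApp, so checking for pending packets becomes a memory read rather than a select() call.

```c
/* Sketch: a shared bit vector replacing select() on the receive path. */
#include <stdatomic.h>
#include <stdbool.h>

/* One bit per virtual NIC, in memory shared between the VMApp and the
 * host VMNet driver (which sets bits as packets arrive). */
extern _Atomic unsigned long *shared_pkt_avail;

bool packets_pending(int nic)
{
    /* A load from shared memory instead of a select() system call. */
    return atomic_load(shared_pkt_avail) & (1UL << nic);
}
```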

  38. Summary of the three optimizations [throughput chart at 733 MHz: native vs. optimized VM vs. VM version 2.0; with the optimizations the guest OS even idles]

  39. Summary of the three optimizations [throughput chart at 350 MHz: native vs. optimized VM vs. VM version 2.0]

  40. Most effective optimization? • Emulating IN and OUT to the Lance I/O ports directly in the VMM • Why? • it eliminates lots of world switches • I/O is changed to a MOV instruction

  41. Further avenues for optimization? • Modify the guest OS • substitute expensive-to-virtualize instructions, e.g. MMU instructions. Example?? • import some OS functionality into the VMM • tradeoff: giving up the ability to run off-the-shelf OSes • An idealized virtual NIC (example??) • only one I/O for a packet transmit instead of 12! • cost: custom device drivers for every OS • VMware Server version

  42. Further avenues for optimization? • Modify the host OS: example?? • change the Linux networking stack • poor buffer management • cost: requires cooperation from OS vendors • Direct control of hardware: VMware ESX • avoids the fundamental limitations of the hosted architecture • idea: let the VMM drive I/O directly, with no world switching • cost??
