
XenSocket: VM-to-VM IPC


Presentation Transcript


  1. XenSocket: VM-to-VM IPC (virtual machine inter-process communication). Presented at ACM Middleware, November 28, 2007. John Linwood Griffin (Jagged Technology); Suzanne McIntosh, Pankaj Rohatgi, and Xiaolan Zhang (IBM Research).

  2. What we did: Reduce work on the critical path. Before XenSocket (packets travel from VM 1 through Domain-0 to VM 2, via Xen): put the packet into a page, ask Xen to remap the page, route the packet, then ask Xen to remap the page again. With XenSocket (VM 1 talks directly to VM 2 via Xen): allocate a pool of pages (once), ask Xen to share the pages (once), then write into the pool and read from the pool (a sketch of such a shared pool follows below).
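To make the "pool of pages" concrete, here is a minimal C sketch of the kind of descriptor two VMs could map once and then reuse for every message. The names (xs_ring, XS_POOL_BYTES) and the field layout are illustrative assumptions, not the actual XenSocket structures.

```c
/* Illustrative sketch only (not the XenSocket source): a descriptor for
 * a shared circular buffer that is set up once and then reused for
 * every message, so no per-message hypercall is needed on the data path. */
#include <stdint.h>

#define XS_POOL_BYTES (128 * 1024)        /* 128 KB pool, as on slide 9 */

struct xs_ring {
    volatile uint32_t write_off;          /* total bytes written so far    */
    volatile uint32_t read_off;           /* total bytes read so far       */
    volatile uint32_t receiver_blocked;   /* nonzero while the reader sleeps */
    uint8_t data[XS_POOL_BYTES];          /* backed by pages shared via Xen  */
};
```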

  3. The standard outline • What we did • (Why) we did what we did • (How) we did what we did • What we did (again)

  4. [Video] IBM is building a stream processing system with high-throughput requirements. An enormous volume of data enters the system, and independent nodes process and forward data objects. The design calls for isolated, audited, and profiled execution environments.

  5. x86 virtualization technology provides isolation in our security architecture. [Diagram: processing nodes 1–4 each run in their own VM (VM 1–VM 4) on Xen and exchange data with other physical nodes.]

  6. Using the Xen virtual network resulted in low throughput at maximum CPU usage. Between two Linux processes, a UNIX socket reaches 14 Gbit/s. Between VM 1 and VM 2 through Domain-0 on Xen, a TCP socket reaches 0.14 Gbit/s, with VM 1 and VM 2 at 100% CPU and Domain-0 at 20% CPU.

  7. Our belief: the root causes are Xen hypercalls and the network stack. Before XenSocket (VM 1 to Domain-0 to VM 2, via Xen): put the packet into a page, ask Xen to swap pages, route the packet, then ask Xen to swap pages again. Costs: victim pages must be zeroed; only 1.5 KB of each 4 KB page is used; and a Xen hypercall may be invoked after only one packet is queued.

  8. The standard outline • What we did • (Why) we did what we did • (How) we did what we did • What we did (again)

  9. XenSocket hypothesis: a cooperative memory buffer improves throughput. With XenSocket (VM 1 and VM 2 on Xen): allocate a 128 KB pool of pages, ask Xen to share the pages, and reuse the pages as a circular buffer (a sketch of the data path follows below). Writes are visible immediately and there is no per-packet processing; hypercalls are still required for signaling, but fewer of them.
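Continuing the xs_ring sketch above, here is what the sender's data path might look like once the pool is shared: a plain circular-buffer copy with no hypercall (signaling, covered on slide 12, is the only thing that ever traps to Xen). This is a sketch under stated assumptions, not the XenSocket implementation; memory barriers are omitted for brevity.

```c
#include <stddef.h>
#include <stdint.h>

/* Copy up to len bytes into the shared ring; returns the number copied.
 * Offsets grow monotonically, so (write_off - read_off) is the fill level
 * even when the 32-bit counters wrap. A real implementation needs memory
 * barriers around the offset updates. */
static size_t xs_write(struct xs_ring *r, const void *buf, size_t len)
{
    uint32_t w = r->write_off;
    uint32_t used = w - r->read_off;
    size_t space = XS_POOL_BYTES - used;

    if (len > space)
        len = space;                        /* caller blocks or retries  */
    for (size_t i = 0; i < len; i++)
        r->data[(w + i) % XS_POOL_BYTES] = ((const uint8_t *)buf)[i];
    r->write_off = w + (uint32_t)len;       /* publish after the copy    */
    return len;
}
```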

  10. Caveat emptor • We used Xen 3.0; the latest is Xen 3.1 • Xen networking is reportedly improved • Shared-memory concepts remain valid • Released under GPL as XVMSocket: http://sourceforge.net/projects/xvmsocket/ • The community is porting to Xen 3.1

  11. Sockets interface: a new socket family is used to set up the shared memory. For comparison, a TCP server calls socket(); bind(sockaddr_inet) with a local port #; listen(); accept(), and a TCP client calls socket(); connect(sockaddr_inet) with the remote address and port #. A XenSocket server calls socket(); bind(sockaddr_xen) with the remote VM #, and the system returns a grant # for the client; the client then calls socket(); connect(sockaddr_xen) with the remote VM # and remote grant # (a sketch follows below).
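The C sketch below mirrors the setup flow on this slide. AF_XEN, the sockaddr_xen field names, and the family number are illustrative guesses, not the exact identifiers from the XenSocket/XVMSocket source; the real kernel module registers its own address family.

```c
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define AF_XEN 32                          /* placeholder family number */

struct sockaddr_xen {                      /* hypothetical layout */
    sa_family_t sxe_family;                /* AF_XEN */
    uint16_t    remote_domid;              /* peer VM (domain) number */
    uint32_t    remote_grant;              /* grant # handed to the client */
};

/* "Server" side: bind against the peer VM; the system would then report
 * the grant # that must be passed to the client out of band. */
int xs_server(uint16_t peer_domid)
{
    struct sockaddr_xen addr = { .sxe_family = AF_XEN,
                                 .remote_domid = peer_domid };
    int fd = socket(AF_XEN, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* "Client" side: connect using the peer VM # and the grant #. */
int xs_client(uint16_t peer_domid, uint32_t grant)
{
    struct sockaddr_xen addr = { .sxe_family = AF_XEN,
                                 .remote_domid = peer_domid,
                                 .remote_grant = grant };
    int fd = socket(AF_XEN, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```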

  12. After setup, steady-state operation needs little (if any) synchronization. Example: VM 1 calls write(“XenSocket”), the bytes land in the shared buffer, and VM 2’s read(3) returns “Xen”. If the receiver is blocked, a signal is sent via Xen (see the sketch below).
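A sketch of the signaling rule on this slide, continuing the earlier xs_ring sketch: the common case is a plain memory copy, and a hypercall (modeled here by a hypothetical stub, xs_notify_receiver) happens only when the receiver has blocked.

```c
/* Stub standing in for a Xen event-channel notification; in the real
 * kernel module this is where the (rare) hypercall would happen. */
static void xs_notify_receiver(struct xs_ring *r) { (void)r; }

/* Send path: copy into the shared ring, then wake the reader only if it
 * went to sleep. Returns bytes queued; 0 means the ring is full and the
 * caller should block or retry. */
static size_t xs_send(struct xs_ring *r, const void *buf, size_t len)
{
    size_t n = xs_write(r, buf, len);      /* no hypercall on this path */
    if (n > 0 && r->receiver_blocked)
        xs_notify_receiver(r);             /* the only signal sent      */
    return n;
}
```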

  13. Design goal (future work): support for efficient local multicast. VM 1 writes “XenSocket” once into the shared buffer; VM 2’s read(3) returns “Xen” and VM 3’s read(5) returns “XenSo”. Future writes wrap around and block on the first unread page (a sketch of that blocking rule follows below).
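One way to read the blocking rule on this slide, as a hedged sketch: with several readers sharing the one buffer, the writer can only advance until it is a full pool ahead of the slowest reader. The helper names and the per-reader offset array are assumptions, and XS_POOL_BYTES comes from the earlier sketch.

```c
#include <stddef.h>
#include <stdint.h>

/* Smallest (i.e., slowest) read offset among all readers. Offsets are
 * treated as monotonically increasing 64-bit counters in this sketch. */
static uint64_t xs_slowest_reader(const uint64_t *read_offs, int nreaders)
{
    uint64_t min = read_offs[0];
    for (int i = 1; i < nreaders; i++)
        if (read_offs[i] < min)
            min = read_offs[i];
    return min;
}

/* The writer may wrap around only while it stays within one pool
 * (XS_POOL_BYTES) of the slowest reader; otherwise it must block. */
static int xs_can_write(uint64_t write_off, uint64_t slowest, size_t len)
{
    return write_off + len - slowest <= XS_POOL_BYTES;
}
```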

  14. The standard outline • What we did • (Why) we did what we did • (How) we did what we did • What we did (again)

  15. Figure 5: Pretty good performance. [Plot: bandwidth versus message size (KB, log scale). UNIX socket: 14 Gbit/s; XenSocket: 9 Gbit/s; INET socket: 0.14 Gbit/s.]

  16. Figure 6: Interesting cache effects. [Plot: bandwidth versus message size (0.01–100 MB, log scale) for the UNIX socket, XenSocket, and the INET socket.]

  17. Throughput is limited by CPU usage; it is advantageous to offload Domain-0. TCP socket: 0.14 Gbit/s, with VM 1 and VM 2 at 100% CPU and Domain-0 at 20% CPU. XenSocket: 9 Gbit/s, with VM 1 and VM 2 at 100% CPU and Domain-0 at only 1% CPU.

  18. Tradeoff: adjusted communications integrity and a relaxing of pure VM isolation. Possible solution: use a proxy for pointer updates along the reverse path, but now this path is bidirectional(?). Any master’s students looking for a project?

  19. Potential memory leak: Xen didn’t (doesn’t?) support page revocation. Setup: VM 1 shares pages with VM 2. Scenario #1: VM 2 releases the pages. Scenario #2: VM 1 cannot safely reuse the pages.

  20. Xen shared memory: a hot topic! XenSocket, Middleware’07: make a better virtual network. MVAPICH-ivc, Huang and colleagues (Ohio State, USA), SC’07: what we did, but with a custom HPC API. XWay, Kim and colleagues (ETRI, Korea), ’07: what we did, but hidden behind TCP sockets. Menon and colleagues (HP, USA), VEE’05 and USENIX’06: make the virtual network better.

  21. Conclusion: XenSocket is awesome. Shared memory enables high-throughput VM-to-VM communication in Xen (a broadly applicable result?). John Linwood Griffin, John.Griffin @ JaggedTechnology.com. Also here at Middleware: Sue McIntosh.
