
InfiniBand


Presentation Transcript


  1. InfiniBand Bart Taylor

  2. What it is
  "InfiniBand™ Architecture defines a new interconnect technology for servers that changes the way data centers will be built, deployed and managed. By creating a centralized I/O fabric, InfiniBand Architecture enables greater server performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based switched fabric point-to-point architecture."
  --www.infinibandta.org

  3. History
  • InfiniBand is the result of a merger of two competing designs for an inexpensive high-speed network.
  • Future I/O was being developed by Compaq, IBM, and HP.
  • Next Generation I/O was being developed by Intel, Microsoft, and Sun Microsystems.
  • Future I/O and Next Generation I/O combined to form what we now know as InfiniBand.
  • The InfiniBand Trade Association maintains the specification.

  4. The Basic Idea
  • High-speed, low-latency data transport
  • Bidirectional serial bus
  • Switched fabric topology: several devices communicate at once
  • Data is transferred in packets that together form messages
  • Messages are direct memory access (RDMA), channel send/receive, or multicast operations
  • Host Channel Adapters (HCAs) are deployed on PCI cards (see the sketch below)
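
To make the HCA programming model concrete, the following is a minimal sketch using the Linux libibverbs API (the library choice is an assumption; the slides do not name one). It lists the HCAs present in the system and registers a buffer with the first adapter so the hardware can later move data into and out of it directly:

```c
/* Minimal libibverbs sketch: enumerate HCAs and register a memory buffer.
   Assumes the libibverbs userspace library is installed (link with -libverbs).
   Error checking is trimmed for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++)
        printf("HCA %d: %s\n", i, ibv_get_device_name(dev_list[i]));

    /* Open the first HCA and register a 4 KiB buffer with it. */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

Registration pins the buffer and hands the HCA the keys it needs for the direct memory access messages described above.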

  5. Main Features
  • Low latency messaging: < 6 microseconds
  • Highly scalable: tens of thousands of nodes
  • Bandwidth: three link widths, 2.5 Gbps (1X), 10 Gbps (4X), and 30 Gbps (12X); see the calculation below
  • Allows multiple fabrics on a single cable
  • Up to 8 virtual lanes per link
  • No interdependency between different traffic flows
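
The three bandwidth figures are the 1X, 4X, and 12X signalling rates. These links use 8b/10b line encoding (a standard detail not stated on the slide), so the usable data rate is about 80% of the signalling rate, as this small calculation shows:

```c
/* Signalling vs. usable data rate for 1X/4X/12X InfiniBand SDR links,
   assuming 8b/10b encoding (8 data bits carried per 10 line bits). */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;   /* SDR signalling rate per lane */
    const double efficiency = 0.8;  /* 8b/10b encoding overhead */
    const int widths[] = { 1, 4, 12 };

    for (int i = 0; i < 3; i++) {
        double raw  = widths[i] * lane_gbps;
        double data = raw * efficiency;
        printf("%2dX link: %4.1f Gbps signalling, %4.1f Gbps usable data\n",
               widths[i], raw, data);
    }
    return 0;
}
```

So a 4X link carries 10 Gbps on the wire but about 8 Gbps of payload.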

  6. Physical Devices
  • Standard copper cabling: max distance of 17 meters
  • Fiber-optic cabling: max distance of 10 kilometers
  • Host Channel Adapters (HCAs) on PCI cards: PCI, PCI-X, PCI-Express
  • InfiniBand switches: 10 Gbps non-blocking per port, easily cascadable

  7. Host Channel Adapters
  • Standard PCI: 133 MBps (PCI 2.2: 533 MBps); see the bus arithmetic below
  • PCI-X: 1066 MBps (PCI-X 2.0: 2133 MBps)
  • PCI-Express: x1 5 Gbps, x4 20 Gbps, x8 40 Gbps, x16 80 Gbps (raw signalling, both directions combined)
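
The parallel-bus peaks above follow from bus width times clock rate. A quick sketch of that arithmetic (the width and clock pairings are the commonly cited ones for each bus generation, not taken from the slide):

```c
/* Peak throughput of a parallel bus = width in bytes * clock rate.
   Clock values are the nominal spec rates (33.33, 66.67, 133.33, 266.67 MHz). */
#include <stdio.h>

int main(void)
{
    struct { const char *bus; int width_bits; double clock_mhz; } buses[] = {
        { "PCI (32-bit, 33 MHz)",        32,  33.33 },
        { "PCI 2.2 (64-bit, 66 MHz)",    64,  66.67 },
        { "PCI-X (64-bit, 133 MHz)",     64, 133.33 },
        { "PCI-X 2.0 (64-bit, 266 MHz)", 64, 266.67 },
    };

    for (int i = 0; i < 4; i++)
        printf("%-28s ~%.0f MBps\n", buses[i].bus,
               buses[i].width_bits / 8.0 * buses[i].clock_mhz);
    return 0;
}
```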

  8. DAFS
  • Direct Access File System: a protocol for file storage and access
  • Data is transferred as logical files, not physical storage blocks
  • Data moves directly from storage to client, bypassing the CPU and kernel
  • Provides RDMA functionality
  • Built on the Virtual Interface (VI) architecture, developed by Microsoft, Intel, and Compaq in 1996

  9. RDMA
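
The original slide held an RDMA diagram. To show the idea in code, here is a hedged sketch of posting a one-sided RDMA write with libibverbs; it assumes a queue pair that is already connected and a remote buffer address and rkey exchanged out of band (all assumptions, not details from the slide):

```c
/* Sketch: post a one-sided RDMA WRITE on an already-connected queue pair.
   The remote address and rkey must have been exchanged out of band. */
#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *local_mr,
                    void *local_buf, uint32_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer */
        .length = len,
        .lkey   = local_mr->lkey,         /* key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: peer posts no receive */
    wr.send_flags          = IBV_SEND_SIGNALED;  /* generate a completion to poll for */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered buffer */
    wr.wr.rdma.rkey        = rkey;

    /* The HCA copies the payload directly between the two registered buffers;
       neither side's CPU or kernel touches the data path after this call. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

This kernel-bypass data path is what the latency numbers on the later slides reflect.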

  10. TCP/IP Packet Overhead

  11. Latency Comparison
  • Standard Ethernet TCP/IP driver: 80 to 100 microseconds
  • Standard Ethernet, Dell NIC with MPICH over TCP/IP: 65 microseconds
  • InfiniBand 4X with MPI driver: 6 microseconds
  • Myrinet: 6 microseconds
  • Quadrics: 3 microseconds

  12. Latency Comparison

  13. References
  • InfiniBand Trade Association - www.infinibandta.org
  • OpenIB Alliance - www.openib.org
  • TopSpin - www.topspin.com
  • Wikipedia - www.wikipedia.org
  • O'Reilly - www.oreillynet.com
  • SourceForge - infiniband.sourceforge.net
  • Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics. Computer and Information Science, Ohio State University. nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf
