  1. InfiniBand. Under the guidance of Mr. Pradeep Kumar Jena (Advisor). Submitted by D. Sudhir, CS 200118133.

  2. A Brief History. Future I/O (FIO) was being developed by IBM, Compaq Computer, Hewlett-Packard, 3Com, Adaptec, and Cisco. Next Generation I/O (NGIO) was being developed by Intel, Dell Computers, Sun, and others. FIO and NGIO were competing technologies; neither would “win”, so the two camps combined forces to form Serial I/O (SIO), which combined the best of both technologies. The name SIO could not escape the powerful clutches of the Intel marketing department and hence was renamed the InfiniBand Architecture, or IBA for short.

  3. What is InfiniBand?
  • A technology used to interconnect processing nodes and I/O nodes to form a System Area Network
  • Intended to be a replacement for PCI
  • Heavily leverages best-of-breed technologies

  4. The Problem
  • The need for a cost-effective interconnect technology for building clusters.
  • Bus-based architectures (i.e. PCI) are limited to a single host system and cannot easily extend beyond the confines of the “box”.
  • Bandwidth and latency between boxes using existing system area networks are limited and/or expensive.
  • This suggests that I/O interconnects need to change radically every few years in order to maintain system performance.

  5. The Solution - InfiniBand
  • Network-based architecture
  • Serial communication technology
  • Supported by a very large consortium – 220 members in the IBTA to date
  • Targets a volume market in order to take advantage of economies of scale

  6. InfiniBand System Fabric

  7. Areas of Operation
  • Application clustering
  • Storage Area Networks
  • Inter-tier communication
  • Inter-Processor Communication (IPC)

  8. I/O Architectures – Shared Bus Architecture

  9. I/O Architectures – Switched Fabric Architecture

  10. InfiniBand Technical Overview
  • Layered protocol - Physical, Link, Network, Transport, Upper Layers
  • Packet-based communication
  • Three link speeds (see the bandwidth sketch after this list): 1X - 2.5 Gb/s, 4 wires; 4X - 10 Gb/s, 16 wires; 12X - 30 Gb/s, 48 wires
  • Subnet Management Protocol
  • Remote DMA support
  • Multicast and unicast support
  • Reliable transport methods - message queuing
  • Communication flow control - link level and end to end
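The three link speeds follow directly from the lane count: each lane signals at 2.5 Gb/s over a differential pair in each direction (four wires per lane), and the original InfiniBand spec uses 8b/10b encoding, so only 8 of every 10 signalled bits carry data. The minimal C sketch below works out the raw rate, effective data rate, and wire count for each link width; the 8b/10b overhead figure is standard for the original spec but is not stated on the slide.

    /* Sketch: InfiniBand SDR link bandwidth per link width.
       Assumes 2.5 Gb/s signalling per lane and 8b/10b encoding
       (8 data bits carried in every 10 signalled bits). */
    #include <stdio.h>

    int main(void)
    {
        const double lane_gbps  = 2.5;  /* signalling rate per lane */
        const double efficiency = 0.8;  /* 8b/10b: 8 data bits per 10 */
        const int widths[] = { 1, 4, 12 };

        for (int i = 0; i < 3; i++) {
            double raw = widths[i] * lane_gbps;
            printf("%2dX: %5.1f Gb/s raw, %5.1f Gb/s data, %2d wires\n",
                   widths[i], raw, raw * efficiency,
                   widths[i] * 4); /* 2 differential pairs per lane */
        }
        return 0;
    }

Running this reproduces the slide's figures (2.5/10/30 Gb/s raw over 4/16/48 wires) and shows the corresponding 2/8/24 Gb/s of usable data.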

  11. InfiniBand Layers

  12. InfiniBand Architecture

  13. InfiniBand Elements (a short enumeration sketch follows this list)
  • Channel Adapters
  • Switch
  • Router
  • Subnet Manager
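To make "Channel Adapter" concrete: on a modern Linux host, each Host Channel Adapter appears as a device that the OpenFabrics verbs library can enumerate and open. This is a minimal sketch only; libibverbs postdates this presentation and is used here purely as an illustration, assuming the library is installed (build with gcc list_hcas.c -libverbs).

    /* Sketch: enumerate the Channel Adapters visible to this host
       via the libibverbs API. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **list = ibv_get_device_list(&num_devices);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++)
            printf("Channel adapter %d: %s\n",
                   i, ibv_get_device_name(list[i]));

        ibv_free_device_list(list);
        return 0;
    }

Switches and routers, by contrast, are fabric elements rather than host devices: they forward packets within and between subnets, and are discovered and configured by the Subnet Manager rather than opened by host software.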

  14. InfiniBand Support for the Virtual Interface Architecture

  15. Conclusion. The collective effort of industry leaders has successfully transitioned InfiniBand from technology demonstrations to the first real product deployments. The IBTA's vision is to improve and simplify the data center through InfiniBand technology and the fabric it creates, providing a single interconnect for servers, communications, and storage. Thank You!
