
The NE010 iWARP Adapter


Presentation Transcript


  1. The NE010 iWARP Adapter. Gary Montry, Senior Scientist, +1-512-493-3241, GMontry@NetEffect.com

  2. Today's Data Center [diagram] Users connect through three separate fabrics, each with its own switch and server adapter: LAN Ethernet for networking (NAS), Fibre Channel for block storage (SAN), and a proprietary clustering fabric (Myrinet, Quadrics, InfiniBand, etc.) for clustering. Applications sit behind all three adapters.

  3. Overhead & Latency in Networking / The iWARP* Solution [diagram] With a standard Ethernet TCP/IP packet, an I/O command travels from the user application through the I/O library, kernel, and device driver to the adapter, and the data is staged through OS, driver, and adapter buffers along the way. The slide attributes the CPU overhead to three sources: application-to-OS context switches (~40%), transport (TCP) packet processing (~40%), and intermediate buffer copies (~20%). iWARP addresses each in turn: transport (TCP) offload moves packet processing into hardware, RDMA / DDP eliminates the intermediate buffer copies, and user-level direct access / OS bypass eliminates the context switches. (* iWARP: IETF-ratified extensions to the TCP/IP Ethernet standard.)
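The three iWARP mechanisms map one-to-one onto the verbs-style API such adapters expose: a buffer is registered (pinned) once, and work requests are then posted from user space directly to the adapter. Below is a minimal client-side sketch against the librdmacm convenience API; the server name, port, queue depths, and message size are placeholders, and real code would check every return value.

/*
 * Minimal sketch of the user-level path iWARP exposes, using the
 * librdmacm convenience API.  The buffer is registered once, then the
 * send is posted from user space directly to the adapter: no per-I/O
 * context switch and no intermediate copy on the data path.
 * "server.example.com" and port "7471" are placeholders.
 */
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    struct ibv_qp_init_attr attr = {
        .cap = { .max_send_wr = 8, .max_recv_wr = 8,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .sq_sig_all = 1,                 /* every send generates a completion */
    };
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char buf[4096] = "payload";

    if (rdma_getaddrinfo("server.example.com", "7471", &hints, &res))
        return 1;
    if (rdma_create_ep(&id, res, NULL, &attr))   /* CM endpoint + queue pair */
        return 1;
    mr = rdma_reg_msgs(id, buf, sizeof(buf));    /* pin and register once */
    if (!mr || rdma_connect(id, NULL))
        return 1;

    rdma_post_send(id, NULL, buf, sizeof(buf), mr, 0); /* posted from user space */
    rdma_get_send_comp(id, &wc);                 /* reap completion, no data copy */

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}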

  4. Tomorrow's Data Center: Separate Fabrics for Networking, Storage, and Clustering [diagram] Each fabric moves to iWARP Ethernet while staying physically separate: LAN iWARP Ethernet alongside the legacy LAN Ethernet (NAS), storage iWARP Ethernet alongside Fibre Channel (SAN), and clustering iWARP Ethernet alongside Myrinet, Quadrics, InfiniBand, etc. Every server carries a combined networking/storage/clustering adapter into each switch.

  5. Single Adapter for All Traffic: Converged Fabric for Networking, Storage, and Clustering [diagram] A single networking/storage/clustering adapter per server and one converged iWARP Ethernet switch carry all traffic between users, applications, NAS, and SAN. Benefits: smaller footprint, lower complexity, higher bandwidth, lower power, lower heat dissipation.

  6. NetEffect’s NE010 iWARP Ethernet Channel Adapter

  7. High Bandwidth – Storage/Clustering Fabrics [chart] Bandwidth tests, NE010 vs. InfiniBand & 10 GbE NIC: bandwidth (MB/s, 0–1000) against message size (128–16384 bytes) for InfiniBand on PCI-X, InfiniBand on PCIe, a 10 GbE NIC, and the NE010. Source: InfiniBand – OSU website; 10 GbE NIC and NE010 – NetEffect measured.
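For context, bandwidth curves of this kind are normally produced by a streaming microbenchmark that keeps several sends in flight and divides bytes moved by elapsed time. The sketch below shows the shape of such a loop against the librdmacm API; it assumes an already-connected rdma_cm_id and a registered buffer (as in the slide-3 sketch), and DEPTH and ITERS are arbitrary choices.

/*
 * Streaming bandwidth loop: keep DEPTH sends in flight, reap
 * completions as they arrive, and divide bytes moved by elapsed time.
 */
#include <time.h>
#include <rdma/rdma_verbs.h>

#define DEPTH 16
#define ITERS 10000

double stream_bw_mbs(struct rdma_cm_id *id, struct ibv_mr *mr,
                     void *buf, size_t msg_size)
{
    struct ibv_wc wc;
    struct timespec t0, t1;
    int posted = 0, done = 0, inflight = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (done < ITERS) {
        while (posted < ITERS && inflight < DEPTH) {  /* keep the wire full */
            rdma_post_send(id, NULL, buf, msg_size, mr, 0);
            posted++;
            inflight++;
        }
        if (rdma_get_send_comp(id, &wc) > 0) {        /* blocks for one completion */
            inflight--;
            done++;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)ITERS * msg_size / secs / 1e6;     /* MB/s */
}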

  8. Multiple Connections: Realistic Data Center Loads [chart] Multiple-QP comparison, NE010 vs. InfiniBand: bandwidth (MB/s, 0–1000) against message size (128–16384 bytes) for each adapter at 1 QP and at 2048 QPs. Source: InfiniBand – Mellanox website; NE010 – NetEffect measured.

  9. NE010 Processor Off-load [chart] Bandwidth and CPU utilization, NE010 vs. 10 GbE NIC: bandwidth (MB/s, 0–1000) and CPU utilization (0–100%) against message size (128–16384 bytes), with series BW NIC, BW NE010, CPU Util NIC, and CPU Util NE010.
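The CPU-utilization series is the point of this slide: with full TCP offload the host does almost no per-packet work. The slide does not say how utilization was measured; the standard method on Linux, sketched below as an assumption, is to difference the aggregate jiffy counters in /proc/stat across the run.

/*
 * Conventional per-run CPU utilization: sample /proc/stat before and
 * after the transfer, then take busy-time over total-time.  A sketch
 * of an assumed methodology, not NetEffect's documented one.
 */
#include <stdio.h>

static void cpu_sample(unsigned long long *busy, unsigned long long *total)
{
    unsigned long long user = 0, nice = 0, sys = 0, idle = 0,
                       iowait = 0, irq = 0, softirq = 0;
    FILE *f = fopen("/proc/stat", "r");
    fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
           &user, &nice, &sys, &idle, &iowait, &irq, &softirq);
    fclose(f);
    *busy  = user + nice + sys + irq + softirq;
    *total = *busy + idle + iowait;
}

/* Usage: sample before and after the bandwidth loop, then
 *   utilization = (busy1 - busy0) / (double)(total1 - total0);  */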

  10. Low Latency – Required for Clustering Fabrics [chart] Application latency, NE010 vs. InfiniBand & 10 GbE NIC: latency (µsec, 0–160) against message size (128–16384 bytes). Source: InfiniBand – OSU website; 10 GbE NIC and NE010 – NetEffect measured.
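Application-latency numbers of this kind conventionally come from a ping-pong test: time a large number of round trips and halve the average. A sketch of the timing loop, again assuming an already-connected rdma_cm_id with registered send and receive buffers and a peer that echoes each message; all names are illustrative.

/*
 * Ping-pong latency: time iters round trips and halve the average.
 * The receive for the echo is pre-posted before each send so the echo
 * never arrives without a buffer waiting.
 */
#include <time.h>
#include <rdma/rdma_verbs.h>

double pingpong_latency_us(struct rdma_cm_id *id,
                           void *sbuf, struct ibv_mr *smr,
                           void *rbuf, struct ibv_mr *rmr,
                           size_t msg_size, int iters)
{
    struct ibv_wc wc;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        rdma_post_recv(id, NULL, rbuf, msg_size, rmr);  /* buffer for the echo */
        rdma_post_send(id, NULL, sbuf, msg_size, smr, 0);
        rdma_get_send_comp(id, &wc);
        rdma_get_recv_comp(id, &wc);                    /* echo has arrived */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return secs / iters / 2.0 * 1e6;   /* one-way latency in microseconds */
}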

  11. Low Latency – Required for Clustering Fabrics [chart] Application latency, as measured by D. Dalessandro – OSC, May '06.

  12. Summary & Contact Information
  • iWARP: enhanced TCP/IP Ethernet – the next generation of Ethernet for all data center fabrics; a full implementation is necessary to meet the most demanding requirement, clustering & grid; a true converged I/O fabric for large clusters.
  • NetEffect delivers "best of breed" performance – latency directly competitive with proprietary clustering fabrics, superior performance under load for throughput and bandwidth, and dramatically lower CPU utilization.
  For more information contact: Gary Montry, Senior Scientist, +1-512-493-3241 | GMontry@NetEffect.com | www.NetEffect.com
