
Progress and Challenges toward 100Gbps Ethernet

Joel Goergen, VP of Technology / Chief Scientist



  1. Progress and Challenges toward 100Gbps Ethernet. Joel Goergen, VP of Technology / Chief Scientist. Abstract: This technical presentation focuses on the progress and challenges in developing the technology and standards for 100 GbE. Joel is an active contributor to the IEEE 802.3 and Optical Internetworking Forum (OIF) standards processes. He will discuss design methodology, enabling technologies, emerging specifications, and crucial considerations for performance and reliability in this next iteration of LAN/WAN technology.

  2. Overview • Network Standards Today • Available Technology Today • Feasible Technology for 2009 • The Push for Standards within IEEE and OIF • Anatomy of a 100Gbps or 160Gbps Solution • Summary • Backup Slides

  3. Network Standards Today: The Basic Evolution. [Timeline chart: 10 Mb (1983), 100 Mb (1994), 1 GbE (1996), 10 GbE (2002), 100 GbE (2010???)]

  4. Network Standards Today: The Basic Structure

  5. Network Standards Today: The Desktop • 1Gbps Ethernet • 10/100/1000 copper ports have been shipping with most desktop and laptop machines for a few years • Fiber SMF/MMF • IEEE 802.11a/b/g wireless • Average usable bandwidth reaching 50Mbps

  6. Network Standards Today: Clusters and Servers • 1Gbps Ethernet (copper) • 10Gbps Ethernet (fiber, CX-4)

  7. Network Standards Today: Coming Soon • 10Gbps LRM • Multi-mode fiber to 220 meters. • 10GBASE-T • 100 meters at more than 10 Watts per port??? • 30-meter short reach at 3 Watts per port??? • 10Gbps Backplane • 1Gbps, 4x3.125Gbps, 1x10Gbps over 1 meter of improved FR-4 material.

  8. Available Technology Today: System Implementation A+B. [Block diagram: line cards with 1G/10G front-end ports and SPIx interfaces connect to redundant A and B switch fabrics across a passive copper backplane]

  9. Available Technology Today: System Implementation N+1. [Block diagram: line cards with 1G/10G front-end ports and SPIx interfaces connect over lanes L1..Ln+1 to the 1st through Nth switch fabrics plus an N+1 spare, across a passive copper backplane]

  10. Available Technology Today: Zoom to Front-end. [Same N+1 block diagram, highlighting the front-end section of the line cards]

  11. Available Technology Today: Front-end • Copper • RJ45 • RJ21 (mini … carries 6 ports) • Fiber • XFP and variants (10Gbps) • SFP and variants (1Gbps) • XENPAK • LC/SC bulkhead for WAN modules

  12. Available Technology Today: Front-end System Interfaces • TBI • 10-bit interface. Max speed 3.125Gbps. • SPI-4 / SXI • System Protocol Interface. 16-bit interface. Max speed 11Gbps. • SPI-5 • System Protocol Interface. 16-bit interface. Max speed 50Gbps. • XFI • 10Gbps serial interface.

  13. Available Technology Today: Front-end Pipe Diameter • 1Gbps • 1Gbps doesn't handle a lot of data anymore. • Non-standard parallel options are also available based on OIF VSR. • 10Gbps LAN/WAN or OC-192 • As port density increases, using 10Gbps as an upstream pipe will no longer be effective. • 40Gbps OC-768 • Not an effective port density in an asynchronous system. • Optics cost close to 30 times that of 10Gbps Ethernet.

  14. Available Technology Today: Front-end Distance Requirements • x00 m (MMF) • SONET/SDH (Parallel): OIF VSR-4, VSR-5 • Ethernet: 10GBASE-SR, 10GBASE-LX4, 10GBASE-LRM • 2-10 km • SONET/SDH: OC-192/STM-64 SR-1/I-64.1, OC-768/STM-256 VSR2000-3R2/etc. • Ethernet: 10GBASE-LR • ~40 km • SONET/SDH: OC-192/STM-64 IR-2/S-64.2, OC-768/STM-256 • Ethernet: 10GBASE-ER • ~100 km • SONET/SDH: OC-192/STM-64 LR-2/L-64.2, OC-768/STM-256 • Ethernet: 10GBASE-ZR • DWDM • OTN: ITU G.709 OTU-2, OTU-3 • Assertion • Each of these applications must be solved for ultra-high data rate interfaces.

  15. Available Technology Today: Increasing Pipe Diameter • 1Gbps LAN by 10 links in parallel • 10Gbps LAN by x links of WDM • 10Gbps LAN by x physical links • Multiple OC-192 or OC-768 channels

  16. Available Technology Today: Zoom to Backplane. [Same N+1 block diagram, highlighting the passive copper backplane]

  17. Available Technology Today: Backplane. [Chassis diagram showing the data packet path: GbE / 10 GbE line cards, RPMs, SFMs, power supplies, SERDES, and backplane traces]

  18. Available Technology Today: Making a Backplane. Simple! It's just multiple sheets of glass with copper traces and copper planes added for electrical connections. Reference: Isola

  19. Available Technology Today: Backplane Pipe Diameter • 1.25Gbps • Used in systems with five- to ten-year-old technology. • 2.5Gbps/3.125Gbps • Used in systems with technology five years old or newer. • 5Gbps/6.25Gbps • Used within the last 12 months.

  20. Available Technology Today: Increasing Pipe Diameter • You can't WDM copper. • 10.3Gbps/12.5Gbps • Not widely deployed at this time. • Increasing the pipe diameter on a backplane with assigned slot pins can only be done by changing the glass construction.

  21. Available Technology Today: Pipe Diameter is NOT Flexible • Once the pipe is designed and built to a certain speed, making the pipe faster is extremely difficult, if not impossible.

  22. Available Technology Today: Gbits Density per Slot with Front-End and Backplane Interfaces Combined. [Chart]

  23. Feasible Technology for 2009: Defining the Next Generation • The overall network architecture for next-generation ultra-high (100, 120 and 160Gbps) data rate interfaces should be similar in concept to the successful network architecture deployed today using 10Gbps and 40Gbps interfaces. • The internal node architectures for ultra-high (100, 120 and 160Gbps) data rate interfaces should follow concepts similar to those in use for 10Gbps and 40Gbps interfaces. • All new concepts need to be examined, but there are major advantages to scaling current methods with new technology.

  24. Feasible Technology for 2009: Front-end Pipe Diameter • 80Gbps … not enough return on investment • 100Gbps • 120Gbps • 160Gbps • Reasonable channel widths (see the sketch below) • 10λ by 10-16 Gbps • 8λ by 12.5-20 Gbps • 4λ by 25-40 Gbps • 1λ by 100-160 Gbps • Suggest starting at an achievable channel width while pursuing a timeline to optimize the width in terms of density, power, feasibility, and cost, depending on optical interface application/reach.
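The λ-by-rate pairings above (and the lane widths on the following two slides) come from dividing the aggregate target by the lane count. A minimal illustrative sketch in Python, added here rather than taken from the original deck:

```python
# Per-lane rate implied by each aggregate target and lane count.
# Lane counts are the lambda options from the slide; targets are the
# aggregate rates under discussion.
TARGETS_GBPS = (100, 120, 160)
LANE_COUNTS = (10, 8, 4, 1)

for lanes in LANE_COUNTS:
    rates = [t / lanes for t in TARGETS_GBPS]
    print(f"{lanes} lane(s): {min(rates):g}-{max(rates):g} Gbps per lane")
# -> 10 lanes: 10-16, 8 lanes: 12.5-20, 4 lanes: 25-40, 1 lane: 100-160,
#    matching the ranges quoted on the slide.
```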

  25. Feasible Technology for 2009: Front-end Distance Requirements • x00 m (MMF) • SONET/SDH: OC-3072/STM-1024 VSR • Ethernet: 100GBASE-S • 2-10 km • SONET/SDH: OC-3072/STM-1024 SR • Ethernet: 100GBASE-L • ~40 km • SONET/SDH: OC-3072/STM-1024 IR-2 • Ethernet: 100GBASE-E • ~100 km • SONET/SDH: OC-3072/STM-1024 LR-2 • Ethernet: 100GBASE-Z • DWDM (OTN) • SONET/SDH: mapping of OC-3072/STM-1024 • Ethernet: mapping of 100GBASE • Assertion • These optical interfaces are defined today at the lower speeds. It is highly likely that industry will want these same interface specifications at the ultra-high speeds. • Optical interfaces, with the exception of VSR, are not typically defined in the OIF. In order to specify the system-level electrical interfaces, some idea of what industry will do with the optical interface has to be discussed. It is not the intent of this presentation to launch these optical interface efforts within the OIF.

  26. Feasible Technology for 2009: Front-end System Interfaces • Reasonable channel widths (SPI-?) • 16 lanes by 6.25-10Gbps • 10 lanes by 10-16Gbps • 8 lanes by 12.5-20Gbps • 5 lanes by 20-32Gbps • 4 lanes by 25-40Gbps • Port density is impacted by channel width: fewer lanes translate to higher port density and lower power.

  27. Feasible Technology for 2009: Backplane Pipe Diameter • Reasonable channel widths • 16 lanes by 6.25-10Gbps • 10 lanes by 10-16Gbps • 8 lanes by 12.5-20Gbps • 5 lanes by 20-32Gbps • 4 lanes by 25-40Gbps • Port density is impacted by channel width: fewer lanes translate to higher port density and lower power.

  28. Feasible Technology for 2009: Pipe Diameter is NOT Flexible • New backplane designs will need pipes that can handle 20Gbps to 25Gbps.

  29. Feasible Technology for 2009: Gbits Density per Slot with Front-End and Backplane Interfaces Combined. [Chart]

  30. Feasible Technology for 2009: 100Gbps Options. [Table of implementation options.] The bit rates shown are based on 100Gbps; scale them accordingly to achieve 160Gbps.

  31. The Push for Standards: Interplay Between the OIF & IEEE • The OIF defines multi-source agreements within the telecom industry. • Optics and EDC for LAN/WAN • SERDES definition • Channel models and simulation tools • IEEE 802 covers LAN/MAN Ethernet • 802.1 and 802.3 define Ethernet over copper cables, fiber cables, and backplanes. • 802.3 leverages efforts from the OIF. • Membership in both bodies is important for developing next-generation standards.

  32. The Push for Standards: OIF • Force10 Labs introduced three efforts within the OIF to drive 100Gbps to 160Gbps connectivity: • Two interfaces for interconnecting optics, ASICs, and backplanes • A 25Gbps SERDES • Updates of design criteria to the Systems User Group

  33. Case Study: Standards Process. P802.3ah: Nov 2000 / Sept 2004
  • Call for Interest: by a member of 802.3; 50% WG vote
  • Study Group: open participation; 75% WG PAR vote, 50% EC & Stds Board
  • Task Force: open participation; 75% WG vote
  • Working Group Ballot: members of 802.3; 75% WG ballot, EC approval
  • Sponsor Ballot: public ballot group; 75% of ballot group
  • Standards Board Approval: RevCom & Stds Board; 50% vote
  • Publication: IEEE staff, project leaders

  34. Case Study: Standards Process. 10GBASE-LRM: 2003 / 2006
  Timeline: Nov 03 CFI; Jan 04 Study Group; May 04 Task Force; Nov 04 TF Ballot; Mar 05 WG Ballot; Dec 05 Sponsor Ballot; Mid-06 Standard
  10GBASE-LRM innovations:
  • TWDP: software reference equalizer; determines the EDC penalty of the transmitter
  • Dual launch: centre and MCP; maximum coverage for minimum EDC penalty
  • Stress channels: precursor, split, and post-cursor; canonical tests for EDC
  Optical power budget (OMA):
  • Launch power (min): -4.5 dBm
  • 0.5 dB: transmitter implementation
  • 0.4 dB: fiber attenuation
  • 0.3 dB: RIN
  • 0.2 dB: modal noise
  • 4.4 dB: TP3 TWDP and connector loss @ 99% confidence level
  • 0.9 dB: unallocated power
  • Required effective receiver sensitivity: -11.2 dBm
  Reference: David Cunningham, Avago Technologies

  35. Case Study: Standards Process. 10GBASE-LRM specified optical power levels (OMA): optical input to receiver (TP3) compliance test allocation, power budget starting at TP2.
  • Launch power minimum: -4.5 dBm
  • Transmit implementation allowance: 0.5 dB
  • Attenuation (2 dB): connector losses = 1.5 dB; fiber attenuation = 0.4 dB; interaction penalty = 0.1 dB
  • Stressed receiver sensitivity: -6.5 dBm
  • Noise (0.5 dB): RIN = 0.3 dB; modal noise = 0.2 dB
  • Dispersion (4.2 dB): ideal EDC power penalty, PIE_D = 4.2 dB; TWDP and connector loss at 99th percentile (4.4 dB)
  • Unallocated margin: 0.9 dB
  • Effective maximum unstressed 10GBASE-LRM receiver sensitivity: -11.2 dBm
  Reference: David Cunningham, Avago Technologies
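As a quick arithmetic check, added here rather than taken from the deck, the slide-34 allocations do sum to the quoted receiver sensitivity:

```python
# Verify the slide-34 OMA power budget (all dB values copied from the
# slide; this only confirms the allocations add up).
launch_power_dbm = -4.5
allocations_db = {
    "transmitter implementation": 0.5,
    "fiber attenuation": 0.4,
    "RIN": 0.3,
    "modal noise": 0.2,
    "TP3 TWDP and connector loss (99%)": 4.4,
    "unallocated power": 0.9,
}
sensitivity = launch_power_dbm - sum(allocations_db.values())
print(f"Required effective receiver sensitivity: {sensitivity:.1f} dBm")
# -> -11.2 dBm, matching the slide.
```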

  36. Case Study: Standards Process. 10GBASE-T: 2002 / 2006 • Techno-babble • 64B/65B encoding (similar to 10GBASE-R) • LDPC(1723,2048) framing • DSQ128 constellation mapping (PAM16 with half the code points removed) • Tomlinson-Harashima precoder • Reach • Cat 6 up to 55 m, with the caveat of meeting TIA TSB-155 • Cat 6A up to 100 m • Cat 7 up to 100 m • Cat 5 and 5e are not specified • Power • Estimates for worst case range from 10 to 15 W • Short reach mode (30 m) has a target of under 4 W
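For a feel of the coding overhead quoted above, here is a back-of-envelope sketch, not from the deck. It treats the two code rates as if they applied in series to every bit, which is a simplification: the real PHY also carries uncoded bits within DSQ128 symbols.

```python
# Back-of-envelope 10GBASE-T coding overhead (illustrative only).
payload_gbps = 10.0
eff_64b65b = 64 / 65        # 64B/65B framing efficiency
eff_ldpc = 1723 / 2048      # LDPC(1723,2048) code rate
combined = eff_64b65b * eff_ldpc
print(f"combined efficiency: {combined:.3f}")                     # ~0.828
print(f"implied line rate: {payload_gbps / combined:.2f} Gbps")   # ~12.07
# The actual transmitted rate differs because some bits bypass the LDPC
# coder; this is only meant to show the scale of the overhead.
```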

  37. Case Study: Standards Process. 10GBASE-T • Noise and EMI • Alien crosstalk has the biggest impact on UTP cabling • Screened and/or shielded cabling has better performance • Power • Strong preference for copper technologies, even at higher power • Short reach mode and better-performing cable reduce the power requirement • Timeline • The standard is coming… products in the market end of '06, early '07. [Timeline chart, Nov 2002 through Jul 2006: Tutorial & CFI, PAR, 1st technical presentation, Task Force review (D1.0), 802.3 ballot (D2.0), Sponsor ballot (D3.0), STD]

  38. Birth of a Standard: It Takes About 5 Years • Ideas from industry • Feasibility and research • Call for Interest (CFI) – THE 100 GbE EFFORT IS HERE • Marketing / sales potential, technical feasibility • Study Group • Work Group • Drafts • Final member vote

  39. The Push for Standards: IEEE • Force10 introduces a Call for Interest (CFI) at the July 2006 IEEE 802 meeting with Tyco Electronics. • Meetings will be held in the coming months to determine the CFI and the efforts required. • We target July 2006 because of resources within the IEEE. • Joel Goergen and John D'Ambrosia will chair the CFI effort. The anchor team is composed of key contributors from Force10, Tyco, Intel, Quake, and Cisco; it has since broadened to include over 30 companies.

  40. The Ethernet Alliance: Promoting All Ethernet IEEE Work • Key IEEE 802 Ethernet projects include • 100 GbE • Backplane • 10 GbE LRM / MMF • 10GBASE-T • Force10 is on the BoD as a principal member • 20 companies at launch • Sun, Intel, Foundry, Broadcom… • Now approaching 40 companies • Launched January 10, 2006 • An opportunity for customers to speak on behalf of 100 GbE

  41. Anatomy of a 100Gbps Solution: Architectural Disclaimers • There are many ways to implement a system • This section covers two basic types. • Issues facing 100Gbps ports are addressed in basic form. • Channel performance, or 'pipe capacity', is difficult to measure • Two popular chassis heights • 24in to 34in height (2 or 3 per rack) • 10in to 14in height (5 to 8 per rack)

  42. Anatomy of a 100Gbps Solution: What is a SERDES? • A device that attaches to the 'channel' or 'pipe' • Transmitter: • Parallel to serial • Tap values • Pre-emphasis (see the sketch below) • Receiver: • Serial to parallel • Clock and data recovery • DFE • The circuits are very sensitive to power noise and low Signal-to-Noise Ratio (SNR) Reference: Altera
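To make the transmitter bullets concrete, here is a minimal sketch of pre-emphasis as a 3-tap feed-forward filter, added for illustration; the tap values are invented, not taken from any SERDES datasheet:

```python
# Minimal sketch of TX pre-emphasis as a 3-tap FIR (FFE) over +1/-1 symbols.
# Boosting transitions relative to steady runs compensates for the
# channel's high-frequency loss, which is the job of the TX taps above.
def pre_emphasize(symbols, taps=(-0.1, 0.8, -0.1)):
    """Apply pre-cursor, main, and post-cursor taps to a symbol stream."""
    pre, main, post = taps
    out = []
    for i, s in enumerate(symbols):
        prev_s = symbols[i - 1] if i > 0 else 0          # post-cursor input
        next_s = symbols[i + 1] if i + 1 < len(symbols) else 0  # pre-cursor input
        out.append(pre * next_s + main * s + post * prev_s)
    return out

bits = [1, 1, 1, -1, -1, 1, -1, 1]
print(pre_emphasize(bits))  # transition symbols come out larger than runs
```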

  43. Anatomy of a 100Gbps Solution: Interfaces that use SERDES (see the sanity check below) • TBI • 10-bit interface. Max speed 3.125Gbps across all 10 lanes. A parallel interface that does not use SERDES technology. • SPI-4 / SXI • System Protocol Interface. 16-bit interface. Max speed 11Gbps. A parallel interface that does not use SERDES technology. • SPI-5 • System Protocol Interface. 16-bit interface. Max speed 50Gbps. Uses 16 SERDES lanes at speeds up to 3.125Gbps. • XFI • 10Gbps serial interface. Uses 1 SERDES at 10.3125Gbps. • XAUI • 10Gbps 4-lane interface. Uses 4 SERDES devices at 3.125Gbps each.
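A quick sanity check, added here and not part of the deck, of the lane-count times per-lane-rate products for the SERDES-based interfaces above; the values are copied from the slide:

```python
# Aggregate bandwidth = lanes x per-lane rate for the SERDES interfaces.
interfaces = {
    "SPI-5": (16, 3.125),    # 16 lanes x 3.125Gbps = 50Gbps
    "XFI":   (1, 10.3125),   # single 10.3125Gbps serial lane
    "XAUI":  (4, 3.125),     # 12.5Gbps raw; carries 10Gbps after 8b/10b coding
}
for name, (lanes, per_lane_gbps) in interfaces.items():
    print(f"{name}: {lanes} x {per_lane_gbps} = {lanes * per_lane_gbps:.4g} Gbps raw")
```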

  44. Anatomy of a 100Gbps Solution: Power Noise Thoughts • Line card SERDES noise limits • Analog target: 60mVpp ripple • Digital target: 150mVpp ripple • Fabric SERDES noise limits • Analog target: 30mVpp ripple • Digital target: 100mVpp ripple • 100Gbps interfaces won't operate well if these limits cannot be met (see the check sketch below).
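A toy compliance check against these limits; the limits are from the slide, while the card/rail names and measured values are invented for illustration:

```python
# Check measured power-rail ripple against the SERDES noise limits above.
LIMITS_MVPP = {
    ("line card", "analog"):  60,
    ("line card", "digital"): 150,
    ("fabric",    "analog"):  30,
    ("fabric",    "digital"): 100,
}

def check_ripple(card, rail, measured_mvpp):
    limit = LIMITS_MVPP[(card, rail)]
    status = "PASS" if measured_mvpp <= limit else "FAIL"
    print(f"{card}/{rail}: {measured_mvpp} mVpp vs {limit} mVpp limit: {status}")

check_ripple("fabric", "analog", 24)        # PASS (hypothetical measurement)
check_ripple("line card", "digital", 180)   # FAIL (hypothetical measurement)
```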

  45. Anatomy of a 100Gbps Solution: Memory Selection • Advanced Content-Addressable Memory (CAM) • Goal: less power per search • Goal: 4 times more performance • Goal: enhanced, flexible table management schemes • Memories • Replacing SRAMs with DRAMs, when performance allows, to reduce cost • Quad Data Rate III SRAMs for speed • SERDES-based DRAMs for buffer memory • Need to drive JEDEC toward serial memories that can be easily implemented in a communication system. • The industry is going to have to work harder to get high-speed memories for network processing in order to reduce latency. • Memory chips are usually the last thought! This will need to change for 100Gbps sustained performance.

  46. Anatomy of a 100Gbps Solution: ASIC Selection • High-speed interfaces • Interfaces to MACs, backplane, and buffer memory are all SERDES based. SERDES all the way. Higher gate counts with internal memories target 3.125 to 6.25Gbps SERDES; higher speeds are difficult to design in this environment. • SERDES used to replace parallel busing for reduced pin and gate count • Smaller process geometry (see the scaling sketch below) • Definitely 0.09 micron or lower • More gates (100% more than the 0.13 micron process) • Better performance (25% better) • Lower power (half the 0.13 micron process power) • Use power-optimized libraries • Hierarchical placement and layout of the chips • Flat placement is no longer a viable option • To control cost, ASIC SERDES speed is limited to 6.25Gbps in high-density applications.
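The process-geometry bullets above reduce to simple multipliers; a toy sketch where only the scaling factors come from the slide and the baseline figures are invented for illustration:

```python
# 0.13um -> 0.09um scaling as quoted above, applied to a hypothetical
# baseline ASIC (baseline numbers are made up; factors are the slide's).
baseline = {"gates_millions": 40.0, "relative_perf": 1.00, "power_watts": 20.0}
scaled = {
    "gates_millions": baseline["gates_millions"] * 2.0,  # 100% more gates
    "relative_perf": baseline["relative_perf"] * 1.25,   # 25% better performance
    "power_watts": baseline["power_watts"] * 0.5,        # half the power
}
print(scaled)  # {'gates_millions': 80.0, 'relative_perf': 1.25, 'power_watts': 10.0}
```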

  47. Anatomy of a 100Gbps Solution: N+1 Redundant Fabric, Backplane. [Block diagram: line cards with front ends and SPIx interfaces connect over lanes L1..Ln+1 to the 1st through Nth switch fabrics plus an N+1 spare, across a passive copper backplane]

  48. Anatomy of a 100Gbps Solution: N+1 Redundant Fabric, Midplane. [Block diagram: as above, but the line cards and switch fabrics connect through a passive copper midplane]

  49. Anatomy of a 100Gbps Solution: N+1 High Speed Channel Routing. [Diagram: channel routing from four line cards to the 1st, 2nd, Nth, and N+1 switch fabrics]

  50. Anatomy of a 100Gbps Solution: A/B Redundant Fabric, Backplane. [Block diagram: line cards with front ends and SPIx interfaces connect to redundant A and B fabrics across a passive copper backplane]
