
100 Gigabit Ethernet Requirements & Implementation


Presentation Transcript


  1. 100 Gigabit Ethernet Requirements & Implementation Fall 2006 Internet2 Member Meeting December 6, 2006 Serge Melle smelle@infinera.com 408-572-5200 Drew Perkins dperkins@infinera.com 408-572-5208

  2. Internet Backbone Growth • Industry consensus indicates a sustainable growth rate of 75% to 100% per year in aggregate traffic demand • Traffic increased more than 10,000x from 1990 to 2000 • Traffic projected to increase an additional 1,000x from 2000 to 2010 [1] K. G. Coffman and A. M. Odlyzko, ‘Growth of the Internet’, Optical Fiber Telecommunications IV B: Systems and Impairments, I. P. Kaminow and T. Li, eds. Academic Press, 2002, pp. 17-56.

  3. The Future Belongs to Tb/s Links! • Carriers deployed N x 10 Gb/s networks several years ago • ECMP and LAG • N now reaching a hardware limit of around 16 in some networks • Now evaluating deployment of N x 40 Gb/s router networks • Is this like putting out a 5-alarm fire with a garden hose? • Current backbone growth rates, if sustained, will require IP link capacity to scale to > 1 Tb/s by 2010 (a rough projection is sketched below)
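To put numbers behind the > 1 Tb/s claim, here is a minimal sketch (Python; not from the original slides) that projects aggregate demand from a hypothetical 16 x 10 Gb/s starting point at the 75-100% annual growth rates cited above; the 2006 starting figure is an assumption for illustration only.

# Rough projection, assuming (hypothetically) a 16 x 10 Gb/s (160 Gb/s)
# aggregate link in 2006 and the 75-100% annual growth range cited above.
base_year, base_gbps = 2006, 16 * 10   # a full 16-member 10 Gb/s LAG

for growth in (0.75, 1.00):
    demand = float(base_gbps)
    for year in range(base_year, 2011):
        print(f"{year}: ~{demand:,.0f} Gb/s (at {growth:.0%} growth per year)")
        demand *= 1 + growth

At either growth rate the projected aggregate crosses 1 Tb/s by 2010, consistent with the slide's conclusion.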

  4. Proposed Requirements for Higher Speed Ethernet • Protocol Extensible for Speed • Ethernet tradition has been 10x scaling • But at current growth rates, 100 Gb/s will be insufficient by 2010 • Desirable to standardize a method of extending available speed without re-engineering the protocol stack • Incremental Growth • Most organizations deploy new technologies with a 4-5 yr lifetime • Pre-deploying based on the speed requirement 5 yrs in advance is economically burdensome • Assuming a 5 yr window and 100% growth per year, the ability to grow link speed incrementally over 2⁵ = 32x without a "forklift upgrade" seems highly desirable (a short worked example follows)
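A short worked example of the incremental-growth arithmetic, assuming the 5-year window and 100% annual growth stated above; the 10 Gb/s member size is an illustrative assumption, not part of the slides.

# With 100% annual growth over a 5-year deployment window, demand grows
# 2**5 = 32x, so a link that can be grown in place (e.g. by adding members)
# avoids pre-deploying 32x capacity on day one.
window_years, annual_growth = 5, 1.0
headroom = (1 + annual_growth) ** window_years
print(f"Headroom needed over {window_years} yrs: {headroom:.0f}x")   # 32x

members = 1   # hypothetical 10 Gb/s members, added one doubling at a time
for year in range(window_years + 1):
    print(f"year {year}: {members} x 10 Gb/s = {members * 10} Gb/s")
    members *= 2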

  5. Proposed Requirements (cont'd) • Hitless Growth • Problematic to "take down" core router links for a substantial period of time without customer service degradation • SLAs may be compromised or require complicated temporary workarounds if substantial down time is required for an upgrade • Ideally, upgrade of the link capacity should therefore be hitless, or at least only momentarily service-impacting • Resiliency and Graceful Degradation • Protocol should provide rapid recovery from failure of an individual channel or component • If the failure is such that full performance cannot be provided, degradation should only be proportional to the failed element(s) (see the sketch below)
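A minimal sketch of what proportional ("graceful") degradation means on a hypothetical 10 x 10 Gb/s multi-channel link; the lane count and rates are assumptions, not from the slides.

# Hypothetical 10 x 10 Gb/s bonded link: a single member failure should
# cost only that member's share of capacity, not the whole link.
lane_rates_gbps = [10] * 10
failed_lanes = {3}                     # lane index 3 has failed

usable = sum(r for i, r in enumerate(lane_rates_gbps) if i not in failed_lanes)
print(f"Capacity after failure: {usable} Gb/s "
      f"({usable / sum(lane_rates_gbps):.0%} of nominal)")   # 90 Gb/s, 90%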

  6. Proposed Requirements (cont’d) • Technology Reuse • Highly desirable to leverage existing 10G PHYs, including 10GBASE-R, W, X, S, L, E, Z and LRM in order to foster ubiquity and avoid duplication of standards efforts • Deterministic Performance • Latency/Delay Variation should be low for support of real-time packet based services, e.g. • Streaming video • VOIP • Gaming

  7. Proposed Requirements (cont'd) • WAN Manageability • 100 GbE will be transported over wide area networks • It should include features for low OpEx and should be: • Economical • Reliable • Operationally manageable (e.g. simple fault isolation) • It should support equivalents of conventional transport network OAM mechanisms, e.g. • Alarm Indication Signal (AIS) • Forward Defect Indication (FDI) • Backward Defect Indication (BDI) • Tandem Connection Monitoring (TCM), etc. • WAN Transportability • Operation over WAN fiber optic networks • Transport across regional, national and inter-continental networks • The protocol should be resilient to intra-channel/intra-wavelength propagation delay differences (skew)

  8. Technological Approaches to 100 Gb/s Transport [Figure: a four-axis design space for reaching 100 Gb/s — Time Division Multiplexing (i.e. baud rate, from 10 Mbaud to 100 Gbaud), Modulation (i.e. bits per Hz: 1 e.g. NRZ, 2 e.g. PAM-4 or (D)QPSK, 4 e.g. QAM-16, 8 e.g. QAM-256), Wavelength Division Multiplexing (i.e. number of λs, from 1 up to ~12, CWDM or DWDM), and Space Division Multiplexing (i.e. parallel optics)]
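As a rough illustration of how the four axes combine (not from the slides): total capacity is approximately baud rate x bits per symbol x wavelengths x parallel fibers. The specific combinations below are hypothetical ways of reaching roughly 100 Gb/s.

# Illustrative only: capacity = baud rate x bits/symbol x wavelengths x fibers.
def capacity_gbps(gbaud, bits_per_symbol, wavelengths=1, fibers=1):
    return gbaud * bits_per_symbol * wavelengths * fibers

print(capacity_gbps(100, 1))                 # 100 Gbaud NRZ, one wavelength (pure TDM)
print(capacity_gbps(25, 4))                  # 25 Gbaud QAM-16 on one wavelength
print(capacity_gbps(10, 1, wavelengths=10))  # 10 wavelengths x 10 Gb/s (WDM)
print(capacity_gbps(10, 1, fibers=12))       # 12 parallel fibers x 10 Gb/s (SDM) = 120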

  9. Which Ethernet Application? [Same four-axis figure: TDM, modulation, WDM, SDM] • Ethernet is used today for many applications over different distances • Distances > 100 m primarily use optical technologies • Performance for each application may be best advanced using a different approach

  10. Scaling Beyond 10 Gb/s: TDM ☒ [Same four-axis figure, TDM axis highlighted at 100 Gbaud] Too many problems! • 65 nm CMOS will cap out long before 100 Gb/s • 100x shorter reach due to dispersion (modal, chromatic, PMD, etc.) • Bandwidth of copper backplane technology • Fundamental R&D required to develop enabling technologies at low cost • 100 Gb/s TDM is unlikely to be a low-cost approach for any application in the near future

  11. Scaling Beyond 10 Gb/s: Modulation ☒ [Same four-axis figure, modulation axis highlighted] • Digital communication theory is well-established • Proven technology for copper: 1000BASE-T, DSL, cable modems, etc. • Limited use with optical technology • May be used in conjunction with other approaches • Has never been applied to a high-volume optical standard, and is difficult for most applications of interest

  12. Scaling Beyond 10 Gb/s: SDM [Same four-axis figure, SDM axis highlighted] • OIF standards exist for parallel optical interfaces • 10 Gb/s VSR4 and 40 Gb/s VSR5 • Slow adoption due to minimal market traction • Low volumes limit economic savings • Could be extended to 100 Gb/s • 12 x 10 Gb/s VCSELs • Most applicable to VSR applications

  13. Scaling Beyond 10 Gb/s: WDM [Same four-axis figure, WDM axis highlighted] • Extensive WDM technology development in the past decade • Proven deployments in all telecom networks • Focus on cost reduction: CWDM, EMLs, etc. • 10GBASE-LX4 achieved success • 4-color CWDM • SR applications • Proven approach for reaching Tb/s-level bandwidth, even for long-reach applications

  14. Drivers for a Super-λ (Multi-wavelength) Protocol • Per-channel bit rate growth has historically been dramatically outpaced by core router interconnection demand growth • The requirement for WAN transportability strongly favors an approach leveraging multiple wavelengths (Super-λ service)

  15. Won't 802.3ad Link Aggregation (LAG) Solve the Scaling Problem? • LAG and ECMP rely on statistical flow distribution mechanisms • They provide a fixed assignment of "conversations" to channels • Unacceptable performance as individual flows reach the Gb/s range • A single 10 Gb/s flow will exhaust one LAG member, yielding a 1/N blocking probability for all other flows • VPN and security technologies make all flows appear as one • True deterministic ≥ 40G link technology is required today • i.e. a deterministic packet/fragment/word/byte distribution mechanism (contrasted with hash-based LAG in the sketch below)
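To illustrate the contrast, here is a minimal sketch (Python; the flow tuple and member count are hypothetical) of hash-based LAG/ECMP member selection, where one large flow pins itself to a single member, versus a deterministic round-robin spraying of units across all members.

from itertools import cycle

members = 10  # hypothetical 10-member LAG / 10-lane link

# Hash-based LAG/ECMP: every frame of the same "conversation" (flow tuple)
# lands on the same member, so a single 10 Gb/s flow saturates one member.
def lag_member(flow_tuple):
    return hash(flow_tuple) % members

big_flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)   # one elephant flow
print("all frames of this flow -> member", lag_member(big_flow))

# Deterministic distribution: successive fragments/words are sprayed over
# all members regardless of flow identity.
lane = cycle(range(members))
for fragment_number in range(5):
    print(f"fragment {fragment_number} -> member {next(lane)}")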

  16. Possible Channel Bonding Techniques • Traffic may be distributed over multiple links by a variety of techniques • Bit/Octet/Word Distribution • Fixed units of the serial stream are assigned sequentially to lanes • Small additional overhead allows re-alignment at the receiver • Examples: 10GBASE-X, SONET/SDH/OTN Virtual Concatenation (VCAT) • Packet Distribution • Sequence numbers are added to packets to enable re-ordering at the receiver • Large packets within the stream may induce excessive delay/delay variation for smaller, latency-sensitive packets • Examples: Multilink PPP, 802.3ah PME Aggregation (Clause 61) • Packet Distribution with Fragmentation • Fragmentation bounds the buffering requirements and delay associated with packet size and packet size variation • Overhead/link inefficiency is a function of the maximum fragment size chosen • At 100 Gb/s and above, a fragment size can be chosen such that an effective compromise between link efficiency and the QoS of individual, time-sensitive flows is readily achieved • Examples: 802.3ah PME Aggregation, Multilink PPP (a transmit-side sketch follows below)
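Here is a minimal transmit-side sketch of "packet distribution with fragmentation" (Python; the fragment header of a sequence number plus an end-of-packet flag is a hypothetical simplification, not 802.3ah's or Multilink PPP's actual framing): each packet is cut into bounded fragments, tagged, and sprayed round-robin over the lanes.

from itertools import cycle

MAX_FRAGMENT = 512   # hypothetical maximum fragment size in bytes
NUM_LANES = 10       # e.g. 10 x 10 Gb/s lanes for 100 GbE

def distribute(packets):
    """Fragment packets and assign (seq, last_flag, payload) tuples to lanes."""
    lanes = {i: [] for i in range(NUM_LANES)}
    lane = cycle(range(NUM_LANES))
    seq = 0
    for pkt in packets:
        fragments = [pkt[i:i + MAX_FRAGMENT] for i in range(0, len(pkt), MAX_FRAGMENT)]
        for idx, frag in enumerate(fragments):
            last = (idx == len(fragments) - 1)        # marks end of packet
            lanes[next(lane)].append((seq, last, frag))
            seq += 1                                  # global sequence number
    return lanes

lanes = distribute([b"A" * 1500, b"B" * 64])   # one large + one small packet
for i in range(3):
    print(i, [(s, last, len(f)) for s, last, f in lanes[i]])

Bounding the fragment size is what keeps a large packet from delaying small, latency-sensitive ones, at the cost of per-fragment overhead.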

  17. 10 Gigabit Ethernet Protocol [Figure: today's single-channel 10 GbE stack — LAG (Link Aggregation Group) above the MAC (Media Access Control), then Reconciliation, XGMII, one PHY (PCS, PMA, PMD), MDI, and a single Medium]

  18. Multilink Ethernet – N x 10G [Figure: LAG (Link Aggregation Group) above the MAC (Media Access Control), then a Multilink Reconciliation sublayer fanning out to N parallel Reconciliation/XGMII/PCS/PMA/PMD/MDI/Medium stacks, i.e. N = 10 for 100 GbE] • Multilink Ethernet, a.k.a. Aggregation at the Physical Layer (APL)

  19. Multilink Ethernet Benefits • Ensures ordered delivery • Resilient and scalable • Incremental hitless growth up to 32 channels • Minimal added latency • Line code independent, preserves all existing 10G PHYs • Orthogonal to and lower level than LAG • Scales into future as individual channel speeds increase

  20. Multilink Ethernet Benefits (Cont.) • Concept is well proven • Packet fragmentation, distribution, collection and reassembly similar to 802.3ah PME aggregation (a receive-side reassembly sketch follows below) • Fits well with multi-port (4x, 5x, 10x, etc.) PHYs • Preserves existing interfaces (e.g. XGMII, XAUI) • Compatible with physical layer transport implementation over N x wavelengths
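Complementing the transmit-side sketch above, here is a minimal receive-side sketch (same hypothetical framing, not any standard's wire format): fragments collected from all lanes, possibly skewed and out of order, are restored to global sequence order and packets are rebuilt at end-of-packet markers.

def reassemble(lanes):
    """Merge per-lane (seq, last_flag, payload) queues back into packets."""
    # Collect fragments from every lane and restore global order by sequence
    # number; this is what absorbs inter-lane skew and re-ordering.
    fragments = sorted(
        (item for queue in lanes.values() for item in queue),
        key=lambda item: item[0],
    )
    packets, current = [], bytearray()
    for _seq, last, payload in fragments:
        current.extend(payload)
        if last:                      # end-of-packet marker: emit and start fresh
            packets.append(bytes(current))
            current = bytearray()
    return packets

# Using the `lanes` structure produced by the transmit-side sketch above:
# packets = reassemble(lanes)
# assert packets == [b"A" * 1500, b"B" * 64]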

  21. Live 100 GbE Demo – Chicago to New York [Demo setup diagram] • 100 GbE MAC with packet reordering, implemented by UCSC • FPGA provided by Xilinx • 10 x 10 Gb/s XFP boards provided by Finisar • Infinera DTN provided by Infinera • New Internet2 network, Chicago – New York, ~2000 km, with optical loopbacks • Signal path: 10 x 10 Gb/s electrical → 10 x 10 Gb/s at 1310 nm → 10 x 11.1 Gb/s at 15xx nm • 100 GbE first demonstrated Nov 13 at SC06, between Tampa and Houston

  22. Summary • 100 GbE Requirements • Protocol extensible for speed • Hitless, incremental growth • Resiliency and graceful degradation • WAN transportability • Technology reuse • Deterministic performance • Multi-channel operation • Multilink Ethernet meets the requirements • Technology proven over real networks

  23. Thanks! Serge Melle smelle@infinera.com 408-572-5200 Drew Perkins dperkins@infinera.com 408-572-5208
