
100G Packet Ring Architectures — Gady Rosenfeld, VP Marketing, gady@corrigent.com, October 2007


Presentation Transcript


  1. 100G Packet Ring Architectures — Gady Rosenfeld, VP Marketing, gady@corrigent.com — October 2007

  2. The need for 100G • Cox – "100GE needed for broadband customer aggregation urgently in the core by 2009 and across the board by 2011", John Weil, Apr'07 • Comcast – “There is a market need for 100GE”, Vik Saxena, Jan’07 • Equinix – Requirements for “100 Gbps or greater”, Louis Lee, Jan’07 • Level 3 – Using 8x10 GbE LAG today • Yahoo! – Using 4x10 GbE LAG today

  3. Generic Triple-Play Network Architecture
  [Diagram: a national video content distribution network (IP multicast) with a video acquisition system performs national content insertion into the IP core network (Tier-1 aggregation network); metro nodes/NHOs perform regional content insertion with local IPTV video distribution networks, digital video servers, VoIP, MGW, BRAS and hand-off to ISP1–ISP3; the metro transport network (Tier-2 aggregation network) carries traffic to local distribution.]

  4. Triple-Play Network – Metro Transport
  [Diagram: customer premises connect over copper to DSLAMs and over fiber to packet aggregators (PA) on 10G metro rings built from packet transport switches; PAs connect via Nx10GE to edge routers (ER) and digital video servers at the metro node, with local content insertion and Nx10G uplinks.]
  KEY — ER: Edge Router (Layer-3); PA: Packet Aggregator (Layer-2)

  5. Bandwidth Requirements • IPTV • 2007 – 300 channels, 10% HD: 1.1-1.4 Gbits/s (MPEG-4/MPEG-2) • 2010 – 300 channels, 50% HD: 2.0-3.2 Gbits/s • VoD (2500 subscribers per node) • 2007 – 5% VoD penetration: 0.5-0.6 Gbits/s • 2010 – 30% VoD penetration: 5.0-8.0 Gbits/s (MPEG-4/MPEG-2) • Total bandwidth requirements – 6 nodes per ring • 2007 – 3.5-4.5 Gbits/s • 2010 – 32-51 Gbits/s
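The 2010 totals above can be reproduced with a simple model — a sketch assuming multicast IPTV is carried once around the ring while unicast VoD traffic scales with the number of nodes (the function name and structure are illustrative, not from the slides):

```python
NODES_PER_RING = 6

def ring_demand_gbps(iptv_gbps, vod_per_node_gbps, nodes=NODES_PER_RING):
    # Multicast IPTV traverses the ring once; unicast VoD is delivered
    # per node, so it is multiplied by the node count.
    return iptv_gbps + nodes * vod_per_node_gbps

low_2010 = ring_demand_gbps(2.0, 5.0)    # 32.0 Gbits/s
high_2010 = ring_demand_gbps(3.2, 8.0)   # 51.2 Gbits/s
```

With the 2010 figures (2.0–3.2 Gbits/s IPTV, 5.0–8.0 Gbits/s VoD per node) this yields the 32–51 Gbits/s range quoted on the slide.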

  6. IEEE 802.3 HSSG Status • IEEE 802.3 HSSG • Agreed on PAR for 40GE and 100GE, July'07 • Identified bandwidth-hungry applications: data centers, Internet exchanges, high-performance computing and video on demand • Parallel optics for 100GE (4x25G, 10x10G) discussed for dedicated-fiber, limited-distance applications • Serial options for MAN/WAN applications (polarization multiplexing, phase coding) still under evaluation • Standard is still at least 4 years away

  7. Alternatives for Network Scalability • Add separate rings • Complex network operation – multiple networks to manage, separate traffic engineering • No redundancy between rings • Limited statistical multiplexing • Upgrade to 40 Gbits/s • Disruptive and costly process • High equipment cost – optics, network processors, traffic management • Limited capacity

  8. High-Capacity Packet Rings • High-capacity (HC) packet rings are achieved through advanced bonding techniques • Multiple 10G RPR instances are combined to create a single logical ring • 40G links can also be added to the bundle [Diagram: 100G MAC layer over nx10G PHY] • Flow-aware hashing load-balances and distributes packets over the parallel physical links • Traffic integrity is guaranteed: each individual flow is uniquely identified and classified onto the same physical link, avoiding re-ordering
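Flow-aware hashing as described above can be sketched as follows — a minimal illustration assuming the flow key named later in the deck (MAC S+D, IP S+D, port); CRC32 stands in for whatever hash the actual hardware uses:

```python
import zlib

def channel_for_flow(src_mac, dst_mac, src_ip, dst_ip, port, n_channels):
    # Build the flow key from the classification fields. Every packet of
    # the same flow produces the same key, hence the same channel, so
    # packets within a flow can never be re-ordered across links.
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}|{port}".encode()
    return zlib.crc32(key) % n_channels

# All packets of this flow land on one of the 10 bonded 10G instances:
ch = channel_for_flow("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                      "10.0.0.1", "10.0.0.2", 80, 10)
```

Different flows hash to different channels with roughly uniform probability, which is what provides the load balancing across the bundle.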

  9. HC Packet Rings – Traffic Distribution • No mis-ordering within a flow • Each flow is consistently delivered on the same channel • Packet ordering is maintained even if each channel is carried over a different route with a different length • Flexible combinations of header fields can be hashed, providing load balancing for different applications
  [Diagram: 6 flows transmitted over 4 channels; after a link failure, packets are redistributed over the remaining 3 channels]
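The failure behavior in the diagram can be sketched by hashing each flow onto the set of currently active channels — an illustration only, with CRC32 as a stand-in hash and synthetic flow names:

```python
import zlib

def pick_channel(flow_key, active_channels):
    # Hash the flow key onto whichever channels are currently up.
    return active_channels[zlib.crc32(flow_key.encode()) % len(active_channels)]

flows = [f"flow-{i}" for i in range(6)]
before = {f: pick_channel(f, [0, 1, 2, 3]) for f in flows}
# Channel 2 fails: the same hash now spreads the 6 flows over 3 channels,
# and each flow still maps to exactly one surviving channel.
after = {f: pick_channel(f, [0, 1, 3]) for f in flows}
```

Per-flow ordering is preserved through the failure because a flow is never split across channels; at worst it is remapped once, when the active-channel set changes.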

  10. HC Packet Rings – Survivability
  [Diagram: RPR steer protection for TDM flows and data flows; logical ports mapped onto physical RPR MACs]

  11. HC Packet Rings – Enhanced Survivability
  [Diagram: RPR link #2 is down; TDM flows and data services are steered over the surviving links; logical ports mapped onto physical RPR MACs]

  12. Example – Growth of Existing Services (1/3) • L2-VPN service interconnects an enterprise's branches • VPLS over the ring network • Can serve as an infrastructure service for multiple end-user services
  [Diagram: Customers A, B and C attached at multiple locations around the ring]
  Network capacity before: Customer A – 3G, Customer B – 3G, Customer C – 4G; total net capacity: 10G
  Network capacity after: Customer A – 3G, Customer B – 4G, Customer C – 4G; total net capacity: 11G

  13. Example – Growth of Existing Services (2/3) Option 1 – Multi-ring configuration • Add an additional ring instance – ringlet #2 • Disconnect all Customer B locations from ringlet #1 • Re-provision the Customer B service on ringlet #2
  [Diagram: Customers A and C remain on ringlet #1; all Customer B locations move to ringlet #2]

  14. Example – Growth of Existing Services (3/3) Option 2 – HC-RPR • Increase the RPR ring capacity to 20G • Connect Customer B's 4th location to the existing L2-VPN service
  [Diagram: Customers A, B and C all remain on the single, higher-capacity ring]

  15. Multi-PHY HC Packet Rings • Description • Allows a combination of RPRoSTM64 and RPRo10GE in the same HC-RPR group • Motivation • Reduces cost while maintaining ring synchronization • Clock distribution across the ring via the SONET/SDH interface • Data and TDM traffic run on top of both Ethernet and SONET/SDH interfaces – full flexibility • Implementation aspects • Mis-ordering is eliminated by per-flow hashing • Fine flow granularity assures equal load sharing between RPR instances • Flow granularity: MAC (S+D) + IP (S+D) + port • No issue of equal load sharing between the different PHY layers • OC-192 payload rate (net rate): 9.51 Gbps • 10GE trimodal average payload rate: ~9.5 Gbps → equal net rates
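The near-equal net rates can be checked with a back-of-the-envelope calculation — a sketch assuming a standard IMIX-style trimodal frame mix (the exact mix below is an assumption, not from the slides) and the fixed 20-byte per-frame Ethernet line overhead:

```python
LINE_RATE_GBPS = 10.0
PER_FRAME_OVERHEAD = 20  # 7B preamble + 1B SFD + 12B inter-frame gap

# Assumed trimodal (IMIX-style) mix: frame size in bytes -> relative weight.
mix = {64: 7, 570: 4, 1518: 1}

frame_bytes = sum(size * n for size, n in mix.items())
wire_bytes = sum((size + PER_FRAME_OVERHEAD) * n for size, n in mix.items())
payload_rate = LINE_RATE_GBPS * frame_bytes / wire_bytes
# payload_rate works out near 9.5 Gbps, close to the 9.51 Gbps OC-192 net rate
```

Under this mix the 10GE payload rate lands around 9.5 Gbps, which is why the slide can treat the two PHY types as having equal net rates for load-sharing purposes.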

  16. Asymmetric Operation (AHC-RPR) and Management • Best for incremental network growth • Install RPR blades and optics only as node capacity demand increases • At least one ring must be common to all stations • Each station is represented by an HC (group) MAC and physical MACs • The HC MAC is used for data forwarding and at the IP level • Physical MACs are used for topology • The reference topology has a group entity and per-ring entities
  [Diagram: stations S1–S5 on a 2x10G HC-RPR ring]

  17. The CM4000 Packet Transport Switch
  [Diagram: transport plane – services (TDM, Ethernet, IP/MPLS, PPP, FC, HDLC) are classified, marked, queued, tagged, policed and interworked into point-to-point, multipoint and point-to-multipoint connections over MPLS LSPs and SONET/SDH paths/lines, carried on 1 GE, 10 GE, Nx10GE, RPR, NxRPR, OTN (G.709), SONET/SDH and Ethernet links, with monitoring, survivability and multiplexing at each layer]
  • Packet-based path/link technologies • Packet-based multiplexing, survivability and monitoring at the path/link layers

  18. Summary • HC Packet Transport • Network scalability up to 100 Gbits/s for high-bandwidth applications is required today • 100GE is at least 4 years away • A cost-effective network migration path is required • In-service network scalability in 10G or 40G increments • Resiliency to fiber and equipment failures • Implemented with available low-cost optical components

  19. Questions? Thank You
